Abstract
Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons’ spatial coordinates, emission angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or by parallel acquisition. Compared with scanning-based imagers, parallel acquisition—also dubbed snapshot imaging—has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally discuss their state-of-the-art implementations and applications.
1. Introduction to multidimensional imaging
When performing optical measurement with a limited photon budget, it is important to assure that each detected photon provides as much information as possible. Conventional optical imaging systems generally capture light with just two characteristics (x,y), measuring its intensity in a 2D (x,y) lattice. However, this throws away much of the information content actually carried by a photon. This information can be written in nine dimensions as (x,y,z,θ,φ,λ,t,ψ,χ): the spatial coordinates (x,y,z), the propagation polar angles (θ,φ), the wavelength (λ), emission time (t), and polarization orientation and ellipticity angles (ψ,χ). Neglecting coherence effects, a photon thus carries with it nine tags. In order to explore this wealth of information, an imaging system should be able to characterize measured photons in 9D, rather than in 2D.
To accomplish multidimensional imaging, most systems today rely on scanning, varying one parameter at a time and recording the resultant light intensities at the detector. However, this introduces a trade-off between light throughput and the number of elements in a high-dimensional dataset. For example, to measure a hyperspectral datacube (x,y,λ) with Nx × Ny × Nλ voxels, a scanning-based spectral imaging system sacrifices light throughput by a factor of Nx × Ny when conducting point scanning in the spatial domain [1], by a factor of Nx when conducting line scanning in the spatial (x) domain [2], or by a factor of Nλ when conducting wavelength scanning in the spectral domain [3]. This scanning-induced throughput loss escalates into a more serious problem when measuring a dataset with even higher dimensions because light is allocated into more bins and only a small number of them can be measured at a time. To mitigate this trade-off, the most effective approach is to measure multiple photon tags simultaneously, maximizing the information content acquired from a single camera exposure. Such a parallel acquisition of a high dimensional dataset is referred to as snapshot multidimensional imaging.
In the past decade, the field of snapshot multidimensional imaging has experienced rapid growth. The emergence of a variety of snapshot imagers is a result of the convergence of three major technical advancements. The first contributor is the development of large-format 2D focal plane arrays (FPAs). For example, current scientific-grade charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) cameras can have as many as 50 megapixels [4], enabling parallel acquisition of datasets with remarkable information content. The second contributor is the development of new computational techniques and algorithms and their applications in imaging science [5, 6]. In particular, efforts to leverage compressed sensing in optical imaging have broken the bandwidth limit of a conventional camera in both the spatial and temporal domains, thereby opening a new area of investigation, dubbed compressed optical imaging [7, 8]. The last but not least contributor is the development of high-precision micro- and nano-scale fabrication techniques and their availability to the research community [9, 10]. For example, microelectromechanical systems (MEMS)-based instruments, such as the digital micro-mirror device (DMD), enable rapid spatial encoding at repetition rates up to 20 kHz, a process that is essential to several snapshot multidimensional imagers, such as coded aperture snapshot spectral imaging [11, 12] (discussed in Section 3.1) and the programmable pixel compressive camera [13] (discussed in Section 3.4). Another example is the availability of nano-precision optical fabrication lathes, such as the Nanotech four-axis lathe 250UPL [14], which facilitate the custom fabrication of high-quality optics, such as a multi-facet mapping mirror, a core component in image mapping spectrometry [15–17] (discussed in Section 3.1).
In this review, we first introduce the general acquisition schemes of snapshot multidimensional imaging according to their acquisition strategies and computational strategies. Then we discuss the advantages of parallel acquisition compared with scanning-based measurement in the context of light throughput. Although a variety of metrics have been established to compare snapshot implementations, such as compactness, information density, and efficiency of utilizing an FPA [18, 19], herein we adopt light throughput as the major criterion because it becomes a dominating factor when acquiring datacubes of high dimensions. The subsequent section focuses on the state-of-the-art implementations of snapshot multidimensional imaging instruments and their applications, particularly in remote sensing and biomedicine. Finally, the field is summarized and future directions are discussed.
2. General acquisition schemes and advantages of parallel measurement in multidimensional imaging
To acquire a multidimensional datacube, a system must be able to differentiate photons with different characteristics. The most intuitive approach is to successively apply a variety of filters to the incident light and let photons with only desired characteristics pass through at each stage (Fig. 1a). Unfortunately, this results in a severe loss in optical throughput. By contrast, if an approach directs, rather than filters, photons with different tags towards distinct pixels on an FPA, the optical throughput will be maximized (Fig. 1b). However, the difficulty of this ideal photon-to-pixel mapping increases dramatically with the number of desired dimensions, especially when cost and compactness also pose constraints. Because of this limitation, most current snapshot multidimensional imagers normally acquire just three to five dimensions of information simultaneously.
Fig. 1.
Filter-based versus mapping-based multidimensional imaging. a. Filter-based multidimensional imaging suffers from throughput loss at each filtering stage. In this illustrative example, the incident photons experience throughput losses both at the filter wheel (wavelength selection unit) and polarization filter wheel (polarization selection unit). b. Mapping-based multidimensional imaging retains full throughput because it utilizes optical devices—e.g., a diffraction grating and Wollaston prism—that direct, rather than filter, the incident photons towards the corresponding pixels at the FPA. The colored spheres represent the incident photons of different wavelengths, and the arrow above each photon depicts its linear polarization direction.
In this review, we categorize multidimensional optical imaging techniques using the conceptual architecture shown in Fig. 2. The general strategies are direct measurement and computation. In the direct-measurement category, the techniques are further grouped into three sub-categories—image division, aperture division, and optical path division—according to their acquisition strategies. In the computation category, the techniques are either grouped into two sub-categories—direct image reconstruction and iterative image reconstruction—based on their reconstruction strategies, or grouped into four sub-categories—image division, aperture division, optical path division, and frequency domain division—based on their acquisition strategies. The terminology used in Fig. 2 is defined in Section 2.1.
Fig. 2.
Conceptual architecture for categorizing snapshot multidimensional imaging techniques.
2.1 Definitions
Snapshot multidimensional imaging refers to the quantification of multiple light characteristics using a 2D FPA within a single camera exposure.
Direct measurement refers to a general strategy that directly quantifies each voxel in a multidimensional datacube using FPA pixels. At the condition of Nyquist sampling, each datacube voxel is represented by at least 2 × 2 consecutive FPA pixels. Therefore, the number of datacube voxels cannot be greater than the number of FPA pixels divided by four.
Computation refers to a general strategy that computes the values of datacube voxels based on indirect measurements. Different from direct measurement, the number of calculated datacube voxels can be larger than the number of FPA pixels divided by four provided that the scene can be considered sparse in a given domain.
Image-division refers to an acquisition strategy that spatially splits an image, followed by dispersing or filtering the resultant elements in other domains, such as wavelength, polarization, or propagation angles.
Aperture-division describes an acquisition strategy that splits the system’s aperture, followed by dispersing or filtering the resultant sub-pupils in other domains, such as wavelength and polarization.
Optical-path-division refers to an acquisition strategy that splits the system’s optical path and directs photons with different characteristics in different directions.
Frequency-domain-division refers to an acquisition strategy that multiplexes photons with different characteristics in the spatial or spectral domain, followed by splitting the resultant signals in the corresponding frequency domains.
Direct image reconstruction is a reconstruction strategy that directly applies linear operators, e.g., the inverse Fourier transformation or the wavelet transformation, to the captured data to recover a multidimensional datacube.
Iterative image reconstruction is a reconstruction strategy that iteratively calculates a multidimensional datacube while minimizing an objective function. The reconstruction process normally starts with an initial estimate of the datacube, computes the corresponding measurement data, compares it with the actual measurement, and makes suitable adjustments to the datacube. Compared with direct image reconstruction, the computational cost of iterative image reconstruction is generally higher.
2.2 The snapshot advantage in multidimensional imaging
Akin to the Jacquinot advantage in Fourier transform spectrometry [20], snapshot multidimensional imaging has a much higher optical throughput than its scanning-based counterparts. This throughput improvement due to parallel acquisition has been referred to as the snapshot advantage [21] and has been considered an important criterion for evaluating the performance of a multidimensional imager. Before proceeding to detailed discussions, we first define the optical throughput of a multidimensional optical imaging system as the ratio of the photons measured at an FPA to the incident photons collected by the entrance pupil of the system within a unit time interval (i.e., a single camera exposure). For easy comparison, we also assume that the incident photons are distributed equally across all characteristic bins.
When acquiring a datacube with ΠNk voxels (k = x, y, z, θ, φ, λ, t, ψ, χ), snapshot imagers, which eliminate the need for scanning, can potentially improve the optical throughput by a factor of ΠNk over their scanning-based counterparts. This throughput improvement becomes more significant when measuring a datacube of multiple dimensions. For example, in volumetric spectral (4D) imaging, when acquiring a 500 × 500 × 30 × 3 (x,y,z,λ) datacube, the snapshot advantage is a remarkable factor of 2.25 × 10⁷. Although scanning-based techniques can compensate for their low light throughput to some extent by increasing the illumination intensity, i.e., increasing the photon flux at the system’s entrance pupil, this approach fails if (i) employing active illumination is not an option, as in passive remote sensing [22], (ii) the maximum illumination intensity is limited for safety reasons [23], or (iii) the objects have already been driven to saturation, a condition in which further increasing the illumination intensity contributes little to the emitted photon count [24].
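As a quick check of the numbers above, the snapshot advantage is simply the product of the bin counts along all dimensions that would otherwise be scanned. A minimal sketch, using the datacube dimensions quoted above:

```python
from math import prod

# Bin counts of the example volumetric spectral (x, y, z, lambda) datacube.
bins = {"x": 500, "y": 500, "z": 30, "lambda": 3}

# A snapshot imager measures all bins in parallel; a point-scanning imager
# visits one bin per exposure, so its throughput is lower by the product of
# the scanned bin counts.
snapshot_advantage = prod(bins.values())
print(f"Snapshot advantage: {snapshot_advantage:.2e}")  # 2.25e+07
```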
However, not every snapshot imager takes advantage of this potential throughput improvement. When a snapshot imager utilizes filters, e.g., wavelength filters and polarization filters, to acquire a multidimensional datacube, it sacrifices throughput by the same factor as a scanning-based counterpart. By contrast, “full-throughput” snapshot imagers differentiate photons by using filterless geometries. For example, image mapping spectrometry (IMS) [15–17] and imaging spectrometry using a light field architecture (IS-LFA) [25] are both snapshot spectral imagers (discussed in Section 3.1). However, IMS utilizes a prism to disperse light into its spectrum, while IS-LFA employs a filter array. When measuring Nλ spectral bands, the throughput of IMS thus surpasses that of IS-LFA by a factor of Nλ.
In addition, not every snapshot imager’s acquisition capability can be scaled up to “full” dimensions. When a snapshot imager sacrifices data in one dimension in order to measure another dimension, it ceases to be a “full-dimension” snapshot imager because the resultant conflict prevents the imager from measuring these two photon characteristics in parallel. By contrast, if a snapshot imager measures one photon characteristic without affecting the others, it can be potentially modified to acquire datacubes of even higher dimensions. For example, sequentially timed all-optical mapping photography (STAMP) [26] and compressed ultrafast photography (CUP) [27] are both snapshot temporal imagers (discussed in Section 3.4). However, because STAMP trades spectral information (λ) for temporal information (t), it cannot measure a 4D (x,y,λ,t) datacube. By contrast, data acquisition by CUP is not sensitive to optical wavelengths. Therefore, its functionality has been readily expanded to the realm of 4D (x,y,λ,t) imaging.
3. Snapshot multidimensional imaging implementations and applications
3.1 Snapshot spectral imaging (x, y, λ)
Rather than simply capturing two-dimensional intensity images like a monochromatic camera or measuring spectra like a spectrometer, a spectral imager acquires entire 3D datacubes (x, y, λ) for multivariate analysis, providing structural, molecular, and functional information about the sample with unprecedented detail [28, 29]. Using the conceptual framework in Fig. 2, snapshot spectral imagers can be divided into two categories. In the direct-measurement category, representative techniques are image mapping spectrometry [15–17], imaging spectrometry using hyperpixels [30–32], imaging spectrometry using a fiber bundle [33], imaging spectrometry using a filter stack [34], imaging spectrometry using a light field architecture [25], and image-replicating imaging spectrometry [35]. In the computation category, representative techniques are snapshot hyperspectral imaging Fourier transform spectrometry [36, 37], multispectral Sagnac interferometry [38], computed tomography imaging spectrometry [39, 40], and coded aperture snapshot spectral imaging [11, 12].
Image mapping spectrometry (IMS) is an image-division direct-measurement technique [15–17]. Based on the concept of image slicing from astronomy [41, 42], IMS utilizes a custom-fabricated spatial mapping unit, referred to as an image mapper, to slice the input image and redirect the resultant image stripes onto different parts of a CCD, thereby creating blank spaces for spectral dispersion. The optical setup of IMS is shown in Fig. 3a. The input image is relayed to the image mapper through a 4f imaging system. The image mapper (Fig. 3b) consists of hundreds of mirror facets. Each mirror facet is about 70 microns wide and 25 mm long, and has a two-dimensional tilt angle [43, 44]. On the image mapper, the mirror facets are grouped into periodic blocks, and within each block the facets are fabricated with different tilt angles. The light reflected from these mirror facets is collected by a lens and enters the corresponding pupils at the collecting lens’s back aperture. The light is spectrally dispersed by a prism array and reimaged by a lenslet array onto a CCD. In this way, each (x,y,λ) voxel is mapped to a unique (x′,y′) position on the CCD, and a simple image remapping algorithm recovers the original (x,y,λ) datacube. Because datacube voxels are directly mapped to CCD pixels, the size of the datacube that an IMS can measure is fundamentally limited by the number of CCD pixels. With a large-format CCD, a current state-of-the-art IMS can measure a 350 × 350 × 48 (x,y,λ) datacube [45] within a single camera snapshot. The IMS has been demonstrated in combination with a variety of imaging modalities, such as microscopy [16, 24, 46–48], endoscopy [45], fundus photography [49], and macroscopy [50, 51], and has been employed for imaging in both the visible [45, 46] and infrared spectral ranges [51].
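Because each (x, y, λ) voxel lands on its own detector pixel, the IMS reconstruction reduces to a lookup-based remapping of detector pixels into the datacube. The sketch below illustrates only this remapping idea; the voxel-to-pixel lookup table is a hypothetical placeholder for what, in a real instrument, comes from calibration of the image mapper geometry.

```python
import numpy as np

def remap_ims(detector, lookup):
    """Remap an IMS detector frame into an (x, y, lambda) datacube.

    detector : 2D array, the raw camera frame.
    lookup   : dict mapping (x, y, band) -> (row, col) detector coordinates,
               obtained from a calibration step (idealized here).
    """
    nx = 1 + max(key[0] for key in lookup)
    ny = 1 + max(key[1] for key in lookup)
    nb = 1 + max(key[2] for key in lookup)
    cube = np.zeros((nx, ny, nb))
    for (x, y, band), (row, col) in lookup.items():
        cube[x, y, band] = detector[row, col]  # one detector pixel per voxel
    return cube
```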
Fig. 3.
Image mapping spectrometry (IMS). a. Optical setup. b. Photograph of an image mapper. Figure reprinted with permission from [45] and [17].
Integral field imaging using hyperpixels [30–32] is also an image-division direct-measurement technique. Based on a concept that was initially proposed in astronomy [52, 53], integral field imaging using hyperpixels first images the input scene onto a lenslet array, then filters the sub-pupils at the back focal plane with a pinhole array. The filtered sub-pupil images are spectrally dispersed by a prism and reimaged onto an FPA. Because of spatial filtering by the pinhole array, void spaces are created between adjacent pinhole images for spectral dispersion (Fig. 4a). The spectral dispersion direction of the prism is arranged at an angle with respect to the lenslet array, resulting in a pattern in which the detector pixels can be fully utilized (Fig. 4b). However, due to spatial filtering by the pinhole array, integral field imaging using hyperpixels suffers from a significant loss of optical throughput. In addition, this approach requires that the input scene have a uniform irradiance distribution over different view angles, a condition that does not hold for cases such as specular reflection.
Fig. 4.
Integral imaging spectrometry using hyperpixels. a. Image of undispersed sub-pupils after pinhole filtering. b. Image of spectrally dispersed sub-pupils. Figure reprinted with permission from [32].
Imaging spectrometry using a fiber bundle (IS-FB) is yet another image-division direct-measurement technique [33, 54–56]. The concept was initially proposed in astronomy, where researchers used individual fibers to selectively sample areas where stars are located rather than sampling the entire FOV [57]. This concept was not further developed until the invention of maneuverable coherent fiber bundles, which can transform a 2D image at the input end into 1D signals at the output end (Fig. 5). IS-FB takes advantage of this image reformatting by spectrally dispersing the resultant 1D signals with a slit spectrometer and measuring the spectrograph with an FPA. The deployment of the maneuverable fiber bundle thus allows spectral imaging of a 2D scene within a snapshot, avoiding the spatio-spectral crosstalk seen when a 2D image is directly dispersed by a prism or diffraction grating. A state-of-the-art IS-FB instrument can measure 44 × 40 × 300 (x,y,λ) datacubes in real time [56]. However, because of the difficulty of manufacturing such a fiber bundle, IS-FB suffers from breakage of image pixels, as seen at the fiber input end in Fig. 5. In addition, due to the low optical coupling efficiency from air to fiber, IS-FB normally has a low optical throughput compared with other spectral imaging modalities working in free space.
Fig. 5.
Reformatting a 2D image to 1D signals by a maneuverable coherent fiber bundle. Figure adapted with permission from [55].
Imaging spectrometry using a filter stack (IS-FS) [34, 58] is an optical-path-division direct-measurement technique. As shown in Fig. 6, to separate the input beam spectrally, IS-FS utilizes a filter stack to reflect different wavelengths in different directions. The reflected light is collected by a lens and forms spectral images on different parts of an FPA. Because the angles between adjacent filters are small, the introduced optical path differences between different wavelengths are negligible. This condition assures that each spectral channel image can be in focus simultaneously. A state-of-the-art IS-FS can capture 12 spectral channels within a single camera snapshot [59]. However, it is difficult to further increase the number of spectral channels for IS-FS because of the limited tilt angle range that can be accommodated in a filter stack.
Fig. 6.
Imaging spectrometry using a filter stack. Figure reprinted with permission from [60].
Imaging spectrometry using a light field architecture (IS-LFA) [60] is an aperture-division direct-measurement technique. First proposed by Levoy et al. [25], IS-LFA places an array of filters with different spectral transmission bands at the aperture of an imaging system, then reimages this filtered aperture onto a detector with a pinhole array (Fig. 7). Because different parts of the aperture have different transmission wavelengths, the FPA pixels associated with each pupil image measure the spectrum emanating from a specific spatial location at the object plane. Variants of Levoy’s design include replacing the pinhole array with a lenslet array [61] and replacing the filter array with a linear variable filter [62]. Despite easy implementation on a light-field camera, the drawbacks of IS-LFA are the parallax effects associated with multi-view imaging, the Lambertian reflectance assumption, and a loss of optical throughput by a factor of Nλ (the number of filters in the filter array) in the case of continuous and uniform spectral sampling.
Fig. 7.
Optical setup of an imaging spectrometer using a light field architecture. u, v, coordinates at the lens aperture; s, t, coordinates at the pinhole array. Figure reprinted with permission from [61].
Image-replicating imaging spectrometry (IRIS) [35, 63] is an optical-path-division direct-measurement technique. Based on the concept of Lyot spectral filtering, IRIS utilizes a cascade of birefringent interferometers to separate the input light spectrally and redirect the components of different wavelengths in different directions. Each birefringent interferometer consists of a retarder and a Wollaston prism. The operating principle of IRIS is illustrated by a simplified model with two cascaded birefringent interferometers (Fig. 8a). The input light is linearly polarized by a polarizer and passed to a retarder, whose fast axis is aligned at 45° with respect to the optic axis of the polarizer. An optical path difference (OPD), bt1, is introduced between the ordinary and extraordinary polarization components, where b is the birefringence and t1 is the thickness of the retarder. The transmittance of these two polarization components through the retarder is wavelength-dependent, as described by
T_o(k) = cos²(kbt1/2),  T_e(k) = sin²(kbt1/2),    (1)
where k is the wavenumber. These two polarization components are separated by the Wollaston prism and directed in different directions. Then the two divergent beams pass through the second birefringent interferometer and yield four divergent output rays, each associated with a distinct spectral band. The number of spectral bands is thus determined by the number of cascaded birefringent interferometers in the system. In an IRIS prototype (Fig. 8b), Gorman et al. demonstrated the acquisition of eight spectral bands within a single camera snapshot by using four cascaded birefringent interferometers [35]. The drawbacks of IRIS are a loss of half of the optical throughput when imaging unpolarized scenes, the difficulty of measuring a large number of spectral bands because of the need for a large-format Wollaston prism with sufficient birefringence, and the difficulty of correcting polarization-dependent chromatic aberrations.
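To see how the cascade splits the spectrum, the sketch below propagates the two complementary transmittances of Eq. (1) through N stages; each of the 2^N output beams then carries a distinct comb-like spectral weighting. The default birefringence value and the choice of retarder thicknesses are illustrative assumptions, not prescriptions from the original design.

```python
import numpy as np
from itertools import product

def iris_band_weights(wavenumbers, thicknesses, birefringence=0.01):
    """Spectral weighting of each output beam of an IRIS cascade.

    wavenumbers  : array of wavenumbers k, as in Eq. (1).
    thicknesses  : retarder thickness of each birefringent stage.
    birefringence: illustrative value; a real retarder is calibrated.
    Each stage splits a beam into an 'o' branch (cos^2 weighting) and an
    'e' branch (sin^2 weighting); N stages yield 2**N output beams.
    """
    weights = {}
    for branches in product("oe", repeat=len(thicknesses)):
        w = np.ones_like(wavenumbers, dtype=float)
        for branch, t in zip(branches, thicknesses):
            phase = 0.5 * wavenumbers * birefringence * t
            w = w * (np.cos(phase) ** 2 if branch == "o" else np.sin(phase) ** 2)
        weights[branches] = w
    return weights
```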
Fig. 8.
Image-replicating imaging spectrometry (IRIS). a. Birefringent spectral demultiplexor. b. Optical setup. t1, t2 thickness of retarders. Figure reprinted with permission from [35].
Snapshot hyperspectral imaging Fourier transform spectrometry (SHIFT) [64] is an aperture-division computational technique using direct image reconstruction. Conceptually, SHIFT is based on multiple-image Fourier transform spectrometry, which was first demonstrated by Hirai et al. [65]. In Hirai’s original design, the modulation of optical path difference is achieved by tilting a mirror along two axes in a Michelson interferometer. However, this setup is sensitive to environmental vibration because the input signals traverse two different optical paths before they interfere at the detector. SHIFT solves this problem by using a birefringent polarization interferometer. As shown in Fig. 9a, the object is first imaged by a lenslet array. The formed N × M subimages are passed to the birefringent polarization interferometer, which consists of two Nomarski prisms. Rotating the prisms by a small angle with respect to the detector results in different optical path differences (OPDs) for different subimages. The spectrum at each spatial position can be recovered by Fourier transforming the intensity signals along the OPD axis in the 3D interferogram (Fig. 9b). Compared with Hirai’s approach, SHIFT is more compact and less affected by vibration due to its common optical path design. However, SHIFT suffers from the parallax effect inherent in multi-view imaging. In addition, because of its dependence on the birefringence effect, the optical throughput of SHIFT is limited to 50% when imaging unpolarized scenes.
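The SHIFT recovery step itself is a one-dimensional Fourier transform: once the sub-images are registered into a 3D interferogram I(x, y, OPD), the spectrum at each pixel is, up to calibration factors, the magnitude of the Fourier transform along the OPD axis. A minimal sketch, assuming a uniformly sampled and mean-subtracted interferogram:

```python
import numpy as np

def shift_recover_spectrum(interferogram, opd_step):
    """Recover per-pixel spectra from a registered SHIFT interferogram.

    interferogram : 3D array (x, y, opd), uniformly sampled along OPD.
    opd_step      : OPD sampling interval (same length unit as wavelength).
    Returns (wavenumbers, spectra); spectra has shape (x, y, n_opd//2 + 1).
    """
    # Remove the DC bias so that only the modulated fringes contribute.
    centered = interferogram - interferogram.mean(axis=-1, keepdims=True)
    spectra = np.abs(np.fft.rfft(centered, axis=-1))
    wavenumbers = np.fft.rfftfreq(interferogram.shape[-1], d=opd_step)
    return wavenumbers, spectra
```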
Fig. 9.
Snapshot hyperspectral imaging Fourier transform spectrometry (SHIFT). a. Optical setup. b. 3D interferogram. A, analyzer; FPA, focal plane array; G, polarizer; HWP, half wave plate; NP1, NP2, Nomarski prisms. Figure adapted with permission from [65].
Multispectral Sagnac interferometry (MSI) [38] is a frequency-domain-division computational technique using direct image reconstruction. Based on the concept of channeled imaging polarimetry [66], MSI utilizes two multi-order blazed gratings to introduce different OPDs for different wavelengths in a modified Sagnac interferometer (Fig. 10). The modulated OPDs are manifested in the interference fringes at the detector, adding carrier frequencies to the object’s native spatial frequency band. The object’s spatial frequency band is shifted by these wavelength-dependent carrier frequencies, thereby creating a mosaic of spectral channels in the spatial frequency domain. By windowing these spectral channels in the spatial frequency domain and then applying an inverse Fourier transform, Kudenov et al. demonstrated that spectral scenes can be recovered from the coincident interference field measured at the detector [38]. However, this approach can image only a few selected wavelengths because the spectral channels must correspond to the blazed wavelengths of the gratings’ diffraction orders. In addition, the optical throughput is halved due to the linear polarization input.
Fig. 10.
Optical setup of a multispectral Sagnac interferometer. G1, G2, diffraction gratings; LP, linear polarizer; M1, M2, mirrors; WGBS, wire-grid beam splitter. Figure reprinted with permission from [38].
Computed tomography imaging spectrometry (CTIS) [67, 68] is an aperture-division computational technique using iterative image reconstruction. As shown in Fig. 11a, a computer-generated hologram (CGH) is placed at the conjugate plane of the aperture stop of an imaging system. Unlike a conventional diffraction grating, which disperses light along only one dimension, a CGH can disperse light along two dimensions, forming different combinations of diffraction-order images at the camera (Fig. 11b). Each diffraction-order image is the result of two successive operations applied to the object’s datacube—shearing the wavelength axis towards the direction associated with the image’s diffraction order, followed by summing the intensities along the wavelength axis. A multiplicative algebraic reconstruction algorithm [69] allows the object’s datacube to be reasonably estimated. Due to its compactness, CTIS has been used in combination with a variety of imaging modalities, such as microscopy [39, 40], macroscopy [70, 71], and ophthalmoscopy [72, 73]. However, CTIS is essentially a limited-view instrument—each voxel of the object’s datacube is viewed through a limited set of angles, which correspond to the limited number of projected images at the camera. Because of the limited detector area and the low diffraction efficiency at high diffraction orders, CTIS suffers from two missing cones in the spatio-spectral frequency domain [68, 74]. Therefore, it is difficult to image objects with flat spatial features and sharp spectral transitions. A recent work compensates, to some extent, for this missing cone problem by incorporating prior knowledge about the discreteness of spectra into the image formation framework through a parametric model [75].
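Conceptually, the CTIS inversion treats the detector image as a set of linear projections g = Hf of the vectorized datacube f and refines f with a multiplicative update. The sketch below uses an expectation-maximization-style multiplicative update as a stand-in for the algorithm cited above; the system matrix H is assumed to come from a separate calibration step.

```python
import numpy as np

def multiplicative_reconstruction(H, g, n_iter=50, eps=1e-12):
    """Iteratively solve g = H @ f for a non-negative datacube estimate f.

    H : (n_pixels, n_voxels) system matrix from calibration.
    g : (n_pixels,) measured detector values.
    """
    f = np.ones(H.shape[1])                  # non-negative initial estimate
    norm = H.T @ np.ones(H.shape[0]) + eps   # column sums for normalization
    for _ in range(n_iter):
        ratio = g / (H @ f + eps)            # measured / predicted detector values
        f *= (H.T @ ratio) / norm            # multiplicative update keeps f >= 0
    return f
```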
Fig. 11.
Computed tomography imaging spectrometry (CTIS). a. Optical setup. b. Diffraction-order images of a retina dispersed by a computer-generated hologram (CGH). Figure adapted with permission from [77].
Coded aperture snapshot spectral imaging (CASSI) [11, 12, 77] is an optical-path-division computational technique using iterative image reconstruction. Based on the concept of compressed imaging, CASSI encodes the input image with a random binary pattern using an absorption mask, then disperses the encoded image with a prism. The spatio-spectrally multiplexed image is measured by an FPA. Image reconstruction then amounts to solving the inverse problem of this image formation process. By employing an algorithm such as gradient projection for sparse reconstruction [78] or a two-step iterative shrinkage/thresholding algorithm [79], Wagadarikar et al. demonstrated that an (x,y,λ) datacube can be reconstructed from such a measurement. However, because CASSI is built upon the compressed sensing paradigm, it requires the input scene to be sparse in the gradient domain in order to work properly. To improve CASSI’s reconstruction quality, recent efforts encompass utilizing multiple camera shots with a varying mask [80–82], a higher-order image reconstruction model [83], an optimized coded aperture [84, 85], and a hybrid design employing two cameras [86].
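The CASSI measurement can be summarized per spectral band as: modulate the band image with the binary mask, shift it by a band-dependent amount (the prism dispersion), and sum onto the detector. A minimal sketch of this forward model, assuming an idealized one-pixel shear per band along x; a reconstruction would then invert this operator with a sparsity-promoting solver such as TwIST or GPSR:

```python
import numpy as np

def cassi_forward(cube, mask):
    """Single-disperser CASSI forward model (idealized).

    cube : (ny, nx, n_bands) spectral datacube of the scene.
    mask : (ny, nx) binary coded aperture (0/1 values).
    Returns the (ny, nx + n_bands - 1) detector image.
    """
    ny, nx, nb = cube.shape
    detector = np.zeros((ny, nx + nb - 1))
    for band in range(nb):
        coded = cube[:, :, band] * mask          # spatial encoding by the mask
        detector[:, band:band + nx] += coded     # band-dependent shear, then sum
    return detector
```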
Snapshot spectral imaging modalities are compared in Table 1. The spatial resolutions of IMS, IS-FS, SHIFT, IRIS, and MSI are all diffraction limited. By contrast, the spatial resolutions of the other modalities are poorer than the diffraction limit because of various trade-offs. In integral field imaging using hyperpixels, because a micro-lens array (MLA) is used to divide the image, the spatial resolution is limited by the number of lenslets on the MLA. This limitation also constrains IS-LFA, which utilizes an MLA to divide the aperture. In IS-FB, because the image is transmitted through a fiber bundle, the spatial resolution is limited by the fiber bundle’s pitch. In CTIS, the spatial resolution is object-dependent and, for a fixed FPA, practically limited by the number of projected views of the (x,y,λ) datacube on the camera. In addition, because of the missing cone in the spatio-spectral frequency domain, even with as many projections as desired, the recovered spatial bandwidth at a given spectral modulation frequency is still limited—the higher the spectral modulation frequency, the lower the spatial frequency bandwidth. In CASSI, the spatial resolution is worse than the diffraction limit, mainly because of the introduced spatio-spectral multiplexing along the spectral dispersion direction. In addition, the reconstruction process, which encourages sparsity in the spatial gradient domain, further smooths the high-frequency spatial features.
Table 1.
Comparative features of snapshot spectral imaging modalities
| Modality | General strategy | Acquisition strategy | Reconstruction strategy | Lateral spatial resolution | Spectral resolution | Optical throughput |
|---|---|---|---|---|---|---|
| Image mapping spectrometry (IMS) | Direct measurement | Image-division | Not applicable | Diffraction limited, assuming Nyquist sampling at the image mapper | Limited by the number of mirror facets in a periodic group at the image mapper for a given spectral range | 100% |
| Integral field imaging using hyperpixels | Direct measurement | Image-division | Not applicable | Limited by the number of lenslets for a given FOV | Limited by the size of a sub-pupil image after pinhole filtering | Limited by the pinhole filtering |
| Imaging spectrometry using a fiber bundle (IS-FB) | Direct measurement | Image-division | Not applicable | Limited by the fiber bundle pitch | Diffraction limited by the optics inside the spectrometer | Limited by the optical throughput of the fiber bundle |
| Imaging spectrometry using a filter stack (IS-FS) | Direct measurement | Optical-path-division | Not applicable | Diffraction limited | Determined by the interval of the cut-on wavelengths of adjacent filters; practically limited by the number of dichroic filters that can be fitted into a stack | 100% |
| Imaging spectrometry using a light field architecture (IS-LFA) | Direct measurement | Aperture-division | Not applicable | Limited by the number of lenslets for a given FOV | Limited by the bandwidth of filters at the aperture | 1/Nλ (Nλ, number of spectral bands) |
| Image-replicating imaging spectrometry (IRIS) | Direct measurement | Optical-path-division | Not applicable | Diffraction limited | Limited by the number of cascaded birefringent interferometers for a given spectral range | 50% (polarization sensitive) |
| Snapshot hyperspectral imaging Fourier transform spectrometry (SHIFT) | Computation | Aperture-division | Direct image reconstruction | Diffraction limited | Limited by the number of lenslets for a given spectral range | 50% (polarization sensitive) |
| Multispectral Sagnac interferometry (MSI) | Computation | Frequency-domain-division | Direct image reconstruction | Diffraction limited | Determined by the diffraction order corresponding to the blazed wavelength | 50% (polarization sensitive) |
| Computed tomography imaging spectrometry (CTIS) | Computation | Aperture-division | Iterative image reconstruction | Object dependent; practically limited by the number of projected views of an (x, y, λ) datacube at the camera; fundamentally limited by the missing cone problem in the spatio-spectral frequency domain | Object dependent; practically limited by the number of projected views of an (x, y, λ) datacube at the camera; fundamentally limited by the missing cone problem in the spatio-spectral frequency domain | 100% |
| Coded aperture snapshot spectral imaging (CASSI) | Computation | Optical-path-division | Iterative image reconstruction | Object dependent; generally worse than the diffraction limit because of spatio-spectral multiplexing at the FPA and the sparsity constraint on the input scene | Object dependent; generally worse than the diffraction limit because of spatio-spectral multiplexing at the FPA and the sparsity constraint on the input scene | 50% |
The spectral resolutions of snapshot spectral imaging modalities vary and are restricted by different factors. In IMS, given a desired spectral range, the spectral resolution is limited by the number of mirror facets in a periodic group at the image mapper. In integral field imaging using hyperpixels, because the sub-pupil image associated with each lenslet acts as the point spread function, its FWHM determines the system’s spectral resolution. In IS-FB, because each fiber is an independent source for the spectrometer, the spectral resolution is diffraction limited by the optics inside the spectrometer. In IS-FS, the spectral resolution is determined by the interval of the cut-on wavelengths of adjacent filters, and is practically limited by the number of dichroic filters that can be fitted into a stack. In IS-LFA, because the aperture is divided and filtered with different color filters, the spectral resolution is determined by the bandwidth of each individual filter. However, the maximum number of resolvable spectral bands is fundamentally limited by the number of resolvable spatial pixels associated with each lenslet. In SHIFT, an MLA divides the aperture and introduces a different OPD for each sub-image. Given a desired spectral range (OPD sampling interval), the number of lenslets on the MLA thus determines the OPD range and thereby the spectral resolution. In IRIS, the spectral bandwidth is approximately halved each time the light passes through a birefringent interferometer (Eq. 1). The final spectral bandwidth of a spectral channel is thus limited by the number of cascaded birefringent interferometers in use. In MSI, the spectral resolution is determined by the diffraction efficiency of a multi-order blazed grating and depends on its diffraction order. In general, at a lower diffraction order (i.e., shorter blazed wavelengths), the spectral resolution is higher. In CTIS and CASSI, the spectral resolutions are limited by the same factors that restrict their spatial resolutions, as previously discussed.
Measured by optical throughput, IMS, IS-FS, and CTIS have the best performance, all maintaining 100% light throughput. The light throughput of IS-FB is limited by the light coupling efficiency, fill factor, and transmittance loss of the fiber bundle. The light throughput of integral field imaging using hyperpixels is limited by the pinhole filtering. The light throughputs of SHIFT, IRIS, and MSI are ~50% when imaging a natural scene because all these modalities require a linear polarization input. The light throughput of CASSI is also ~50% because an absorption mask is employed to encode the input image. IS-LFA has an optical throughput of 1/Nλ when imaging Nλ spectral bands, and thus it is not suitable for hyperspectral imaging applications in which many wavelengths are collected.
3.2 Snapshot plenoptic imaging (x, y, θ, φ)
Plenoptic imaging, also referred to as light field or integral imaging, can capture a 4D light field (x, y, θ, φ) within a single exposure [87]. First proposed by Lippmann in 1908 [88], plenoptic imaging has found numerous applications in photography [89, 90], stereoscopy [91, 92], otoscopy [61], ophthalmoscopy [93], and microscopy [25, 94, 95]. Conventional plenoptic imaging captures varied perspectives of a scene using an array of independent cameras [96, 97]. This array complicates experimental setup, calibration, and synchronization. As an alternative, a light field can be measured by scanning a single camera across different viewpoints [98]. This method, however, cannot be used to image dynamic scenes because of its low temporal resolution. To overcome this limitation, a variety of snapshot plenoptic imaging methods have been developed in the past decade, allowing a 4D light field to be captured with a single image sensor and within a single camera exposure.
Currently there are three major approaches to implementing snapshot plenoptic imaging. The first approach, referred to as near-field integral imaging, directly images the scene through a lenslet array, creating multiple images at varied view angles (Fig. 12a). Each perspective image from a lenslet is referred to as an elemental image (EI), and the entire collection of these EIs is referred to as the integral image of the scene. To effectively sample the angular information, this approach requires the object to be close to the imaging system, so that the lenslets cover a relatively large angular extent of the emanated light rays.
Fig. 12.
Snapshot plenoptic imaging. a. Near-field integral imaging setup. b. Far-field integral imaging setup. c. Dappled photography setup. FPA, focal plane array.
By contrast, the second approach, referred to as far-field integral imaging, first images a distant scene onto the lenslet array using a camera lens, also called a depth-control lens (Fig. 12b). Then each lenslet spatially samples this intermediate image and creates a pupil image, which provides the angular distribution of radiance at the corresponding point on the object. To create the necessary parallax, the depth-control lens must have a relatively large aperture. The angular resolution is determined by the number of detector pixels associated with a pupil image, and the spatial resolution is determined by the total number of pupil images (i.e., the number of lenslets). Because of this pixel allocation, the resultant image’s spatial resolution is generally worse than the diffraction limit. For example, with a 16-megapixel (4000 × 4000) image sensor, a system implementing this design has a spatial resolution of only 300 × 300 [89]. To improve the spatial resolution, the most intuitive method is to use a denser lenslet array, trading angular resolution for spatial resolution. However, simply reducing the size of each lenslet cannot effectively remedy this problem because the information measured by pixels at the pupil image’s boundary is either entirely lost or noisy [99]. More effective solutions include using focused plenoptic cameras [100, 101] and using an array of negative lenslets and prisms [99]. These designs allow high spatial sampling at the expense of reduced angular resolution. Nevertheless, because information along the spatial dimension is generally considered to have more variation than that along the angular dimension [98], previous studies showed that, even with a limited number of angular samplings, a 4D light field can be reasonably estimated [99, 100].
The third approach, referred to as dappled photography or heterodyne light field imaging [102–104], resembles a traditional camera setup. However, to modulate the 4D light field, it places an absorbing mask with a broadband code, e.g., a sum-of-sinusoids pattern, between the camera lens’s aperture stop and the image sensor (Fig. 12c). These patterns are designed to map high-frequency angular information to the spatial frequency domain. The light field can be recovered by assembling the tiles of the 2D Fourier transform of the captured image into a 4D datacube and computing the inverse Fourier transform. Given a large-format camera, dappled photography retains full spatial resolution for the in-focus image. However, the light throughput is halved by the absorbing mask. Xu et al. improved this technique by introducing dual attenuation masks to modulate the light field: one with a random code placed at the lens’s aperture stop, and another, with a broadband code, placed at a plane between the lens’s aperture and the sensor [105]. Compared with single-mask-based dappled photography, Xu’s method utilizes the camera’s spatial frequency bandwidth more efficiently and therefore achieves higher spatial resolution, at the expense of more severe throughput loss (>95%).
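Conceptually, the heterodyne recovery is Fourier-domain demultiplexing: the sinusoidal mask places copies of the angular content on a grid of carrier frequencies, so cropping the corresponding tiles of the image's 2D Fourier transform, stacking them into a 4D array, and inverse transforming yields the light field. The sketch below is schematic only; it assumes the tiles align exactly with a regular grid and ignores calibration details such as carrier alignment and tile weighting.

```python
import numpy as np

def dappled_reconstruct(image, n_theta_y, n_theta_x):
    """Schematic heterodyne light-field recovery from a mask-modulated image.

    image : 2D captured frame whose Fourier transform contains an
            n_theta_y x n_theta_x grid of spectral tiles (one per angular sample).
    Returns a 4D light field of shape (n_theta_y, n_theta_x, ny, nx).
    """
    F = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = F.shape[0] // n_theta_y, F.shape[1] // n_theta_x
    tiles = np.zeros((n_theta_y, n_theta_x, ny, nx), dtype=complex)
    for i in range(n_theta_y):
        for j in range(n_theta_x):
            # Crop one carrier-frequency tile per angular sample.
            tiles[i, j] = F[i * ny:(i + 1) * ny, j * nx:(j + 1) * nx]
    # Inverse transform over all four axes recovers the (theta_y, theta_x, y, x) field.
    light_field = np.fft.ifftn(np.fft.ifftshift(tiles, axes=(-2, -1)))
    return light_field.real
```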
All of the above strategies share a common trade-off between spatial and angular resolution: the total number of reconstructed light field elements cannot surpass the number of sensor pixels. To overcome this limitation, compressed sensing architectures have been introduced into light field imaging [106, 107]. The initial goal is to reduce the number of measurements compared with their non-compressed counterparts [108] by leveraging the light field’s intrinsic angular or spatial correlations. However, these techniques still require multiple camera exposures and thus are not suitable for imaging dynamic scenes. Marwah et al. recently constructed a single-shot, high-resolution light field camera using a compressed imaging architecture [109]. Similar to the original dappled photography, a coded mask is placed between the lens aperture and sensor to modulate the light field. However, rather than using a broadband or random code, Marwah optimized the code pattern by minimizing the mutual coherence between the measurement matrix and dictionary matrix [110]. The resultant method provides higher reconstruction quality than dappled photography while maintaining a reasonable optical throughput (~50%).
We compare the snapshot plenoptic imaging strategies in Table 2. In near-field integral imaging, the spatial resolution is limited by the number of camera pixels associated with each lenslet, while the angular resolution is limited by the number of lenslets. By contrast, in far-field integral imaging these two limiting factors are switched. In general, near-field and far-field integral imaging are favored in spatial-resolution-priority and angular-resolution-priority imaging, respectively. In dappled photography, there is a trade-off between spatial resolution and angular resolution because high angular frequency components are mapped to the spatial frequency domain and occupy the same axes as spatial frequency components. Due to the linear reconstruction, i.e., mosaicking in the frequency domain, the number of reconstructed spatial and angular light field elements cannot surpass the total number of camera pixels. Compressed integral imaging mitigates this limitation by leveraging the intrinsic sparsity of a natural scene and reconstructs the light field using an iterative algorithm, i.e., gradually minimizing the difference between the estimated values and the measurement in the form of the L2 norm. However, if the sparsity requirement is not met, the reconstruction process yields artifacts. Measured by optical throughput, near-field and far-field integral imaging perform better because they maintain all light rays emitted from the object. On the other hand, because of the usage of absorption masks, both dappled photography and compressed integral imaging suffer from at least 50% throughput loss. However, these mask-based methods are easier to implement than microlens-array-based methods.
Table 2.
Comparative features of snapshot plenoptic imaging modalities
| Modality | General strategy | Acquisition strategy | Reconstruction strategy | Lateral spatial resolution | Angular resolution | Optical throughput |
|---|---|---|---|---|---|---|
| Near-field integral imaging | Direct measurement | Aperture-division | Not applicable | Limited by the number of camera pixels associated with each lenslet for a given spatial FOV | Limited by the number of lenslets for a given angular FOV | 100% |
| Far-field integral imaging | Direct measurement | Image-division | Not applicable | Limited by the number of lenslets for a given spatial FOV | Limited by the number of camera pixels associated with each lenslet for a given angular FOV | 100% |
| Dappled photography | Computation | Frequency-domain-division | Direct image reconstruction | Limited by the trade-off between spatial and angular frequency bandwidth | Limited by the trade-off between spatial and angular frequency bandwidth | ≤50% |
| Compressed integral imaging | Computation | Not applicable | Iterative image reconstruction | Object dependent; limited by the sparsity constraint on a light field in a given representing basis | Limited by the sparsity constraint on a light field in a given representing basis | 50% |
3.3 Snapshot volumetric imaging (x, y, z)
Volumetric imaging, one of the earliest embodiments of multidimensional imaging, has long been pursued because the world around us is 3D. In this section, we review the main snapshot volumetric imaging techniques that allow us to see a 3D scene in the ballistic or quasi-ballistic regime. For snapshot 3D surface imaging techniques, such as those using structured illumination or parallel light detection and ranging (LIDAR), we refer readers to more specific articles, [111] and [112], respectively.
When imaging a 3D scene, a conventional 2D imager integrates light intensities along the depth axis. For direct volumetric measurement, devices using a 2D FPA thus face three major challenges. The first challenge is to remap different depth layers to different areas of the FPA. The second challenge is to compensate for defocusing in these remapped depth layers. The third challenge is to suppress the out-of-focus light and improve the axial resolution. Compared with the first two challenges, suppressing the out-of-focus light from a depth layer is relatively easy. It can be achieved either numerically during post-processing, such as through 3D deconvolution [113], or physically during data acquisition, such as by employing parallel light-sheet illumination [114]. However, it is noteworthy that 3D deconvolution is effective only for specimens in which the ratio of background to in-focus signals is no greater than 20:1 [115], thereby posing a practical limitation on the applicable objects. Additionally, removing out-of-focus light by 3D deconvolution is achieved at the expense of a decreased signal-to-noise ratio and may also introduce structural artifacts [116]. In the following discussion, we focus on techniques that can tackle the first two challenges, referred to as depth remapping and defocus compensation.
A simple solution to these two challenges is to split the optical path at the image side and introduce corresponding OPDs for the target depth layers. This optical-path-division concept was first demonstrated using a dual-camera setup as shown in Fig. 13 [117, 118]. A 50:50 beam splitter is inserted into the optical path of a standard microscope and splits the light into two beams. Each beam is focused by a tube lens and forms an image at a detector. The two detectors are placed at different distances from the tube lens, measuring two distinct depth layers at the same time. Geissbuehler et al. further developed this technology and simultaneously captured eight depth layers by introducing additional beam splitters and mirrors [119].
Fig. 13.
Dual-camera optical setup for simultaneous two-plane imaging. Figure reprinted with permission from [118].
Alternatively, depth remapping and defocus compensation can also be achieved by wavefront engineering techniques, such as volume holographic imaging [120–123] or multifocus microscopy using a distorted grating [124–127] or a liquid crystal spatial light modulator [128, 129].
First proposed by Liu et al. [130], volume holographic imaging (VHI) is an optical-path-division direct-measurement technique. It utilizes a volume hologram’s wavefront selection properties to simultaneously image multiple object depths [131–135]. Figure 14 shows a representative experimental setup. A volume hologram is placed at the aperture stop of an imaging lens, acting as a Bragg filter and allowing only photons with specific propagation angles and wavelengths to pass through [120, 136]. To enable simultaneous imaging of multiple planes, the volume hologram can be produced in a multiplexed manner—superimposed by holographic gratings with different frequency patterns. Each multiplexed grating is Bragg matched to a different depth in the sample and diffracts the light to a different central angle. After passing through the volume hologram, the diffracted light is collected by a lens and imaged by an FPA. VHI has been implemented in applications such as endoscopy [137, 138] and microscopy [139, 140]. Despite the snapshot advantage, the number of depth layers that can be simultaneously imaged by VHI is extremely restricted (≤5) [133].
Fig. 14.
Snapshot volumetric imaging by using a multiplexed volume hologram. Figure adapted with permission from [121].
In VHI, the lateral FOV at each depth layer arises from the Bragg degenerate properties of a volume hologram. For a volume hologram recorded with a plane wavefront, degenerate diffraction occurs when (i) combined changes are applied to the incident light’s wavelength and angle, or (ii) the incident beam pivots around the direction that is aligned with the orientation of the volume hologram’s fringes (the y axis in Fig. 14) [131]. Under monochromatic illumination, the VHI’s FOV is a line, due to type-ii degenerate diffraction. To complete the entire 3D volumetric acquisition, scanning is required along the x axis. By contrast, under broadband illumination, VHI has a broader FOV along the x axis because of type-i degenerate diffraction, at the expense of decreased depth resolution [132]. To mitigate this limitation, Sun et al. proposed a new form of VHI with rainbow illumination [141]. Rather than shining a uniform broadband light onto the object, the authors pre-disperse the light using a grating and project the resultant color strips onto different parts of the surface. By carefully choosing the grating period and matching it to the diffracted beam angle, depth-selective images can be simultaneously obtained over the entire illuminated area, with each color Bragg matched to an x position. However, accurate matching between the external grating and the hologram is challenging. Additionally, misalignment between the illumination plane and the object plane can significantly reduce the lateral FOV [142]. Two follow-up works have been carried out since the invention of rainbow VHI. Castro et al. eliminated the stringent grating-hologram matching requirement by using the volume hologram as both the illumination disperser and the angular-spectral filter [142]. Leon et al. improved the depth resolution by a factor of 30 by optimizing the original dual-grating design [143].
As an alternative, depth remapping and defocus compensation can be achieved by using a distorted phase grating [125]. First proposed by Blanchard et al. [124], a distorted phase grating can introduce different levels of defocus in the wavefront and diffract them into different orders. Therefore, when a distorted grating is placed close to a lens, it effectively modifies the focal length of the lens in non-zero diffraction orders, playing the role of a defocus compensator. Additionally, the diffraction angles enable depth remapping.
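A quick way to see this refocusing action is a thin-lens model: for a quadratically distorted grating with defocus coefficient W20 and semi-aperture R, diffraction order m adds a focusing power of roughly 2·m·W20/R² to the adjacent lens, so each order brings a different object depth into focus on the same sensor plane. The sketch below is a back-of-the-envelope estimate under that thin-lens, in-contact assumption, with purely illustrative numbers:

```python
def multiplane_focal_lengths(f_lens, w20, radius, orders=(-1, 0, 1)):
    """Effective focal length seen in each diffraction order of a
    quadratically distorted grating placed against a lens (thin-lens model).

    f_lens : focal length of the lens (m)
    w20    : defocus coefficient of the grating distortion (m)
    radius : semi-aperture of the grating (m)
    """
    focal_lengths = {}
    for m in orders:
        # Order m adds a power of 2*m*w20/radius**2 to the lens power.
        power = 1.0 / f_lens + 2.0 * m * w20 / radius**2
        focal_lengths[m] = 1.0 / power
    return focal_lengths

# Illustrative example: 50 mm lens, 5 mm semi-aperture, one wave of defocus at 550 nm.
print(multiplane_focal_lengths(0.05, 550e-9, 5e-3))
```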
The effect of a distorted phase grating on an imaging system is illustrated in Fig. 15. The combination of a distorted grating and a lens images a single object onto different image planes in each different diffraction order (Fig. 15a). Alternatively, if multiple objects are located at the preset depths, the system can simultaneously image them onto the same plane. In Fig. 15b, the three in-focus images correspond to the objects A, B, C associated with the +1, 0, and −1 diffraction orders, respectively.
Fig. 15.
Use of a distorted grating in an imaging system. a. When a single object is imaged by a distorted grating and lens, different diffraction-order images are formed at different image planes. b. When multiple objects at different depths are imaged by the same imaging system, their different diffraction-order-associated images are formed at the same image plane. Figure adapted with permission from [124].
The first demonstration of this system worked only with narrow-band or laser illumination because the diffraction angle and defocusing power of a distorted grating are sensitive to wavelength [127]. Blanchard et al. alleviated this limitation by first dispersing the incident light using a pair of blazed gratings, then shining the resultant spectral components onto different positions of a distorted grating [127]. This spectral pre-dispersion compensates for the intrinsic spectral dispersion of the distorted grating, thereby allowing simultaneous broadband imaging of multiple planes. However, this scheme cannot be readily implemented in a microscopic setup because a high-numerical-aperture microscope objective normally corrects aberrations at only one depth layer. To solve this problem and extend the depth of focus, Abrahamsson et al. adopted an aberration-free refocusing scheme [144], utilizing a combination of chromatic correction gratings and prisms to compensate for the intrinsic spectral dispersion of the distorted grating [126]. In this way, the authors demonstrated the parallel acquisition of a volumetric image with up to nine focal planes [126]. In a recent work, Hajj et al. applied this technique to stochastic optical reconstruction microscopy (STORM) and demonstrated snapshot 3D superresolution imaging of a living cell [145]. In another follow-up work, Yu et al. demonstrated that the number of depth layers for simultaneous imaging can be dramatically increased to ~50 by introducing Dammann phase encoding into the original distorted grating [146].
Functionally equivalent to a distorted grating, a liquid crystal spatial light modulator (SLM) can also be used for depth demultiplexing [128, 129]. In a demonstration, an SLM was placed at the aperture stop of a microscope objective and programmed to display a phase pattern simulating a set of superposed multi-focal off-axis Fresnel lenses, which remap different depths to different lateral positions of an FPA. Using this setup, nine focal planes were captured simultaneously. Because the SLM is sensitive to both wavelength and polarization, the major drawback of this method is the lack of color and polarization imaging capabilities.
Beyond these direct-measurement techniques, a 3D scene snapshot can also be acquired by computational approaches. Representative techniques in this category are 3D integral imaging [147], single-shot digital holography [148], and snapshot 3D optical coherence tomography [149].
3D integral imaging reconstructs the depth information from a 4D light field (that is, 2D spatial information and 2D light ray angular information). The light field acquisition methods have been discussed in Section 3.2. If the reflectance from a scene is Lambertian, the 3D reconstruction from a light field can be carried out by simulating the optical back-projections of multiple 2D elemental images according to either ray optics [150–153] or wave optics [154]. The depth-of-field is inversely proportional to the angular resolution of the captured light field, while the depth resolution is determined by the NA of the front optics as well as the angular resolution of the light field [25]. Due to its easy implementation, 3D integral imaging has been widely used in various applications, such as imaging objects in turbid media [155], photon counting and photon-starved 3D visualization [156–160], imaging occluded objects [161], 3D microscopy [94, 95, 162–164], and 3D endoscopy [165]. For example, in a recent implementation, Prevedel et al. demonstrated high-speed, large-scale 3D imaging of neuron dynamics in volumes of ~700 µm ×700 µm ×200 µm using a light field microscope [95].
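For readers who prefer a concrete picture of the back-projection step, the sketch below refocuses a 4D light field onto a chosen depth by the standard shift-and-add method. The array shapes, the variable names, and the assumed linear relation between depth and sub-aperture shift are illustrative choices, not the exact procedure of any of the cited works.

```python
import numpy as np

def refocus(light_field, shift_per_view):
    """Shift-and-add back-projection of a 4D light field.

    light_field    : array of shape (U, V, Y, X) -- angular (U, V) and
                     spatial (Y, X) samples of the scene radiance.
    shift_per_view : pixel shift applied per unit angular index; under a
                     simple pinhole-array model (an assumption made here)
                     it is proportional to the refocusing depth.
    """
    U, V, Y, X = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    refocused = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - uc) * shift_per_view))
            dx = int(round((v - vc) * shift_per_view))
            # Back-project each elemental image by translating it opposite
            # to its parallax and accumulating it on the chosen depth plane.
            refocused += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return refocused / (U * V)

# A focal stack is obtained by sweeping the assumed depth-dependent shift.
lf = np.random.rand(5, 5, 64, 64)              # synthetic Lambertian light field
focal_stack = [refocus(lf, s) for s in np.linspace(-2, 2, 9)]
```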
Single-shot digital holography is a computational technique using direct image reconstruction [148, 166–168]. The incident light’s wavefront is recorded by interfering it with a reference beam and forming an interferogram at an FPA. The phase distribution of the complex wavefront contains the 3D information of the original object. The 3D scene can thus be reconstructed by Fresnel transformation, that is, by numerically propagating the complex wavefront to the image plane [169, 170]. Single-shot digital holography can be implemented in either an off-axis configuration [148, 166, 167, 171] or a parallel phase-shifting in-line configuration [168, 172–175]. Because of its reliance on coherent illumination, digital holography generally suffers from speckle artifacts.
The major challenge in digital holography is to suppress the zero-order and twin images during image reconstruction [176]. Off-axis single-shot digital holography has an intrinsic advantage because the real image (+1 diffraction order), the zero-order image, and the twin image (−1 diffraction order) are diffracted in different directions. Therefore, provided that the incident angle of the reference beam is larger than the diffraction angle associated with the object’s maximal spatial frequency, these three images can be separated in the spatial frequency domain [176]. However, the maximal allowable incident angle of the reference beam is limited by the Nyquist sampling of fringes at the FPA. This trade-off results in either an overlap of different diffraction order images or degradation in the resolution of reconstructed scenes [166].
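The spatial-frequency separation described above can be summarized in a few lines: isolate the +1-order sideband of the off-axis interferogram, recenter it, and numerically refocus the recovered complex field, here with the angular-spectrum method. The window size, carrier position, and sampling values below are placeholders, not parameters of any specific system reported in the cited works.

```python
import numpy as np

def reconstruct_off_axis(hologram, carrier, window, wavelength, dx, z):
    """Filter the +1 order of an off-axis hologram and numerically refocus it."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = carrier                              # sideband center, pixels from DC
    ny, nx = hologram.shape
    y0, x0 = ny // 2 + cy, nx // 2 + cx
    mask = np.zeros_like(H)
    mask[y0 - window:y0 + window, x0 - window:x0 + window] = 1.0
    sideband = np.roll(H * mask, (-cy, -cx), axis=(0, 1))   # recenter the +1 order
    field = np.fft.ifft2(np.fft.ifftshift(sideband))         # complex wavefront

    # Angular-spectrum propagation of the complex field over a distance z.
    fy = np.fft.fftfreq(ny, d=dx)[:, None]
    fx = np.fft.fftfreq(nx, d=dx)[None, :]
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - fx**2 - fy**2, 0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

holo = np.random.rand(512, 512)                   # stand-in interferogram
obj = reconstruct_off_axis(holo, carrier=(60, 60), window=40,
                           wavelength=0.633e-6, dx=5e-6, z=0.02)
```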
By contrast, parallel phase-shifting digital holography employs an in-line configuration—the reference beam and object beam are incident on the image plane in parallel, resulting in a complete overlap of different diffraction order images. To remove the undesired zero-order and twin images, Awatsuji et al. used a phase-shifting array to encode adjacent 2 × 2 camera pixels with additional 0, π/2, π, and 3π/2 phases [168]. Given a slowly varying object’s wavefront, the camera pixels associated with these additive phases can be extracted, constituting corresponding phase-step images. By employing an algorithm similar to that in conventional sequential-acquisition phase-shifting digital holography [177], zero-order and twin images can be numerically eliminated. A similar four-step phase encoding method was also independently invented by Millerd et al. [173, 174]. Since the first demonstration, the number of required phase steps to recover the complex wavefront has been reduced first to three [178] and then to two [179–182] in several follow-up works. However, the difficulty of fabricating such a phase-shifting array still poses a practical limitation on the application of this technique.
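A minimal sketch of the demosaicking-style reconstruction implied by the 2 × 2 phase-shifting array is given below: the four phase-stepped sub-images are extracted from alternating pixels and combined with the standard four-step formula. The pixel layout (0 at top-left, π/2 at top-right, π at bottom-left, 3π/2 at bottom-right) is an assumed convention for illustration only.

```python
import numpy as np

def parallel_phase_shift_recover(frame):
    """Recover a complex object wavefront from one interferogram whose
    pixels are phase-encoded in 2x2 superpixels (assumed layout:
    [[0, pi/2], [pi, 3pi/2]])."""
    I0   = frame[0::2, 0::2]
    I90  = frame[0::2, 1::2]
    I180 = frame[1::2, 0::2]
    I270 = frame[1::2, 1::2]
    # Standard four-step phase-shifting combination; the constant reference
    # amplitude is dropped since it only rescales the result.
    return (I0 - I180) + 1j * (I90 - I270)

hologram = np.random.rand(512, 512)        # stand-in single-shot interferogram
wavefront = parallel_phase_shift_recover(hologram)
phase = np.angle(wavefront)                # object phase at half resolution
```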
In another parallel phase-shifting digital holography implementation [172], Hettwer et al. utilized a Michelson interferometer with a polarization beam splitter to generate an object wave and reference wave with orthogonal polarizations (Fig. 16). The combined object and reference waves are diffracted by a grating into three beams with equal intensities. Two quarter-wave-plates are inserted into the optical paths associated with ±1 diffraction orders. The fast axes of the two wave plates are respectively aligned with the object wave’s and reference wave’s polarization directions, introducing ±π/2 phase differences between these two waves. After passing through an analyzer, the object and reference waves interfere, forming three phase-shifted interferograms at a CCD. The object’s complex wavefront can be recovered by employing a three-step phase-shifting reconstruction algorithm. However, due to the utilization of a non-common-path Michelson interferometer, this technique is sensitive to environmental vibration.
Fig. 16.
Optical setup of a parallel phase-shifting digital holography technique. Figure reprinted with permission from [172].
Alternatively, parallel phase-shifting digital holography can be accomplished by the fractional Talbot effect [175], a self-imaging phenomenon that occurs when a grating is illuminated by a coherent laser beam [183]. An image of the grating is formed at integer multiples of the Talbot distance Zt = 2d²/λ, where d is the grating period and λ is the wavelength. At fractional Talbot distances, the light distribution is a superposition of shifted replicas of the grating weighted by different phase factors, referred to as a Fresnel image (Fig. 17a) [184]. Martinez-Leon et al. utilized a grid amplitude grating to produce a fractional Talbot pattern at a distance of Zt/4, where three adjacent aperture squares are encoded with additive phases of 0, π/2, and π, respectively (Fig. 17b). The complex object wavefront can be recovered by employing a three-step phase-shifting reconstruction algorithm. Araiza-Esquivel et al. further advanced this technology and enabled color reproduction by illuminating the object with three color lasers and detecting the resultant holograms with three FPAs [185]. However, due to the introduction of an amplitude grating, the optical throughput of the reference beam is halved.
Fig. 17.
Parallel phase-shifting digital holography utilizing fractional Talbot effect. a. Formation of Fresnel images. b. Optical setup. Figure reprinted with permission from [175].
Besides the aforementioned techniques, it is worth mentioning several other implementations of parallel phase-shifting digital holography. Nomura et al. demonstrated that the object’s wavefront can be recovered by interference with a random-phase reference wave [186–188]. However, the phase of the reference wave must be measured a priori. Lin et al. utilized a phase spatial light modulator for parallel phase encoding of adjacent camera pixels [189]. Although similar to [168] in concept, this approach does not require a pixelated retarder array and is therefore relatively easy to implement. Still, it requires stringent pixel-to-pixel alignment between the spatial light modulator and the camera.
At the expense of color reproduction capability, snapshot volumetric imaging can also be accomplished by spectrally encoding depth. A representative technique is snapshot 3D optical coherence tomography (OCT) [149]. In spectral-domain OCT, the photons scattered from different depth layers exhibit different modulation frequencies in the spectrum [190]. Therefore, no scanning along the depth axis is required when acquiring a volumetric image. However, conventional spectral-domain OCT systems normally utilize point-scanning or pushbroom imaging to measure spectra, resulting in a considerable loss of optical throughput. To enable snapshot 3D OCT, Nguyen et al. utilized a hyperspectral imager, IMS, to capture all the spectra in parallel [149]. Because the depth range is determined by the number of channels sampling the spectrum, the authors employed a large-format CCD sensor to accommodate the required datacube size, and the sensor’s relatively slow data readout limits the volumetric frame rate. Using their proof-of-concept system, Nguyen et al. demonstrated volumetric imaging with a 400 µm depth range, 13.4 µm lateral resolution, and 16.0 µm axial resolution. Based on the same principle, similar wavelength encoding techniques, such as chromatic slit-scan confocal microscopy [191], single-shot computed tomography by spectral multiplexing [192], and self-interference fluorescence microscopy [193], can potentially be combined with a snapshot spectral imager to achieve snapshot 3D volumetric imaging as well.
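The depth encoding exploited by snapshot 3D OCT can be illustrated with a toy calculation: a reflector at depth z modulates the detected spectrum with a frequency proportional to z, so an inverse Fourier transform along the wavenumber axis returns the depth profile. All numbers below are arbitrary illustration values, not parameters of the cited system.

```python
import numpy as np

# Spectral-domain OCT toy model: a few reflectors at one lateral position.
n_k = 1024                                    # spectral samples (wavenumber axis)
k = np.linspace(7.5e6, 8.5e6, n_k)            # wavenumber range [rad/m], arbitrary
depths = np.array([50e-6, 120e-6, 200e-6])    # reflector depths, arbitrary
reflectivity = np.array([1.0, 0.6, 0.3])

# Interference spectrum: each depth adds a cosine of frequency 2*k*z (round trip).
spectrum = sum(r * np.cos(2 * k * z) for r, z in zip(reflectivity, depths))

# Depth profile (A-line) from the Fourier transform along the wavenumber axis.
a_line = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
dz = np.pi / (k[-1] - k[0])                   # physical depth per FFT bin
z_axis = np.arange(a_line.size) * dz          # peaks appear near 50, 120, 200 um
```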
We compare the snapshot volumetric imaging modalities in Table 3. The lateral resolutions of the direct-measurement techniques are all diffraction limited. For various reasons, the lateral resolutions of the computational techniques are worse than the diffraction limit. In 3D integral imaging, because depths are derived from a light field, the original compromise between lateral resolution and angular resolution in plenoptic imaging is inherited, resulting in a new trade-off between lateral resolution and depth resolution. In snapshot digital holography, because the complex wavefront is measured by a digital detector array, the lateral resolution is limited mainly by the camera’s finite pixel size and sampling rate, and by the finite extent of the camera face itself [194–196]. For 3D snapshot OCT, although the lateral resolution of the current proof-of-concept system is limited by the spatial sampling of its spectral imager, in theory this method can achieve diffraction-limited performance.
Table 3.
Comparative features of snapshot volumetric imaging modalities
| Modality | General strategy | Acquisition strategy for direct measurement | Reconstruction strategy for computational techniques | Lateral resolution | Depth resolution | Optical throughput |
|---|---|---|---|---|---|---|
| Division of optical path using beam splitters | Direct measurement | Optical-path-division | Not applicable | Diffraction limited | Determined by NA of the front optics | 1/Nz (Nz, number of depth layers) |
| Volume holographic imaging | Direct measurement | Optical-path-division | Not applicable | Diffraction limited | Determined by the trade-off between lateral FOV and depth resolution under broadband illumination [105]; this trade-off can be mitigated by employing rainbow illumination [141] | 100% |
| Distorted-grating-based methods | Direct measurement | Optical-path-division | Not applicable | Diffraction limited | Determined by NA of the front optics | 1/Nz (Nz, number of depth layers; assuming equal diffraction efficiency among different diffraction orders) |
| SLM-based methods | Direct measurement | Optical-path-division | Not applicable | Diffraction limited | Determined by NA of the front optics | 1/Nz (Nz, number of depth layers) |
| 3D integral imaging | Computation | Aperture-division | Direct image reconstruction | Limited by the trade-off between lateral resolution and depth resolution | Limited mainly by the NA of the front optics, and by the trade-off between lateral resolution and depth resolution | 100% for near-field and far-field integral imaging; ≤50% for mask-based methods (Table 2) |
| Single-shot digital holography | Computation | Not applicable | Direct image reconstruction | Limited by the finite extent of an FPA’s pixel, the sampling rate, and the finite extent of the camera face itself [194–196] | Limited by the finite extent of an FPA’s pixel, the sampling rate, and the finite extent of the camera face itself | 100% |
| 3D snapshot OCT | Computation | Image-division | Direct image reconstruction | Potentially diffraction limited; currently limited by the spatial sampling of a spectral imager [149] | Determined by the spectral bandwidth of the light source | 100% |
Akin to a conventional camera, the depth resolutions of modalities that divide the optical path using beam splitters, distorted gratings, and SLMs are mainly limited by the NA of the front optics. For 3D integral imaging, the depth resolution is limited by two factors—mainly by the NA of the front optics (a lack of parallax angles yields poor depth resolution), and also by the aforementioned trade-off between lateral resolution and depth resolution. For volume holographic imaging and 3D snapshot OCT, the depth resolutions are determined by the bandwidth of the illumination source, but in opposite ways: a broadband illumination source improves the depth resolution of 3D snapshot OCT, but degrades that of volume holographic imaging. In snapshot digital holography, the depth resolution is limited by the same factors that affect the lateral resolution—the camera’s finite pixel size, its sampling rate, and the finite extent of the camera face itself.
Measured by optical throughput, volume holographic imaging, 3D integral imaging, single-shot digital holography, and 3D snapshot OCT have an edge. By contrast, optical-path-division using beam splitters, distorted gratings, or SLMs maintains only 1/Nz (Nz, number of depth layers) of the optical throughput, because the amplitude of the wavefront is divided into Nz portions during depth remapping.
3.4 Snapshot temporal imaging (x, y, t)
To acquire an event datacube (x,y,t), conventional imaging devices measure the temporal information either at a spatial point using a device such as a photomultiplier tube, at a slit using a device such as a streak camera, or at a plane using a device such as a CCD or CMOS sensor. To acquire (x,y) spatial information, point or slit detectors rely on scanning, which limits the applicable objects because the event must be exactly repeated at each scanning position. By contrast, plane detectors can capture an (x,y) scene within a single snapshot. However, because the temporal resolution of plane detectors is provided by a mechanical or electrical shutter, the imaging speed is limited to 200 million fps [197]. Within the camera exposure, the incident photons accumulate on the detector, and their time-of-arrival information is therefore completely lost. Further increasing the frame rate of a plane detector is restricted by the data readout speed and on-chip storage capacity [198].
Snapshot temporal imaging, also called temporal super-resolution imaging, can temporally resolve a dynamic event within a single camera exposure and thus avoids the limitation imposed by the camera’s readout speed. Depending on the requirement for active illumination, snapshot temporal imaging generally follows one of two strategies. The first strategy utilizes active pulsed illumination to provide temporal resolution. Representative techniques include sequentially timed all-optical mapping photography (STAMP) [26] and frequency-domain streak tomography [199]. The second strategy is based on passive imaging and therefore does not need a specialized light source. Within this category, representative techniques are parallel streak imaging using a tilted lenslet array [200], temporal pixel multiplexing [201], compressed ultrafast photography [27], coded aperture compressive temporal imaging [202], programmable pixel compressive imaging [13], and smart pixel imaging with computational-imaging arrays [203, 204].
STAMP’s illumination system consists of a pulse stretcher and a pulse shaper (Fig. 18). The pulse stretcher temporally stretches an ultra-short optical probe pulse using a temporal disperser, such as a glass rod, a prism pair, or a fiber. The pulse shaper splits the resulting pulse into a series of discrete daughter pulses with different spectral wavelengths, followed by shining these pulses onto the sample as successive “flashes” for stroboscopic image acquisition. The temporal information of an event is thus encoded in the probe light’s spectrum, and the temporal resolution is determined by the duration of the corresponding daughter pulses. Based on their wavelengths, these daughter pulses are separated by a spatial mapping unit—a combination of a diffraction grating, a cylindrical mirror, and an array of periscopes. In the spatial mapping unit, the daughter pulses propagate over the same optical path length but exit at different heights. Thus the daughter pulses are directed towards different areas of an image sensor and can be simultaneously imaged in focus. By using STAMP, Nakagawa et al. have demonstrated an imaging speed of 4.4 trillion frames per second with 450 × 450 pixels resolution [26]. However, because of the difficulty of populating the periscope array, the temporal sequence depth of STAMP is currently limited to six frames, resulting in a very short observation time window (1.8 ps at 4.4 trillion fps).
Fig. 18.
Sequentially timed all-optical mapping photography (STAMP). Figure reprinted with permission from [26].
Frequency-domain tomography (FDT) is an interferometry-based ultrafast imaging technique [199] that shares the concept of frequency-domain holography [205] and frequency-domain streak photography [206], which were previously developed by the same research group. FDT generates multiple probe pulses in a cascaded four-wave mixing process and then illuminates the object (a 3 mm thick glass) with these pulses at five different incident angles (Fig. 19). The pump-laser-induced refractive index changes are imprinted onto probe pulses and appear as phase “streaks”. Finally, these probe pulses interfere with a reference pulse inside a spectrometer, creating a 2D spectral domain hologram on a CCD at the spectrometer’s detection plane. A tomographic movie of refractive index changes Δn(z,x,t) can be reconstructed at a selected y0 position using a conventional tomographic algorithm such as an algebraic reconstruction technique [207, 208]. Because the spectral information is traded for spatial information, FDT cannot reproduce colors. Additionally, this technique suffers from a shallow temporal sequence depth (five frames).
Fig. 19.
Schematic setup for frequency-domain streak tomography of evolving pump-laser-induced refractive index changes. Reprinted with permission from [199].
Although they acquire snapshots, both STAMP and FDT require active pulse illumination. They cannot image objects that are self-illuminated through processes such as fluorescence or bioluminescence. By contrast, passive snapshot temporal imaging methods [200, 202, 209] are receive-only and thus capable of imaging transient events without specialized illumination. More importantly, because temporal resolution is provided by the instrument itself, passive temporal imaging can potentially reproduce colors, resulting in a four dimensional (x,y,t,λ) datacube.
Depending on whether the (x,y,t) datacube is directly acquired or computationally estimated, passive snapshot temporal imaging is further divided into two sub-categories. Two representative direct-measurement techniques are parallel streak imaging using a tilted lenslet array [200] and temporal pixel multiplexing [201]. Akin to a plenoptic camera, parallel streak imaging uses a lenslet array to acquire multiple elemental images of the objects (Fig. 20). Because the lenslet array is tilted, the elemental images are located at different vertical positions. These elemental images are relayed to the entrance slit of a streak camera, a one-dimensional imaging device that can transform an event’s temporal information into spatial information along the vertical axis (y axis) [210]. The entrance slit of the streak camera samples the elemental images at different heights, thereby allowing parallel streak imaging of multiple spatial lines. This method advantageously enables 2D ultrafast imaging while maintaining the streak camera’s native temporal resolution and temporal sequence depth. However, because each elemental image occupies only a part of the streak camera’s entrance slit, this method trades off the number of spatial samplings along the y axis against that along the x axis. In addition, because a narrow entrance slit is required to maintain high temporal resolution in the streak camera, the light throughput is significantly sacrificed.
Fig. 20.
Optical setup of parallel streak imaging utilizing a lenslet array. Figure adapted with permission from [200].
Temporal pixel multiplexing utilizes a DMD as an active shutter to increase the frame rate of a low-speed camera without increasing bandwidth requirements [201]. As shown in Fig. 21a, an object is imaged either by a microscope or a camera lens (L3), and an intermediate image is formed on the DMD. The DMD’s micro-mirrors are organized into m exposure groups, each consisting of n micro-mirrors labeled with different tags (e.g., 1–4 in Fig. 21b). The micro-mirrors with the same tag in all exposure groups are turned “on” at the same time and stay at this position for a duration of t/n, where t is the camera’s single exposure time (Fig. 21c). This temporally-modulated image is relayed to the image plane and measured by a high resolution camera. Because the pixels of the DMD’s micro-mirrors and the camera are spatially registered, the temporal modulation introduced at the DMD’s micro-mirrors is transferred to the exposure modulation at the camera’s pixels. By reorganizing the captured image’s pixels (Fig. 21d), a high-speed image sequence can be recovered at a reduced spatial resolution. Additionally, because there is no spatiotemporal mixing at the camera, a full-resolution image can be simultaneously acquired at the camera’s native frame rate. The drawback of this approach is that the light throughput is sacrificed by a factor of n, posing challenges for low-light imaging applications. This concept has also been demonstrated in a similar implementation which utilizes a pinhole array to create exposure groups [211].
Fig. 21.
Temporal pixel multiplexing utilizing a digital micro-mirror device (DMD) as an active shutter. a. Optical setup. b. Exposure groups at the camera. c. Exposure time for different pixels in an exposure group. d. Reorganization of camera pixels dependent on the exposure time. Figure adapted with permission from [201].
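As a concrete illustration of the pixel-reorganization step in temporal pixel multiplexing, the sketch below demultiplexes one low-speed exposure into n sub-frames under two simplifying assumptions: every 2 × 2 exposure group follows the same tag layout, and each tagged pixel integrates over a disjoint 1/n slice of the exposure.

```python
import numpy as np

def demultiplex_exposure(frame, tag_layout):
    """Reorganize a temporally multiplexed frame into sub-frames.

    frame      : 2D image whose pixels were exposed at staggered times.
    tag_layout : small 2D array (e.g. 2x2) giving the temporal tag of each
                 pixel inside an exposure group (assumed layout).
    Returns an array of shape (n_tags, H/gh, W/gw): one low-resolution
    image per temporal tag.
    """
    gh, gw = tag_layout.shape
    n_tags = int(tag_layout.max()) + 1
    subframes = np.empty((n_tags, frame.shape[0] // gh, frame.shape[1] // gw))
    for i in range(gh):
        for j in range(gw):
            # Pixels sharing a tag are gathered into the same sub-frame.
            subframes[tag_layout[i, j]] = frame[i::gh, j::gw]
    return subframes

layout = np.array([[0, 1],
                   [2, 3]])                    # assumed tag order within a group
capture = np.random.rand(480, 640)             # one camera exposure
movie = demultiplex_exposure(capture, layout)  # 4 frames at 1/4 resolution
```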
Based on the concept of compressed sensing [212], compressed ultrafast photography (CUP) is a computational technique using iterative image reconstruction. CUP can transform a conventional one-dimensional streak camera into a two-dimensional snapshot temporal imaging device. As shown in Fig. 22, CUP first images an object through a camera lens and then relays the intermediate image to a spatial encoding device—a DMD, where a pseudo-random pattern is displayed. The light reflected from only the “on” micro-mirrors is collected by a microscope objective and reimaged on the entrance slit of a streak camera. Here the entrance slit is fully opened, allowing the formation of a 2D image on the streak camera’s photocathode. Inside the streak camera, this image is temporally sheared along the vertical axis by a varying voltage. At a given voltage ramp rate, the amount of shearing is determined by the incident photons’ time of arrival. The final image is measured by a CCD within a single exposure.
Fig. 22.
Optical setup of compressed ultrafast photography (CUP). DMD, digital micro-mirror device. Figure reprinted with permission from [27].
The CUP image formation process can be described by three operators which are successively applied to an event I(x, y, t):
E(m, n) = TSC I(x, y, t),        (2)
where E(m, n) is the light energy measured at pixel (m, n) on the CCD, C is the spatial encoding operator describing the function of the DMD, S is the temporal shearing operator describing the function of the streak camera, and T is the spatiotemporal integration operator describing the detection process at the CCD. The CUP image reconstruction process solves the inverse problem of Eq. 2. Provided the event is spatiotemporally sparse, the original event datacube can be reasonably estimated by adopting a two-step iterative shrinkage/thresholding (TwIST) algorithm, which minimizes the L2-norm difference between the measurement E and the measurement expected from the estimated datacube I [79].
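The three operators in Eq. 2 can be written out explicitly in a few lines. In the sketch below, C multiplies every temporal frame by a static pseudo-random mask, S shifts frame t by t rows (one row per frame is an assumed shearing rate), and T sums the sheared frames onto the detector; the array sizes and the random mask are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def cup_forward(event):
    """Apply E = T S C I to an event datacube of shape (T, Y, X)."""
    n_t, n_y, n_x = event.shape
    mask = rng.integers(0, 2, size=(n_y, n_x))     # C: static pseudo-random pattern
    encoded = event * mask                         # spatial encoding of every frame
    detector = np.zeros((n_y + n_t, n_x))          # extra rows leave room for shear
    for t in range(n_t):
        # S: shear frame t by t rows (assumed 1 row per frame);
        # T: integrate all sheared frames onto the same detector.
        detector[t:t + n_y, :] += encoded[t]
    return detector, mask

event = rng.random((50, 64, 64))                   # toy (t, y, x) datacube
measurement, code = cup_forward(event)
# Reconstruction would invert this model under a sparsity prior (e.g. TwIST).
```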
The CUP frame rate is determined by the temporal shearing velocity of the streak camera—a faster shearing velocity results in a higher frame rate. In this case, however, because photons are spread over more CCD pixels, the signal level per pixel is reduced, which may cause reconstruction artifacts when the incident light is not sufficiently strong. The size of the CUP-reconstructed datacube is 150 × 150 × 350 (x, y, t), which is limited by the acceptance NA of the collecting objective, photon shot noise, the sensitivity of the photocathode, and the number of binned CCD pixels. Additionally, because of the temporal shearing operation and the sparsity constraint, CUP’s spatial resolution is slightly anisotropic and degraded approximately two-fold from the diffraction limit.
Similar to CUP, coded aperture compressive temporal imaging (CACTI) [202] first spatially encodes the input image with a pseudo-random binary pattern by using an absorption mask, then relays the resultant image to a CCD where photons are spatiotemporally integrated. Different from CUP, however, CACTI mechanically translates the mask along the vertical axis with a piezo element, temporally shearing the mask image—rather than the encoded object image—at the detector plane. The image formation of CACTI thus can be described by
E(m, n) = T C(y − vt) I(x, y, t),        (3)
Here E(m, n) is the light energy measured at pixel (m, n) at the CCD, I(x, y, t) is the input event, C(y − vt) is the spatial encoding operator depicting the function and movement of the mask translated at velocity v, and T is the spatiotemporal integration operator depicting the detection process at the CCD. It is worth noting that in Eq. 3 only the operator C is time variant, because only the mask image is sheared in CACTI. By contrast, in CUP, both the mask and the object image are sheared (Eq. 2).
The image reconstruction process of CACTI solves the inverse problem of Eq. 3. Llull et al. adapted both a generalized alternating projection (GAP) algorithm [213] and the TwIST algorithm [79] to estimate the event datacube. Compared with TwIST, which performs best with a scene that can be considered sparse in the gradient domain, GAP requires no prior knowledge of the object and can use one of several bases, such as wavelets or the discrete cosine transform, to represent a sparse signal. However, in cases where TwIST can be applied, experimental results show that TwIST-reconstructed videos generally provide greater visual quality [202]. The frame rate of the reconstructed video is determined by the moving speed of the mask and the CCD’s pixel size. Currently, CACTI’s maximum imaging speed is approximately 4,500 fps.
The programmable pixel compressive camera (P2C2) is a computational imaging instrument relying on per-pixel modulation [13, 214]. As shown in Fig. 23, a liquid-crystal-on-silicon (LCOS) modulator encodes the input scene with a random binary pattern, and the resultant image is relayed to a CCD. The LCOS’s pixels are one-to-one mapped to the CCD’s pixels, acting as per-pixel shutters. Therefore, the light intensity measured at each CCD pixel is an integration of the incident light modulated by its own shutter. During acquisition, the LCOS’s pixels are modulated at a rate higher than the CCD’s frame rate. The image formation process can be described by
E(m, n) = T C(x, y, t) I(x, y, t),        (4)
where E (m, n) is the light energy measured at pixel (m, n) at the CCD, I (x, y, t) is the input event, C (x, y, t) is the time-varying spatial encoding operator depicting the LCOS’s modulation, and T is the spatiotemporal integration operator depicting the detection process at the CCD. Given the constraint of spatiotemporal sparsity, the inverse problem of Eq. 4 can be solved by using a compressed sensing algorithm based on fixed point continuation [215]. The spatial resolution of P2C2 is generally worse than the diffraction limit because of the spatio-temporal multiplexing at the CCD and sparsity constraint on the input scene during image reconstruction. On the other hand, the imaging speed of a P2C2 is fundamentally limited by the modulation rate of the LCOS. It is worth noting that, compared with the global-shutter coding architecture employed in a flutter shutter video camera [216], the per-pixel coding architecture leveraged in a P2C2 results in a less ill-conditioned measurement matrix and therefore higher reconstruction quality.
Fig. 23.
Programmable pixel compressive camera (P2C2). a. Optical setup. b. Photograph of system. LCOS, liquid crystal on silicon. Figure reprinted with permission from [13].
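For comparison with Eq. 2, the per-pixel coding of Eq. 4 can be sketched as follows: every detector pixel integrates the scene through its own time-varying binary code, and there is no spatial shearing. This sketch applies equally to an optical per-pixel code (P2C2-style) or a digital one (SPI-style); the code length, frame count, and random pattern are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def per_pixel_forward(event):
    """Apply E = T C(x, y, t) I for a per-pixel coded camera.

    event : datacube of shape (T, Y, X); each pixel (y, x) is modulated by
            its own binary sequence code[:, y, x] before temporal integration.
    """
    code = rng.integers(0, 2, size=event.shape)    # C(x, y, t): per-pixel shutters
    measurement = (code * event).sum(axis=0)       # T: integration over time
    return measurement, code

event = rng.random((16, 64, 64))                   # toy high-speed scene
E, C = per_pixel_forward(event)
# A compressed-sensing solver recovers the 16 frames from the single image E
# by exploiting spatiotemporal sparsity, as described in the text.
```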
CUP, CACTI, and P2C2 share a common thread in that the spatial encoding is accomplished by an optical architecture. By contrast, smart pixel imaging (SPI) with computational-imaging arrays [203, 204] transfers this encoding process to the digital domain by using a digital-pixel focal plane array, thereby minimizing the signal-to-noise loss caused by physical encoding elements, such as the DMD in CUP and P2C2, and the absorption mask in CACTI. In SPI, each detector pixel can be modulated by a time-varying, pseudo-random, and dual-binary signal (−1,1 or 1,0) at a rate up to 100 MHz. The image formation model using such a digital-pixel focal plane array can also be described by Eq. 4. However, in SPI the time-varying spatial encoding C (t) is introduced in the digital domain, rather than in the real image domain as in P2C2. Fernandez-Cull et al. demonstrated that by employing algorithms such as TwIST, the event datacube I (x,y,t) can be reasonably estimated [203].
To compare the reconstruction reliabilities of CUP, CACTI, P2C2, and SPI, we simulated the image formation processes based on Eqs. 2–4. We constructed the input event with a spinning “Siemens star” under continuous-wave illumination, rotating 10 degrees within time interval Δ (Fig. 24). We simulated two illumination conditions: in case 1, the illumination is turned on at t = 0 and turned off at t = 10Δ; in case 2, the illumination is turned on at t = 0 but turned off at t = 500Δ. Given a 1/Δ reconstructed frame rate, the targeted movie sequence depths are 10 and 500 frames for cases 1 and 2, respectively. By using the TwIST algorithm, we reconstructed the corresponding movies with these two sequence depths and show representative time-resolved images in Fig. 24a and 24b, respectively. In Fig. 24a, the CUP-, CACTI-, P2C2-, and SPI-reconstructed images have similar reconstruction quality. However, in Fig. 24b, CUP performs better than the other modalities because only in CUP is the encoded image itself sheared. This reduces the spatiotemporal crosstalk per CCD pixel and therefore eases the solution of the inverse problem. The advantage becomes significant when reconstructing a movie with a substantial number of frames, as shown in simulation case 2.
Fig. 24.
Comparison of reconstruction reliability in CUP, CACTI, P2C2, and SPI. The reconstructed movie sequence depth is 10 and 500 frames for a and b, respectively.
We compare snapshot temporal imaging modalities in Table 4. The spatial resolutions of STAMP and parallel streak imaging using a tilted lenslet array are diffraction limited. Akin to Fourier-domain OCT, the spatial resolutions in FDT are limited by different factors along the two axes. Along the axis into the sample (z), the resolution is determined by the probe pulse’s spectral bandwidth—a broader bandwidth leads to a higher spatial resolution. By contrast, along the transverse axis (x), the resolution is diffraction limited. In temporal pixel multiplexing, because the FPA is divided into exposure groups, the spatial resolution is worse than the diffraction limit and determined by the size of each exposure group. For the compressed-sensing-based techniques—CUP, CACTI, P2C2, and SPI—because of the introduced spatial-temporal multiplexing at the FPA and the requirement of the input scene’s sparsity in a specific domain, the spatial resolutions are object-dependent and generally worse than the diffraction limit.
Table 4.
Comparative features of snapshot temporal imaging modalities
| Modality | General strategy | Require active illumination? | Acquisition strategy | Reconstruction strategy | Spatial resolution | Temporal resolution | Optical throughput |
|---|---|---|---|---|---|---|---|
| Sequentially timed all-optical mapping photography (STAMP) | Direct measurement | Yes | Optical-path-division | Not applicable | Diffraction limited | Determined by illumination daughter pulses’ duration | 100% |
| Frequency-domain tomography (FDT) | Computation | Yes | Frequency-domain-division | Direct image reconstruction | Along the axis (z) into the sample, limited by the probe pulse’s spectral bandwidth; along the transverse axis (x), limited by diffraction | Limited to temporal changes occurring over propagation distances larger than the object’s dimensions | 100% |
| Parallel streak imaging using a tilted lenslet array | Direct measurement | No | Aperture-division | Not applicable | Diffraction limited | Limited by the streak camera’s native temporal resolution | 1/Ny (Ny, number of spatial samplings along the axis perpendicular to the streak camera’s entrance slit) |
| Temporal pixel multiplexing | Direct measurement | No | Image-division | Not applicable | Limited by the size of an exposure group | Limited by the refresh rate of the DMD | 1/Nt (Nt, number of pixels in an exposure group) |
| Compressed ultrafast photography (CUP) | Computation | No | Optical-path-division | Iterative image reconstruction | Object dependent; generally worse than the diffraction limit because of spatio-temporal multiplexing at the FPA and the sparsity constraint on the input scene | Object dependent; generally degrades from a streak camera’s native temporal resolution because of spatio-temporal multiplexing at the FPA and the sparsity constraint on the input scene | 12.5% |
| Coded aperture compressive temporal imaging (CACTI) | Computation | No | Not applicable | Iterative image reconstruction | Object dependent; generally worse than the diffraction limit because of spatio-temporal multiplexing at the FPA and the sparsity constraint on the input scene | Object dependent; generally degrades from the ideal temporal resolution d/v (d, camera’s binned pixel size; v, sweeping velocity of the piezo) because of spatio-temporal multiplexing at the FPA and the sparsity constraint on the input scene | 50% |
| Programmable pixel compressive camera (P2C2) & smart pixel imaging (SPI) with computational-imaging arrays | Computation | No | Not applicable | Iterative image reconstruction | Object dependent; generally worse than the diffraction limit because of spatio-temporal multiplexing at the FPA and the sparsity constraint on the input scene | Object dependent; generally degrades from the ideal temporal resolution 1/f (f, per-pixel modulation refresh rate) because of spatio-temporal multiplexing at the FPA and the sparsity constraint on the input scene | 50% |
Due to its reliance on active illumination, the temporal resolution of STAMP is determined by the illumination daughter pulses’ duration and can be varied by adjusting the settings of a temporal mapping unit. In FDT, the temporal PSF is determined by the object’s size. Therefore, it can temporally resolve only those changes occurring over propagation distances larger than the object’s dimensions. Additionally, the number of resolvable temporal frames is limited by the object’s size relative to the length of the reconstructed phase streak. Parallel streak imaging using a tilted lenslet array and temporal pixel multiplexing maintain the same temporal resolutions as their temporal modulation devices, limited by the streak camera’s native temporal resolution and the DMD’s refresh rate, respectively. For CUP, CACTI, P2C2, and SPI, similar to their spatial resolutions, their temporal resolutions are also object-dependent and degraded from ideal cases by spatio-temporal multiplexing at the FPA and sparsity constraints.
Measured by optical throughput, STAMP and FDT top the class, maintaining 100% light throughput. CACTI, P2C2, and SPI have 50% optical throughput because they employ an absorption mask or a DMD as the spatial encoding element. CUP loses light at both the beam splitter and the spatial encoding DMD, and currently has 12.5% light throughput. The optical throughput of parallel streak imaging using a tilted lenslet array is inversely proportional to the number of spatial samplings along the axis perpendicular to the streak camera’s entrance slit; the throughput of temporal pixel multiplexing is inversely proportional to the number of pixels in an exposure group at the FPA. Therefore, these two methods are not suitable for applications that require high spatial resolution.
3.5 Snapshot polarization imaging (x, y, ψ, χ) and spectropolarimetric imaging (x, y, λ, ψ, χ)
The polarization state of a single monochromatic wave of unit amplitude can be fully described by the polarization orientation angle, ψ, and the ellipticity angle, χ. In practice, it is more useful to describe the state of polarization using a Stokes vector, particularly when the light is incoherent or partially polarized. The Stokes vector consists of four parameters, which are related to ψ and χ by
S = [S0, S1, S2, S3]^T = I [1, p cos2ψ cos2χ, p sin2ψ cos2χ, p sin2χ]^T,        (5)
where I is the light intensity, and p is the degree of polarization. To image all four Stokes parameters, a series of measurements is normally required using a combination of different retarders and linear polarizers [217]. However, this time-sequential acquisition mode is not suitable for imaging dynamic scenes.
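As a quick worked example of Eq. 5, the snippet below builds the Stokes vector for an assumed, partially polarized beam and then recovers p, ψ, and χ from it; the numerical values are arbitrary.

```python
import numpy as np

def stokes_from_angles(intensity, p, psi, chi):
    """Stokes vector from orientation psi and ellipticity chi (Eq. 5)."""
    return intensity * np.array([
        1.0,
        p * np.cos(2 * psi) * np.cos(2 * chi),
        p * np.sin(2 * psi) * np.cos(2 * chi),
        p * np.sin(2 * chi),
    ])

S = stokes_from_angles(intensity=1.0, p=0.8,
                       psi=np.deg2rad(30), chi=np.deg2rad(10))

# Inverting the parameterization:
p_rec   = np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0]   # degree of polarization
psi_rec = 0.5 * np.arctan2(S[2], S[1])                  # orientation angle
chi_rec = 0.5 * np.arcsin(S[3] / (p_rec * S[0]))        # ellipticity angle
print(p_rec, np.rad2deg(psi_rec), np.rad2deg(chi_rec))  # ~0.8, ~30.0, ~10.0
```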
To achieve snapshot polarization imaging, a variety of strategies have been proposed [22]. In particular, if circular polarization is not expected from the scene (that is, S3 = 0), the parallel measurement of S0, S1, and S2 becomes much simpler. Representative techniques in this category include imaging polarimetry using a wedged double Wollaston prism [218], imaging polarimetry using a polarization filter array [219, 220], and imaging polarimetry using a light field architecture [221].
Imaging polarimetry using a wedged double Wollaston prism [218] is an aperture-division direct measurement technique. By inserting a combination of two Wollaston prisms at the aperture stop, this approach can simultaneously measure polarized light components at angles of 0°, 45°, 90°, and 135° (Fig. 25). The first three Stokes parameters can be determined from the data by
S0 = I(0°) + I(90°),  S1 = I(0°) − I(90°),  S2 = I(45°) − I(135°),        (6)
where I (0°), I (45°), I (90°), and I (135°) are the light intensities measured at the corresponding polarization angles. Because no polarization filters are used, this approach advantageously attains full optical throughput. However, because polarization separation by a Wollaston prism is sensitive to wavelength, a narrow band optical filter is required to filter the incident light, reducing the SNR and thereby resulting in a longer camera exposure. Mu et al. later improved this technique by further dividing the aperture stop and adding another Wollaston prism and a quarter wave plate to enable circular polarization measurement [222]. Mu also proposed a variant of this approach by replacing the Wollaston prism with a combination of four-quadrant retarders, a uniform polarizer, and a pyramid prism [222].
Fig. 25.
Imaging polarimetry using a wedged double Wollaston prism. Figure reprinted with permission from [217].
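Eq. 6 reduces to simple pixel-wise arithmetic on the four simultaneously acquired sub-images. A minimal sketch, assuming the four channel images are already co-registered, is shown below; the derived degree and angle of linear polarization are included as a usage note.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Pixel-wise linear Stokes images from four polarization channels (Eq. 6)."""
    s0 = i0 + i90                    # total intensity (equivalently i45 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)   # degree of linear pol.
    aolp = 0.5 * np.arctan2(s2, s1)                          # angle of linear pol.
    return s0, s1, s2, dolp, aolp

# Four co-registered sub-images from the aperture-divided measurement (stand-ins).
imgs = [np.random.rand(256, 256) for _ in range(4)]
s0, s1, s2, dolp, aolp = linear_stokes(*imgs)
```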
Similarly, imaging polarimetry using a pixel-matched polarizer array [219, 220, 223] is an image-division direct measurement technique. The concept was first proposed by Chun et al. in 1994 [224]. With advances in microlithography, it is now possible to fabricate micro-polarizers with sub-wavelength periodic structures. Snapshot imaging polarimetry can be realized by directly placing an array of such polarization filters just in front of an FPA. Due to the difficulty of fabricating sub-wavelength structures, previous studies were confined to the infrared region [223, 224]. Recently, this technology has been extended to the visible range, owing to the rapid progress in nanofabrication techniques. Gruev et al. built a polarization camera working in the visible range by covering a high-resolution CCD with pixel-matched nanowire optical filters [219, 220]. The nanowire optical filter array was fabricated by photolithography and has a period of 140 nm. On the CCD, each superpixel consists of 2 × 2 camera pixels covered by nanowire filters with four different orientations offset by 45°, simultaneously measuring four linear polarization components. By using this camera, Gruev et al. achieved an SNR of 45 dB at 40 fps [219]. A similar implementation was demonstrated by Neal Brock at almost the same time [225]. However, these polarization imagers suffer from a common drawback of low extinction ratios (~44 at 515 nm in [219], ~50 at 550 nm in [225]), compared with conventional prism-based polarizers (~10^5).
Imaging polarimetry using a light field architecture [221] is an aperture-division direct-measurement technique. This approach inserts an array of polarization filters at the aperture of a light field camera. At the detector plane, each sub-pupil image consists of light rays of different polarization states. By reorganizing the image pixels, the images associated with different polarization orientations can be reconstructed. Compared with polarimeters using a wedged double Wollaston prism or a pixel-matched polarizer array, this method can be used for broadband light input, and it is much easier to implement. However, because the different polarization images come from different views, this method suffers from the parallax effect commonly seen in multi-view imaging. Traditionally, this method is used to measure only the linear polarization components. It is noteworthy that recent works have implemented this light field architecture to measure all four Stokes parameters by inserting a combination of four-quadrant retarders and polarizers at the aperture stop of the main lens [226, 227].
Compared with the aforementioned incomplete polarization imagers, a complete polarization imager—an instrument that can simultaneously measure all four Stokes parameters—normally requires a more complicated system setup. A common strategy is to encode different states of polarization (SOP) with different spatial carrier frequencies by using an interferometric setup. Oka et al. demonstrated an implementation using a set of birefringent wedge prisms placed just in front of a CCD [66], as shown in Fig. 26. The polarimetric device consists of four wedged birefringent prisms and an analyzer. The fast axes of the four prisms are oriented at 0°, 90°, 45°, and −45° with respect to the x axis, respectively, and the transmission axis of the analyzer is along the x axis. Using Mueller matrix calculus, we can derive the analytical expression for the intensity pattern formed at the CCD:
I(x, y) = (1/2)S0(x, y) + (1/2)S1(x, y)cos[2π(2U)x] + (1/4)|S23(x, y)|cos[2πU(x + y) + arg S23(x, y)] − (1/4)|S23(x, y)|cos[2πU(x − y) − arg S23(x, y)],        (7)
Here S23(x, y) = S2(x, y) + iS3(x, y), and U = 2B tan α /λ, where B denotes the birefringence of the prism and α is the inclination angle of the plane of contact between the prisms. Equation 7 implies that the interferogram consists of a low-frequency component and three quasi-cosinusoidal components that appear as fringe patterns. Because these fringes have different carrier frequencies (U and 2U), they serve as carriers and shift the corresponding Stokes parameters in the spatial frequency domain. By properly selecting the inclination angle of the wedge prisms, these Stokes-parameter channels can be satisfactorily separated and recovered by using a standard Fourier transform technique [228]. Despite its compactness, this implementation suffers from the common drawback of requiring a monochromatic wave, as also seen in other birefringence-based imaging polarimeters.
Fig. 26.
Schematic of a device using four wedged birefringent prisms and an analyzer to perform complete imaging polarimetry. PR, prism pair. Figure reprinted with permission from [67].
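The channel-splitting step described around Eq. 7 amounts to isolating each carrier in the 2D Fourier transform of the interferogram, shifting it to baseband, and inverse-transforming. The sketch below demodulates one such channel; the carrier position and window size are placeholders rather than values for a particular prism design.

```python
import numpy as np

def demodulate_channel(interferogram, carrier, half_width):
    """Extract one modulated Stokes channel from a channeled interferogram.

    carrier    : (fy, fx) location of the channel in FFT pixels from DC.
    half_width : half-size of the rectangular window around the carrier.
    Returns the complex baseband channel; its amplitude and phase carry the
    encoded Stokes information (e.g. |S23| and arg S23).
    """
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    ny, nx = interferogram.shape
    cy, cx = ny // 2 + carrier[0], nx // 2 + carrier[1]
    window = np.zeros_like(F)
    window[cy - half_width:cy + half_width, cx - half_width:cx + half_width] = 1.0
    shifted = np.roll(F * window, (-carrier[0], -carrier[1]), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(shifted))

fringe_image = np.random.rand(512, 512)        # stand-in channeled interferogram
s23_channel = demodulate_channel(fringe_image, carrier=(40, 40), half_width=20)
```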
Based on a similar principle, Oka et al. invented an alternative method by replacing the birefringent wedge prisms at the image plane with Savart plates (SP) at the pupil plane [229]. The optical setup, shown in Fig. 27, consists of a first lens, L1; a first Savart plate, SP1; a half-wave plate, HWP; a second Savart plate, SP2; an analyzer; and a second lens, L2. A Savart plate consists of two uniaxial birefringent crystals of equal thickness. The optic axis of each crystal is at 45° to the surface normal and is rotated 90° with respect to the other crystal. After passing through the first crystal, the incident light is divided into an ordinary (o) beam and an extraordinary (e) beam, and a lateral displacement, d, is introduced only for the e beam (Fig. 27a). Upon entering the second crystal, the o beam from the first crystal becomes the e beam and experiences a vertical displacement. Therefore, after the first SP, the incident light is split into two parallel beams whose polarizations are orthogonal and which are separated by a distance of √2d (Fig. 27b). The HWP then rotates the polarization coordinates by 45°. The second SP further splits the input light into four beams, and the analyzer extracts the polarization components along the x axis. The four beams interfere with each other, forming fringes at a camera (Fig. 27c). The light intensity distribution in the interferogram can also be expressed by Eq. 7. By using a similar image reconstruction method [66], the Stokes parameters can be calculated from this measured interferogram.
Fig. 27.
Snapshot imaging polarimetry using Savart plates. a. Uniaxial birefringent crystal in a Savart plate. b. Savart plate. c. Optical setup. HWP, half-wave plate; L1, L2, lenses; SP1, SP2, Savart plates; A, analyzer. Figure reprinted with permission from [228].
In Oka’s original design [229], the system is built upon a 4f imaging system, where polarization-dependent shearing occurs in the spatial frequency domain. Luo et al. demonstrated that this design can be further simplified by removing the first lens L1, thereby making the system more compact [230]. Later, the same group coupled this modified imaging polarimeter to a fundus camera, and demonstrated an application in retinal imaging [231]. Despite its compactness and snapshot capability, imaging polarimeters using SPs are limited by their reliance on interference effects. Because the visibility of the interference fringes is inversely proportional to the incident light’s spectral bandwidth, forming an interference fringe with high contrast requires a narrow spectral bandwidth input, a constraint that significantly decreases the SNR. Additionally, the limited availability of large birefringent crystals, particularly in the infrared region, limits further development of this technology.
The above two types of complete imaging polarimeters [66, 229] require monochromatic light input, limiting their applications in imaging natural scenes which normally reflect or emit broadband spectra. To remove this restriction, Kudenov et al. developed a white-light channeled imaging polarimeter [232]. Based on Oka’s design [229], Kudenov replaced each Savart plate with a pair of polarization gratings [233, 234], each of which acts as a polarization angular beam splitter (Fig. 28). However, rather than splitting the incident light into two linear eigen-polarizations as a Wollaston prism does, a polarization grating separates light into two circular eigen-polarization components [235]. The advantage of using polarization gratings is that in the final interferogram the carrier’s frequencies are independent of wavelength, thereby allowing the Stokes parameters to be encoded into spectrally broadband interference fringes. Based on this technology, Kudenov recently built a snapshot imaging Mueller matrix polarizer by adding an illumination module which is essentially a mirror-reflection version of detection polarization optics, as shown in Fig. 28 [236].
Fig. 28.
Optical setup of a white-light channeled imaging polarimeter. PG1, PG2, PG3, PG4, polarization grating; QWP, quarter-wave plate. Figure reprinted with permission from [231].
We compare the snapshot polarization imaging modalities in Table 5. The spatial resolutions of imaging polarimetry using a wedged double Wollaston prism, and of channeled imaging polarimetry using birefringent wedge prisms, Savart plates, and polarization gratings are all diffraction limited. By contrast, because imaging polarimeters using a pixel-matched polarizer array or a light field architecture are both based on the image-division strategy, their spatial resolutions are limited by the dimensions of a superpixel at the FPA and a lenslet, respectively.
Table 5.
Comparative features of snapshot polarization imaging modalities
| Modality | General strategy | Acquisition strategy | Reconstruction strategy | Spatial resolution | Measured Stokes parameters (completeness of polarization measurement) | Input scene’s spectral bandwidth requirement | Optical throughput |
|---|---|---|---|---|---|---|---|
| Imaging polarimetry using a wedged double Wollaston prism | Direct measurement | Aperture-division | Not applicable | Diffraction limited | S0, S1, and S2 in [218]; S0, S1, S2, and S3 in [222] | Must be narrow band | 100% |
| Imaging polarimetry using a pixel-matched polarizer array | Direct measurement | Image-division | Not applicable | Limited by the size of a superpixel (2×2 camera pixels covered by nanowire filters) | S0, S1, and S2 | Must be narrow band | 50% |
| Imaging polarimetry using a light field architecture | Direct measurement | Aperture-division | Not applicable | Limited by the size of a lenslet | S0, S1, and S2 in [221]; S0, S1, S2, and S3 in [226, 227] | Broad band allowable | 50% |
| Complete channeled imaging polarimetry using birefringent wedge prisms | Computation | Frequency-domain-division | Direct image reconstruction | Diffraction limited | S0, S1, S2, and S3 | Must be narrow band | 50% |
| Complete channeled imaging polarimetry using Savart plates | Computation | Frequency-domain-division | Direct image reconstruction | Diffraction limited | S0, S1, S2, and S3 | Must be narrow band | 50% |
| Complete channeled imaging polarimetry using polarization gratings | Computation | Frequency-domain-division | Direct image reconstruction | Diffraction limited | S0, S1, S2, and S3 | Broad band allowable | 50% |
All of the aforementioned snapshot polarization imagers can measure the linear Stokes components (S0, S1, and S2). The three variants of channeled imaging polarimetry, as well as recent versions of imaging polarimetry using a wedged double Wollaston prism or a light field architecture, can measure the complete set of Stokes components (S0, S1, S2, and S3). However, only imaging polarimetry using a light field architecture and complete channeled imaging polarimetry using polarization gratings allow broadband spectral input. The other modalities work only when the input scene is monochromatic or narrowband, because birefringent materials are sensitive to wavelength.
Among all these snapshot polarization imaging modalities, only that using a wedged double Wollaston prism maintains 100% optical throughput. All of the others lose 50% of the light because they use analyzers either to select the linear polarization components or to force interference at the FPA.
A snapshot imaging spectropolarimeter [237] can simultaneously measure a 3D spatiospectral (x,y,λ) datacube for each of the Stokes parameters. Conventionally, such a measurement requires scanning in specific domains, such as the spatial domain in channeled spectropolarimetry [238], the OPD in Fourier transform imaging spectropolarimetry [239, 240], or polarization in Stokes imaging spectropolarimetry [241]. Recent efforts to eliminate the scanning requirement include combining channeled spectropolarimetry with CTIS, also referred to as computed tomographic imaging channeled spectropolarimetry (CTICS) [242–244], combining channeled spectropolarimetry with IMS [245], combining integral field spectrometry with division-of-aperture imaging polarimetry [222], and utilizing polarization gratings [246, 247].
The marriage between channeled spectropolarimetry and snapshot spectral imagers, such as CTIS and IMS, becomes possible because both CTIS and IMS are insensitive to the incident light’s polarization. In other words, the spectral reconstruction and polarization reconstruction can be carried out independently. Compared with a standard CTIS setup, the CTICS implementation [242–244] adds three additional polarization elements—two retarders and a polarizer—at the aperture stop. These additional polarization elements introduce spectral modulation in each of CTIS’s diffraction orders. The reconstruction process has two steps. The first step is spectral reconstruction, using the same tomographic technique as in CTIS. The second step is polarization reconstruction, using the Fourier transform technique (as in channeled spectropolarimetry) across the recovered spectra at each spatial location from step one. In combining channeled spectropolarimetry with IMS [245], the polarization modulation module and spectral imaging module also work independently. However, different from CTICS, only one reconstruction process, namely polarization reconstruction, is required because the spectrum can be directly measured by IMS. Therefore, the combination of channeled spectropolarimetry with IMS is expected to yield better image quality than CTICS, although an experimental demonstration is still absent.
In [222], Mu proposed a snapshot imaging spectropolarimetry design that combines integral field spectrometry with aperture-division imaging polarimetry. The basic idea is to reformat the input 2D FOV into a 1D array by using an integral field unit, such as a coherent fiber bundle [33], followed by polarization separation using a polarization array at the aperture and spectral separation using a prism. This strategy is conceptually similar to combining channeled spectropolarimetry with IMS, as previously mentioned. However, no computational reconstruction is involved because both the spectrum and the polarization are directly measured. Therefore, this method can potentially avoid reconstruction artifacts and high computational cost. A disadvantage is the loss of optical throughput due to the introduction of polarization filters.
In [246, 247], Kim et al. demonstrated a snapshot polarization grating imaging spectropolarimeter (PGIS). As previously mentioned, given a monochromatic wave input, a polarization grating produces three diffraction orders—the polarization-independent zeroth order and two polarization-sensitive first orders [235]. In cases where the incident light is chromatic, a polarization grating separates the different polarizations as well as the wavelengths, thereby allowing simultaneous measurement of polarization and the spectrum with a single instrument. PGIS employs a quarter-wave plate sandwiched between two orthogonally arranged polarization gratings as a unified polarization and spectral dispersion unit, and it places this unit at the aperture stop, projecting complete polarization and spectral information onto 2D dispersion patterns at an FPA. Image reconstruction can be done by applying an iterative tomographic algorithm similar to that in CTIS. Because the polarization components are directly measured, PGIS requires less post-computation than CTICS. However, because only three diffraction orders can be generated using a single polarization grating, achieving a spatiospectral response similar to that of CTICS (that is, generating a similar number of spatiospectral projections at an FPA) normally requires a stack of polarization gratings and wave plates, which may lead to a bulky setup.
4. Discussion and outlook
In this review, we categorized snapshot multidimensional imagers based on their acquisition strategies and reconstruction strategies, and we discussed their state-of-the-art implementations in spectral imaging, plenoptic imaging, volumetric imaging, temporal imaging, and polarization imaging. Compared with their scanning-based counterparts, snapshot imagers have a remarkable advantage in optical throughput. The more datacube dimensions a snapshot imager measures, the greater the advantage in comparison to alternative scanning-based methods.
As previously mentioned, a snapshot imager can capture an entire set of photon tags only if its measurement does not sacrifice one acquisition for another. A current state-of-the-art snapshot imager, such as the spectropolarimetric imager discussed in Section 3.5, can capture up to five photon tags. Because satisfying this “no-sacrifice” principle becomes increasingly difficult as more dimensions are added, we are still a fair distance away from an ultimate snapshot imager that can acquire all nine photon tags in parallel.
The dimensions of a datacube are fundamentally limited by the number of pixels on the FPA. For techniques using direct measurement strategies, under the Nyquist sampling condition the maximum number of resolvable datacube voxels that can be measured in a single camera snapshot is limited to MxMy/4, where Mx and My are the numbers of FPA pixels along the x and y axes, respectively. For example, when using a 50-megapixel CCD sensor [4], an IMS can measure a datacube with up to 500 × 500 × 50 (x,y,λ) resolvable voxels in a spatial-resolution-priority mode or 250 × 250 × 200 (x,y,λ) in a spectral-resolution-priority mode. When measuring a higher-dimensional datacube, this trade-off among resolutions along different dimensions becomes more severe because the FPA pixels must be divided among more photon-characteristic bins.
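A small numerical sketch of this voxel budget is given below; the 8176 × 6132 pixel layout is assumed only to represent a roughly 50-megapixel FPA, and the helper function fits is hypothetical.

```python
# Snapshot voxel budget for direct-measurement imagers at the Nyquist condition.
Mx, My = 8176, 6132                  # assumed layout of a ~50-megapixel FPA
budget = (Mx * My) // 4              # maximum number of resolvable voxels per snapshot

def fits(nx, ny, n_lambda):
    """True if an (nx, ny, n_lambda) datacube fits within the snapshot budget."""
    return nx * ny * n_lambda <= budget

print(budget)                        # ~12.5 million voxels
print(fits(500, 500, 50))            # spatial-resolution-priority split -> True
print(fits(250, 250, 200))           # spectral-resolution-priority split -> True
print(fits(1000, 1000, 50))          # exceeds the budget -> False
```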
To go beyond this limit, an effective approach is to integrate compressed sensing (CS) into the multidimensional imaging framework [248, 249]. Techniques that have taken advantage of CS include CASSI (discussed in Section 3.1), compressed integral imaging (discussed in Section 3.3), CUP, P2C2, CACTI, and SPI (discussed in Section 3.4). While CASSI and compressed integral imaging utilize CS to overcome the spatial bandwidth limitation of an FPA, the latter four techniques leverage the same framework to circumvent the FPA’s temporal bandwidth limit. However, CS-based imaging relies heavily on the signal’s sparsity in certain domains, which places a practical restriction on the objects to which it can be applied.
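As a toy example of the CS recovery that these techniques rely on, the sketch below reconstructs a sparse signal from underdetermined measurements with plain iterative shrinkage-thresholding (ISTA); it is not the TwIST solver used in the cited systems, and the Gaussian sensing matrix and identity sparsity basis are simplifying assumptions.

```python
# Toy compressed-sensing recovery of a sparse signal with ISTA.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                           # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                 # underdetermined measurements (m < n)

lam = 0.01                                     # sparsity (l1) weight
step = 1.0 / np.linalg.norm(A, 2) ** 2         # step size from the spectral norm of A

x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))         # gradient step on the data-fidelity term
    x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft thresholding

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```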
Noise in multidimensional optical imaging arises mainly from two sources: photon noise and detector noise. In cases where detector noise dominates, multiplexing-based snapshot imagers have an edge over their direct-measurement counterparts in achieving a higher signal to noise ratio (SNR), similar to the Fellgett advantage [250] in Fourier transform spectrometry. Techniques that exploit this advantage include MSI (discussed in Section 3.1), dappled photography (discussed in Section 3.2), FDT (discussed in Section 3.4), and complete channeled imaging polarimetry using birefringent wedge prisms, Savart plates, and polarization gratings (discussed in Section 3.5). However, with the ongoing development of FPA technology, detector noise has been steadily reduced to a level that is negligible compared with photon noise from the ultraviolet to the mid-wave infrared, and the multiplexing advantage has accordingly become less important.
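The following Monte Carlo sketch illustrates the multiplex (Fellgett) advantage in the detector-noise-limited regime, assuming an idealized ±1 Hadamard encoding and additive Gaussian read noise; real multiplexing imagers use physically realizable encodings and also contend with photon noise, where the advantage largely disappears.

```python
# Monte Carlo comparison of direct vs. Hadamard-multiplexed measurement
# when additive detector (read) noise dominates.
import numpy as np

rng = np.random.default_rng(0)

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 64                                # number of spectral channels
x = rng.uniform(1.0, 2.0, n)          # true spectrum
sigma = 0.5                           # read noise per detector measurement
H = hadamard(n)

err_direct, err_mux = [], []
for _ in range(2000):
    x_direct = x + sigma * rng.standard_normal(n)   # one noisy reading per channel
    y = H @ x + sigma * rng.standard_normal(n)      # n noisy multiplexed readings
    x_mux = (H.T @ y) / n                           # H^T H = n I, so this inverts H
    err_direct.append(np.std(x_direct - x))
    err_mux.append(np.std(x_mux - x))

print("direct rms error     :", np.mean(err_direct))   # ~ sigma
print("multiplexed rms error:", np.mean(err_mux))       # ~ sigma / sqrt(n)
```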
The focus of snapshot multidimensional imager development has gradually shifted from technology toward applications. Besides the imagers’ traditional employment in remote sensing, applications in biomedical imaging have emerged prominently in recent years. For example, the snapshot spectral imager IMS has been demonstrated for both live-cell imaging [46] and in vivo tissue imaging [50], providing unprecedented detail about the spectral signatures of both exogenous and endogenous chromophores. In another example, 3D integral imaging has recently been employed in both microscopy [95] and otoscopy [61], enabling the first real-time 3D imaging of neurons and eardrums, respectively. Because the illumination dose in in vivo or live-cell imaging is normally restricted by safety [23] or phototoxicity concerns [251], the throughput advantage that snapshot imagers offer becomes even more valuable in biomedical applications.
Acknowledgments
The authors thank Professor James Ballard for close reading of the manuscript. This work was supported in part by National Institutes of Health grants DP1 EB016986 (NIH Director’s Pioneer Award) and R01 CA186567 (NIH Director’s Transformative Research Award). L. V. W. has a financial interest in Microphotoacoustics, Inc. and Endra, Inc., which, however, did not support this work.
Appendix
List of acronyms (alphabetical order)
Acronym | Full name |
---|---|
CACTI | Coded aperture compressive temporal imaging |
CASSI | Coded aperture snapshot spectral imaging |
CCD | Charge-coupled device |
CMOS | Complementary metal–oxide–semiconductor |
CS | Compressed sensing |
CTICS | Computed tomographic imaging channeled spectropolarimeter |
CTIS | Computed tomography imaging spectrometry |
CUP | Compressed ultrafast photography |
DMD | Digital micro-mirror device |
EI | Elemental image |
FDT | Fourier domain tomography |
FOV | Field of view |
FPA | Focal plane array |
IMS | Image mapping spectrometry |
IRIS | Image-replicating imaging spectrometry |
IS-FB | Imaging spectrometry using a fiber bundle |
IS-FS | Imaging spectrometry using a filter stack |
IS-LFA | Imaging spectrometry using a light field architecture |
MLA | Microlens array |
MSI | Multispectral Sagnac interferometry |
NA | Numerical aperture |
OCT | Optical coherence tomography |
OPD | Optical path difference |
P2C2 | Programmable pixel compressive camera |
PGIS | Polarization grating imaging spectropolarimeter |
SHIFT | Snapshot hyperspectral imaging Fourier transform spectrometry |
SLM | Spatial light modulator |
SNR | Signal to noise ratio |
SPI | Smart pixel imaging |
STAMP | Sequentially timed all-optical mapping photography |
TwIST | Two-step iterative shrinkage/thresholding algorithm |
VHI | Volume holographic imaging |
References
- 1.Sinclair MB, Haaland DM, Timlin JA, Jones HDT. Hyperspectral confocal microscope. Appl. Opt. 2006;45:6283–6291. doi: 10.1364/ao.45.006283. [DOI] [PubMed] [Google Scholar]
- 2.Sinclair MB, Timlin JA, Haaland DM, Werner-Washburne M. Design, construction, characterization, and application of a hyperspectral microarray scanner. Appl Optics. 2004;43:2079–2088. doi: 10.1364/ao.43.002079. [DOI] [PubMed] [Google Scholar]
- 3.Kasili PM, Vo-Dinh T. Hyperspectral imaging system using acousto-optic tunable filter for flow cytometry applications. Cytom Part A. 2006;69A:835–841. doi: 10.1002/cyto.a.20307. [DOI] [PubMed] [Google Scholar]
- 4.Kodak. KAF-50100 IMAGE SENSOR. Kodak Products. 2012 [Google Scholar]
- 5.Zhou C, Nayar SK. Computational cameras: Convergence of optics and processing. Image Processing, IEEE Transactions on. 2011;20:3322–3340. doi: 10.1109/TIP.2011.2171700. [DOI] [PubMed] [Google Scholar]
- 6.Wetzstein G, Ihrke I, Lanman D, Heidrich W. Computational Plenoptic Imaging. Computer Graphics Forum. 2011;30:2397–2426. [Google Scholar]
- 7.Willett RM, Marcia RF, Nichols JM. Compressed sensing for practical optical imaging systems: a tutorial. Optical Engineering. 2011;50:072601–072613. [Google Scholar]
- 8.Bucholtz F, Nichols JM. Compressive Sensing Demystified. Optics and Photonics News. 2014;25:44–49. [Google Scholar]
- 9.Chapman G. Ultra-precision machining systems; an enabling technology for perfect surfaces. Moore Nanotechnology Systems. 2004 [Google Scholar]
- 10.Williamson R. Field Guide to Optical Fabrication. Spie. 2011 [Google Scholar]
- 11.Wagadarikar A, John R, Willett R, Brady D. Single disperser design for coded aperture snapshot spectral imaging. Appl Optics. 2008;47:B44–B51. doi: 10.1364/ao.47.000b44. [DOI] [PubMed] [Google Scholar]
- 12.Wagadarikar AA, Pitsianis NP, Sun XB, Brady DJ. Video rate spectral imaging using a coded aperture snapshot spectral imager. Optics Express. 2009;17:6368–6388. doi: 10.1364/oe.17.006368. [DOI] [PubMed] [Google Scholar]
- 13.Reddy D, Veeraraghavan A, Chellappa R. P2C2: Programmable pixel compressive camera for high speed imaging; Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on; 2011. pp. 329–336. [Google Scholar]
- 14.Nanotech. 250 UPL v2 Compact Diamond Turning Lathe. Nanotech Products [Google Scholar]
- 15.Bedard N, Hagen N, Gao L, Tkaczyk TS. Image mapping spectrometry: calibration and characterization. Optical Engineering. 2012;51:111711. doi: 10.1117/1.OE.51.11.111711. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Gao LA, Kester RT, Hagen N, Tkaczyk TS. Snapshot Image Mapping Spectrometer (IMS) with high sampling density for hyperspectral microscopy. Optics Express. 2010;18:14330–14344. doi: 10.1364/OE.18.014330. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Gao L, Kester RT, Tkaczyk TS. Compact Image Slicing Spectrometer (ISS) for hyperspectral fluorescence microscopy. Optics Express. 2009;17:12293–12308. doi: 10.1364/oe.17.012293. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Allington-Smith J. Basic principles of integral field spectroscopy. New Astron Rev. 2006;50:244–251. [Google Scholar]
- 19.Javidi B, Tajahuerce E, Andres P. Multi-dimensional Imaging. John Wiley & Sons; 2014. [Google Scholar]
- 20.Jacquinot P. The Luminosity of Spectrometers with Prisms, Gratings, or Fabry-Perot Etalons. J Opt Soc Am. 1954;44:761–765. [Google Scholar]
- 21.Hagen N, Kester RT, Gao L, Tkaczyk TS. Snapshot advantage: a review of the light collection improvement for parallel high-dimensional measurement systems. Optical Engineering. 2012;51:111702. doi: 10.1117/1.OE.51.11.111702. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Tyo JS, Goldstein DL, Chenault DB, Shaw JA. Review of passive imaging polarimetry for remote sensing applications. Appl Optics. 2006;45:5453–5469. doi: 10.1364/ao.45.005453. [DOI] [PubMed] [Google Scholar]
- 23.Laser Institute of America. American National Standard for Safe Use of Lasers, ANSI Z136.1-2000. New York: American National Standards Institute; 2000. [Google Scholar]
- 24.Gao L, Bedard N, Hagen N, Kester RT, Tkaczyk TS. Depth-resolved image mapping spectrometer (IMS) with structured illumination. Opt. Express. 2011;19:17439–17452. doi: 10.1364/OE.19.017439. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Levoy M, Ng R, Adams A, Footer M, Horowitz M. ACM SIGGRAPH 2006 Papers. Boston, Massachusetts: ACM; 2006. Light field microscopy; pp. 924–934. [Google Scholar]
- 26.Nakagawa K, Iwasaki A, Oishi Y, Horisaki R, Tsukamoto A, Nakamura A, Hirosawa K, Liao H, Ushida T, Goda K, Kannari F, Sakuma I. Sequentially timed all-optical mapping photography (STAMP). Nat Photon. 2014;8:695–700. [Google Scholar]
- 27.Gao L, Liang J, Li C, Wang LV. Single-shot compressed ultrafast photography at one hundred billion frames per second. Nature. 2014;516:74–77. doi: 10.1038/nature14005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Gao L, Smith RT. Optical hyperspectral imaging in microscopy and spectroscopy – a review of data acquisition. Journal of Biophotonics. 2014 doi: 10.1002/jbio.201400051. 10.1002/jbio.201400051. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Lu G, Fei B. Medical hyperspectral imaging: a review. J Biomed Opt. 2014;19:010901. doi: 10.1117/1.JBO.19.1.010901. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Bodkin A, Sheinis A, Norton A, Daly J, Roberts C, Beaven S, Weinheimer J. Video-rate chemical identification and visualization with snapshot hyperspectral imaging. Proc. SPIE. 2012:83740C. [Google Scholar]
- 31.Bodkin A, Sheinis A, Norton A, Daly J, Beaven S, Weinheimer J. Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV. Orlando, Florida, USA: SPIE; 2009. Snapshot hyperspectral imaging: the hyperpixel array camera; p. 73340H. [Google Scholar]
- 32.Bodkin A, Sheinis A, Norton A. Hyperspectral imaging systems. Bodkin Design & Engineering Llc. 2005 [Google Scholar]
- 33.Matsuoka H, Kosai Y, Saito M, Takeyama N, Suto H. Single-cell viability assessment with a novel spectro-imaging system. J Biotechnol. 2002;94:299–308. doi: 10.1016/s0168-1656(01)00431-x. [DOI] [PubMed] [Google Scholar]
- 34.George TC, Hall BE, Zimmerman CA, Frost K, Seo M, Ortyn WE, Basiji D, Morrissey P. Distinguishing modes of cell death using imagestream (TM) multispectral imaging cytometry. Cytom Part A. 2004;59A:118–118. doi: 10.1002/cyto.a.20048. [DOI] [PubMed] [Google Scholar]
- 35.Fletcher-Holmes DW, Gorman A, Harvey AR. Generalization of the Lyot filter and its application to snapshot spectral imaging. Optics Express. 2010;18:5602. doi: 10.1364/OE.18.005602. [DOI] [PubMed] [Google Scholar]
- 36.Kudenov MW, Dereniak EL. Compact real-time birefringent imaging spectrometer. Opt. Express. 2012;20:17973–17986. doi: 10.1364/OE.20.017973. [DOI] [PubMed] [Google Scholar]
- 37.Kudenov MW, Dereniak EL. Compact snapshot birefringent imaging Fourier transform spectrometer. SPIE Optical Engineering+ Applications, International Society for Optics and Photonics. 2010:781206. [Google Scholar]
- 38.Kudenov MW, Jungwirth MEL, Dereniak EL, Gerhart GR. White-light Sagnac interferometer for snapshot multispectral imaging. Appl. Opt. 2010;49:4067–4076. doi: 10.1364/AO.49.004067. [DOI] [PubMed] [Google Scholar]
- 39.Ford BK, Volin CE, Murphy SM, Lynch RM, Descour MR. Computed tomography-based spectral imaging for fluorescence microscopy. Biophys J. 2001;80:986–993. doi: 10.1016/S0006-3495(01)76077-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.Ford BK, Descour MR, Lynch RM. Large-image-format computed tomography imaging spectrometer for fluorescence microscopy. Optics Express. 2001;9:444–453. doi: 10.1364/oe.9.000444. [DOI] [PubMed] [Google Scholar]
- 41.Henault F, Bacon R, Content R, Lantz B, Laurent F, Lemonnier J, Morris S. Slicing the universe at affordable cost: the quest for the MUSE image slicer. Proceedings of SPIE. 2003:134–145. [Google Scholar]
- 42.Vives S, Prieto E. Original image slicer designed for integral field spectroscopy with the near-infrared spectrograph for the James Webb Space Telescope. Optical Engineering. 2006;45 [Google Scholar]
- 43.Kester RT, Gao L, Tkaczyk TS. Development of image mappers for hyperspectral biomedical imaging applications. Appl. Opt. 2010;49:1886–1899. doi: 10.1364/AO.49.001886. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Gao L, Tkaczyk TS. Correction of vignetting and distortion errors induced by two-axis light beam steering. Optical Engineering. 2012;51 doi: 10.1117/1.OE.51.4.043203. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Kester RT, Bedard N, Gao L, Tkaczyk TS. Real-time snapshot hyperspectral imaging endoscope. J Biomed Opt. 2011;16 doi: 10.1117/1.3574756. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.Elliott AD, Gao L, Ustione A, Bedard N, Kester R, Piston DW, Tkaczyk TS. Real-time hyperspectral fluorescence imaging of pancreatic β-cell dynamics with the image mapping spectrometer. J Cell Sci. 2012;125:4833–4840. doi: 10.1242/jcs.108258. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Gao L, Kester RT, Hagen N, Tkaczyk TS. Snapshot Image-Mapping Spectrometer for Hyperspectral Fluorescence Microscopy. Opt. Photon. News. 2010;21:50–50. doi: 10.1364/OE.18.014330. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Gao L, Hagen N, Tkaczyk TS. Quantitative comparison between full-spectrum and filter-based imaging in hyperspectral fluorescence microscopy. Journal of Microscopy. 2012;246:113–123. doi: 10.1111/j.1365-2818.2012.03596.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Gao L, Smith RT, Tkaczyk TS. Snapshot hyperspectral retinal camera with the Image Mapping Spectrometer (IMS) Biomed Opt Express. 2012;3:48–54. doi: 10.1364/BOE.3.000048. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Bedard N, Schwarz RA, Hu A, Bhattar V, Howe J, Williams MD, Gillenwater AM, Richards-Kortum R, Tkaczyk TS. Multimodal snapshot spectral imaging for oral cancer diagnostics: a pilot study. Biomed Opt Express. 2013;4:938–949. doi: 10.1364/BOE.4.000938. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Hagen N, Kester RT, Morlier CG, Panek JA, Drayton P, Fashimpaur D, Stone P, Adams E. Chemical, Biological, Radiological, Nuclear, and Explosives (CBRNE) Sensing XIV. Baltimore, Maryland, USA: SPIE; 2013. Video-rate spectral imaging of gas leaks in the longwave infrared; p. 871005. [Google Scholar]
- 52.Courtes G, Georgelin Y, Bacon R, Monnet G, Boulesteix J. Instrumentation for Ground-Based Optical Astronomy. Springer; 1988. A New Device for Faint Objects High Resolution Imagery and Bidimensional Spectrography; pp. 266–274. [Google Scholar]
- 53.Bacon R, Adam G, Baranne A, Courtes G, Dubet D, Dubois J-P, Georgelin Y, Monnet G, Pecontal E, Urios J. The integral field spectrograph TIGER. Very Large Telescopes and their Instrumentation. 1988;2:1185–1194. [Google Scholar]
- 54.Fletcher-Holmes DW, Harvey AR. Real-time imaging with a hyperspectral fovea. Journal of Optics A: Pure and Applied Optics. 2005;7:S298. [Google Scholar]
- 55.Gat N, Scriven G, Garman J, De Li M, Zhang J. Development of four-dimensional imaging spectrometers (4D-IS) SPIE Optics+ Photonics, International Society for Optics and Photonics. 2006:63020M. [Google Scholar]
- 56.Kriesel J, Scriven G, Gat N, Nagaraj S, Willson P, Swaminathan V. Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII. Baltimore, Maryland, USA: 2012. Snapshot hyperspectral fovea vision system (HyperVideo) p. 83900T. [Google Scholar]
- 57.Hill JM, Angel J, Scott JS, Lindley D, Hintzen P. Multiple object spectroscopy-The Medusa spectrograph. The Astrophysical Journal. 1980;242:L69–L72. [Google Scholar]
- 58.Basiji DA, Ortyn WE, Liang L, Venkatachalam V, Morrissey P. Cellular image analysis and imaging by flow cytometry. Clin Lab Med. 2007;27:653. doi: 10.1016/j.cll.2007.05.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59.Headland SE, Jones HR, D'Sa ASV, Perretti M, Norling LV. Cutting-Edge Analysis of Extracellular Microparticles using ImageStreamX Imaging Flow Cytometry. Sci. Rep. 2014;4 doi: 10.1038/srep05237. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60.Horstmeyer R, Euliss G, Athale R, Levoy M. Computational Photography (ICCP), 2009 IEEE International Conference on. IEEE; 2009. Flexible multimodal camera using a light field architecture; pp. 1–8. [Google Scholar]
- 61.Bedard N, Tosic I, Meng L, Berkner K. Imaging and Applied Optics 2014. Seattle, Washington: Optical Society of America; 2014. Light Field Otoscope; p. IM3C.6. [Google Scholar]
- 62.Mitchell TA, Stone TW. Compact snapshot multispectral imaging system. US: U.S. PATENT Wavefront Research, Inc.; 2011. [Google Scholar]
- 63.Harvey AR, Fletcher-Holmes DW. High-throughput snapshot spectral imaging in two dimensions. Biomedical Optics 2003, International Society for Optics and Photonics. 2003:46–54. [Google Scholar]
- 64.Kudenov MW, Dereniak EL. Compact real-time birefringent imaging spectrometer. Optics Express. 2012;20:17973–17986. doi: 10.1364/OE.20.017973. [DOI] [PubMed] [Google Scholar]
- 65.Hirai A, Inoue T, Itoh K, Ichioka Y. Application of multiple-image Fourier transform spectral imaging to measurement of fast phenomena. Optical Review. 1994;1:205–207. [Google Scholar]
- 66.Oka K, Kaneko T. Compact complete imaging polarimeter using birefringent wedge prisms. Optics Express. 2003;11:1510–1519. doi: 10.1364/oe.11.001510. [DOI] [PubMed] [Google Scholar]
- 67.Okamoto T, Yamaguchi I. Simultaneous acquisition of spectral image information. Optics Letters. 1991;16:1277–1279. doi: 10.1364/ol.16.001277. [DOI] [PubMed] [Google Scholar]
- 68.Descour M, Dereniak E. Computed-tomography imaging spectrometer: experimental calibration and reconstruction results. Appl Optics. 1995;34:4817–4826. doi: 10.1364/AO.34.004817. [DOI] [PubMed] [Google Scholar]
- 69.Shepp LA, Vardi Y. Maximum likelihood reconstruction for emission tomography. Medical Imaging, IEEE Transactions on. 1982;1:113–122. doi: 10.1109/TMI.1982.4307558. [DOI] [PubMed] [Google Scholar]
- 70.Volin CE, Garcia JP, Dereniak EL, Descour MR, Hamilton T, McMillan R. Midwave-infrared snapshot imaging spectrometer. Appl Optics. 2001;40:4501–4506. doi: 10.1364/ao.40.004501. [DOI] [PubMed] [Google Scholar]
- 71.Johnson WR, Wilson DW, Bearman G. All-reflective snapshot hyperspectral imager for ultraviolet and infrared applications. Optics Letters. 2005;30:1464–1466. doi: 10.1364/ol.30.001464. [DOI] [PubMed] [Google Scholar]
- 72.Fawzi AA, Lee N, Acton JH, Laine AF, Smith RT. Recovery of macular pigment spectrum in vivo using hyperspectral image analysis. J Biomed Opt. 2011;16 doi: 10.1117/1.3640813. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Johnson WR, Wilson DW, Fink W, Humayun M, Bearman G. Snapshot hyperspectral imaging in ophthalmology. J Biomed Opt. 2007;12 doi: 10.1117/1.2434950. [DOI] [PubMed] [Google Scholar]
- 74.Hagen N, Dereniak EL. Analysis of computed tomographic imaging spectrometers. I. Spatial and spectral resolution. Appl Optics. 2008;47:F85–F95. doi: 10.1364/ao.47.000f85. [DOI] [PubMed] [Google Scholar]
- 75.Oktem FS, Kamalabadi F, Davila JM. A Parametric Estimation Approach to Instantaneous Spectral Imaging. Image Processing, IEEE Transactions on. 2014;23:5707–5721. doi: 10.1109/TIP.2014.2363903. [DOI] [PubMed] [Google Scholar]
- 76.Bearman G, Johnson WR, Wilson DW, Fink W, Humayun M. Snapshot hyperspectral imaging in ophthalmology. J Biomed Opt. 2007;12 doi: 10.1117/1.2434950. [DOI] [PubMed] [Google Scholar]
- 77.Gehm ME, John R, Brady DJ, Willett RM, Schulz TJ. Single-shot compressive spectral imaging with a dual-disperser architecture. Optics Express. 2007;15:14013–14027. doi: 10.1364/oe.15.014013. [DOI] [PubMed] [Google Scholar]
- 78.Figueiredo MAT, Nowak RD, Wright SJ. Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems. Ieee J-Stsp. 2007;1:586–597. [Google Scholar]
- 79.Bioucas-Dias JM, Figueiredo MAT. A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration. Ieee T Image Process. 2007;16:2992–3004. doi: 10.1109/tip.2007.909319. [DOI] [PubMed] [Google Scholar]
- 80.Kittle D, Choi K, Wagadarikar A, Brady DJ. Multiframe image estimation for coded aperture snapshot spectral imagers. Appl. Opt. 2010;49:6824–6833. doi: 10.1364/AO.49.006824. [DOI] [PubMed] [Google Scholar]
- 81.Wu Y, Mirza IO, Arce GR, Prather DW. Development of a digital-micromirror-device-based multishot snapshot spectral imaging system. Opt. Lett. 2011;36:2692–2694. doi: 10.1364/OL.36.002692. [DOI] [PubMed] [Google Scholar]
- 82.Arguello H, Arce GR. Code aperture optimization for spectrally agile compressive imaging. Journal of the Optical Society of America A. 2011;28:2400–2413. doi: 10.1364/JOSAA.28.002400. [DOI] [PubMed] [Google Scholar]
- 83.Arguello H, Rueda H, Wu Y, Prather DW, Arce GR. Higher-order computational model for coded aperture spectral imaging. Appl Optics. 2013;52:D12–D21. doi: 10.1364/AO.52.000D12. [DOI] [PubMed] [Google Scholar]
- 84.Arguello H, Arce GR. Rank Minimization Code Aperture Design for Spectrally Selective Compressive Imaging. Image Processing, IEEE Transactions on. 2013;22:941–954. doi: 10.1109/TIP.2012.2222899. [DOI] [PubMed] [Google Scholar]
- 85.Arguello H, Arce G. Code aperture design for compressive spectral imaging; European Signal Processing Conference, Citeseer; 2010. pp. 137–140. [Google Scholar]
- 86.Wang L, Xiong Z, Gao D, Shi G, Wu F. Dual-camera design for coded aperture snapshot spectral imaging. Appl Optics. 2015;54:848–858. doi: 10.1364/AO.54.000848. [DOI] [PubMed] [Google Scholar]
- 87.Lam EY. Computational photography with plenoptic camera and light field capture: tutorial. Journal of the Optical Society of America A. 2015;32:2021–2032. doi: 10.1364/JOSAA.32.002021. [DOI] [PubMed] [Google Scholar]
- 88.Lippmann G. Epreuves reversibles donnant la sensation du relief. J. Phys. Theor. Appl. 1908;7:821–825. [Google Scholar]
- 89.Ng R, Levoy M, Brédif M, Duval G, Horowitz M, Hanrahan P. Light field photography with a hand-held plenoptic camera. Computer Science Technical Report CSTR. 2005;2 [Google Scholar]
- 90.Montilla I, Puga M, Marichal-Hernandez JG, Lüke JP, Rodríguez-Ramos JM. Multimedia Content and Mobile Devices. Burlingame, California, USA: SPIE; 2013. On the application of the plenoptic camera to mobile phones; p. 866702. [Google Scholar]
- 91.Adelson EH, Wang JYA. Single Lens Stereo with a Plenoptic Camera. Ieee T Pattern Anal. 1992;14:99–106. [Google Scholar]
- 92.Kim C, Hornung A, Heinzle S, Matusik W, Gross M. Multi-perspective stereoscopy from light fields. ACM Transactions on Graphics (TOG) 2011;30:190. [Google Scholar]
- 93.Turola M, Gruppetta S. 4D Light Field Ophthalmoscope: a Study of Plenoptic Imaging of the Human Retina. In: Delyett JP, Gauthier D, editors. Frontiers in Optics 2013. Orlando, Florida: Optical Society of America; 2013. p. JW3A.36. [Google Scholar]
- 94.Levoy M, Zhang Z, Mcdowall I. Recording and controlling the 4D light field in a microscope using microlens arrays. Journal of Microscopy. 2009;235:144–162. doi: 10.1111/j.1365-2818.2009.03195.x. [DOI] [PubMed] [Google Scholar]
- 95.Prevedel R, Yoon YG, Hoffmann M, Pak N, Wetzstein G, Kato S, Schrodel T, Raskar R, Zimmer M, Boyden ES, Vaziri A. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat Methods. 2014;11:727–U161. doi: 10.1038/nmeth.2964. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 96.Wilburn BS, Smulski M, Lee H-HK, Horowitz MA. Light field video camera. Electronic Imaging 2002, International Society for Optics and Photonics. 2001:29–36. [Google Scholar]
- 97.Yang JC, Everett M, Buehler C, McMillan L. A real-time distributed light field camera. Proceedings of the 13th Eurographics workshop on Rendering, Eurographics Association. 2002:77–86. [Google Scholar]
- 98.Levoy M, Hanrahan P. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques. ACM; 1996. Light field rendering; pp. 31–42. [Google Scholar]
- 99.Georgiev T, Zheng KC, Curless B, Salesin D, Nayar S, Intwala C. Spatio-Angular Resolution Tradeoffs in Integral Photography. Rendering Techniques. 2006;2006:263–272. [Google Scholar]
- 100.Lumsdaine A, Georgiev T. Computational Photography (ICCP), 2009 IEEE International Conference on. IEEE; 2009. The focused plenoptic camera; pp. 1–8. [Google Scholar]
- 101.Georgiev T, Lumsdaine A. The multifocus plenoptic camera. IS&T/SPIE Electronic Imaging, International Society for Optics and Photonics. 2012:829908. [Google Scholar]
- 102.Veeraraghavan A, Raskar R, Agrawal A, Mohan A, Tumblin J. Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Transactions on Graphics. 2007;26:69. [Google Scholar]
- 103.Veeraraghavan A, Agrawal A, Raskar R, Mohan A, Tumblin J. Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on. IEEE; 2008. Non-refractive modulators for encoding and capturing scene appearance and depth; pp. 1–8. [Google Scholar]
- 104.Wetzstein G, Ihrke I, Heidrich W. On Plenoptic Multiplexing and Reconstruction. Int J Comput Vis. 2013;101:384–400. [Google Scholar]
- 105.Xu Z, Lam EY. Image Reconstruction from Incomplete Data VII. San Diego, California, USA: SPIE; 2012. A high-resolution lightfield camera with dual-mask design; p. 85000U. [Google Scholar]
- 106.Ashok A, Neifeld MA. Compressive light field imaging. Proc. SPIE. 2010:76900Q. [Google Scholar]
- 107.Babacan SD, Ansorge R, Luessi M, Mataran PR, Molina R, Katsaggelos AK. Compressive Light Field Sensing. Image Processing, IEEE Transactions on. 2012;21:4746–4757. doi: 10.1109/TIP.2012.2210237. [DOI] [PubMed] [Google Scholar]
- 108.Liang C-K, Lin T-H, Wong B-Y, Liu C, Chen HH. ACM SIGGRAPH 2008 papers. Los Angeles, California: ACM; 2008. Programmable aperture photography: multiplexed light field acquisition; pp. 1–10. [Google Scholar]
- 109.Marwah K, Wetzstein G, Bando Y, Raskar R. Compressive light field photography using overcomplete dictionaries and optimized projections. ACM Transactions on Graphics (TOG) 2013;32:46. [Google Scholar]
- 110.Abolghasemi V, Ferdowsi S, Makkiabadi B, Sanei S. On optimization of the measurement matrix for compressive sensing. Proc. European Signal Processing Conf. 2010:427–431. [Google Scholar]
- 111.Geng J. Structured-light 3D surface imaging: a tutorial. Adv. Opt. Photon. 2011;3:128–160. [Google Scholar]
- 112.Stettner R, Bailey H, Richmond RD. Eye-safe laser radar 3D imaging. 2004:111–116. [Google Scholar]
- 113.Agard DA. Optical Sectioning Microscopy: Cellular Architecture in Three Dimensions. Annual Review of Biophysics and Bioengineering. 1984;13:191–219. doi: 10.1146/annurev.bb.13.060184.001203. [DOI] [PubMed] [Google Scholar]
- 114.Mohan K, Purnapatra SB, Mondal PP. Three Dimensional Fluorescence Imaging Using Multiple Light-Sheet Microscopy. Plos One. 2014;9:e96551. doi: 10.1371/journal.pone.0096551. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 115.Murray JM. Methods for imaging thick specimens: confocal microscopy, deconvolution, and structured illumination. Cold Spring Harb Protoc. 2011;2011:1399–1437. doi: 10.1101/pdb.top066936. [DOI] [PubMed] [Google Scholar]
- 116.Wallace W, Schaefer LH, Swedlow JR. A workingperson's guide to deconvolution in light microscopy. Biotechniques. 2001;31:1076. doi: 10.2144/01315bi01. [DOI] [PubMed] [Google Scholar]
- 117.Prabhat P, Ram S, Ward ES, Ober RJ. Simultaneous imaging of different focal planes in fluorescence microscopy for the study of cellular dynamics in three dimensions. Ieee T Nanobiosci. 2004;3:237–242. doi: 10.1109/tnb.2004.837899. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 118.Ram S, Prabhat P, Chao J, Sally Ward E, Ober RJ. High Accuracy 3D Quantum Dot Tracking with Multifocal Plane Microscopy for the Study of Fast Intracellular Dynamics in Live Cells. Biophys J. 95:6025–6043. doi: 10.1529/biophysj.108.140392. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 119.Geissbuehler S, Sharipov A, Godinat A, Bocchio NL, Sandoz PA, Huss A, Jensen NA, Jakobs S, Enderlein J, Gisou van der Goot F, Dubikovskaya EA, Lasser T, Leutenegger M. Live-cell multiplane three-dimensional super-resolution optical fluctuation imaging. Nat Commun. 2014;5 doi: 10.1038/ncomms6830. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 120.Sinha A, Barbastathis G, Liu WH, Psaltis D. Imaging using volume holograms. Optical Engineering. 2004;43:1959–1972. [Google Scholar]
- 121.Luo Y, Gelsinger-Austin PJ, Watson JM, Barbastathis G, Barton JK, Kostuk RK. Laser-induced fluorescence imaging of subsurface tissue structures with a volume holographic spatial-spectral imaging system. Opt. Lett. 2008;33:2098–2100. doi: 10.1364/ol.33.002098. [DOI] [PubMed] [Google Scholar]
- 122.Luo Y, Singh VR, Bhattacharya D, Yew EYS, Tsai J-C, Yu S-L, Chen H-H, Wong J-M, Matsudaira P, So PTC, Barbastathis G. Talbot holographic illumination nonscanning (THIN) fluorescence microscopy. Laser Photonics Rev. 2014;8:L71–L75. doi: 10.1002/lpor.201400053. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 123.Luo Y, Zervantonakis IK, Oh SB, Kamm RD, Barbastathis G. Spectrally resolved multidepth fluorescence imaging. J Biomed Opt. 2011;16:096015. doi: 10.1117/1.3626211. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 124.Blanchard PM, Greenaway AH. Simultaneous multiplane imaging with a distorted diffraction grating. Appl Optics. 1999;38:6692–6699. doi: 10.1364/ao.38.006692. [DOI] [PubMed] [Google Scholar]
- 125.Dalgarno PA, Dalgarno HIC, Putoud A, Lambert R, Paterson L, Logan DC, Towers DP, Warburton RJ, Greenaway AH. Multiplane imaging and three dimensional nanoscale particle tracking in biological microscopy. Optics Express. 2010;18:877–884. doi: 10.1364/OE.18.000877. [DOI] [PubMed] [Google Scholar]
- 126.Abrahamsson S, Chen J, Hajj B, Stallinga S, Katsov AY, Wisniewski J, Mizuguchi G, Soule P, Mueller F, Darzacq CD, Darzacq X, Wu C, Bargmann CI, Agard DA, Dahan M, Gustafsson MGL. Fast multicolor 3D imaging using aberration-corrected multifocus microscopy. Nat Meth. 2013;10:60–63. doi: 10.1038/nmeth.2277. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 127.Blanchard PM, Greenaway AH. Broadband simultaneous multiplane imaging. Opt Commun. 2000;183:29–36. [Google Scholar]
- 128.Maurer C, Khan S, Fassl S, Bernet S, Ritsch-Marte M. Depth of field multiplexing in microscopy. Opt Express. 2010;18:3023–3034. doi: 10.1364/OE.18.003023. [DOI] [PubMed] [Google Scholar]
- 129.Jesacher A, Ritsch-Marte M. Multi-focal light microscopy using liquid crystal spatial light modulators; Optomechatronic Technologies (ISOT), 2012 International Symposium on; 2012. pp. 1–2. [Google Scholar]
- 130.Liu W, Psaltis D, Barbastathis G. Real-time spectral imaging in three spatial dimensions. Optics Letters. 2002;27:854–856. doi: 10.1364/ol.27.000854. [DOI] [PubMed] [Google Scholar]
- 131.Sinha A, Sun W, Shih T, Barbastathis G. Volume holographic imaging in transmission geometry. Appl Optics. 2004;43:1533–1551. doi: 10.1364/ao.43.001533. [DOI] [PubMed] [Google Scholar]
- 132.Sinha A, Barbastathis G. Broadband volume holographic imaging. Appl Optics. 2004;43:5214–5221. doi: 10.1364/ao.43.005214. [DOI] [PubMed] [Google Scholar]
- 133.Luo Y, Gelsinger PJ, Barton JK, Barbastathis G, Kostuk RK. Optimization of multiplexed holographic gratings in PQ-PMMA for spectral-spatial imaging filters. Optics Letters. 2008;33:566–568. doi: 10.1364/ol.33.000566. [DOI] [PubMed] [Google Scholar]
- 134.Gelsinger-Austin PJ, Luo YA, Watson JM, Kostuk RK, Barbastathis G, Barton JK, Castro JM. Optical design for a spatial-spectral volume holographic imaging system. Optical Engineering. 2010;49 doi: 10.1117/1.3378025. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 135.Luo Y, Oh SB, Barbastathis G. Wavelength-coded multifocal microscopy. Optics Letters. 2010;35:781–783. doi: 10.1364/ol.35.000781. [DOI] [PubMed] [Google Scholar]
- 136.Luo Y, Castro J, Barton JK, Kostuk RK, Barbastathis G. Simulations and experiments of aperiodic and multiplexed gratings in volume holographic imaging systems. Optics Express. 2010;18:19273–19285. doi: 10.1364/OE.18.019273. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 137.Howlett ID, Gordon M, Brownlee JW, Barton JK, Kostuk RK. Endoscopic Microscopy IX; and Optical Techniques in Pulmonary Medicine. San Francisco, California, United States: SPIE; 2014. Volume holographic reflection endoscope for in-vivo ovarian cancer clinical studies; p. 89270R. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 138.Howlett ID, Gordon M, Orsinger G, Brownlee J, Hatch K, Romanowski M, Barton JK, Kostuk RK. Frontiers in Optics 2014. Tucson, Arizona: Optical Society of America; 2014. Evaluation of Volume Holographic Images Obtained Through an Endoscope for in-vivo Medical Applications; p. FTu4F.7. [Google Scholar]
- 139.Orsinger GV, Watson JM, Gordon M, Nymeyer AC, de Leon EE, Brownlee JW, Hatch KD, Chambers SK, Barton JK, Kostuk RK, Romanowski M. Simultaneous multiplane imaging of human ovarian cancer by volume holographic imaging. J Biomed Opt. 2014;19:036020–036020. doi: 10.1117/1.JBO.19.3.036020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 140.Chen Z, Chen W, Lu H-y, Chevallier Y, Chen N, Barbastathis G, Luo Y. Real-time 3D particle manipulation visualized using volume holographic gratings. Optics Letters. 2014;39:3078–3081. doi: 10.1364/OL.39.003078. [DOI] [PubMed] [Google Scholar]
- 141.Sun W, Barbastathis G. Conference on Lasers and Electro-Optics/International Quantum Electronics Conference and Photonic Applications Systems Technologies. San Francisco, California: Optical Society of America; 2004. Rainbow volume holographic imaging; p. CPDA10. [Google Scholar]
- 142.Castro JM, Gelsinger-Austin PJ, Barton JK, Kostuk RK. Confocal-rainbow volume holographic imaging system. Appl Optics. 2011;50:1382–1388. doi: 10.1364/AO.50.001382. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 143.de Leon EE, Brownlee JW, Gelsinger-Austin P, Kostuk RK. Dual-grating confocal-rainbow volume holographic imaging system designs for high depth resolution. Appl Optics. 2012;51:6952–6961. doi: 10.1364/AO.51.006952. [DOI] [PubMed] [Google Scholar]
- 144.Botcherby EJ, Juskaitis R, Booth MJ, Wilson T. Aberration-free optical refocusing in high numerical aperture microscopy. Optics Letters. 2007;32:2007–2009. doi: 10.1364/ol.32.002007. [DOI] [PubMed] [Google Scholar]
- 145.Hajj B, Wisniewski J, El Beheiry M, Chen J, Revyakin A, Wu C, Dahan M. Whole-cell, multicolor superresolution imaging using volumetric multifocus microscopy. Proceedings of the National Academy of Sciences. 2014;111:17480–17485. doi: 10.1073/pnas.1412396111. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 146.Yu J, Zhou C, Jia W, Ma J, Hu A, Wu J, Wang S. Distorted Dammann grating. Optics Letters. 2013;38:474–476. doi: 10.1364/OL.38.000474. [DOI] [PubMed] [Google Scholar]
- 147.Xiao X, Javidi B, Martinez-Corral M, Stern A. Advances in three-dimensional integral imaging: sensing, display, and applications [Invited] Appl Optics. 2013;52:546–560. doi: 10.1364/AO.52.000546. [DOI] [PubMed] [Google Scholar]
- 148.Schnars U, Jüptner W. Direct recording of holograms by a CCD target and numerical reconstruction. Appl Optics. 1994;33:179–181. doi: 10.1364/AO.33.000179. [DOI] [PubMed] [Google Scholar]
- 149.Nguyen T-U, Pierce MC, Higgins L, Tkaczyk TS. Snapshot 3D optical coherence tomography system using image mappingspectrometry. Optics Express. 2013;21:13758–13772. doi: 10.1364/OE.21.013758. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 150.Arimoto H, Javidi B. Integral three-dimensional imaging with digital reconstruction. Optics Letters. 2001;26:157–159. doi: 10.1364/ol.26.000157. [DOI] [PubMed] [Google Scholar]
- 151.Seung-Hyun H, Javidi B. Three-dimensional visualization of partially occluded objects using integral imaging. Journal of Display Technology. 2005;1:354–359. [Google Scholar]
- 152.Hong S-H, Jang J-S, Javidi B. Three-dimensional volumetric object reconstruction using computational integral imaging. Optics Express. 2004;12:483–491. doi: 10.1364/opex.12.000483. [DOI] [PubMed] [Google Scholar]
- 153.Xiao X, Javidi B, Dey DK. Bayesian estimation of depth information in three-dimensional integral imaging. SPIE; 2014. p. 911714. [Google Scholar]
- 154.Broxton M, Grosenick L, Yang S, Cohen N, Andalman A, Deisseroth K, Levoy M. Wave optics theory and 3-D deconvolution for the light field microscope. Optics Express. 2013;21:25418–25439. doi: 10.1364/OE.21.025418. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 155.Myungjin C, Javidi B. Three-Dimensional Visualization of Objects in Turbid Water Using Integral Imaging. Journal of Display Technology. 2010;6:544–547. [Google Scholar]
- 156.Tavakoli B, Javidi B, Watson E. Three dimensional visualization by photoncounting computational Integral Imaging. Optics Express. 2008;16:4426–4436. doi: 10.1364/oe.16.004426. [DOI] [PubMed] [Google Scholar]
- 157.Xiao X, Javidi B. 3D photon counting integral imaging with unknown sensor positions. Journal of the Optical Society of America A. 2012;29:767–771. doi: 10.1364/JOSAA.29.000767. [DOI] [PubMed] [Google Scholar]
- 158.Yeom S, Javidi B, Watson E. Three-dimensional distortion-tolerant object recognition using photon-counting integral imaging. Optics Express. 2007;15:1513–1533. doi: 10.1364/oe.15.001513. [DOI] [PubMed] [Google Scholar]
- 159.DaneshPanah M, Javidi B, Watson EA. Three dimensional object recognition with photon counting imagery in the presence of noise. Optics Express. 2010;18:26450–26460. doi: 10.1364/OE.18.026450. [DOI] [PubMed] [Google Scholar]
- 160.Yeom S, Javidi B, Watson E. Photon counting passive 3D image sensing for automatic target recognition. Optics Express. 2005;13:9310–9330. doi: 10.1364/opex.13.009310. [DOI] [PubMed] [Google Scholar]
- 161.Zhao Y, Xiao X, Cho M, Javidi B. Tracking of multiple objects in unknown background using Bayesian estimation in 3D space. Journal of the Optical Society of America A. 2011;28:1935–1940. [Google Scholar]
- 162.Jang J-S, Javidi B. Three-dimensional integral imaging of micro-objects. Optics Letters. 2004;29:1230–1232. doi: 10.1364/ol.29.001230. [DOI] [PubMed] [Google Scholar]
- 163.Shin D, Cho M, Javidi B. Three-dimensional optical microscopy using axially distributed image sensing. Optics Letters. 2010;35:3646–3648. doi: 10.1364/OL.35.003646. [DOI] [PubMed] [Google Scholar]
- 164.Lim Y-T, Park J-H, Kwon K-C, Kim N. Resolution-enhanced integral imaging microscopy that uses lens array shifting. Optics Express. 2009;17:19253–19263. doi: 10.1364/OE.17.019253. [DOI] [PubMed] [Google Scholar]
- 165.Hassanfiroozi A, Huang Y-P, Javidi B, Shieh H-PD. Hexagonal liquid crystal lens array for 3D endoscopy. Optics Express. 2015;23:971–981. doi: 10.1364/OE.23.000971. [DOI] [PubMed] [Google Scholar]
- 166.Khare K, Ali PTS, Joseph J. Single shot high resolution digital holography. Optics Express. 2013;21:2581–2591. doi: 10.1364/OE.21.002581. [DOI] [PubMed] [Google Scholar]
- 167.Toge H, Fujiwara H, Sato K. Practical Holography XXII: Materials and Applications. San Jose, CA: SPIE; 2008. One-shot digital holography for recording color 3-D images; p. 69120U. [Google Scholar]
- 168.Awatsuji Y, Sasada M, Kubota T. Parallel quasi-phase-shifting digital holography. Appl Phys Lett. 2004;85:1069–1071. doi: 10.1364/ao.45.000968. [DOI] [PubMed] [Google Scholar]
- 169.Schnars U, Jüptner WP. Digital recording and numerical reconstruction of holograms. Measurement science and technology. 2002;13:R85. [Google Scholar]
- 170.Yaroslavsky L. Digital holography and digital image processing: principles, methods, algorithms. Springer Science & Business Media; 2003. [Google Scholar]
- 171.Samsheerali, Khare K, Joseph J. Single shot digital holography for real-time phase profiling of transparent objects. Recent Advances in Photonics (WRAP), 2013 Workshop on. 2013:1–3. [Google Scholar]
- 172.Hettwer A, Kranz J, Schwider J. Three channel phase-shifting interferometer using polarization-optics and a diffraction grating. Optical Engineering. 2000;39:960–966. [Google Scholar]
- 173.Millerd JE, Brock NJ, Hayes JB, North-Morris MB, Novak M, Wyant JC. Optical Science and Technology, the SPIE 49th Annual Meeting. International Society for Optics and Photonics; 2004. Pixelated phase-mask dynamic interferometer; pp. 304–314. [Google Scholar]
- 174.Novak M, Millerd J, Brock N, North-Morris M, Hayes J, Wyant J. Analysis of a micropolarizer array-based simultaneous phase-shifting interferometer. Appl Optics. 2005;44:6861–6868. doi: 10.1364/ao.44.006861. [DOI] [PubMed] [Google Scholar]
- 175.Martínez-León L, Araiza-E M, Javidi B, Andrés P, Climent V, Lancis J, Tajahuerce E. Single-shot digital holography by use of the fractional Talbot effect. Optics Express. 2009;17:12900–12909. doi: 10.1364/oe.17.012900. [DOI] [PubMed] [Google Scholar]
- 176.Cuche E, Marquet P, Depeursinge C. Spatial filtering for zero-order and twin-image elimination in digital off-axis holography. Appl Optics. 2000;39:4070–4075. doi: 10.1364/ao.39.004070. [DOI] [PubMed] [Google Scholar]
- 177.Yamaguchi I, Zhang T. Phase-shifting digital holography. Optics Letters. 1997;22:1268–1270. doi: 10.1364/ol.22.001268. [DOI] [PubMed] [Google Scholar]
- 178.Awatsuji Y, Fujii A, Kubota T, Matoba O. Parallel three-step phase-shifting digital holography. Appl Optics. 2006;45:2995–3002. doi: 10.1364/ao.45.002995. [DOI] [PubMed] [Google Scholar]
- 179.Awatsuji Y, Tahara T, Kaneko A, Koyama T, Nishio K, Ura S, Kubota T, Matoba O. Parallel two-step phase-shifting digital holography. Appl Optics. 2008;47:D183–D189. doi: 10.1364/ao.47.00d183. [DOI] [PubMed] [Google Scholar]
- 180.Nomura T, Murata S, Nitanai E, Numata T. Phase-shifting digital holography with a phase difference between orthogonal polarizations. Appl Optics. 2006;45:4873–4877. doi: 10.1364/ao.45.004873. [DOI] [PubMed] [Google Scholar]
- 181.Tahara T, Ito K, Fujii M, Kakue T, Shimozato Y, Awatsuji Y, Nishio K, Ura S, Kubota T, Matoba O. Experimental demonstration of parallel two-step phase-shifting digital holography. Optics Express. 2010;18:18975–18980. doi: 10.1364/OE.18.018975. [DOI] [PubMed] [Google Scholar]
- 182.Tahara T, Ito K, Kakue T, Fujii M, Shimozato Y, Awatsuji Y, Nishio K, Ura S, Kubota T, Matoba O. Parallel phase-shifting digital holographic microscopy. Biomed Opt Express. 2010;1:610–616. doi: 10.1364/BOE.1.000610. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 183.Winthrop JT, Worthington CR. Theory of Fresnel Images. I. Plane Periodic Objects in Monochromatic Light. J Opt Soc Am. 1965;55:373–380. [Google Scholar]
- 184.Leger JR, Swanson GJ. Efficient array illuminator using binary-optics phase plates at fractional-Talbot planes. Optics Letters. 1990;15:288–290. doi: 10.1364/ol.15.000288. [DOI] [PubMed] [Google Scholar]
- 185.Araiza-Esquivel MA, Martínez-León L, Javidi B, Andrés P, Lancis J, Tajahuerce E. Single-shot color digital holography based on the fractional Talbot effect. Appl Optics. 2011;50:B96–B101. doi: 10.1364/AO.50.000B96. [DOI] [PubMed] [Google Scholar]
- 186.Nomura T, Imbe M. Single-exposure phase-shifting digital holography using a random-phase reference wave. Optics Letters. 2010;35:2281–2283. doi: 10.1364/OL.35.002281. [DOI] [PubMed] [Google Scholar]
- 187.Imbe M, Nomura T. Single-exposure phase-shifting digital holography using a random-complex-amplitude encoded reference wave. Appl Optics. 2013;52:A161–A166. doi: 10.1364/AO.52.00A161. [DOI] [PubMed] [Google Scholar]
- 188.Imbe M, Nomura T. Study of reference waves in single-exposure generalized phase-shifting digital holography. Appl Optics. 2013;52:4097–4102. doi: 10.1364/AO.52.004097. [DOI] [PubMed] [Google Scholar]
- 189.Lin M, Nitta K, Matoba O, Awatsuji Y. Parallel phase-shifting digital holography with adaptive function using phase-mode spatial light modulator. Appl Optics. 2012;51:2633–2637. doi: 10.1364/AO.51.002633. [DOI] [PubMed] [Google Scholar]
- 190.Yaqoob Z, Wu J, Yang C. Spectral domain optical coherence tomography: a better OCT imaging strategy. Biotechniques. 2005;39:S6. doi: 10.2144/000112090. [DOI] [PubMed] [Google Scholar]
- 191.Lin PC, Sun P-C, Zhu L, Fainman Y. Single-shot depth-section imaging through chromatic slit-scan confocal microscopy. Appl Optics. 1998;37:6764–6770. doi: 10.1364/ao.37.006764. [DOI] [PubMed] [Google Scholar]
- 192.Matlis NH, Axley A, Leemans WP. Single-shot ultrafast tomographic imaging by spectral multiplexing. Nat Commun. 2012;3:1111. doi: 10.1038/ncomms2120. [DOI] [PubMed] [Google Scholar]
- 193.de Groot M, Evans CL, de Boer JF. Self-interference fluorescence microscopy: three dimensional fluorescence imaging without depth scanning. Opt. Express. 2012;20:15253–15262. doi: 10.1364/OE.20.015253. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 194.Stern A, Javidi B. Analysis of practical sampling and reconstruction from Fresnel fields. Optical Engineering. 2004;43:239–250. [Google Scholar]
- 195.Kelly DP, Hennelly BM, Pandey N, Naughton TJ, Rhodes WT. Resolution limits in practical digital holographic systems. Optical Engineering. 2009;48:095801. [Google Scholar]
- 196.Picart P, Leval J. General theoretical formulation of image formation in digital Fresnel holography. Journal of the Optical Society of America A. 2008;25:1744–1761. doi: 10.1364/josaa.25.001744. [DOI] [PubMed] [Google Scholar]
- 197.DRS Technologies. IMACON 200 high speed camera. DRS Technologies Products. [Google Scholar]
- 198.El-Desouki M, Deen MJ, Fang QY, Liu L, Tse F, Armstrong D. CMOS Image Sensors for High Speed Applications. Sensors-Basel. 2009;9:430–444. doi: 10.3390/s90100430. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 199.Li Z, Zgadzaj R, Wang X, Chang Y-Y, Downer MC. Single-shot tomographic movies of evolving light-velocity objects. Nat Commun. 2014;5 doi: 10.1038/ncomms4085. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 200.Heshmat B, Satat G, Barsi C, Raskar R. Single-shot ultrafast imaging using parallax-free alignment with a tilted lenslet array. CLEO: Science and Innovations. 2014:STu3E. 7. [Google Scholar]
- 201.Bub G, Tecza M, Helmes M, Lee P, Kohl P. Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging. Nat Methods. 2010;7:209-U266. doi: 10.1038/nmeth.1429. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 202.Llull P, Liao X, Yuan X, Yang J, Kittle D, Carin L, Sapiro G, Brady DJ. Coded aperture compressive temporal imaging. Optics Express. 2013;21:10526–10545. doi: 10.1364/OE.21.010526. [DOI] [PubMed] [Google Scholar]
- 203.Fernandez-Cull C, Tyrrell BM, D'Onofrio R, Bolstad A, Lin J, Little JW, Blackwell M, Renzi M, Kelly M. SPIE Defense+ Security. International Society for Optics and Photonics; 2014. Smart pixel imaging with computational-imaging arrays; p. 90703D. [Google Scholar]
- 204.Shepard RH, Fernandez-Cull C, Raskar R, Shi B, Barsi C, Zhao H. Optical design and characterization of an advanced computational imaging system. SPIE Optics and Photonics for Information Processing VIII. 2014:92160A. [Google Scholar]
- 205.Le Blanc SP, Gaul EW, Matlis NH, Rundquist A, Downer MC. Single-shot measurement of temporal phase shifts by frequency-domain holography. Optics Letters. 2000;25:764–766. doi: 10.1364/ol.25.000764. [DOI] [PubMed] [Google Scholar]
- 206.Li Z, Zgadzaj R, Wang X, Reed S, Dong P, Downer MC. Frequency-domain streak camera for ultrafast imaging of evolving light-velocity objects. Optics Letters. 2010;35:4087–4089. doi: 10.1364/OL.35.004087. [DOI] [PubMed] [Google Scholar]
- 207.Herman GT. Fundamentals of computerized tomography : image reconstruction from projections. 2nd ed. New York: Springer, Dordrecht; 2009. [Google Scholar]
- 208.Garcia JP, Dereniak EL. Mixed-expectation image-reconstruction technique. Appl Optics. 1999;38:3745–3748. doi: 10.1364/ao.38.003745. [DOI] [PubMed] [Google Scholar]
- 209.Gao L, Liang J, Li C, Wang LV. Single-shot compressed ultrafast photography at one hundred billion frames per second. Nature. 2014;516:74–77. doi: 10.1038/nature14005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 210.Hamamatsu. Guide to Streak Cameras. Hamamatsu Photonics Products Hamamatsu City. 2002 [Google Scholar]
- 211.Agrawal A, Veeraraghavan A, Raskar R. Computer Graphics Forum. Wiley Online Library; 2010. Reinterpretable Imager: Towards Variable Post-Capture Space, Angle and Time Resolution in Photography; pp. 763–772. [Google Scholar]
- 212.Eldar YC, Kutyniok G. Compressed sensing: theory and applications. Cambridge University Press; 2012. [Google Scholar]
- 213.Liao X, Li H, Carin L. Generalized Alternating Projection for Weighted-2,1 Minimization with Applications to Model-Based Compressive Sensing. SIAM Journal on Imaging Sciences. 2014;7:797–823. [Google Scholar]
- 214.Liu D, Gu J, Hitomi Y, Gupta M, Mitsunaga T, Nayar S. Efficient Space-Time Sampling with Pixel-wise Coded Exposure for High Speed Imaging. 2013 doi: 10.1109/TPAMI.2013.129. [DOI] [PubMed] [Google Scholar]
- 215.Hale ET, Yin W, Zhang Y. CAAM TR07-07. Rice University; 2007. A fixed-point continuation method for l1-regularized minimization with applications to compressed sensing. [Google Scholar]
- 216.Holloway J, Sankaranarayanan AC, Veeraraghavan A, Tambe S. Computational Photography (ICCP), 2012 IEEE International Conference on. IEEE; 2012. Flutter shutter video camera for compressive sensing of videos; pp. 1–9. [Google Scholar]
- 217.Berry HG, Gabrielse G, Livingston AE. Measurement of the Stokes parameters of light. Appl Optics. 1977;16:3200–3205. doi: 10.1364/AO.16.003200. [DOI] [PubMed] [Google Scholar]
- 218.Oliva E. Wedged double Wollaston, a device for single shot polarimetric measurements. Astronomy and Astrophysics Supplement Series. 1997;123:589–592. [Google Scholar]
- 219.Gruev V, Perkins R, York T. CCD polarization imaging sensor with aluminum nanowire optical filters. Opt. Express. 2010;18:19087–19094. doi: 10.1364/OE.18.019087. [DOI] [PubMed] [Google Scholar]
- 220.York T, Perkins R, Gruev V. Live demonstration: Material detection via an integrated polarization imager; Circuits and Systems (ISCAS), 2011 IEEE International Symposium on; 2011. pp. 1990–1990. [Google Scholar]
- 221.Bartlett BD, Rodriguez MD. SPIE Defense, Security, and Sensing. International Society for Optics and Photonics; 2013. Snapshot spectral and polarimetric imaging; target identification with multispectral video; p. 87430R. [Google Scholar]
- 222.Mu T, Zhang C, Li Q, Wei Y, Chen Q, Jia C. International Symposium on Optoelectronic Technology and Application 2014. International Society for Optics and Photonics; 2014. Snapshot full-Stokes imaging spectropolarimetry based on division-of-aperture polarimetry and integral-field spectroscopy; p. 92980D. [Google Scholar]
- 223.Nordin GP, Meier JT, Deguzman PC, Jones MW. Micropolarizer array for infrared imaging polarimetry. Journal of the Optical Society of America A. 1999;16:1168–1174. [Google Scholar]
- 224.Chun CS, Fleming DL, Torok E. SPIE's International Symposium on Optical Engineering and Photonics in Aerospace Sensing. International Society for Optics and Photonics; 1994. Polarization-sensitive thermal imaging; pp. 275–286. [Google Scholar]
- 225.Brock N, Kimbrough BT, Millerd JE. SPIE Optical Engineering+ Applications. International Society for Optics and Photonics; 2011. A pixelated micropolarizer-based camera for instantaneous interferometric measurements. 81600W-81600W-81609. [Google Scholar]
- 226.Meng X, Li J, Liu D, Zhu R. Fourier transform imaging spectropolarimeter using simultaneous polarization modulation. Optics Letters. 2013;38:778–780. doi: 10.1364/OL.38.000778. [DOI] [PubMed] [Google Scholar]
- 227.Meng X, Li J, Xu T, Liu D, Zhu R. High throughput full Stokes Fourier transform imaging spectropolarimetry. Optics Express. 2013;21:32071–32085. doi: 10.1364/OE.21.032071. [DOI] [PubMed] [Google Scholar]
- 228.Takeda M, Ina H, Kobayashi S. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J Opt Soc Am. 1982;72:156–160. [Google Scholar]
- 229.Oka K, Saito N. Infrared Detectors and Focal Plane Arrays VIII. San Diego, California, USA: SPIE; 2006. Snapshot complete imaging polarimeter using Savart plates; p. 629508. [Google Scholar]
- 230.Luo H, Oka K, DeHoog E, Kudenov M, Schiewgerling J, Dereniak EL. Compact and miniature snapshot imaging polarimeter. Appl Optics. 2008;47:4413–4417. doi: 10.1364/ao.47.004413. [DOI] [PubMed] [Google Scholar]
- 231.DeHoog E, Luo H, Oka K, Dereniak E, Schwiegerling J. Snapshot polarimeter fundus camera. Appl Optics. 2009;48:1663–1667. doi: 10.1364/ao.48.001663. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 232.Kudenov MW, Escuti MJ, Dereniak EL, Oka K. White-light channeled imaging polarimeter using broadband polarization gratings. Appl Optics. 2011;50:2283–2293. doi: 10.1364/AO.50.002283. [DOI] [PubMed] [Google Scholar]
- 233.Crawford GP, Eakin JN, Radcliffe MD, Callan-Jones A, Pelcovits RA. Liquid-crystal diffraction gratings using polarization holography alignment techniques. J Appl Phys. 2005;98:123102. [Google Scholar]
- 234.Komanduri RK, Jones WM, Oh C, Escuti MJ. Polarization-independent modulation for projection displays using small-period LC polarization gratings. Journal of the Society for Information Display. 2007;15:589–594. [Google Scholar]
- 235.Packham C, Escuti M, Ginn J, Oh C, Quijano I, Boreman G. Polarization gratings: A novel polarimetric component for astronomical instruments. Publications of the Astronomical Society of the Pacific. 2010;122:1471–1482. [Google Scholar]
- 236.Kudenov MW, Escuti MJ, Hagen N, Dereniak EL, Oka K. Snapshot imaging Mueller matrix polarimeter using polarization gratings. Optics Letters. 2012;37:1367–1369. doi: 10.1364/OL.37.001367. [DOI] [PubMed] [Google Scholar]
- 237.Dereniak EL. Fifty Years of Optical Sciences at The University of Arizona. San Diego, California, United States: SPIE; 2014. From the outside looking in: developing snapshot imaging spectro-polarimeters; p. 91860L. [Google Scholar]
- 238.Oka K, Kato T. Spectroscopic polarimetry with a channeled spectrum. Optics Letters. 1999;24:1475–1477. doi: 10.1364/ol.24.001475. [DOI] [PubMed] [Google Scholar]
- 239.Craven-Jones J, Kudenov MW, Stapelbroek MG, Dereniak EL. Infrared hyperspectral imaging polarimeter using birefringent prisms. Appl. Opt. 2011;50:1170–1185. doi: 10.1364/AO.50.001170. [DOI] [PubMed] [Google Scholar]
- 240.Kudenov MW, Hagen NA, Dereniak EL, Gerhart GR. Fourier transform channeled spectropolarimetry in the MWIR. Optics Express. 2007;15:12792–12805. doi: 10.1364/oe.15.012792. [DOI] [PubMed] [Google Scholar]
- 241.Chan VC, Kudenov M, Liang C, Zhou P, Dereniak E. Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XXI. San Francisco, California, United States: 2014. Design and application of the snapshot hyperspectral imaging Fourier transform (SHIFT) spectropolarimeter for fluorescence imaging; p. 894903. [Google Scholar]
- 242.Sabatke D, Locke A, Dereniak EL, Descour M, Garcia J, Hamilton T, McMillan RW. Snapshot imaging spectropolarimeter. Optical Engineering. 2002;41:1048–1054. [Google Scholar]
- 243.Aumiller RW, Vanderlugt C, Dereniak EL, Sampson R, McMillan RW. SPIE Defense and Security Symposium. International Society for Optics and Photonics; 2008. Snapshot imaging spectropolarimetry in the visible and infrared; p. 69720D. [Google Scholar]
- 244.Hagen N. College of Optical Sciences. Tucson, AZ: The University of Arizona; 2007. Snapshot imaging spectropolarimetry. [Google Scholar]
- 245.Kester RT, Tkaczyk TS. Image mapped spectropolarimetry. US: US PATENT William Marsh Rice University; 2011. [Google Scholar]
- 246.Kim J, Escuti MJ. Optical Engineering+ Applications. International Society for Optics and Photonics; 2008. Snapshot imaging spectropolarimeter utilizing polarization gratings; p. 708603. [Google Scholar]
- 247.Kim J, Escuti MJ. SPIE Defense, Security, and Sensing. International Society for Optics and Photonics; 2010. Demonstration of a polarization grating imaging spectropolarimeter (pgis) p. 767208. [Google Scholar]
- 248.Javidi B, Mahalanobis A, Xiao X, Rivenson Y, Horisaki R, Stern A, Latorre-Carmona P, Martínez-Corral M, Pla F, Tanida J. SPIE Security+ Defence. International Society for Optics and Photonics; 2013. Multi-dimensional compressive imaging; p. 889904. [Google Scholar]
- 249.Horisaki R, Xiao X, Tanida J, Javidi B. Feasibility study for compressive multi-dimensional integral imaging. Optics Express. 2013;21:4263–4279. doi: 10.1364/OE.21.004263. [DOI] [PubMed] [Google Scholar]
- 250.Fellgett PB. The Theory of Infrared Sensitivities and Its Application to Investigations of Stellar Radiation in the Near Infra-red. 1951 [Google Scholar]
- 251.Dixit R, Cyr R. Cell damage and reactive oxygen species production induced by fluorescence microscopy: effect on mitosis and guidelines for non-invasive fluorescence microscopy. Plant J. 2003;36:280–290. doi: 10.1046/j.1365-313x.2003.01868.x. [DOI] [PubMed] [Google Scholar]