Abstract
A snapshot 3-Dimensional Optical Coherence Tomography system was developed using Image Mapping Spectrometry. This system can provide depth information (Z) at different spatial positions (XY) within one camera integration time, potentially reducing motion artifact and enhancing throughput. The current (x,y,λ) datacube of (85×356×117) provides a 3D visualization of the sample with 400 μm imaging depth and 13.4 μm transverse resolution. An axial resolution of 16.0 μm can also be achieved in this proof-of-concept system. We present an analysis of the theoretical constraints which will guide development of future systems with increased imaging depth and improved axial and lateral resolutions.
OCIS codes: (110.4500) Optical coherence tomography, (170.3880) Medical and biological imaging
1. Introduction
Optical Coherence Tomography (OCT) is an established interferometry-based technique for volumetric tissue imaging with micrometer resolution, best known in many medical applications such as ophthalmologic imaging and endoscopy [1]. Several clinically established examples include retinal imaging to detect glaucoma and age-related macular degeneration, or cardiovascular imaging when employed with a catheter [2, 3]. OCT's unique capability to obtain 3-dimensional (3D) images of tissue microstructure within a scattering medium is useful when high-resolution, sub-surface information is required for disease diagnosis and treatment.
Although Fourier-Domain OCT (FD-OCT) is now firmly established and widely used, both spectral-domain and swept-source/optical frequency domain imaging embodiments still require scanning elements. Moving parts can limit the system's compactness, which is an important factor in systems miniaturized for endoscopic applications, and can introduce motion artifacts. The artifacts caused by movements and vibrations of the sample or of the scanning mechanism itself can result in blurred or non-continuous images, and potentially inaccurate clinical interpretation [4]. This effect is worsened when the samples are dynamic objects [5, 6]. Snapshot imaging modalities capture light in parallel instead of raster scanning a focused beam, potentially allowing imaging with reduced illumination power or increased frame rate [7]. Efforts to reduce the number of scanning elements have led to line-illumination [8, 9] and full-field [10] approaches in OCT. The former technique images a line on the sample and reference mirror, and thus requires only one scanning axis to obtain a 3D structure [11–13]. Full-Field OCT as exemplified by AC Boccara et al. can provide real-time in vivo imaging without lateral scanning, albeit with acquisition of multiple phase-shifted images rather than a single shot [14]. Subhash et al. demonstrated a version of FF-OCT in which the requisite phase-stepped images are all captured in a single camera snapshot by distributing each image to a separate region of the image sensor [15, 16]. This method could, therefore, provide snapshot en face (XY) FF-OCT imaging at a single axial (Z) location; however, generation of a 3D volume required recording of multiple camera acquisitions. To the best of our knowledge, the IMS-OCT approach introduced here is the first implementation of OCT which can provide a complete 3D (XYZ) volume with a single camera snapshot.
Hyperspectral imaging methods capture spectral information at each spatial (XY) location in a 2D scene, but have traditionally required spectral or spatial scanning to acquire the full spectral datacube. Several snapshot hyperspectral imagers have been developed and commercialized by different research groups and companies. Among the prominent techniques are the Computed Tomography Imaging Spectrometer (CTIS) [17, 18], the Coded Aperture Snapshot Spectral Imager (CASSI) [19, 20], the Image-Replicating Imaging Spectrometer (IRIS) [21], the HyperPixel Array™ Imager (Bodkin Design & Engineering, LLC) [22], HyperVideo (Opto-Knowledge Systems, Inc.) [23] and the IMS, which will be discussed in Section 2. CTIS and CASSI require extensive computations which slow down the acquisition and data reconstruction, and generate computational artifacts, while IRIS has low light throughput and is limited by its prism [24]. Meanwhile, the HyperPixel Array, HyperVideo and IMS produce direct spatial and spectral imaging by separating an image into spatially different zones without any data reconstruction. Due to its intrinsic pupil geometry, the HyperPixel Array Imager is limited in its number of spectral samples. As a result, this technique is unsuitable for OCT, which requires high spectral resolution to obtain clinically significant imaging depths. The second technique, HyperVideo, can provide a higher spectral resolution which is more desirable for OCT applications. However, since this technique relies on a specially designed fiber bundle, the spatial sampling is directly limited by the number of elements in the bundle.
We previously developed a snapshot hyperspectral imaging platform based on principles of image mapping/slicing spectrometry (IMS) [25]. Here, we report on the development and use of IMS to acquire a full 3D OCT volume in a single snapshot image capture. The lateral (XY) dimension is acquired by use of wide-field Koehler illumination, while depth (Z) information is encoded in the interference fringe pattern captured by the IMS system's spectral (λ) dimension. To the best of our knowledge, this is the first demonstration of snapshot 3D-OCT imaging using a hyperspectral imaging technique to provide volumetric data. This system also has the capability of increasing spectral sampling through system redesign.
2. Principles
Different from standard scanning FD-OCT, in which a beam is focused to a single point at the sample, this snapshot 3D OCT system uses a full-field configuration with Koehler illumination, in a similar fashion to the full-field OCT technique [10]. Since the spatial and spectral information of a full-field image cannot be successfully extracted with a 1-dimensional linear array-based spectrometer, our system relays an image of the overlapping sample and reference beams to the IMS. IMS is a recently developed hyperspectral imaging modality which can map 3D information (x,y,λ) onto a 2D detector for acquisition in a single camera integration [25]. IMS provides a simple and direct approach for hyperspectral imaging, enabled by advances in large format detectors and the development of a component termed the image mapper, which consists of multiple facets with different 2D tilt angles. By slicing the large image into discrete strips and regrouping these strips with void space in between them, the mapper creates an equally spaced pupil array for dispersion later in the optical train. Essentially, the image mapper can be considered an original way to downscale many slit spectrometers into one compact system, recorded by a large-format detector. Thus, no complicated scanning mechanism or computations are required [24]. Adapting IMS to the unique requirements of OCT, however, requires redesign of previous IMS modalities. This new concept for snapshot OCT requires the IMS system to perform high spectral sampling within a narrow bandwidth (over 100 spectral bins within a bandwidth of 50–150 nm in the red/near-infrared region), in contrast to our earlier IMS systems which achieved lower spectral sampling (60 spectral bins across the entire visible range) [24].
Fig. 1.

The concept of combining Full-Field OCT (FF-OCT) and IMS systems to develop snapshot 3D-OCT. A low coherence source travels to both sample and reference arms. Back-scattered light creates interference and is fed into the IMS system. The mapper slices the image and regroups different regions into separate pupils. A large camera captures spectral and spatial data imaged by a lenslet array.
3. System Design
3.1. Interferometry Arm
In the interferometry setup used here [Fig. 2], a spatially incoherent LED source (λ=633 nm, FWHM=13.5 nm) is attached to an engineered diffuser for source pattern removal. The diverging beam is collimated by a condenser lens (f=40 mm). An iris is placed next to the collimator for field of view (FOV) control. Koehler illumination is established with the combination of lens L2 (f=75 mm) and a microscope objective for full-field imaging. The Michelson-type interferometry objective (Zygo 2.5×, NA=0.074, WD=10.3 mm) has a built-in reference arm to minimize alignment variations between the two arms. A 300 mm focal length lens (L3) after the (50/50 non-polarizing) beam splitter BS1 collects the overlapping sample and reference arm beams and images them onto the mapper. The magnification (3.75×) created by the objective and lens L3 ensures the FOV covers the entire mapper surface (roughly a 1″×1″ square). 50% of the light exiting the interferometer is reflected at the second beam splitter (BS2) towards a reference camera (RC), which is used to capture the full-field surface image of the sample.
Fig. 2.
System layout. (a): System schematic. BS: beam splitter, IO: interferometry objective, LA: lenslet array, PP: pupil plane, RC: reference camera, RA: reference arm, SA: sample arm. (b): Complete system on optical table.
3.2. IMS Arm
Since general IMS modalities have been previously reported in the literature [24–28], only the key redesigns to meet OCT imaging criteria are highlighted here.
Different from other IMS configurations [25], this OCT-adapted IMS system has the mapper positioned perpendicular to the incoming beam to achieve a uniform focal plane across the mapper surface. This setup reduces sensitivity to artifacts like sub-field image vignetting and pupil plane distortions, thus simplifying the mapper facet angle calculations [29]. In addition, this configuration minimizes adjacent facet blockage: individual facets have different heights, which can potentially block parts of the light from other facets if the mapper is placed at an angle to the incoming beam.
Each facet deflects light at a different angle toward the collecting lens L4 (f=80 mm). Lens L4 organizes the high-NA incoming beams into different pupils, with the specific destination pupil depending on the mapper's facet tilts. A beam expander consisting of two lenses, a 2″ diameter, 50 mm focal length lens (L5) and a 3″ diameter, 200 mm focal length lens (L6), is used to match the pupil array size to the image sensor dimensions without clipping of the large array.
In previous IMS systems, a single prism (ZF6, 10°) was employed to disperse light spanning a wide spectrum (450–650 nm) into 60 spectral bins [24]. However, due to the requirement of much higher spectral resolution and narrower spectral bandwidth for FD-OCT, we use a ruled diffraction grating (300 lines/mm) for greater dispersion. The dispersed array of beams carrying spatial and spectral information in two directions is simultaneously mapped onto a large-format CCD camera (Apogee Alta U16M, 16 MPxl, 9 μm square pixel size) by a lenslet array with adjustable focal length. This array set has two plates, each containing 25 lenslets, 6.25 mm in diameter, to create telephoto lens combinations. The lenslet array's geometry is designed to match the dispersion angle from the diffraction grating.
4. Image mapper development
4.1. Mapper fabrication method
Fabricated in-house, the image mapper is made of high purity aluminum (5N, 99.999%) for high malleability and reflectivity. The earlier mappers used in IMS were fabricated using a raster-fly cutting technique on a four-axis Nanotech Ultra Precision milling machine [24]. Here we used a ruling technique which has recently been shown to exhibit several advantages over raster-fly cutting [30].
The raster-fly cutting process is significantly slower than ruling, and it creates a large inconsistency in facet widths. In the ruling technique, the tool moves into the substrate from one side with a predefined cutting depth, gradually scooping the raw material out while translating across the substrate from left to right, as shown in Fig. 3(a). This process creates a clean, highly uniform, reflecting surface as one thin film of aluminum is removed on each tool pass. The final surface roughness in ruling is under 10 nm, comparable with raster-fly cutting [28]. The included angle of the diamond tool can potentially damage adjacent facets during the ruling process. This challenge can be overcome by utilizing a tool with a narrower included angle, thus maintaining the uniformity of the facet width. By using a tool with a 5° included angle, the facet width variability was measured to be within 6.7%, in good agreement with the quality of previous mappers [27].
Fig. 3.
Image mapper fabrication. (a): Mapper in fabrication. The substrate is mounted on the Nanotech milling machine. Two tools are placed on the spindle prior to cutting facets. (b): Reflection of a ruler's straight edge on the finished mapper. (c): Mapper viewed from the top. Different facet tilts are shown as variations in depth of cuts. (d): Examination of the mapper's facets with a white-light interferometer. (e): Mapper viewed from the front. (f): Enlarged section of the mapper viewed from the front, showing finer cuts for individual facets.
For fabrication, the aluminum substrate is mounted on a stage which can be translated along the machine's y axis as shown in Fig. 3(a). To obtain sub-micron accuracy in tilt angles and surface flatness across each facet, two tools are mounted on the machine's spindle. For the initial rough cuts, a carbide tool creates seventeen 1.5 mm wide passes across the 1″ square substrate by maintaining the carbide tip stationary and orthogonal to the mapper substrate. During that time, the mapper substrate moves along the y axis with depth (x) values varying along the pathway. For the fine cuts, the machine spindle creates pre-programmed tilts (x-tilts) before the 75-μm diamond tool cuts into the substrate to create 20 uniform 75-μm wide facets within each 1.5 mm wide carbide-tool pass. In a similar fashion to the rough cut, the mapper substrate travels across the stationary and tilted diamond tip with very fine cutting depths, ranging from 20 μm down to 2 μm in multiple iterations.
While the IMS-OCT mapper is designed to have 300 facets, each 75 μm in width, the actual fabricated component includes 40 extra facets as a safety factor in fabrication, and also to enable testing of the system in alternative configurations. As a result, some sub-fields contain images from 3 facets while others have 4. Divided into identical "blocks" of 100 facets, the entire mapper with 17 rough-cut passes thus comprises 3.4 blocks. Each facet in a single block has a unique two-dimensional angle to deflect light towards the collecting lens. Figure 3(b) shows a ruler's straight edge being reflected as a zig-zag pattern on the 17 rough-cut passes; a few of the individual facets can be seen in the white-light interferometry image in Fig. 3(d).
4.2. Mapper design and pupil distribution
Each block of 100 facets is tilted so that light from the interferometer is reflected into 35 sub-pupils [Fig. 4]. In this first-generation design, we collect light from only every 4th facet, i.e. facets 1, 5, 9, ... [Fig. 4(b–c)]; light from the remaining facets is discarded outside the lenslet array in order to maintain enough void space between pupils for subsequent dispersion. This results in 85 out of the 340 facets being used to direct light from the OCT interferometer to the camera. Starting from one end of the mapper, the first 20 facets share the same y-tilt and therefore redirect light onto the same horizontal row at the lenslet array. Facets spaced 20 steps apart (e.g. facets 1 and 21 in Fig. 4(c)) have the same x-tilt and thus redirect light to a common column. This geometry is repeated across the entire surface of the mapper such that the corresponding facets within each block (facets 1, 101, 201, ...) have identical x and y tilts, and therefore direct light to the same sub-pupil. Since facets 1, 101 and 201 are 100 facets × 75 μm = 7.5 mm apart at the mapper, this distance between the images of facets 1 and 101 at the image plane creates the necessary empty space to be filled in with later dispersion from the diffraction grating.
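The facet-indexing rules above can be sketched as a small mapping from facet number to sub-pupil position (a minimal illustration only; the function name and the 1-based indexing convention are ours, not part of the system software):

```python
def facet_to_pupil(facet):
    """Map a 1-based facet index to its (row, col, used) tuple.

    Rules from the mapper design described in the text:
      - the tilt pattern repeats every block of 100 facets
      - 20 consecutive facets share a y-tilt (same row)
      - facets 20 apart share an x-tilt (same column)
      - only every 4th facet (1, 5, 9, ...) is collected
    """
    i = (facet - 1) % 100          # position within one block of 100
    row = i // 20                  # 5 rows of y-tilt per block
    col = i % 20                   # 20 columns of x-tilt per block
    used = (facet - 1) % 4 == 0    # every 4th facet reaches the camera
    return row, col, used
```

With this convention, facets 1, 101 and 201 land on the same sub-pupil, facets 1 and 21 share a column, and exactly 85 of the 340 facets are collected, matching the counts in the text.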
Fig. 4.
Mapper facet and pupil distribution. (a): Facet tilt directions relative to the mapper. (b): Pupil distribution from one block of the mapper (100 facets). Facets whose numbers are not shown are discarded in the leftmost and rightmost columns. (c): Grouping and order of facets. Facets with the same y-tilt correspond to light grouped in the same row, and those with the same x-tilt correspond to the same columns. Thus two facets which are 100 facets apart have the exact same x and y tilts.
There are two categories of cross-talk occurring in the system. The first arises from diffraction due to the 75-μm wide mapper facets, leading to light leaking from one sub-pupil to neighboring sub-pupils. We term this effect spatial cross-talk [28]. With the use of a pupil mask in the pupil array plane, the spatial cross-talk level was reported to be 6% [26]. This level strongly depends on the surface quality of the mapper's facets and the sub-pupil separation. When the raster-fly cutting technique was previously used, the mapper facets were not perfectly flat, but had an optical power which broadened the beam and increased light leakage [28]. The new ruling technique used in this paper achieved facet flatness in the sub-micron range, ensuring that cross-talk caused by facet non-uniformity is minimized. The second type of cross-talk (spectral) arises from the dispersion of individual sub-images within the tightly-packed array, with the red end of one spectrum potentially overlapping with the blue end of the next. To minimize spectral cross-talk, a band-pass filter (OD6) was inserted into the system, reducing spectral leakage to the 0.001% range.
5. Data processing
5.1. Data acquisition
The Apogee camera is connected to a laptop via a USB cable and controlled with the LabVIEW 2009 environment. For alignment and other fast acquisition purposes, the camera can bin images prior to acquisition and/or capture 12-bit images. Otherwise, operating in snapshot mode, the camera produces a full-frame 16-bit image containing 4096×4096 pixels. The image is stored and opened in Matlab for further data processing. All images shown in this manuscript were acquired with an exposure time of 125 ms.
5.2. Calibration
Unlike many other hyperspectral modalities, IMS does not demand extensive computation to generate spectrally-resolved images [31]. Post-processing for IMS-OCT includes one-time data extraction and alignment, followed by standard spectral-domain OCT calibration.
IMS calibration
IMS calibration focuses on rearranging the sub-images into their correct positions, including (1) extracting the sub-images from the raw 2D image, (2) calibrating the dispersed spectra for every sub-image, (3) aligning and correcting all sub-images, and (4) performing flat-field correction. The flowchart of the calibration steps is shown in Fig. 5.
Fig. 5.
Initial calibration steps. This one-time calibration series is performed to convert the raw 2D image into an (x,y,λ) datacube for subsequent image acquisitions.
As light from the 25 sub-pupils is recorded by the CCD sensor, the raw image includes 85 vertically oriented sub-images of the mapper facets, each of which is horizontally dispersed by the diffraction grating. Initial data processing starts with subtracting the background intensity, recorded with signals from both reference and sample arms blocked, to remove stray light; individual sub-images are then extracted to create a 3D matrix of (x,y,n), where x is the transverse data obtained by stacking multiple sub-images, y is the image along the length of each facet, and n is the dispersed spectral information in pixels [Fig. 5(a)]. All of the blank space which separates the sub-images after dispersion is discarded. Since it is entirely determined by the design of the mapper facet tilts, the order of these sub-images is easily redistributed.
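This extraction step can be sketched in a few lines (shown in Python rather than the Matlab used in the paper; the mask layout, with each sub-image occupying a column block starting at row 0, and all names are hypothetical):

```python
import numpy as np

def extract_datacube(raw, background, col_starts, facet_len, n_spectral):
    """Stack dispersed sub-images from the raw CCD frame into an
    (x, y, n) matrix after background subtraction.

    raw, background : 2D arrays from the large-format sensor
    col_starts      : left column of each sub-image, from the one-time
                      calibration mask (hypothetical layout here)
    facet_len, n_spectral : pixels along the facet length / dispersion axis
    """
    frame = raw.astype(float) - background        # remove stray light
    cube = np.empty((len(col_starts), facet_len, n_spectral))
    for i, c0 in enumerate(col_starts):
        # one sub-image: facet_len rows tall, n_spectral columns wide
        cube[i] = frame[:facet_len, c0:c0 + n_spectral]
    return cube                                   # blank gaps are discarded
```

In the real system the same loop would run over the 85 sub-images to build the (85, 356, n) matrix described above.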
After the 85 sub-images are rearranged and corrected to obtain the transverse full-field image, spectral calibration is required. Spectra are recorded by both the IMS-OCT system and an Ocean Optics spectrometer, which serves as a calibrated reference channel. Since dispersion from the IMS grating is approximately linear in wavelength, and the LED light source has a simple Gaussian shape, the wavelength-pixel relationship can be interpolated based on the spectrum measured from the reference spectrometer. This calibration step generates the (x,y,λ) datacube which is ready for OCT calibration [Fig. 5(b)].
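One simple way to realize this interpolation, under the stated assumptions (near-linear dispersion, single-peaked Gaussian source), is to match the centroid and width of the sub-image spectrum to the reference trace. This is our own minimal sketch, not the paper's actual calibration code:

```python
import numpy as np

def pixel_to_wavelength(pix_spectrum, ref_wl, ref_spectrum):
    """Estimate a linear pixel -> wavelength mapping by matching the
    intensity-weighted centroid and RMS width of the sub-image spectrum
    to those of the reference spectrometer trace."""
    pix = np.arange(len(pix_spectrum), dtype=float)
    # centroid and RMS width in pixel units
    c_pix = np.average(pix, weights=pix_spectrum)
    w_pix = np.sqrt(np.average((pix - c_pix) ** 2, weights=pix_spectrum))
    # same moments in wavelength units from the reference channel
    c_wl = np.average(ref_wl, weights=ref_spectrum)
    w_wl = np.sqrt(np.average((ref_wl - c_wl) ** 2, weights=ref_spectrum))
    slope = w_wl / w_pix                  # nm per pixel
    return c_wl + slope * (pix - c_pix)   # wavelength assigned to each pixel
```

For an asymmetric source spectrum a least-squares fit against the reference trace would be more robust, but moment matching suffices for the Gaussian LED case.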
An image of a test object containing straight lines is used to vertically align the sub-images [Fig. 5(c)]. Vertical offset and magnification differences among individual sub-images are corrected with a linear approximation, i.e. disregarding the insignificant effects of distortion and magnification variation along one sub-image. Flat-field correction is carried out to compensate for uneven intensity, mostly caused by variation of the mapper facets' surfaces. Figure 5(d) shows a cross-sectional image at the center wavelength, extracted from the flat-field correction. This non-uniformity of the facets' reflectivity is used to compensate for the variations across the whole spectrum.
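The flat-field step reduces to a per-pixel division of the datacube by the normalized reflectivity map. A minimal sketch (names and the mean-normalization convention are ours):

```python
import numpy as np

def flat_field_correct(cube, flat):
    """Compensate uneven facet reflectivity across the datacube.

    cube : (x, y, lambda) datacube
    flat : (x, y) reflectivity map measured at the center wavelength,
           applied to all wavelengths as described in the text
    """
    gain = flat / flat.mean()               # relative facet reflectivity
    return cube / gain[:, :, np.newaxis]    # broadcast over wavelength
```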
OCT calibration
Given the estimated spectral values from the linear calibration mentioned above, the spectral bin–wavelength relationship is fitted to a polynomial for finer calibration [Fig. 6(d)]. The spectra are then zero-padded and interpolated so that they are evenly spaced in wavenumber (k) [Fig. 6(e)]. The DC (non-interferometric) component is removed from each set of recorded fringes by subtracting the spectrum obtained when the sample reflector is positioned far beyond the expected imaging depth, i.e. when fringes are not present. This method minimizes the effect of any spectral variations in the light path, effectively removing most of the DC component from the depth profile [Fig. 6(f)]. A Fourier transform of the resampled spectra generates the OCT axial scattering profile (A-line) for each individual spectral line [Fig. 6(g)]. The spectral phase obtained from an image of a simple reflector is used to iteratively adjust spectral values based on the process described by Mujat et al. [32]. The calculated nonlinearity in phase ϕ(k) is removed to compensate for errors in spectrometer calibration or dispersion mismatch between sample and reference arms [33, 34]. Since the full spectrum of the LED used here is relatively narrow (50 nm), dispersion mismatch effects are relatively insignificant. A flat mirror was mounted on an axial translation stage to record different sample positions for a one-time depth calibration. The corrected pixel-wavelength assignments and depth scale are then applied to all subsequent data sets. After the one-time calibration steps mentioned above, any raw image taken by the system can be readily processed for fast data reconstruction. A predefined mask extracts the sub-images, and wavenumber interpolation takes place with the known wavelength array. After conversion into Fourier space, the depth profiles of all spatial points can be quantitatively reconstructed and visualized.
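The core of this pipeline (DC subtraction, k-space resampling, Fourier transform) can be sketched for a single spectrum as follows. This is a simplified illustration in Python of the standard FD-OCT steps described above, omitting the iterative phase correction; names and the use of plain linear interpolation are our own choices:

```python
import numpy as np

def reconstruct_aline(fringe_spectrum, wavelengths, dc_spectrum, n_fft=512):
    """Turn one calibrated fringe spectrum into a depth profile (A-line).

    fringe_spectrum : spectrum with interference fringes
    wavelengths     : calibrated wavelength of each spectral pixel
    dc_spectrum     : fringe-free spectrum recorded with the reflector
                      far beyond the imaging depth
    """
    fringes = fringe_spectrum - dc_spectrum        # remove non-interferometric part
    k = 2.0 * np.pi / wavelengths                  # wavenumber of each sample
    k_uniform = np.linspace(k.min(), k.max(), n_fft)
    # k decreases as wavelength increases, so reverse for np.interp
    fringes_k = np.interp(k_uniform, k[::-1], fringes[::-1])
    depth_profile = np.abs(np.fft.fft(fringes_k))  # A-line magnitude
    return depth_profile[: n_fft // 2]             # keep positive depths only
```

Applying this function to every (x, y) spectrum of the calibrated datacube yields the (x, y, z) volume.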
Fig. 6.
OCT calibration steps. a: A segment of a raw sub-image with horizontal features from the sample and vertical interferometric fringes. b: One spectral cross-section taken from (a). c: Calibrated spectra corresponding to the raw image in (a). Spectra along the facets form a gradient from black (610 nm) to white (640 nm). d: The initial wavelength-pixel relationship is fitted to a third-order polynomial. e: The calibrated wavelength after zero-padding to 512 data points to prepare for depth reconstruction. f: A spectrum of interferometric fringes with DC components removed. g: Depth profile reconstructed from the fringes shown in (f). h: Relationship between wavelengths and the array indices. For a narrow spectral band such as that used here, this relationship is almost linear.
6. Experiments
6.1. Depth Assessment
Depth analysis is important in the assessment of axial resolution and depth range. Figure 7 shows data from a flat, reflective surface taken from the large 3D datacube at multiple depth positions; the sample was mounted on a translation stage for this calibration experiment.
Fig. 7.
Snapshot 3D-OCT system's depth assessment. a: Different depth positions of a flat, reflecting mirror mounted on a translation stage. b: Measured axial resolution from one representative transverse location. c: Relationship between peak pixel position and mirror physical position. Note that at positions around 400 μm, peak positions become undetectable, indicating the end of the imaging depth. d: Linear regression of the relationship between peak pixel position and mirror position.
Adjacent positions are 25.4 μm apart [Fig. 7(a)]. After zero-padding and phase linearization, the average axial resolution was measured to be 20.9 μm over the depth range [Fig. 7(b)]. An axial resolution of 16.0 μm can be obtained near the zero optical path difference (OPD) position. In addition, the axial position of each coherence peak is plotted against translation stage position in Fig. 7(c). This confirms the expected depth range of approximately 400 μm. The relationship between physical depth and pixel value is established and fitted to a linear equation [Fig. 7(d)]. The measured SNR for the coherence peak at a depth of 50 μm was 43 dB.
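The one-time depth calibration amounts to a linear fit of coherence-peak bin index against known mirror position. A sketch with hypothetical names (the example numbers below are illustrative, not the measured data):

```python
import numpy as np

def depth_scale(peak_bins, stage_positions_um):
    """Fit mirror position (um) against coherence-peak FFT-bin index,
    returning microns per bin and the zero-OPD offset. These two numbers
    are then applied to all subsequent data sets."""
    um_per_bin, offset_um = np.polyfit(peak_bins, stage_positions_um, 1)
    return um_per_bin, offset_um
```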
6.2. 3D Visualization
In this first-generation system, performance is evaluated by imaging a USAF resolution target with clear tape on the front surface to produce 3D structure. The raw image of 4096×4096 pixels can be seen in Fig. 8(a), while both the target's bars and interferometric fringes due to reflections at the clear tape can be observed in Fig. 8(b).
Fig. 8.
Simultaneous spatial and spectral visualization. a: Spatial features from the resolution target. b: Interferometric fringes caused by the resolution target. c: Interferometric fringes caused by the clear tape.
The current system provides a datacube of (85×356×117) from 85 facets, each 356 pixels in length, dispersed into 117 spectral pixels. The final image shown in Fig. 9 demonstrates a simple experiment in which 3D structure can be visualized after the calibration algorithms are applied. The result is shown in the open-source MicroView 3D Image Viewer (Parallax Innovations).
Fig. 9.
3D structure recorded in snapshot mode with the 3D-OCT system. a: Reconstructed structure of clear tape on a USAF target. b, c: Its XZ and YZ cross-sections. d: Transverse image from the reference camera.
Multiple surfaces can be observed along the depth in the 3D display as well as in the XZ and YZ cross-sections. Note that the dark bands on the 1st and 3rd surfaces from the right shown in Fig. 9(c) come from the resolution target's spatial features. The second surface from the right was created by the interference between the tape's two reflective surfaces, and thus indicates the tape's actual thickness. The bright DC component is left intact in Fig. 9 for illustration.
After interference fringes and the 3D datacube were recorded from reflective objects, we tested the system with a simple but more scattering sample. A US dime was placed at the image plane (photographed with a conventional camera in Fig. 10(a)), and the 3D shape of the ear on the dime was recorded and calibrated to obtain a 3D datacube of the same size as in the previous experiment. However, due to the mapper's design (see Section 4.2), the resolution along the x-axis is four times higher than that along the y-axis. To maintain uniform sampling across the FOV, x-axis binning was carried out, as seen in the composite transverse view in Fig. 10(b) and the transverse cross-sections at different depths of the ear in Fig. 10(c). Four of the 85 curvature profiles of the dime's surface from the 85 mapper facets are displayed in Fig. 10(d).
Fig. 10.
System evaluation with a simple 3D structural sample. a: A 2D image of a US dime taken with the reference camera. b: Corresponding transverse surface acquired with the snapshot 3D-OCT system. c: Transverse surfaces at different depths. d: Cross-sections along the depth range.
To investigate the potential for the IMS-OCT system to image biological samples, a 3D volume of a piece of onion was acquired. The power density at the sample was measured to be 3.1 mW/cm². Figure 11(a) shows the regular en face 2D image taken from the reference camera, while Figure 11(b) displays the reconstructed en face image obtained from the OCT system. Five transverse slices in the XY plane at various locations along the axial (Z) axis are shown in Figure 11(c), indicating different structures within the onion depth.
Fig. 11.
3D snapshot of a layer of onion placed on top of a highly scattering metal surface. a: Image of a layer of onion (bottom) on a metal surface (top) acquired with the reference camera. b: Transverse surface acquired with the snapshot 3D-OCT system. c: Representative transverse sections at different (z) depths.
7. Discussion
7.1. Resolutions, Imaging Depth and Camera Pixel Count
The depth range, axial resolution, and FOV for the system reported here were chosen to enable a first proof-of-concept demonstration of the IMS-OCT concept for 3D volumetric imaging. This setup is able to provide 85 sub-images in 25 sub-fields; each sub-image carries spatial features along one mapper facet's length as well as the interferometric fringes created from the reference mirror and sample. The lateral resolution (13.4 μm) and depth range (400 μm) meet the expected performance, while the average axial resolution of 20.9 μm is slightly larger than the expected value, mostly due to the broadening effect along the depth range in OCT. However, the measured axial resolution near the zero OPD position (16.0 μm), where the broadening effect is insignificant, meets the theoretical calculation of 15.9 μm.
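For reference, these quantities follow the standard FD-OCT relations (not stated explicitly above; here λ₀ is the center wavelength, Δλ the source bandwidth, δλ_s the spectral sampling interval, and n the sample refractive index):

```latex
% Theoretical axial resolution set by the source bandwidth
\delta z = \frac{2\ln 2}{\pi\, n}\,\frac{\lambda_0^{2}}{\Delta\lambda}
\qquad
% Maximum imaging depth set by the spectral sampling interval
z_{\max} = \frac{\lambda_0^{2}}{4\, n\, \delta\lambda_s}
```

The first expression, evaluated with the effective source spectrum, yields the 15.9 μm theoretical value quoted above; the second ties the ~400 μm depth range to the spectral sampling of the 117 bins.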
To illustrate the potential for future development and scaling of snapshot 3D-OCT, we investigated the fundamental relationships between these imaging parameters. Using a single image sensor (CCD or CMOS) to capture hyperspectral data (x,y,λ) requires a trade-off between pixels used for spatial and spectral resolution. When this datacube is converted from (x,y,λ) to (x,y,k) to (x,y,z) to obtain a 3D volume in this snapshot 3D-OCT, the original trade-off becomes one involving spatial points, axial range, and axial resolution. In conventional spectral-domain OCT, axial range is determined by the system's spectral resolution, while axial resolution is inversely related to the spectral bandwidth collected. Given a finite number of pixels in a SD-OCT line-scan camera, one can only increase axial range at the expense of axial resolution, and vice versa [35]. In snapshot 3D-OCT, the product of spatial and spectral pixels (Nx × Ny × Nλ) cannot exceed the total number of camera pixels. Figure 12 presents the relationship between the number of resolvable spatial points in each of the X and Y directions, imaging depth, and camera pixel count in IMS-OCT. This analysis assumes that the system accommodates sufficient spectral bandwidth to achieve 10 μm axial resolution, and that there are two axial pixels per axial resolution element (Nyquist's criterion is exactly met). The system described here uses a 16 MPxl camera, but collects light from only every fourth facet of the mapper (Section 4.2). This arrangement uses only 3.5 MPxl, with (85×356) spatial points and 117 spectral pixels. Binning consecutive groups of 4 pixels in the direction along the facet length allowed us to present the image data in Fig. 12 with an equal number of points in X and Y (85×85). Figure 12 illustrates how the use of all 16 MPxl would have enabled either deeper imaging or additional spatial points.
While theoretically feasible, use of all camera pixels would require a redesign of the IMS optical train. Our next-generation system aims to take advantage of 29 MPxl image sensors which are currently on the market. As illustrated in Fig. 12, this pixel count will allow imaging to depths of over 1 mm in tissue, with 256×256 lateral pixels.
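The pixel-budget argument above can be checked numerically. A minimal sketch under the same assumptions as Figure 12 (square lateral grid, 10 μm axial resolution, two axial pixels per resolution element); the function is our own, not the paper's analysis code:

```python
import numpy as np

def max_lateral_points(total_pixels, depth_um, axial_res_um=10.0):
    """Largest square lateral grid N such that N * N * N_lambda fits
    within the camera pixel budget, with N_lambda = 2 * depth / axial
    resolution (Nyquist exactly met)."""
    n_lambda = int(np.ceil(2.0 * depth_um / axial_res_um))
    return int(np.sqrt(total_pixels / n_lambda))
```

For a 29 MPxl sensor at 1 mm depth this gives a lateral grid larger than 256×256, consistent with the target stated above.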
Fig. 12.
Effect of camera pixel count on 3D datacube size, for a system operating at 10 μm axial resolution.
7.2. Optical improvements for next system generation
Since the focus of this manuscript was to provide a first proof-of-concept demonstration of IMS-OCT, the experimental setup was not fully optimized for light efficiency. An incoherent LED was used as the light source; it will be replaced by a superluminescent diode (SLD) in the next-generation system, increasing the illumination power across the full field of view. The two 50/50 beamsplitters (BS1 and BS2 in Fig. 2) simplified alignment of the system, but reduced overall throughput by 75%. These components will be replaced by polarizing beamsplitters and waveplates to circulate light from the source to the detector (via the sample) more efficiently. At the IMS stage, we are designing the next system with a different geometry, aiming to reach the 50–60% light efficiency achieved in previous IMS systems [27]. Reducing the overall number of surfaces in the light path will also reduce losses due to Fresnel reflections.
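The 75% figure follows directly from two 50/50 splits in series; a quick check (our own arithmetic, with an idealized lossless polarizing-beamsplitter alternative for comparison):

```python
# Two passes through 50/50 (non-polarizing) splits transmit
# 0.5 * 0.5 = 25% of the source light, i.e. a 75% throughput loss.
bs_throughput = 0.5 * 0.5

# A polarizing beamsplitter with waveplates instead routes (ideally)
# all of the returning light toward the detector; real components
# fall somewhat short of this lossless assumption.
pbs_throughput = 1.0

print(f"50/50 pair: {bs_throughput:.0%}, PBS + waveplates: {pbs_throughput:.0%}")
```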
With improved illumination and overall system throughput, future IMS-OCT systems can be expected to image with shorter exposure times than shown here (125 ms), with corresponding increases in frame rate. Our previously reported IMS systems demonstrated imaging in live biological tissue [27]; we therefore believe there are no system-related limitations for future live biological tissue imaging with IMS-OCT. Currently, for the 16 MPxl camera in this proof-of-concept system, it typically takes 8 s to transfer all acquired data to the computer via a USB port, and 9 s to generate a 3D (x,y,z) volume using Matlab on an Intel® Core™ 2 Duo processor.
In our future design, a standard near-infrared superluminescent diode will be employed as the light source. Although the current visible LED provides easy alignment, its narrow bandwidth sacrifices axial resolution, and its spatial incoherence tends to reduce fringe visibility. Another significant improvement will involve higher-density and more uniform sampling at the specimen. The current mapper collects light from only every fourth facet, discarding spatial information from the rest of the mapper to save space for dispersion. This arrangement leads to higher sampling along the facet length than across the facets, and requires vertical binning for uniform sampling, as shown in the previous experiment. This uneven sampling will be overcome in the high-performance system by redesigning the mapper facet geometry to utilize all the facets on the mapper. The beam expander in the IMS arm will be removed, and a new lenslet array design will be inserted. In addition to enhancing spectral sampling and imaging depth, removing the beam expander will minimize blurring and distortion in the corner sub-fields (Fig. 8).
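The vertical binning mentioned above amounts to averaging consecutive groups of four pixels along the facet-length axis. A minimal NumPy sketch, using the paper's (85×356×117) datacube dimensions but synthetic data:

```python
import numpy as np

# Illustrative (x, y, lambda) datacube with the paper's sampling.
cube = np.random.default_rng(1).random((85, 356, 117))

# Bin consecutive groups of 4 pixels along the facet-length axis (y) by
# averaging, equalizing the sampling density of the two lateral axes.
factor = 4
y_full = (cube.shape[1] // factor) * factor      # drop any remainder rows
binned = cube[:, :y_full, :].reshape(85, -1, factor, 117).mean(axis=2)
print(binned.shape)  # (85, 89, 117); cropping y to 85 gives the 85x85 grid
```

Averaging (rather than summing) keeps the intensity scale unchanged; summing would instead improve the signal-to-noise ratio per binned pixel at the cost of rescaling.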
8. Conclusion
In conclusion, this paper demonstrates a proof-of-concept 3D-OCT system capable of generating a 3D volumetric datacube in snapshot mode with simple calibration. The system captures a (85×356×117) datacube with the expected performance specifications. A non-scanning, snapshot 3D imaging modality may be capable of acquiring images with reduced motion artifacts, particularly in weakly scattering samples. A high-performance system is being developed for a deeper penetration of 1 mm and higher transverse and axial resolutions, to provide better-quality depth visualization.
Acknowledgments
This work is supported by the John S. Dunn Foundation Collaborative Research Award Program and the National Institutes of Health under grant R21 EB011598.
References and links
- 1. Adler D. C., Zhou C., Tsai T.-H., Schmitt J., Huang Q., Mashimo H., Fujimoto J. G., “Three-dimensional endomicroscopy of the human colon using optical coherence tomography,” Opt. Express 17, 784–796 (2009). doi:10.1364/OE.17.000784
- 2. Yi K., Mujat M., Park B. H., Sun W., Miller J. W., Seddon J. M., Young L. H., de Boer J. F., Chen T. C., “Spectral domain optical coherence tomography for quantitative evaluation of drusen and associated structural changes in non-neovascular age-related macular degeneration,” Br. J. Ophthalmol. 93, 176–181 (2009). doi:10.1136/bjo.2008.137356
- 3. Osiac E., Saftoiu A., Gheonea D. I., Mandrila I., Angelescu R., “Optical coherence tomography and Doppler optical coherence tomography in the gastrointestinal tract,” World J. Gastroenterol. 17, 15–20 (2011). doi:10.3748/wjg.v17.i1.15
- 4. de Kinkelder R., Kalkman J., Faber D. J., Schraa O., Kok P. H. B., Verbraak F. D., van Leeuwen T. G., “Heartbeat-induced axial motion artifacts in optical coherence tomography measurements of the retina,” Invest. Ophthalmol. Vis. Sci. 52, 3908–3913 (2011). doi:10.1167/iovs.10-6738
- 5. de Boer J. F., Cense B., Park B. H., Pierce M. C., Tearney G. J., Bouma B. E., “Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography,” Opt. Lett. 28, 2067–2069 (2003). doi:10.1364/OL.28.002067
- 6. Leitgeb R., Hitzenberger C., Fercher A., “Performance of Fourier domain vs. time domain optical coherence tomography,” Opt. Express 11, 889–894 (2003). doi:10.1364/OE.11.000889
- 7. Hagen N., Kester R. T., Gao L., Tkaczyk T. S., “Snapshot advantage: a review of the light collection improvement for parallel high-dimensional measurement systems,” Opt. Eng. 51 (2012). doi:10.1117/1.OE.51.11.111702
- 8. Chen Y., Aguirre A. D., Hsiung P.-L., Desai S., Herz P. R., Pedrosa M., Huang Q., Figueiredo M., Huang S.-W., Koski A., Schmitt J. M., Fujimoto J. G., Mashimo H., “Ultrahigh resolution optical coherence tomography of Barrett’s esophagus: preliminary descriptive clinical study correlating images with histology,” Endoscopy 39, 599–605 (2007). doi:10.1055/s-2007-966648
- 9. Nakamura Y., Makita S., Yamanari M., Itoh M., Yatagai T., Yasuno Y., “High-speed three-dimensional human retinal imaging by line-field spectral domain optical coherence tomography,” Opt. Express 15, 7103–7116 (2007). doi:10.1364/OE.15.007103
- 10. Dubois A., Moreau J., Boccara C., “Spectroscopic ultrahigh-resolution full-field optical coherence microscopy,” Opt. Express 16, 17082–17091 (2008). doi:10.1364/OE.16.017082
- 11. Grajciar B., Pircher M., Fercher A., Leitgeb R., “Parallel Fourier domain optical coherence tomography for in vivo measurement of the human eye,” Opt. Express 13, 1131–1137 (2005). doi:10.1364/OPEX.13.001131
- 12. Watanabe Y., Yamada K., Sato M., “Three-dimensional imaging by ultrahigh-speed axial-lateral parallel time domain optical coherence tomography,” Opt. Express 14, 5201–5209 (2006). doi:10.1364/OE.14.005201
- 13. Witte S., Baclayon M., Peterman E. J. G., Toonen R. F. G., Mansvelder H. D., Groot M. L., “Single-shot two-dimensional full-range optical coherence tomography achieved by dispersion control,” Opt. Express 17, 11335–11349 (2009). doi:10.1364/OE.17.011335
- 14. Grieve K., Dubois A., Simonutti M., Paques M., Sahel J., Gargasson J.-F. L., Boccara C., “In vivo anterior segment imaging in the rat eye with high speed white light full-field optical coherence tomography,” Opt. Express 13, 6286–6295 (2005). doi:10.1364/OPEX.13.006286
- 15. Hrebesh M. S., Dabu R., Sato M., “In vivo imaging of dynamic biological specimen by real-time single-shot full-field optical coherence tomography,” Opt. Commun. 282, 674–683 (2009). doi:10.1016/j.optcom.2008.10.070
- 16. Subhash H. M., “Review article: Full-field and single-shot full-field optical coherence tomography: A novel technique for biomedical imaging applications,” Advances in Optical Technologies 2012 (2012). doi:10.1155/2012/435408
- 17. Ford B. K., Volin C. E., Murphy S. M., Lynch R. M., Descour M. R., “Computed tomography-based spectral imaging for fluorescence microscopy,” Biophys. J. 80, 986–993 (2001). doi:10.1016/S0006-3495(01)76077-8
- 18. Ford B., Descour M., Lynch R., “Large-image-format computed tomography imaging spectrometer for fluorescence microscopy,” Opt. Express 9, 444–453 (2001). doi:10.1364/OE.9.000444
- 19. Fernandez C. A., Wagadarikar A., Brady D. J., McCain S. C., Oliver T., “Fluorescence microscopy with a coded aperture snapshot spectral imager,” 7184, 71840Z–71840Z-11 (2009).
- 20. Cull C. F., Choi K., Brady D. J., Oliver T., “Identification of fluorescent beads using a coded aperture snapshot spectral imager,” Appl. Opt. 49, B59–B70 (2010). doi:10.1364/AO.49.000B59
- 21. Gorman A., Fletcher-Holmes D. W., Harvey A. R., “Generalization of the Lyot filter and its application to snapshot spectral imaging,” Opt. Express 18, 5602–5608 (2010). doi:10.1364/OE.18.005602
- 22. Bodkin A., Sheinis A., Norton A., Daly J., Roberts C., Beaven S., Weinheimer J., eds., Video-rate chemical identification and visualization with snapshot hyperspectral imaging, vol. 8374 (2012).
- 23. Kriesel J., Scriven G., Gat N., Nagaraj S., Willson P., Swaminathan V., eds., Snapshot hyperspectral fovea vision system (HyperVideo) (2012).
- 24. Gao L., Kester R. T., Hagen N., Tkaczyk T. S., “Snapshot image mapping spectrometer (IMS) with high sampling density for hyperspectral microscopy,” Opt. Express 18, 14330–14344 (2010). doi:10.1364/OE.18.014330
- 25. Gao L., Kester R. T., Tkaczyk T. S., “Compact image slicing spectrometer (ISS) for hyperspectral fluorescence microscopy,” Opt. Express 17, 12293–12308 (2009). doi:10.1364/OE.17.012293
- 26. Gao L., Bedard N., Hagen N., Kester R. T., Tkaczyk T. S., “Depth-resolved image mapping spectrometer (IMS) with structured illumination,” Opt. Express 19, 17439–17452 (2011). doi:10.1364/OE.19.017439
- 27. Bedard N., Hagen N., Gao L., Tkaczyk T. S., “Image mapping spectrometry: calibration and characterization,” Opt. Eng. 51 (2012). doi:10.1117/1.OE.51.11.111711
- 28. Kester R. T., Gao L., Tkaczyk T. S., “Development of image mappers for hyperspectral biomedical imaging applications,” Appl. Opt. 49, 1886–1899 (2010). doi:10.1364/AO.49.001886
- 29. Gao L. S., Tkaczyk T. S., “Correction of vignetting and distortion errors induced by two-axis light beam steering,” Opt. Eng. 51 (2012). doi:10.1117/1.OE.51.4.043203
- 30. Elliott A. D., Gao L., Ustione A., Bedard N., Kester R., Piston D. W., Tkaczyk T. S., “Real-time hyperspectral fluorescence imaging of pancreatic β-cell dynamics with the image mapping spectrometer (IMS),” J. Cell Sci. (2012). doi:10.1242/jcs.108258
- 31. Abdulhalim I., “Competence between spatial and temporal coherence in full field optical coherence tomography and interference microscopy,” J. Opt. A: Pure Appl. Opt. 8 (2006). doi:10.1088/1464-4258/8/11/004
- 32. Mujat M., Park B. H., Cense B., Chen T. C., de Boer J. F., “Autocalibration of spectral-domain optical coherence tomography spectrometers for in vivo quantitative retinal nerve fiber layer birefringence determination,” J. Biomed. Opt. 12, 041205 (2007). doi:10.1117/1.2764460
- 33. Dorrer C., “Influence of the calibration of the detector on spectral interferometry,” J. Opt. Soc. Am. B 16, 1160–1168 (1999). doi:10.1364/JOSAB.16.001160
- 34. Lepetit L., Chriaux G., Joffre M., “Linear techniques of phase measurement by femtosecond spectral interferometry for applications in spectroscopy,” J. Opt. Soc. Am. B 12, 2467–2474 (1995). doi:10.1364/JOSAB.12.002467
- 35. Wojtkowski M., Srinivasan V., Ko T., Fujimoto J., Kowalczyk A., Duker J., “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation,” Opt. Express 12, 2404–2422 (2004). doi:10.1364/OPEX.12.002404