Optics Express. 2020 Oct 23;28(22):33632–33643. doi: 10.1364/OE.399735

Developing an optical design pipeline for correcting lens aberrations and vignetting in light field cameras

Qi Cui 1, Shuaishuai Zhu 2, Liang Gao 1,2,*
PMCID: PMC7679190  PMID: 33115023

Abstract

Light field cameras have been employed in myriad applications thanks to their 3D imaging capability. By placing a microlens array in front of a conventional camera, one can measure both the spatial and angular information of incoming light rays and reconstruct a depth map. The unique optical architecture of light field cameras poses new challenges for controlling aberrations and vignetting in the lens design process. Our study shows that field curvature can be numerically corrected for by digital refocusing, whereas vignetting must be minimized because it reduces the depth reconstruction accuracy. To address this unmet need, we herein present an optical design pipeline for light field cameras and demonstrate its implementation in a light field endoscope.

1. Introduction

The light rays captured by an imaging system contain abundant information, which is described by a 7D plenoptic function P(θ, φ, λ, t, x, y, z) (θ, φ, angular coordinates; λ, wavelength; t, time; x, y, z, spatial coordinates) [1]. A conventional camera acquires only the 2D spatial information (x, y) of an input scene. By contrast, the light field camera measures both spatial (x, y) and angular information (θ, φ) [2], where the angular information can be further used to reconstruct a depth map (x, y, z). Due to its superior 3D imaging capability, the light field camera has been employed in various applications such as biomedical imaging [3,4], object recognition [5–7], and machine vision [8,9].

There are two types of light field cameras: the unfocused light field (ULF) camera [2,10] and the focused light field (FLF) camera [11]. Figure 1 shows the corresponding schematics. As shown in Fig. 1(a), in a ULF camera, three point objects S1, S2, and S3 are first imaged by the main lens, forming intermediate image points S1′, S2′, and S3′. These intermediate image points are then reimaged by the microlens array (MLA) onto a detector array. Because the distance from the MLA to the detector array is equal to the focal length of the MLA, the ULF camera essentially images the pupil associated with each microlens. We use (u, v) and (x, y) to denote the Cartesian coordinates at the pupil plane and the MLA, respectively. The captured raw images (M1, M2, and M3 in Fig. 1(a)) can be re-arranged as a 4D datacube (x, y, u, v), which is also referred to as a light field (LF) [12]. A 2D x-u slice of the LF is termed an epipolar plane image (EPI). As an example, Fig. 1(b) shows three EPIs associated with points S1, S2, and S3, respectively. The corresponding depths can then be deduced by estimating the slopes of the lines in the EPIs. A refocused image at a given depth can be reconstructed from an integral projection of the 4D LF along a trajectory in the EPIs [2]. Reconstructing images at all depths creates a focal stack of images, and an extended depth of field (DOF) image can be rendered by fusing all the reconstructed images [13].
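The integral-projection refocusing described above is often implemented as a shift-and-sum over sub-aperture views. The sketch below is our own minimal illustration, not the authors' implementation; the 4D array layout `lf[u, v, x, y]`, the depth parameter `alpha`, and the integer-pixel shifts are simplifying assumptions (a practical version would interpolate sub-pixel shifts).

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-sum refocusing of a 4D light field lf[u, v, x, y].

    alpha parameterizes the synthetic focal depth; alpha = 1 keeps the
    original focal plane. Integer-pixel shifts are used for brevity.
    """
    U, V, X, Y = lf.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its angular
            # coordinate, then accumulate over all views.
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(lf[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Repeating this for a sweep of `alpha` values yields the focal stack mentioned above, from which an extended-DOF image can be fused.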

Fig. 1.

Ray models of light field cameras. (a) ULF camera. (b) EPIs associated with points S1, S2, and S3 in (a). (c) FLF camera. (d) Perspective images formed by microlenses L1 and L2 in (c).

Unlike the ULF camera, the FLF camera directly images the object, rather than pupils, onto the detector array. There are two types of FLF cameras: the Keplerian and Galilean [14]. Figure 1(c) shows the schematic of a Galilean FLF camera. The spacing (B) between the MLA and the detector is smaller than the focal length of the MLA. In contrast, B is larger than the focal length of the MLA in the Keplerian configuration. The depth information can be derived from the disparities between adjacent perspective images (Fig. 1(d)), and an all-in-focus image can be reconstructed by projecting all the pixels in the raw image back to the intermediate image plane.
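The disparity cue between adjacent perspective images can be illustrated with a toy 1D patch-matching routine. This is our own illustration, not the authors' reconstruction algorithm; the SAD cost, patch size, and search range are arbitrary choices.

```python
import numpy as np

def disparity_1d(view_a, view_b, patch=5, max_shift=4):
    """Estimate the per-pixel shift between two adjacent perspective
    views by brute-force patch matching (sum of absolute differences)."""
    h = patch // 2
    n = len(view_a)
    disp = np.zeros(n)
    for i in range(h, n - h):
        ref = view_a[i - h:i + h + 1]
        costs = []
        for d in range(-max_shift, max_shift + 1):
            j = i + d
            if h <= j < n - h:
                sad = np.abs(ref - view_b[j - h:j + h + 1]).sum()
                costs.append((sad, d))
        disp[i] = min(costs)[1]  # shift with the lowest matching cost
    return disp
```

A disparity map obtained this way can then be converted to depth through a disparity-to-depth calibration, as referenced later in the paper [16].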

Although the depth calibration methods and ray tracing models of light field cameras have been extensively studied [15–22], the optical design of the main lens has yet to be explored. Because of the unique optical architecture of light field cameras, the handling of lens aberrations and vignetting differs significantly from that in conventional lens design [23,24]. To address this unmet need, we systematically analyzed the effect of aberrations and vignetting on the fidelity of reconstructed images and developed a design pipeline for the main lens of light field cameras. While the proposed lens design pipeline is generally applicable to all light field cameras, we focus on a niche application in endoscopy (Section 4: Design example).

2. Aberrations and vignetting in light field cameras

When designing an imaging lens, although aberrations and vignetting are both unwanted, they are not equally weighted in the tolerancing budget. Here we limit our discussion to third-order Seidel aberrations and ignore defocus and wavefront tilt. Conventional optical design prioritizes the correction of aberrations that increase the spot size at the image plane (i.e., spherical aberration, coma, astigmatism, and field curvature). In particular, when field curvature W220 exists, a flat object plane is imaged to a curved surface. Because the detector plane is flat, a field-dependent defocus is introduced into the final image. In the peripheral field, the induced blur is often so severe that it overshadows other aberrations. More problematically, field curvature is harder to correct for than other Seidel aberrations: common approaches such as lens bending/splitting and stop shifting do not apply because, in a system free of astigmatism, field curvature depends only on the powers and refractive indices of the lenses. Therefore, in conventional optical design, field curvature is considered one of the toughest aberrations, and correcting for it normally leads to a bulky setup. By contrast, vignetting reduces the irradiance of the image but not its resolution, and it can be numerically corrected for in postprocessing. For this reason, vignetting is of less concern than the Seidel aberrations.

Unlike conventional cameras that capture only the 2D (x, y) information of a scene, light field cameras measure a 4D (x, y, u, v) datacube and derive the depth from light ray angles. Therefore, designing the main lens requires a new standard. In particular, field curvature and vignetting must be assessed in 3D (x, y, z) rather than 2D (x, y). Figure 2 shows a light field camera with field curvature. The object is imaged by the main lens to a curved surface, as indicated by the black dashed line. The depth of field of the microlens array (MLA), denoted DRM, determines the depth range of the main lens; DRM itself depends on the detector pixel size and the numerical aperture (NA) of the MLA [25]. Provided that the entire curved intermediate image lies within DRM, the shape of the surface can be recovered through calibration [16]. As a result, the field curvature can be numerically corrected for by digital refocusing, and it can be loosely tolerated in light field cameras.

Fig. 2.

Field curvature in a light field camera. DRM, depth range of the microlens array; MLA, microlens array.

By contrast, in light field cameras, vignetting must be minimized. Because light field cameras estimate depths from the light ray angles, the loss of angular information due to vignetting reduces the number of views in the EPIs. To elaborate on this effect, we performed a simulation using Zemax (Zemax, LLC). Figure 3 shows the shaded model of a ULF camera. The object is a point source. We use a 4F system as the main lens, which consists of two paraxial lenses (f = 15 mm) and a physical stop. The stop is placed at the Fourier plane of the first lens (i.e., its back focal plane). To match the NA of the main lens to that of the MLA, we set the stop diameter to 1.38 mm. To introduce vignetting, we place another aperture of the same diameter 10 mm after the stop. An MLA (f = 0.65 mm, lens pitch = 60 µm) is located at the back focal plane of the second lens, and a detector array is placed at the back focal plane of the MLA. The pixel size of the detector array is 4 µm.

Fig. 3.

Shaded model of an unfocused light field (ULF) camera. MLA, microlens array.

We define the vignetting factor η as:

η = 1 − E/Eu, (1)

where E and Eu denote the total irradiance received by the detector array with and without vignetting, respectively; η is zero if the image is unvignetted. In the simulation, the point source was placed at the front focal plane of the first lens, and we scanned it along the x-axis over 13 locations from 0 mm to 1.2 mm with a step size of 0.1 mm. At each step, we traced 100,000 light rays to form a raw image and rendered an EPI at v = 0 and y = 0. Figure 4(a) shows three representative raw images at x = 0 mm, 0.6 mm, and 1.2 mm, and their corresponding EPIs. The results indicate that although the slope of the line feature in the EPIs does not change, the number of pixels that form the line (i.e., views) decreases as vignetting increases. The relation between the vignetting factor and the number of views is shown in Fig. 4(b). We calculated the number of views by counting the non-zero pixels in the EPI after image binarization. The light field camera reconstructs depth by estimating the slope of line features in EPIs through linear regression. The standard error of the fit can be computed by:

SE = √( Σᵢ (bᵢ − b̂ᵢ)² / [ (n − 2) Σᵢ (aᵢ − ā)² ] ), (2)

where SE is the standard error, n is the number of observations, aᵢ is the independent variable of the ith observation, ā is its mean, bᵢ is the dependent variable of the ith observation, and b̂ᵢ is the estimated value of bᵢ. Equation (2) implies that the standard error decreases as the number of observations increases. In light field cameras, vignetting reduces the number of views in EPIs, resulting in a larger regression error and, therefore, reduced depth accuracy. In particular, when the number of detector pixels associated with a microlens is small, vignetting dramatically increases the regression error.
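The quantities above — the vignetting factor of Eq. (1), the view count, and the slope standard error of Eq. (2) — can be computed with a few NumPy helpers. This is a minimal sketch for illustration; the function names and the binarization threshold are our own choices, not the authors' code.

```python
import numpy as np

def vignetting_factor(raw, baseline):
    """Eq. (1): eta = 1 - E/E_u, with E and E_u the total irradiance
    of the vignetted and unvignetted raw images, respectively."""
    return 1.0 - raw.sum() / baseline.sum()

def count_views(epi, thresh=0.0):
    """Number of views: non-zero pixels of the EPI after binarization."""
    return int((epi > thresh).sum())

def slope_standard_error(a, b):
    """Eq. (2): standard error of the fitted slope of b against a."""
    n = len(a)
    slope, intercept = np.polyfit(a, b, 1)
    b_hat = slope * a + intercept
    return np.sqrt(((b - b_hat) ** 2).sum()
                   / ((a - a.mean()) ** 2).sum() / (n - 2))
```

Because the denominator of Eq. (2) grows with the number of samples, dropping views from an EPI directly inflates the returned standard error.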

Fig. 4.

Vignetting and number of views in epipolar plane images (EPIs). (a) Three representative raw images and corresponding EPIs at x = 0 mm, 0.6 mm, 1.2 mm. (b) Number of views in an EPI vs. vignetting factor.

To further illustrate the effect of vignetting on depth accuracy, we defocused the point source by 6 mm towards the first lens and scanned it under the same conditions. Because the depth of the point source has changed, the line in the EPI is tilted with respect to the vertical axis and is no longer aligned with the detector pixels; as a result, sampling introduces ambiguities. Three representative raw images and the corresponding EPIs at x = 0 mm, 0.6 mm, and 1.2 mm are shown in Fig. 5(a). At each step, we computed the slope of the line in the EPI. The relation between the slope regression error and the vignetting factor is shown in Fig. 5(b).
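A small Monte Carlo experiment illustrates why fewer views inflate the slope regression error. The slope, noise level, and trial count below are arbitrary illustrative values, not the parameters of the simulation in Fig. 5.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_SLOPE = 0.3  # tilted EPI line of a defocused point

def fitted_slope(n_views):
    """Fit a line to n_views noisy (u, x) samples of an EPI line;
    the Gaussian noise stands in for sampling ambiguity."""
    u = np.arange(n_views, dtype=float)
    x = TRUE_SLOPE * u + rng.normal(0.0, 0.2, n_views)
    return np.polyfit(u, x, 1)[0]

def slope_error_std(n_views, trials=2000):
    """Empirical spread of the slope estimate over repeated trials."""
    errs = [fitted_slope(n_views) - TRUE_SLOPE for _ in range(trials)]
    return float(np.std(errs))
```

With fewer views the empirical spread of the fitted slope grows, mirroring the trend reported in Fig. 4(b) and Fig. 5(b).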

Fig. 5.

Vignetting and slope regression error of the line feature in epipolar plane images (EPIs). (a) Three representative raw images and corresponding EPIs at x = 0 mm, 0.6 mm, 1.2 mm. (b) Slope regression error vs. vignetting factor.

It is worth mentioning that the slope regression error also depends on aberrations and noise. When aberrations exist, the image of a point source is no longer a sharp point, and the shape of the line in the EPI may be distorted. Noise, on the other hand, affects the intensity of both the views and the background pixels. In both cases, a sufficient number of views is critical for faithful depth reconstruction. Therefore, vignetting must be minimized in light field cameras.

Finally, we validated the effect of vignetting experimentally. The optical setup of an unfocused light field camera is shown in Fig. 6(a). We used a 4F system as the main lens, consisting of two 50 mm focal length achromatic doublets (Thorlabs, AC254-050-A-ML). A 4.8 mm diameter stop was placed at the Fourier plane to match the NA of the main lens to that of the MLA. An MLA with a 50 µm pitch was placed at the back focal plane of the second lens, and the spacing between the MLA and the camera (Lumenera, Lt965R) equals the MLA focal length. A flat printed grid pattern, located near the front focal plane of the main lens, was used as the object. An adjustable aperture was positioned 12 mm before the camera, and its diameter was set to 2.8 mm, 4 mm, and 5 mm to create different levels of vignetting. We captured a raw image for each aperture diameter and a baseline image with the aperture removed (i.e., no vignetting). A representative raw image at an aperture diameter of 4 mm and the baseline image are shown in Fig. 6(b), each including two magnified subfields. Compared to the baseline, Area 2 of the raw image shows vignetted pupils. Next, we calculated the vignetting factor and generated a disparity map for each image, then computed the root-mean-squared error (RMSE) of each disparity map. Note that a depth map can be further rendered based on disparity-to-depth calibration [16]. The resultant disparity maps are shown in Fig. 7. The experimental results indicate that the disparity RMSE increases with the vignetting factor; depth accuracy is therefore jeopardized whenever vignetting exists.
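The RMSE figure of merit used above is a straightforward computation; the helper below is our own sketch, with the unvignetted baseline disparity map taken as the reference.

```python
import numpy as np

def disparity_rmse(disparity, reference):
    """Root-mean-squared error between a measured disparity map and a
    reference map (e.g., the unvignetted baseline)."""
    return float(np.sqrt(np.mean((disparity - reference) ** 2)))
```

Computing this value for each aperture diameter reproduces the comparison reported in Fig. 7: the RMSE grows as the vignetting factor increases.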

Fig. 6.

Experimental setup and raw images of a flat printed grid pattern object. (a) Optical setup. (b) A raw image when the aperture diameter = 4 mm and the baseline image with two magnified subfields.

Fig. 7.

Disparity maps for each aperture diameter. RMSE, root-mean-squared error.

3. Lens design for light field cameras

Compared to conventional cameras, light field cameras can tolerate field curvature but are sensitive to vignetting. The field curvature coefficient W220 can be separated into two terms:

W220 = (1/2)·W222 + W220p, (3)

where W222 is proportional to the astigmatism and W220p is the Petzval curvature. Without astigmatism, the field curvature reduces to the Petzval curvature. Because Petzval curvature depends only on the powers and refractive indices of the lenses, it is insensitive to most aberration correction methods (e.g., lens bending/splitting, stop shifting). The primary method to flatten the Petzval surface is to add negative-power lenses with air spaces in between; however, this makes the system bulky and expensive. Therefore, relaxing the tolerance on field curvature can greatly reduce the system complexity and design constraints. For example, if we use a single ball lens as the main lens in a light field camera, all off-axis aberrations are eliminated [26]. Digitally correcting for the remaining field curvature then provides an ideal route to a large field of view with high resolution.
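The dependence of Petzval curvature on lens power and index can be made concrete with the thin-lens Petzval sum, P = Σ φᵢ/nᵢ (thin lenses in air), which must be driven toward zero for a flat field. The powers and indices below are arbitrary example values, not a design from this paper.

```python
def petzval_sum(elements):
    """Petzval sum P = sum(phi_i / n_i) for thin lenses in air,
    where phi_i is the power and n_i the refractive index of lens i.
    A flat field requires P close to zero."""
    return sum(phi / n for phi, n in elements)

# Example: pairing a positive lens with a negative lens of higher
# index only partially cancels the Petzval sum, which is why field
# flattening usually requires extra negative elements and air spaces.
P = petzval_sum([(10.0, 1.5), (-8.0, 1.8)])
```

Note that bending either lens changes neither φᵢ nor nᵢ, which is precisely why stop shifting and lens bending leave the Petzval term untouched.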

To minimize vignetting in a light field camera, we put a constraint on the lens aperture:

a ≥ |ȳ| + |y|, (4)

where a is the radius of the aperture, and ȳ and y are the chief ray height and the marginal ray height at the aperture position, respectively. In addition, we enforce telecentricity of the main lens in image space.
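Equation (4) is a simple per-surface check; the helper below is our own illustration of how the constraint would be evaluated from traced ray heights.

```python
def aperture_ok(a, y_chief, y_marginal):
    """Vignetting-free aperture condition of Eq. (4):
    the aperture radius a must be at least |y_chief| + |y_marginal|
    at the aperture position."""
    return a >= abs(y_chief) + abs(y_marginal)
```

In a lens design program this condition would be checked (or enforced as a merit-function operand) at every potentially vignetting surface.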

Figure 8 illustrates the proposed optical design pipeline, which differs from the conventional standard in two aspects. First, field curvature is not a primary design constraint and can be loosely tolerated, while vignetting must be strictly minimized. Second, optimization must be performed in 3D (x, y, z) rather than 2D (x, y): we must account for all object points within both the depth range (z) and the FOV (x, y). In practice, given radial symmetry, it is justified to sample object points only in the y-z plane. During optimization, we assign each (y, z) object point to a system configuration, perform ray tracing in each configuration, and calculate the corresponding vignetting factor. Lastly, we construct a y-z vignetting factor map and compute its mean, which we use as the metric to evaluate the vignetting of the system.
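The scalar merit at the end of the pipeline can be sketched as follows. Here `trace_fn` stands in for the per-configuration ray trace, which in practice is performed by the lens design software; the sampling grids are our own placeholders.

```python
import numpy as np

def mean_vignetting(trace_fn, y_samples, z_samples):
    """Build the y-z vignetting-factor map of the pipeline and return
    it together with its mean, the scalar merit used to evaluate the
    design. trace_fn(y, z) must return the vignetting factor for one
    object point (one system configuration)."""
    vmap = np.array([[trace_fn(y, z) for z in z_samples]
                     for y in y_samples])
    return vmap, float(vmap.mean())
```

During optimization this mean would be minimized alongside the image-quality operands, so that no (y, z) object point is left strongly vignetted.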

Fig. 8.

Optical design pipeline for light field cameras. Owing to the correction of aberrations/vignetting in a 3D space, our design pipeline yields optimized optical performance for computational refocusing and parallax-based depth estimation.

4. Design example

To demonstrate the implementation of the proposed pipeline, we designed the main lens for a light field endoscope using Zemax. The desired specifications are listed in Table 1.

Table 1. Specifications of the light field endoscope.

Object space NA | Working distance | Magnification | FOV | Length | Diameter | Depth range
0.024 | 65 mm | ∼0.2 | 10 mm | >200 mm | 5 mm | 6 mm

We selected a double Gauss lens as the initial configuration to reduce odd aberrations, then scaled the lens down to the required diameter. Next, nine object points within the depth range (z) and the FOV (x, y) were chosen to build the multi-configuration setup, as summarized in Fig. 9. The working distance (WD) is defined as the distance between an object point and the first surface of the main lens. In each configuration, we inserted a dummy surface after the nominal image plane (where the marginal ray height = 0 mm) to serve as the real image plane. Due to field curvature, defocus is introduced for off-axis object points. During optimization, the location of the dummy surface was set as a variable, and each configuration was optimized independently to compensate for the field-dependent defocus. In this way, the effect of field curvature is excluded from the merit function for image quality optimization.

Fig. 9.

Multi-configuration in lens optimization.

Next, we built the merit function based on the design specifications. The activated operands are summarized in Table 2. The variables consist of the radii of surface curvature and the central thicknesses between adjacent surfaces. Only spherical surfaces are used for each lens element. The optimization process is divided into two steps: local optimization and global optimization. During local optimization, the paraxial magnification is defined using the operands PMAG, RECI, ABLT, and ABGT; the desired magnification of the main lens is −0.2. We used the operand AXCL to minimize the axial color, while the other aberrations (spherical aberration, coma, astigmatism, distortion, and lateral color) are optimized together to minimize the root-mean-squared (RMS) spot size using the default operand TRAC. In particular, we limited vignetting by enforcing image-space telecentricity: the operand RAID was used to confine the chief ray angle (CRA) at the last surface of the lens. In addition, the semi-diameter of the lens group was limited by the operand MXSD, and the air and glass thicknesses were constrained by the operands MNCA, MXCA, MNEA, MNCG, MXCG, and MNEG. During global optimization, we made two changes: first, we replaced the operand TRAC with OPDX to minimize the RMS wavefront error; second, the glass type of each element was set as "substitute" for better performance.

Table 2. Activated operands in the merit function.

Local optimization PMAG, RECI, ABLT, ABGT, AXCL, TRAC, RAID, MXSD, MNCA, MXCA, MNEA, MNCG, MXCG, MNEG
Global optimization PMAG, EFFL, ABLT, ABGT, AXCL, OPDX, RAID, MXSD, MNCA, MXCA, MNEA, MNCG, MXCG, MNEG

To meet the length requirement, we further used Hopkins rod lenses as the relay lens. The desired magnification of the relay lens is 1. We started with two thick doublets, symmetric about the stop; as a result, the lens introduces no coma, distortion, or lateral color. We used the same merit function as for the main lens, except that object-space telecentricity was enforced to match the pupil. The variables consist of the radius of curvature of each surface and the spacing between adjacent surfaces. After optimization, we duplicated the lenses to extend the relay optics to the required length.

The schematic of the final endoscope is shown in Fig. 10. The original lens design file is provided in Dataset 1 (229.1 KB, .zmx) (Ref. [31]). The effective focal length (EFFL) of the system is 14.6 mm, and the total length (TOTR) is 212 mm. The back focal length is 3 mm, and the paraxial magnification is −0.206. Figure 11 shows spot diagrams of three configurations with a working distance of 65 mm and object heights of 0 mm, 7 mm, and 10 mm; the corresponding modulation transfer functions (MTFs) are shown in Fig. 12. Finally, we performed ray tracing to calculate the vignetting factors for all object points within the depth range and the FOV. The result is shown in Fig. 13, where each pixel value represents the normalized fraction of unvignetted rays. The mean of this map is 0.99, implying that only one percent of the total rays are vignetted. The resultant design therefore maximizes the depth reconstruction fidelity.

Fig. 10.

Optical setup of the endoscope.

Fig. 11.

Spot diagrams corresponding to three configurations in which working distance = 65 mm, object height = 0 mm, 7 mm, 10 mm, respectively.

Fig. 12.

Modulation transfer functions (MTFs) corresponding to three configurations in which working distance = 65 mm, object height = 0 mm, 7 mm, 10 mm, respectively.

Fig. 13.

Vignetting factor map within the depth range and the FOV.

5. Conclusion

In this paper, we systematically studied the effect of field curvature and vignetting on light field depth reconstruction accuracy. We show that field curvature in light field cameras can be loosely tolerated, while vignetting must be minimized to ensure high reconstruction fidelity. To incorporate this finding into the lens design process, we developed a pipeline that optimizes the optical performance of light field cameras in a 3D space, facilitating computational refocusing and parallax-based depth estimation. We expect this work to lay the foundation for future light field camera lens design, particularly in biomedical applications where diagnosis and treatment heavily rely on the accuracy of 3D measurements [27–29].

Notably, our current optical design pipeline applies only to ray-optics models. This premise holds for light field cameras with a relatively small aperture, such as a light field endoscope. For large-NA imaging, to account for the diffraction effects that occur when recording the light field, the design process must instead be adapted to a wave optics model [30]. This study is beyond the scope of the current work, and we leave it for future investigation.

Acknowledgment

We thank Prof. Rongguang Liang for providing the initial design of the Hopkins rod lenses.

Funding

National Institutes of Health (R01EY029397, R21EB028375, R35GM128761); National Science Foundation (1652150).

Disclosures

The authors declare no conflicts of interest.

References

  • 1. Adelson E. H., Bergen J. R., "The plenoptic function and the elements of early vision," in Computational Models of Visual Processing, Landy M. S., Movshon J. A., eds. (MIT Press, 1991), pp. 3–20.
  • 2. Ng R., "Digital light field photography," Ph.D. dissertation (Stanford University, 2006).
  • 3. Prevedel R., Yoon Y. G., Hoffmann M., Pak N., Wetzstein G., Kato S., Schrödel T., Raskar R., Zimmer M., Boyden E. S., Vaziri A., "Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy," Nat. Methods 11(7), 727–730 (2014). 10.1038/nmeth.2964
  • 4. Bedard N., Shope T., Hoberman A., Haralam M. A., Shaikh N., Kovačević J., Balram N., Tošić I., "Light field otoscope design for 3D in vivo imaging of the middle ear," Biomed. Opt. Express 8(1), 260–272 (2017). 10.1364/BOE.8.000260
  • 5. Raghavendra R., Raja K. B., Busch C., "Presentation attack detection for face recognition using light field camera," IEEE Trans. Image Process. 24(3), 1060–1075 (2015). 10.1109/TIP.2015.2395951
  • 6. Maeno K., Nagahara H., Shimada A., Taniguchi R. I., "Light field distortion feature for transparent object recognition," in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2013), pp. 122–135.
  • 7. Zhu S., Lv X., Feng X., Lin J., Jin P., Gao L., "Plenoptic face presentation attack detection," IEEE Access 8, 59007–59014 (2020). 10.1109/ACCESS.2020.2980755
  • 8. Lynch K., Fahringer T., Thurow B., "Three-dimensional particle image velocimetry using a plenoptic camera," in 50th AIAA Aerospace Sciences Meeting (AIAA, 2012), pp. 1–14.
  • 9. Alam M. Z., Gunturk B. K., "Hybrid light field imaging for improved spatial resolution and depth range," arXiv preprint arXiv:1611.05008 (2016).
  • 10. Adelson E. H., Wang J. Y. A., "Single lens stereo with plenoptic camera," IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992). 10.1109/34.121783
  • 11. Georgiev T. G., Lumsdaine A., "Superresolution with plenoptic 2.0 cameras," in Signal Recovery and Synthesis (OSA, 2009).
  • 12. Levoy M., Hanrahan P., "Light field rendering," in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (1996), pp. 31–42.
  • 13. Agarwala A., Dontcheva M., Agrawala M., Drucker S., Colburn A., Curless B., Salesin D., Cohen M., "Interactive digital photomontage," ACM Trans. Graph. 23(3), 294–302 (2004). 10.1145/1015706.1015718
  • 14. Perwass C., Wietzke L., "Single-lens 3D camera with extended depth-of-field," in Human Vision and Electronic Imaging XVII, Proc. SPIE 8291, 829108 (2012).
  • 15. Tosic I., Berkner K., "Light field scale-depth space transform for dense depth estimation," in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 435–442.
  • 16. Gao L., Bedard N., Tosic I., "Disparity-to-depth calibration in light field imaging," in Imaging and Applied Optics, OSA Technical Digest (Optical Society of America, 2016), paper CW3D.2.
  • 17. Tremblay E. J., Marks D. L., Brady D. J., Ford J. E., "Design and scaling of monocentric multiscale imagers," Appl. Opt. 51(20), 4691–4702 (2012). 10.1364/AO.51.004691
  • 18. Hahne C., Aggoun A., Haxha S., Velisavljevic V., Fernández J. C. J., "Light field geometry of a standard plenoptic camera," Opt. Express 22(22), 26659–26673 (2014). 10.1364/OE.22.026659
  • 19. Hahne C., Aggoun A., Velisavljevic V., Fiebig S., Pesch M., "Refocusing distance of a standard plenoptic camera," Opt. Express 24(19), 21521–21540 (2016). 10.1364/OE.24.021521
  • 20. Chen Y., Jin X., Dai Q., "Distance measurement based on light field geometry and ray tracing," Opt. Express 25(1), 59–76 (2017). 10.1364/OE.25.000059
  • 21. Chen Y. Q., Jin X., Dai Q. H., "Distance estimation based on light field geometric modeling," in 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) (IEEE, 2017), pp. 43–48.
  • 22. Hahne C., Aggoun A., Velisavljevic V., Fiebig S., Pesch M., "Baseline and triangulation geometry in a standard plenoptic camera," Int. J. Comput. Vis. 126(1), 21–35 (2018). 10.1007/s11263-017-1036-4
  • 23. Fisher R. E., Tadic-Galeb B., Optical System Design (McGraw-Hill, 2000).
  • 24. Kingslake R., Johnson R. B., Lens Design Fundamentals (Academic, 2009).
  • 25. Zhu S., Lai A., Eaton K., Jin P., Gao L., "On the fundamental comparison between unfocused and focused light field cameras," Appl. Opt. 57(1), A1–A11 (2018). 10.1364/AO.57.0000A1
  • 26. Dansereau D. G., Schuster G., Ford J., Wetzstein G., "A wide-field-of-view monocentric light field camera," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 5048–5057.
  • 27. Bedard N., Shope T., Hoberman A., Haralam M. A., Shaikh N., Kovačević J., Balram N., Tošić I., "Light field otoscope design for 3D in vivo imaging of the middle ear," Biomed. Opt. Express 8(1), 260–272 (2017). 10.1364/BOE.8.000260
  • 28. Kwan E., Qin Y., Hua H., "Development of a light field laparoscope for depth reconstruction," in Imaging and Applied Optics 2017 (3D, AIO, COSI, IS, MATH, pcAOP), OSA Technical Digest (online) (Optical Society of America, 2017), paper DW1F.2.
  • 29. Zhu S., Jin P., Liang R., Gao L., "Optical design and development of a snapshot light-field laryngoscope," Opt. Eng. 57(2), 023110 (2018). 10.1117/1.OE.57.2.023110
  • 30. Broxton M., Grosenick L., Yang S., Cohen N., Andalman A., Deisseroth K., Levoy M., "Wave optics theory and 3-D deconvolution for the light field microscope," Opt. Express 21(21), 25418–25439 (2013). 10.1364/OE.21.025418
  • 31. Cui Q., Zhu Z., Gao L., "Original lens design file of the main lens of a light field endoscope," figshare (2020), oe-28-22-33632-d001.zmx.
