Abstract
Various optical instruments have been developed for three-dimensional (3D) surface topography, including white light interferometers, reflectance confocal microscopes, and digital holographic microscopes. However, steep local slopes on an object can reflect the illumination outside the finite collection angle of the objective lens, so that the light is never captured. To solve this “shadow problem,” we report a method that enlarges the effective collection angle of optical sectioning structured illumination microscopy by capturing sectioned images of the object from multiple angles of view. We develop a multi-view image fusion algorithm to reconstruct a single 3D image. Using this method, we detect previously invisible details whose slopes are beyond the collection angle of the objective. The proposed approach is useful for height map measurement and quantitative analysis in a variety of fields, such as biology, materials science, and microelectronics.
1. INTRODUCTION
Over the past few decades, optical metrology techniques for the measurement of three-dimensional (3D) surface topography have been extensively developed, including interferometry [1,2], confocal microscopy [3–6], and digital holography [7–9]. White light interferometry is a well-established method for measuring surface topography with height variations from a few nanometers to centimeters and has been widely used as an optical testing method in the semiconductor industry. Unfortunately, this technique only works for optically smooth surfaces whose height variations within a resolution cell do not exceed λ∕4 of the light used [2]. Reflectance confocal microscopy, based on the focus-detection technique, has been used for the characterization of surface topography [3]. It has mostly been applied to the qualitative examination of polymer blends and to skin diagnosis, and it offers high lateral resolution and a high measurable local slope [4–6]. However, for larger objects, such as centimeter-scale creatures, it is slow and laborious. Digital holographic microscopy is suitable for the investigation of reflective surfaces, including engineered surfaces as well as living cells [7]. It provides diffraction-limited lateral resolution down to a few hundred nanometers and an axial resolution below λ∕150 [8]. In digital holographic microscopy the 3D surface topography can be obtained from a single acquisition, which makes it attractive for dynamic analysis with fast cameras [9]. While all of the above techniques have demonstrated their success, they share a fundamental limit set by the numerical aperture (NA): the range of surface slopes that can be captured through specular reflection is bounded by the collection angle of the objective. We refer to this issue as the “shadow problem.”
The “shadow problem” is common in 3D surface topography and is widely recognized as one of the most challenging problems in 3D optical imaging and metrology [10–13]. Because the NA of the objective lens is limited, light specularly reflected from steep features leaves at an angle that the objective cannot collect. Using a camera with a high dynamic range and increasing the exposure time can mitigate the shadow problem to some extent [14,15]. However, if the light reflected from the sample falls outside the collection cone of the objective lens, the shadow problem persists. Fluorescent labeling is an effective way to overcome this limitation. Using a confocal microscope, Liu et al. developed a super-aperture method in which the specimen is covered with a readily removable organic fluorescent film to create an isotropically scattering surface [16]. As a result, they demonstrated the detection of slopes with angles close to 90 deg using an objective with NA = 0.75. However, for specimens whose true color information is required, this method is clearly unsuitable.
In our previous work, a color structured illumination microscopy (C-SIM) suitable for 3D surface topography was demonstrated [17]. Employing structured LED illumination patterns projected by a digital micromirror device (DMD), C-SIM collects high-resolution 3D structure with full natural color with the help of an optical sectioning decoding algorithm operating in hue, saturation, and value (HSV) space [18]. However, C-SIM is also confronted with the shadow problem, which affects the accuracy of measuring gross physical dimensions (e.g., length, curvature). To solve the shadow problem without discarding the color information of the sample, a new approach is required.
We propose to acquire 3D images of specimens from multiple directions, so that more information about the inclined surfaces is collected. In this way, we are able to detect slopes with angles beyond the maximal theoretical detection angle of an objective. The challenge is to reconstruct a single 3D image correctly from the multiple individual views. Nguyen et al. proposed a multi-view image fusion scheme for capturing natural-color 3D models of insects; however, the macro-photography used in that work is not suitable for producing high-resolution 3D scans [19]. Atsushi et al. developed a system consisting of a digital microscope and a computer-controlled turntable to reconstruct a 3D micro-model as a triangular mesh from several 2D images taken from different views [20]. As for the multi-view fusion of optical sectioning images, Preibisch et al. developed a registration method for selective plane illumination microscopy (SPIM) and implemented it as a plugin for Fiji. The software enables efficient, sample-independent registration of multi-view SPIM acquisitions using fluorescent beads in a rigid mounting medium as fiduciary markers; unfortunately, this approach is not feasible for label-free samples [21].
In this paper, we present a new approach to fuse multi-view 3D data sets into a single data set. The core steps of our fusion algorithm are 3D-data rotation and 3D-data stitching. With a sample rotation angle of 10 deg, we measure surface slopes with inclinations up to 27.5 deg using an objective lens with NA = 0.3, comparable to an objective lens with an NA of 0.46; likewise, an NA = 0.45 objective becomes comparable to an NA of 0.6. The approach provides a promising tool for testing the 3D topography of steeply sloped engineered surfaces, microstructures of insects, optical micro-devices, etc.
2. METHOD
A. Multi-view C-SIM System
Figure 1(a) presents the light path diagram of the multi-view C-SIM system. A white-light LED (SOLIS-3C, 3 W, Thorlabs Inc.), delivered through a liquid light guide and collimated, is employed as the illumination source. A total internal reflection (TIR) prism is used to separate the projection and illumination paths. The white light is reflected by the TIR prism and illuminates the DMD chip (V7000, 22.7 kHz at 1024 × 768 pixels, 13.7 μm pixel pitch, ViALUX GmbH, Germany). The modulated light then passes through an achromatic collimating lens (f = 200 mm) and a 50/50 beam splitter and is focused by the objective lens (either a 10× objective, NA = 0.45/WD = 4 mm, or a 10× objective, NA = 0.3/WD = 16 mm, Nikon Inc., Japan) to illuminate the sample. A USB 3.0 color complementary metal oxide semiconductor (CMOS) camera with a maximum full-frame rate of 80 fps (UI-3370CP-C-HQ, 2048 × 2048 pixels, 5.5 μm pixel size, IDS GmbH, Germany), equipped with a tube lens (f = 200 mm), is used to record the two-dimensional (2D) images. The sample is mounted on an integrated hexapod stage (hexaCUBE190, Attocube Inc., Germany), which combines six closed-loop linear piezoelectric positioners with ball joints and linear bearings, enabling motion in six degrees of freedom with high accuracy and repeatability. The absolute position of the stage is defined by six coordinates: x, y, and z give the displacement of the stage platform from a predefined zero, and σx, σy, and σz give the rotation angles of the stage platform around the x, y, and z axes, respectively. The hexaCUBE190 achieves a repeatability better than 50 nm in x, y, and z, and 1 μrad in σx, σy, σz. The travel ranges exceed ±15 mm in the x, y, and z directions, and the rotations exceed ±19 deg around these axes.
Fig. 1.
Schematic diagram of multi-view C-SIM system. (a) Light path diagram. (b) Photograph of the microscope instrument.
In traditional C-SIM, to acquire the 3D light intensity distribution of the specimen, volume data are obtained layer by layer by axially moving the specimen to different z positions, as shown in Fig. 2(b). For each layer, three fringe-illuminated raw images with an adjacent phase shift of 2π∕3 are captured [17]. Here, by adjusting the parameters σx, σy, or σz, the sample is rotated with the hexapod stage by a specified angle. In this way, the volume data of the specimen in different orientations, i.e., the multi-view volume data, can be captured by repeating the above acquisition process, as shown in Figs. 2(a) and 2(c). The typical rotation angles are selected as σx = −10 deg, 0 deg, 10 deg and σy = −10 deg, 0 deg, 10 deg. Hardware synchronization of the DMD, the CMOS camera, and the hexapod stage is carried out by custom-developed software written in MATLAB.
Fig. 2.
Schematic diagram of 3D data acquisition at different rotation angles. (a) The rotation angle is σx = −10 deg, σy = 0 deg. (b) The rotation angle is σx = 0 deg, σy = 0 deg. (c) The rotation angle is σx = 10 deg, σy = 0 deg.
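The acquisition thus consists of an outer loop over view angles and an inner loop over axial layers and fringe phases. The following is a minimal sketch of this sequence; the hardware wrapper functions (set_stage_rotation, set_stage_z, project_pattern, grab_frame) are hypothetical placeholders for the vendor APIs of the hexapod stage, the DMD, and the camera, not the actual interface of the control software.

```matlab
% Minimal sketch of the multi-view acquisition loop. The hardware wrappers
% set_stage_rotation, set_stage_z, project_pattern, and grab_frame are
% hypothetical placeholders; their names and argument lists are assumptions.
views   = [0 0; -10 0; 10 0; 0 -10; 0 10];   % [sigma_x sigma_y] per view, deg
nLayers = 50;                                % e.g., 50 layers for a 49 um stack
dz      = 1;                                 % axial step, um
phases  = [0, 2*pi/3, 4*pi/3];               % fringe phase shifts

raw = cell(size(views, 1), 1);               % raw image stacks, one per view
for v = 1:size(views, 1)
    set_stage_rotation(views(v, 1), views(v, 2));        % tilt the sample
    stack = zeros(2048, 2048, 3, nLayers, numel(phases), 'uint8');
    for iz = 1:nLayers
        set_stage_z((iz - 1) * dz);                      % next axial layer
        for ip = 1:numel(phases)
            project_pattern(phases(ip));                 % DMD fringe pattern
            stack(:, :, :, iz, ip) = grab_frame();       % color CMOS exposure
        end
    end
    raw{v} = stack;
end
set_stage_rotation(0, 0);                    % return to the reference pose
```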
B. Fusion of Multi-view 3D Full-Color Image Data Sets
With the acquired multi-view volume data sets, a more informative 3D image can be recovered via a fusion process. The entire process includes three steps. First, for each layer, an optically sectioned image is recovered from the three raw images recorded with an adjacent phase shift of 2π∕3 by employing the root mean square (RMS) decoding algorithm. In order to recover full-color images, the raw images are transformed into HSV space for decoding [17]. The fused 3D image is then reconstructed from the full-color volume data sets of the different views by 3D-data rotation and 3D-data stitching.
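For reference, a minimal sketch of the per-layer decoding is given below, assuming 8-bit RGB raw images and the standard RMS sectioning formula applied to the V channel. Taking hue and saturation from the phase-averaged (widefield) image is one possible choice made here for illustration; the exact HSV decoding scheme of C-SIM is given in Ref. [17].

```matlab
% Minimal sketch of per-layer RMS decoding in HSV space. Assigning hue and
% saturation from the phase-averaged image is an assumption for illustration;
% the exact C-SIM decoding scheme is described in Ref. [17].
function rgbSec = decode_layer_rms(I1, I2, I3)
    % I1, I2, I3: 8-bit RGB raw images with 0, 2*pi/3, 4*pi/3 fringe phases
    I1 = double(I1) / 255;  I2 = double(I2) / 255;  I3 = double(I3) / 255;

    % Brightness (HSV value channel) of each phase-shifted raw image
    V1 = max(I1, [], 3);  V2 = max(I2, [], 3);  V3 = max(I3, [], 3);

    % RMS decoding suppresses the unmodulated out-of-focus background;
    % sqrt(2)/3 normalizes the result to the fringe modulation amplitude
    Vsec = sqrt((V1 - V2).^2 + (V2 - V3).^2 + (V3 - V1).^2) * sqrt(2) / 3;

    % Hue and saturation from the phase-averaged (widefield) image
    hsvWf = rgb2hsv((I1 + I2 + I3) / 3);

    rgbSec = hsv2rgb(cat(3, hsvWf(:, :, 1), hsvWf(:, :, 2), Vsec));
end
```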
The second step is 3D-data rotation, whose principle is shown in Fig. 3. Figure 3(a) shows the volume data set taken with a rotation angle θ around a line parallel to the x axis through the center of the yellow cube. The gray hemispherical shell represents the surface of a sample, and the yellow cube represents the acquired volume data. To stitch this volume with the volume taken without rotation, the yellow volume must be rotated around the rotation axis by −θ to make the spatial coordinates of the two volumes consistent, as shown in Fig. 3(b). To illustrate this procedure more clearly, the rotation operation is depicted in a side view in Figs. 3(c) and 3(d). After the rotation by −θ around the rotation axis, the coordinates of the original and the new volumes, (y, z) and (y′, z′), satisfy the relation
y′ = y cos θ + z sin θ,  z′ = −y sin θ + z cos θ.  (1)
Fig. 3.
Principle of 3D-data rotation. (a) The volume data set is acquired when the rotation angle of the sample stage is σx = θ deg. (b) The result after 3D-data rotation. (c) One layer of the z-y plane data of (a). (d) One layer of z-y plane data of (b).
The widths and heights of the pictures in Figs. 3(c) and 3(d) are denoted as (w, h) and (w′, h′), respectively, satisfying the relation
w′ = w cos θ + h sin θ,  h′ = w sin θ + h cos θ.  (2)
However, after rotation, pixel positions generally become non-integer. In this case, interpolation (e.g., linear interpolation) is employed to export the rotated volume as digital images with the same pixel pitch.
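A minimal sketch of this rotation step is given below, assuming the volume has first been resampled to isotropic voxels (the lateral pixel size and the axial step differ in the raw stacks) and is stored as a gray-scale array ordered (y, x, z); for full-color data the function is applied to each color channel. Because the stage rotates about an axis parallel to x, every y–z slice undergoes the same in-plane rotation of Eq. (1), so the volume can be rotated slice by slice with 2D interpolation.

```matlab
% Minimal sketch of 3D-data rotation about the x axis (isotropic voxels assumed).
function Vr = rotate_volume_x(V, angleDeg)
    % V: gray-scale volume ordered (y, x, z). To undo a stage rotation of
    % sigma_x, call with angleDeg = -sigma_x.
    [ny, nx, nz] = size(V);
    a = deg2rad(angleDeg);

    % 'Loose' output bounding box, cf. Eq. (2)
    nyr = ceil(abs(ny * cos(a)) + abs(nz * sin(a)));
    nzr = ceil(abs(ny * sin(a)) + abs(nz * cos(a)));

    % Output grid centered on the rotation axis
    [zq, yq] = meshgrid((1:nzr) - (nzr + 1) / 2, (1:nyr) - (nyr + 1) / 2);

    % Inverse mapping: rotate the output grid back and sample the input slice
    ys =  cos(a) * yq + sin(a) * zq + (ny + 1) / 2;
    zs = -sin(a) * yq + cos(a) * zq + (nz + 1) / 2;

    Vr = zeros(nyr, nx, nzr);
    for ix = 1:nx
        slice = squeeze(double(V(:, ix, :)));                 % one y-z slice
        Vr(:, ix, :) = permute(interp2(slice, zs, ys, 'linear', 0), [1 3 2]);
    end
end
```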
The third step is 3D-data stitching. The method we use is point-to-point pixel stitching. To achieve the 3D-data stitching, we need to find the corresponding coordinates of the reference and the rotated volumes. For example, if the feature point (x1, y1, z1) in Fig. 4(a) corresponds to the feature point (x2, y2, z2) in Fig. 4(b), the coordinate difference between the two volumes can be calculated as (Δx, Δy, Δz). Directly finding matched feature points of the two volumes in 3D is not straightforward. To address this issue, we calculate their maximum intensity projection (MIP) images. For example, Figs. 4(c) and 4(d) present the MIP images along the z axis of the volume data in Figs. 4(a) and 4(b), respectively. By matching image feature points, Δx and Δy can be calculated directly. Similarly, Δz can be obtained by projecting the volumes along the x axis. After these operations, the two volumes are stitched in 3D by selecting, at each voxel, the maximum value among the different views.
Fig. 4.
Principle of finding the pixel-point correspondence. (a) The volume data set obtained with the stage level (untilted). (b) The volume data set shown in Fig. 3(b). (c) The MIP image of the volume data in (a). (d) The MIP image of the volume data in (b).
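The translational part of this registration can also be estimated without hand-picked feature points. The sketch below is an alternative given under the stated assumptions: it finds the offset between two MIP images from the peak of their cross-correlation, after which the aligned volumes are fused by a voxel-wise maximum. The feature-point matching described above plays the same role in our processing.

```matlab
% Minimal sketch: estimate the (row, col) shift between two MIP images from the
% peak of their FFT-based cross-correlation. This correlation variant is an
% assumption used to keep the example self-contained; feature-point matching
% can be used instead.
function d = mip_shift(Aref, Amov)
    sz = max(size(Aref), size(Amov));                 % pad to a common size
    A = zeros(sz);  A(1:size(Aref, 1), 1:size(Aref, 2)) = double(Aref);
    B = zeros(sz);  B(1:size(Amov, 1), 1:size(Amov, 2)) = double(Amov);
    A = A - mean(A(:));  B = B - mean(B(:));
    c = real(ifft2(fft2(A) .* conj(fft2(B))));        % circular cross-correlation
    [~, k] = max(c(:));
    [dy, dx] = ind2sub(sz, k);
    d = [dy dx] - 1;
    wrap = d > sz / 2;                                % resolve wrap-around
    d(wrap) = d(wrap) - sz(wrap);
end

% Usage sketch for volumes ordered (y, x, z): z-axis MIPs give (dy, dx),
% x-axis MIPs give dz.
%   dyx = mip_shift(max(Vref, [], 3), max(Vrot, [], 3));
%   dyz = mip_shift(squeeze(max(Vref, [], 2)), squeeze(max(Vrot, [], 2)));
%   dz  = dyz(2);
% After embedding both volumes in a common canvas shifted by (dyx, dz), the
% fused volume is the voxel-wise maximum: F = max(Vref_c, Vrot_c).
```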
3. RESULTS AND DISCUSSION
To demonstrate the performance of the multi-view fusion method based on optical sectioning images, we used a convex star on a Chinese commemorative coin as a test object and imaged it with our C-SIM system. The convex star resembles a pentagram with five straight ridges, along which the surface rises gradually to its peak at the center.
Volumetric image data set acquisition was implemented with our multi-view C-SIM system using a low-NA objective (10×, NA = 0.3). Five stacks of optically sectioned images from five angles of view were acquired by controlling the hexapod stage; their MIP images are shown in Fig. 5. For view 1, shown in Fig. 5(a), 150 raw images are captured, composing the sectioned images of 50 layers at an axial step of 1000 nm and covering a depth of 49 μm. The acquisition time is thus nearly 3.0 s, given a 10 ms exposure time and 0.044 ms DMD switching time for each raw image and a 30 ms z-stage settling time for each layer. For views 2–5, shown in Figs. 5(c)–5(f), each optical sectioning depth is set to 94 μm, resulting in an acquisition time of 5.7 s per view. Including the rotation time of the hexapod stage for every angle of view (about 4 s), the total acquisition time is 53.8 s.
Fig. 5.

MIP images of the same convex star from multiple view angles. (a) The rotation angle is selected as σx = 0 deg, σy = 0 deg. (b) Diagram of the convex star, which can be simplified as a pentagram consisting of ten inclinations (①–⑩). (c) The rotation angle is selected as σx = −10 deg, σy = 0 deg. (d) The rotation angle is selected as σx = 10 deg, σy = 0 deg. (e) The rotation angle is selected as σx = 0 deg, σy = −10 deg. (f) The rotation angle is selected as σx = 0 deg, σy = 10 deg.
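These timings follow directly from the per-frame overheads; the short check below reproduces them, with the 95-layer count for each 94 μm stack assumed from the 1 μm axial step.

```matlab
% Back-of-the-envelope check of the reported acquisition times (the 95-layer
% count for the 94 um stacks is an assumption based on the 1 um axial step).
t_exp   = 10;       % camera exposure per raw image, ms
t_dmd   = 0.044;    % DMD pattern switching, ms
t_z     = 30;       % z-stage settling per layer, ms
t_layer = 3 * (t_exp + t_dmd) + t_z;      % three phase-shifted images per layer

t_view1  = 50 * t_layer / 1000            % ~3.0 s for the 49 um stack (view 1)
t_viewN  = 95 * t_layer / 1000            % ~5.7 s for each 94 um stack (views 2-5)
t_stacks = t_view1 + 4 * t_viewN          % ~25.8 s of pure imaging time
% The remaining ~28 s of the reported 53.8 s total is hexapod rotation and
% repositioning overhead between the five views.
```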
To analyze the benefit of our approach, we compare the result of view 1 with the fusion result. The MIP images of the two results are shown in Figs. 6(a) and 6(d). Visualization 1 presents the rendered 3D full-color images with and without our multi-view fusion method. Figures 6(b) and 6(e) show the height maps of Figs. 6(a) and 6(d), respectively. For view 1, much of the highly inclined surface topography fails to be collected by the objective lens, resulting in a great loss of image and height information [Figs. 6(a) and 6(b)]. With our system and the developed algorithm, the missing information is mostly recovered in the fused volume, especially at the lower right corner of the star [Figs. 6(d) and 6(e)]. For comparison, we focus on the same region of the convex star. Figure 6(f) presents the profile along the solid purple line in Fig. 6(e), revealing two peaks on the surface. Because of the missing height information, only discontinuous height data are observed in Fig. 6(c). Ideally, the highest detectable local slope θmax is set by the collection limit of the objective, satisfying NA > sin θmax; hence the maximal local slope θmax is 17.5 deg when using an NA = 0.3 objective. With the fusion method, however, we detect local slopes of 27.5 deg, comparable to using an objective lens with an NA of 0.46. If we approximate the convex star as a pentagram consisting of ten inclined facets [①–⑩, Fig. 5(b)], the angle between each inclination and the horizontal plane can easily be calculated from the spatial coordinates of points A–I and point O. The angles between inclinations ⑤ and ⑥ and the horizontal plane are the largest, so the fused result shows the most pronounced improvement on inclinations ⑤ and ⑥ compared with view 1.
Fig. 6.
Comparison between the result of view 1 and the fusion result from five perspectives of a convex star on a Chinese commemorative coin. (a) MIP image of view 1 (see the left half of Visualization 1). (b) The 3D height map of (a). (c) Profile along the purple line-scan of (b). (d) MIP image of this convex star’s fusion results from five perspectives (see the right half of Visualization 1). (e) 3D height map of (d). (f) Profile along the purple line-scan of (e).
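The slope figures above follow from a simple geometric argument: a specular facet is detectable when its tilt is within the half collection angle arcsin(NA), and rotating the sample by σ extends this limit to arcsin(NA) + σ, equivalent to a synthetic NA of sin(arcsin(NA) + σ). A short numerical check:

```matlab
% Slope-range arithmetic for the multi-view scheme (a sketch of the geometry
% argument; no vignetting or pupil effects are considered).
NA    = 0.3;
sigma = 10;                        % stage rotation used for views 2-5, deg

theta0   = asind(NA)               % 17.5 deg: slope limit of the bare objective
thetaMax = theta0 + sigma          % 27.5 deg: limit with +/-10 deg views
NAsyn    = sind(thetaMax)          % ~0.46: equivalent synthetic NA

% With the full +/-19 deg travel of the hexapod, the same formula gives the
% 36.5 deg (NA = 0.3) and 45.7 deg (NA = 0.45) limits quoted in the discussion.
```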
To further demonstrate the application value of our method, we tested a biological sample, the left compound eye of a leaf beetle (Galeruca sp.), with our C-SIM system using a high-NA objective (10×, NA = 0.45). The compound eye is one of the most important organs of insects; it consists of thousands of individual optical units, or ommatidia [22,23]. Each ommatidium, and the compound eye as a whole, is a typical sample with a highly curved surface.
Five stacks of optically sectioned images from five angles of view, the same as in the above example, were acquired by rotating the hexapod stage. For view 1, the optical sectioning depth is 84 μm and the data acquisition time is 5.1 s. For views 2–5, each optical sectioning depth is 94 μm and each data acquisition time is 5.7 s. Adding the rotation time, the total data acquisition time is 55.9 s.
A better result is then recovered with our fusion method. To analyze the benefit of our approach, we compare the result of view 1 with the fusion result. The MIP images of the two results are shown in Figs. 7(a) and 7(d). Visualization 2 presents the rendered 3D full-color images with and without our multi-view fusion method. Figures 7(b) and 7(e) show the height maps of Figs. 7(a) and 7(d), respectively. For view 1, much of the highly curved surface topography fails to be collected by the objective lens, resulting in a great loss of image and height information [Figs. 7(a) and 7(b)]. With our system and the developed algorithm, the missing information is mostly recovered in the fused volume, especially at the edge of the compound eye [Figs. 7(d) and 7(e)]. As a result, the number of ommatidia per eye can be counted more accurately. The completed surface topography also yields credible profiles of the sample for data analysis. For example, Fig. 7(f) presents the profile along the solid purple line in Fig. 7(e), in which eight ommatidia can be identified. The radius of curvature of the whole compound eye is about 1.02 mm, and that of a single ommatidium is about 0.018 mm.
Fig. 7.
Comparison between the result of view 1 and the fusion result from five perspectives of a left compound eye of a leaf beetle (Galeruca sp.). (a) MIP image of the compound eye of view 1 (see the left half of Visualization 2). (b) 3D height map of (a). (c) Profile along the purple line-scan of (b). (d) MIP image of this compound eye’s fusion results from five perspectives (see the right half of Visualization 2). (e) 3D height map of (d). (f) Profile along the purple line-scan of (e).
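The radii of curvature quoted above can be estimated from such a profile with a least-squares circle fit. The sketch below uses the algebraic (Kåsa) fit; the coordinate vectors are placeholders, and this illustrates one possible analysis rather than the exact procedure. Fitting the full profile gives the radius of the whole eye, while fitting the short arc across a single ommatidium gives its local radius.

```matlab
% Minimal sketch: algebraic (Kasa) least-squares circle fit to a height profile,
% returning the radius of curvature. y: lateral coordinate, h: height (same units).
function R = fit_radius(y, h)
    y = y(:);  h = h(:);
    % Circle model y.^2 + h.^2 + D*y + E*h + F = 0 is linear in D, E, F
    A   = [y, h, ones(size(y))];
    b   = -(y.^2 + h.^2);
    par = A \ b;                              % least-squares solution [D; E; F]
    yc  = -par(1) / 2;                        % circle center
    hc  = -par(2) / 2;
    R   = sqrt(yc^2 + hc^2 - par(3));         % radius of curvature
end
```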
The capability of our system has been demonstrated using two test objects: a convex star on a Chinese commemorative coin and the compound eye of a beetle. We show that the system can image both smooth and rough surfaces. Although our multi-view C-SIM is a powerful means of obtaining full-natural-color 3D data of steep features on millimeter-size samples, several issues remain to be addressed.
First, the acquisition speed could be much higher with a more advanced and faster motion stage. During image collection, the most time-consuming operation is rotating the specimen and returning it to its initial position; the present stage needs 4 s to rotate 10 deg around the x or y axis. With a higher-performance stage, the image acquisition time would be reduced significantly. For example, a stage with a 1 s rotation time for 10 deg and a 10 ms reset time would decrease the acquisition time to 29.5 s, about 45% shorter than the 53.8 s total data acquisition time of the convex star. Second, the rotation angle of the stage (10 deg) limits the maximum surface slope that can be detected; without this restriction, slopes with angles close to 90 deg could readily be detected with a relatively low-NA objective. Third, the CMOS camera used in our microscope has a bit depth of 8 bits, i.e., 256 gray levels, and the optical sectioning and decoding algorithm further reduce the number of gray levels in the images. This loss of image information cannot be recovered by image processing. It is therefore advisable to work with a higher-dynamic-range camera in order to retain more information from an image; for example, a CMOS camera with a bit depth of 16 bits, i.e., 65,536 gray levels, would be much preferable.
To overcome the slope limit in reflectance microscopy, high-NA objectives are often adopted to provide a larger collection angle, at the cost of a shorter working distance and a smaller field of view. With our approach, we achieve complete measurement of high surface slopes using an objective lens with NA = 0.45; the synthetic collection angle is comparable to that of an NA = 0.6 objective, which is not commercially available (10×, NA = 0.6). The maximum surface slope angle that can be measured with the actual configurations is 36.5 deg for the NA = 0.3 objective and 45.7 deg for the NA = 0.45 objective. In conclusion, we achieve the detection of highly curved or tilted surfaces without sacrificing resolution, magnification, working distance, or field of view. In addition, our fusion algorithm provides a new route to the 3D stitching of multi-view optical sectioning images.
4. CONCLUSION
In summary, we present a C-SIM scheme with a hexapod stage based on DMD fringe projection, which is of practical value for acquiring multi-view, natural-color 3D data of light reflected from object surfaces. In addition, a multi-view image fusion algorithm for reconstructing a single 3D image from multiple views has been demonstrated. Using this microscope, we can break the NA limitation, obtain more complete surface topography information of samples with highly curved or tilted surfaces, and reconstruct a full-color 3D optical sectioning image with high resolution. This technique may find applications in fields such as biology, materials science, and microelectronics, where the topography of high surface slopes and natural color information play crucial roles.
Visualization 1.
Visualization 2.
Funding.
National Key Research and Development Program of China (2017YFC0110100); National Natural Science Foundation of China (11474352, 61522511, 81427802); National Institutes of Health (GM100156).
REFERENCES
- 1. de Groot P, “Principles of interference microscopy for the measurement of surface topography,” Adv. Opt. Photon. 7, 1–65 (2015).
- 2. Wiesner B, Hybl O, and Häusler G, “Improved white-light interferometry on rough surfaces by statistically independent speckle patterns,” Appl. Opt. 51, 751–757 (2012).
- 3. Udupa G, Singaperumal M, Sirohi RS, and Kothiyal MP, “Characterization of surface topography by confocal microscopy: I. Principles and the measurement system,” Meas. Sci. Technol. 11, 305–314 (2000).
- 4. Semler EJ, Tjia JS, and Moghe PV, “Analysis of surface microtopography of biodegradable polymer matrices using confocal reflection microscopy,” Biotechnol. Prog. 13, 630–634 (1997).
- 5. Mattison SP, Mondragon E, Kaunas R, and Applegate BE, “Hybrid nonlinear photoacoustic and reflectance confocal microscopy for label-free subcellular imaging with a single light source,” Opt. Lett. 42, 4028–4031 (2017).
- 6. Ghanta S, Jordan MI, Kose K, Brooks DH, Rajadhyaksha M, and Dy JG, “A marked Poisson process driven latent shape model for 3D segmentation of reflectance confocal microscopy image stacks of human skin,” IEEE Trans. Image Process. 26, 172–184 (2017).
- 7. Kemper B and von Bally B, “Digital holographic microscopy for live cell applications and technical inspection,” Appl. Opt. 47, A52–A61 (2008).
- 8. Kemper B, Bauwens A, Langehanenberg AVSKP, Karch JMH, and von Bally G, “Label-free quantitative cell division monitoring of endothelial cells by digital holographic microscopy,” J. Biomed. Opt. 15, 036009 (2010).
- 9. Tahara T, Quan X, Otani R, Takaki Y, and Matoba O, “Digital holography and its multidimensional imaging applications: a review,” Microscopy 67, 55–67 (2018).
- 10. Bartl G, Krystek M, Nicolaus A, and Giardini W, “Interferometric determination of the topographies of absolute sphere radii using the sphere interferometer of PTB,” Meas. Sci. Technol. 21, 115101 (2010).
- 11. Staymates M, Fletcher R, Staymates J, Gillen G, and Berkland C, “Production and characterization of polymer microspheres containing trace explosives using precision particle fabrication technology,” J. Microencapsulation 27, 426–435 (2010).
- 12. Mauch F, Lyda W, Gronle M, and Osten W, “Improved signal model for confocal sensors accounting for object depending artifacts,” Opt. Express 20, 19936–19945 (2012).
- 13. Antón JCM, Alonso J, and Pedrero JAG, “Topographic optical profilometry of steep slope micro-optical transparent surfaces,” Opt. Express 23, 9494–9507 (2015).
- 14. Dai Y, Liao W, Zhou L, Chen S, and Xie X, “Ion beam figuring of high-slope surfaces based on figure error compensation algorithm,” Appl. Opt. 49, 6630–6636 (2010).
- 15. Fay MF, de Lega XC, and de Groot P, “Measuring high-slope and super-smooth optics with high-dynamic-range coherence scanning interferometry,” in Optical Fabrication and Testing (OF&T), OSA Technical Digest (Optical Society of America, 2014), paper OW1B.3.
- 16. Liu J, Liu C, Tan J, Yang B, and Wilson T, “Super-aperture metrology: overcoming a fundamental limit in imaging smooth highly curved surfaces,” J. Microsc. 261, 300–306 (2016).
- 17. Qian J, Lei M, Dan D, Yao B, Zhou X, Yang Y, Yan S, Min J, and Yu X, “Full-color structured illumination optical sectioning microscopy,” Sci. Rep. 5, 14513 (2015).
- 18. Qian J, Dang S, Wang Z, Zhou X, Dan D, Yao B, Tong Y, Yang H, Lu Y, Chen Y, Yang X, Bai M, and Lei M, “Large-scale volumetric imaging of insects with natural color,” Opt. Express 27, 4845–4857 (2019).
- 19. Nguyen CV, Lovell DR, Adcock M, and La Salle J, “Capturing natural-colour 3D models of insects for species discovery and diagnostics,” PLoS One 9, e94346 (2014).
- 20. Atsushi K, Sueyasu H, Funayama Y, and Maekawa T, “System for reconstruction of three-dimensional micro objects from multiple photographic images,” Computer Aided Design 43, 1045–1055 (2011).
- 21. Preibisch S, Saalfeld S, Schindelin J, and Tomancak P, “Software for bead-based registration of selective plane illumination microscopy data,” Nat. Methods 7, 418–419 (2010).
- 22. Lee LP and Szema R, “Inspirations from biological optics for advanced photonic systems,” Science 310, 1148–1150 (2005).
- 23. Singh SP and Mohan L, “Variations in the ommatidia and compound eyes of three species of mosquito vectors,” J. Entomol. Zool. Stud. 1, 16–21 (2013).