Abstract
Optical superposition natural compound eyes (OSNCEs) allow insects that are active during both day and night to thrive in varying light conditions thanks to their unique anatomical structures. This provides a blueprint for optical superposition artificial compound eyes (OSACEs) that can adapt to different illumination intensities. However, OSACEs have received limited research attention until recently, with most studies focusing on apposition compound eyes that operate only in bright light. In this work, we accurately replicate the anatomical features and the ganglia adjustments of OSNCEs using lensed plastic optical fibers as artificial ommatidia. As the core of this work, we implement a spatial approach alongside a temporal approach, realized in both hardware and algorithms, to accommodate illumination variations of up to 1000 times while maintaining high imaging performance, including a 180° field of view, minimal distortion, a nearly infinite depth of field, and ultrafast motion detection. These adaptive biomimetic features make the OSACE very promising for surveillance, virtual reality, and unmanned aerial vehicles.
Optical superposition compound eye mimics natural structures and ganglia, enabling perception over a 1000× light intensity range.
INTRODUCTION
Robert Hooke’s pioneering investigation of the cornea in a gray drone fly laid the groundwork for extensive studies on compound eyes (1). Following this, S. Exner classified natural compound eyes (NCEs) into apposition and superposition types based on the length of the crystalline tract, which includes a crystalline cone and a clear zone (Fig. 1, A to C, and fig. S1, A to C) (2). In apposition compound eyes of diurnal insects such as bees, pigment cells play a crucial role in image processing. Light entering through the facet lens at a narrow angle [~1° for bees (3)] passes through the short crystalline tract, resulting in an inverted image at the distal end of the rhabdom. This structure acts as a light guide, facilitating the conversion of light into neural signals within the photoreceptor cells (4). The pigment cells prevent cross-talk between adjacent ommatidia, allowing the fused rhabdom to selectively process light from its corresponding facet lens. Consequently, apposition compound eyes achieve high spatial resolution but sacrifice light sensitivity, limiting them to daytime applications with strong illumination. In contrast, nocturnal insects, such as moths, require different visual adaptations due to their activity in low-light environments (Fig. 1A). Those with optical superposition NCEs (OSNCEs), a subtype of superposition NCEs, exhibit anatomical structures similar to those of diurnal insects but possess longer crystalline tracts. In low-light conditions, OSNCEs employ various spatial and temporal approaches that allow them to adapt to light intensities more than 100,000 times dimmer (5).
Fig. 1. Concept and principle of the OSACE that uses lensed optical fibers to mimic natural counterparts.
(A) Moths have NCEs that can work during both day and night. (B) During the day, each facet lens with a half aperture angle μ, an aperture diameter A, and a focal length f focuses light on the corresponding rhabdom through a long crystalline tract. Each rhabdom then transmits the signal to the medulla and lobula through synaptic connections within lamina cartridges. (C) At night, pigment migration allows light from neighboring ommatidia to focus on one rhabdom with an effective half aperture angle μ_eff, an effective aperture diameter A_eff, and an effective focal length f_eff. Lateral neural connections between rhabdoms facilitate the Gaussian spatial summation of signal outputs. (D and E) Schematic diagram of a moth’s NCE imaging results during the day (D) and night (E). (F) The OSACE effectively replicates the natural structures during the daytime by using microlenses to emulate facet lenses and crystalline cones, optical fiber cores to simulate the rhabdoms, optical fiber claddings to mimic the pigment cells, an imaging lens to replicate the synaptic units that convey signals to deeper neural centers, photodetectors to mimic the photoreceptor cells, and a flat imaging sensor chip to emulate the deeper neural centers (medulla and lobula), where initial signal processing occurs. Subsequently, the signals are conveyed to the central processing unit (CPU) of a computer for advanced analysis. (G) The OSACE effectively emulates its natural counterpart at night by transitioning from the scatter mode to the focus mode and the binning mode, mirroring the phenomenon of pigment migration, and by facilitating lateral interactions among the captured spots on the planar imaging sensor chip to replicate the lateral neural connections.
Most artificial compound eyes (ACEs) primarily mimic apposition compound eyes because of the relative ease of replication (6). While planar microlens arrays inherently offer a limited field of view (FOV) (7–9), their versatile designs, which combine an array of subimages into a single final image, can be applied to either static (10) or dynamic (11) perception scenarios. Advancements in curved microlens arrays with distributed photodetector cells have improved the FOV. However, these designs are challenging to fabricate and struggle to achieve simultaneous static imaging and dynamic motion detection (12–15). Curved microlens arrays paired with planar photodetector cells simplify fabrication. The use of femtosecond laser two-photon polymerization (FL-TPP) to fabricate logarithmic ommatidia presents a promising approach for achieving high-quality imaging with a 90° FOV (16). In addition, incorporating light guides can further extend the FOV but faces issues such as overlapping information between neighboring ommatidia due to a large numerical aperture and substantial light transmission loss (17). Although metasurfaces present a promising alternative, their FOV cannot reach 180°. In addition, ohmic losses can reduce their optical efficiency (18), and the focusing efficiency is notably lower at the edges than at the center [e.g., 45% at an incident angle of 85° compared with 82% at normal incidence (19)].
Emulating superposition compound eyes presents substantial challenges, as it requires not only the accurate replication of anatomical structures but also the enhancement of light sensitivity through spatial and temporal methods. Consequently, very few research efforts have been devoted to mimicking superposition NCEs. For instance, the Gabor superlens, which uses an array of Keplerian telescopes, was designed to achieve superposition; however, its FOV is restricted to less than 90° due to its inherently planar design (20, 21). Similar to apposition ACEs, the FOV of superposition ACEs could be expanded by arranging gradient refractive index (GRIN) lenses along a curved surface (22). However, the complex geometric relationships of GRIN lenses confine them to a two-dimensional (2D) plane, preventing the realization of truly 3D superposition compound eyes. While single-pixel computational imaging technology can reconstruct 3D light information to create a 3D artificial superposition compound eye, it relies on a series of coded illumination patterns, which limits its effectiveness in natural lighting conditions and precludes real-time imaging (23). Amid these challenges, no ACE has successfully adapted to varying light levels by altering its internal structure and data processing methods in the manner that OSNCEs do.
In this study, we propose the use of lensed plastic optical fibers to faithfully mimic the anatomical structure of NCEs, enabling a 180° FOV for light perception. By integrating spatial and temporal approaches implemented in both hardware and algorithms, the optical superposition artificial compound eye (OSACE) achieves distortion-free static panoramic imaging with a nearly infinite depth of field, as well as ultrafast motion detection, under varying light conditions (see movie S1). Practically, the OSACE holds promising potential for applications in diverse fields such as surveillance, virtual reality, autonomous driving, and unmanned aerial vehicles. Moreover, it serves as a foundational element for the future development of superposition ACE technologies.
RESULTS
Working principles
In daylight, the abundant illumination enables each facet lens to project light into its corresponding rhabdom through the crystalline tract (Fig. 1B and fig. S1B). The presence of pigment helps prevent cross-talk between neighboring ommatidia. As a result, the half aperture angle μ, which determines the efficacy of light acceptance, is calculated as μ = A/(2f), where A represents the aperture diameter and f denotes the focal length of the facet lens (24, 25). Subsequently, the signals from each rhabdom can be transmitted to deeper brain regions, such as the medulla and lobula, through synaptic connections within the lamina cartridges.
Conversely, in dimly lit environments, conventional imaging methods face challenges. Nocturnal insects obtain vision enhancement via two strategies: spatial and temporal approaches. The spatial approach migrates pigment between the crystalline cone and the clear zone to facilitate light passage through the clear zone to reach the underlying rhabdom (Fig. 1C and fig. S1C) (26, 27). Consequently, the effective half aperture angle is defined as μ_eff = A_eff/(2f_eff), with A_eff and f_eff representing the effective aperture diameter and the effective focal length, respectively (24, 25). Furthermore, lateral neural connections characterized by a Gaussian spatial summation function among rhabdoms promote the integration of signal outputs into individual cells (Fig. 1C and fig. S1C). On the other hand, the temporal approach simply extends the integration time to capture more photons, thereby enhancing imaging quality.
Here, the OSACE mirrors the OSNCE’s adaptation to different light conditions. In bright environments, the imaging lens scatters the intense light from each optical fiber (referred to as the scatter mode) so that adjacent spots overlap slightly on the complementary metal-oxide semiconductor (CMOS) chip. Consequently, a notable portion of the photodetectors on the CMOS chip can be activated, directly forming an image (Fig. 1, D and F, and fig. S1, D and F). Conversely, in dim environments, spatial and temporal approaches are used to address the insufficient illumination. In the spatial approach, the imaging lens focuses light onto specific isolated photodetectors, enhancing the light intensity at these locations (referred to as the focus mode). In addition, the binning mode combines signals from adjacent activated photodetectors to further boost the light intensity, similar to how the OSNCE focuses light from neighboring facet lenses onto a single rhabdom. However, the activation of isolated spots on the CMOS chip does not produce a complete image. Inspired by the OSNCE, the OSACE applies Gaussian spatial summation among these spots to create a full image (Fig. 1, E and G, and fig. S1, E and G). A detailed theoretical analysis, along with optical simulations, is conducted to validate the proposed working principle (see the “Theoretical model and optical simulations” section). In the temporal approach, the slight overlap is maintained (i.e., the scatter mode); as in the OSNCE, increasing the exposure time helps capture more photons, contributing to the formation of a clearer image.
Assembled device
On the basis of our previous analysis (4), conical microlensed plastic optical fibers with a half-apex angle of θ = 35° emerge as an optimal selection. We propose a fabrication process flow that consists of a sequence of 3D printing, electroplating, and two molding steps to produce the conical microlensed plastic optical fibers in batches (Fig. 2A). In assembly, 271 lensed plastic optical fibers are threaded into a 3D printed perforated dome with a diameter of 14 mm, positioning the lensed ends on the dome surface (Fig. 2, B and C). The other, bare fiber ends are inserted into a perforated flat buncher, projecting light onto a flat imaging sensor via an imaging lens. The dome, the buncher, the imaging lens, and the sensor chip are enclosed in a threaded hollow tube (Fig. 2C). The lensed plastic fibers function as light guides to transmit light from the microlenses on the dome to the corresponding positions in the buncher plane. This setup facilitates the transmission of light of different intensities collected from the curved surface of the dome to a flat image sensor (Fig. 2D), enhancing the capability to capture both static and dynamic light information across different lighting conditions.
Fig. 2. Operating principles of the OSACE.
(A) Scanning electron microscopy (SEM) image of the conical microlensed plastic optical fiber. (B) Top view of the OSACE light-receiving head with a 3D printed dome that hosts the 271 lensed fiber ends. (C) 3D view of an assembled OSACE. (D) Concept of image formation. Under a varying intensity of illumination, a “+” line-art object pattern is projected onto the camera head. Certain fibers then receive light and transmit this pattern from the lensed end to the opposite end of the fiber. In conditions of sufficient illumination, the imaging lens is operated in the scatter mode to project the overlapping spots onto a flat imaging sensor chip. Conversely, under weak light conditions, the imaging lens is switched to the focus mode, resulting in discrete spots on the chip, which subsequently undergo lateral interactions. As a result, a final digital image can be generated across varying light intensities.
Experimental results
Here, we evaluated the static and dynamic performance of the OSACE under various lighting conditions. First, we confirmed the OSACE’s 180° FOV. Laser spots are projected at angles from 90° to 0° in 22.5° intervals along both the x and y axes. With the imaging lens in the scatter mode, the captured spots exhibit a high degree of uniformity in size, brightness, and angular position, as shown in the combined result (Fig. 3A) and the individual images (fig. S2). The expansive 180° FOV provides a broader range of light information for both static and dynamic perception.
Fig. 3. Static imaging and depth estimation of the OSACE.
(A) Combined image of a laser spot captured from nine angles (−90° to 90° in both x and y directions, 22.5° increment). The center of each recorded laser spot image is overexposed, appearing as a white spot. (B) Depth estimation based on the linear correlation between the point spread parameter σ and the reciprocal of object distance (u⁻¹). Sample images at u1 = 3 mm, u2 = 5 mm, u3 = 7 mm, and u4 = 9 mm are delineated within the dotted box. For each image obtained at these distances, gray values along five vertical edge lines (two shown in pink) in the x direction are analyzed to calculate the mean and error, as shown in (H) and (J). DL denotes the distance from a point on the pink line to the image’s upper boundary. (C to F) Images of the letter F under different conditions. (D) Real-time images without grayscale mapping, Gaussian, or temporal enhancement. (G) Information entropy of images under different light intensities; SD < 0.01; mean and error from five repeated tests. (H) Information entropy using spatial or temporal processing at 0.1 lux, with results averaged over five tests. (I) Structural similarity index measure (SSIM) of images at various light intensities, SD < 0.03; based on three repeated experiments. (J) Relationship between σ and u⁻¹ under different light intensities. (K) Ratio of edge pixels to total nonzero pixels versus u⁻¹ at 0.1 lux. (L) Relationship between σ and u⁻¹ under different exposure times at 0.1 lux. a.u., arbitrary unit.
Second, the static direct imaging performance was tested under different light conditions. A letter “F” is placed at different distances to test the OSACE’s direct imaging ability (red lines in Fig. 3B). When the light intensity is adequate (100 lux), the imaging results are clear (Fig. 3C and fig. S3A). As the light becomes weaker, the OSACE can still capture the object down to a light intensity of about 10 lux (figs. S3B and S4A for about 50 lux and figs. S3C and S4B for about 10 lux). If the light intensity drops to 0.1 lux, then the low signal smears the images. Just as the OSNCE does, we can adopt both the spatial and temporal approaches to overcome this problem. In the spatial approach, the focus mode and the binning mode are adopted to improve the local light intensity, mimicking the pigment migration of NCEs. To recover the discrete spots (Fig. 3D) into a full image, a Gaussian spatial summation among these spots is added to the raw image (Fig. 3E and fig. S3D and Materials and Methods) to mimic the lateral neural connections of OSNCEs. In the temporal approach, we did not switch to the focus mode and the binning mode. Instead, in the scatter mode, we simply increased the exposure time from 15 to 75, 150, and 300 ms (Fig. 3F and figs. S3, E to G, and S4, C and D), markedly improving the clarity of the raw images.
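In our setup, the longer integration time is realized directly through the sensor’s exposure setting. As a software analog for illustration only (an assumption, not the method used here), consecutive short-exposure frames can be accumulated, which approximates a single long exposure apart from the additional per-frame read noise; a minimal Python sketch is given below.

```python
import numpy as np

def accumulate_frames(frames):
    """Software analog of a longer integration time: sum N short-exposure frames
    (e.g., twenty 15-ms frames to approximate a single 300-ms exposure) and clip
    back to the 8-bit range."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame in frames:
        acc += frame
    return np.clip(acc, 0, 255).astype(np.uint8)
```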
To quantitatively assess the imaging performance under varying lighting intensities, we introduce the concept of information entropy (see the “Information entropy” section). This metric characterizes the level of randomness or uncertainty present in an image, essentially reflecting the amount of information conveyed within the image (28). Consequently, higher information entropy in Fig. 3G indicates a greater transmission of information, thus suggesting better image quality. Low errors demonstrate the robustness of the OSACE in capturing images under different conditions. In addition, despite a decrease in light intensity, there remains a discernible downward trend in the information entropy with a longer object distance. Even at 10 lux, the decreasing trend of information entropy remains apparent, signifying effective object capture under this illumination. However, at 0.1 lux, the resulting information entropy value becomes too low, making it imperative to enhance the image quality. As depicted in Fig. 3H, using the spatial approach noticeably elevates the information entropy at 0.1 lux (the black diamond curve) to a level comparable to that of 100 lux in Fig. 3G. Increasing the exposure time also proves effective in improving the information entropy (Fig. 3H). At an exposure time of 600 ms, the information entropy levels become comparable to those of 100 lux in Fig. 3G. Furthermore, the adjustment of exposure time or the adoption of the spatial approach can not only be performed manually but can also potentially be triggered autonomously based on real-time lighting conditions (see the “Automation of the temporal and spatial approaches” section).
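The exact definition and trigger threshold are given in the supplementary “Information entropy” and “Automation of the temporal and spatial approaches” sections. As an illustration only, a minimal Python sketch of the standard Shannon image entropy, together with a hypothetical threshold rule (the threshold value is a placeholder, not the one used in our implementation), is:

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy H = -sum(p_i * log2 p_i) of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins so 0*log(0) is treated as 0
    return float(-np.sum(p * np.log2(p)))

ENTROPY_THRESHOLD = 1.0                # placeholder; the actual threshold is device specific

def spatial_approach_needed(img):
    """Trigger the focus/binning/Gaussian pipeline when the scene is too dim."""
    return image_entropy(img) < ENTROPY_THRESHOLD
```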
Because the structural similarity index measure (SSIM) evaluates image similarity by taking into account structural information (e.g., edge sharpness), contrast, and brightness, we adopt it as a comprehensive metric to complement the existing evaluations (see the “Structural similarity index measure” section). We compare the SSIM under different illumination levels (Fig. 3I). At each illumination level, the SSIM decreases sharply as the distance increases, indicating a rapid decline in image similarity. This is because, with increasing distance, the recorded object becomes smaller, leading to a reduction in the structural similarity term s. Similarly, at each distance, the SSIM also drops markedly as the illumination level decreases. This is due to the lower recorded gray values under dimmer lighting, which reduce the luminance similarity term l. When the illumination is reduced to 0.1 lux, both the spatial (fig. S14A) and temporal (fig. S14B) approaches are applied. Both approaches effectively improve the SSIM, demonstrating their feasibility. Specifically, for the temporal approach, although the SSIM still decreases with increasing distance at a fixed exposure time, increasing the exposure time tends to improve the SSIM. The capability of direct imaging under varying light intensities renders the OSACE more adaptable to applications that must operate in both indoor and outdoor environments, such as robotics.
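The reference images and preprocessing used for the SSIM evaluation are detailed in the supplementary “Structural similarity index measure (SSIM)” section. Assuming scikit-image is available, a minimal sketch of the comparison is:

```python
from skimage.metrics import structural_similarity

def image_ssim(captured, reference):
    """SSIM between a captured grayscale frame and a same-sized 8-bit reference
    image, combining luminance, contrast, and structure terms."""
    return structural_similarity(captured, reference, data_range=255)
```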
Third, the distance estimation ability was assessed through the captured images. The gray values along the vertical edge direction at various distances (illustrated by the pink lines in Fig. 3B) are analyzed. For each condition, five lines are examined to derive the mean value and the associated errors. This analysis elucidates the correlation between the point spread parameter σ (derived from the edge grayscale gradient) and the reciprocal of the object distance u⁻¹ (Fig. 3J). Theoretically, this correlation is expected to exhibit a linear trend based on our previous research findings (4). Notably, when the light intensity is high (e.g., 100 lux), the depth estimation sensitivity (indicated by the slope of the lines) is proportionately high, leading to small errors (Fig. 3J). Conversely, a lower light intensity results in reduced sensitivity and larger errors. At 0.1 lux, the direct calculation of the edge grayscale gradient becomes challenging because the image consists of discrete spots (Fig. 3D). Consequently, the ratio of edge pixels to the total number of nonzero pixels is used as a surrogate measure for the edge grayscale gradient (Fig. 3K), showing a nearly linear relationship between this ratio and the reciprocal of the object distance. Here, the values and the errors for each condition are computed from five repeated experiments. Furthermore, adopting the temporal approach to improve the information entropy enables recalibration of σ for object distance estimation (Fig. 3L). This capability positions the OSACE for applications that require real-time object distance detection, such as autonomous driving.
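As an illustration of the calibration and inversion described above (assuming NumPy; the σ values and distances below are placeholders rather than our measured data):

```python
import numpy as np

# Placeholder calibration data: point spread parameter sigma at known distances u (mm).
u_calib = np.array([3.0, 5.0, 7.0, 9.0])
sigma_calib = np.array([0.90, 0.60, 0.45, 0.36])

# Fit the linear model sigma = k * (1/u) + b reported above.
k, b = np.polyfit(1.0 / u_calib, sigma_calib, 1)

def estimate_distance(sigma_measured):
    """Invert the calibrated linear model to recover the object distance (mm)."""
    return k / (sigma_measured - b)
```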
Fourth, the panoramic imaging performance was characterized under different light conditions. The letters “HK” are positioned at three distinct angular orientations: −50° (left), 0° (center), and +50° (right) under 100 lux (Fig. 4A and fig. S6A), 50 lux (figs. S5A and S6B), and 10 lux (figs. S5B and S6C). No image distortions are observed, demonstrating the excellent panoramic imaging of the OSACE across varying light intensities. Similarly, at 0.1 lux, the imaging quality can be enhanced by adopting either the spatial approach (Fig. 4B and fig. S6E; fig. S6D for images consisting of spots) or the temporal approach (Fig. 4C and fig. S6F). The panoramic imaging capability across varying light intensities makes the OSACE more adaptable for diverse applications, including surveillance and unmanned drones.
Fig. 4. Panoramic imaging and nearly infinite depth of field of the OSACE.
(A to C) Images of the letters “HK” acquired at three distinct polar angles relative to the center of the OSACE: −50° (left), 0° (center), and +50° (right), under varying light intensities. (D) Diagram of an experimental configuration designed to validate the near-infinite depth of field capability of the OSACE under hierarchical light intensities. Objects A (circle) and B (triangle) are positioned at angular orientations of −40° and 40°, respectively. (E to G) Images of the circle and triangle patterns captured with the circle object fixed at a distance of DA = 1 mm, while the distance of the triangle object shifts from DB = 1 to 5 to 10 mm under various illumination intensities.
Fifth, the performance of the nearly infinite depth of field is tested under different light conditions. Two objects, a circle (red light source) and a triangle (blue light source), are positioned at different angles and varying distances (Fig. 4D). At high light intensities (e.g., 100, 50, and 25 lux), their respective image sizes appear similar (Fig. 4E; fig. S7, A and B; and movie S3) when the distances of both objects are equal. In addition, when the circle object remains stationary while the triangle object is moved away, the image size of the circle remains constant while that of the triangle diminishes. Under weak illumination, both the spatial (Fig. 4F and fig. S7C for images consisting of spots) and temporal (Fig. 4G) approaches are adopted. Notably, the images of the triangle are always clear, showing that the OSACE has a nearly infinite depth of field. This is ascribed to the image formation process, in which each fiber captures all light information within its acceptance angle, irrespective of the object distance. Compared with the reported ACEs that lack the property of infinite depth of field (17), the OSACE excels in niche applications, such as virtual reality and augmented reality.
Last, the dynamic motion capture capability is characterized. The OSACE is placed on a slide affixed to a rail, allowing it to move at constant speeds (from 1 to 10 mm/s) toward a standard checkerboard (Fig. 5A) under various light intensities (100, 50, and 10 lux). For each condition, the optical flows (Fig. 5B) and the corresponding velocities are calculated using the conventional Lucas-Kanade method (29, 30) [refer to the Supplementary Materials in (4)]. The velocity for each condition is determined by averaging the velocities derived from the optical flows. To ensure reliability, at least three repeated experiments are conducted to obtain the average measured velocity and its associated error. These measured velocities are then fitted to a linear relationship. Given the potential variations in measurement precision and algorithmic limitations, calibration is necessary to align the measured velocities with the true velocities (Fig. 5, C to E). If the OSACE moves at other, unknown velocities, then the velocities calculated on the basis of optical flows can also be calibrated to true velocities by applying the bias identified during the calibration process. The experimental errors are attributed to environmental disturbances and friction during the device’s movement. In future work, devices with reduced friction will be used, and environmental influences will be further minimized to enhance the accuracy and consistency of the experiments. Furthermore, the OSACE can achieve an angular response of up to 4.5 × 10⁶ deg/s, thanks to the emulation and surpassing of NCEs’ signal transmission capabilities, which is much faster than that of humans (see the “Ultrafast perception of angular motion under varying background light intensities” section) (31–35). This capability opens the door for the OSACE to track the kinematic state of, and control motion in, robots and unmanned aerial vehicles.
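As a minimal illustration of the velocity estimation (using OpenCV’s pyramidal Lucas-Kanade tracker; the feature detection parameters and the pixel-to-millimeter scale are assumptions, and the bias correction described above is omitted):

```python
import cv2
import numpy as np

def mean_speed(prev_gray, next_gray, dt, mm_per_pixel):
    """Average translation speed (mm/s) between two consecutive grayscale frames
    captured dt seconds apart, from sparse Lucas-Kanade optical flow."""
    # Detect corners on the checkerboard image (parameter values are illustrative).
    p0 = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 5)
    if p0 is None:
        return 0.0
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
    good = status.ravel() == 1
    if not np.any(good):
        return 0.0
    flow = (p1[good] - p0[good]).reshape(-1, 2)       # per-feature pixel displacement
    mean_px = float(np.linalg.norm(flow, axis=1).mean())
    return mean_px * mm_per_pixel / dt                # convert pixels per frame to mm/s
```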
Fig. 5. Dynamic motion detection of the OSACE.
(A) Diagram illustrating the positioning of the OSACE 10 mm in front of a checkerboard pattern, followed by its translation at different velocities under varying illumination intensities. (B) Optical flow when the OSACE is translated relative to a checkerboard pattern at a distance of 10 mm. Here, the white areas correspond to the bright squares of the checkerboard, while the black areas correspond to the dark squares. The direction and length of the vectors indicate the motion direction and velocity of a bright square. (C to E) Relationships between measured velocities and true velocities under 100 lux (C), 50 lux (D), and 10 lux (E), respectively. Optical flow is used to compute velocities, which subsequently undergo a correction process before curve fitting for each light intensity condition.
DISCUSSION
In this work, we present the OSACE by distributing lensed plastic optical fibers on a hemispherical surface to transmit light from a curved surface to a flat plane. Unlike previous designs limited to operation under sufficient illumination, the OSACE is designed to function effectively across varying illumination intensities. Drawing inspiration from nocturnal insects, we use both spatial and temporal approaches to adapt to varying light intensities, marking the first successful attempt to faithfully mimic the structural and neural changes observed in OSNCEs. In low-light conditions, the spatial approach enhances image quality through focus mode, binning mode, and Gaussian spatial summation, effectively mimicking the functions of pigment migration and lateral neural connections in OSNCEs. For the temporal approach, extending the exposure time emulates the longer integration time in OSNCEs, contributing to clearer image formation. These hardware adjustments, together with image processing algorithms integrated into the hardware, enable the OSACE to perform static and dynamic perception across a wide range of illumination conditions. On the basis of simulation and experimental validation, the OSACE demonstrates the capability to adapt to varying lighting conditions while providing 180° real-time imaging without distortion in an almost infinite depth of field. This versatility makes the OSACE well suited for applications such as surveillance, autonomous driving, and virtual reality. Its ultrafast dynamic motion capture capability paves the way for tracking kinematics and controlling motion in robots and unmanned aerial vehicles. In the future, the acceptance angle of the ommatidia will be further reduced through the integration of aperture stops. Moreover, increasing the number of ommatidia will help improve spatial resolution. These enhancements are expected to extend the effective working distance of the OSACE. Overall, OSACE provides a promising blueprint for emulating optical superposition compound eyes, enabling high-quality imaging under a wide range of lighting conditions.
MATERIALS AND METHODS
Gaussian spatial summation
For a 2D plane (x, y), if the coordinate point (0, 0) is taken as the center, the common 2D Gaussian distribution function G(x, y) can be expressed as

G(x, y) = amp × exp[−(x² + y²)/(2σ²)]   (1)

where σ² is the variance of (x, y) and amp is the amplitude, which is a constant. Similarly, if a 2D coordinate point (u, v) is taken as the center, the 2D Gaussian distribution function can be expressed as

G(x, y) = amp × exp{−[(x − u)² + (y − v)²]/(2σ²)}   (2)
Initially, we identify each pixel point (u, v) within the raw image whose gray value is nonzero. Subsequently, Eq. 2 is used to compute the Gaussian distribution corresponding to each identified pixel point across the entire image. Ultimately, the summation of all these Gaussian distributions is added to the raw image, yielding the final image. The detailed procedure is as follows (a minimal code sketch is provided after the list):
1) Entropy evaluation: The entropy of the real-time image is computed. If the entropy falls below a predefined threshold, indicating a low-information or dim scene, then the spatial approach is initiated.
2) Focus adjustment: The distance between the imaging lens and the imaging chip is adjusted to bring the bare ends of the optical fibers into focus on the imaging plane, transitioning from scatter mode to focus mode.
3) Pixel binning: Neighboring pixels are binned together, effectively reducing the spatial resolution (e.g., from 640 × 480 to 320 × 240). This increases the gray-level intensity of pixels illuminated by the optical fibers, enhancing signal strength (binning mode).
4) Nonzero pixel detection: All pixels with nonzero gray values are identified as sources of potential signal.
5) Gaussian parameters setup: The variance σ² and the amplitude constant amp are defined according to Eq. 2 to govern the shape and scale of the Gaussian distribution.
6) Gaussian influence calculation: Based on Eq. 2, a 2D Gaussian matrix is generated for each nonzero pixel, centered at that pixel, to model its influence on the surrounding area.
7) Image enhancement: The Gaussian influence of each nonzero pixel is superimposed onto the original image, resulting in a spatially enhanced image with improved gray values.
8) Video processing: For video data, steps 4 to 7 are applied frame-by-frame to generate a spatially enhanced video sequence.
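A minimal Python sketch of steps 3 to 7 above is given below (assuming NumPy and SciPy are available; the σ and amp values are illustrative placeholders rather than the parameters used for the figures):

```python
import numpy as np
from scipy.signal import fftconvolve

def bin_2x2(img):
    """Step 3 (binning mode): sum each 2x2 neighborhood, halving the resolution
    (e.g., 640x480 -> 320x240) to raise the gray level of illuminated pixels."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w].astype(np.float64)
    return img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]

def gaussian_spatial_summation(raw, sigma=3.0, amp=40.0):
    """Steps 4 to 7: superimpose a constant-amplitude 2D Gaussian (Eq. 2),
    truncated at 3*sigma, centered at every nonzero pixel of the raw image."""
    radius = int(3 * sigma)
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = amp * np.exp(-(x**2 + y**2) / (2 * sigma**2))   # Eq. 1, centered at (0, 0)
    mask = (raw > 0).astype(np.float64)                       # step 4: nonzero pixels
    summation = fftconvolve(mask, kernel, mode="same")        # sum of all Gaussians
    return np.clip(raw.astype(np.float64) + summation, 0, 255).astype(np.uint8)
```

For video data (step 8), the same function is applied frame by frame.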
Acknowledgments
For technical assistance and facility support, special thanks go to the UMF-Materials Research Centre (MRC) and the UMF-Cleanroom of the University Research Facility in Material Characterization and Device Fabrication (UMF) and to the Industrial Centre (IC) of The Hong Kong Polytechnic University.
Funding: This work was supported by the Research Grants Council (RGC) of Hong Kong (15215620, N_PolyU511/20, CRF C5047-24GF), the Innovation and Technology Commission (ITC) of Hong Kong (ITF-MHKJFS MHP/085/22), the Hong Kong Polytechnic University (1-CD4V, 1-YY5V, 1-CD6U, G-SB6C, 1-CD8U, 1-BBEN, 1-W28S, 1-CD9Q, 1-CDJ8, 1-CDJW, 4-ZZVZ, and 1-CDMA), and the National Natural Science Foundation of China (62061160488 and 62405257).
Author contributions: Conceptualization: H. Jiang and X.Z. Methodology: H. Jiang, C.C.T., Y.C., C.-H.T., Y.D., and X.Z. Software: H. Jiang, W.Y., Z.W., and H. Jia. Validation: H. Jiang, C.C.T., Y.C., and Y.D. Formal analysis: H. Jiang, W.Y., and H. Jia. Investigation: H. Jiang, C.C.T., and C.-H.T. Resources: Y.D., H. Jia, and X.Z. Data curation: H. Jiang, W.Y., Z.W., H. Jia, and X.Z. Writing—original draft: H. Jiang and X.Z. Writing—review and editing: H. Jiang, C.C.T., and X.Z. Visualization: H. Jiang, Y.C., and X.Z. Supervision: H. Jia and X.Z. Project administration: H. Jia and X.Z. Funding acquisition: Z.W., H. Jia, and X.Z.
Competing interests: The authors declare that they have no competing interests.
Data and materials availability: All data needed to evaluate the conclusions in this paper are present in the paper and/or the Supplementary Materials. The data shown in the paper are available on the DRYAD Dataset at 10.5061/dryad.bg79cnpnr.
Supplementary Materials
The PDF file includes:
Figs. S1 to S21
Theoretical model and optical simulations
Information entropy
Automation of the temporal and spatial approaches
Structural similarity index measure (SSIM)
Ultrafast perception of angular motion under varying background light intensities
Legends for movies S1 to S5
Other Supplementary Material for this manuscript includes the following:
Movies S1 to S5
REFERENCES AND NOTES
1. R. Hooke, Micrographia, or Some Physiological Descriptions of Minute Bodies, Made by Magnifying Glasses, with Observations and Inquiries Thereupon (Council of the Royal Society of London, 1664).
2. S. Exner, Die Physiologie der facettirten Augen von Krebsen und Insecten: Eine Studie (Franz Deuticke, 1891).
3. M. F. Land, Variations in the Structure and Design of Compound Eyes (Springer, 1989).
4. Jiang H., Tsoi C. C., Yu W. X., Ma M. C., Li M. J., Wang Z. K., Zhang X. M., Optical fibre based artificial compound eyes for direct static imaging and ultrafast motion detection. Light Sci. Appl. 13, 256 (2024).
5. Warrant E. J., Seeing better at night: Life style, eye design and the optimum strategy of spatial and temporal summation. Vision Res. 39, 1611–1630 (1999).
6. Jiang H., Tsoi C. C., Sun L. R., Yu W. X., Fan H., Ma M. C., Jia Y. W., Zhang X. M., Biomimetic curved artificial compound eyes: A review. Advanced Devices & Instrumentation 5, 0034 (2024).
7. Brückner A., Duparré J., Leitel R., Dannberg P., Bräuer A., Tünnermann A., Thin wafer-level camera lenses inspired by insect compound eyes. Opt. Express 18, 24379–24394 (2010).
8. Kim H. Y., Cha Y. G., Kwon J. M., Bae S. I., Kim K., Jang K. W., Jo Y. J., Kim M. H., Jeong K. H., Biologically inspired microlens array camera for high-speed and high-sensitivity imaging. Sci. Adv. 11, eads3389 (2025).
9. Wu S. D., Jiang T., Zhang G. X., Schoenemann B., Neri F., Zhu M., Bu C. G., Han J. D., Kuhnert K. D., Artificial compound eye: A survey of the state-of-the-art. Artif. Intell. Rev. 48, 573–603 (2017).
10. Wu S. D., Zhang G. X., Jiang T., Zhu M., Fu K. C., Rong H. N., Xian K. Y., Song H., Kuhnert K. D., Multi-aperture stereo reconstruction for artificial compound eye with cross image belief propagation. Appl. Opt. 57, B160–B169 (2018).
11. Wu S. D., Zhang G. X., Neri F., Zhu M., Jiang T., Kuhnert K. D., A multi-aperture optical flow estimation method for an artificial compound eye. Integr. Comput. Aided Eng. 26, 139–157 (2019).
12. Floreano D., Camara R. P., Viollet S., Ruffier F., Brückner A., Leitel R., Buss W., Menouni M., Expert F., Juston R., Dobrzynski M. K., L’Eplattenier G., Recktenwald F., Mallot H. A., Franceschini N., Miniature curved artificial compound eyes. Proc. Natl. Acad. Sci. U.S.A. 110, 9267–9272 (2013).
13. Song Y. M., Xie Y. Z., Malyarchuk V., Xiao J. L., Jung I., Choi K. J., Liu Z. J., Park H., Lu C. F., Kim R. H., Li R., Crozier K. B., Huang Y. G., Rogers J. A., Digital cameras with designs inspired by the arthropod eye. Nature 497, 95–99 (2013).
14. Lee M., Lee G. J., Jang H. J., Joh E., Cho H., Kim M. S., Kim H. M., Kang K. M., Lee J. H., Kim M., Jang H., Yeo J. E., Durand F., Lu N. S., Kim D. H., Song Y. M., An amphibious artificial vision system with a panoramic visual field. Nat. Electron. 5, 452–459 (2022).
15. Zhou Y., Sun Z. B., Ding Y. C., Yuan Z. N., Qiu X., Cao Y. B., Wan Z. A., Long Z. H., Poddar S., Kumar S., Ye W. H., Chan C. L. J., Zhang D. Q., Ren B. T., Zhang Q. P., Kwok H. S., Li M. G. J., Fan Z. Y., An ultrawide field-of-view pinhole compound eye using hemispherical nanowire array for robot vision. Sci. Robot. 9, eadi8666 (2024).
16. Hu Z. Y., Zhang Y. L., Pan C., Dou J. Y., Li Z. Z., Tian Z. N., Mao J. W., Chen Q. D., Sun H. B., Miniature optoelectronic compound eye camera. Nat. Commun. 13, 5634 (2022).
17. Dai B., Zhang L., Zhao C. L., Bachman H., Becker R., Mai J., Jiao Z., Li W., Zheng L. L., Wan X. J., Huang T. J., Zhuang S. L., Zhang D. W., Biomimetic apposition compound eye fabricated using microfluidic-assisted 3D printing. Nat. Commun. 12, 1–11 (2021).
18. Kogos L. C., Li Y. Z., Liu J. N., Li Y. Y., Tian L., Paiella R., Plasmonic ommatidia for lensless compound-eye vision. Nat. Commun. 11, 1–9 (2020).
19. Fan C. Y., Lin C. P., Su G. D. J., Ultrawide-angle and high-efficiency metalens in hexagonal arrangement. Sci. Rep. 10, 15677 (2020).
20. Stollberg K., Brückner A., Duparré J., Dannberg P., Bräuer A., Tünnermann A., The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects. Opt. Express 17, 15747–15759 (2009).
21. Brückner A., Duparré J., Dannberg P., Bräuer A., Tünnermann A., Artificial neural superposition eye. Opt. Express 15, 11922–11933 (2007).
22. S. Hiura, A. Mohan, R. Raskar, “Krill-eye: Superposition compound eye for wide-angle imaging via GRIN lenses,” in 2009 IEEE 12th International Conference on Computer Vision Workshops (IEEE, 2010), vol. 2, pp. 186–199.
23. Ma M., Zhang Y., Deng H. X., Gao X. C., Gu L., Sun Q. Z., Su Y. L., Zhang X., Super-resolution and super-robust single-pixel superposition compound eye. Opt. Lasers Eng. 146, 106699 (2021).
24. Kirschfeld K., The absolute sensitivity of lens and compound eyes. Z. Naturforsch. C Biosci. 29, 592–596 (1974).
25. Frederiksen R., Warrant E. J., The optical sensitivity of compound eyes: Theory and experiment compared. Biol. Lett. 4, 745–747 (2008).
26. Shaw S. R., Optics of arthropod compound eye. Science 165, 88–90 (1969).
27. Berry S., The use of optical coherence tomography to demonstrate dark and light adaptation in a live moth. Environ. Entomol. 51, 643–648 (2022).
28. Tsai D. Y., Lee Y., Matsuyama E., Information entropy measure for evaluation of image quality. J. Digit. Imaging 21, 338–347 (2008).
29. B. D. Lucas, T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI ’81), Vancouver, 1981.
30. Fleet D. J., Langley K., Recursive filters for optical flow. IEEE Trans. Pattern Anal. Mach. Intell. 17, 61–67 (1995).
31. Chen J., Zhou Z., Kim B. J., Zhou Y., Wang Z. Q., Wan T. Q., Yan J. M., Kang J. F., Ahn J. H., Chai Y., Optoelectronic graded neurons for bioinspired in-sensor motion perception. Nat. Nanotechnol. 18, 882–888 (2023).
32. Kelly D., Wilson H., Human flicker sensitivity: Two stages of retinal diffusion. Science 202, 896–899 (1978).
33. Miall R., The flicker fusion frequencies of six laboratory insects, and the response of the compound eye to mains fluorescent ‘ripple’. Physiol. Entomol. 3, 99–106 (1978).
34. Juusola M., French A. S., Uusitalo R. O., Weckström M., Information processing by graded-potential transmission through tonically active synapses. Trends Neurosci. 19, 292–297 (1996).
35. de Ruyter van Steveninck R. R., Laughlin S. B., The rate of information transfer at graded-potential synapses. Nature 379, 642–645 (1996).