Abstract
Digital cameras obtain color information of a scene using a chromatic filter, usually a Bayer filter, overlaid on a pixelated detector. However, the periodic arrangement of both the filter array and the detector array introduces frequency aliasing during sampling and color misregistration during the demosaicking process, which degrade image quality. Inspired by the biological structure of avian retinas, we developed a chromatic LED array with a multi-hyperuniform geometric arrangement, which exhibits irregularity on small length scales but quasi-uniformity on large scales, to suppress frequency aliasing and color misregistration in full-color image retrieval. Experiments were performed with a single-pixel imaging system using the multi-hyperuniform chromatic LED array to provide structured illumination, and a frame rate of 208 fps was achieved at 32 × 32 pixel resolution. By comparing the experimental results with images captured by a conventional digital camera, we demonstrate that the proposed imaging system forms images with fewer chromatic moiré patterns and color misregistration artifacts. The concept verified here could provide insights for the design and manufacturing of future bionic imaging sensors.
Keywords: multi-hyperuniform, single-pixel imaging, frequency aliasing, color misregistration
1. Introduction
Both spatial and spectral information provide crucial knowledge of the world. Chromatic digital cameras obtain spatial and spectral information simultaneously by placing an absorbing color-filter array (a.k.a. the Bayer filter) on top of a detector array [1], because photoelectronic sensors are sensitive only to light intensity, regardless of its wavelength. Due to technical limitations and commercial considerations, the elements of pixelated detectors, as well as those of the filter arrays, are usually arranged in a Cartesian geometry. Such periodic arrangement of both the detector array and the Bayer filter array introduces artifacts that decrease the fidelity of the captured images. Frequency aliasing, which converts frequencies above the Nyquist limit into moiré fringes during optical sampling, is one such effect. Color misregistration, which causes inauthentic color shifts during the demosaicking process, is another. Many post-processing algorithms have been proposed [2,3,4,5,6,7,8,9] to suppress these artifacts in the captured images, which increases the computational burden on the imaging system. However, these effects could be avoided or suppressed if the raw image data were sampled in a different manner at low cost.
The evolution of species is governed by neither technical limitations nor commercial considerations, but by environmental requirements, and the environment requires diurnal animals to evolve eyes that obtain images with high fidelity and dynamic performance. Research on human retinas has shown that the arrangement of the photoreceptors is random yet uniform, which yields images of better quality by suppressing frequency aliasing [10]. Similar structures exist in the avian vision system as well. Recent biological investigations have found that birds, the vertebrates with the most sophisticated vision, have retinas consisting of five types of cones, each of which independently exhibits disorder on small length scales but quasi-uniformity on large scales [11,12]. This remarkable structure of correlated disorder is known as hyperuniformity [13], and the fact that such an arrangement can obtain high-fidelity images is explained by the theory that a slight irregularity in the optical sampling arrangement can avoid frequency aliasing [14]. Another biological study [15] shows that birds can achieve continuous vision at frame rates of up to 145 fps. A recent hyperuniform sampling experiment [16] further verified the feasibility of the theory. However, that experiment was performed with a single-pixel imaging [17,18,19,20,21] system using a digital micromirror device, which is commonly used in many applications [22,23]. While the sampling patterns were designed to be hyperuniform, the micromirrors forming the patterns were arranged in Cartesian coordinates and displayed at a modulation rate of up to 22 kHz. Consequently, the frequency aliasing in the sampled image was not suppressed completely, and the system could not perform highly dynamic tasks.
In this paper, we address the frequency aliasing and color misregistration caused by periodic sampling by utilizing a custom chromatic LED array in which the red-green-blue luminous points form a multi-hyperuniform arrangement [11], that is, the luminous points of each color form a hyperuniform point pattern, and all points together, regardless of their colors, exhibit hyperuniformity as well. This arrangement of the chromatic LED array was designed to mimic the multi-hyperuniform structure of the chicken retina [12]. Optical sampling using multi-hyperuniformity was performed experimentally via a single-pixel imaging system. The high-speed hyperuniform LED array developed in this work, which has a maximum illumination rate of 2.5 MHz, can effectively improve the dynamic performance of the imaging system. Both numerical and experimental results indicated that the images retrieved by multi-hyperuniform sampling contained fewer chromatic moiré patterns at high frequencies and fewer color misregistration artifacts at color-transition edges, and the proposed imaging system achieved a 208 fps frame rate in experiments. Unlike methods that optimize the interpolation algorithm, this work suppresses these artifacts in hardware through a simple imaging system. The proof-of-principle system demonstrated here might push us one step closer to the biomimetic digital camera that the imaging community has long aimed to invent.
2. Theory
Hyperuniform structures exist not only in avian retinas but also in physical systems such as crystals [24], and even in the large-scale structure of the universe [13]. The property of a hyperuniform system can be quantified by the density fluctuation of its corresponding point pattern. For a 2D point pattern with a hyperuniform distribution, the variance of the number of points σ2(R) within a circular domain S is approximately proportional to R [13], i.e.,
$$\sigma^{2}(R) \equiv \left\langle N_{S}^{2} \right\rangle - \left\langle N_{S} \right\rangle^{2} \propto R \qquad (1)$$
where NS is the number of points contained in S, the angular brackets denote an ensemble average, and R is the radius of the circular observation window. Equation (1) indicates that the variance of a hyperuniform point pattern grows more slowly than the area of the domain, while for any statistically homogeneous and isotropic point pattern, the variance cannot grow more slowly than the surface area (in 2D, the perimeter) of the circle S or of other strictly convex domains [25,26].
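As a concrete illustration of Equation (1), the number variance can be estimated numerically by dropping circular windows of radius R at random positions inside a point pattern and recording the spread of the point counts. The following Python sketch (the function name and the Poisson example are ours, not from the paper) shows a minimal version of this estimator; for a fully random pattern the variance grows like R2, whereas a hyperuniform pattern should grow roughly linearly in R.

```python
import numpy as np
from scipy.spatial import cKDTree

def number_variance(points, R, box=544.0, n_windows=2500, rng=None):
    """Estimate sigma^2(R) of Eq. (1): the variance of the number of points
    falling inside randomly placed circular windows of radius R."""
    rng = np.random.default_rng() if rng is None else rng
    tree = cKDTree(points)
    # Window centres are kept at least R away from the pattern boundary.
    centres = rng.uniform(R, box - R, size=(n_windows, 2))
    counts = np.array([len(tree.query_ball_point(c, R)) for c in centres])
    return counts.var()

# A Poisson (fully random) pattern for reference: its variance grows like R^2.
rng = np.random.default_rng(0)
poisson = rng.uniform(0.0, 544.0, size=(3072, 2))
for R in (10.0, 20.0, 40.0):
    print(R, number_variance(poisson, R, rng=rng))
```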
A special kind of hyperuniformity, known as multi-hyperuniformity [11], involves more than one type of point. For example, five types of cone photoreceptors exist in the chicken retina: violet, blue, green, and red species to sense color and double species to detect luminance. The point patterns of these five types of cones are arranged individually, and cones of one type never occur in the near vicinity of other cones of the same type, which ensures that each cone pattern achieves a much more uniform arrangement [11]. All types of photoreceptors grow simultaneously under the constraint of cell size, and such competing interactions ensure that all cone patterns are arranged hyperuniformly. A multi-hyperuniform structure thus contains multiple point species in which both the total population and the individual point types are simultaneously hyperuniform. It is worth noting that the overall point arrangement of a multi-hyperuniform pattern obeys Equation (1) as well, which means that the point patterns are independent of each other and the total pattern remains hyperuniform whether or not an individual species is removed. Such a multi-hyperuniform structure is believed to be the main reason that birds have the most sophisticated vision of any vertebrate. It would be interesting to exploit this geometry in optical sampling, and here it was implemented in the manner of structured illumination in a single-pixel imaging scheme.
A chromatic LED array was developed in which the red-green-blue luminous points were arranged with multi-hyperuniformity. Specifically, a hyperuniform point pattern was generated by a ‘cell-growing’ random procedure based on a regular hexagonal geometry [16]; the green luminous points, being the centers of the LED chips, were then arranged following the hyperuniform point pattern, as shown in Figure 1a. As the red-green-blue luminous points in an LED chip have a fixed geometry (Figure 1b), a periodicity would exist in the LED array if every chip were oriented in the same manner. Therefore, an extra irregularity was introduced by rotating the LED chip in the ith column and jth row by a randomly generated angle θij (Figure 1c). The LED array, shown in Figure 1d, was fabricated by placing 32 × 32 LED chips at the corresponding positions and integrating them on a printed circuit board, where the red, green, and blue luminous points on each chip have center wavelengths of 632 nm, 518 nm, and 468 nm, respectively. It is worth mentioning that, for the purpose of estimating the variance σ2(R) and for further numerical simulation, the multi-hyperuniform arrangement was generated on an underlying Cartesian grid of 544 × 544 pixels, with one pixel corresponding to 0.1 mm × 0.1 mm; the actual LED array, however, is not limited to Cartesian coordinates. Recent research has found that the quantities of the five cone types differ in the avian retina, where the double cones are the most abundant type (40.7%), followed by the green (21.1%), red (17.1%), blue (12.6%), and violet (8.5%) single cones. Due to the fixed arrangement of the R-G-B channels on each LED chip, the three channels have the same spatial density on the multi-hyperuniform LED array with 32 × 32 chips.
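The following sketch illustrates the two sources of randomness described above: the chip centers and the per-chip rotation θij. The ‘cell-growing’ procedure of [16] is not reproduced here; a jittered hexagonal lattice is used as a simplified stand-in, and the emitter offsets, pitch, and jitter amplitude are illustrative assumptions rather than the real chip geometry.

```python
import numpy as np

def led_layout(n_cols=32, n_rows=32, pitch=17.0, jitter=3.0, seed=0):
    """Simplified stand-in for the LED layout procedure: green chip centres on a
    jittered hexagonal lattice (instead of the 'cell-growing' procedure of [16]),
    with the red/blue emitters of each chip rotated by a random angle theta_ij.
    Units are pixels of the 544 x 544 underlying grid (0.1 mm per pixel)."""
    rng = np.random.default_rng(seed)
    reds, greens, blues = [], [], []
    offset_r = np.array([3.0, 0.0])          # illustrative emitter offsets (pixels)
    offset_b = np.array([-3.0, 0.0])
    for j in range(n_rows):
        for i in range(n_cols):
            # Hexagonal lattice: every other row is shifted by half a pitch.
            centre = np.array([(i + 0.5 * (j % 2)) * pitch,
                               j * pitch * np.sqrt(3.0) / 2.0])
            centre = centre + rng.uniform(-jitter, jitter, size=2)   # small irregularity
            theta = rng.uniform(0.0, 2.0 * np.pi)                    # random chip rotation
            rot = np.array([[np.cos(theta), -np.sin(theta)],
                            [np.sin(theta),  np.cos(theta)]])
            greens.append(centre)
            reds.append(centre + rot @ offset_r)
            blues.append(centre + rot @ offset_b)
    return np.array(reds), np.array(greens), np.array(blues)

r_pts, g_pts, b_pts = led_layout()
print(r_pts.shape, g_pts.shape, b_pts.shape)   # (1024, 2) each
```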
To evaluate the hyperuniformity of the chromatic LED array, following the calculation procedure of [11], the variances σ2(R) were computed directly for each monochromatic point pattern separately and for the overall point pattern, as shown in Figure 2. Specifically, for each value of R, 2500 circular domains S were randomly placed in the pattern without overlapping the system boundary. The maximum radius was chosen to be Rmax = L/2, limited by the pattern size L. The variance of each pattern was fitted (dashed lines in Figure 2) using the fitting function:
$$\sigma^{2}(R) = P\,\frac{R}{D} + Q\,\frac{R}{D}\cos\!\left(\frac{2\pi R}{D}\right) \qquad (2)$$
where the cosine term reflects that the patterns originated from a regular hexagonal arrangement [16], and the window radius R is normalized by the average point spacing D, with D = 17 for the monochromatic point patterns and D = 5.67 for the overall point pattern. According to previous research [11], the structural properties of the individual and overall point patterns can be evaluated in the same manner, so Equation (2) is suitable for the hyperuniformity evaluation of the chromatic LED point array.
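A minimal sketch of this evaluation, reusing the number_variance and led_layout helpers from the earlier sketches and assuming the two-parameter form written above for Equation (2), could look as follows; the radii, box size, and starting values are illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def variance_model(r_over_d, P, Q):
    # Linear growth in R/D plus a cosine modulation from the hexagonal origin,
    # matching the two-parameter form assumed above for Equation (2).
    return P * r_over_d + Q * r_over_d * np.cos(2.0 * np.pi * r_over_d)

# Overall point pattern: all 3072 emitters of the sketched layout.
points = np.vstack(led_layout())
D = 5.67                                         # average spacing of the overall pattern
radii = np.linspace(2.0 * D, 200.0, 30)          # observation-window radii (pixels)
variances = np.array([number_variance(points, R, box=470.0) for R in radii])
(P, Q), _ = curve_fit(variance_model, radii / D, variances, p0=(0.5, 0.1))
print(f"P = {P:.4f}, Q = {Q:.4f}")
```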
The fitting parameters of the curves in Figure 2 are listed in Table 1. The fitting curves indicated that the variances of all four patterns grew proportionally to R rather than R2, which meets the criterion of hyperuniformity described by Equation (1). Each monochromatic luminous point pattern exhibits hyperuniformity individually, and combined into the overall point pattern, the LED array remains hyperuniform; therefore, the arrangement of the LED array is multi-hyperuniform.
Table 1. Fitting parameters of the variance curves in Figure 2.

|   | Green  | Red    | Blue   | Overall |
|---|--------|--------|--------|---------|
| P | 0.4675 | 0.5932 | 0.5725 | 0.5663  |
| Q | 0.1274 | 0.0271 | 0.0335 | 0.1505  |
It is worth noting that Q is the coefficient of the cosine term in Equation (2), representing the residual hexagonal geometry, while P is the coefficient of the leading term of the fitting curves. In Table 1, the coefficient Q of the green point pattern is larger than those of the red and blue patterns because the latter had an extra random rotation introduced during the generation of the pattern, meaning that the red and blue point patterns deviate further from the regular hexagonal geometry than the green point pattern does.
3. Numerical Simulations
Numerical simulations were performed using a multi-hyperuniform sampling point pattern generated from the multi-hyperuniform LED array. The point pattern, defined on a 544 × 544 underlying pixel grid, contained 3072 sampling points: 1024 each for red, green, and blue. Figure 3a shows a partial area of the arrangement with 4 red, 4 green, and 4 blue points. Each LED luminous point, not strictly a point, occupied 6 pixels of the grid, representing its physical size of 0.3 mm × 0.2 mm.
For comparison, two other types of sampling patterns were also used in the numerical simulation. The regular LED pattern, part of which is shown in Figure 3b, represents LED chips arranged in a regular geometry; it also contained 3072 sampling points, 1024 each for red, green, and blue. The Bayer pattern, part of which is shown in Figure 3c, is the arrangement commonly used in conventional chromatic digital cameras. The Bayer pattern, having a 1:2:1 ratio of red, green, and blue points, contained 3072 sampling points consisting of 768 red, 1536 green, and 768 blue points.
To ensure that the comparison was fair, the three sampling patterns had the same number of sampling points and the same size of underlying pixel grid, and therefore the same sampling frequency and the same field of view for the optical sampling.
In the numerical simulation, a group of 35 chromatic images with a pixel resolution of 544 × 544 were used as the objects. Each chromatic image I was under-sampled by the three sampling patterns, and a demosaicking algorithm [4,5] was applied to the under-sampled monochromatic red, green, and blue data to reconstruct the chromatic image I’. Many sophisticated demosaicking methods exist for various applications [6,7,8]; the gradient-based interpolation algorithm [9] was applied in this work. The gradients along different directions are calculated according to the sampling structure, which ensures that the proper direction is selected to estimate the missing pixel values. This algorithm efficiently reduces pseudo-color in the color-transition areas of the reconstructed images for all three sampling patterns without much time cost. It is worth mentioning that the demosaicking algorithm used here is not optimal for any of the three sampling structures, but it is valid and simple, which ensures a fair comparison of image reconstruction across the three sampling patterns. More sophisticated algorithms could further improve image quality, but at a higher computational cost. This verifies that the multi-hyperuniform sampling structure can improve image quality in hardware without adding computational burden.
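A compact sketch of this simulation step is given below. It under-samples an RGB image at the per-channel sampling sites and fills in the missing pixels; plain scattered-data linear interpolation is used here as a simplified stand-in for the gradient-based demosaicking algorithm [9] applied in the paper, and the random example sites are ours.

```python
import numpy as np
from scipy.interpolate import griddata

def sample_and_reconstruct(image, sites):
    """Under-sample an H x W x 3 image at the per-channel sampling sites and
    interpolate the missing pixels (stand-in for the gradient-based demosaicking)."""
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    recon = np.zeros_like(image, dtype=float)
    for c, pts in enumerate(sites):                  # sites = (red, green, blue) points
        pts = np.asarray(pts, dtype=int)             # columns are (x, y) pixel indices
        values = image[pts[:, 1], pts[:, 0], c]      # the measured samples of channel c
        recon[..., c] = griddata(pts, values, (xx, yy), method='linear', fill_value=0.0)
    return recon

# Example with 1024 distinct random sites per channel; the simulations in the
# paper use the multi-hyperuniform, regular, and Bayer patterns instead.
rng = np.random.default_rng(1)
img = rng.random((544, 544, 3))
sites = []
for _ in range(3):
    idx = rng.choice(544 * 544, size=1024, replace=False)
    sites.append(np.column_stack((idx % 544, idx // 544)))
recon = sample_and_reconstruct(img, sites)
```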
The quality of the reconstructed images was evaluated using the root mean square error (RMSE) between each original image I and its reconstruction I’ as:
$$\mathrm{RMSE}_{channel} = \sqrt{\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\left[ I_{channel}(m,n) - I'_{channel}(m,n) \right]^{2}} \qquad (3)$$

where M × N is the pixel resolution of the images, and the subscript channel, being red, green, or blue, denotes the monochromatic data of the corresponding channel. The final RMSE is the average of the RMSEs of the three monochromatic channels, i.e.,
$$\mathrm{RMSE} = \frac{1}{3}\left( \mathrm{RMSE}_{red} + \mathrm{RMSE}_{green} + \mathrm{RMSE}_{blue} \right) \qquad (4)$$
Both original images and the reconstructions were normalized to the same scale.
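For reference, the two error metrics of Equations (3) and (4) amount to the following few lines of Python (the function name is ours):

```python
import numpy as np

def rmse_rgb(original, recon):
    """Per-channel RMSE of Eq. (3), averaged over the R, G, B channels as in
    Eq. (4).  Both images are assumed to be normalized to the same scale."""
    per_channel = np.sqrt(((original - recon) ** 2).mean(axis=(0, 1)))
    return per_channel.mean(), per_channel
```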
The RMSEs of all resulting images, sorted in descending order of the multi-hyperuniform pattern RMSEs, are shown in Figure 4. In most cases, 33 out of 35, the proposed multi-hyperuniform pattern yielded the best resulting images among the three sampling patterns. The average RMSE over all 35 images reconstructed from multi-hyperuniform sampling, 0.1214, is the smallest of the RMSEs yielded by the three sampling patterns. The regular and Bayer patterns yielded the lowest RMSE in the remaining two cases, where the original images contain large blocky areas with no color transitions or high-frequency details. The slight degradation of image quality with multi-hyperuniform sampling in such cases was predicted by the fact that irregularity in optical sampling degrades image quality [27]. The RMSEs of the images reconstructed with the multi-hyperuniform structure demonstrate the improvement of image quality at the pixel level.
To consider larger-scale features of the reconstructed images, the structural similarity (SSIM) between each original image I and its reconstruction I’ was calculated as:
$$\mathrm{SSIM}_{channel}(I, I') = \frac{\left( 2\mu_{I}\mu_{I'} + C_{1} \right)\left( 2\sigma_{II'} + C_{2} \right)}{\left( \mu_{I}^{2} + \mu_{I'}^{2} + C_{1} \right)\left( \sigma_{I}^{2} + \sigma_{I'}^{2} + C_{2} \right)} \qquad (5)$$

where μI and μI′ are the average values of the images, σI2 and σI′2 are their variances, σII′ is the covariance of I and I′, and C1 and C2 are constants that ensure the validity of the expression. The subscript channel has the same meaning as in the RMSE calculation. The final SSIM is the average of the SSIMs of the three channels, i.e.,
$$\mathrm{SSIM} = \frac{1}{3}\left( \mathrm{SSIM}_{red} + \mathrm{SSIM}_{green} + \mathrm{SSIM}_{blue} \right) \qquad (6)$$
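In practice, the channel-wise SSIM of Equations (5) and (6) can be computed with an off-the-shelf implementation; the sketch below uses scikit-image's windowed SSIM as a stand-in, which averages local SSIM values rather than evaluating Equation (5) globally.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_rgb(original, recon):
    """Channel-wise SSIM averaged over R, G, B, as in Eq. (6).  Images are
    assumed to be floats normalized to [0, 1]."""
    per_channel = [structural_similarity(original[..., c], recon[..., c],
                                         data_range=1.0)
                   for c in range(3)]
    return float(np.mean(per_channel)), per_channel
```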
The SSIMs of all reconstructed images are shown in Figure 5, where the image indexes are consistent with the results in Figure 4. The average SSIM over all 35 images from multi-hyperuniform sampling is the largest, compared with the results yielded by regular and Bayer sampling. The two images with large blocky areas are reconstructed better by the regular sampling pattern, and their indexes coincide with the two cases in Figure 4 where the regular and Bayer patterns gave lower RMSEs. The SSIMs of the reconstructed images demonstrate that multi-hyperuniform sampling also improves image quality in terms of larger-scale features.
Since aliasing errors are usually caused by insufficient sampling, it is necessary to compare the proposed sampling with other random sampling strategies. Blue-noise sampling is another random strategy, derived from the human visual system. This method was also applied here, and the reconstruction results are shown in Figure 6 and Figure 7. The points in a blue-noise sampling pattern are arranged randomly, but the distances between adjacent points are nearly uniform, so the reconstructed images are close to the results obtained with the multi-hyperuniform sampling structure.
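For completeness, a blue-noise pattern of the kind mentioned above can be generated with simple dart throwing: candidate points are drawn at random and rejected if they fall too close to an already accepted point. The sketch below is ours; the minimum distance r_min is an illustrative choice, not a value from the paper.

```python
import numpy as np

def dart_throwing(n_points=1024, box=544.0, r_min=12.0, seed=0, max_tries=200000):
    """Minimal dart-throwing generator of a blue-noise point pattern: random
    placement with rejection of points closer than r_min to an accepted point."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(max_tries):
        p = rng.uniform(0.0, box, size=2)
        if accepted:
            d = np.linalg.norm(np.asarray(accepted) - p, axis=1)
            if d.min() < r_min:
                continue                      # too close to an existing point
        accepted.append(p)
        if len(accepted) == n_points:
            break
    return np.array(accepted)

blue_noise = dart_throwing()
print(blue_noise.shape)   # (1024, 2) if the requested count was reached
```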
Because different images have different spatial-frequency power distributions [28,29], a wider variety of scenes was used for comparison; the images of artificial scenes and natural scenes reconstructed with the different patterns are shown in Figure 6 and Figure 7. In Figure 6, clearly fewer color misregistration artifacts and moiré fringes are observed in the images reconstructed with the multi-hyperuniform sampling pattern than with the other structures. It is worth noting that color misregistration can be considered a small-scale feature, while chromatic moiré fringes can be treated as a large-scale feature. The RMSEs and SSIMs are listed in order under the resulting images and indicate that the multi-hyperuniform sampling structure suppresses color misregistration and chromatic moiré fringes at both feature scales. The reconstructed results in Figure 7 demonstrate that the multi-hyperuniform sampling structure is also effective in improving the quality of natural scenes with sparse periodic patterns.
4. Experimental Results
It is difficult to perform multi-hyperuniform optical sampling with an actual detector array, because such a chromatic detector array would be practically impossible to manufacture at present. However, it is possible to perform a multi-hyperuniform optical sampling experiment utilizing the computational imaging methods that have emerged during the last decade [30,31,32,33]. Single-pixel imaging, a typical computational imaging method, forms an image by sampling the scene with varying structured illumination and associating the illumination patterns with the corresponding light intensities recorded by a single-pixel detector. This imaging strategy provides advantages in situations that are challenging for a detector array, such as imaging at special spectral bands [34,35], adaptive imaging [36,37,38], optical phased array imaging [39], and 3D profiling [40,41,42,43].
In this work, a single-pixel imaging experimental system was set up as shown in Figure 8. The multi-hyperuniform chromatic LED structured illumination module, which consists of a field-programmable gate array (Xilinx Spartan XC6SLX9-2FTG256C), a drive circuit, and the LED array developed in Section 2, was used to sample the scene. During the experiment, for each monochromatic channel, a projection lens (f = 150 mm) projected N masks Pi (i = 1, …, N) displayed on the LED array. The masks Pi are orthonormal and derived from the Hadamard matrix, a square matrix with elements ±1 whose rows (or columns) are orthogonal to one another [44]. The LED array displays monochromatic illumination masks at a rate of 1.25 MHz; this high illumination rate was achieved by using the line-control modulation strategy proposed in our previous work [45]. A single-pixel bucket detector (Thorlabs PMT2102) and a digitizer (PicoScope 6404D) were used to record the corresponding total reflected light intensities Si (i = 1, …, N). The image I’channel for the monochromatic channel can be reconstructed as:
$$I'_{channel} = \sum_{i=1}^{N} S_{i} P_{i} \qquad (7)$$
where N is the number of Hadamard basis masks. The problem of reconstructing a single-channel image of the scene thus becomes a problem of solving for N independent unknowns from a set of linear equations. Due to the orthonormal property of the masks, Equation (7) can be solved perfectly if the number of Hadamard basis masks equals the number of pixels in the image [16,21,44,45]. Provided the scene is sparse, compressive sensing [18,21,44] can be used to reconstruct the image with N < 32 × 32 measurements by sub-sampling the scene and solving an optimization problem. To yield one 32 × 32 monochromatic image, 2048 masks (1024 Hadamard masks and their inverses) were used to perform full Hadamard sampling. After the images I’channel for red, green, and blue were reconstructed separately, the gradient-based interpolation algorithm was applied to yield the chromatic image I’ of the object. A chromatic image required 6144 illumination masks, or 4.8 ms of acquisition time, resulting in a 208 fps frame rate for the multi-hyperuniform chromatic LED array-based single-pixel imaging system. Due to laboratory manufacturing limitations, the imaging experiment could only achieve a resolution of 32 × 32 pixels.
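The measurement model behind Equation (7) and the differential use of each Hadamard mask and its inverse can be summarized in a short numerical sketch (ours, not the FPGA/LED implementation): because LED masks cannot take negative values, each ±1 Hadamard row is split into a mask and its complement, and the differential signal is used for reconstruction.

```python
import numpy as np
from scipy.linalg import hadamard

def simulate_single_pixel(scene):
    """Numerical sketch of differential Hadamard single-pixel imaging for one
    monochromatic channel of a 32 x 32 scene."""
    n = scene.size                               # 1024 unknowns = 1024 masks
    H = hadamard(n)                              # rows of +/-1, mutually orthogonal
    x = scene.reshape(n)
    s_pos = (H + 1) / 2 @ x                      # bucket signals for the masks
    s_neg = (1 - H) / 2 @ x                      # bucket signals for the inverse masks
    s = s_pos - s_neg                            # differential signal, equals H @ x
    return (H.T @ s / n).reshape(scene.shape)    # Eq. (7)-style reconstruction

scene = np.random.default_rng(2).random((32, 32))
recon = simulate_single_pixel(scene)
print(np.allclose(recon, scene))                 # True: full sampling recovers the scene
```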
For comparison, another LED array with the regular sampling arrangement shown in Figure 3b was used in the experimental system to obtain images with regular sampling. An ordinary smartphone camera was also used to capture images with Bayer sampling, where sub-sampling was applied to match the sampling structure used in the simulation. All images were obtained with the same number of spatial sampling points to ensure a fair comparison.
Figure 9 illustrates the resulting images yielded by the multi-hyperuniform, regular, and Bayer sampling patterns. As in the numerical simulations, color misregistration and chromatic moiré fringes were observed in the images yielded by regular and Bayer sampling, while multi-hyperuniform sampling suppressed both artifacts. The calculated RMSEs and SSIMs listed below the images are in good agreement with the numerical simulations: the images reconstructed with multi-hyperuniform sampling have the lowest RMSE and highest SSIM values, demonstrating the effective suppression of color misregistration and frequency aliasing.
5. Discussions and Conclusions
In summary, we developed a chromatic LED array with a multi-hyperuniform structure, that is, the luminous points of each monochromatic channel exhibit hyperuniformity independently, and the red, green, and blue luminous points combined exhibit hyperuniformity as well. The chromatic LED array was developed to mimic the virtues of avian retinal optical sampling, specifically the suppression of the color misregistration and chromatic moiré fringes caused by periodic optical sampling. The placement and orientation of the LED chips determine whether the arrangement of the array is multi-hyperuniform, and two random procedures were used here to ensure that it is. Comparisons were performed numerically and experimentally by reconstructing images from different sampling methods in a single-pixel imaging system. Both numerical and experimental results indicated that the multi-hyperuniform sampling method yielded better image quality than the other two methods when using the same basic interpolation algorithm. Besides the improvement in image quality, the proposed imaging system achieved a 208 fps frame rate experimentally, which shows potential for highly dynamic applications.
This work is a proof of principle demonstrating the feasibility of multi-hyperuniformity in highly dynamic chromatic optical sampling. The chromatic LED array developed in this work contains only 1024 chromatic LED chips due to laboratory manufacturing limitations. The image-quality improvement is almost the same for 32 × 32 multi-hyperuniform LED arrays with different orientations and placements, and it can be enhanced by integrating more LED chips into the sampling array owing to the properties of the hyperuniform structure. Although only low-resolution color images were reconstructed here, the method offers a new way to suppress these artifacts in high-resolution imaging without increasing imaging time. Cutting-edge LED manufacturing techniques such as micro-LED or OLED could be used to develop multi-hyperuniform LED arrays with a much larger chip count and a higher density, to take full advantage of such a sampling structure in high-resolution imaging. In this work, the multi-hyperuniform pattern has been verified to improve image quality at a high frame rate. In the future, LEDs could be used not only for illumination and display, but also for the development of new bionic imaging sensors.
Author Contributions
Conceptualization, M.-J.S. and L.-J.L.; methodology, M.-J.S.; software, X.-Y.Z.; validation, X.-Y.Z. and L.C.; data curation, X.-Y.Z.; writing—review and editing, X.-Y.Z. and M.-J.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the National Natural Science Foundation of China, grant number 61922011, and the Open Research Projects of Zhejiang Lab, grant number 2021MC0AB03.
Conflicts of Interest
The authors declare no conflict of interest.
Footnotes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Bayer B.E. Color Imaging Array. U.S. Patent 3971065, 20 July 1976.
2. Takamatsu M., Ito M. Color Image Forming Apparatus Providing Registration Control for Individual Color Images. U.S. Patent 5550625, 27 August 1996.
3. Costenza D.W. Image Registration for a Raster Output Scanner Color Printer. U.S. Patent 5412409, 2 May 1995.
4. Rajeev R., Snyder W.E., Griff B.L. Demosaicking methods for Bayer color arrays. J. Electron. Imaging. 2002;11:306–315.
5. Chung K.H., Chan Y.H. Color Demosaicing Using Variance of Color Differences. IEEE Trans. Image Process. 2006;15:2944–2955. doi: 10.1109/TIP.2006.877521.
6. Kimmel R. Demosaicing: Image reconstruction from color CCD samples. IEEE Trans. Image Process. 1999;8:1221–1228. doi: 10.1109/83.784434.
7. Malvar H.S., He L.W., Cutler R. High-quality linear interpolation for demosaicing of Bayer-patterned color images. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing; Montreal, QC, Canada, 17–21 May 2004.
8. Alleysson D., Süsstrunk S. Linear demosaicing inspired by the human visual system. IEEE Trans. Image Process. 2005;14:439–449. doi: 10.1109/tip.2004.841200.
9. Laroche C.A., Prescott M.A. Apparatus and method for adaptively interpolating a full color image utilizing chrominance gradients. U.S. Patent 5373322, 13 December 1994.
10. Yellott J. Spectral analysis of spatial sampling by photoreceptors: Topological disorder prevents aliasing. Vis. Res. 1982;22:1205–1210. doi: 10.1016/0042-6989(82)90086-4.
11. Jiao Y., Lau T., Hatzikirou H., Meyer-Hermann M., Corbo J.C., Torquato S. Avian photoreceptor patterns represent a disordered hyperuniform solution to a multiscale packing problem. Phys. Rev. E. 2014;89:022721. doi: 10.1103/PhysRevE.89.022721.
12. Hart N.S. The visual ecology of avian photoreceptors. Prog. Retin. Eye Res. 2001;20:675–703. doi: 10.1016/S1350-9462(01)00009-X.
13. Torquato S., Stillinger F.H. Local density fluctuations, hyperuniformity, and order metrics. Phys. Rev. E. 2003;68:041113. doi: 10.1103/PhysRevE.68.041113.
14. Yellott J. Spectral consequences of photoreceptor sampling in the rhesus retina. Science. 1983;221:382–385. doi: 10.1126/science.6867716.
15. Boström J.E., Dimitrova M., Canton C., Håstad O., Qvarnström A., Ödeen A. Ultra-Rapid Vision in Birds. PLoS ONE. 2016;11:e0151099. doi: 10.1371/journal.pone.0151099.
16. Sun M.J., Zhao X.Y., Li L.J. Imaging using hyperuniform sampling with a single-pixel camera. Opt. Lett. 2018;43:4049–4052. doi: 10.1364/OL.43.004049.
17. Shapiro J.H. Computational ghost imaging. Phys. Rev. A. 2008;78:061802. doi: 10.1103/PhysRevA.78.061802.
18. Duarte M.F., Davenport M.A., Takhar D., Laska J.N., Sun T., Kelly K.F., Baraniuk R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008;25:83–91. doi: 10.1109/MSP.2007.914730.
19. Bromberg Y., Katz O., Silberberg Y. Ghost imaging with a single detector. Phys. Rev. A. 2009;79:053840. doi: 10.1103/PhysRevA.79.053840.
20. Zhang Z.B., Ma X., Zhong J.G. Single-pixel imaging by means of Fourier spectrum acquisition. Nat. Commun. 2015;6:6225–6230. doi: 10.1038/ncomms7225.
21. Sun M.J., Zhang J.M. Single-pixel imaging and its application in three-dimensional reconstruction: A brief review. Sensors. 2019;19:732. doi: 10.3390/s19030732.
22. Xu C., Xu T.F., Yan G., Ma X., Zhang Y.H., Wang X., Zhao F., Arce G.R. Super-resolution compressive spectral imaging via two-tone adaptive coding. Photonics Res. 2020;3:395–411. doi: 10.1364/PRJ.377665.
23. Li J.J., Matlock A., Li Y.Z., Chen Q., Zuo C., Tian L. High-speed in vitro intensity diffraction tomography. Adv. Photonics. 2019;1:066004. doi: 10.1117/1.AP.1.6.066004.
24. Deck-Léger Z., Chamanara N., Skorobogatiy M., Silveirinha M.G., Caloz C. Uniform-velocity spacetime crystals. Adv. Photonics. 2019;1:056002.
25. Beck J. Irregularities of distribution. I. Acta Math. 1987;159:1–49. doi: 10.1007/BF02392553.
26. Beck J. Randomness in lattice point problems. Discrete Math. 2001;229:29–55. doi: 10.1016/S0012-365X(00)00200-4.
27. French A.S., Snyder A.W., Stavenga D.G. Image degradation by an irregular retinal mosaic. Biol. Cybern. 1977;27:229–233. doi: 10.1007/BF00344144.
28. Farinella G.M., Battiato S., Gallo G., Cipolla R. Natural versus artificial scene classification by ordering discrete fourier power spectra. SSPR SPR. 2008;5342:137–146.
29. Li Y.Q., Majumder A., Zhang H., Gopi M. Optimized multi-spectral filter array based imaging of natural scenes. Sensors. 2018;18:1172. doi: 10.3390/s18041172.
30. Barbastathis G., Ozcan A., Situ G.H. On the use of deep learning for computational imaging. Optica. 2019;6:921–943. doi: 10.1364/OPTICA.6.000921.
31. Feng S.J., Chen Q., Gu G.H., Tao T.Y., Zhang L., Hu Y., Hu W., Zuo C. Fringe pattern analysis using deep learning. Adv. Photonics. 2019;1:025001. doi: 10.1117/1.AP.1.2.025001.
32. Gibson G.M., Johnson S.D., Padgett M.J. Single-pixel imaging 12 years on: A review. Opt. Express. 2020;28:28190–28208. doi: 10.1364/OE.403195.
33. Jiao S.M., Sun M.J., Gao Y., Lei T., Xie Z.W., Yuan X.C. Motion estimation and quality enhancement for a single image in dynamic single-pixel imaging. Opt. Express. 2019;27:12841–12854. doi: 10.1364/OE.27.012841.
34. Watts C.M., Shrekenhamer D., Montoya J., Lipworth G., Hunt J., Sleasman T., Krishna S., Smith D.R., Padilla W.J. Terahertz compressive imaging with metamaterial spatial light modulators. Nat. Photonics. 2014;8:605–609. doi: 10.1038/nphoton.2014.139.
35. Zhang A.X., He Y.H., Wu L.A., Chen L.M., Wang B.B. Tabletop x-ray ghost imaging with ultra-low radiation. Optica. 2018;5:374–377. doi: 10.1364/OPTICA.5.000374.
36. Phillips D.B., Sun M.J., Taylor J.M., Edgar M.P., Barnett S.M., Gibson G.M., Padgett M.J. Adaptive foveated single-pixel imaging with dynamic supersampling. Sci. Adv. 2017;3:e1601782. doi: 10.1126/sciadv.1601782.
37. Sun M.J., Edgar M.P., Phillips D.B., Gibson G.M., Padgett M.J. Improving the signal-to-noise ratio of single-pixel imaging using digital microscanning. Opt. Express. 2016;24:10476–10485. doi: 10.1364/OE.24.010476.
38. Sun M.J., Wang H.Y., Huang J.Y. Improving the performance of computational ghost imaging by using a quadrant detector and digital micro-scanning. Sci. Rep. 2019;9:4105. doi: 10.1038/s41598-019-40798-x.
39. Li L.J., Chen W., Zhao X.Y., Sun M.J. Fast Optical Phased Array Calibration Technique for Random Phase Modulation LiDAR. IEEE Photonics J. 2018;11:1–10. doi: 10.1109/JPHOT.2018.2889313.
40. Howland G.A., Lum D.J., Ware M.R., Howell J.C. Photon counting compressive depth mapping. Opt. Express. 2013;21:23822–23837. doi: 10.1364/OE.21.023822.
41. Sun B., Edgar M.P., Bowman R., Vittert L.E., Welsh S., Bowman A., Padgett M.J. 3D computational imaging with single-pixel detectors. Science. 2013;340:844–847. doi: 10.1126/science.1234454.
42. Sun M.J., Edgar M.P., Gibson G.M., Sun B.Q., Radwell N., Lamb R., Padgett M.J. Single-pixel three-dimensional imaging with a time-based depth resolution. Nat. Commun. 2016;7:12010. doi: 10.1038/ncomms12010.
43. Wang M., Sun M.J., Huang C. Single-pixel 3D reconstruction via a high-speed LED array. J. Phys. Photonics. 2020;2:025006. doi: 10.1088/2515-7647/ab83e5.
44. Sun M.J., Meng L.T., Edgar M.P., Padgett M.J., Radwell N. A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging. Sci. Rep. 2017;7:3464. doi: 10.1038/s41598-017-03725-6.
45. Xu Z.H., Chen W., Penuelas J., Padgett M., Sun M.J. 1000 fps computational ghost imaging using LED-based structured illumination. Opt. Express. 2018;26:2427–2434. doi: 10.1364/OE.26.002427.