Nature Communications. 2022 Jun 22;13:3566. doi: 10.1038/s41467-022-31052-6

Pixel super-resolution with spatially entangled photons

Hugo Defienne 1, Patrick Cameron 1, Bienvenu Ndagano 1, Ashley Lyons 1, Matthew Reichert 2, Jiuxuan Zhao 3, Andrew R Harvey 1, Edoardo Charbon 3, Jason W Fleischer 2, Daniele Faccio 1
PMCID: PMC9217946  PMID: 35732642

Abstract

Pixelation occurs in many imaging systems and limits the spatial resolution of the acquired images. This effect is notably present in quantum imaging experiments with correlated photons, in which the number of pixels used to detect coincidences is often limited by the sensor technology or the acquisition speed. Here, we introduce a pixel super-resolution technique based on measuring the full spatially-resolved joint probability distribution (JPD) of spatially-entangled photons. Without shifting optical elements or using prior information, our technique increases the pixel resolution of the imaging system by a factor of two and enables retrieval of spatial information lost due to undersampling. We demonstrate its use in various quantum imaging protocols using photon pairs, including quantum illumination, entanglement-enabled quantum holography, and in a full-field version of N00N-state quantum holography. The JPD pixel super-resolution technique can benefit any full-field imaging system limited by the sensor spatial resolution, including all already established and future photon-correlation-based quantum imaging schemes, bringing these techniques closer to real-world applications.

Subject terms: Imaging and sensing, Quantum optics, Imaging techniques


Pixelation is common in quantum imaging systems and limits the image spatial resolution. Here, the authors introduce a pixel super-resolution approach based on measuring the full spatially-resolved joint probability distribution of spatially-entangled photons, and improve the pixel resolution by a factor of two.

Introduction

The acquisition of a high-resolution image over a large field of view is essential in most optical imaging applications. In this respect, the widespread development of digital cameras with millions of pixels has strongly contributed to the creation of imaging systems with large space-bandwidth products. In classical imaging, it is therefore mainly systems with strong spatial constraints, such as lensless on-chip microscopes1,2, that suffer from pixelation and undersampling. However, this effect is pervasive in quantum imaging schemes, where it severely hinders the progress of these techniques towards practical applications.

Quantum imaging systems harness quantum properties of light and their interaction with the environment to go beyond the limits of classical imaging or to implement unique imaging modalities3. Of the many approaches, imaging schemes based on entangled photon pairs are the most common and are among the most promising. Proof-of-principle demonstrations range from improving optical resolution4,5 and imaging sensitivity6–9 to the creation of new imaging protocols, such as ghost imaging10,11, quantum illumination12–15 and quantum holography16,17. Contrary to classical imaging, photon-correlation-based imaging systems operate by measuring photon coincidences between many spatial positions of the image plane in parallel (except for induced-coherence imaging approaches18–20). In practice, this process is much more delicate than forming an intensity image by photon accumulation and therefore requires specific photodetection devices. Although such a task was originally performed with raster-scanning single-photon detectors10, today most implementations use single-photon-sensitive cameras such as electron-multiplying charge-coupled device (EMCCD)21,22, intensified complementary metal-oxide-semiconductor (iCMOS)23 and single-photon avalanche diode (SPAD) cameras24.

EMCCD cameras are the most widely used devices for imaging photon correlations thanks to their high quantum efficiency, low noise and high pixel resolution. However, these cameras suffer from a very low frame rate (~100 fps) due to their electronic amplification process operating in series, and therefore require very long acquisition times (~10 h) to reconstruct correlation images13,14,17,25. Using a smaller sensor area or a binning technique can reduce the acquisition time, but at the cost of a loss in pixel resolution. By using an image intensifier, iCMOS cameras avoid such slow electronic amplification processes and can therefore reach higher frame rates (~1000 fps). However, so far these cameras have only enabled correlation images with a relatively small number of pixels (and still several hours of acquisition), mostly because of their low detection efficiency and higher noise level23,26. Finally, SPAD cameras are an emerging technology that can detect single photons at low noise while operating at very high frame rates (up to 800 kfps), but they currently suffer from low quantum efficiency (~20%) and typically low resolution (~1000 pixels).

It is clear from the above that nearly all sensor technologies currently used in quantum imaging experiments suffer from a poor pixel resolution, either directly, when the technology does not offer cameras with enough pixels, or indirectly, when the experiment can only operate in a reasonable time with a small number of pixels. In these systems, objects are therefore often undersampled, resulting in a loss of spatial information and the creation of artefacts in the retrieved images. Here, we demonstrate a quantum image processing technique based on entangled photon pairs that increases the pixel resolution by a factor of two. We experimentally demonstrate it in three common photon-pair-based imaging schemes: two quantum illumination protocols using (i) a near-field illumination configuration with an EMCCD camera13 and (ii) a far-field configuration with a SPAD camera27, and (iii) an entanglement-enabled holography system17. In addition, we use our JPD pixel super-resolution method in (iv) a full-field version of N00N-state quantum holography, a scheme that has only been demonstrated so far using a scanning approach6. Note that we refer to our technique as 'pixel super-resolution' to avoid confusion with the term 'super-resolution' describing imaging techniques capable of overcoming the classical diffraction limit.

Results

Experimental demonstration

Figure 1a describes the experimental setup used to demonstrate the principle of our technique for the widely used case of p = 2 spatially entangled photons. The spatially entangled photon pairs are produced by type-I spontaneous parametric down-conversion (SPDC) in a thin β-barium borate (BBO) crystal and illuminate an object, t, using a near-field illumination configuration (i.e. the output surface of the crystal is imaged onto the object). The biphoton correlation width in the camera plane is estimated to be σ ≈ 13 μm. The object is imaged onto an EMCCD camera with a pixel pitch of Δ = 32 μm. In our experiment, t is a horizontal square-modulation amplitude grating. The recorded intensity image in Fig. 1b shows a region of the object composed of 10 grating periods imaged with 25 rows of pixels. It is clear that the image suffers from aliasing, due to a harmonic above the Nyquist frequency of the detector array, leading to a low-frequency Moiré modulation with a period of approximately 5 pixels. If we are able to double the sampling frequency so as to super-resolve the image, we expect to capture all the harmonics and hence remove this Moiré pattern.
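This folding of frequencies can be checked numerically. The following toy NumPy example (ours, with illustrative numbers, not data from the paper) shows a harmonic at 0.8/Δ aliasing to 0.2/Δ when sampled with pitch Δ, and being recovered when sampled with pitch Δ/2:

```python
import numpy as np

delta = 32e-6                          # pixel pitch (m)
x_fine = np.arange(256) * delta / 2    # 16 um sampling grid
x_coarse = x_fine[::2]                 # 32 um sampling grid

def dominant_frequency(x):
    # Harmonic at 0.8/Delta, above the coarse Nyquist limit of 0.5/Delta.
    signal = np.cos(2 * np.pi * (0.8 / delta) * x)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0]) * delta  # units of 1/Delta
    return freqs[np.argmax(spectrum[1:]) + 1]               # skip the DC bin

print(dominant_frequency(x_coarse))  # ~0.2: aliased Moire peak
print(dominant_frequency(x_fine))    # ~0.8: true harmonic, resolved
```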

Fig. 1. Experimental demonstration of JPD pixel super-resolution.


a Experimental setup. Light emitted by a diode laser (405 nm) illuminates a β-barium borate (BBO) crystal with a thickness of 0.5 mm to produce spatially entangled pairs of photons by type-I SPDC. Long-pass and band-pass filters at 810 ± 5 nm (BPF) positioned after the crystal remove pump photons. A two-lens system f1 − f2 images the crystal surface onto an object t, which is itself imaged onto the EMCCD camera by another two-lens system f3 − f4. t is a grid-shaped amplitude object. The photon correlation width in the image plane is estimated as σ ≈ 13 μm and the camera pixel pitch is Δ = 32 μm. b Intensity image, (c) JPD diagonal image and (d) sum-coordinate projection of the JPD. e Intensity image obtained using a camera with half the pixel pitch, i.e. 16 μm. All images show the same spatial region of the object containing 10 grating periods. Coordinates are in pixels. f Spectra of the intensity image (solid red), diagonal image (dashed black), JPD sum-coordinate image (dashed blue) and intensity image acquired with a 16 μm-pixel-pitch camera (dashed red), obtained by applying a discrete Fourier transform to the corresponding image and averaging over the x-axis. g System modulation transfer function (MTF) obtained using the slanted-edge technique with an intensity image (solid red), a diagonal image (dashed black), a JPD sum-coordinate image (dashed blue) and an intensity image acquired using a 16 μm-pixel-pitch camera (dashed red). All frequency values are normalized to the same reference frequency k0 = 1/Δ.

We also measure the spatially resolved JPD of the photon pairs, Γijkl, by identifying photon coincidences between any arbitrary pair of pixels (i, j) and (k, l) centred at spatial positions r1 = (xi, yj) and r2 = (xk, yl), using the method described in ref. 28. The information contained in the JPD can be used for various purposes. For example, it was used in pioneering works with EMCCD cameras to estimate position and momentum correlation widths of entangled photon pairs in direct imaging of the Einstein-Podolsky-Rosen paradox21,22. In the experiment here, the diagonal component of the JPD, Γijij, reconstructs an image of the object, i.e. Γijij ∝ ∣t(xi, yj)∣4. Such a diagonal image is the quantity that is conventionally measured and used in all photon-pair-based imaging schemes using a near-field illumination configuration5,13,25,29. As shown in Fig. 1c, however, measuring the diagonal image in our experiment does not improve the image quality, i.e. the Moiré pattern is still present.

There is another way to retrieve an image of the object without using the diagonal component of the JPD. To do so, one projects the JPD along its sum-coordinate axis by summing Γ:

P^{+}_{i^{+}j^{+}} = \sum_{i=1}^{N_X}\sum_{j=1}^{N_Y} \Gamma_{(i^{+}-i)(j^{+}-j)\,ij} \qquad (1)

where NY × NX is the number of pixels of the illuminated region of the camera sensor, P+ is defined as the sum-coordinate projection of the JPD and (i+, j+) are sum-coordinate pixel indexes. Such a projection retrieves an image of the object sampled over four times the number of pixels of the sensor: P+i+j+ = ∣t(xi, yj)∣4 for even pixel indices, i.e. i+ = 2i and j+ = 2j, with xi = iΔ and yj = jΔ; P+i+j+ = ∣t(xi+1/2, yj+1/2)∣4 for odd pixel indices, i.e. i+ = 2i + 1 and j+ = 2j + 1, with xi+1/2 = xi + Δ/2 and yj+1/2 = yj + Δ/2. Pixel resolution is therefore increased by a factor of 2 (see section JPD pixel super-resolution principle for more details). Figure 1d shows the resulting sum-coordinate projection of the JPD measured in the experiment shown in Fig. 1a. Thanks to pixel super-resolution, we observe that the spurious low-frequency Moiré modulation has been removed and the 10 grating periods are now clearly visible. For comparison, this high-resolution image is very similar to a conventional intensity image acquired using a camera with half the pixel pitch, i.e. 16 μm (Fig. 1e).
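As a concrete illustration, the projection of Eq. (1) can be written in a few lines of NumPy. The following is a minimal sketch, assuming the JPD has already been measured and stored as a 4D array Γ[i, j, k, l]; the array layout and function name are ours, not from the paper:

```python
import numpy as np

def sum_coordinate_projection(jpd):
    """Sum-coordinate projection of a 4D JPD (Eq. 1).

    jpd[i, j, k, l] holds the coincidence rate between pixels (i, j)
    and (k, l).  Every entry is accumulated at (i + k, j + l), so the
    output samples the object on a grid twice as fine as the sensor.
    """
    ny, nx = jpd.shape[:2]
    p_plus = np.zeros((2 * ny - 1, 2 * nx - 1))
    for i in range(ny):
        for j in range(nx):
            # jpd[i, j] is the (k, l) coincidence map of pixel (i, j);
            # shifting it by (i, j) places each term at (i + k, j + l).
            p_plus[i:i + ny, j:j + nx] += jpd[i, j]
    return p_plus
```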

The frequency analysis of these different images is shown in Fig. 1f and provides more quantitative information about our approach. In particular, we observe the absence of a frequency peak at 0.2 in the sum-coordinate image spectrum (dashed blue), while it is present in both the intensity (solid red) and JPD-diagonal (dashed black) image spectra. It is replaced by a peak at 0.8, which is the true frequency component of the object (harmonic), as confirmed by its presence in the spectrum of the intensity image acquired using the high-resolution, 16 μm-pixel-pitch camera (dashed red). Removal of the aliased frequency component also corresponds to the disappearance of the Moiré pattern in real space. This confirms that our approach achieves pixel super-resolution and retrieves information that was lost due to undersampling. Additional measurements provided in supplementary document section 3 were acquired using a camera of even lower resolution (48 μm pixel pitch) and show that JPD pixel super-resolution can also recover the fundamental spectral component (main peak) even when it is absent in both the intensity and diagonal images. JPD pixel super-resolution is also confirmed by simulations detailed in supplementary document section 2.8.

Figure 1g shows system modulation transfer functions (MTFs) calculated using the slanted-edge technique30 with different imaging modalities. The MTF obtained using the JPD sum-coordinate projection (dashed blue) is approximately 1.7 times broader than those acquired using intensity (solid red) and JPD diagonal (dashed black) images, almost matching the MTF retrieved by intensity measurement with the high-resolution 16 μm-pixel-pitch camera (dashed red). This shows that JPD pixel super-resolution not only doubles the Nyquist frequency (1/(2Δ) → 1/Δ), but also broadens the system MTF, which results in less attenuation of higher frequencies. The broadening of the MTF is explained by the fact that the effective size of a 'pixel' during a JPD measurement (i.e. the size of the surface over which the coincidences are integrated) is on average smaller than the real size of the pixels. This shows that images retrieved by JPD pixel super-resolution are similar to conventional intensity images obtained with a camera that has four times more pixels (higher Nyquist frequency) but also smaller pixels (broader MTF) (more details about MTF measurements are provided in Methods and in supplementary document section 4).

Interestingly, the results in Fig. 1 also lead to another conclusion: contrary to a common belief in the field, conventional imaging with photon pairs (i.e. using the JPD diagonal) does not always improve image quality compared to classical intensity imaging. For example, here the spurious Moiré effect in the diagonal image (Fig. 1c) is more intense than in the direct intensity image (Fig. 1b), which is also confirmed by the higher intensity of the 0.2 frequency peak in the diagonal image spectrum (Fig. 1f).

JPD pixel super-resolution principle

We gain further insight into the underlying principle of JPD pixel super-resolution from inspection of the JPD of a spatially entangled two-photon state: this is a four-dimensional object containing much richer information than a conventional 2D intensity image. In particular, the JPD contains correlation information not only between photons detected at the same pixel, but also between photons detected at nearest-neighbour pixels.

Figure 2 illustrates this concept in the one-dimensional case. In Fig. 2a, photon pairs with a correlation width σ illuminate a one-dimensional object t(x) imaged onto an array of pixels with pitch Δ and pixel gap δ (i.e. the width of the non-active areas between neighbouring pixels). When measuring the JPD, there are two main contributions: (i) The first contribution originates from pairs of photons detected at the same pixel (blue rays) and forms the JPD diagonal elements, Γkk. Because these pairs crossed the object around the same positions as the pixels, i.e. xk, the resulting image (Fig. 2b) provides a sampling of the object Γkk ~ ∣t(xk)∣4 similar to that performed by a conventional intensity measurement. (ii) The second contribution originates from photon pairs detected by nearest-neighbour pixels and forms the JPD off-diagonal elements Γkk+1. Because these photons crossed the object around positions located between the pixels, i.e. xk+1/2, the resulting image (Fig. 2c) provides a sampling of the object Γkk+1 ~ ∣t(xk+1/2)∣4 similar to an intensity measurement performed with a sensor shifted by Δ/2 in the transverse direction (see Methods for derivations of Γkk and Γkk+1).

Fig. 2. Principle of JPD pixel super-resolution.


a Schematic of the optical imaging system, composed of an object t illuminated by entangled photon pairs with correlation width σ and imaged onto an array of pixels. Δ and δ are the pixel pitch and gap, respectively. Positions at the centre of pixels are noted using integer indices, i.e. xk, while those between pixels are noted using half-integer indices, i.e. xk+1/2. Blue rays represent photon pairs falling on a single pixel and red rays pairs falling on two nearest-neighbour pixels. b, c One-dimensional images formed by the JPD diagonal elements Γkk and off-diagonal elements Γkk+1, respectively. d Performing a sum-coordinate projection of the JPD using Eq. (1) recombines these two low-resolution images into a high-resolution one.

Finally, projecting the JPD along the sum-coordinate (Eq. 1) retrieves a high-resolution image (Fig. 2d) by interlacing diagonal and off-diagonal elements. To understand this recombination, one can expand Eq. 1 in the one-dimensional case for even and odd pixels:

P^{+}_{i^{+}=2k} = \Gamma_{kk} + 2\sum_{l=1}^{N} \Gamma_{(k-l)(k+l)} \qquad (2)
P^{+}_{i^{+}=2k+1} = 2\,\Gamma_{k(k+1)} + 2\sum_{l=1}^{N} \Gamma_{(k-l)(k+l+1)} \qquad (3)

where N is the number of pixels of the sensor. In theory, when operating under the constraint σ < Δ, correlations between non-neighbouring pixels are nearly zero, i.e. Γij ≈ 0 if ∣i − j∣ ≥ 2. The sum terms in Eqs. 2 and 3 then become negligible compared to Γkk and Γkk+1. This leads to P+i+=2k = Γkk and P+i+=2k+1 = 2Γkk+1, providing the super-resolved image. In practice, however, experimentally measured correlation values are noisy. All the noise then adds up in the sums in Eqs. 2 and 3, ultimately producing noise with greater variance that dominates the diagonal and off-diagonal elements in the final image. To solve this issue, the JPD is filtered before performing the projection to remove the weakest correlation values, i.e. all terms except Γkk and Γkk+1 are set to zero. In doing so, we also remove the noise associated with these values, which then no longer adds up in the sums, and significantly improve the quality of the final image (see Methods and Fig. 5 for more details).
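In the ideal case, the projection therefore reduces to interlacing the two low-resolution images of Fig. 2b, c. A one-dimensional NumPy sketch of this recombination (our notation; the relative factor of 2 follows Eq. (3) and is in any case rescaled by the normalisation step described in Methods):

```python
import numpy as np

def interlace_1d(diag, offdiag):
    """Interlace Gamma_kk (samples at x_k) with Gamma_k(k+1)
    (samples at x_{k+1/2}) into one super-resolved 1D image."""
    hi = np.empty(diag.size + offdiag.size)
    hi[0::2] = diag           # P+ at even indices: Gamma_kk
    hi[1::2] = 2 * offdiag    # P+ at odd indices: 2 Gamma_k(k+1)
    return hi
```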

Fig. 5. Filtering and normalization.


a Sum-coordinate projection of the JPD P+ measured in the experiment in Fig. 3 without filtering. b Sum-coordinate projection of the JPD P+ measured in the experiment in Fig. 3 after filtering and before normalisation. c Graphical representation of Eq. 6.

In addition, it should be noted that the JPD diagonal values (i.e. Γijij in the two-dimensional case) are not measured directly with single-photon cameras, as most of these devices are not photon-number resolving. To circumvent this limitation, the diagonal values are estimated from correlations between either vertically (Γi(j±1)ij) or horizontally (Γ(i±1)jij) direct-neighbouring pixels. This has the practical consequence of restricting the super-resolution effect to one dimension, along the complementary axis. In the general case, an image super-resolved in two dimensions can still be obtained by combining two images super-resolved in each direction. In the specific case of EMCCD cameras, however, this is not possible because of charge smearing5. This effect compromises the horizontal correlation values and restricts the super-resolution effect to one dimension, along the vertical spatial axis (see Methods). These limitations are lifted when operating in a far-field illumination configuration14,17,27 (see section Application to quantum holography) and in quantum imaging schemes using two distinct cameras16,31.

The previous analysis also shows that two experimental conditions must be satisfied to achieve pixel super-resolution. First, the sensor fill factor must be large enough to allow photon coincidence detection between neighbouring pixels, i.e. δ < σ. Second, the spatial resolution of the imaging system must be limited by the pixel size. This is true if the highest spatial frequency component of the optical field in the object plane is both smaller than the imaging system spatial frequency cut-off and larger than the sensor Nyquist frequency 1/(2Δ). In practice, one must verify at least that σ < Δ. Otherwise, a JPD sum-coordinate projection can still be retrieved, but it will not contain more information than a conventional intensity measurement. In the following, we apply our approach to real quantum imaging experiments in which the condition δ < σ < Δ holds. We note that this condition is straightforward to satisfy. Indeed, σ < Δ is the starting requirement for any form of pixel super-resolution to make sense: without it, there would not be any aliasing in the first place. The other condition, δ < σ, is always satisfied unless the pixel fill factor is extremely low.

Applications to quantum illumination

We demonstrate our technique on two different experimental schemes based on the two-photon quantum illumination protocols described in refs. 13 and 27. In the first protocol, an amplitude object t1 is illuminated by photon pairs and imaged onto an EMCCD camera using an experimental setup similar to that shown in Fig. 1a. Simultaneously, another object t2 is illuminated by a classical source and also imaged onto the camera (see Methods and supplementary document section 7.1 for more details about the experimental setup). The intensity image, Fig. 3a, shows a superposition of both quantum and classical images. The goal of such a protocol is to segment the quantum image and therefore retrieve an image showing only the object t1 illuminated by photon pairs, effectively removing any classical objects or noise. Conventionally, this is achieved by measuring the diagonal image shown in Fig. 3b. Using the JPD pixel super-resolution processing method, we can now perform such a task and simultaneously retrieve a pixel super-resolved image of t1 (Fig. 3c), in which we observe the clear removal of the Moiré effect due to aliasing.

Fig. 3. Results in quantum illumination imaging.


Experiment performed using a setup similar to that shown in Fig. 1a, in which another object illuminated by a classical light source (LED) is inserted. Images of both objects (positive and negative resolution charts) are superimposed onto the camera. a Intensity image, (b) diagonal image and (c) JPD pixel super-resolution image acquired using a camera with a 32 μm pixel pitch. d Intensity image, (e) diagonal image and (f) JPD pixel super-resolution image acquired using a camera with a 16 μm pixel pitch. The photon correlation width in the camera plane is σ ≈ 8 μm.

In addition, we reproduced the same experiment using a higher-resolution camera, i.e. a 16 μm pixel pitch (Fig. 3d–f). In this case, we observe that the JPD pixel super-resolution anti-aliasing is mainly visible at the edges and corners of the object, in particular through the attenuation of the so-called staircase effect. Thus, even in imaging situations where aliasing does not produce artefacts as striking as the Moiré effect, the JPD pixel super-resolution approach still provides clear benefits by removing aliasing and improving the overall image quality.

We also apply our technique to another quantum illumination protocol, demonstrated in ref. 27. This protocol performs the same task as in Fig. 3, i.e. distilling the quantum image from the classical one, but uses a far-field illumination configuration, operates in reflection and detects photons with a SPAD camera (see supplementary document section 7.2 for results and experimental arrangement).

Application to quantum holography

We now demonstrate JPD pixel super-resolution on two different experimental quantum holography schemes. The first approach was demonstrated in ref. 17 and its experimental configuration is described in Fig. 4a. Pairs of photons entangled in space and polarisation illuminate a spatial light modulator (SLM) and a birefringent phase object t6, each positioned in one half of an optical plane, using a far-field illumination configuration. The object and the SLM are then both imaged onto two different parts of an EMCCD camera.

Fig. 4. Results in entanglement-enabled quantum holography.


a Experimental setup. Light emitted by a laser diode at 405 nm and polarized at 45° illuminates a stacked BBO crystal pair (0.5 mm thickness each) whose optical axes are perpendicular to each other, to produce pairs of photons entangled in space and polarization by type-I SPDC. After the crystals, pump photons are filtered out by a band-pass filter at 810 ± 5 nm. A lens f11 is used to Fourier-image the photon pairs onto an optical plane containing an SLM and a phase object t6. A two-lens imaging system f12 − f13 is then used to image the SLM and the object onto two different parts of an EMCCD camera. A polarizer (P) oriented at 45° is positioned before the camera. The biphoton correlation width in the camera plane is estimated as σ ≈ 9 μm. The EMCCD camera pixel pitch is 16 μm. b Photo of the birefringent object used in the experiment (section of a bird feather). c Spatial phase of the object reconstructed by combining four anti-diagonal images Γij −i −j measured for four phase shifts {0, π/2, π, 3π/2} programmed by the SLM. d Spatial phase of the same object reconstructed from four minus-coordinate projections P−. White scale bar is 1 mm.

In such a far-field configuration, the information of the JPD is now concentrated around the anti-diagonal (i.e. Γij −i −j) because photon pairs are spatially anti-correlated in the object plane32. This anti-diagonal is the quantity that is conventionally measured and used in all photon-pair-based imaging schemes that use a far-field illumination configuration14,17,27. In this case, the JPD pixel super-resolution technique must be adapted by using the minus-coordinate projection P− of the filtered JPD, in place of the sum-coordinate projection P+, to retrieve the high-resolution image (see Methods).

The object considered here is a section of a bird feather, shown in Fig. 4b. The SLM is used to compensate for optical aberrations and to implement a four-step phase-shifting holographic process by displaying uniform phase patterns with values {0, π/2, π, 3π/2} (see Methods). In the original protocol, four different images are obtained, one for each step of the process, by measuring the anti-diagonal component Γij −i −j of the JPD. These images are then combined numerically to reconstruct the spatial phase of the birefringent object (Fig. 4c). Using the new JPD pixel super-resolution approach, the four images are now obtained from the minus-coordinate projection P− of the JPD and recombined to retrieve a phase image with improved spatial resolution (Fig. 4d). We do not observe a removal of aliasing artefacts as clear as in Figs. 1d and 3c, due to the relatively smooth shape of the bird feather, which is mostly composed of low frequencies below the Nyquist limit. Resolution improvements are therefore mainly located at the edges and corners, which visually translates into an overall improvement of the image quality.
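For reference, with the standard four-step phase-shifting convention the phase is recovered as φ = arctan[(Pθ=3π/2 − Pθ=π/2)/(Pθ=0 − Pθ=π)]. A minimal sketch of this recombination, assuming the textbook convention (the exact sign conventions used in ref. 17 may differ):

```python
import numpy as np

def four_step_phase(p_0, p_half_pi, p_pi, p_three_half_pi):
    """Standard four-step phase-shifting combination of the four
    minus-coordinate projection images (one per SLM phase shift).
    Each interferogram is modelled as A + B*cos(phi + theta)."""
    return np.arctan2(p_three_half_pi - p_half_pi, p_0 - p_pi)
```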

The second approach is a full-field version of a N00N-state holographic scheme based on photon pairs (N = 2). N00N states are known to provide phase measurements with N-times better sensitivity than classical holography7. Our results show that a full-field version of N00N-state holography with photon pairs not only preserves the two-fold sensitivity enhancement, but also provides better pixel resolution, an advantage that a scanning approach6 could only match at the cost of increasing the acquisition time by a factor of four (see supplementary document section 6 for experimental layout and results).

Discussion

We have introduced a pixel super-resolution technique based on the measurement of the spatially resolved JPD of spatially entangled photons. This approach retrieves spatial information about an object that is lost due to pixelation, without shifting optical elements or relying on prior knowledge. We demonstrated that this JPD pixel super-resolution approach can improve the spatial resolution in already established quantum illumination and entanglement-enabled holography schemes, as well as in a full-field version of N00N-state quantum holography, using different types of illumination (near-field and far-field) and different single-photon-sensitive cameras (EMCCD and SPAD). Our approach has the advantage that it can be used immediately in quantum imaging schemes based on photon pairs, in some cases by simply reprocessing already acquired data. In addition, our approach can also be implemented in any classical imaging system limited by pixelation, after substituting the classical source with a source of correlated photons with similar properties. Indeed, contrary to classical pixel super-resolution techniques, such as shift-and-add approaches33, wavelength scanning34 and machine-learning-based algorithms35, the JPD pixel super-resolution approach does not require displacing optical elements in the system or prior knowledge about the object being imaged. Although we experimentally demonstrated this technique for the case of p = 2 (photon pair) entanglement, we anticipate that our approach could be generalised to all p to further increase the pixel resolution. Photon-pair sources are currently the natural experimental choice in any given lab, but recent efforts have shown promising progress towards the direct generation of spatially entangled three-photon states36. We also underline that, although the schemes shown here used spatially entangled photons, strictly speaking it is not entanglement but only spatial correlations that are used to generate the JPD. This opens the intriguing prospect for future work to investigate the potential of classical sources of light, e.g. thermal light, to achieve similar pixel super-resolution as shown here but with ready access to p > 2 JPDs.

Methods

Experimental layouts

Experiment in Fig. 1a

The BBO crystal has dimensions of 0.5 × 5 × 5 mm and is cut for type I SPDC at 405 nm with a half opening angle of 3 degrees (Newlight Photonics). It is slightly rotated around its horizontal axis to ensure near-collinear phase matching of photons at the output (i.e. the ring collapsed into a disk). The pump is a continuous-wave laser at 405 nm (Coherent OBIS-LX) with an output power of approximately 200 mW and a beam diameter of 0.8 ± 0.1 mm. A 650 nm-cut-off long-pass filter is used to block pump photons after the crystal, together with a band-pass filter centred at 810 ± 5 nm. The camera is an EMCCD (Andor iXon Ultra 897) that operates at −60 °C, with a horizontal pixel shift readout rate of 17 MHz, a vertical pixel shift every 0.3 μs, a vertical clock amplitude voltage of +4 V above the factory setting and an amplification gain set to 1000. The camera sensor has a total of 512 × 512 pixels with a 16 μm pixel pitch and nearly unity fill factor. In Fig. 1b–d, the camera is operated with a pixel pitch of Δ = 32 μm by using 2 × 2 binning. In Fig. 1e, the camera is operated with a pixel pitch of 16 μm. The exposure time is set to 2 ms. The camera speed is about 100 frames per second (fps) using a region of interest of 100 × 100 pixels. The two-lens imaging system f1 − f2 in Fig. 1a is represented by two lenses for clarity, but is composed of a series of six lenses with focal lengths 45, 75, 50, 150, 100 and 150 mm. The first and last lenses are positioned at focal distances from the crystal and the object, respectively, and the distance between two consecutive lenses equals the sum of their focal lengths. Similarly, the second two-lens imaging system f3 − f4 in Fig. 1a is composed of a series of four consecutive lenses with focal lengths 150, 50, 75 and 100 mm arranged as in the previous case. The imaging system magnification is 3.3. The photon correlation width in the camera plane is estimated as σ ≈ 13 μm.

Experiment used to acquire images in Fig. 3

The experimental setup is the same as that shown in Fig. 1a, with some changes in the lenses used and the addition of an external arm to superimpose the classical image. It is shown in Fig. 15a of the supplementary document. The output surface of the crystal is imaged onto an object t1 using a two-lens imaging system with focal lengths f5 = 35 mm and f6 = 75 mm. The object is then imaged onto the camera using a single-lens imaging system composed of one lens with focal length f7 = 50 mm positioned at a distance of 100 mm from both the object and the camera. Another object t2 is inserted, illuminated by a spatially filtered light-emitting diode (LED) and spectrally filtered at 810 ± 5 nm. Images of both objects are superimposed on the camera using a beam splitter. t1 and t2 are negative and positive amplitude USAF resolution charts, respectively. They are shown in Fig. 15b, c of the supplementary document. The exposure time is set to 6 ms. The imaging system magnification is 2.1. The biphoton correlation width in the camera plane is estimated as σ ≈ 8 μm. 6 × 10⁶ frames were acquired to retrieve the intensity image and the JPD in approximately 20 h of acquisition. The same setup was used in ref. 13. More details are provided in section 7 of the supplementary document.

Experiment in Fig. 4a

The paired set of stacked BBO crystals have dimensions of 0.5 × 5 × 5 mm each and are cut for type I SPDC at 405 nm. They are optically contacted, with one crystal rotated by 90 degrees about the axis normal to the incidence face. Both crystals are slightly rotated around their horizontal and vertical axes to ensure near-collinear phase matching of photons at the output (i.e. rings collapsed into disks). The pump laser and camera are the same as in Fig. 1a. A 650 nm-cut-off long-pass filter is used to block pump photons after the crystals, together with a band-pass filter centred at 810 ± 5 nm. The SLM is a phase-only modulator (Holoeye Pluto-2-NIR-015) with 1920 × 1080 pixels and an 8 μm pixel pitch. For clarity, it is represented in transmission in Fig. 4a, but was operated in reflection. The exposure time is set to 3 ms. The camera speed is about 40 frames per second using a region of interest of 200 × 200 pixels. The single-lens Fourier imaging system f11 is composed of a series of three lenses with focal lengths 45, 125 and 150 mm. The first and last lenses are positioned at focal distances from the crystal and the object-SLM plane, respectively, and the distance between each pair of lenses equals the sum of their focal lengths. The two-lens imaging system f12 − f13 is in reality composed of a series of four lenses with focal lengths 150, 75, 75 and 100 mm. The first and last lenses are positioned at focal distances from the SLM-object plane and the camera, respectively, and the distance between two consecutive lenses equals the sum of their focal lengths. The imaging system effective focal length is 36 mm. The photon correlation width in the camera plane is estimated as σ ≈ 9 μm. 2.5 × 10⁶ frames were acquired to retrieve the intensity image and the JPD in each case, in approximately 17 h of acquisition. The same setup was used in ref. 17. More details are provided in section 5 of the supplementary document.

JPD measurement with a camera

Γijkl, where (i, j) and (k, l) are two arbitrary pairs of pixels centred at positions r1 = (xi, yj) and r2 = (xk, yl), is measured by acquiring a set of M + 1 frames {I(l)}l∈[1,M+1] using a fixed exposure time and then processing them using the formula:

\Gamma_{ijkl} = \frac{1}{M}\sum_{l=1}^{M}\left[ I^{(l)}_{ij}\, I^{(l)}_{kl} - I^{(l)}_{ij}\, I^{(l+1)}_{kl} \right] \qquad (4)

In all the results of our work, M was on the order of 10⁶–10⁷ frames. However, it is essential to note that not all the JPD values can be directly measured using this process. When using an EMCCD camera, (i) correlation values at the same pixel, i.e. Γijij, cannot be directly measured because Eq. 4 is only valid for different pixels (i, j) ≠ (k, l)28, and (ii) correlation values between vertical pixels, i.e. Γiji(j±q) (where q is an integer that defines the position of a pixel above or below (i, j)), cannot be measured because of charge smearing effects. To circumvent this issue, these values are interpolated from neighbouring correlation values of the JPD, i.e. [Γij (i+1) j + Γij (i−1) j]/2 → Γijij and [Γij (i+1) (j±q) + Γij (i−1) (j±q)]/2 → Γiji (j±q), as detailed in ref. 5. As a result, the gain in resolution along the x-axis in the experiments using near-field imaging configurations (Figs. 1 and 3, and Figs. 15 and 10 of the supplementary document) is not optimal. However, it is important to note that the gain in pixel resolution along the y-axis is not affected by this interpolation technique. Therefore, the spectral analyses performed in Fig. 1 are also not impacted, because the grid objects used are horizontal (no spectral component on the x-axis) and all the resulting spectral curves are obtained by summing along the x-axis. In addition, this interpolation also does not affect the experiments using far-field illumination configurations in Fig. 4a and Fig. 16 of the supplementary document, because the JPD diagonals do not contain any relevant imaging information (which lies in the JPD anti-diagonals). More details are provided in ref. 28 and in section 1 of the supplementary document.
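In practice, Eq. (4) reduces to two matrix products over flattened pixel indices. A minimal NumPy sketch (our function name; only practical for small regions of interest, since the output size scales as the square of the pixel count):

```python
import numpy as np

def jpd_from_frames(frames):
    """Estimate the JPD from a stack of M + 1 frames via Eq. (4).

    frames: array of shape (M + 1, ny, nx).  Same-frame products
    estimate <I_ij I_kl>; products across consecutive frames estimate
    the accidental-coincidence background, which is subtracted.
    """
    m = frames.shape[0] - 1
    ny, nx = frames.shape[1:]
    a = frames[:m].reshape(m, -1)   # frames I^(l), flattened pixels
    b = frames[1:].reshape(m, -1)   # shifted frames I^(l+1)
    gamma = (a.T @ a - a.T @ b) / m
    return gamma.reshape(ny, nx, ny, nx)
```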

JPD pixel super-resolution in far-field illumination configuration

When an object is illuminated by photon pairs using a far-field illumination configuration (i.e. the crystal is Fourier-imaged onto the object), the JPD pixel super-resolution technique must be adapted, and the sum-coordinate projection P+ cannot be used to retrieve the high-resolution image. First, instead of the diagonal, information about the object is retrieved by displaying the anti-diagonal component of the JPD, i.e. Γij −i −j ≈ ∣t(xi, yj)∣2∣t(−xi, −yj)∣2. Γij −i −j is the quantity that is conventionally measured and used in all photon-pair-based imaging schemes using a far-field illumination configuration14,17,27. Second, instead of using the sum-coordinate projection, a super-resolved image of the object is retrieved by integrating the JPD along its minus-coordinate axis:

P^{-}_{i^{-}j^{-}} = \sum_{i=1}^{N_X}\sum_{j=1}^{N_Y} \Gamma_{(i^{-}+i)(j^{-}+j)\,ij} \qquad (5)

where NY × NX is the number of pixels of the illuminated region of the camera sensor, P− is defined as the minus-coordinate projection of the JPD and (i−, j−) defines the minus-coordinate pixel to which a spatial variable r− = (xi−, yj−) is associated. Each value P−i−j− is obtained by adding all the values Γijkl located on an anti-diagonal of the JPD defined by i − k = i− and j − l = j− (i.e. r1 − r2 = r−). In theory, calculating the minus-coordinate projection of the JPD can therefore achieve pixel super-resolution and potentially retrieve lost spatial information of undersampled objects. However, it is important to note that the JPD anti-diagonal Γij −i −j and minus-coordinate projection P− images are not directly proportional to ∣t(xi, yj)∣ as in the near-field configuration, but to ∣t(xi, yj)∣∣t(−xi, −yj)∣, which does not always enable retrieval of ∣t∣ without ambiguity. In works using a far-field illumination configuration, this problem is solved by illuminating t with only half of the photon-pair beam (i.e. t(x, y ≤ 0) = 1). The object then appears twice in the retrieved image (the object and its symmetric image), but no information is lost.
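A NumPy sketch of the minus-coordinate projection, mirroring the sum-coordinate code given earlier and assuming the same 4D array layout Γ[i, j, k, l] (indices are offset so that differences remain non-negative):

```python
import numpy as np

def minus_coordinate_projection(jpd):
    """Minus-coordinate projection of a 4D JPD (Eq. 5).

    Each entry jpd[i, j, k, l] is accumulated at
    (i - k + ny - 1, j - l + nx - 1)."""
    ny, nx = jpd.shape[:2]
    p_minus = np.zeros((2 * ny - 1, 2 * nx - 1))
    for i in range(ny):
        for j in range(nx):
            # Flipping the (k, l) axes turns the index difference
            # into a plain offset of the coincidence map.
            p_minus[i:i + ny, j:j + nx] += jpd[i, j, ::-1, ::-1]
    return p_minus
```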

JPD filtering

Figure 5a shows the sum-coordinate projection P+ calculated using Eq. (1) from the unfiltered JPD measured in Fig. 3. This image is very noisy and does not reveal the object. To solve this issue, a filtering method is applied that consists of setting to zero all values of the JPD that have a negligible weight (i.e. values close to zero), so that their associated noise does not contribute when performing the sum. When using a near-field illumination configuration (Figs. 1 and 3, and Figs. 10 and 15 of the supplementary document), filtering is applied by setting all JPD values to zero except those of the main JPD diagonal Γijij and of the eight other diagonals directly above and below it, i.e. Γij (i±1) j, Γiji (j±1) and Γij (i±1) (j±1). When using a far-field illumination configuration (Figs. 4a and 16 of the supplementary document), filtering is applied by setting all JPD values to zero except those of the main JPD anti-diagonal Γij −i −j and those of the eight other anti-diagonals directly above and below it, i.e. Γij (−i±1) −j, Γij −i (−j±1) and Γij (−i±1) (−j±1).
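A sketch of this near-field filtering step (our implementation; the far-field case is analogous, with anti-diagonals in place of diagonals):

```python
import numpy as np

def filter_jpd_near_field(jpd):
    """Zero every JPD value except the main diagonal and the eight
    diagonals directly above and below it, so that only Gamma_ijij
    and its nearest-neighbour correlations survive the projection."""
    ny, nx = jpd.shape[:2]
    filtered = np.zeros_like(jpd)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            # Keep only indices for which (i + di, j + dj) is on-sensor.
            i = np.arange(max(0, -di), min(ny, ny - di))
            j = np.arange(max(0, -dj), min(nx, nx - dj))
            ii, jj = np.meshgrid(i, j, indexing="ij")
            filtered[ii, jj, ii + di, jj + dj] = jpd[ii, jj, ii + di, jj + dj]
    return filtered
```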

Normalisation

Figure 5b shows the sum-coordinate projection P+ directly calculated from the filtered JPD measured in the experiment in Fig. 3, before normalisation. We observe that this image has an artefact taking the form of inhomogeneous horizontal and vertical stripes. This artefact is very similar to that commonly observed in shift-and-add super-resolution techniques33 and is often referred to as a 'motion error artefact'. In our case, it originates from the difference in the effective detection areas of photon pairs when they are detected by the same pixel (diagonal elements) or by neighbouring pixels (off-diagonal elements), as illustrated in Fig. 2. Using an analogy with the shift-and-add technique, our problem is equivalent to a situation in which the different shifted low-resolution images were measured using cameras with different pixel widths during the first step of the process. Then, when these low-resolution images are recombined into a high-resolution one (second step of shift-and-add), the artefact appears in the resulting image because some pixels are less intense than others. In practice, the simplest way to remove this artefact is to normalise each low-resolution image by its total intensity so that neighbouring pixels in the high-resolution image are at the same level. We use such a normalisation approach in our work by dividing all values of the non-zero diagonals in the filtered JPD by their sum, i.e. Γij(i+i−)(j+j−)/Σi,j Γij(i+i−)(j+j−) → Γij(i+i−)(j+j−), where (i−, j−) identifies a specific JPD diagonal. After normalisation, Fig. 3f is obtained. In the case of far-field illumination, the same normalisation is applied to the values of the non-zero anti-diagonals in the filtered JPD, i.e. Γij(i+−i)(j+−j)/Σi,j Γij(i+−i)(j+−j) → Γij(i+−i)(j+−j), where (i+, j+) identifies a specific JPD anti-diagonal.
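The corresponding normalisation can then be applied diagonal by diagonal, e.g. with the following minimal sketch (same assumed array layout as above):

```python
import numpy as np

def normalise_diagonals(filtered):
    """Divide each retained diagonal of the filtered JPD by its total
    weight so that interlaced pixels of the high-resolution image end
    up on the same level (simplest correction of the stripe artefact)."""
    ny, nx = filtered.shape[:2]
    out = np.zeros_like(filtered)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            i = np.arange(max(0, -di), min(ny, ny - di))
            j = np.arange(max(0, -dj), min(nx, nx - dj))
            ii, jj = np.meshgrid(i, j, indexing="ij")
            diag = filtered[ii, jj, ii + di, jj + dj]
            if diag.sum() > 0:
                out[ii, jj, ii + di, jj + dj] = diag / diag.sum()
    return out
```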

In some cases, the artefact is reduced but still visible in the resulting image even after normalisation. The persistence of this artefact is due to the fact that the difference in the effective integration areas between diagonal and off-diagonal elements is too large to be accurately corrected by simple sum normalisation. For example, in the experiment shown in Fig. 10a of the supplementary document, the poor quality of the SPAD camera sensor, in particular its very low fill factor (10.5%), is probably at the origin of the remaining artefact. To further reduce this artefact, one could use more complex normalisation algorithms, such as L1- or L2-norm minimisation approaches37 and kernel regression38, which are commonly used in shift-and-add approaches.

Slanted-edge method

MTF measurements using the slanted-edge approach were performed with a razor blade tilted by approximately 100 mrad and positioned in the object plane of the experimental configuration shown in Fig. 1, following the standard method described in ref. 30. Broadening of the curves is estimated by comparing the spatial frequency values at which the MTF falls to 50% of its low-frequency value (i.e. the MTF50 criterion). More details are provided in section 4 of the supplementary document.
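For illustration, the final step of the slanted-edge computation amounts to differentiating the edge-spread function and taking the modulus of its Fourier transform. A simplified sketch, assuming the super-sampled edge profile has already been extracted (the full procedure of ref. 30 also involves projecting the 2D edge image along the slant):

```python
import numpy as np

def mtf_from_esf(esf, sample_pitch):
    """Differentiate the edge-spread function to get the line-spread
    function, then take the normalised modulus of its spectrum."""
    lsf = np.gradient(esf, sample_pitch)
    lsf *= np.hanning(lsf.size)          # reduce truncation artefacts
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch)
    return freqs, mtf / mtf[0]
```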

Estimation of the photon correlation width σ

Near-field illumination configuration

The value of the correlation width in the image plane σ is obtained by calculating the position-correlation width at the output of the crystal using the formula √(αLλp/(2π)) (L is the crystal thickness, λp is the pump wavelength and α is a parameter described in ref. 39) and multiplying it by the magnification of the imaging system.

Far-field configuration

The value of σ is obtained by calculating the angular-correlation width of photons at the output of the crystal using the formula λp/(2ω) (ref. 39), where ω is the pump beam waist, and multiplying it by the effective focal length of the imaging system.

In our work, values of σ are estimated using theory and not with the experimental techniques described in refs. 21,22, because these approaches fail to provide an accurate result precisely when the correlation width is smaller than the pixel width. In addition, note that these width values are not strict bounds but correspond to standard-deviation widths when modelling spatial correlations with a Gaussian model40. More details are provided in section 2.4 of the supplementary document.
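For concreteness, the near-field estimate amounts to the following arithmetic (illustrative script; the value of α below is a placeholder, since the parameter is defined in ref. 39, so the printed number is not meant to reproduce the quoted σ ≈ 13 μm exactly):

```python
import numpy as np

L = 0.5e-3           # crystal thickness (m)
lam_p = 405e-9       # pump wavelength (m)
alpha = 1.0          # dimensionless parameter of ref. 39 (assumed here)
magnification = 3.3  # imaging-system magnification (Methods, Fig. 1a)

# Position-correlation width at the crystal output, then imaged
# onto the camera plane by the magnification of the optical system.
sigma_crystal = np.sqrt(alpha * L * lam_p / (2 * np.pi))
sigma_image = magnification * sigma_crystal
print(f"sigma in the camera plane ~ {sigma_image * 1e6:.1f} um")
```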

Analytical derivation of Γkk and Γkk+1

We consider a one-dimensional object t(x) illuminated by photon pairs using a near-field illumination configuration. Photon pairs are characterised by a two-photon wavefunction Ψt(x1, x2) in the object plane. JPD values Γkl are measured using an array of pixels with pitch Δ and gap δ. Assuming that the imaging system is not limited by diffraction but only by the sensor pixel resolution, its point spread function can be approximated by a Dirac delta function and Γkl can be formally written as:

\Gamma_{kl} = \int_{x_k-\frac{\Delta-\delta}{2}}^{x_k+\frac{\Delta-\delta}{2}} \int_{x_l-\frac{\Delta-\delta}{2}}^{x_l+\frac{\Delta-\delta}{2}} \left| t(x_1)\, t(x_2)\, \Psi_t(x_1,x_2) \right|^2 \mathrm{d}x_1\, \mathrm{d}x_2, \qquad (6)

where we assumed unity magnification between the object and camera planes. A graphical representation of this integral is shown in Fig. 5c. For clarity, we only represented an array of three pixels. The bivariate function ∣t(x1)t(x2)∣2 is represented in green and overlaps with a grid of squares of size Δ − δ and spacing δ. Each square represents the integration area associated with a specific JPD value. For example, the central square corresponds to the integration area of Γkk, i.e. [xk − (Δ−δ)/2, xk + (Δ−δ)/2] × [xk − (Δ−δ)/2, xk + (Δ−δ)/2]. In addition, the bivariate function ∣Ψt(x1, x2)∣2 is represented by two dashed black lines. These two lines delimit the most intense part of the function, which corresponds to a diagonal band of width σ under a double-Gaussian model40.

We seek to calculate the JPD values Γkk and Γkk+1. Graphically, these values are located at the intersection between the grid, the green area and the surface inside the dashed lines. They are represented in blue and red, respectively. For small widths σ < Δ, it is clear in Fig. 5c that the blue and red integration areas tighten around the positions (xk, xk) and (xk+1/2, xk+1/2), which results in Γkk ~ ∣t(xk)∣4 and Γkk+1 ~ ∣t(xk+1/2)∣4. More formally, one can also apply a change of variables and perform a first-order Taylor expansion in Eq. 6 to reach the same result:

\Gamma_{kk} \approx |t(x_k)|^4\, S_0 \qquad (7)
\Gamma_{k(k+1)} \approx |t(x_{k+1/2})|^4\, S_1 \qquad (8)

where S_0 \approx \frac{\sigma}{2}\left[1 + 2\sqrt{2}\left(\Delta - \frac{\sigma}{2}\right)\right] and S_1 \approx \sigma^2/2. Full calculations are provided in section 2.5 of the supplementary document. Note also that the difference between the integration areas, S_0 \neq S_1, makes the normalisation step after calculating the JPD projection necessary (see Methods section Normalisation).


Acknowledgements

D.F. acknowledges financial support from the Royal Academy of Engineering Chair in Emerging Technology, UK Engineering and Physical Sciences Research Council (grants EP/T00097X/1 and EP/R030081/1) and from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 801060. H.D. acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 840958. JZ has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 754354.

Author contributions

H.D. conceived the original idea, designed and performed the experiments, analysed the data and prepared the manuscript. P.C. performed the slanted-edge experiment. P.C. and H.D. performed the simulations. M.R. and J.F. contributed to developing the quantum illumination protocol with the EMCCD camera. J.Z. and E.C. contributed to developing the quantum illumination protocol with the SPAD camera. A.L. and B.N. contributed to developing the entanglement-enabled quantum holography protocol. A.R.H. participated in the data analysis. All authors contributed to the manuscript. D.F. supervised the project.

Peer review

Peer review information

Nature Communications thanks Eric Lantz and the other anonymous reviewers for their contribution to the peer review of this work.

Data availability

The data generated in this study have been deposited in a database under accession code 10.5525/gla.researchdata.1269.

Competing interests

The authors declare that there are no conflicts of interest related to this article. For the sake of transparency, the authors would like to disclose that E.C. holds the position of Chief Scientific Officer of Fastree3D, a company active in LIDAR and consumer electronics, and that he is a co-founder of Pi Imaging Technology. Neither company has been involved with the drafting of this paper, and at the time of writing they have no commercial interests related to this article.

Footnotes

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Hugo Defienne, Email: hugo.defienne@glasgow.ac.uk.

Daniele Faccio, Email: daniele.faccio@glasgow.ac.uk.

Supplementary information

The online version contains supplementary material available at 10.1038/s41467-022-31052-6.

References

1. Bishara W, Su T-W, Coskun AF, Ozcan A. Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution. Opt. Express. 2010;18:11181–11191. doi: 10.1364/OE.18.011181.
2. Bishara W, et al. Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array. Lab Chip. 2011;11:1276–1279. doi: 10.1039/c0lc00684j.
3. Moreau P-A, Toninelli E, Gregory T, Padgett MJ. Imaging with quantum states of light. Nat. Rev. Phys. 2019;1:367–380. doi: 10.1038/s42254-019-0056-0.
4. Boto AN, et al. Quantum interferometric optical lithography: exploiting entanglement to beat the diffraction limit. Phys. Rev. Lett. 2000;85:2733–2736. doi: 10.1103/PhysRevLett.85.2733.
5. Reichert M, Defienne H, Fleischer JW. Massively parallel coincidence counting of high-dimensional entangled states. Sci. Rep. 2018;8:7925. doi: 10.1038/s41598-018-26144-7.
6. Ono T, Okamoto R, Takeuchi S. An entanglement-enhanced microscope. Nat. Commun. 2013;4:2426. doi: 10.1038/ncomms3426.
7. Israel Y, Rosen S, Silberberg Y. Supersensitive polarization microscopy using NOON states of light. Phys. Rev. Lett. 2014;112:103604. doi: 10.1103/PhysRevLett.112.103604.
8. Brida G, Genovese M, Berchera IR. Experimental realization of sub-shot-noise quantum imaging. Nat. Photonics. 2010;4:227–230. doi: 10.1038/nphoton.2010.29.
9. Camphausen R, et al. A quantum-enhanced wide-field phase imager. Sci. Adv. 2021;7:eabj2155. doi: 10.1126/sciadv.abj2155.
10. Pittman TB, Shih YH, Strekalov DV, Sergienko AV. Optical imaging by means of two-photon quantum entanglement. Phys. Rev. A. 1995;52:R3429–R3432. doi: 10.1103/PhysRevA.52.R3429.
11. Aspden RS, Tasca DS, Boyd RW, Padgett MJ. EPR-based ghost imaging using a single-photon-sensitive camera. N. J. Phys. 2013;15:073032. doi: 10.1088/1367-2630/15/7/073032.
12. Lopaeva ED, et al. Experimental realization of quantum illumination. Phys. Rev. Lett. 2013;110:153603. doi: 10.1103/PhysRevLett.110.153603.
13. Defienne H, Reichert M, Fleischer JW, Faccio D. Quantum image distillation. Sci. Adv. 2019;5:eaax0307. doi: 10.1126/sciadv.aax0307.
14. Gregory T, Moreau P-A, Toninelli E, Padgett MJ. Imaging through noise with quantum illumination. Sci. Adv. 2020;6:eaay2652. doi: 10.1126/sciadv.aay2652.
15. Gregory T, Moreau P-A, Mekhail S, Wolley O, Padgett MJ. Noise rejection through an improved quantum illumination protocol. Sci. Rep. 2021;11:21841. doi: 10.1038/s41598-021-01122-8.
16. Devaux F, Mosset A, Bassignot F, Lantz E. Quantum holography with biphotons of high Schmidt number. Phys. Rev. A. 2019;99:033854. doi: 10.1103/PhysRevA.99.033854.
17. Defienne H, Ndagano B, Lyons A, Faccio D. Polarization entanglement-enabled quantum holography. Nat. Phys. 2021;17:591–597. doi: 10.1038/s41567-020-01156-1.
18. Lemos GB, et al. Quantum imaging with undetected photons. Nature. 2014;512:409–412. doi: 10.1038/nature13586.
19. Kviatkovsky I, Chrzanowski HM, Avery EG, Bartolomaeus H, Ramelow S. Microscopy with undetected photons in the mid-infrared. Sci. Adv. 2020;6:eabd0264. doi: 10.1126/sciadv.abd0264.
20. Vanselow A, et al. Frequency-domain optical coherence tomography with undetected mid-infrared photons. Optica. 2020;7:1729–1736. doi: 10.1364/OPTICA.400128.
21. Moreau P-A, Mougin-Sisini J, Devaux F, Lantz E. Realization of the purely spatial Einstein-Podolsky-Rosen paradox in full-field images of spontaneous parametric down-conversion. Phys. Rev. A. 2012;86:010101. doi: 10.1103/PhysRevA.86.010101.
22. Edgar MP, et al. Imaging high-dimensional spatial entanglement with a camera. Nat. Commun. 2012;3:984. doi: 10.1038/ncomms1988.
23. Chrapkiewicz R, Wasilewski W, Banaszek K. High-fidelity spatially resolved multiphoton counting for quantum imaging applications. Opt. Lett. 2014;39:5090–5093. doi: 10.1364/OL.39.005090.
24. Unternährer M, Bessire B, Gasparini L, Stoppa D, Stefanov A. Coincidence detection of spatially correlated photon pairs with a monolithic time-resolving detector array. Opt. Express. 2016;24:28829–28841. doi: 10.1364/OE.24.028829.
25. Toninelli E, et al. Resolution-enhanced quantum imaging by centroid estimation of biphotons. Optica. 2019;6:347–353. doi: 10.1364/OPTICA.6.000347.
26. Chrapkiewicz R, Jachura M, Banaszek K, Wasilewski W. Hologram of a single photon. Nat. Photonics. 2016;10:576–579. doi: 10.1038/nphoton.2016.129.
27. Defienne H, Zhao J, Charbon E, Faccio D. Full-field quantum imaging with a single-photon avalanche diode camera. Phys. Rev. A. 2021;103:042608. doi: 10.1103/PhysRevA.103.042608.
28. Defienne H, Reichert M, Fleischer JW. General model of photon-pair detection with an image sensor. Phys. Rev. Lett. 2018;120:203604. doi: 10.1103/PhysRevLett.120.203604.
29. Reichert M, Defienne H, Sun X, Fleischer JW. Biphoton transmission through non-unitary objects. J. Opt. 2017;19:044004. doi: 10.1088/2040-8986/aa6175.
30. Burns PD, et al. Slanted-edge MTF for digital camera and scanner analysis. In IS&T PICS Conference, 135–138 (Society for Imaging Science & Technology, 2000).
31. Devaux F, Mosset A, Moreau P-A, Lantz E. Imaging spatiotemporal Hong-Ou-Mandel interference of biphoton states of extremely high Schmidt number. Phys. Rev. X. 2020;10:031031.
32. Schneeloch J, Howell JC. Introduction to the transverse spatial correlations in spontaneous parametric down-conversion through the biphoton birth zone. J. Opt. 2016;18:053501. doi: 10.1088/2040-8978/18/5/053501.
33. Farsiu S, Robinson M, Elad M, Milanfar P. Fast and robust multiframe super resolution. IEEE Trans. Image Process. 2004;13:1327–1344. doi: 10.1109/TIP.2004.834669.
34. Luo W, Zhang Y, Feizi A, Göröcs Z, Ozcan A. Pixel super-resolution using wavelength scanning. Light Sci. Appl. 2016;5:e16060. doi: 10.1038/lsa.2016.60.
35. Shi W, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proc. IEEE CVPR, 1874–1883 (2016). https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Shi_Real-Time_Single_Image_CVPR_2016_paper.html.
36. Borshchevskaya NA, Katamadze KG, Kulik SP, Fedorov MV. Three-photon generation by means of third-order spontaneous parametric down-conversion in bulk crystals. Laser Phys. Lett. 2015;12:115404. doi: 10.1088/1612-2011/12/11/115404.
37. Farsiu S, Elad M, Milanfar P. A practical approach to superresolution. In Visual Communications and Image Processing 2006, vol. 6077, 607703 (International Society for Optics and Photonics, 2006). https://www.spiedigitallibrary.org/conference-proceedings-of-spie/6077/607703/A-practical-approach-to-superresolution/10.1117/12.644391.short.
38. Takeda H, Farsiu S, Milanfar P. Kernel regression for image processing and reconstruction. IEEE Trans. Image Process. 2007;16:349–366. doi: 10.1109/TIP.2006.888330.
39. Chan KW, Torres JP, Eberly JH. Transverse entanglement migration in Hilbert space. Phys. Rev. A. 2007;75:050101. doi: 10.1103/PhysRevA.75.050101.
40. Fedorov MV, Mikhailova YM, Volkov PA. Gaussian modelling and Schmidt modes of SPDC biphoton states. J. Phys. B: At. Mol. Opt. Phys. 2009;42:175503. doi: 10.1088/0953-4075/42/17/175503.
