Author manuscript; available in PMC: 2021 Jul 27.
Published in final edited form as: Dig Tech Pap IEEE Int Solid State Circuits Conf. 2019 Mar 7;2019:198–200. doi: 10.1109/isscc.2019.8662408

A 512-Pixel 3kHz-Frame-Rate Dual-Shank Lensless Filterless Single-Photon-Avalanche-Diode CMOS Neural Imaging Probe

Changhyuk Lee 1,2, Adriaan J Taal 1, Jaebin Choi 1, Kukjoo Kim 1, Kevin Tien 1, Laurent Moreaux 3, Michael L Roukes 3, Kenneth L Shepard 1
PMCID: PMC8315581  NIHMSID: NIHMS1721561  PMID: 34321707

Optical functional neural imaging has revolutionized neuroscience with optical reporters that enable single-cell-resolved monitoring of neuronal activity in vivo. State-of-the-art microscopy methods, however, are fundamentally limited in imaging depth by absorption and scattering in tissue, even with the most sophisticated two-photon microscopy techniques [1]. To overcome this imaging-depth problem, we develop a lens-less, optical-filter-less, shank-based image sensor array that can be inserted into the brain, allowing cellular-resolution recording at arbitrary depths with excitation provided by an external laser light source (Fig. 11.5.1). Lens-less imaging is generally achieved by giving each pixel a spatial sensitivity function, which can be introduced by near-field or far-field, phase or amplitude masking. Since probe thickness must be less than 70μm to limit tissue damage and far-field masks are characterized by distances on the order of 200μm between the mask and the detector [2], we employ a near-field amplitude mask formed by Talbot gratings in the back-end metal of the CMOS process, which gives each pixel a diffraction-grating-induced angle sensitivity [3]. Filter-less fluorescence imaging is achieved with time-gated operation in which the excitation light source is pulsed and pixel-level time-gated circuitry collects photons only after the excitation source has been removed.
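
To illustrate the time-gating principle numerically, the following minimal sketch (the pulse width, gate delay, and sampling grid are illustrative assumptions, not probe parameters) compares how much of a short excitation pulse versus a 4ns-lifetime fluorescence decay falls inside a gate that opens shortly after the pulse.

    import numpy as np

    # Minimal sketch of filter-less, time-gated detection: the excitation pulse is
    # over before the gate opens, so only fluorescence photons are counted.
    # Assumed values for illustration: ~100fs excitation pulse, 4ns fluorophore
    # lifetime, and a gate that opens 1ns after the pulse within one 80MHz period.
    tau_fl = 4e-9                                     # fluorescence lifetime (s)
    t = np.linspace(0, 12.5e-9, 2501)                 # one laser repetition period
    excitation = np.exp(-0.5 * (t / 100e-15) ** 2)    # ultrafast pulse at t = 0
    fluorescence = np.exp(-t / tau_fl)                # emission decay
    gate = t > 1e-9                                   # SPAD in Geiger mode only after 1ns

    frac_exc = excitation[gate].sum() / excitation.sum()
    frac_flu = fluorescence[gate].sum() / fluorescence.sum()
    print(f"excitation inside gate: {frac_exc:.1e}, fluorescence inside gate: {frac_flu:.2f}")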

Figure 11.5.1: Concept diagram and system architecture.


The implantable neural imager takes the form of two shanks, each 3.2mm long and containing 256 single-photon-avalanche-diode (SPAD) pixels in two rows at 25μm pitch and 6.3% fill factor (Fig. 11.5.1). The shanks are thinned to 70μm in a substrate back-etch process. Each shank defines an imaging field of view of approximately 3.2mm×200μm×200μm, limited by scattering in tissue. An off-chip FPGA controls the time-gating and data acquisition.

An externally supplied SPAD bias high voltage (VCA) and an active-reset bias voltage (VRST) set the SPADs in Geiger mode with an excess bias voltage VOD = VCA - VBD, where VBD is the breakdown voltage (Fig. 11.5.2). A variable SPAD dead time, set by a digital delay (3ns to 1μs) controlled by bias current Ihold-on, limits afterpulsing probability, which is reduced to 1.5% at 10ns. A detection circuit that ignores rising edges of the diode anode voltage associated with gated diode turn-off triggers a pixel-level 5b photon-event counter. The quenching and active-reset circuits can supply excess bias voltages (VOD) between 0 and 5V, enabling dynamic adjustment of photon detection probability (PDP) and dark-count rate (DCR). Functioning SPAD yield is consistently better than 95% across all tested shanks. A global time-gating shutter scheme provides simultaneous recording from all SPADs, preserving the integrity of time-correlated photon detection. The ON signal is level shifted to VRST in order to toggle the SPAD bias between VCA and VCA - VRST, allowing the SPAD to be turned on and off synchronously with an external fast-pulsed excitation laser source.
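
As a sketch of the bias bookkeeping implied by this gating scheme (the numerical values for VBD and VRST below are illustrative assumptions; only the 1V operating VOD comes from the measurements), the SPAD is in Geiger mode only while its reverse bias exceeds breakdown:

    # Sketch of the gating arithmetic: with the anode toggled by the level-shifted
    # ON signal, the SPAD reverse bias is VCA when ON and VCA - VRST when OFF, and
    # the diode is in Geiger mode only while that bias exceeds the breakdown VBD.
    def excess_bias(v_ca: float, v_bd: float, v_rst: float, on: bool) -> float:
        """Return VOD seen by the SPAD; a negative value means gated off."""
        reverse_bias = v_ca if on else v_ca - v_rst
        return reverse_bias - v_bd

    # Illustrative numbers (assumed): VBD = 20V, VRST = 5V, and VCA chosen for
    # the 1V excess bias used in the measurements.
    V_BD, V_RST = 20.0, 5.0
    V_CA = V_BD + 1.0
    print(excess_bias(V_CA, V_BD, V_RST, on=True))    #  1.0 -> Geiger mode
    print(excess_bias(V_CA, V_BD, V_RST, on=False))   # -4.0 -> below breakdown, off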

Figure 11.5.2: Schematic of the in-pixel active quenching circuit and timing diagram in the case of a detection event.


We employ a single supply connection along the entire shank and mitigate potential supply noise with 250fF of local decoupling capacitance per pixel. Power consumption, dominated by the recharging of the SPAD diode capacitance after quenching events, totals 97μW under dark conditions and 68mW at full output saturation. Digital circuitry and I/O require approximately 4mW during readout. In addition, each pixel can be individually programmed on or off, lowering operating power when recording from subsets of pixels. The SPAD array displays excellent output linearity over almost three orders of magnitude of input light intensity (Fig. 11.5.3b). A VOD of 1V, which provides the best trade-off between DCR and PDP, is used consistently in our measurements.
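
A hedged sketch of the saturation behavior: assuming a simple non-paralyzable dead-time model (our assumption, not a model given in the paper), the registered count rate stays linear at low incident rates and rolls off as the hold-off time begins to dominate, consistent with the linearity range of Fig. 11.5.3b.

    import numpy as np

    # Hypothetical non-paralyzable dead-time model: true arrival rate vs.
    # registered count rate for the 10ns hold-off used to limit afterpulsing.
    def registered_rate(true_rate_hz, dead_time_s=10e-9):
        true_rate_hz = np.asarray(true_rate_hz, dtype=float)
        return true_rate_hz / (1.0 + true_rate_hz * dead_time_s)

    for r in np.logspace(4, 9, 6):            # 10 kcps to 1 Gcps incident
        m = registered_rate(r)
        print(f"incident {r:9.3g} cps -> counted {m:9.3g} cps ({100 * m / r:5.1f}% of ideal)")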

Figure 11.5.3: SPAD performance: (a) array-wide PDP and DCR variability at 1V VOD over fluorescence emission wavelengths, (b) SPAD linearity over almost three orders of magnitude of incident light intensity, and (c) time-gated measurement of a 4ns fluorescence lifetime.


To verify time-gated fluorescence imaging, we image fluorophores (Alexa Fluor 488, 500-to-540nm emission, 4ns lifetime) on beads using a mode-locked Ti:Sapphire pulsed laser, frequency doubled to 480nm, as the excitation source. Conventional two-photon microscopy provides ground-truth data on bead locations. By synchronizing an 8b digital-to-time converter (DTC) to the 80MHz laser pulse, time-gating in the SPADs achieves a temporal resolution of 350ps (Fig. 11.5.3c). A variable duty cycle of 10 to 50% allows the SPAD to be moved out of Geiger mode after sufficient decay of the fluorescent signal, avoiding dark counts and unnecessary power dissipation. Emitted fluorescence intensities from a 15μm microsphere (Thermo Fisher F8844), corresponding to soma dimensions, are measured to be on the order of 1nW. For an isotropic radiating source located 200μm away from the 7.7μm-diameter SPAD, an SNR of 40dB dictates a maximum frame rate of approximately 3kfps. Higher frame rates, ultimately bounded by the laser repetition rate, are possible under high signal-level conditions. At the circuit level, the frame rate is bounded by readout overhead, which requires two 100MHz FPGA clock cycles per pixel.
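
The following back-of-envelope sketch reproduces the photon-budget and readout arithmetic behind these numbers; the mid-band wavelength and the PDP value are assumptions (the grating transmission is taken from Fig. 11.5.5c), so the result should be read as an order-of-magnitude check only.

    import math

    # Photon budget for a 1nW isotropic source 200um from a 7.7um-diameter SPAD.
    h, c = 6.626e-34, 3.0e8
    wavelength = 520e-9                          # assumed mid-band emission wavelength
    emitted_rate = 1e-9 / (h * c / wavelength)   # ~2.6e9 photons/s from 1nW

    r_det, d = 7.7e-6 / 2, 200e-6
    geometric_fraction = (math.pi * r_det ** 2) / (4 * math.pi * d ** 2)   # ~9e-5

    pdp = 0.05            # ASSUMED photon detection probability at this wavelength
    transmission = 0.16   # grating transmission over all angles (Fig. 11.5.5c)
    detected_rate = emitted_rate * geometric_fraction * pdp * transmission
    print(f"detected ~{detected_rate:.0f} counts/s per pixel")

    # Circuit-level bound: two 100MHz FPGA cycles per pixel over 512 pixels.
    readout_limited_fps = 100e6 / (2 * 512)
    print(f"readout-limited frame rate ~{readout_limited_fps / 1e3:.0f} kfps")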

While much of the novelty of this work lies in the form factor and application of this imager, the design compares favorably with other recent SPAD imagers in conventional planar array formats [3–5] (Fig. 11.5.4) while displacing a volume of only 0.012mm3 on the implanted shanks.

Figure 11.5.4: Comparison with recent work. PDP and DCR are reported at the respective excess bias voltages.


Thirty-two SPAD ensembles of 16 pixels (Fig. 11.5.5a), each consisting of orthogonal combinations of angular modulation frequencies (β=12, 20), direction (x, y), and quadrature phase (0°, 90°, 180°, 270°), are used to maximize spatial diversity for computational image reconstruction [3]. The angle-modulated linear mapping is measured in two-dimensional angular (θ,φ) coordinates and subsequently fit parametrically for use in image reconstruction (Fig. 11.5.5b). The resulting angular amplitude response is shaped by three factors. In addition to the angular modulation due to the near-field diffraction effect of the gratings, a windowing effect rejects photons at incident angles larger than 60 degrees from normal. Finally, the PDP at a given wavelength is lower for obliquely incident photons due to a deeper effective multiplication layer. Overall transmission efficiency for the SPADs with amplitude gratings is 16% over all incident angles (Fig. 11.5.5c).
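
A minimal sketch of how such angle-coded pixels form a linear forward model for reconstruction; the cosine-on-a-pedestal response and the modulation depth m below are assumptions patterned on angle-sensitive pixel models [3], whereas in practice each pixel's measured response is fit parametrically.

    import numpy as np

    # Hypothetical angle-sensitive pixel response: a cosine modulation on a pedestal,
    # windowed at +/-60 degrees incidence (the measured responses are fit, not assumed).
    def pixel_response(theta_deg, beta, phase_deg, m=0.6):
        theta = np.radians(theta_deg)
        window = (np.abs(theta_deg) <= 60).astype(float)
        return window * 0.5 * (1.0 + m * np.cos(beta * theta + np.radians(phase_deg)))

    # Sensing matrix A: rows are pixels with distinct (beta, phase) codes, columns are
    # candidate source angles; reconstruction then solves y = A @ x for the sources x.
    angles = np.linspace(-60, 60, 121)
    A = np.array([pixel_response(angles, b, p) for b in (12, 20) for p in (0, 90, 180, 270)])
    print(A.shape)    # (8, 121): eight angular codes over 121 candidate angles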

Figure 11.5.5: (a) Measured angle sensitivity of the A-SPAD in angular coordinates for two selected pixels, (b) fitted sensitivity compared with measurement, and (c) illustration of the angle-sensitive SPAD.


Spatial resolution is assessed by positioning two micropipettes over the shank, their extreme tips filled with Alexa Fluor 488, to simulate neural point sources (Fig. 11.5.6). A third pipette, completely filled and overlaid perpendicular to the shank close to the base, simulates a neural structure. Inverse imaging is performed using Tikhonov regularization, enforcing non-negativity and smoothness with total-variation denoising. The image is further sharpened by L1 minimization through soft-thresholding.
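
The sketch below shows one way such a pipeline can be implemented (not the authors' reconstruction code, and it omits the total-variation smoothness term): a ridge (Tikhonov) term folded into a non-negative least-squares solve, followed by soft-thresholding; the matrix A, the weights lam and thr, and the toy sources are illustrative.

    import numpy as np
    from scipy.optimize import nnls

    # Tikhonov-regularized, non-negative least squares followed by soft-thresholding.
    # A is the fitted per-pixel sensitivity matrix, y the measured photon counts.
    def reconstruct(A, y, lam=1e-2, thr=1e-3):
        # minimize ||A x - y||^2 + lam ||x||^2  subject to  x >= 0,
        # expressed as an augmented non-negative least-squares problem.
        A_aug = np.vstack([A, np.sqrt(lam) * np.eye(A.shape[1])])
        y_aug = np.concatenate([y, np.zeros(A.shape[1])])
        x, _ = nnls(A_aug, y_aug)
        return np.maximum(x - thr, 0.0)            # soft-threshold (x is already >= 0)

    # Toy usage: two point sources seen through a random stand-in sensitivity matrix.
    rng = np.random.default_rng(0)
    A = rng.random((512, 200))
    x_true = np.zeros(200)
    x_true[[40, 150]] = 1.0
    y = A @ x_true + 0.01 * rng.standard_normal(512)
    print(np.argsort(reconstruct(A, y))[-2:])      # indices of the two brightest estimates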

Figure 11.5.6: (a) Recorded data from the micropipette experiment, (b) stitched photomicrograph of the die with the three positioned pipettes (the base of the shank is on the right; the center pads occupy real estate unrelated to the probe; higher addresses correspond to the bottom shank in the photo), and (c) volumetric reconstruction above the chip surface at z=200μm with the respective error covariance ellipses.


Resolution in this experiment (Δx ≈ 45μm, Δy ≈ Δz ≈ 60μm) is extracted as the 34% (or 1σ) radius of the estimation confidence ellipsoid. The ellipsoid radius in each direction is estimated by computing the spatial covariance over a larger volume cutout. The resolution is limited by the integrated dark-count rate, integrated photon shot noise at high excitation intensities, the spectral spread of the fluorophore, which challenges the monochromatic assumption in the pixel angular sensitivity, and pixel-to-pixel deviation of the angular response from the median fitted response. The denser longitudinal (x-axis) SPAD population yields better resolution along that axis.
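
For concreteness, a small sketch of this resolution metric (the voxel pitch and the synthetic test blob are assumptions for illustration): compute the intensity-weighted spatial covariance of a reconstructed cutout and report the 1σ ellipsoid radii along its principal axes.

    import numpy as np

    # Intensity-weighted spatial covariance of a reconstructed volume cutout; the
    # square roots of its eigenvalues give the 1-sigma ellipsoid radii used as the
    # per-axis resolution. Voxel pitch below is an illustrative assumption.
    def ellipsoid_radii(volume, voxel_um=5.0):
        idx = np.argwhere(volume > 0).astype(float)         # occupied voxel indices
        w = volume[volume > 0]
        cov = np.cov(idx.T, aweights=w)                     # 3x3 spatial covariance
        return np.sqrt(np.linalg.eigvalsh(cov)) * voxel_um  # 1-sigma radii (um), ascending

    # Toy usage: a synthetic Gaussian blob standing in for one reconstructed source.
    g = np.mgrid[0:41, 0:41, 0:41]
    blob = np.exp(-(((g[0] - 20) / 9.0) ** 2
                    + ((g[1] - 20) / 12.0) ** 2
                    + ((g[2] - 20) / 12.0) ** 2))
    print(ellipsoid_radii(blob))      # smallest-axis radius first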

Future functional imaging will enable blind-source separation techniques to infer neural sources directly from their temporal properties, without the need for this inverse imaging, better exploiting the high temporal resolution of these probe imagers relative to traditional multiphoton laser-scanning microscopy.

Figure 11.5.7: Photomicrograph after etching and thinning of die.


Acknowledgments:

This work was supported by the National Institutes of Health under Grant U01NS090596, by the Defense Advanced Research Projects Agency (DARPA) under Contract N66001-17-C-4012, and by the U.S. Army Research Laboratory and the U.S. Army Research Office under Contract W911NF-12-1-0594. We would also like to thank TSMC Foundry for their full support in test-chip fabrication.

References:

  • [1] Wang H, et al., "LOVTRAP: An Optogenetic System for Photo-induced Protein Dissociation," Nature Methods, vol. 13, no. 9, pp. 755–758, Sept. 2016.
  • [2] Adams JK, et al., "Single-Frame 3D Fluorescence Microscopy with Ultraminiature Lensless FlatScope," Science Advances, vol. 3, no. 12, Dec. 2017.
  • [3] Lee C, et al., "A 72 × 60 Angle-Sensitive SPAD Imaging Array for Lens-less FLIM," Sensors, vol. 16, no. 9, p. 1422, Feb. 2016.
  • [4] Perenzoni M, et al., "A 160 × 120 Pixel Analog-Counting Single-Photon Imager With Time-Gating and Self-Referenced Column-Parallel A/D Conversion for Fluorescence Lifetime Imaging," IEEE JSSC, vol. 51, no. 1, pp. 155–167, Jan. 2016.
  • [5] Field RM, et al., "A 100 fps, Time-Correlated Single-Photon-Counting-Based Fluorescence-Lifetime Imager in 130 nm CMOS," IEEE JSSC, vol. 49, no. 4, pp. 867–880, Apr. 2014.
