Abstract
Volumetric functional imaging of transient cellular signaling and motion dynamics is often limited by hardware bandwidth and the scarcity of photons under short exposures. To overcome these challenges, we introduce squeezed light field microscopy (SLIM), a computational imaging approach that rapidly captures high-resolution three-dimensional light signals using only a single, low-format camera sensor. SLIM records over 1,000 volumes per second across a 550-μm diameter field of view and 300-μm depth, achieving 3.6-μm lateral and 6-μm axial resolution. Here we demonstrate its utility in blood cell velocimetry within the embryonic zebrafish brain and in freely moving tails undergoing high-frequency swings. Millisecond-scale temporal resolution further enables precise voltage imaging of neural membrane potentials in the leech ganglion and hippocampus of behaving mice. Together, these results establish SLIM as a versatile and robust tool for high-speed volumetric microscopy across diverse biological systems.
High-speed fluorescence microscopy plays an indispensable role in revealing the dynamic interplay and functionality among cells in their native environment. With continuous improvements in fluorescent markers, many transient biological processes, such as blood flow1 and neural action potentials2–4, have become trackable and thus demand microscopy with ever higher spatiotemporal resolution. Traditional three-dimensional (3D) imaging tools, such as confocal microscopy, light-sheet microscopy and two-photon microscopy, rely heavily on scanning to acquire a volumetric image. Despite advances in beam shaping5,6, remote refocusing mechanisms7, detector arrays8,9 and detection geometry10, there persists an inherent trade-off between temporal resolution, the 3D field of view (FOV) and spatial resolution. This constraint poses a notable challenge to achieving optimal performance across a large 3D FOV for robust ultra-fast detection exceeding kilohertz rates.
Computational imaging mitigates this trade-off by encoding high-dimensional information, such as depth11, time12 and spectra13, into two-dimensional (2D) multiplexed camera measurements. Among these techniques, light field microscopy (LFM) excels in various biological applications, including observation of neural activity in freely moving animals14–16 and visualization of hemodynamics in the brain17 and heart18–20. By simultaneously collecting the spatial and angular information of light rays, LFM enables volumetric reconstruction post hoc from snapshot measurements. Moreover, when combined with advanced deep-learning-based reconstruction algorithms19–22, LFM can achieve imaging at subcellular spatial resolution.
Without scanning, the sensor bandwidth becomes the primary bottleneck for LFM 3D imaging speed. While scientific complementary metal-oxide semiconductor (sCMOS) sensors typically offer a full frame rate lower than 100 Hz, the imaging speed can be increased by reading out only selected low-format regions of interest (ROIs). However, this approach comes at the cost of sacrificing the spatial and/or angular components associated with the FOV and axial resolution.
The integration of ultra-high-speed cameras7,10,23 and event cameras24 holds promise for providing higher bandwidths to LFM. However, their current limitations in sensitivity and noise performance present challenges, especially for photon-starved applications such as imaging genetically encoded voltage indicators (GEVIs) (see Supplementary Table 1 for a list of camera models and their performance)7. Although event camera-based light field imaging has been demonstrated at kilohertz rates, the binary nature of event camera outputs restricts its ability to accurately quantify analog signals, such as subthreshold voltage oscillations. On the other hand, the compressibility of four-dimensional (4D) (two spatial dimensions plus two angular dimensions) light fields has been leveraged for compressive detection. Coded masks25–27 and random diffusers28–31 are used to modulate and integrate the spatio-angular components originally recorded by distinct pixels. Sparse nonlocal measurements can also be used across different angular views to acquire light fields with sensors of arbitrary formats32–34. Nevertheless, as compressive imaging relies on a sparsity prior and optimization algorithms for signal recovery from sub-Nyquist measurements, its performance is prone to degradation in challenging scenarios. These methods have been validated primarily on photographic scenes and biological samples with relatively long exposure times; their robustness and effectiveness in kilohertz microscopy with extremely low photon budgets, such as voltage imaging, remain elusive.
To address the unmet need for high-speed light field imaging, we present herein squeezed light field microscopy (SLIM), which allows the capture of 3D fluorescence signals at kilohertz volume rates in a highly data-efficient manner. SLIM operates by acquiring an array of rotated 2D subaperture images. An anamorphic relay system applies anisotropic scaling, effectively ‘squeezing’ the image along one spatial axis. This allows the camera sensor to detect the light field using only a low-format letterbox-shaped ROI. Leveraging the row-by-row read-out architecture of CMOS sensors, SLIM achieves a more than fivefold increase in acquisition rate compared to traditional LFM. Each squeezed subaperture image complements the others, facilitating high-fidelity, robust 3D reconstruction from the compressed measurement. In terms of the product of the space-bandwidth product and the volume rate, SLIM measures ~7.3 gigavoxels per second, placing it among the fastest 3D fluorescence microscopes reported in the literature (Supplementary Table 2).
We demonstrated SLIM by capturing flowing red blood cells (RBCs) in the freely swinging tails of embryonic zebrafish at 1,000 volumes per second (vps), ex vivo voltage imaging in dissected leech ganglia at 800 vps and in vivo voltage imaging in the hippocampus of behaving mice at 800 vps. SLIM enables tracking of high-speed cellular motion across a 550-μm FOV within a 300-μm depth range. It allows detection of millisecond membrane action potentials and subthreshold oscillations in 3D space over extended time periods in awake, freely behaving animals. Furthermore, we showcased that the high frame rate of SLIM can be exploited to enhance the axial resolution when combined with multi-layer scanning light-sheet microscopy. This allows for imaging densely labeled structures previously challenging for LFM, such as the contracting myocardium in a zebrafish, at 4,800 frames per second (fps), leading to a volume rate of 300 vps.
Results
Principle and design of SLIM
In a typical SLIM camera (Fig. 1a), the input scene is imaged by a combined system consisting of an array of dove prisms and lenslets (Fig. 1a(i) and Supplementary Fig. 1). Each dove prism within the array is rotated at a distinct angle relative to its optical axis. This arrangement gives rise to an array of perspective images, each rotated at twice the angle of its corresponding dove prism’s rotation, all converging at an intermediate image plane situated behind the lenslets. Subsequently, these rotated perspective images are further processed through an anamorphic relay system consisting of two cylindrical doublets with orthogonal optical axes (Fig. 1a and Supplementary Fig. 2). This relay system imparts anisotropic scaling to the image array, where the images experience de-magnification (×0.2) along one spatial axis while preserving the original magnification along the orthogonal direction. Finally, the rescaled image array is acquired by a 2D camera, where we read out only pixel rows that receive light signals (referred to as active read-out ROI in Fig. 1a).
Fig. 1 |. Principle of SLIM.

a, Schematic of the SLIM detection system. L1–L3, achromatic doublets; CL1–CL2, achromatic cylindrical doublets. SLIM records the light field at kilohertz frame rates by using a reduced active read-out ROI on the camera. b, The optical transformation in SLIM comprises image rotation and squeezing, performed by a dove prism and lenslet array (a(i)) and a customized anamorphic relay (a(ii)). c, (i) Each subaperture, after reversing the squeezing and rotation, gives an image with anisotropic spatial resolution that is complementary to the others. By merging different subaperture images, SLIM estimates the original features. Three columns show how one, two and all subaperture images reconstruct the final image and spectrum. The red dotted circle denotes the lateral cutoff frequency. (ii) 3D illustration of the transfer functions of SLIM, treating all subaperture images as geometrical projections. Each gives an elliptical slice in the 3D frequency space, depending on its rotation angle and subaperture location. d, SLIM is compatible with different illumination modes. We demonstrated widefield illumination (i) for imaging behaving mice, and selective volume illumination (ii) for zebrafish and leech ganglion imaging. e, 3D MIPs of fluorescent beads. Scale bar, 50 μm. f, Cross-sections of a single bead showing representative PSFs. Scale bar, 3 μm.
One of the key advantages of using squeezed optical mapping is the improved read-out speed. CMOS sensors are equipped with parallel analog-to-digital converters for each column of pixels, ensuring consistent frame rates regardless of the number of pixel columns being read out. The frame rate is, therefore, solely determined by and inversely proportional to the number of pixel rows being read out35. For example, on the Kinetix sCMOS developed by Teledyne, using an ROI of 200 × 3,200 pixels allows SLIM to capture a 19-subaperture image array at 1,326 fps and 7,476 fps in 16-bit and 8-bit mode, respectively. In contrast, the full-frame mode achieves frame rates of only 83 fps (16-bit) and 500 fps (8-bit).
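Because the frame rate scales inversely with the number of read-out rows, the quoted ROI speeds can be sanity-checked with a one-line model. The sketch below assumes ideal inverse proportionality and uses the 16-bit full-frame figure quoted above; real sensors add small fixed per-frame overheads, so treat it as an approximation.

```python
def est_frame_rate(roi_rows, full_rows=3200, full_fps=83.0):
    """Idealized CMOS frame-rate model: column-parallel ADCs make the
    frame rate depend (inversely) only on the number of rows read out.
    Defaults are the 16-bit full-frame figures quoted for the Kinetix."""
    return full_fps * full_rows / roi_rows

# A 200-row letterbox ROI: 83 * 3200 / 200 = 1328 fps, close to the
# quoted 1,326 fps (the small gap reflects fixed per-frame overhead).
print(est_frame_rate(200))  # → 1328.0
```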
The forward model of SLIM is illustrated in Fig. 1b. Similar to Fourier LFM (FLFM)15,36–38, SLIM can be conceptualized as a tomographic system, where each subaperture image is essentially a parallel projection of a 3D volume along a line of sight at the subaperture’s view angle39,40. However, unlike FLFM, where these subaperture images are directly captured by a 2D camera, SLIM applies in-plane rotation and vertical scaling operations to these images before recording.
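The per-subaperture transformation (in-plane rotation followed by vertical squeezing) can be mimicked numerically. The snippet below is a toy nearest-neighbour implementation for illustration only; the ×0.2 squeeze factor follows the text, but in the actual instrument this operation is performed optically and is modeled with subaperture PSFs rather than this pixel resampling.

```python
import numpy as np

def rotate_and_squeeze(img, angle_deg, squeeze=0.2):
    """Toy SLIM sub-aperture operator: in-plane rotation followed by
    vertical (row-axis) down-scaling, via inverse nearest-neighbour
    coordinate mapping. Illustrative only."""
    h, w = img.shape
    out_h = max(1, int(round(h * squeeze)))
    out = np.zeros((out_h, w), dtype=img.dtype)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    oy = (out_h - 1) / 2
    th = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:out_h, 0:w]
    yu = (ys - oy) / squeeze          # undo the vertical squeeze
    xu = xs - cx
    src_x = xu * np.cos(th) + yu * np.sin(th) + cx   # undo the rotation
    src_y = -xu * np.sin(th) + yu * np.cos(th) + cy
    iy, ix = np.round(src_y).astype(int), np.round(src_x).astype(int)
    ok = (iy >= 0) & (iy < h) & (ix >= 0) & (ix < w)
    out[ys[ok], xs[ok]] = img[iy[ok], ix[ok]]
    return out

# A 100 x 100 input squeezed by 0.2 occupies only 20 camera rows.
print(rotate_and_squeeze(np.ones((100, 100)), 30.0).shape)  # → (20, 100)
```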
In the 3D spatial frequency space, the Fourier spectrum of a SLIM subaperture image manifests as a 2D elliptical slice (Fourier slice theorem; Supplementary Note 1). The short axis of this ellipse corresponds to the low-resolution sampling along the squeezing direction. By using an array of subaperture images rotated at complementary angles, SLIM fills in the missing high-frequency information. This process results in a synthesized power spectrum with a bandwidth that approximates that of the original unsqueezed FLFM (Fig. 1c). In addition, the rotation angles of the subaperture images are carefully crafted to maximize the horizontal projections of their 3D point spread functions (PSFs) (Supplementary Fig. 3). In other words, when imaging a 3D object, the subaperture images of SLIM exhibit lateral disparity shifts due to their differences in view angle. While the entire set of rotation angles is sampled uniformly from 0 to 180°, we optimize the angle assignment to each subaperture image to align its disparity shift with the unsqueezed spatial axis (that is, the camera pixel row direction), thereby maximizing the sampling of disparity and consequently enhancing the axial resolution (Supplementary Note 1). Under this approach, some subapertures may not receive their optimal angles. We prioritized the subapertures in the outer region, as they display larger disparity shifts than inner ones and contribute more substantially to the axial resolution. This forward model can be further extended to wave optics by using a sum of 2D convolutions between the sample sliced at each depth and the corresponding subaperture PSF40. Through an iterative deconvolution algorithm, SLIM reconstructs the 3D fluorescence distribution by fusing all subaperture images.
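The Fourier-slice picture can be checked with a small simulation: model each squeezed subaperture's 2D spectral support as an ellipse whose short axis equals the squeeze factor, rotate the ellipses through uniformly spaced angles and measure how much of the unsqueezed passband their union covers. The ×0.2 factor and the 19 views follow the text; the elliptical-support model and uniform angles are simplifications, and the paper's optimized angle-to-subaperture assignment is ignored here.

```python
import numpy as np

n = 19                                  # sub-aperture images (as in the text)
squeeze = 0.2                           # anamorphic scaling factor
grid = np.linspace(-1, 1, 401)
fx, fy = np.meshgrid(grid, grid)
disk = fx**2 + fy**2 <= 1               # target (unsqueezed) passband

covered = np.zeros_like(disk)
for theta in np.deg2rad(np.arange(n) * 180.0 / n):   # uniform over 0-180 deg
    # Frequency support of one squeezed image: an ellipse with short
    # axis `squeeze` along its squeezing direction.
    u = fx * np.cos(theta) + fy * np.sin(theta)
    v = -fx * np.sin(theta) + fy * np.cos(theta)
    covered |= (u**2 + (v / squeeze) ** 2) <= 1

frac = covered[disk].mean()             # fraction of passband recovered
print(f"coverage: {frac:.1%}")          # most of the disk, minus rim gaps
```

Only narrow gaps near the cutoff circle remain uncovered, consistent with the synthesized bandwidth approximating that of unsqueezed FLFM.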
We built two setups to demonstrate SLIM’s applications across a wide range of high-speed biological processes (Fig. 1d and Supplementary Figs. 4 and 5). A widefield epi-illumination setup is used for mouse imaging through cranial windows, while a selective volume illumination setup is designed for small animals such as zebrafish larvae and dissected leech ganglia, where the target of interest can be accessed via side illumination. Selective illumination, implemented with either a scanning light sheet or a slit-confined light-emitting diode (LED) (Supplementary Fig. 4a), suppresses fluorescence outside the imaging volume, particularly in scattering tissues (Supplementary Figs. 6 and 7). The LED is preferred for applications requiring high power stability.
Figure 1e,f shows the 3D reconstruction of fluorescent beads of subdiffraction size imaged by SLIM. At a magnification of ×3.6, the imaging volume spans a 3D space of ∅550 μm × 300 μm (∅ denotes diameter) with a spatial resolution of 3.6 μm laterally and 6.0 μm axially (Supplementary Fig. 3). SLIM inherits the lenslet array-based optical design of FLFM for snapshot 3D imaging and provides a highly efficient data acquisition strategy for scientific camera sensors. SLIM can record and reconstruct 7.3 × 109 effective voxels per second, making it a powerful optical compression method that enables kilohertz-rate volumetric imaging within the constrained bandwidth of conventional camera sensors.
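The quoted ~7.3 × 10⁹ effective voxels per second follows from the stated specifications under Nyquist sampling (two samples per resolution element), as this back-of-the-envelope check shows:

```python
import math

# Nyquist-sampled voxel throughput from the quoted SLIM specs
fov_diam_um, depth_um = 550.0, 300.0
lat_res_um, ax_res_um = 3.6, 6.0
vps = 1000                                   # volumes per second

lat_px = lat_res_um / 2                      # Nyquist sampling pitch
ax_px = ax_res_um / 2
lateral_samples = math.pi * (fov_diam_um / 2) ** 2 / lat_px ** 2
planes = depth_um / ax_px
voxels_per_s = lateral_samples * planes * vps
print(f"{voxels_per_s / 1e9:.1f} gigavoxels/s")  # → 7.3, matching the text
```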
Imaging of flowing RBCs in an embryonic zebrafish
Experimental characterization of blood flow in living organisms provides valuable insights into local metabolic activity, vascular development and pathological conditions. Using fluorescently labeled blood cells, various imaging methods have been demonstrated in single-cell velocimetry, such as in the larval zebrafish heart10, tail41 and mouse brain1,17. However, these methods are often limited to 2D imaging or restricted by a limited volumetric frame rate, which hinders the detection of fast flow and necessitates sedation of the animal to reduce motion artifacts. Here, we show that SLIM can be used to capture fast-circulating RBCs in a zebrafish at a kilohertz volumetric rate, both with and without sedation.
We imaged transgenic zebrafish embryos expressing DsRed in RBCs at 3 days postfertilization. We excited the zebrafish brain using light-sheet-synthesized volumetric illumination and recorded fluorescence using SLIM with 19 subaperture images at 1,000 frames per second. The reconstruction reveals the 3D distribution of RBCs and allows for cell tracking over time (Supplementary Fig. 8). Figure 2a shows two separate recordings from the dorsal and ventral views, each visualizing RBCs at representative time points (red) and the vasculature network by maximum intensity projection (MIP) throughout all frames (cyan). The flow velocity is temporally pulsatile and varies spatially across the aorta and vein (Supplementary Videos 1 and 2). The tracking reveals the velocity distribution in 3D and highlights vessels with high-speed flow of up to 6 mm s−1 (Fig. 2a). SLIM's kilohertz imaging captures transient motion at the millisecond time scale (Fig. 2b), effectively eliminating motion blur and enabling robust cell tracking that would be compromised at lower imaging rates (Fig. 2c).
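Instantaneous speeds such as the 6 mm s−1 peak above follow directly from frame-to-frame centroid displacements at the 1,000 vps rate. A minimal sketch (the paper's actual tracking pipeline is not described at this level of detail, so this is an illustrative post-processing step):

```python
import numpy as np

def instantaneous_speed(track_um, volume_rate_hz):
    """Per-frame speed (mm/s) from a (T, 3) array of cell centroids in um.
    Illustrative post-processing; the tracking itself is assumed done."""
    disp = np.diff(track_um, axis=0)                 # um per frame
    return np.linalg.norm(disp, axis=1) * volume_rate_hz / 1000.0  # mm/s

# A cell advancing 6 um per 1-ms frame moves at 6 mm/s, the peak flow
# speed reported in the text.
track = np.array([[0.0, 0, 0], [6.0, 0, 0], [12.0, 0, 0]])
print(instantaneous_speed(track, 1000))  # → [6. 6.]
```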
Fig. 2 |. 3D imaging of hemodynamics in the embryonic zebrafish brain and tail at 1,000 vps.

a, MIPs of flowing blood cells at representative time points (red) and vascular network obtained by combining frames over time (cyan). The velocity maps show overlaid RBC trajectories color-coded by their instantaneous velocity. Two views, the dorsal view and ventral view, are two datasets taken with differently oriented embryos. Scale bars, 100 μm. b, Zoom-in time-lapse of the region labeled by the white dashed box in a. Scale bars, 30 μm. c, Temporal projection color-coded by the time visualizes the motion blur with a lower imaging speed. d, MIPs of a free-moving fish tail. Scale bar, 100 μm. e, RBC trajectories from d are color-coded by time. The coordinate system has been rotated so that the x–y plane shows the RBC movement perpendicular to the tail swing direction. The single trajectory on the bottom right exhibits the compound motion of a single RBC during fish swimming. Scale bar, 100 μm.
We further demonstrated the speed advantage by imaging the free-moving tail of a zebrafish without sedation. The embryo was mounted on a cover glass with its head restrained using agarose while allowing the tail to move freely in the water. SLIM captured high-frequency tail swings without motion blur (Fig. 2d and Supplementary Video 3), maintaining its capability to track individual RBCs and revealing the compound movement composed of oscillation perpendicular to the tail plane and forward progression along the vessels (Fig. 2e). By combining with closed-loop tracking and a translational stage15, SLIM’s high-speed volumetric imaging holds promise for studying hemodynamics under natural conditions during locomotor behavior.
Optical recording of membrane action potentials
The development of voltage imaging has enabled neuroscientists to examine neural dynamics with a high spatiotemporal resolution. However, it has long been a challenge to capture voltage signals in vivo across a large volume due to the extremely fast transients and low signal-to-noise ratio (SNR). With its millisecond temporal resolution, SLIM can precisely detect spike timings across a large 3D neural network, opening avenues for mapping the intricate interaction of neuronal components and elucidating the mechanisms underlying sensory processing and behavioral generation.
As a demonstration, we loaded the voltage-sensitive dye FluoVolt to a dissected ganglion from a medicinal leech42. Using SLIM, we recorded the fluorescent signals with 29 subaperture images at 800 Hz under the illumination of an ultra-low-noise LED. Concurrently, we introduced an intracellular microelectrode for simultaneous electrophysiological stimulation and recording (Fig. 3a). Timing and waveforms (Fig. 3b) of neuronal action potentials are adequately sampled from the reconstructed 3D image sequence (Fig. 3c and Supplementary Video 4). We further processed the data by correcting motion drift, manually choosing an area of interest on each cell, and averaging pixels from the corresponding cell membrane (Supplementary Figs. 9 and 10). The resultant time-lapse fluorescence intensities at selected neurons are shown in Fig. 3d. SLIM measurements match the electrophysiological record in quantitative detail (Supplementary Fig. 11), including the reduction of spike amplitude under strong depolarizing current injection.
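Spike times such as those marked in Fig. 3d can be extracted from the averaged membrane traces with a simple robust threshold on local maxima. The detector below is a minimal stand-in for illustration, not the paper's (unspecified) detection procedure; the threshold factor `k` is an assumption.

```python
import numpy as np

def detect_spikes(trace, k=4.0):
    """Indices of local maxima exceeding k robust standard deviations
    (median absolute deviation scaled to sigma) above the median.
    A minimal stand-in for an actual spike-detection pipeline."""
    med = np.median(trace)
    mad = np.median(np.abs(trace - med)) * 1.4826    # robust sigma
    above = trace > med + k * mad
    peaks = np.r_[False, (trace[1:-1] > trace[:-2]) &
                         (trace[1:-1] >= trace[2:]), False]
    return np.flatnonzero(above & peaks)

# Synthetic 1-s trace at 800 Hz: baseline noise plus two brief spikes.
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.01, 800)
trace[[100, 500]] += 0.2
print(detect_spikes(trace))
```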
Fig. 3 |. 3D imaging of membrane action potentials and fictive swimming oscillation in medicinal leech ganglia at 800 vps.

a, Brightfield snapshot of a leech ganglion. The microelectrode is denoted by a dashed circle, which allows for simultaneous stimulation (stim.) and electrophysiological (ephys.) recording. b, Average spike waveform. The yellow area marks the standard deviation of waveforms. Orange dots represent temporal sampling points, with an interval of 1.25 ms. c, MIP of SLIM reconstruction for voltage dye fluorescent signals. Scale bar, 50 μm. d, Recording of stimulation current, electrophysiological read-out and optical measurements of ganglion cells: the impaled cell (top), its contralateral partner that is electrically coupled to it (middle) and an unconnected cell (bottom). Rz indicates Retzius cell and AP indicates anterior pagoda cell. Gray boxes represent the time window when stimulation is injected. A deeper color represents larger stimulation. Spikes are detected and marked as black dots above the traces. (i),(ii) Zoom-ins of signal segments labeled by red and blue lines. e, A schematic of the isolated leech nerve cord, consisting of the anterior brain, midbody ganglia (circles) and the posterior brain. The voltage-sensitive dye components were applied to a midbody ganglion (M10). The brightfield reference image of the selected ganglion is shown on the right. f, x–y cross-section of SLIM reconstruction for voltage dye fluorescent signals of fictive swimming behavior. Scale bar, 50 μm. g, Coherence of the optically recorded signals from all cells on the dorsal surface of the ganglion with the swim rhythm. Cells used in h are marked. Scale bar, 120 μm. h, Selected electrophysiological and voltage-sensitive dye traces of motor neurons during fictive swimming are presented. Top row: extracellular recording from a posterior-segment nerve root; red trace shows the filtered envelope of the extracellular signal.
Other rows: voltage-sensitive dye signals from the dorsal surface captured the activity of dorsal and ventral inhibitory and excitatory motor neurons, specifically DI-1, DE-3, VE-4 and pressure sensitive cell P. Blue box visualizes the synchrony of the subthreshold oscillations among neurons. i, The polar plot illustrates the coherence between each optical recording and the extracellular recording in the frequency range of 0.8 Hz to 1.1 Hz. The distance from the center represents the coherence magnitude, while the angle indicates the coherence phase. Error bars show confidence intervals, calculated using the multitaper estimate method.
In a separate experiment, we used a train of electrical pulses to stimulate a dorsal posterior nerve root of midbody ganglion 13 (M13), which mimics a touch to the body wall in an intact leech to elicit fictive swimming43,44. Using the SLIM system, we imaged the selected midbody ganglion 10 (M10) (Fig. 3e,f) from the dorsal side at an 800 Hz volume rate under the same illumination conditions as the previous demonstration. The nerve signal was simultaneously recorded through the suction microelectrode, which showed rhythmic dorsal motor neuron bursts characteristic of swimming (Fig. 3h). After manually selecting and averaging a 3D ROI for each cell, the optical fluorescence signals at selected motor neurons (dorsal and ventral inhibitory and excitatory motor neurons DI-1, DE-3 and VE-4) and the pressure sensitive cell (P2) are shown in Fig. 3h. The rhythmic activity characteristic of swimming (1 Hz to 1.5 Hz) was clearly observed in all motor neurons, consistent with previous work42. To characterize how cells participated in generating the swim rhythm, we calculated the magnitude and phase of coherence for each cell in the swimming oscillation band (Fig. 3g,i) with respect to the extracellular recording. The SLIM measurements match the oscillatory behavior of the neurons well, including the overall coherence phase distribution of all cells on the dorsal side; four pairs of specific motor neurons (DI-1, DE-3, VI-2 and VE-4) are highly regular in their locations and indeed overlap in the measured and predicted phase maps.
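Band-limited coherence magnitude and phase of the kind plotted in Fig. 3i can be estimated from paired traces. The sketch below uses a plain Welch-style averaged cross-spectrum rather than the paper's multitaper method (a simplification), so confidence intervals are omitted; the segment count and synthetic signals are illustrative assumptions.

```python
import numpy as np

def band_coherence(x, y, fs, f_lo, f_hi, nseg=8):
    """Magnitude and phase of coherence between two traces, averaged over
    a frequency band. Welch-style estimate (a simplification of the
    multitaper method used in the paper)."""
    n = len(x) // nseg
    win = np.hanning(n)
    Sxx, Syy, Sxy = 0.0, 0.0, 0.0
    for i in range(nseg):
        sx = np.fft.rfft((x[i*n:(i+1)*n] - x[i*n:(i+1)*n].mean()) * win)
        sy = np.fft.rfft((y[i*n:(i+1)*n] - y[i*n:(i+1)*n].mean()) * win)
        Sxx, Syy = Sxx + np.abs(sx)**2, Syy + np.abs(sy)**2
        Sxy = Sxy + sx * np.conj(sy)
    f = np.fft.rfftfreq(n, 1 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    coh = Sxy[band] / np.sqrt(Sxx[band] * Syy[band])
    return np.abs(coh).mean(), np.angle(coh.mean())

# Two noisy 1 Hz oscillations, 90 degrees apart, sampled at 800 Hz:
t = np.arange(0, 40, 1 / 800)
rng = np.random.default_rng(1)
a = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=t.size)
b = np.cos(2 * np.pi * t) + 0.3 * rng.normal(size=t.size)
mag, phase = band_coherence(a, b, 800, 0.8, 1.1)  # swim band from the text
print(mag, phase)  # high coherence, phase near -pi/2
```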
Voltage imaging from the hippocampus of behaving mice
We examined the performance of the SLIM system in awake, behaving mice. We monitored neuronal activity in the CA1 region of the hippocampus through an implanted cranial window in mice expressing the GEVI pAce45. Mice were imaged continuously at 800 Hz for three minutes on a treadmill setup that used an optical rotary encoder to track movement (Fig. 4a). Conventional widefield microscopes suffer from a shallow depth of focus and have difficulty imaging axially distributed neuron populations (Fig. 4b). In contrast, SLIM provides volumetric mapping of signals (Supplementary Fig. 12), allowing for simultaneous optical measurement of neurons at different depths (Extended Data Fig. 1). After image reconstruction and motion correction, we extracted membrane-potential traces from multiple neuronal sources exhibiting strong speed-related action potential modulation across the image volume (Fig. 4c). We calculated the relative fluorescence change (ΔF/F) over the 3-minute recording, both in SNR (Fig. 4d and Supplementary Fig. 13) and in spike waveform (Fig. 4e), where SLIM's millisecond temporal resolution provided sufficient sampling of the rising and falling slopes of transient spikes (Supplementary Fig. 14). Due to photobleaching (Supplementary Fig. 15), the amplitude of the spike waveform exhibited a gradual decay, but the SNR remained around five across the entire recording (Fig. 4e). SLIM also detected subtle subthreshold membrane-potential oscillations (Fig. 4d and Extended Data Fig. 2). The observed signals predominantly show prominent frequency components in the 4–10 Hz band, likely originating from theta oscillations commonly found in the hippocampus46–48 (Fig. 4f and Extended Data Fig. 2). Examination of inactive neurons and background regions reveals the absence of spikes and subthreshold oscillations, further confirming the fidelity of the observed signals.
By correlating the time-dependent firing rate of each neuron with locomotion speed, we found that most neurons were positively modulated by locomotion speed (Fig. 4g and Supplementary Fig. 16), consistent with previous findings49. In addition, SLIM's compressed measurement offers an advantage over alternative methods by requiring far lower data bandwidth (Fig. 4h), making it more accessible for long-term 3D voltage imaging across large volumes (Supplementary Fig. 17). Overall, SLIM enables 3D voltage imaging of neuron populations distributed across large volumes, with the potential to elucidate network dynamics and interactions among different cell types across layers.
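The bandwidth advantage can be made concrete by comparing raw read-out rates at the same 800 Hz frame rate, following the rough image-dimension estimate used for Fig. 4h. The 200 × 3,200 ROI and 16-bit depth are taken from earlier in the text; the full-frame case is hypothetical, since the sensor cannot actually sustain 800 fps at full frame.

```python
# Raw read-out rate = rows * cols * fps * bytes per pixel (16-bit = 2 bytes)
rows, cols, fps, bytes_px = 200, 3200, 800, 2        # SLIM letterbox ROI
slim_gbps = rows * cols * fps * bytes_px / 1e9       # GB per second
full_gbps = 3200 * 3200 * fps * bytes_px / 1e9       # hypothetical full frame
print(f"{slim_gbps:.3f} GB/s vs {full_gbps:.3f} GB/s")  # → 1.024 vs 16.384
```

The 16-fold reduction in raw data is what makes minutes-long kilohertz recordings practical on commodity storage.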
Fig. 4 |. 3D voltage imaging of hippocampus in behaving mice at 800 vps.

a, Schematics of the experiment setup. The animal expressing GEVI is placed on a customized treadmill. SLIM captures volumetric fluorescent signals through cranial windows, while an optical rotary encoder on the treadmill simultaneously records the belt’s motion. b, Widefield reference camera captures a high-resolution, long-exposure 2D image of the targeted FOV. Dashed circles highlight out-of-focus neurons. c, 3D MIP of SLIM reconstruction. Scale bars, 100 μm. d, Detrended fluorescent signal traces over a 170-s recording, extracted from neurons labeled in c. The top row shows the animal’s locomotion velocity. The red-shaded curve represents neuronal firing rates. The inset zooms in on the signals within the 24–34 s window. Red dots represent detected spikes. e, Average spike waveform calculated from 1,130 detected spikes in neuron 1. By further dividing the signal trace into five time bins (35 s each), the evolution of waveform can be visualized in box plots (n = 417, 459, 166, 30, 58). The spike SNR is measured as the spike magnitude over the signal’s standard deviation within each time bin. Dots represent individual spike measurements. f, Average power spectrum of neural signal traces (n = 18). See Extended Data Fig. 2 for signal traces and their individual power spectra. g, Pearson correlation between firing rate and locomotion velocity. Top: 3D locations of selected neurons with correlation values encoded by color. Bottom: box plot showing distribution of correlation coefficients of neurons shown in d (n = 8). h, Data bandwidth required for camera read-out and storage. Raw data size for SLIM, FLFM and scanning-based methods is roughly estimated based on the digital image dimensions (left panel) in 16 bits. Box plots display the median (center line) with upper and lower quartiles (box limits); whiskers represent 1.5 × interquartile range and individual data points represent one sample. px, pixels. Illustrations in a created with BioRender.com.
Imaging of a beating embryonic zebrafish heart
Although LFM techniques, including SLIM, offer the ability to numerically refocus to specific depths, they typically lack intrinsic optical sectioning capability. Their application is potentially hindered by limited spatial resolution and reconstruction artifacts, and they favor objects with high sparsity50. Here, we demonstrated that SLIM can be combined with scanning multi-sheet illumination. This synergy enables high-contrast 3D imaging of densely labeled fluorescent objects.
We constructed a dual-light-sheet illumination module and scanned the beams using a galvo-mirror driven by a sawtooth function. Rather than synchronizing the camera exposure with the entire scan range as in previous experiments, we operated the camera at a higher rate, allowing each frame to capture a subset of depth layers of the fluorescent object (Fig. 5a). This approach suppresses out-of-focus light and improves the axial resolution to the scale of the light-sheet thickness, as shown on fluorescent beads and zebrafish vasculature networks (Tg(flk:mCherry)) (Fig. 5b,c). Meanwhile, SLIM's ultra-high frame rate and simultaneous multi-plane detection enable it to maintain a high volume rate even within this scanning scheme.
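The timing budget of this scheme follows from the rates quoted in the next section: at 4,800 fps against a 300 Hz sawtooth, each scan cycle spans 16 camera frames, and with two sheets per frame up to 32 plane acquisitions are available per synthesized volume. The text reports 30 reconstructed planes; scan flyback or overlapping sheet positions plausibly account for the difference, though that is our assumption.

```python
# Synchronization arithmetic for the multi-plane scanning scheme
cam_fps, scan_hz, sheets = 4800, 300, 2   # figures quoted in the text
frames_per_cycle = cam_fps // scan_hz     # camera frames per scan cycle
layers_per_cycle = frames_per_cycle * sheets  # plane acquisitions per volume
volume_rate = scan_hz                     # one synthesized volume per scan
print(frames_per_cycle, layers_per_cycle, volume_rate)  # → 16 32 300
```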
Fig. 5 |. 3D imaging of a beating zebrafish heart with multi-plane scanning light-sheet illumination at 300 vps.

a, Dual scanning light sheet replaces the flood illumination (that is, illuminating entire sample volume). In synchrony with the light sheet, the camera captures multiple frames at different scanning positions, each reconstructing two layers of the entire volume. By combining all measurements in one scan cycle, a 3D volume is synthesized for that time point. b, The x–z MIPs of fluorescent beads show image contrast and axial resolution. Axial resolution Rz has been quantified by frequency analysis. Scale bar, 100 μm. c, The structural images of the vasculature network in an embryonic zebrafish. Scale bars, 100 μm. d, Comparison of x–y cross-section images between scanning light-sheet illumination and flood illumination on cardiomyocytes in the zebrafish heart. The orange arrowheads mark muscle structures. Scale bar, 50 μm. e, The 3D rendering of myocardium at a representative time point. Scale bars, 50 μm. f, Kymographs calculated by sampling the time-dependent distance of the cavity along the white dashed line in e. The black arrowheads indicate the beat-to-beat variance of cardiac contraction. Scale bar, 50 μm.
We demonstrated this acquisition scheme by imaging a beating zebrafish heart (Tg(cmlc2:GFP)) at 300 vps (Fig. 5d and Supplementary Video 5). This was achieved by scanning the dual light sheets at 300 Hz, synchronized with camera recording at 4,800 fps (8-bit speed mode). This setup allowed us to reconstruct the heart with 30 planes across a 200-μm depth range, with microstructures such as ventricular trabeculation clearly delineated (Fig. 5d,e). The enhanced spatial resolution and contrast offer the potential for accurate segmentation of the heart chamber's geometry, facilitating cardiac studies such as regional myocardial contractility analysis51 and computational fluid dynamics for hemodynamic force simulation52. While current LFM cardiac imaging is mostly demonstrated on sparse markers such as cardiomyocyte nuclei18,19,21 and blood cells19,20,53, SLIM with scanning multi-sheet illumination proves effective in resolving densely labeled muscle tissue. It provides high 3D imaging speed to capture the beating heart in real time and outlines the time-dependent chamber dimensions to detect beat-to-beat variations (Fig. 5f).
Discussion
We presented SLIM as a snapshot 3D detection method that addresses the pressing need for high-speed volumetric microscopy operating at kilohertz speeds. SLIM accomplishes this by capturing a condensed representation of the original light field using a compact ROI on the sensor. The sampling strategy is grounded in the principle that the inherent spatio-angular correlation in the light field can be exploited to recover signals from compressive measurement25,26,28–30,32,54. As with other compressed sensing approaches, accurate reconstruction in SLIM relies on the assumption of sample sparsity.
SLIM’s kilohertz 3D imaging speed, which challenges previous methods and often entails considerable design trade-offs and demanding hardware requirements9,24, presents opportunities to investigate millisecond-scale dynamics in emerging fields such as voltage imaging. It is adaptable to most CMOS sensors, which generally allow higher frame rates at reduced read-out pixel rows. While we set kilohertz as a milestone for 3D fluorescence microscopy, SLIM has the potential to achieve tens and even hundreds of thousands of volumes per second with current ultra-fast cameras.
Quantum efficiency and read-out noise are critical parameters that determine a sensor’s sensitivity in low-light imaging. Unfortunately, these factors degrade notably in currently available ultra-fast cameras. Another important factor, the full well capacity, is also frequently sacrificed together with bit depth in exchange for frame rate. It becomes a relevant limitation when the fluorescence change (ΔF/F) is extremely low (for example, in our leech ganglion imaging) and a high pixel brightness is required to provide an acceptable SNR7. Moreover, extended-time imaging in animal behavior studies often demands a substantial number of frames to be recorded, which becomes impractical for ultra-fast cameras that rely on limited on-board storage. Although this could potentially be solved by a camera array with a dedicated computer cluster for data handling55, there are considerable technical and cost challenges in integrating high-performance sensors into an array. In contrast, SLIM presents an accessible solution that transforms a single, commonly used sCMOS into a kilohertz 3D imaging tool.
SLIM offers a snapshot acquisition that effectively addresses the trade-off between the pixel exposure time and volumetric frame rate encountered in conventional scanning-based 3D optical microscopy techniques. This approach provides SLIM with distinct advantages in photon efficiency and SNR, making it particularly beneficial for high-speed imaging of weak fluorescence.
SLIM compresses images only along the vertical axis and redistributes the information to the horizontal axis, which retains full sampling power through image rotation. As a result, SLIM can reconstruct a FOV equivalent to that of a full-sensor FLFM system, achieving comparable spatial resolution provided the sample is sufficiently sparse (see Extended Data Fig. 3 for validation against ground truth). This design is specifically tailored to a low-format rectangular sensor, marking a fundamental departure from existing compressive light field imaging25–31,54. The latter retrieves light fields at the same or lower resolution than the multiplexed measurement and suffers a linear reduction in FOV and pixel count as it crops the sensor ROI. Moreover, SLIM does not require multiple shots25 or learning a sparse basis before reconstruction27. It also scales across various ROI sizes: provided the sample sparsity allows, it can tolerate a narrow sensor size and further increase its space-bandwidth-frame rate product by using faster recording speeds. Furthermore, the SLIM strategy could potentially be integrated with the anisotropic binning capabilities of CCD image sensors, enabling in-sensor image compression with reconfigurable compression ratios. This would give SLIM greater flexibility, allowing it to be optimized for imaging scenes of varying complexity.
SLIM can transform an existing FLFM15,22,36,50 into a high-speed 3D imager with a substantially higher frame rate. Its performance approximates FLFM when the targeted dynamics exhibit sufficient spatiotemporal sparseness. However, as with all compressive detection systems, SLIM is susceptible to performance degradation when imaging dense signals with complex structures. Moreover, SLIM shares several limitations inherent to conventional FLFM, such as the missing cone problem, compromised spatial resolution, depth-variant performance and limited considerations for tissue scattering and lens aberrations. In addition, our current reconstruction assumes no occlusions in the scene28,30. These constraints limit its direct applicability in complex intravital imaging scenarios.
We have shown that multi-sheet scanning offers a solution by trading imaging speed for improved optical sectioning, extending SLIM’s applicability to densely labeled tissue imaging. The refocusing capability within a greatly extended depth of field makes SLIM compatible with various 3D illumination structures. For large organisms where direct lateral access is limited, oblique illumination microscopy56 or swept confocally aligned planar excitation microscopy10 could potentially be used. In addition, the literature presents several strategies to enhance SLIM’s performance, such as background rejection by hardware17 and computation57, multi-focus optics for an extended depth of field15,58 and sparsity-based resolution enhancement50,54. Furthermore, ongoing advancements in data-driven reconstruction algorithms, particularly physics-embedded deep-learning models20,22, hold promise for addressing the ill-posed inverse problems associated with limited space-bandwidth and compressive detection in SLIM. These developments are expected to enhance SLIM’s capabilities and broaden its utility across diverse imaging scenarios.
Finally, although beyond the scope of the current work, SLIM holds promise for extending fast volumetric imaging to other spectral ranges, such as the short-wave infrared region, where large-format image sensors remain challenging to fabricate. In these spectral regions, the inherently pixel-intensive nature of conventional LFM presents logistical hurdles. For instance, InGaAs cameras, widely used for short-wave infrared applications, typically feature limited pixel counts due to manufacturing complexity and high costs, thereby constraining their utility in conventional light field imaging. The SLIM approach, by substantially reducing the required number of camera pixels for 3D reconstruction, offers a compelling solution. It allows high-resolution volumetric imaging to be achieved within the practical and economical pixel budgets of existing short-wave infrared cameras, potentially opening new avenues for deep tissue imaging and biomedical exploration59,60.
Online content
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41592-025-02843-8.
Methods
Hardware setup
For the SLIM setup using selective volume side illumination, the detection features a ×20 water-dipping objective (N20X-PFH, Olympus XLUMPLFLN ×20, 1.0 numerical aperture (NA)). A 4F relay system (AC508-180-A, AC508-200-A, Thorlabs) forms a conjugate plane of the objective’s back pupil, accommodating a customized dove prism and a spherical lenslet array. The dove prism (aperture length 1.3 mm, material H-K9L, fabricated by Changchun Sunday Optics) is positioned in front of the plano-convex lenslet (aperture diameter 1.3 mm, focal length 36 mm, material PMMA, fabricated in-house). Each pair generates a rotated subaperture image with a magnification of ×3.6 and an NA of 0.065. In total, 29 pairs are used and securely housed in 3D-printed mechanical holders (refer to Supplementary Fig. 1 for detailed designs of the prism, lenslet and holder). Our anamorphic relay system comprises a spherical achromat doublet (ACT508-250-A, Thorlabs) and two orthogonally oriented cylindrical achromat doublets (ACY254-250-A, ACY254-50-A, Thorlabs). The back focal planes of the two cylindrical lenses are colocated, producing an image with an anisotropic scaling factor (Supplementary Fig. 2). A sCMOS camera (Kinetix, Teledyne) captures the final image, with a 320 × 3,200-pixel ROI covering all 29 subaperture images, or a 200 × 3,200-pixel ROI for 19 subaperture images. The maximal read-out speeds for the two ROIs are 830 fps and 1,326 fps in 16-bit dynamic range mode and 4,790 fps and 7,476 fps in 8-bit speed mode. The 29-subaperture configuration collects around 50% more light than the 19-subaperture one.
The illumination sources include blue and green continuous lasers (MBL-FN-473-500mW and MGL-III-532-300mW, CNI Laser) and an ultra-low-noise blue LED (UHP-T-470SR, Prizmatix). For the scanning light-sheet setup, we use a knife-edge mirror (MRAK25-G01, Thorlabs) to combine two beams with adjustable spacing and a galvo-mirror (GVS011, Thorlabs) to scan them together. Planar illumination is formed perpendicular to the detection axis by a cylindrical lens and a dry objective (RMS4X-PF, Olympus ×4, 0.13 NA). We use a sawtooth function to drive the galvo-mirror. In the synthetic volume illumination configuration, we block one light-sheet beam. The camera is then triggered at the beginning of the sawtooth waveform and exposed for the entire scan. In the scanning plane illumination configuration, we use two light-sheet beams and trigger the camera multiple times during a scan. The static LED setup shares the same illumination objective and perpendicular geometry. We build a Koehler illumination system and use an adjustable slit as a field aperture. The conjugate image of the slit is relayed to the sample, and the slit controls the depth range of the beam. The LED provides ultra-stable illumination power and thus suppresses excitation source noise during our voltage imaging experiments.
For SLIM setup using widefield epi-illumination, we use a ×16 water-dipping objective (×16 Nikon CFI LWD Plan Fluorite Objective, 0.80 NA). The back pupil relay system uses a pair of doublets (AC508–150-A, Thorlabs). The dove prism, lenslet array and anamorphic relay system are identical to the selective volume side-illumination setup. An ultra-low-noise blue LED (UHP-T-470SR, Prizmatix) in a Koehler configuration is used for epi-illumination.
See Supplementary Figs. 4 and 5 for system schematics, Supplementary Table 3 for components list and Supplementary Table 4 for configurations used in different experiments.
Image formation and reconstruction
The light field of the fluorescent sample is acquired by dividing the objective’s back pupil with a lenslet array and recording a group of subaperture images. Depending on their subaperture locations, these images display disparity, that is, a distinct displacement of the same signal across views. After calibrating the displacement at every axial position, the formation of each subaperture image can be modeled as a sum of laterally shifted depth slices. We replace the shifting operator with a convolution with the PSF to account for both diffraction and displacement. A dove prism is a truncated right-angle prism used to rotate the incident beam. Rotating the prism around its longitudinal axis causes the beam to rotate at twice the rate of the prism’s rotation. By placing a dove prism array in the infinity space between the objective and the lenslet array, we apply varying in-plane rotations to the subaperture images. Finally, we adopt an anamorphic relay system to introduce anisotropic scaling to the image array: we de-magnify (squeeze) the image in the direction perpendicular to the camera read-out axis while maintaining the original scale in the other direction (Supplementary Fig. 2). This one-axis scaling and the aforementioned in-plane rotation are both applied directly to the 3D fluorescent image in our model.
Given a 3D fluorescence distribution V(x, y, z) and the system PSF h_i(x, y, z) of each subaperture, the formation of the camera measurement I_i can be modeled as:

I_i(x, y) = Σ_z h_i(x, y, z) ⊗ {S[R_i(V)]}(x, y, z), | (1) |

where i is the index of the subaperture, ⊗ represents the 2D convolution, R_i applies the subaperture-dependent rotation from the dove prism and S introduces the image scaling from the anamorphic relay system.
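As a concrete illustration, the forward model can be prototyped in a few lines of numpy/scipy. This is a minimal sketch: the operator ordering (rotation on the volume, squeezing after convolution), the interpolation orders and the `squeeze` factor are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import rotate, zoom
from scipy.signal import fftconvolve

def slim_forward(volume, psfs, angles, squeeze=0.2):
    """Sketch of the SLIM forward model.

    volume : (Z, Y, X) fluorescence distribution V
    psfs   : (N, Z, y, x) per-subaperture, per-depth PSFs h_i
    angles : N in-plane rotation angles (deg) from the dove prisms, R_i
    squeeze: anisotropic scaling S along the squeezed (vertical) axis
    Returns a list of N squeezed subaperture images.
    """
    images = []
    for psf_i, angle in zip(psfs, angles):
        # R_i: rotate every depth slice of the volume in-plane
        rot = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        # sum over depth of 2D convolutions with the depth-dependent PSF
        img = sum(fftconvolve(rot[z], psf_i[z], mode='same')
                  for z in range(volume.shape[0]))
        # S: squeeze along the vertical axis only
        images.append(zoom(img, (squeeze, 1.0), order=1))
    return images
```

With a 0.2 squeeze factor, a 20-pixel-tall view collapses to 4 rows while the horizontal axis keeps its full sampling, mirroring how SLIM condenses the light field onto a low-format ROI.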
The volume reconstruction algorithm was derived from Richardson–Lucy deconvolution14,15,36,61 (Supplementary Note 2). Based on the forward model (1), the 3D fluorescence distribution is iteratively solved from the camera measurements and empirical PSFs. The rotation angles and image scaling ratio are precalibrated as known priors in the reconstruction. The measurement patch is cropped from the raw sensor image according to the center location of each subaperture image. We experimentally measure the PSFs by imaging a subdiffraction fluorescent bead. The point source is placed in the middle of the FOV and axially scanned over a broad 600-μm depth range with a 2-μm step size using a motorized translation stage. The actual axial range and step size in the reconstruction depend on the specific experiment. The PSF is assumed to be spatially invariant within each subaperture image.
With our implementation, each measurement patch has a resolution of 301 × 61 pixels. The numbers of subaperture channels and axial slices are configured based on the targeted frame rate and the depth range and/or step size. For example, with 19 subaperture measurements to reconstruct a volume of 305 × 305 × 151 pixels, the deconvolution takes around 30 s over eight iterations on a desktop computer with a modest graphical processing unit (Nvidia RTX 3070).
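The multi-view Richardson–Lucy scheme can be sketched as a multiplicative update that alternates forward projection and back-projection across subapertures. This minimal version omits the rotation and anamorphic-scaling operators and PSF normalization for clarity; the released code on GitHub is the reference implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_3d(measurements, psfs, n_iter=8, eps=1e-6):
    """Sketch of multi-view Richardson-Lucy deconvolution.

    measurements : (N, y, x) subaperture images
    psfs         : (N, Z, y, x) calibrated PSFs per subaperture and depth
    Returns an estimated (Z, y, x) volume.
    """
    n_views, n_z = psfs.shape[0], psfs.shape[1]
    vol = np.ones((n_z,) + measurements.shape[1:])   # flat initial guess
    for _ in range(n_iter):
        update = np.zeros_like(vol)
        for i in range(n_views):
            # forward-project the current estimate into view i
            est = sum(fftconvolve(vol[z], psfs[i, z], mode='same')
                      for z in range(n_z))
            ratio = measurements[i] / (est + eps)
            # back-project the ratio with the flipped (adjoint) PSFs
            for z in range(n_z):
                update[z] += fftconvolve(ratio, psfs[i, z, ::-1, ::-1],
                                         mode='same')
        vol *= update / n_views
    return vol
```

Deconvolving a single-bead measurement with its own PSF recovers a point at the bead location, which is essentially how the empirical PSF calibration is validated.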
Fluorescent beads imaging and resolution characterization
Fluorescent microspheres of 1 μm (F13082, ThermoFisher) were embedded in 1% low-melting agarose and injected into a transparent fluorinated ethylene propylene tube for imaging. To acquire experimental PSFs, the solution was highly diluted to keep only one bead in the FOV. For spatial resolution quantification, the beads were randomly distributed in space, and we measured the full-width at half-maximum of the bead image at various locations in the FOV (Supplementary Fig. 3). We also performed two-point resolving experiments by reconstructing synthetic measurements formed by adding two sequentially captured frames. We translated the bead laterally and axially by varying distances between these two frames. The resolution was then indicated by the resolvable pair with the smallest separation (Supplementary Fig. 3).
Fish husbandry and imaging
Transgenic zebrafish lines Tg(gata1a:dsRed), Tg(flk:mCherry) and Tg(cmlc2:GFP) were used in our experiments for imaging blood cells, endothelial cells and myocardium, respectively. Embryonic fish were used at 3 days postfertilization and maintained in standard E3 medium, supplemented with 1-phenyl-2-thiourea (Sigma Aldrich) from day one to inhibit melanogenesis. For brain hemodynamics and cardiac imaging, the larvae were anesthetized with tricaine (3-aminobenzoic acid ethyl ester, Sigma Aldrich) and immobilized in 1% low-melting-point agarose inside a fluorinated ethylene propylene tube before imaging. For tail experiments, the larvae were first positioned on cover glass before the heads were fixed with 3% low-melting-point agarose. Immediately after the agarose solidified, the sample was immersed in a water chamber. Imaging started after visually confirming the unconstrained movement of the tail. All experiments were performed in compliance with, and with the approval of, a UCLA IACUC protocol.
Blood cell velocimetry
3D cell tracking was performed in ImageJ with the TrackMate plugin62,63. We used the difference-of-Gaussian blob detector and the linear assignment problem tracker for all experiments. The spot positions and trajectory properties were exported to MATLAB for further analysis and visualization. We removed the local oscillation from the tracking result by fitting the center line of each trajectory with a smoothing spline. The tangential velocity was calculated by projecting the cell displacement between adjacent frames onto the center line. The trajectories were then color-coded by tangential velocity to display the blood flow speed along the vessels. The tracking accuracy was quantified by imaging static fluorescent beads (Supplementary Fig. 8).
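The spline-and-projection step can be sketched with scipy. This is a minimal illustration of the idea, not the MATLAB analysis pipeline; the smoothing factor is an illustrative assumption.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def tangential_velocity(track, dt):
    """Fit a smoothing spline through a 3D cell trajectory and project
    frame-to-frame displacements onto the local tangent of that center
    line, giving a signed flow speed along the vessel.

    track : (T, 3) cell positions in um; dt : frame interval in s.
    """
    # smoothing spline through the trajectory (removes local oscillation)
    tck, u = splprep(track.T, s=len(track))
    # unit tangent of the center line at each sample
    d = np.array(splev(u, tck, der=1)).T
    tangent = d / np.linalg.norm(d, axis=1, keepdims=True)
    # project displacement between adjacent frames onto the tangent
    disp = np.diff(track, axis=0)
    return np.einsum('ij,ij->i', disp, tangent[:-1]) / dt  # um/s, signed
```

For a cell moving steadily along a straight vessel, the projected speed equals displacement per frame divided by the frame interval; curvature and local oscillation are absorbed by the spline fit.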
Leech sample preparation
Medicinal leeches (Hirudo verbana) obtained from leech.com were housed in artificial pond water maintained at 15 °C. Detailed dissection procedures have been described before42,64. Briefly, an adult leech was anesthetized in ice-cold leech saline and an individual segmental ganglion (M10 or M11) was dissected out. The ganglion was pinned down ventral side up on a rectangular-shaped flat substrate made of polydimethylsiloxane (Sylgard 184, Dow Corning). After removing the sheath that covers the ganglion, a voltage-sensitive dye65 (FluoVolt, F10488, ThermoFisher) was bath-loaded using a peristaltic pump. The sample was placed under the detection objective of the SLIM system for imaging. In swim experiments, the entire nervous system was dissected out, except for the cephalic ganglia. Segmental ganglion M10 or M11 was desheathed as before. In addition, the dorsal posterior nerves (DP1) of ganglion M13 or M14 were exposed for extracellular stimulation and recording with a suction electrode. Nerve stimulation in these caudal ganglia is a well-established method for eliciting fictive swimming.
Leech ganglion electrophysiology and imaging
Glass microelectrodes (20–50 MΩ) were filled with a recording solution of 3 M potassium acetate and 60 mM potassium chloride. After penetrating the membrane of a cell of interest, small negative holding currents were injected to ensure stability. Intracellular electrophysiology used Neuroprobe amplifiers (Model 1600; A-M systems). Membrane voltage and electrode current were digitized along with the camera trigger signal at 10 kHz using a 16-bit data acquisition board (NI USB-6002; National Instruments). We used the camera triggers as time stamps to align recorded frames with electrophysiological data. Extracellular electrophysiology used a custom-built differential amplifier that allowed for rapid switching between stimulation and recording.
After image reconstruction, we corrected for sample movement by running a 2D registration (motion removal) between adjacent frames using a modified version of SWiFT-IR66. 3D ROIs were then manually defined for each neuron. The optical read-out was calculated by averaging the pixel intensities in the ROI and normalizing by the temporal baseline: ΔF/F = (F − F0)/F0, where F0 is the temporal mean value. To detect spikes from the optical signal, we detrended the trace by subtracting its median-filtered version (window size, 50 ms). The trace was then binarized by a Schmitt trigger, and peak detection was performed to locate the voltage spikes.
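The detrend/binarize/peak-detect chain can be sketched as follows. The Schmitt-trigger thresholds (in standard-deviation units) are illustrative assumptions; the actual thresholds used in the analysis are not restated here.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import find_peaks

def detect_spikes(trace, fs, win_ms=50, on=3.0, off=1.0):
    """Sketch of optical spike detection: dF/F normalization,
    median-filter detrending, Schmitt-trigger binarization, then
    peak detection within the triggered regions."""
    f0 = trace.mean()                       # temporal baseline F0
    dff = (trace - f0) / f0                 # dF/F
    win = max(3, int(win_ms * 1e-3 * fs) | 1)   # odd filter window
    detrended = dff - median_filter(dff, size=win)
    z = detrended / detrended.std()
    # Schmitt trigger: latch high above `on`, release below `off`
    state = False
    binary = np.zeros_like(z, dtype=bool)
    for i, v in enumerate(z):
        state = v > on if not state else v > off
        binary[i] = state
    peaks, _ = find_peaks(np.where(binary, z, 0.0))
    return peaks
```

The hysteresis between the two thresholds prevents noise near a single threshold from fragmenting one spike into several detections.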
Coherence analysis method
We used multitaper estimation techniques67 to calculate the power spectral densities, denoted S_x(f) for the optical signal and S_y(f) for the reference signal, and their coherence, denoted C(f). The use of multiple tapers ensures a more balanced weighting across all regions of a record, in contrast to the central bias introduced by a single taper. In addition, this approach allows for the estimation of the standard deviation of the spectral estimates within a single trial. The spectral measures are defined by S_x(f) = ⟨|X̃(f)|²⟩, S_y(f) = ⟨|Ỹ(f)|²⟩ and C(f) = ⟨X̃(f)Ỹ*(f)⟩ / √(S_x(f)S_y(f)). The brackets denote an average over all trials and tapers, specifically ⟨X̃(f)⟩ = (1/K) Σ_{k=1}^{K} X̃_k(f), where K is the number of tapers. X̃_k(f) is the discrete Fourier transform of the product x(t)w_k(t), which refers to the time-domain optical signal x(t) multiplied by the kth taper w_k(t). The discrete prolate spheroidal sequences (Slepian sequences) are used as tapers to minimize power leakage between frequency bands. In fictive swimming datasets, we selected the extracellular recording as the reference signal and used ten tapers (K = 10) for spectral coherence analysis. Additional processing steps, including motion removal and detrending, were applied before the coherence analysis.
Standard deviations of the coherence are reported as jackknife estimates within single trials68. In this method, the variance of a dataset with K independent estimates of a quantity is calculated by sequentially deleting each estimate and computing the variance over the resulting averages. The ‘delete-one’ averages of coherence, denoted C_i(f), where i is the index of the deleted taper, are given by C_i(f) = ⟨X̃(f)Ỹ*(f)⟩_i / √(S_{x,i}(f)S_{y,i}(f)), where ⟨·⟩_i denotes the average over all tapers except taper i.
The standard deviation of the magnitude of C(f) is then computed as σ_|C|(f) = √(((K − 1)/K) Σ_{i=1}^{K} (|C_i(f)| − |C̄(f)|)²), where |C̄(f)| is the mean of the delete-one magnitudes. The variance estimate for the phase is determined by comparing the relative directions of the delete-one unit vectors. The standard deviation is computed as σ_φ(f) = √(2(K − 1)(1 − |(1/K) Σ_{i=1}^{K} C_i(f)/|C_i(f)||)).
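A single-trial multitaper coherence estimate with the delete-one jackknife over tapers can be sketched as below. The taper parameters (`nw`, `k`) and the plain-rfft implementation are illustrative assumptions.

```python
import numpy as np
from scipy.signal import windows

def multitaper_coherence(x, y, nw=5.5, k=10):
    """Sketch of single-trial multitaper coherence magnitude and its
    delete-one jackknife standard deviation over tapers."""
    n = len(x)
    tapers = windows.dpss(n, nw, Kmax=k)          # (k, n) Slepian tapers
    xk = np.fft.rfft(tapers * x, axis=1)          # tapered DFTs of x
    yk = np.fft.rfft(tapers * y, axis=1)

    def coh(idx):
        sxy = (xk[idx] * np.conj(yk[idx])).mean(axis=0)
        sxx = (np.abs(xk[idx]) ** 2).mean(axis=0)
        syy = (np.abs(yk[idx]) ** 2).mean(axis=0)
        return sxy / np.sqrt(sxx * syy)

    c_all = coh(np.arange(k))
    # delete-one estimates: leave out one taper at a time
    c_del = np.array([coh(np.delete(np.arange(k), i)) for i in range(k)])
    mags = np.abs(c_del)
    sd = np.sqrt((k - 1) / k * ((mags - mags.mean(axis=0)) ** 2).sum(axis=0))
    return np.abs(c_all), sd
```

Two noisy recordings sharing a common sinusoid show near-unity coherence at the shared frequency, with the jackknife giving a per-frequency error bar from a single trial.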
Behaving mouse sample preparation
All experiments were conducted according to the National Institutes of Health (NIH) guidelines and with the approval of the Chancellor’s Animal Research Committee of the University of California, Los Angeles. Mice were anesthetized with isoflurane (5% for induction, 1–2% (v/v) for maintenance). The depth of anesthesia was monitored continuously and adjusted when necessary. After induction of anesthesia, the mice were fitted into a stereotaxic frame (Kopf), with their heads secured by blunt ear bars and their noses placed into an anesthesia and ventilation system (David Kopf Instruments). Body temperature was kept at 37 °C with a feedback-controlled heating pad (Harvard Apparatus). Mice were administered 0.05 ml of lidocaine (2%; Akorn) subcutaneously as a local anesthetic before surgery. The surgical incision site was cleaned three times with 10% povidone-iodine and 70% ethanol. After removing the scalp and clearing the skull of connective tissue, we drilled a hole above the virus injection location. Then, we injected GEVIs into the CA1 of the hippocampus, at coordinates mediolateral ±1.8 mm, anteroposterior −2 mm, dorsoventral −1.3 mm from bregma. One Chrna2−Cre+ male mouse was injected with the Cre-dependent GEVI pAce (AAV-DJ-CAG-DIO-pAce-kv2.1; titer, 2.6 × 10¹² viral genomes per ml), allowing for imaging of neuron populations expressing nicotinic acetylcholine receptor alpha2, a specific marker of oriens lacunosum-moleculare interneurons69. Another wild-type male mouse was injected with a cocktail of the GEVI pAce (AAV9-EF1a-DIO-pAce-Kv-WPRE; titer, 2.1 × 10¹³ viral genomes per ml) and a principal-cell-specific Cre promoter (pAAV1-CamKII-Cre, Addgene, 105558; titer, 1.9 × 10¹³ viral genomes per ml), allowing for imaging of excitatory pyramidal neurons. After viral injection, a circular craniotomy (3 mm diameter) was made around the injection site.
Dura over the exposed brain surface was removed, and the cortical tissue above the dorsal CA1 was carefully aspirated using a 27-gauge blunt needle. Buffered artificial cerebrospinal fluid (7.888 g NaCl, 0.372 g KCl, 1.192 g HEPES, 0.264 g CaCl2, 0.204 g MgCl2 per 1,000 ml Millipore water) was applied constantly throughout the aspiration to prevent desiccation of the tissue. The aspiration ceased after partial removal of the corpus callosum, once bleeding had stopped, at which point a 3-mm titanium ring with a glass coverslip attached to its bottom was implanted into the aspirated area and its circular flange was secured to the skull surface using Vetbond (3M). A custom-made lightweight metal head holder (headbar) was attached to the skull posterior to the implant. Cyanoacrylate glue and black dental cement (Ortho-Jet, Lang Dental) were used to seal and cover the exposed skull. During recovery, mice were administered carprofen (5 mg per kg of body weight) for 3 days as a systemic analgesic and the antibiotic amoxicillin (0.25 mg ml⁻¹ in drinking water) through the water supply for 7 days.
Behaving mouse imaging
Mice were imaged at least 3 weeks after surgery on a treadmill setup that used an optical rotary encoder to track movement. Each session lasted 3 min, with epochs of locomotion and stationary behavior. SLIM was configured to image at 800 Hz with a 1,230-μs exposure time for each frame, under widefield epifluorescence illumination (UHP-T-470SR, Prizmatix). The camera was externally triggered. A data acquisition board (NI USB-6002; National Instruments) recorded both the camera trigger signals and the two-channel rotary encoder outputs simultaneously at 10 kHz, allowing for alignment between the optical read-out and animal locomotion.
The 3-min raw data from the camera (dynamic range mode, 16 bits) were continuously streamed to the host computer through a PCIe Gen 3 ×16 interface and saved on four solid-state drives (SSDs) (Samsung 970 Pro 1 TB) in RAID-0 configuration via a controller (HighPoint SSD7101A-1). We also tested direct streaming to a single M.2 NVMe SSD (Samsung 970 Pro 1 TB). Both storage configurations enabled recording without missed frames at 800 Hz for up to 10 min, thanks to SLIM’s compressed data load. However, performance may decline after repeated acquisitions without sufficient intervals for SSD cache recovery. In addition, extended continuous recording is limited by fluorescence photobleaching (Supplementary Fig. 15). See Supplementary Table 4 for the illumination power in mice imaging experiments.
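The sustained write bandwidth implied by this configuration follows from simple arithmetic, using the 320 × 3,200-pixel, 16-bit ROI and 800-fps frame rate given above:

```python
# Sustained data rate for the behaving-mouse recordings:
# 320 x 3,200 px ROI, 16 bit (2 bytes/px), 800 fps.
roi_bytes = 320 * 3200 * 2            # bytes per frame
rate_gb_s = roi_bytes * 800 / 1e9     # sustained write rate, GB/s
total_gb = rate_gb_s * 3 * 60         # total for a 3-min session, GB
```

This works out to roughly 1.64 GB/s sustained (about 295 GB per 3-min session), which is why a RAID-0 array or a fast NVMe drive is needed, and why a full-sensor light field at the same frame rate would be far beyond a single-drive budget.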
Each raw measurement contained 29 subaperture images, and we reconstructed a 3D volume with an axial range from −296 μm to 296 μm at a step size of 8 μm over eight iterations. The resulting image stack had a resolution of 305 × 305 × 75 voxels; reconstruction took around 1 s per volume on average after distributing the entire time sequence (~3 min, 144,000 frames) across 24 parallel workers (Parallel Computing Toolbox, MATLAB) on our workstations (graphical processing units: Nvidia RTX 2080, 3070 and 3090).
As in the leech ganglion experiments, we corrected sample movement with SWiFT-IR66 and manually defined a 3D ROI for each neuron candidate. The optical read-out was calculated by averaging the pixels in the ROI and normalizing by the temporal mean value. Spikes were detected by detrending the signals with a moving median filter (window size, 125 ms), binarizing with a Schmitt trigger and performing peak detection. The signal traces visualized in the figures (Fig. 4, Extended Data Figs. 1 and 2 and Supplementary Fig. 16) used a larger filter window size (1 s) to keep the subthreshold oscillations. We calculated the ratio between the signal and the standard deviation of the entire trace to represent the SNR. In the analysis of spike waveform evolution, however, the standard deviations were calculated for each individual time bin (35 s). The subthreshold oscillations (Extended Data Fig. 2) were analyzed by applying a band-pass filter (4–10 Hz) to the detrended signal (median filter window size, 1 s). To calculate the firing rate and animal locomotion velocity, we counted the number of spikes and rotary encoder pulses in the same sliding window (window size, 5 s).
Image reconstruction with scanning multi-plane illumination
The axial positions of the plane illumination during a scan were calibrated using fluorescent beads. For each scanning step, we reconstructed the bead measurement and localized the two slices exhibiting the highest image contrast. In the subsequent imaging experiments, the same slices were sampled from the reconstruction and, together with those of all other scanning steps, constituted a new stack. During high-speed scans, the sawtooth function that drives the galvo-mirror can suffer from the limited bandwidth of the waveform generator. We removed the measurements at the beginning and/or end of the scan if recurring abnormalities (occurring at every cycle) were observed during calibration. For each scanning step, image reconstruction was treated as the same inverse problem as under volumetric illumination.
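The slice-selection and stack-assembly step can be sketched as follows. Using the per-slice standard deviation as the contrast metric is an illustrative assumption; any focus measure that peaks at the illuminated planes would serve.

```python
import numpy as np

def assemble_scanned_stack(recons, n_best=2):
    """Sketch of multi-sheet stack assembly: for each scanning step,
    keep the n_best axial slices with the highest image contrast
    (per-slice s.d. as a stand-in metric) and stack them across steps.

    recons : list of (Z, Y, X) reconstructions, one per scan step.
    """
    kept = []
    for vol in recons:
        contrast = vol.std(axis=(1, 2))          # contrast per slice
        idx = np.sort(np.argsort(contrast)[-n_best:])  # best slices, in order
        kept.append(vol[idx])
    return np.concatenate(kept, axis=0)
```

For the dual-sheet scan described above, `n_best=2` keeps the two illuminated planes at each step, so a 15-step scan yields the 30-plane stack reported for the cardiac imaging.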
Extended Data
Extended Data Fig. 1 |. 3D voltage imaging of hippocampus pyramidal neurons in awake mice.

a. MIP of the SLIM reconstruction of the 3D-located neurons. Neuron indices are partially labeled in the y–z view to avoid cluttered markers. Representative result from five FOVs using mice labeled for excitatory pyramidal neurons. Scale bar, 100 μm. b. 3D distribution of neuron center locations and their corresponding indices. c. Detrended signal traces and the detected spikes labeled with black dots.
Extended Data Fig. 2 |. Observation of subthreshold oscillations in hippocampus interneurons.

a. Example MIP of the same dataset used in Fig. 4, with neurons of interest labeled with numbers. Representative result from nine FOVs conducted using mice labeled for OLM interneurons. Scale bar, 100 μm. b. Power spectral densities of 18 neuron traces throughout the entire recording (around 3 min). Red regions denote the frequency band known for theta oscillations. c. Ten-second raw traces; red lines plot the band-pass-filtered signal in the 4–10 Hz band.
Extended Data Fig. 3 |. Comparison between SLIM reconstruction and ground truth.

High-resolution ground-truth images were acquired with a reference camera (sharing the same objective as SLIM) under either widefield or light-sheet illumination. The sample was axially scanned with a motorized stage to acquire a z stack. SLIM and ground-truth images were acquired sequentially on the same sample. a. Tilted mouse kidney slice (FluoCells™ prepared slide, ThermoFisher). b. Live mouse hippocampus (CA1, excitatory pyramidal neurons). Green arrows denote representative neuron candidates. Yellow arrows mark the overlaps between subaperture views (fabrication tolerance of the prisms and holder). When the sample exhibits a strong background, this overlap can induce reconstruction artifacts. Blue arrows indicate MIP artifacts due to the bright boundary of the circular FOV. c. Embryonic zebrafish vasculature (endothelial cells, Tg(flk:mCherry)). Scale bar, 100 μm.
Supplementary Material
Supplementary information The online version contains supplementary material available at https://doi.org/10.1038/s41592-025-02843-8.
Acknowledgements
We thank Y. Dong and Y. Zhang at UCLA for their assistance in zebrafish experiments. We acknowledge the David Geffen School of Medicine at UCLA for providing the Zebrafish Core Facilities. This work was supported by the following grants: NIH (grant nos. R01HL165318 (L.G.), RF1NS128488 (L.G.), R35GM128761 (L.G.), R01AI102584 (G.C.L.W.), R01HL129727 (T.K.H.), R01HL159970 (T.K.H.) and T32HL144449 (T.K.H.)). W.C.S. was supported by the National Science Foundation Graduate Research Fellowship Program (grant nos. DGE-1650604 and DGE-2034835) and Ruth L. Kirschstein National Research Service Award ‘Multidisciplinary Training in Microbial Pathogenesis’ (grant no. T32AI007323). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the paper.
Footnotes
Competing interests
L.G. has a financial interest in Lift Photonics. However, it was not involved in the research presented in this paper. The other authors declare no competing interests.
Code availability
Codes for 3D reconstruction are available on GitHub at https://github.com/aaronzq/SLIM and via Zenodo at https://doi.org/10.5281/zenodo.15793563 (ref. 70) under a BSD license.
Extended data is available for this paper at https://doi.org/10.1038/s41592-025-02843-8.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
Data underlying the results are publicly available on GitHub at https://github.com/aaronzq/SLIM and via Zenodo at https://doi.org/10.5281/zenodo.15793563 (ref. 70). Source data are provided with this paper.
References
- 1. Meng G et al. Ultrafast two-photon fluorescence imaging of cerebral blood circulation in the mouse brain in vivo. Proc. Natl Acad. Sci. USA 119, e2117346119 (2022).
- 2. Abdelfattah AS et al. Bright and photostable chemigenetic indicators for extended in vivo voltage imaging. Science 365, 699–704 (2019).
- 3. Abdelfattah AS et al. Sensitivity optimization of a rhodopsin-based fluorescent voltage indicator. Neuron 111, 1547–1563.e9 (2023).
- 4. Hochbaum DR et al. All-optical electrophysiology in mammalian neurons using engineered microbial rhodopsins. Nat. Methods 11, 825–833 (2014).
- 5. Wu J et al. Kilohertz two-photon fluorescence microscopy imaging of neural activity in vivo. Nat. Methods 17, 287–290 (2020).
- 6. Zhang T et al. Kilohertz two-photon brain imaging in awake mice. Nat. Methods 16, 1119–1122 (2019).
- 7. Wang Z et al. Imaging the voltage of neurons distributed across entire brains of larval zebrafish. Preprint at https://doi.org/10.1101/2023.12.15.571964 (2023).
- 8. Weber TD, Moya MV, Kılıç K, Mertz J & Economo MN High-speed multiplane confocal microscopy for voltage imaging in densely labeled neuronal populations. Nat. Neurosci. 26, 1642–1650 (2023).
- 9. Sacconi L et al. KHz-rate volumetric voltage imaging of the whole Zebrafish heart. Biophys. Rep. 2, 100046 (2022).
- 10. Voleti V et al. Real-time volumetric microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0. Nat. Methods 16, 1054–1062 (2019).
- 11. Pavani SRP et al. Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function. Proc. Natl Acad. Sci. USA 106, 2995–2999 (2009).
- 12. Llull P et al. Coded aperture compressive temporal imaging. Opt. Express 21, 10526–10545 (2013).
- 13. Wagadarikar AA, Pitsianis NP, Sun X & Brady DJ Video rate spectral imaging using a coded aperture snapshot spectral imager. Opt. Express 17, 6368–6388 (2009).
- 14. Prevedel R et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods 11, 727–730 (2014).
- 15. Cong L et al. Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio). eLife 6, e28158 (2017).
- 16. Skocek O et al. High-speed volumetric imaging of neuronal activity in freely moving rodents. Nat. Methods 15, 429–432 (2018).
- 17. Zhang Z et al. Imaging volumetric dynamics at high speed in mouse and zebrafish brain with confocal light field microscopy. Nat. Biotechnol. 39, 74–83 (2021).
- 18. Wagner N et al. Instantaneous isotropic volumetric imaging of fast biological processes. Nat. Methods 16, 497–500 (2019).
- 19. Wang Z et al. Real-time volumetric reconstruction of biological dynamics with light-field microscopy and deep learning. Nat. Methods 18, 551–556 (2021).
- 20. Lu Z et al. Virtual-scanning light-field microscopy for robust snapshot high-resolution volumetric imaging. Nat. Methods 20, 735–746 (2023).
- 21. Wagner N et al. Deep learning-enhanced light-field imaging with continuous validation. Nat. Methods 18, 557–563 (2021).
- 22. Yi C et al. Video-rate 3D imaging of living cells using Fourier view-channel-depth light field microscopy. Commun. Biol. 6, 1259 (2023).
- 23. Tian T, Yuan Y, Mitra S, Gyongy I & Nolan MF Single photon kilohertz frame rate imaging of neural activity. Adv. Sci. 9, 2203018 (2022).
- 24. Guo R et al. EventLFM: event camera integrated Fourier light field microscopy for ultrafast 3D imaging. Light Sci. Appl. 13, 144 (2024).
- 25. Ashok A & Neifeld MA Compressive light field imaging. In Proc. Three-Dimensional Imaging, Visualization, and Display 2010 and Display Technologies and Applications for Defense, Security, and Avionics IV Vol. 7690 (eds Bahram J et al.) 221–232 (SPIE, 2010).
- 26. Babacan SD et al. Compressive light field sensing. IEEE Trans. Image Process. 21, 4746–4757 (2012).
- 27. Marwah K, Wetzstein G, Bando Y & Raskar R Compressive light field photography using overcomplete dictionaries and optimized projections. ACM Trans. Graph. 32, 46:1–46:12 (2013).
- 28. Antipa N, Necula S, Ng R & Waller L Single-shot diffuser-encoded light field imaging. In 2016 IEEE International Conference on Computational Photography (ICCP) 1–11 (IEEE, 2016).
- 29. Liu FL, Kuo G, Antipa N, Yanny K & Waller L Fourier diffuserScope: single-shot 3D Fourier light field microscopy with a diffuser. Opt. Express 28, 28969–28986 (2020).
- 30. Yanny K et al. Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy. Light Sci. Appl. 9, 171 (2020).
- 31. Antipa N et al. DiffuserCam: lensless single-exposure 3D imaging. Optica 5, 1–9 (2018).
- 32. Feng X, Ma Y & Gao L Compact light field photography towards versatile three-dimensional vision. Nat. Commun. 13, 3333 (2022).
- 33. Feng X & Gao L Ultrafast light field tomography for snapshot transient and non-line-of-sight imaging. Nat. Commun. 12, 2179 (2021).
- 34. Wang Z, Hsiai TK & Gao L Augmented light field tomography through parallel spectral encoding. Optica 10, 62–65 (2023).
- 35. Mandracchia B et al. High-speed optical imaging with sCMOS pixel reassignment. Nat. Commun. 15, 4598 (2024).
- 36. Guo C et al. Fourier light-field microscopy. Opt. Express 27, 25573–25594 (2019).
- 37. Llavador A, Sola-Pikabea J, Saavedra G, Javidi B & Martínez-Corral M Resolution improvements in integral microscopy with Fourier plane recording. Opt. Express 24, 20792–20798 (2016).
- 38. Scrofani G et al. FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples. Biomed. Opt. Express 9, 335–346 (2018).
- 39. Levoy M, Ng R, Adams A, Footer M & Horowitz M Light field microscopy. ACM Trans. Graph. 25, 924–934 (2006).
- 40. Broxton M et al. Wave optics theory and 3-D deconvolution for the light field microscope. Opt. Express 21, 25418–25439 (2013).
- 41. Zhou Y, Zickus V, Zammit P, Taylor JM & Harvey AR High-speed extended-volume blood flow measurement using engineered point-spread function. Biomed. Opt. Express 9, 6444–6454 (2018).
- 42. Tomina Y & Wagenaar DA A double-sided microscope to realize whole-ganglion imaging of membrane potential in the medicinal leech. eLife 6, e29839 (2017).
- 43. Briggman KL & Kristan WB Imaging dedicated and multifunctional neural circuits generating distinct behaviors. J. Neurosci. 26, 10925–10933 (2006).
- 44. Briggman KL, Abarbanel HDI & Kristan WB Optical imaging of neuronal populations during decision-making. Science 307, 896–901 (2005).
- 45. Kannan M et al. Dual-polarity voltage imaging of the concurrent dynamics of multiple neuron types. Science 378, eabm8797 (2022).
- 46. Buzsáki G Theta oscillations in the hippocampus. Neuron 33, 325–340 (2002).
- 47. Gu Z et al. Hippocampal interneuronal α7 nAChRs modulate theta oscillations in freely moving mice. Cell Rep. 31, 107740 (2020).
- 48. Taxidis J et al. Voltage imaging reveals hippocampal inhibitory dynamics shaping pyramidal memory-encoding sequences. Nat. Neurosci. 28, 1946–1958 (2025).
- 49. Varga C, Golshani P & Soltesz I Frequency-invariant temporal ordering of interneuronal discharges during hippocampal oscillations in awake mice. Proc. Natl Acad. Sci. USA 109, E2726–E2734 (2012).
- 50. Yoon Y-G et al. Sparse decomposition light-field microscopy for high speed imaging of neuronal activity. Optica 7, 1457–1468 (2020).
- 51. Wang Z et al. A hybrid of light-field and light-sheet imaging to study myocardial function and intracardiac blood flow during zebrafish development. PLoS Comput. Biol. 17, e1009175 (2021).
- 52. Vedula V et al. A method to quantify mechanobiologic forces during zebrafish cardiac development using 4-D light sheet imaging and computational modeling. PLoS Comput. Biol. 13, e1005828 (2017).
- 53. Zhang Y et al. DiLFM: an artifact-suppressed and noise-robust light-field microscopy through dictionary learning. Light Sci. Appl. 10, 152 (2021).
- 54. Pégard NC et al. Compressive light-field microscopy for 3D neural activity recording. Optica 3, 517–524 (2016).
- 55. Lin X, Wu J, Zheng G & Dai Q Camera array based light field microscopy. Biomed. Opt. Express 6, 3179–3189 (2015).
- 56. Dunsby C Optically sectioned imaging by oblique plane microscopy. Opt. Express 16, 20306–20316 (2008).
- 57. Zhang Y et al. Computational optical sectioning with an incoherent multiscale scattering model for light-field microscopy. Nat. Commun. 12, 6391 (2021).
- 58. Zhang Y et al. Multi-focus light-field microscopy for high-speed large-volume imaging. PhotoniX 3, 30 (2022).
- 59. Li C, Chen G, Zhang Y, Wu F & Wang Q Advanced fluorescence imaging technology in the near-infrared-II window for biomedical applications. J. Am. Chem. Soc. 142, 14789–14804 (2020).
- 60. Miao Y et al. Recent progress in fluorescence imaging of the near-infrared II window. ChemBioChem 19, 2522–2541 (2018).
- 61. Biggs DSC & Andrews M Acceleration of iterative image restoration algorithms. Appl. Opt. 36, 1766–1775 (1997).
- 62. Ershov D et al. TrackMate 7: integrating state-of-the-art segmentation algorithms into tracking pipelines. Nat. Methods 19, 829–832 (2022).
- 63. Tinevez J-Y et al. TrackMate: an open and extensible platform for single-particle tracking. Methods 115, 80–90 (2017).
- 64. Tomina Y & Wagenaar D Dual-sided voltage-sensitive dye imaging of leech ganglia. Bio Protoc. 8, e2751 (2018).
- 65. Miller EW et al. Optically monitoring voltage in neurons by photo-induced electron transfer through molecular wires. Proc. Natl Acad. Sci. USA 109, 2114–2119 (2012).
- 66. Wetzel AW et al. Registering large volume serial-section electron microscopy image sets for neural circuit reconstruction using FFT signal whitening. In Proc. 2016 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) 1–10 (IEEE, 2016).
- 67. Thomson DJ Spectrum estimation and harmonic analysis. Proc. IEEE 70, 1055–1096 (1982).
- 68. Thomson DJ Jackknifed error estimates for spectra, coherences, and transfer functions. In Advances in Spectrum Analysis and Array Processing Vol. 1 (ed. Haykin SS) 58–113 (Prentice Hall, 1991).
- 69. Nichol H, Amilhon B, Manseau F, Badrinarayanan S & Williams S Electrophysiological and morphological characterization of Chrna2 cells in the subiculum and CA1 of the hippocampus: an optogenetic investigation. Front. Cell. Neurosci. 12, 32 (2018).
- 70. Wang Z Source data for manuscript ‘Kilohertz volumetric imaging of in-vivo dynamics using squeezed light field microscopy’. Zenodo https://doi.org/10.5281/zenodo.15793563 (2025).
