eLife. 2017 Sep 20;6:e28158. doi: 10.7554/eLife.28158

Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio)

Lin Cong 1, Zeguan Wang 2, Yuming Chai 2, Wei Hang 1, Chunfeng Shang 1, Wenbin Yang 2, Lu Bai 1,3, Jiulin Du 1,3, Kai Wang 1,3, Quan Wen 2
Editor: Ronald L Calabrese
PMCID: PMC5644961  PMID: 28930070

Abstract

The internal brain dynamics that link sensation and action are arguably better studied during natural animal behaviors. Here, we report on a novel volume imaging and 3D tracking technique that monitors whole brain neural activity in freely swimming larval zebrafish (Danio rerio). We demonstrated the capability of our system through functional imaging of neural activity during visually evoked and prey capture behaviors in larval zebrafish.

Research organism: Zebrafish

eLife digest

How do neurons in the brain process information from the senses and drive complex behaviors? This question has fascinated neuroscientists for many years. It is currently not possible to record the electrical activities of all of the 100 billion neurons in a human brain. Yet, in the last decade, it has become possible to genetically engineer some neurons in animals to produce fluorescent reporters that change their brightness in response to brain activity and then to monitor them under a microscope. In small animals such as zebrafish larvae, this method makes it possible to monitor the activities of all the neurons in the brain if the animal’s head is held still. However, many behaviors – for example, catching prey – require movement, and no existing technique could image brain activity in enough detail if the animal’s head was moving.

Cong, Wang, Chai, Hang et al. have now made progress towards this goal by developing a new technique to image neural activity across the whole brain of a zebrafish larva as it swims freely in a small water-filled chamber. The technique uses high-speed cameras and computer software to track the movements of the fish in three dimensions, and then automatically moves the chamber under the microscope such that the animal’s brain is constantly kept in focus. The newly developed microscope can capture changes in neural activity across a large volume all at the same time. It was then further adapted to overcome problems caused by sudden or swift movements, which would normally result in motion blur. With this microscope setup, Cong et al. were able to capture, for the first time, activity from all the neurons in a zebrafish larva’s brain as it pursued and caught its prey.

This technique provides a new window into how brain activity changes when animals are behaving naturally. In the future, this technique could help link the activities of neurons to different behaviors in several popular model organisms including fish, worms and fruit flies.

Introduction

A central goal in systems neuroscience is to understand how distributed neural circuitry dynamics drive animal behaviors. The emerging field of optical neurophysiology allows monitoring (Kerr and Denk, 2008; Dombeck et al., 2007) and manipulating (Wyart et al., 2009; Boyden et al., 2005; Zhang et al., 2007) the activities of defined populations of neurons that express genetically encoded activity indicators (Chen et al., 2013; Tian et al., 2009) and light-activated proteins (Kerr and Denk, 2008; Boyden et al., 2005; Zhang et al., 2007; Luo et al., 2008). Larval zebrafish (Danio rerio) are an attractive model system to investigate the neural correlates of behaviors owing to their small brain size, optical transparency, and rich behavioral repertoire (Friedrich et al., 2010; Ahrens and Engert, 2015). Whole brain imaging of larval zebrafish using light sheet/two-photon microscopy holds considerable potential in creating a comprehensive functional map that links neuronal activities and behaviors (Ahrens et al., 2012; Ahrens et al., 2013; Engert, 2014).

Recording neural activity maps in larval zebrafish has been successfully integrated with the virtual reality paradigm: closed-loop fictive behaviors in immobilized fish can be monitored and controlled via visual feedback that varies according to the electrical output patterns of motor neurons (Ahrens et al., 2012; Engert, 2012). The behavioral repertoire, however, may be further expanded in freely swimming zebrafish, whose behavioral states can be directly observed and whose sensory feedback loops remain largely intact and active. For example, vestibular as well as proprioceptive feedback is likely perturbed in immobilized zebrafish (Engert, 2012; Bianco et al., 2012). The crowning moment during hunting behavior (Bianco et al., 2011; Patterson et al., 2013; Trivedi and Bollmann, 2013) — when a fish succeeds in catching a paramecium — cannot be easily replicated in a virtual reality setting. Therefore, whole brain imaging in freely swimming zebrafish may allow optical interrogation of brain circuits underlying a range of less explored behaviors.

Although whole brain functional imaging methods are available for head-fixed larval zebrafish, imaging a speeding brain imposes many technical challenges. Current studies on freely swimming zebrafish are either limited to non-imaging optical systems (Naumann et al., 2010) or to wide-field imaging at low resolution (Muto et al., 2013). While light sheet microscopy (LSM) has demonstrated entire brain coverage and single neuron resolution in restrained zebrafish (Ahrens et al., 2013), it lacks the speed to follow rapid fish movement. Moreover, in LSM, the sample is illuminated from its side, a configuration that is difficult to integrate with a tracking system. Conventional light field microscopy (LFM) (Broxton et al., 2013; Prevedel et al., 2014) is a promising alternative due to its higher imaging speed; however, its spatial resolution is relatively low. Specialized LFMs that exploit temporal information to monitor neural activity have also been developed recently (Pégard et al., 2016; Nöbauer et al., 2017); however, they rely on the spatiotemporal sparsity of fluorescent signals and cannot be applied to moving animals.

Here, we describe a fast 3D tracking technique and a novel volume imaging method that allow whole brain calcium imaging with high spatial and temporal resolution in freely behaving larval zebrafish. Zebrafish larvae possess extraordinary mobility: they can move at instantaneous velocities up to 50 mm/s (Severi et al., 2014) and accelerations of 1 g (9.8 m/s²). To continuously track fish motion, we developed a high-speed closed-loop system in which (1) customized machine vision software allowed rapid estimation of fish movement in both the x-y and z directions; and (2) feedback control signals drove a high-speed motorized x-y stage (at 300 Hz) and a piezo z stage (at 100 Hz) to retain the entire fish head within the field of view of a high numerical aperture (25×, NA = 1.05) objective.

Larval zebrafish can make sudden and swift movements that easily cause motion blur and severely degrade imaging quality. To overcome this obstacle, we developed a new eXtended field of view LFM (XLFM). The XLFM can image sparse neural activity over the larval zebrafish brain at near single cell resolution and at a volume rate of 77 Hz, with the aid of the genetically encoded calcium indicator GCaMP6f. Furthermore, the implementation of flashed fluorescence excitation (200 μs in duration) allowed blur-free fluorescent images to be captured when a zebrafish moved at speeds up to 10 mm/s. The seamless integration of the tracking and imaging systems made it possible to reveal rich whole brain neural dynamics during natural behavior with unprecedented resolution. We demonstrated the ability of our system during visually evoked and prey capture behaviors in larval zebrafish.

Results

The newly developed XLFM is based on the general principle of light field imaging (Adelson and Wang, 1992) and can acquire 3D information from a single camera frame. XLFM greatly relaxed the constraint imposed by the tradeoff between spatial resolution and imaging volume coverage in conventional LFM. This achievement relies on advances in both optics and computational reconstruction techniques. First, a customized lenslet array (Figure 1a, Figure 1—figure supplement 1) was placed at the rear pupil plane of the imaging objective, instead of at the imaging plane as in LFM. Therefore, in ideal conditions, a 2D spatially invariant point spread function (PSF) could be defined and measured; in practice, the PSF was approximately spatially invariant (see Materials and methods). Second, the aperture size of each micro-lens was decoupled from their interspacing and spatial arrangement, so that both the imaging volume and the resolution could be optimized simultaneously given the limited imaging sensor size. Third, multifocal imaging (Abrahamsson et al., 2013; Perwass and Wietzke, 2012) was introduced to further increase the depth of view by dividing the micro-lens array into two groups whose focal planes were at different axial positions (Figure 1b and c, Figure 1—figure supplements 3 and 4). Fourth, a new computational algorithm based on optical wave theory was developed to reconstruct the entire 3D volume from one image (Figure 1—figure supplement 5) captured by a fast camera (see Materials and methods).

Figure 1. Whole brain imaging of larval zebrafish with XLFM.

(a) Schematic of XLFM. The lenslet array position was conjugated to the rear pupil plane of the imaging objective. The excitation laser (blue) provided uniform illumination across the sample. (b–c) Point sources at two different depths formed, through two different groups of micro-lenses, sharp images on the imaging sensor, with positional information reconstructed from these distinct patterns. (d) Maximum intensity projections (MIPs), over time and space, of time-series volume images of an agarose-restrained larval zebrafish with pan-neuronal nucleus-localized GCaMP6f (huc:h2b-gcamp6f) fluorescence labeling. (e) Normalized neuronal activities of selected neurons exhibited increasing calcium responses after the onset of light stimulation at t = 0. Neurons were ordered by the onset time at which the measured fluorescence signals reached 20% of their maximum. (f) Selected neurons in (e) were color coded based on their response onset time. Scale bar is 100 μm.

Figure 1—figure supplement 1. Customized lenslet array.

The customized lenslet array consisted of 27 customized micro-lenses embedded in an aluminum plate with 27 drilled holes. (a) The micro-lenses were divided into two groups (A and B), illustrated in yellow and green, respectively. (b) Each micro-lens had a diameter of 1.3 mm and a focal length of 26 mm. (c) The aluminum housing plate had a 1.3 mm diameter aperture on one side and a 1 mm diameter aperture on the other. Group A and group B micro-lenses were displaced axially.

Figure 1—figure supplement 2. Experimentally measured PSF of the whole imaging system.

Maximum intensity projections (MIPs) of the measured raw PSF stack. The stack was 2048 × 2048 × 200 voxels with a voxel size of 1.6 μm × 1.6 μm × 2 μm.

Figure 1—figure supplement 3. PSF of Group A micro-lenses: PSF_A.

Maximum intensity projections (MIP) of PSF_A. PSF_A was extracted from experimentally measured PSF (Figure 1—figure supplement 2) according to individual micro-lens positions in group A.

Figure 1—figure supplement 4. PSF of Group B micro-lenses: PSF_B.

Maximum intensity projections (MIP) of PSF_B. PSF_B was extracted from experimentally measured PSFs (Figure 1—figure supplement 2) according to individual micro-lens positions in group B.

Figure 1—figure supplement 5. Example of camera captured raw imaging data of larval zebrafish.

Raw fluorescence imaging data consisted of 27 sub-images of a larval zebrafish formed by 27 micro-lenses. Under the condition that the PSF is spatially invariant, which is satisfied apart from small aberrations, the algorithm can handle overlapping fish images.

Figure 1—figure supplement 6. Characterization of in-plane resolution of micro-lenses.

Fourier transforms of raw images of a 0.5-μm diameter fluorescent particle placed at different locations (x = −400, 0, 400 μm; z = −100, 0, 100 μm) were plotted in log scales. Dashed circles represent in-plane spatial frequency coordinates corresponding to spatial resolutions of 3.2 μm and 4 μm, respectively.

Figure 1—figure supplement 7. Characterization of axial resolution of XLFM afforded by individual micro-lenses.

Characterization of axial resolution using a 0.5 μm diameter bright fluorescent particle. (a) Maximum intensity projection of an image stack consisting of the particle’s fluorescent images captured at different z positions. (b) Analysis of the images formed by micro-lenses 1 and 2, indicated by the sub-regions in (a). The first and second columns are the particle’s fluorescent images captured at different z positions separated by 5 μm. The third column is the sum of columns 1 and 2. The fourth column is the Fourier analysis of column three using the function f(x) = log(ℱ(x)), where ℱ(x) represents the Fourier transform. The fifth column is the deconvolution of column three using the Wiener filtering method. Experimentally measured images of the bead at different z positions (z = −100 μm, z = 0 μm and z = 100 μm) were employed as PSFs to deconvolve the different images (C1, C2 and C3), respectively.

Figure 1—figure supplement 8. Characterization of magnification variation of micro-lenses in XLFM.

Magnifications of 27 micro-lenses were measured at different locations across the field of view. A fluorescent bead originally placed at the center of the field of view (x, y, z = 0) was moved to six different locations (x = 200 μm, 300 μm, 400 μm, −200 μm, −300 μm, −400 μm, y = 0, z = 0). Six classes of the bead’s image shifts, represented by different colors, were measured. Each class consisted of 27 image shifts formed by 27 micro-lenses. Within each class, image shifts were normalized to the one from the first micro-lens. The first 12 micro-lenses and the rest formed two different groups of micro-lenses: group B and group A, consistent with Figure 1—figure supplements 3 and 4. The magnification variation of a single micro-lens across the field of view was small (<0.3%), suggesting that the spatial invariance of individual micro-lens’ PSF was well preserved across the field of view of Ø = 800 μm. The variation across different micro-lenses within one group (A/B) was more evident (~2%), suggesting that the combined PSF from different micro-lenses was not perfectly spatially invariant.

Figure 1—figure supplement 9. Resolution degradation due to focal length variation of micro-lenses.

Maximum intensity projections (MIPs) of a reconstructed fluorescent bead positioned at different locations across the field of view. As the bead moved to the edge of the field of view, the reconstruction became distorted because the magnification variation of the micro-lenses led to spatial variance of total PSF. Scale bars are 10 µm.

Figure 1—figure supplement 10. Characterization of axial resolution of XLFM at low SNR.

Characterization of axial resolution using densely packed fluorescent particles (0.5 μm in diameter) at low SNR. (a) Synthetic XLFM raw image (Materials and methods) formed by two layers of fluorescent particles at different z positions. (b) Axial resolution at different depths, characterized by the minimum axial separation at which two particles could be resolved using the reconstruction algorithm (Materials and methods). (c) Left, reconstructed examples of X-Z projections of two particles located at different z positions (−70 μm, −30 μm, 30 μm, 70 μm) with different axial separations (6 μm, 5 μm, 5 μm, 6 μm); right, extracted intensity profiles of these examples.

Figure 1—figure supplement 11. Dependence of imaging resolution on the sparseness of the sample.

Characterization of the dependence of imaging resolution on the sparseness of the sample using computer simulation. (a) Maximum intensity projections (MIPs) of a numerically simulated (top) and reconstructed (bottom) larval zebrafish with randomly distributed active neurons. Red and green lines indicate positions where simulated (red) and reconstructed (green) cross-sections are compared. We assumed that the total number of neurons in the zebrafish brain is 80,000, and gradually increased the sparseness index ρ, the fraction of neurons activated at a given frame. (b–d) Characterization of the reconstruction results for different ρ. Insets are magnified views of the rectangular regions. Red and green dots are simulated and reconstructed neurons, respectively.

Figure 1—figure supplement 12. Characterization of photobleaching in fluorescence imaging by XLFM.

Photobleaching was characterized by the total fluorescence intensity change of five 5 dpf zebrafish larvae with nucleus-localized GCaMP6f (huc:h2b-gcamp6f). Each fish was embedded in 1% agarose and continuously exposed to 2.5 mW/mm² fluorescence excitation laser (488 nm) illumination. After ~100 min, corresponding to 300,000 volumes at a volume rate of 50 volumes/s, the total fluorescence intensity dropped to half of its starting value. Random spikes corresponded to spontaneous neural activity. Fish were alive and swam normally when they were released from the agarose after imaging.

We first characterized the XLFM by imaging 0.5 μm diameter fluorescent beads. In our design, the system had ~Ø800 μm in-plane coverage (Ø is the diameter of the lateral field of view) and more than 400 μm depth of view, within which an optimal resolution of 3.4 μm × 3.4 μm × 5 μm could be achieved over a depth of 200 μm (Figure 1—figure supplements 6 and 7, Materials and methods). In the current implementation, however, the imaging performance suffered from variation in the focal lengths of the micro-lenses (Figure 1—figure supplement 8), which led to spatial variance of the PSF. As a result, the reconstruction performance and the achievable optimal resolution degraded beyond a volume of Ø500 μm × 100 μm (Figure 1—figure supplements 9 and 10). To minimize the reconstruction time while assuring whole brain coverage (~250 μm thick), all imaging reconstructions were carried out over a volume of Ø800 μm × 400 μm.

We next characterized the imaging performance with more fluorescent light sources distributed within the imaging volume. The achievable optimal resolution depends on the sparseness of the sample, because the information captured by the image sensor is insufficient to assign independent values to all voxels in the entire reconstructed imaging volume. Given the total number of neurons (~80,000 [Hill et al., 2003]) in a larval zebrafish brain, we introduced a sparseness index ρ, defined as the fraction of neurons in the brain active at a given instant, and used numerical simulation and our reconstruction algorithm to characterize the dependence of achievable resolution on ρ. We identified a critical ρc ≈ 0.11, below which active neurons could be resolved at the optimal resolution (Figure 1—figure supplement 11b). As ρ increased, closely clustered neurons could no longer be well resolved (Figure 1—figure supplement 11c–d). Therefore, sparse neural activity is a prerequisite in XLFM for resolving individual neurons at the optimal resolution. Moreover, the above characterization assumed an aberration- and scattering-free environment; the complex optical properties of biological tissue could further degrade the resolution (Ji, 2017).

We demonstrated the capabilities of XLFM by imaging whole brain neuronal activity of a larval zebrafish (5 days post-fertilization [dpf]) at a speed of 77 volumes/s and a relatively low excitation laser exposure of 2.5 mW/mm² (Figure 1d, Video 1). The fluorescence intensity loss due to photobleaching reached ~50% when the zebrafish, which expressed pan-neuronal nucleus-labelled GCaMP6f (huc:h2b-gcamp6f), was imaged continuously for ~100 min and over more than 300,000 volumes (Figure 1—figure supplement 12, Videos 2 and 3). To test whether XLFM could monitor fast changes in neuronal dynamics across the whole brain at high resolution (close to single neuron level), we first presented the larval zebrafish, restrained in low melting point agarose, with visual stimulation (~2.6 s duration). We found that different groups of neurons in the forebrain, midbrain, and hindbrain were activated at different times (Figure 1e–f, Videos 1 and 4), suggesting rapid sensorimotor transformation across different brain regions.

Video 1. Whole brain functional imaging of larval zebrafish under light stimulation.

DOI: 10.7554/eLife.28158.016

Whole brain XLFM imaging of a 5 dpf agarose-embedded larval zebrafish expressing nucleus-localized GCaMP6f (huc:h2b-gcamp6f). Light stimulation was introduced at time point t = 0. Whole brain activity was recorded at 77 volumes/s.

Video 2. Whole brain functional imaging of spontaneous activities of larval zebrafish.

DOI: 10.7554/eLife.28158.017

Whole brain XLFM imaging of a 5 dpf agarose-embedded larval zebrafish expressing nucleus-localized GCaMP6f (huc:h2b-gcamp6f). Spontaneous neural activity was recorded at 0.6 volumes/s.

Video 3. Whole brain functional imaging of spontaneous activities of larval zebrafish.

DOI: 10.7554/eLife.28158.018

Whole brain XLFM imaging of a 5 dpf agarose-embedded larval zebrafish expressing cytoplasm-labeled GCaMP6s (huc:gcamp6s). Spontaneous neural activity was recorded at 0.6 volumes/s.

Video 4. Whole brain functional imaging of larval zebrafish under light stimulation.

DOI: 10.7554/eLife.28158.019

Whole brain XLFM imaging of a 5 dpf agarose-embedded larval zebrafish expressing cytoplasm-labeled GCaMP6s (huc:gcamp6s). Light stimulation was introduced at time point t = 0. Whole brain activity was recorded at 50 volumes/s.

To track freely swimming larval zebrafish, we transferred fish into a water-filled chamber with a glass ceiling and floor. The 20 mm × 20 mm × 0.8 mm-sized chamber was coupled with a piezo actuator and mounted on a high-speed 2D motorized stage (Figure 2). A tracking camera monitored the lateral movement of the fish, and an autofocus camera, which captured light field images, monitored the axial movement of the fish head (Figure 2, Figure 2—figure supplement 1).

Figure 2. System schematics that integrated tracking, whole brain functional imaging, and real time behavioral analysis.

Larval zebrafish swam in a customized chamber with an optically transparent ceiling and floor. The water-filled chamber was mounted on a high-speed three-axis stage (PI M686 and PI P725KHDS). Customized LED rings generated dark field illumination of the zebrafish. The scattered light was collected by four cameras: two cameras below the chamber were used for x-y plane tracking and low magnification real-time (RT) analysis, respectively; two cameras above the chamber and behind the imaging objective were used for z autofocus and high magnification RT analysis. The positional information of the larval zebrafish, acquired from the tracking and autofocus systems, was converted into feedback voltage signals to drive the three-axis stage and compensate for fish movement. The functional imaging system, described in Figure 1, shared the same imaging objective placed above the swimming chamber. The 3D tracking, RT behavioral analysis, and functional imaging systems were synchronized to allow accurate correlation between neural activity and behavioral output.

Figure 2—figure supplement 1. Characterization of the autofocus system.

(a) The autofocus camera behind a one-dimensional lenslet array captured triplet images of the fish head (top). Its autocorrelation function was computed (bottom). (b) The central line profile of the autocorrelation function was extracted, and the inter-fish distance was computed from the local maxima of the autocorrelation function. (c) The axial shift of the fish head, calibrated by moving the piezo at constant intervals, changed linearly (red line) with the inter-fish distance.

Real-time machine vision algorithms allowed quick estimates of lateral (within 1 ms) and axial (~5 ms) head positions (see Materials and methods). The error signals in three dimensions, defined as the difference between the head position and the set point, were calculated (Figure 3a) and converted into analog voltage signals through proportional-integral-derivative (PID) control to drive the motorized stage and the z-piezo scanner. Tracking and autofocusing allowed rapid compensation of 3D fish movement (300 Hz in x and y, 100 Hz in z, Figure 3a) and retention of the fish head within the field of view of the imaging objective.
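
To make the control scheme concrete, below is a minimal MATLAB sketch of the PID update described above. The gains, set point, and simulated head-position measurement are hypothetical placeholders, not the parameters of the actual LabVIEW implementation.

```matlab
% Minimal sketch of the PID update that converts the 3D error signal into
% stage/piezo drive voltages. Gains, set point, and the simulated head
% position are hypothetical placeholders; in the real system the x-y loop
% ran at 300 Hz and the z loop at 100 Hz.
Kp = [1.0 1.0 0.5]; Ki = [0.1 0.1 0.05]; Kd = [0.01 0.01 0.005];
setPoint = [0 0 0];                  % desired head position in the FOV (um)
errInt = zeros(1, 3);                % running integral of the error
prevErr = zeros(1, 3);
dt = 1 / 300;                        % loop period of the x-y update (s)

for step = 1:1000
    headPos = 5 * randn(1, 3);       % stand-in for the machine-vision estimate
    err = setPoint - headPos;        % 3D error signal
    errInt = errInt + err * dt;
    errDer = (err - prevErr) / dt;
    prevErr = err;
    v = Kp .* err + Ki .* errInt + Kd .* errDer;
    % v(1:2) would drive the motorized x-y stage; v(3) the piezo z scanner
end
```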

Figure 3. 3D tracking of larval zebrafish.

(a) Representative time-varying error signals in three dimensions, defined as the difference between the real head position and the set point. The inset provides a magnified view over a short time interval. Lateral movement could be rapidly compensated for within a few milliseconds at instantaneous velocities of up to 10 mm/s. The axial shift was small compared with the depth coverage (200 μm) during whole brain imaging, and thereby had a minor effect on brain activity reconstruction. (b) Tracking images at four time points during prey capture behavior, acquired at low (left) and high (right) magnification simultaneously. Scale bars are 1 mm (left) and 200 μm (right). (c) Kinematics of behavioral features during prey capture. The shaded region marks the beginning and end of the prey capture process.

Our tracking system permitted high-speed and high-resolution recording of larval zebrafish behaviors. With two cameras acquiring head and whole body videos simultaneously (Figure 2, Figure 3b), we recorded and analyzed in real time (see Materials and methods) the kinematics of key features during larval zebrafish prey capture (Figure 3b and c, Video 5 and 6). Consistent with several earlier findings (Bianco et al., 2011; Patterson et al., 2013; Trivedi and Bollmann, 2013), eyes converged rapidly when the fish entered the prey capture state (Figure 3c). Other features that characterized tail and fin movement were also analyzed at high temporal resolution (Figure 3c).

Video 5. Tracking of larval zebrafish during prey capture behavior at low resolution.

DOI: 10.7554/eLife.28158.023

Tracking and real time kinematic analysis of larval zebrafish during prey capture behavior at low resolution. Recorded at 190 frames/s.

Video 6. Tracking of larval zebrafish during prey capture behavior at high resolution.

DOI: 10.7554/eLife.28158.024

Tracking and real time kinematic analysis of larval zebrafish during prey capture behavior at high resolution. Recorded at 160 frames/s.

The integration of the XLFM and the 3D tracking system allowed us to perform whole brain functional imaging of freely behaving larval zebrafish (Figure 2). We first replicated the light-evoked experiment (similar to Figure 1), albeit in a freely behaving zebrafish with pan-neuronal cytoplasm-labeled GCaMP6s (huc:gcamp6s), which exhibited a faster and more prominent calcium response (Video 7). Strong activities were observed in the neuropil of the optic tectum and the midbrain after stimulus onset. The fish tried to avoid strong light exposure and made quick tail movements at ~60 Hz. Whole brain neural activity was monitored continuously during the light-evoked behavior, except for occasional blurred frames due to the limited speed and acceleration of the tracking stage.

Video 7. Whole brain functional imaging of a freely swimming larval zebrafish under light stimulation.

DOI: 10.7554/eLife.28158.025

Whole brain XLFM imaging of a 7 dpf freely swimming larval zebrafish expressing cytoplasm-labeled GCaMP6s (huc:gcamp6s). Light stimulation was introduced at time point t = 0. Whole brain activity was recorded at 77 volumes/s with flashed excitation laser illumination and 0.3 ms exposure time.

Next, we captured whole brain neural activity during the entire prey capture process in freely swimming larval zebrafish (huc:gcamp6s, Video 8). When a paramecium moved into the visual field of the fish, groups of neurons near the contralateral optic tectum of the fish, indicated as group one in Figure 4b, were first activated (t1). The fish then converged its eyes onto the paramecium and changed its heading direction to approach it (t2). Starting from t2, several groups of neurons in the hypothalamus, midbrain, and hindbrain, highlighted as groups two, three, and four in Figure 4b, were activated. It took the fish three attempts (Figure 4c) to catch and eat the paramecium. After the last attempt (t4), neural activity in group one decreased gradually, whereas activities in the other groups of neurons continued to rise and persisted for ~1 s before the calcium signals decreased. The earliest tectal activity (group one) responsible for prey detection found here is consistent with previous studies (Semmelhack et al., 2014; Bianco and Engert, 2015). Moreover, our data revealed interesting neural dynamics arising in other brain regions during and after successful prey capture. We also monitored similar behavior in a zebrafish expressing nucleus-localized GCaMP6f (huc:h2b-gcamp6f), with better resolution but a less prominent calcium response (Video 9).

Figure 4. Whole brain imaging of larval zebrafish during prey capture behavior.

(a) Renderings of whole brain calcium activity at six time points (top) and the corresponding behavioral images (bottom). Features used to quantify behavior were: fish-paramecium azimuth α; convergence angle between the eyes β; head orientation γ; and fish-paramecium distance d. (b) Maximum intensity projections of a zebrafish brain with pan-neuronal cytoplasm-labeled GCaMP6s (huc:gcamp6s). Boundaries of four brain regions are color marked. (c) Neural dynamics inferred from GCaMP6 fluorescence changes in these four regions during the entire prey capture behavior (top) and the kinematics of behavioral features (bottom). Note that between t2 and t4, the fish-paramecium distance d exhibits three abrupt kinks, representing the three attempts to catch the prey.

Video 8. Whole brain functional imaging of a freely swimming larval zebrafish during prey capture behavior.

DOI: 10.7554/eLife.28158.027

Whole brain XLFM imaging of an 11 dpf freely swimming larval zebrafish expressing cytoplasm-labeled GCaMP6s (huc:gcamp6s). The entire process during which the larval zebrafish caught and ate the paramecium was recorded.

Video 9. Whole brain functional imaging of a freely swimming larval zebrafish during prey capture behavior.

DOI: 10.7554/eLife.28158.028

Whole brain XLFM imaging of a 7 dpf freely swimming larval zebrafish expressing nucleus-localized GCaMP6f (huc:h2b-gcamp6f). The entire process during which the larval zebrafish caught and ate the paramecium was recorded.

Discussion

Whole brain imaging in freely behaving animals has previously been reported in Caenorhabditis elegans, by integrating spinning-disk confocal microscopy with a 2D tracking system (Venkatachalam et al., 2016; Nguyen et al., 2016). In the more remote past, Howard Berg pioneered the use of 3D tracking microscopy to study bacterial chemotaxis (Berg, 1971). However, the significant increase in animal size imposes challenges for both tracking and imaging technologies. The XLFM, derived from the general concept of light field imaging (Broxton et al., 2013; Adelson and Wang, 1992; Ng et al., 2005; Levoy et al., 2006), overcomes several critical limitations of conventional LFM and allows imaging volume, resolution, and speed to be optimized simultaneously. Furthermore, it can be perfectly combined with flashed fluorescence excitation to capture blur-free images at high resolution during rapid fish movement. Taken together, we have developed a volume imaging and tracking microscopy system suitable for observing and capturing freely behaving larval zebrafish, which have ~80,000 neurons and can move two orders of magnitude faster than C. elegans.

Tracking and whole brain imaging of naturally behaving zebrafish provide an additional way to study sensorimotor transformation across the brain. A large body of research suggests that sensory information processing depends strongly on the locomotor state of an animal (Niell and Stryker, 2010; Maimon et al., 2010; Chiappe et al., 2010). The ability to sense self-motion, such as through proprioceptive feedback (Pearson, 1995) and efference copy (Bell, 1981), can also profoundly shape the dynamics of neural circuits and perception. To explore brain activity in swimming zebrafish, several studies have utilized an elegant tail-free embedding preparation (Severi et al., 2014; Portugues and Engert, 2011; Portugues et al., 2014), in which only the head of the fish is restrained in agarose for functional imaging. Nevertheless, it would be ideal to have physiological access to all neurons in defined behavioral states, where all sensory feedback loops remain intact and functional. Our XLFM-3D tracking system is one step towards this goal, and could be further exploited to explore the neural basis of more sophisticated natural behaviors, such as prey capture and social interaction, where the integration of multiple sensory feedback signals becomes critical.

In the XLFM, the camera sensor size limited the number of voxels and hence the number of neurons that could be reliably reconstructed. Our simulation suggested that the sparseness of neuronal activities is critical for optimal imaging volume reconstruction. A growing body of experimental data indeed suggests that population neuronal activities are sparse (Hromádka et al., 2008; Buzsáki and Mizuseki, 2014) and sparse representation is useful for efficient neural computation (Olshausen and Field, 1996; Olshausen and Field, 2004). Given the total number of neurons in the larval zebrafish brain, we found that when the fraction of active neurons in a given imaging frame was less than ρc ≈ 0.11, individual neurons could be resolved at optimal resolution. When population neural activity was dense (e.g., neurons have high firing rate and firing patterns have large spatiotemporal correlation), we obtained a coarse-grained neural activity map with reduced resolution.

To retain the fish head within the field of view of the imaging objective, our tracking system compensated for fish movement by continuously adjusting the lateral position of the motorized stage. As a result, the self-motion perceived by the fish was not exactly the same as during natural behavior. The linear acceleration of the swimming fish, encoded by vestibular feedback, was significantly underestimated. The perception of angular acceleration during head reorientation remained largely intact. The relative flow velocity along the fish body, which was invariant upon stage translation, could still be detected by specific hair cells in the lateral line system (Coombs, 2014; Liao, 2010). Therefore, the interpretation of brain activity associated with self-motion must take into account the motion compensation driven by the tracking system.

Both tracking and imaging techniques can be improved in the future. For example, the current axial displacement employed by the piezo scanner had a limited travelling range (400 µm), and our swimming chamber essentially restrained the movement of the zebrafish in two dimensions. This limitation could be relaxed by employing axial translation with larger travelling range and faster dynamics. Furthermore, to avoid any potential disturbance of animal behaviors, it would be ideal if the imaging system moved, instead of the swimming chamber.

In XLFM, the performance degradation caused by focal length variation of the micro-lenses could be resolved by higher precision machining. In addition, the capability of XLFM could be further improved with the aid of technology development in other areas. With more pixels on the imaging sensor, we could resolve more densely labelled samples, and achieve higher spatial resolution without sacrificing imaging volume coverage by introducing more than two different focal planes formed by more groups of micro-lenses. With better imaging objectives that could provide higher numerical aperture and larger field of view at the same time, we could potentially image the entire nervous system of the larval zebrafish with single neuron resolution in all three dimensions. Additionally, the fast imaging speed of XLFM holds the potential for recording electrical activity when high signal-to-noise ratio (SNR) fluorescent voltage sensors become available (St-Pierre et al., 2014). Finally, the illumination-independent characteristic of XLFM is perfectly suitable for recording brain activities from bioluminescent calcium/voltage indicators in a truly natural environment, where light interference arising from fluorescence excitation can be eliminated (Naumann et al., 2010).

Materials and methods

XLFM

The imaging system (Figure 1) was a customized upright microscope. Along the fluorescence excitation light path, a blue laser (Coherent, OBIS 488 nm, 100 mW, USA) was expanded and collimated into a beam ~25 mm in diameter. It was then focused by an achromatic lens (focal length: 125 mm) and reflected by a dichroic mirror (Semrock, Di02-R488−25×36, USA) into the back pupil of the imaging objective (Olympus, XLPLN25XWMP2, 25×, NA 1.05, WD 2 mm, Japan), resulting in an illumination area of ~1.44 mm in diameter near the objective’s focal plane. In the fluorescence imaging light path, the excited fluorescence was collected by the imaging objective and transmitted through the dichroic mirror. A pair of achromatic lenses (focal lengths: F1 = 180 mm and F2 = 160 mm), arranged in a 2F1 + 2F2 configuration, was placed after the objective and dichroic mirror to conjugate the objective’s back pupil onto a customized lenslet array (Figure 1—figure supplement 1). The customized lenslet array was an aluminum plate with 27 holes (1.3 mm diameter aperture on one side and 1 mm diameter aperture on the other, Source code file 1) housing 27 customized micro-lenses (1.3 mm diameter, focal length: 26 mm). The 27 micro-lenses were divided into two groups (Figure 1—figure supplement 1), with an axial displacement of 2.5 mm introduced between them. The 1 mm diameter apertures on the aluminum plate were placed right at the objective’s pupil plane, so that all micro-lenses sampled light at the pupil plane even though they were axially displaced behind the apertures. Due to the blockage of light by the aluminum micro-lens housing, 16% of the light collected by the 1.05 NA imaging objective was effectively captured by the camera. This efficiency is equivalent to using a 0.4 NA imaging objective. Finally, the imaging sensor of an sCMOS camera (Hamamatsu, Orca-Flash 4.0 v2, Japan) was placed at the middle plane between the two focal planes formed by the two different groups of micro-lenses. The total magnification of the imaging system was ~4, so one camera pixel (6.5 µm) corresponded to ~1.6 µm in sample space.

We developed a computational algorithm for 3D volume reconstruction, which required an accurately measured PSF (Figure 1—figure supplement 2). The PSF was measured by recording images of a 500 nm diameter fluorescent bead sitting on a motorized stage under the objective. A stack of 200 images was recorded as the bead was scanned with a step size of 2 µm in the axial direction, from 200 µm below the objective’s focal plane to 200 µm above. Since the images formed by the two different groups of micro-lenses came from different axial locations and had different magnifications, the measured raw PSF data were reorganized into two complementary parts, PSF_A and PSF_B (Figure 1—figure supplements 3 and 4), according to the spatial arrangement of the micro-lenses. We took the PSF_A stack, the PSF_B stack, and a single frame of a raw image (2048 × 2048 pixels) as inputs, and applied a newly developed algorithm to reconstruct the 3D volume.
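
This reorganization amounts to masking the sub-aperture regions belonging to each micro-lens group. A minimal MATLAB sketch, with a reduced stack size and hypothetical lens-center coordinates (the real stack is 2048 × 2048 × 200 with 27 sub-apertures), could look like this:

```matlab
% Sketch: split the measured PSF stack into PSF_A and PSF_B by masking the
% sub-images behind each micro-lens group. Stack dimensions, lens centers,
% and radius are illustrative stand-ins for the measured data.
rawPSF = rand(512, 512, 50, 'single');   % stand-in for the measured stack
centersA = [128 128; 384 128];           % hypothetical group-A centers (px)
centersB = [128 384; 384 384];           % hypothetical group-B centers (px)
r = 60;                                  % sub-image radius (px)

[X, Y] = meshgrid(1:512, 1:512);
maskA = false(512); maskB = false(512);
for i = 1:size(centersA, 1)
    maskA = maskA | (hypot(X - centersA(i,1), Y - centersA(i,2)) <= r);
end
for i = 1:size(centersB, 1)
    maskB = maskB | (hypot(X - centersB(i,1), Y - centersB(i,2)) <= r);
end

PSF_A = rawPSF .* single(maskA);         % implicit expansion over z slices
PSF_B = rawPSF .* single(maskB);
```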

Image reconstruction of XLFM

The reconstruction algorithm was derived from the Richardson-Lucy deconvolution. The goal was to reconstruct a 3D fluorescent object from a 2D image:

Obj(x,y,z)

The algorithm assumes that the real 3D object can be approximated by a discrete number of x-y planes at different z positions:

Obj(x, y, z) ≈ Obj(x, y, z_k),  k = 1, 2, …, n

The numbers and positions of these planes can be arbitrary, yet the Nyquist sampling rate should be chosen to optimize the speed and accuracy of the reconstruction.

As the imaging system consisted of two different groups of micro-lenses (Figure 1—figure supplement 1), their PSFs (Figure 1—figure supplements 3 and 4) each consisted of a stack of planes that were measured at the same chosen axial positions zk:

PSF_A(x, y, z_k) and PSF_B(x, y, z_k)

Although the PSF was measured in imaging space, here we denote x and y as coordinates in object space to follow conventions in optical microscopy. Here and below, the combination of PSF_A and PSF_B is the total PSF.

Additionally, the images formed by two different groups of micro-lenses had different magnifications, which could be determined experimentally. The ratio between two different magnifications can be defined as:

γ = (magnification of group A micro-lenses) / (magnification of group B micro-lenses)

Then, the captured image on the camera can be estimated as:

ImgEst(x, y) = Σ_{k=1}^{n} {Obj_A(x, y, z_k) ⊗ PSF_A(x, y, z_k) + Obj_B(x, y, z_k) ⊗ PSF_B(x, y, z_k)},
where Obj_A(x, y, z_k) = Obj_B(γx, γy, z_k)

The operator ⊗ represents 2D convolution. Here, x and y on the left-hand side of the equation also represent coordinates in object space, so that the 2D convolution was carried out in the same coordinates.

The goal of the algorithm is to estimate the Obj(x,y,zk) from the measured camera frame:

ImgMeas(x, y)

According to the Richardson-Lucy deconvolution algorithm, the iterative reconstruction can be expressed as:

ImgEst^i(x, y) = Σ_{k=1}^{n} {Obj_A^{i−1}(x, y, z_k) ⊗ PSF_A(x, y, z_k) + Obj_B^{i−1}(x, y, z_k) ⊗ PSF_B(x, y, z_k)}
Obj_A^{tmp}(x, y, z_k) = Obj_A^{i−1}(x, y, z_k) · {[ImgMeas(x, y) / ImgEst^i(x, y)] ⊗ PSF_A(x, y, z_k)}
Obj_B^{tmp}(x, y, z_k) = Obj_B^{i−1}(x, y, z_k) · {[ImgMeas(x, y) / ImgEst^i(x, y)] ⊗ PSF_B(x, y, z_k)}
Obj_A^i(x, y, z_k) = w(z_k) · Obj_A^{tmp}(x, y, z_k) + (1 − w(z_k)) · Obj_B^{tmp}(γx, γy, z_k)
Obj_B^i(x, y, z_k) = w(z_k) · Obj_A^{tmp}(x/γ, y/γ, z_k) + (1 − w(z_k)) · Obj_B^{tmp}(x, y, z_k)

Here, 0 ≤ w(z_k) ≤ 1 is a weighting factor at different axial positions. The choice of w(z_k) can be arbitrary. Because the resolutions achieved by the two groups of micro-lenses at a given z position were not the same, the weighting factor can take this effect into consideration by weighing higher quality information more than lower quality information. One simple choice is w(z_k) = 0.5, that is, to weigh information from the two groups of micro-lenses equally.

The starting estimate of the object can be any non-zero value. Near the end of the iterations, Obj_A^i(x, y, z_k) and Obj_B^i(x, y, z_k) become interchangeable, except for their different magnifications. Either can be used as the resulting estimate of the 3D object.

In XLFM, together with its reconstruction algorithm, the diffraction of the 3D light field is properly accounted for by the experimentally measured PSF. The raw imaging data can be fed into the algorithm directly without any preprocessing. Given that the PSF is spatially invariant, which is satisfied apart from small aberrations, the algorithm can handle overlapping fish images (Figure 1—figure supplement 5). As a result, the field of view can be increased significantly. The reconstruction algorithm was typically terminated after 30 iterations, when modifications in the estimated object became very small. The computation can be sped up significantly on a GPU: it took about 4 min to reconstruct one 3D volume using a desktop computer with a GPU (Nvidia Titan X), whereas the reconstruction ran ~20× slower using a CPU (Intel E5-2630 v2) on a Dell desktop. The source code, written in MATLAB, can be found in Source code file 2.
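
For illustration, the iteration above can be sketched in MATLAB as follows. This is a simplified sketch, not the released GPU implementation in Source code file 2: it uses direct conv2 and imresize (Image Processing Toolbox) and assumes pre-measured PSF stacks psfA/psfB and a raw camera frame img.

```matlab
% Simplified sketch of the XLFM Richardson-Lucy iteration. img is the raw
% camera frame (Ny x Nx); psfA/psfB are the measured PSF stacks
% (Ny x Nx x Nz); gamma is the magnification ratio between the two groups.
function objA = xlfm_rl_sketch(img, psfA, psfB, gamma, nIter)
    [ny, nx, nz] = size(psfA);
    objA = ones(ny, nx, nz, 'single');
    objB = ones(ny, nx, nz, 'single');
    w = 0.5;                              % equal weighting of the two groups
    for it = 1:nIter
        est = zeros(ny, nx, 'single');    % forward projection
        for k = 1:nz
            est = est + conv2(objA(:,:,k), psfA(:,:,k), 'same') ...
                      + conv2(objB(:,:,k), psfB(:,:,k), 'same');
        end
        ratio = img ./ max(est, eps('single'));
        for k = 1:nz                      % multiplicative update per plane
            upA = conv2(ratio, rot90(psfA(:,:,k), 2), 'same');
            upB = conv2(ratio, rot90(psfB(:,:,k), 2), 'same');
            tmpA = objA(:,:,k) .* upA;
            tmpB = objB(:,:,k) .* upB;
            objA(:,:,k) = w * tmpA + (1 - w) * rescale_xy(tmpB, gamma);
            objB(:,:,k) = w * rescale_xy(tmpA, 1 / gamma) + (1 - w) * tmpB;
        end
    end
end

function out = rescale_xy(in, s)
    % scale an x-y plane about its center by factor s (bilinear), then
    % crop or pad back to the original size
    sz = size(in);
    tmp = imresize(in, s, 'bilinear');
    out = zeros(sz, 'like', in);
    n = min(sz, size(tmp));
    o = floor((sz - n) / 2); p = floor((size(tmp) - n) / 2);
    out(o(1)+(1:n(1)), o(2)+(1:n(2))) = tmp(p(1)+(1:n(1)), p(2)+(1:n(2)));
end
```

In practice the per-plane convolutions would be replaced by FFT-based convolution on the GPU, which is what makes the ~4 min per volume reconstruction time feasible.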

A 3D deconvolution method has previously been developed for conventional LFM (Broxton et al., 2013). Our method differs from Broxton et al. (2013) in several ways. (1) The optical imaging systems are different. (2) The definitions of the PSFs are different. Ours defines a spatially invariant PSF (see below for detailed characterization), whereas Broxton et al. (2013) defined a spatially variant PSF, leading to increased computational complexity in the deconvolution algorithm. (3) The PSF in Broxton et al. (2013) was simulated based on a model of an ideal imaging system, whereas ours was measured experimentally. Furthermore, our system took practical conditions, such as a non-ideal imaging objective, the actual positions of the micro-lenses, and the spectrum of the received fluorescence signal, into consideration.

Characterization of the spatial invariance of PSF in XLFM

The definition of a 2D spatially invariant PSF fundamentally means that in an ideal optical microscopy system, the resulting image can be described as a 2D convolution between object and PSF. As discussed in the previous section, this operation forms the basis of our reconstruction algorithm.

One of the fundamental differences between XLFM and conventional LFM is the location of the microlens array. In XLFM, the microlens array is placed at the pupil plane and the image sensor is at the imaging plane, whereas in conventional LFM, the microlens array is placed at the image plane and the image sensor is at the pupil plane. It is possible to define a spatially invariant PSF in XLFM because:

  1. Spatially invariant PSFs can be defined for individual sub-imaging systems consisting of different micro-lenses.

  2. A spatially invariant PSF can be defined for the entire imaging system if the magnifications of all sub-imaging systems are the same.

By definition, image formation in an ideal optical imaging system is linear and spatially invariant, so spatially invariant PSFs for the sub-imaging systems consisting of micro-lenses A1 and A2 can be defined as:

Image_A1 = Object ⊗ PSF_A1
Image_A2 = Object ⊗ PSF_A2

where Image_A1 and Image_A2 are the sub-images behind the individual micro-lenses. If we perform the convolution in imaging space, the coordinates of Object(x, y) should be scaled by the magnification factors of the corresponding sub-imaging systems. Now, if the magnifications of the different sub-imaging systems are the same, the summation of all PSFs formed by individual micro-lenses can be defined as a single PSF. In other words,

Image_A = Image_A1 + Image_A2 = Object ⊗ (PSF_A1 + PSF_A2) = Object ⊗ PSF_A
where PSF_A = PSF_A1 + PSF_A2

Experimentally, the small variation in the focal lengths of individual micro-lenses (Figure 1—figure supplement 8) resulted in spatial variance of PSF_A and PSF_B, but this does not affect the image formation theory of XLFM. The spatial variance led to degraded reconstruction performance, as shown in Figure 1—figure supplement 9. This degradation was negligible near the center of the field of view, but became more evident near the edge of the field of view. This is because the PSF was measured near the center of the field of view. The reconstruction algorithm produces 27 estimates of the same object based on the 27 sub-images and, at the same time, tries to combine and align these estimates in the same coordinate system. The position where the PSF is measured determines the origin of this coordinate system. If the magnifications of different micro-lenses are different, the reconstruction will yield an image that is clear near the origin of the coordinates but blurred at the edge, as shown in Figure 1—figure supplement 9.

Resolution characterization of XLFM

Unlike conventional microscopy, where the performance of the imaging system is fully characterized by the PSF at the focal plane, the capability of XLFM is better characterized as a function of positions throughout the imaging volume.

We first characterized the spatial resolution in the x-y plane by analyzing the spatial frequency support of the experimentally measured PSF from individual micro-lenses, using a 0.5 µm diameter fluorescent bead. The optical transfer function (OTF), which is the Fourier transform of the PSF in the x-y plane, extended to a spatial frequency of ~(3.4 µm)⁻¹ (Figure 1—figure supplement 6), a result that agreed well with the designed resolution of 3.4 μm, given that the equivalent NA of an individual micro-lens was 0.075.
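
This frequency-support analysis can be reproduced with a few lines of MATLAB; in the hedged sketch below, a Gaussian spot stands in for a measured sub-image PSF, and the support threshold is illustrative.

```matlab
% Sketch of the in-plane resolution estimate from the OTF support. The
% Gaussian spot is a stand-in for one micro-lens' measured PSF; px is the
% effective pixel size in object space (um).
N = 256; px = 1.6;
[x, y] = meshgrid(-N/2:N/2-1);
psf = exp(-(x.^2 + y.^2) / (2 * 1.2^2));            % stand-in PSF slice
otf = abs(fftshift(fft2(psf)));
otf = otf / max(otf(:));
[fy, fx] = find(otf > 1e-3);                        % support above noise floor
rmax = max(hypot(fx - (N/2 + 1), fy - (N/2 + 1)));  % support radius (bins)
fcut = rmax / (N * px);                             % cutoff frequency (1/um)
fprintf('estimated in-plane resolution ~ %.2f um\n', 1 / fcut);
```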

The lateral resolution, measured from the raw PSF behind individual micro-lenses, was preserved across the designed cylindrical imaging volume of Ø800 μm × 200 μm (Figure 1—figure supplement 6). However, the reconstruction results (Figure 1—figure supplement 9), which used the total PSF (Figure 1—figure supplement 2), exhibited resolution degradation when the fluorescent bead was placed more than 250 μm away from the center (Figure 1—figure supplement 9). This discrepancy resulted from the variation in the focal lengths of the micro-lenses (Figure 1—figure supplement 8), which, in turn, led to spatial variance of the defined PSF_A and PSF_B. In principle, the designed lateral resolution of 3.4 µm could be preserved over a volume of Ø800 μm × 200 μm by reducing the focal length variation to below 0.3%.

We next characterized the axial resolution of the XLFM. The XLFM gained axial resolution by viewing the object from large projection angles, achieved by micro-lenses sitting near the edge of the objective’s back pupil plane. For example, if two point light sources were located at the same position in the x-y plane but were separated by Δz in the axial direction, one micro-lens in the XLFM could capture an image of these two points with a shift between them. The shift is given by:

d = Δz · tan θ

where θ is the inclination angle inferred from the measured PSF (Figure 1—figure supplement 2). If the two points in the image can be resolved, the two points separated by Δz can be resolved by the imaging system. Since a micro-lens sitting in the outer layer of the array offered the largest inclination angle of 40 degrees in our system, the axial resolution dz can be directly calculated as:

dz = dxy / tan(θmax) = 3.4 μm / tan(40°) ≈ 4 μm

The best way to confirm this theoretical estimate would be to image two fluorescent beads with precisely controlled axial separations. However, this is technically very challenging. Instead, we pursued an alternative method that is equivalent to imaging two beads simultaneously:

  1. We took a z stack of images of fluorescent beads, as done in measuring the PSF.

  2. In post processing, we added two images from different z positions to mimic the beads being present simultaneously at two different z positions.

The above method allowed us to experimentally characterize the axial resolution afforded by individual micro-lenses focusing at different z positions. We used a single fluorescent bead (0.5 μm in diameter) with a high SNR (Figure 1—figure supplement 7a), imaged at different axial positions (z = −100 μm, z = 0 μm, and z = 100 μm; Figure 1—figure supplement 7b). The third column is the combination of the images in columns 1 and 2. The capability of resolving the two beads in the third column can be demonstrated by spatial frequency analysis (fourth column in Figure 1—figure supplement 7b). The two line dips in the fourth column, indicating the existence of two beads instead of one rod, confirm this resolving capability, and it becomes even more evident after deconvolution of the raw images (fifth column in Figure 1—figure supplement 7b). Micro-lenses 1 and 2 could resolve two beads separated by 5 μm within the ranges of −100 μm ≤ z ≤ 0 and 0 ≤ z ≤ 100 μm, respectively. In other words, the complementary information provided by the two micro-lenses allowed the system to maintain a high axial resolution of 5 μm across a 200 μm depth.
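
This synthesis is straightforward to emulate. In the hedged MATLAB sketch below, a Gaussian spot stands in for the measured bead image behind one micro-lens, and deconvwnr (Image Processing Toolbox) plays the role of the Wiener deconvolution in column five:

```matlab
% Sketch of the two-bead test: behind one micro-lens, a bead moved by dz
% in z shifts laterally by d = dz*tan(theta). Summing two such images
% mimics two beads present simultaneously; Wiener deconvolution then
% tests whether the pair is resolved. The Gaussian bead image is a
% synthetic stand-in for the measured data.
[x, y] = meshgrid(1:128);
bead = @(cx) exp(-((x - cx).^2 + (y - 64).^2) / (2 * 1.5^2));
dz = 5; px = 1.6;                      % axial separation (um), pixel size (um)
d = dz * tand(40) / px;                % lateral image shift in pixels
two = bead(60) + bead(60 + d);         % mimic beads at z and z + dz
est = deconvwnr(two, bead(60), 0.01);  % deconvolved image shows two peaks
```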

Next, we imaged densely packed fluorescent beads (0.5 μm in diameter) with a low SNR (Figure 1—figure supplement 10a), and used our reconstruction algorithm to determine the minimum axial separation between beads that could be resolved (Figure 1—figure supplement 10b–c). In this case, 5 μm axial resolution could be preserved across a depth of 100 μm. The resolution decayed gradually to ~10 μm at the edge of an imaging volume with a 400 μm axial coverage (Figure 1—figure supplement 10b). We believe that the optimal axial resolution at 5 µm could be achieved over an axial coverage of 200 μm by minimizing micro-lens focal length variation (Figure 1—figure supplement 8).

Finally, we characterized how the imaging performance depended on the sparseness of the sample. Given the total number of neurons (~80,000) in a larval zebrafish brain, we introduced a sparseness index ρ, defined as the fraction of neurons in the brain active in a given imaging frame, and used numerical simulation to characterize the dependence of achievable resolution on ρ. To this end, we simulated a zebrafish larva with uniformly distributed firing neurons (red dots in Figure 1—figure supplement 11a). By convolving the simulated zebrafish with the experimentally measured PSFs (Figure 1—figure supplements 3 and 4), we generated an image that mimicked the raw data captured by the camera. We then reconstructed the simulated neurons, represented by green dots, from this image. When ρ was equal to or less than 0.11, which corresponded to ~9000 neurons activated at a given instant, all active neurons, including those closely clustered, could be reconstructed at the optimal resolution (Figure 1—figure supplement 11b inset). As the sparseness index ρ increased, the resolution degraded: nearby neurons merged laterally and elongated axially (Figure 1—figure supplement 11c–d). In all calculations, Poisson noise was properly considered by assuming that each active neuron emitted 20,000 photons, 2.2% of which were collected by our imaging system.
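
A reduced version of this simulation can be sketched in a few lines of MATLAB. The PSF stacks and volume size below are stand-ins (the full simulation used the experimentally measured PSFs), and poissrnd requires the Statistics and Machine Learning Toolbox:

```matlab
% Reduced sketch of the sparseness simulation: randomly activate a
% fraction rho of 80,000 neurons, project the volume through the two PSF
% stacks, and add Poisson shot noise. Photon numbers follow the text
% (20,000 photons/neuron, 2.2% collected); psfA/psfB are stand-ins here.
rho = 0.11; nTotal = 80000;
nActive = round(rho * nTotal);
vol = zeros(256, 256, 50, 'single');
vol(randperm(numel(vol), nActive)) = 20000 * 0.022;  % photons per neuron
psfA = rand(15, 15, 50, 'single'); psfA = psfA / sum(psfA(:));
psfB = rand(15, 15, 50, 'single'); psfB = psfB / sum(psfB(:));
img = zeros(256, 256, 'single');
for k = 1:size(vol, 3)
    img = img + conv2(vol(:,:,k), psfA(:,:,k), 'same') ...
              + conv2(vol(:,:,k), psfB(:,:,k), 'same');
end
img = poissrnd(double(img));           % Poisson shot noise
```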

In vivo resolution characterization is challenging due to a lack of bright and spot-like features in living animals. Additionally, achievable resolution depends on the optical properties of biological tissues, which can be highly heterogeneous and difficult to infer. The light scattering and aberration induced by biological tissue usually leads to degraded imaging performance (Ji, 2017; Ji et al., 2010; Wang et al., 2014; Wang et al., 2015).

XY tracking system

To compensate for lateral fish movement and retain the entire fish head within the field of view of the high NA objective (25×, NA = 1.05), a high-speed camera was used to capture fish motion (2 ms exposure time, 300 fps or higher, Basler aca2000-340kmNIR, Germany). We developed an FPGA-based RT system in LabVIEW that could rapidly identify the head position by processing the pixel stream data within the Cameralink card before the whole image was transferred to RAM. The error signal between the actual head position and the set point was then fed into the PID controller to generate output signals and control the movement of a high-speed motorized stage (PI M687 ultrasonic linear motor stage, Germany). In the case of large background noise, we alternatively performed conventional image processing in C/C++ (within 1 ms delay). The rate-limiting factor of our lateral tracking system was the response time of the stage (~300 Hz).

Autofocus system

We applied the principle of LFM to determine the axial movement of larval zebrafish. The autofocus camera (100 fps or higher, Basler aca2000-340kmNIR, Germany), behind a one-dimensional micro-lens array, captured triplet images of the fish from different perspectives (Figure 2—figure supplement 1a). Z motion caused an extension or contraction of the distance between the centroids of the fish head in the left and right sub-images; this inter-fish distance (Figure 2—figure supplement 1b) can be accurately computed from the image autocorrelation. The inter-fish distance, multiplied by a pre-factor, can be used to estimate the z position of the fish, as it varies linearly with axial movement (Figure 2—figure supplement 1c). The error signal between the actual axial position of the fish head and the set point was then fed into the PID controller to generate an output signal to drive the piezo-coupled fish container. The feedback control system was written in LabVIEW. The code was further accelerated by parallel processing, and the closed-loop delay was ~5 ms. The rate-limiting factor of the autofocus system was the settling time of the piezo scanner (PI P725KHDS, Germany, 400 μm travelling distance), which was about 10 ms.
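
The readout itself reduces to finding the spacing between autocorrelation peaks. The MATLAB sketch below uses a synthetic triplet image and a hypothetical calibration slope; xcorr2 and findpeaks are from the Signal Processing Toolbox:

```matlab
% Sketch of the autofocus readout: the inter-fish distance is the spacing
% between side peaks of the image autocorrelation, mapped to z through a
% linear calibration. Triplet image and calibration constant are synthetic.
[x, y] = meshgrid(1:512, 1:256);
blob = @(cx) exp(-((x - cx).^2 + (y - 128).^2) / (2 * 15^2));
img = blob(130) + blob(256) + blob(382);   % synthetic triplet, 126 px apart
ac = xcorr2(img - mean(img(:)));           % 2D autocorrelation (511 x 1023)
prof = ac(256, :);                         % central (zero y-lag) line profile
[~, locs] = findpeaks(prof, 'MinPeakProminence', 0.1 * max(prof));
interFish = mean(diff(locs));              % spacing between local maxima
calib = 0.8;                               % hypothetical um-per-pixel slope
zErr = calib * (interFish - 126);          % deviation from the set point
```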

Real-time behavioral analysis

Two high-speed cameras acquired dark-field images at high and low magnification, respectively, and customized machine vision software, written in C/C++ with the aid of the OpenCV library, performed real-time behavioral analysis of freely swimming larval zebrafish. At high magnification, the positions, orientations, and convergence angle of the eyes were computed; at low magnification, the contour of the whole fish, the centerline, the body curvature, and the bending angle of the tail were computed. The high-magnification RT analysis ran at ~120 fps and the low-magnification RT analysis at ~180 fps. The source code can be found in Source code 3.
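As a rough Python/OpenCV illustration of the low-magnification pipeline (the published implementation is in C/C++; see Source code 3), the sketch below thresholds a dark-field frame, takes the largest contour as the fish, and estimates a crude tail-bend angle as the difference between the principal axes of the front and rear halves of the contour. The threshold value and the synthetic test frame are assumptions.

```python
import cv2
import numpy as np

def line_angle(points):
    """Angle (degrees) of the least-squares line through a 2D point set."""
    vx, vy, _, _ = cv2.fitLine(points, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return np.degrees(np.arctan2(vy, vx))

def analyze_frame(frame_gray):
    """Largest bright blob = fish; crude bend angle = difference between the
    principal axes of the two body halves split along the long axis."""
    _, binary = cv2.threshold(frame_gray, 30, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)  # OpenCV >= 4
    fish = max(contours, key=cv2.contourArea)
    pts = fish.reshape(-1, 2).astype(np.float32)
    center = pts.mean(axis=0)
    vx, vy, _, _ = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    along = (pts - center) @ np.array([vx, vy])   # position along the long axis
    bend = line_angle(pts[along > 0]) - line_angle(pts[along <= 0])
    return fish, bend

# synthetic test frame: a bright ellipse standing in for a dark-field fish
frame = np.zeros((480, 640), np.uint8)
cv2.ellipse(frame, (320, 240), (120, 20), 15, 0, 360, 255, -1)
contour, bend_deg = analyze_frame(frame)
print(f"bend angle: {bend_deg:.1f} deg")
```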

Ethics statement and animal handling

All animal handling and care were conducted in strict accordance with the guidelines and regulations set forth by the Institute of Neuroscience, Chinese Academy of Sciences, University of Science and Technology of China (USTC) Animal Resources Center, and University Animal Care and Use Committee. The protocol was approved by the Committee on the Ethics of Animal Experiments of the USTC (permit number: USTCACUC1103013).

All larval zebrafish (huc:h2b-gcamp6f and huc:gcamp6s) were raised in embryo medium at 28.5°C on a 14/10 hr light/dark cycle. Zebrafish were fed paramecia from 4 dpf. For restrained experiments, 4–6 dpf zebrafish were embedded in 1% low-melting-point agarose. For freely moving experiments, 7–11 dpf zebrafish were transferred, together with 10% Hank's solution, into a customized chamber (20 mm in diameter, 0.8 mm in depth), and 10–20 paramecia were added before the chamber was covered with a coverslip.

Neural activity analysis

To extract neural activity induced by visual stimuli (Figure 1e and f), the time series of 3D volume stacks was first converted to a single 3D volume stack in which each voxel represented the variance of that voxel's values over time. Candidate neurons were then extracted by identifying local maxima in this variance volume. A region of interest (ROI) was set according to the empirical size of a neuron, and the voxels around each local maximum were taken to represent a neuron. The fluorescence intensity over each neuron's ROI was integrated and extracted as its neural activity. Relative fluorescence changes ΔF/F0 were normalized to the maximum calcium response ΔFmax/F0 over time, and sorted according to the onset time at which ΔF first reached 20% of ΔFmax after the visual stimulus was presented (Figure 1e and f).
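A minimal sketch of this pipeline (a hedged illustration, not the authors' analysis code), assuming the reconstructed volumes are available as a (T, Z, Y, X) array: the ROI size, neuron count, and baseline handling are placeholders, while the variance volume, local-maxima seeding, and 20%-of-maximum onset criterion follow the description above.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def extract_traces(stacks, roi_radius=2, n_neurons=500):
    """stacks: (T, Z, Y, X) time series of reconstructed volumes.
    Returns candidate neuron coordinates and per-neuron activity traces."""
    var_vol = stacks.var(axis=0)                       # variance over time
    size = 2 * roi_radius + 1
    peaks = (var_vol == maximum_filter(var_vol, size=size))
    coords = np.argwhere(peaks)
    order = np.argsort(var_vol[peaks])[::-1]           # strongest variance first
    coords = coords[order[:n_neurons]]
    # ROI fluorescence: local mean over a cubic ROI (proportional to the sum)
    local = uniform_filter(stacks, size=(1, size, size, size))
    traces = np.stack([local[:, z, y, x] for z, y, x in coords], axis=1)
    return coords, traces                              # traces: (T, n_neurons)

def sort_by_onset(traces, f0, stim_frame, frac=0.2):
    """Normalize dF/F0 by its maximum and sort neurons by the frame at which
    dF first reaches 20% of its maximum after the stimulus."""
    dff = (traces - f0) / f0
    dff_norm = dff / dff.max(axis=0, keepdims=True)
    onsets = (dff_norm[stim_frame:] >= frac).argmax(axis=0)
    return np.argsort(onsets)
```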

Visual stimulation

A short-wavelength LED was optically filtered (short-pass filter with a 450 nm cut-off wavelength; Edmund #84-704) to avoid interference with fluorescence detection, and then focused by a lens into a spot 2–3 mm in diameter that illuminated the zebrafish from the side. The total beam power was roughly 3 mW.

Statement of replicates and repeats in experiments

Each experiment was repeated at least three times under similar experimental conditions. Imaging and video data acquired from behaviorally active larval zebrafish with normal huc:h2b-gcamp6f or huc:gcamp6s expression were used in the main figures and videos.

Acknowledgements

We thank Misha B Ahrens for the zebrafish lines. We thank Yong Jiang, Tongzhou Zhao, WenKai Han, and Shenqi Fan for assistance in building the 3D tracking system, real-time behavioral analysis, and larval zebrafish experiments. We thank Dr Bing Hu and Dr Jie He for their support in zebrafish handling and for helpful discussions.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Kai Wang, Email: wangkai@ion.ac.cn.

Quan Wen, Email: qwen@ustc.edu.cn.

Ronald L Calabrese, Emory University, United States.

Funding Information

This paper was supported by the following grants:

  • Strategic Priority Research Program of the Chinese Academy of Sciences XDB02060012 to Kai Wang.

  • National Science Foundation of China NSFC-31471051 to Quan Wen.

  • China Thousand Talents Program to Kai Wang.

  • CAS Pioneer Hundred Talents Program to Quan Wen.

Additional information

Competing interests

No competing interests declared.

Author contributions

Designed and built the XLFM, Designed and built the autofocus system, Did experiments under the supervision of Chunfeng Shang, Jiulin Du, Kai Wang and Quan Wen, Worked collaboratively to integrate the XLFM and the tracking system.

Designed and built the XLFM, Designed and built the X-Y tracking and the real-time behavioral analysis system, Designed and built the autofocus system, Did experiments under the supervision of Chunfeng Shang, Jiulin Du, Kai Wang and Quan Wen, Worked collaboratively to integrate the XLFM and the tracking system.

Designed and built the X-Y tracking and the real-time behavioral analysis system, Did experiments under the supervision of Chunfeng Shang, Jiulin Du, Kai Wang and Quan Wen, Worked collaboratively to integrate the XLFM and the tracking system.

Designed and built the XLFM, Designed and built the autofocus system, Did experiments under the supervision of Chunfeng Shang, Jiulin Du, Kai Wang and Quan Wen, Worked collaboratively to integrate the XLFM and the tracking system.

Designed zebrafish behavioral experiments, Worked collaboratively to integrate the XLFM and the tracking system.

Designed and built the X-Y tracking and the real-time behavioral analysis system, Worked collaboratively to integrate the XLFM and the tracking system.

Designed and built the XLFM, Worked collaboratively to integrate the XLFM and the tracking system.

Designed zebrafish behavioral experiments, Worked collaboratively to integrate the XLFM and the tracking system.

Conceived the project, Conceived the idea of XLFM, Designed and built the XLFM, Designed and built the autofocus system, Wrote the paper with inputs from all authors, Worked collaboratively to integrate the XLFM and the tracking system.

Conceived the project, Designed and built the X-Y tracking and the real-time behavioral analysis system, Designed zebrafish behavioral experiments, Wrote the paper with inputs from all authors, Worked collaboratively to integrate the XLFM and the tracking system.

Ethics

Animal experimentation: Zebrafish handling procedures were approved by the Institute of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences (permit number: USTCACUC1103013).

Additional files

Source code 1. Computer-aided design (CAD) files of mounting plates for the micro-lens array.
elife-28158-code1.zip (16.4KB, zip)
DOI: 10.7554/eLife.28158.029
Source code 2. Source code for XLFM reconstruction.
elife-28158-code2.zip (3.2KB, zip)
DOI: 10.7554/eLife.28158.030
Source code 3. Source code for real-time behavioral analysis.
elife-28158-code3.zip (77.9MB, zip)
DOI: 10.7554/eLife.28158.031
Supplementary file 1. Acquisition parameters for fluorescence imaging.
elife-28158-fig4.docx (15.2KB, docx)
DOI: 10.7554/eLife.28158.032
Transparent reporting form
DOI: 10.7554/eLife.28158.033

References

  1. Abrahamsson S, Chen J, Hajj B, Stallinga S, Katsov AY, Wisniewski J, Mizuguchi G, Soule P, Mueller F, Dugast Darzacq C, Darzacq X, Wu C, Bargmann CI, Agard DA, Dahan M, Gustafsson MG. Fast multicolor 3D imaging using aberration-corrected multifocus microscopy. Nature Methods. 2013;10:60–63. doi: 10.1038/nmeth.2277. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Adelson EH, Wang JYA. Single lens stereo with a plenoptic camera. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1992;14:99–106. doi: 10.1109/34.121783. [DOI] [Google Scholar]
  3. Ahrens MB, Li JM, Orger MB, Robson DN, Schier AF, Engert F, Portugues R. Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature. 2012;485:471–477. doi: 10.1038/nature11057. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Ahrens MB, Orger MB, Robson DN, Li JM, Keller PJ. Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods. 2013;10:413–420. doi: 10.1038/nmeth.2434. [DOI] [PubMed] [Google Scholar]
  5. Ahrens MB, Engert F. Large-scale imaging in small brains. Current Opinion in Neurobiology. 2015;32:78–86. doi: 10.1016/j.conb.2015.01.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Bell CC. An efference copy which is modified by reafferent input. Science. 1981;214:450–453. doi: 10.1126/science.7291985. [DOI] [PubMed] [Google Scholar]
  7. Berg HC. How to track bacteria. Review of Scientific Instruments. 1971;42:868–871. doi: 10.1063/1.1685246. [DOI] [PubMed] [Google Scholar]
  8. Bianco IH, Kampff AR, Engert F. Prey capture behavior evoked by simple visual stimuli in larval zebrafish. Frontiers in Systems Neuroscience. 2011;5:101. doi: 10.3389/fnsys.2011.00101. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Bianco IH, Ma LH, Schoppik D, Robson DN, Orger MB, Beck JC, Li JM, Schier AF, Engert F, Baker R. The tangential nucleus controls a gravito-inertial vestibulo-ocular reflex. Current Biology. 2012;22:1285–1295. doi: 10.1016/j.cub.2012.05.026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Bianco IH, Engert F. Visuomotor transformations underlying hunting behavior in zebrafish. Current Biology. 2015;25:831–846. doi: 10.1016/j.cub.2015.01.042. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Boyden ES, Zhang F, Bamberg E, Nagel G, Deisseroth K. Millisecond-timescale, genetically targeted optical control of neural activity. Nature Neuroscience. 2005;8:1263–1268. doi: 10.1038/nn1525. [DOI] [PubMed] [Google Scholar]
  12. Broxton M, Grosenick L, Yang S, Cohen N, Andalman A, Deisseroth K, Levoy M. Wave optics theory and 3-D deconvolution for the light field microscope. Optics Express. 2013;21:25418–25439. doi: 10.1364/OE.21.025418. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Buzsáki G, Mizuseki K. The log-dynamic brain: how skewed distributions affect network operations. Nature Reviews Neuroscience. 2014;15:264–278. doi: 10.1038/nrn3687. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Chen TW, Wardill TJ, Sun Y, Pulver SR, Renninger SL, Baohan A, Schreiter ER, Kerr RA, Orger MB, Jayaraman V, Looger LL, Svoboda K, Kim DS. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature. 2013;499:295. doi: 10.1038/nature12354. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Chiappe ME, Seelig JD, Reiser MB, Jayaraman V. Walking modulates speed sensitivity in Drosophila motion vision. Current Biology. 2010;20:1470–1475. doi: 10.1016/j.cub.2010.06.072. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Coombs S. The Lateral Line System. New York: Springer; 2014. p. xiv, 347. [Google Scholar]
  17. Dombeck DA, Khabbaz AN, Collman F, Adelman TL, Tank DW. Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron. 2007;56:43–57. doi: 10.1016/j.neuron.2007.08.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Engert F. Fish in the matrix: motor learning in a virtual world. Frontiers in neural circuits. 2012;6:125. doi: 10.3389/fncir.2012.00125. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Engert F. The big data problem: turning maps into knowledge. Neuron. 2014;83:1246–1248. doi: 10.1016/j.neuron.2014.09.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Friedrich RW, Jacobson GA, Zhu P. Circuit neuroscience in zebrafish. Current Biology. 2010;20:R371–R381. doi: 10.1016/j.cub.2010.02.039. [DOI] [PubMed] [Google Scholar]
  21. Hill A, Howard CV, Strahle U, Cossins A. Neurodevelopmental defects in zebrafish (Danio rerio) at environmentally relevant dioxin (TCDD) concentrations. Toxicological Sciences. 2003;76:392–399. doi: 10.1093/toxsci/kfg241. [DOI] [PubMed] [Google Scholar]
  22. Hromádka T, Deweese MR, Zador AM. Sparse representation of sounds in the unanesthetized auditory cortex. PLoS Biology. 2008;6:e16. doi: 10.1371/journal.pbio.0060016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Ji N, Milkie DE, Betzig E. Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues. Nature Methods. 2010;7:141–147. doi: 10.1038/nmeth.1411. [DOI] [PubMed] [Google Scholar]
  24. Ji N. Adaptive optical fluorescence microscopy. Nature Methods. 2017;14:374–380. doi: 10.1038/nmeth.4218. [DOI] [PubMed] [Google Scholar]
  25. Kerr JN, Denk W. Imaging in vivo: watching the brain in action. Nature Reviews Neuroscience. 2008;9:195–205. doi: 10.1038/nrn2338. [DOI] [PubMed] [Google Scholar]
  26. Levoy M, Ng R, Adams A, Footer M, Horowitz M. Light field microscopy. ACM Transactions on Graphics. 2006;25:924–934. doi: 10.1145/1141911.1141976. [DOI] [Google Scholar]
  27. Liao JC. Organization and physiology of posterior lateral line afferent neurons in larval zebrafish. Biology Letters. 2010;6:402–405. doi: 10.1098/rsbl.2009.0995. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Luo L, Callaway EM, Svoboda K. Genetic dissection of neural circuits. Neuron. 2008;57:634–660. doi: 10.1016/j.neuron.2008.01.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Maimon G, Straw AD, Dickinson MH. Active flight increases the gain of visual motion processing in Drosophila. Nature Neuroscience. 2010;13:393–399. doi: 10.1038/nn.2492. [DOI] [PubMed] [Google Scholar]
  30. Muto A, Ohkura M, Abe G, Nakai J, Kawakami K. Real-time visualization of neuronal activity during perception. Current Biology. 2013;23:307–311. doi: 10.1016/j.cub.2012.12.040. [DOI] [PubMed] [Google Scholar]
  31. Naumann EA, Kampff AR, Prober DA, Schier AF, Engert F. Monitoring neural activity with bioluminescence during natural behavior. Nature Neuroscience. 2010;13:513–520. doi: 10.1038/nn.2518. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Ng R, Levoy M, Bredif M, Duval G, Horowitz M, Hanrahan P. Light Field Photography with a Hand-Held Plenoptic Camera. Stanford, United States: Stanford University; 2005. [Google Scholar]
  33. Nguyen JP, Shipley FB, Linder AN, Plummer GS, Liu M, Setru SU, Shaevitz JW, Leifer AM. Whole-brain calcium imaging with cellular resolution in freely behaving Caenorhabditis elegans. PNAS. 2016;113:E1074–E1081. doi: 10.1073/pnas.1507110112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Niell CM, Stryker MP. Modulation of visual responses by behavioral state in mouse visual cortex. Neuron. 2010;65:472–479. doi: 10.1016/j.neuron.2010.01.033. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Nöbauer T, Skocek O, Pernía-Andrade AJ, Weilguny L, Traub FM, Molodtsov MI, Vaziri A. Video rate volumetric Ca(2+) imaging across cortex using seeded iterative demixing (SID) microscopy. Nature Methods. 2017;14:811–818. doi: 10.1038/nmeth.4341. [DOI] [PubMed] [Google Scholar]
  36. Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996;381:607–609. doi: 10.1038/381607a0. [DOI] [PubMed] [Google Scholar]
  37. Olshausen BA, Field DJ. Sparse coding of sensory inputs. Current Opinion in Neurobiology. 2004;14:481–487. doi: 10.1016/j.conb.2004.07.007. [DOI] [PubMed] [Google Scholar]
  38. Patterson BW, Abraham AO, MacIver MA, McLean DL. Visually guided gradation of prey capture movements in larval zebrafish. Journal of Experimental Biology. 2013;216:3071–3083. doi: 10.1242/jeb.087742. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Pearson KG. Proprioceptive regulation of locomotion. Current Opinion in Neurobiology. 1995;5:786–791. doi: 10.1016/0959-4388(95)80107-3. [DOI] [PubMed] [Google Scholar]
  40. Perwass C, Wietzke L. Single Lens 3D-Camera with Extended Depth-of-Field. Human Vision and Electronic Imaging XVII.2012. [Google Scholar]
  41. Portugues R, Engert F. Adaptive locomotor behavior in larval zebrafish. Frontiers in Systems Neuroscience. 2011;5:72. doi: 10.3389/fnsys.2011.00072. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Portugues R, Feierstein CE, Engert F, Orger MB. Whole-brain activity maps reveal stereotyped, distributed networks for visuomotor behavior. Neuron. 2014;81:1328–1343. doi: 10.1016/j.neuron.2014.01.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Prevedel R, Yoon YG, Hoffmann M, Pak N, Wetzstein G, Kato S, Schrödel T, Raskar R, Zimmer M, Boyden ES, Vaziri A. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nature Methods. 2014;11:727–730. doi: 10.1038/nmeth.2964. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Pégard NC, Liu H-Y, Antipa N, Gerlock M, Adesnik H, Waller L. Compressive light-field microscopy for 3D neural activity recording. Optica. 2016;3:517–524. doi: 10.1364/OPTICA.3.000517. [DOI] [Google Scholar]
  45. Semmelhack JL, Donovan JC, Thiele TR, Kuehn E, Laurell E, Baier H. A dedicated visual pathway for prey detection in larval zebrafish. eLife. 2014;3:e04878. doi: 10.7554/eLife.04878. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Severi KE, Portugues R, Marques JC, O'Malley DM, Orger MB, Engert F. Neural control and modulation of swimming speed in the larval zebrafish. Neuron. 2014;83:692–707. doi: 10.1016/j.neuron.2014.06.032. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. St-Pierre F, Marshall JD, Yang Y, Gong Y, Schnitzer MJ, Lin MZ. High-fidelity optical reporting of neuronal electrical activity with an ultrafast fluorescent voltage sensor. Nature Neuroscience. 2014;17:884–889. doi: 10.1038/nn.3709. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Tian L, Hires SA, Mao T, Huber D, Chiappe ME, Chalasani SH, Petreanu L, Akerboom J, McKinney SA, Schreiter ER, Bargmann CI, Jayaraman V, Svoboda K, Looger LL. Imaging neural activity in worms, flies and mice with improved GCaMP calcium indicators. Nature Methods. 2009;6:875–881. doi: 10.1038/nmeth.1398. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Trivedi CA, Bollmann JH. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture. Frontiers in Neural Circuits. 2013;7:86. doi: 10.3389/fncir.2013.00086. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Venkatachalam V, Ji N, Wang X, Clark C, Mitchell JK, Klein M, Tabone CJ, Florman J, Ji H, Greenwood J, Chisholm AD, Srinivasan J, Alkema M, Zhen M, Samuel AD. Pan-neuronal imaging in roaming Caenorhabditis elegans. PNAS. 2016;113:E1082–E1088. doi: 10.1073/pnas.1507109113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Wang K, Milkie DE, Saxena A, Engerer P, Misgeld T, Bronner ME, Mumm J, Betzig E. Rapid adaptive optical recovery of optimal resolution over large volumes. Nature Methods. 2014;11:625–628. doi: 10.1038/nmeth.2925. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Wang K, Sun W, Richie CT, Harvey BK, Betzig E, Ji N. Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue. Nature Communications. 2015;6:7276. doi: 10.1038/ncomms8276. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Wyart C, Del Bene F, Warp E, Scott EK, Trauner D, Baier H, Isacoff EY. Optogenetic dissection of a behavioural module in the vertebrate spinal cord. Nature. 2009;461:407–410. doi: 10.1038/nature08323. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Zhang F, Wang LP, Brauner M, Liewald JF, Kay K, Watzke N, Wood PG, Bamberg E, Nagel G, Gottschalk A, Deisseroth K. Multimodal fast optical interrogation of neural circuitry. Nature. 2007;446:633–639. doi: 10.1038/nature05744. [DOI] [PubMed] [Google Scholar]

Decision letter

Editor: Ronald L Calabrese, Emory University, United States

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your article "Rapid whole brain imaging of neural activities in freely behaving larval zebrafish" for consideration by eLife. Your article has been reviewed by three peer reviewers, one of whom, Ronald L Calabrese (Reviewer #1), is a member of our Board of Reviewing Editors, and the evaluation has been overseen by Eve Marder as the Senior Editor.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

This is an exciting manuscript, which reports a new development in Light Field Microscopy (LFM). The authors developed an eXtended field of view Light Field Microscope (XLFM) seamlessly integrated with an X-Y tracking system and an autofocus. The XLFM can simultaneously image whole-brain neural activity (over a volume of 800 µm × 800 µm × 200 µm) at ~3.4 µm × 3.4 µm × 5 µm spatial resolution and at a 77 Hz volume rate, with the aid of the genetically encoded calcium indicator GCaMP6f, in a freely moving larval zebrafish during visual stimulation and prey capture. They provide stunning videos and enough processed data to show the value of the new development for imaging activity across the brain during real behavior.

The work is nicely illustrated with exemplar data. This is not a full report on the science behind the experiments illustrated but rather a proof of principle. Exciting science is in the offing but a new technology is showcased, as is appropriate for a Tools and Resources paper.

Essential revisions:

1) We find this technology to be a significant advance. There are several technical issues, however, that must be resolved. Further clarifications to the text are needed about precisely what was done and how it was done. Some claims need to be more carefully worded to recognize the limitations of the technique and recognize other contributions. The writing should be improved. The expert reviews provide a detailed list of all the points that should be considered in revision. Rather than paraphrasing those reports, they are included in full to ensure that the detailed technological issues are well-stated.

2) As stated explicitly in the expert reviews, software and design features must be made fully available to the scientific community with publication.

Reviewer #1:

1) The chamber in which the freely moving larva swims is ONLY 0.8 mm deep. Thus the animal is sandwiched between glass plates with no real ability to move in the z-direction. Essentially, it moves in two dimensions. The authors should address this limitation in their approach.

Reviewer #2:

In Cong et al., two advances are reported. First, a tracking system is introduced capable of keeping a freely swimming larval zebrafish in one location most of the time. Second, a new form of light field microscopy is reported capable of fast 3D imaging. Putting these together constitutes a system for whole-brain imaging in freely swimming zebrafish larvae, with a resolution slightly below single-cell.

In my opinion this is a major advance and I am supportive of publication in eLife with a few improvements.

For background, previous efforts to perform whole-brain imaging in behaving animals consisted of light-sheet imaging (slower than light-field) in head-restrained animals, or light-field imaging (a variant with, I believe, lower spatial resolution) in head-restrained animals. Imaging in freely behaving animals has been done in C. elegans, which move more slowly than zebrafish. Thus, compared to previous work, the advances of this manuscript are considerable. Furthermore, imaging most of the brain in freely swimming animals is really impressive.

Points that should be addressed:

1) The authors claim that the point spread function (PSF) is spatially invariant. This appears to be true if one considers the microscope an ideal optical system, but with non-ideal optics, it's unlikely. Even with a good objective, the entire system contributes to the PSF, and it's unlikely that all microlenses are diffraction limited over the entire field of view. Moreover, differential distortion between the sub-images would cause the full camera PSF to warp as the point source moves in the sample. So if Richardson-Lucy deconvolution only works with a spatially invariant PSF, and the true PSF is not fully spatially invariant, the question arises, What image artifacts do you get?

There may be multiple ways to answer this question. One path might be to (a) move a bead around a few x-y locations, including the extreme ones, and check how spatially invariant the PSF really is (and include the raw PSF volumes in the manuscript, e.g. by measuring them at two extreme points, shifting one by the predicted amount, and overlaying the two in different colors). (b) Next, assuming a spatially invariant PSF derived from one of the bead locations (e.g. the center), reconstruct a bead positioned at various points, including the edges of the volume, and quantify the spread of the point source in the reconstructed volume (this should have high brightness at the original bead position, plus dimmer pixel values spread at other locations, which should be quantified).

2) A follow-up: if the PSF is not fully spatially invariant, what does this mean for the statement that overlapping sub-images are permitted (subsection “Image reconstruction of XLFM”)? My understanding is that the overlap is fine so long as the PSF is fully x-y invariant, and if not, then some artifacts will be introduced. The reasons and assumptions underlying this statement should be clarified in the text.

For clarity, points (1) and (2) are not criticisms of the system, only a call for characterization of the artifacts that the reconstruction algorithm introduces when using simplifying assumptions.

3) The reconstruction algorithm (subsection “Image reconstruction of XLFM”) contains confusing notation (if I understand it correctly). The coordinates (x,y) on the left hand side of Equation 5, refer to image coordinates. But (x,y,z_k) on the right hand side refers to coordinates in the 3D volume. That's confusing, x and y should not be used for both. Moreover, I believe that PSF_A,B(x,y,z_k) are each 2 dimensional objects. So in reality, spelled out with all the indices, using ^superscript for volume coordinates and _subscript for image coordinates, I believe the equation is

ImgEst_(x',y') = sum {ObjA^(x,y,z_k) conv^(x,y,z_k) PSFA^(x,y,z_k)_(x',y') +.…}

Explaining this equation better, e.g. by writing it out as above or stating that Img_est is a 2D object in image coordinates, and PSF_A,B(x,y,z_k) is 2D in image space contingent on x,y,z_k in volume space, will make this section more understandable.

4) Can the lateral resolution be measured instead of estimated?

5) The manuscript says the reconstruction algorithm is based on optical wave theory. What do the authors mean by this? The algorithm is based on the assumption of a spatially invariant PSF and observations of how to apply Richardson-Lucy to sets of microlenses of different focal lengths. Where does this rely on optical wave theory instead of just classical optics?

6) I assume CAD models of the microlens holder and the autofocus system exist; can these files be made available?

7) Most of the code is said to be available (e.g. real-time behavioral analysis and 3D reconstruction), but in some cases it is not mentioned. Can the code for the tracking and autofocus system be made available?

Reviewer #3:

My overall opinion of the manuscript is positive. I think being able to image neuronal activity in a freely moving larval zebrafish is an advance and the current paper serves as a satisfactory proof of principle.

I have some issues regarding the term "whole-brain" and the resolution claimed by the authors. The authors claim, or at least imply, that they can simultaneously (within 1 Orca camera frame, which has a 2048 x 2048 pixel sensor) image 800 x 800 x 200 microns at 3.4 x 3.4 x 5 micron resolution. I find this very difficult to believe. Imaging with this resolution requires imaging 800/(3.4/2) = 470 pixels in both x and y for 200/(5/2) = 80 planes (the factor of 2 arises from Nyquist sampling). Given the sensor dimensions one can fit at most 25 planes into it (5 x 5). The authors show that they are able to use their microlens distribution to image 27. I do not believe there is enough information on the chip to have the claimed resolution. The authors may be able to distinguish 2 fluorescent particles 6 microns apart as in Figure 1—figure supplement 7, but these are still sparse particles appearing in the center of their CMOS chip, not a densely fluorescent tissue such as a pan-neuronally fluorescent larval zebrafish. I think my argument is corroborated by the data shown in Figure 4: this data does not have the resolution claimed and does not show the whole brain of the larva.

The above argument assumes that the fish's head is also perfectly in focus. The z extent of a larval zebrafish head at this age is ~250 microns, which will already be larger than the z field of view. The axial shift shown in Figure 3, typically 20 microns but up to 80, will greatly affect this. The authors mention they use a 500 fps camera for the lateral tracking, but do not mention (or I missed) the speed of their auto-focus camera for axial tracking: how fast is this?

I do not think that these are "deal-breakers", but I think it is important for the authors to rewrite their claims and be explicit about what their system can't and can do (which is a lot). In the Discussion the authors claim to have developed a whole brain imaging setup: I am not sure what this means.

Figure 1 looks at an agarose-restrained larval zebrafish. The authors should be explicit about this in the text and the figure caption (for example, the title of the figure). I do not think that panel d presents maximum intensity projections – they are far too clean for this (the bottom panel looks more like a snapshot of a 3D rendering of the stack). Can the authors correct this in the caption or be explicit about what they are showing?

Closed-loop systems have also been implemented in restrained fish with their tail free to move (e.g. Portugues and Engert 2011), which can remove the issue the authors mention relating to proprioception (Introduction paragraph two). The authors also mention improper vestibular feedback when fish are restrained, but in their setup, due to the closed loop, the fish would also experience reduced vestibular feedback: if the closed loop were perfect the head would not move at all and the same vestibular deficits would be observed. If this is correct then the authors should comment on this.

The authors talk about "visual stimulation". What does this visual stimulation consist of? This should be explained clearly in the Materials and methods.

The claim in Results paragraph seven is a strong one and I am not sure it is fully warranted. Given the resolution and the data shown I would omit the phrase "for the first time" and again, explain carefully what is meant (here and in other places) by the term "whole brain".

[Editors' note: further revisions were requested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio)" for further consideration at eLife. Your revised article has been favorably evaluated by Eve Marder (Senior editor), a Reviewing editor, and two reviewers.

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below:

This is an exciting manuscript, which reports a new development in Light Field Microscopy (LFM). The authors have made a strong effort at revising the manuscript in response to the last review. They have answered forthrightly almost every point raised and the manuscript is much stronger. There is still one major concern that must be addressed, and there are some more minor concerns. The reviews are reproduced in their entirety to aid revision.

Major Concern

1) As brought up by reviewer #3 the definition and technical details of the claimed resolution are not adequately documented and explained. The detailed comments of the reviewer should be fully addressed.

Reviewer #2:

The manuscript has improved and I think that most of our comments have been addressed.

The changes include a new piece of useful information: the non-idealness of the point spread function (PSF) has been measured, and attributed, in part, to small differences in focal lengths of the microlenses.

I think this is great work and the revised version is even better. There are a few final comments I'd like to make:

In the subsection “Image reconstruction of XLFM”: "Furthermore, our system took practical conditions, such as imaging system and light properties, into consideration." What does this mean; can this be explained better?

About the reconstruction algorithm: The authors opt for sticking with the conventions and use the same indices x,y on both sides of the equation. This is ok, but in that case, some explanation should be added. For example,

"The 2D convolution is over x and y" and "Per convention in optics, x,y on both sides represent object space, even though in practice, x,y at the left will refer to image space on the camera chip and x,y on the right to the sample coordinates."

In the same section the authors seem to agree that the statement that the algorithm can deal with overlapping fish images depends on the invariance of the PSF, this information should be explicitly stated, i.e. include a statement like "under the condition that the PSF is spatially invariant, which is satisfied apart from small aberrations, the algorithm can handle overlapping fish images".

Results paragraph one: "Therefore, a spatially invariant point spread function (PSF) of the entire optical imaging system could be defined and measured (Figure 1—figure supplement 2)". Here also it would be good to mention that it's an approximately spatially invariant PSF.

A reference to a recent paper from the Vaziri lab, Nöbauer et al., 2017, should be included. (This is also about light field microscopy, but I want to emphasize that this in no way diminishes the impact of the current manuscript.)

Reviewer #3:

As I mentioned previously, I like the paper, and the authors have addressed most of the minor issues appropriately. There are two points which I am still not sure have been resolved.

1) I do not believe the authors have fully addressed my previous point relating to resolution, which I reproduce below. This argument may be wrong, and I would be very happy if the authors could explain to me where my logic fails.

"I have some issues regarding the term "whole-brain" and the resolution claimed by the authors. The authors claim, or at least imply, that they can simultaneously (within 1 Orca camera frame which has a 2048 x 2048 pixel sensor) image 800 x 800 x 200 microns at 3.4 x 3.4 x 5 micron resolution. I find this very difficult to believe. Imaging with this resolution requires imaging 800/(3.4/2) = 470 pixels in both x and y for 200/(5/2) = 80 planes (the factor of 2 arise from Nyquist sampling). Given the sensor dimensions one can fit at most 25 planes into it (5 x 5). The authors show that they are able to use their microlens distribution to image 27.”

I am not worried about reconstructing the volume; that is not the point here. The issue is that of resolution and discriminability of points. This involves two aspects: the "optical resolution" of the imaging system and the sampling. The Rayleigh resolution criterion states the minimal distance resolvable is half the width of the first diffraction order. This depends on the wavelength of the light and the NA of the system. Ideal sampling is then obtained by using the Nyquist criterion: the inverse image of a pixel should be half the optical resolution of the system, so that there is a "negative dip" in between the two bright pixels.

Using this last sampling definition, the number of resolvable points can be estimated as follows:

- every bright pixel has to be surrounded (in the chip) by a dark set of pixels.

- the number of bright pixels you can have on the chip in this configuration is the number of resolvable points. This is a quarter of the number of pixels on the chip.

- ideally, the inverse image of a pixel should be a quarter of the first diffraction order peak width, for sampling and optical resolution to be perfectly matched.

With a 2048 by 2048 chip, there are at most ~ 1 million resolvable points.

The authors claim they can resolve ~ 2.2 million (800 x 800 x 200 microns at 3.4 x 3.4 x 5 micron resolution).

Deconvolution is a linear process, so XLFM may "shuffle or combine" the intensities of different pixels, but must do so in a linear way.

In addition, as the authors state, the NA of the objective (nominally 1.05) is greatly decreased (to an effective 0.4) because a lot of light is blocked by the array casing. The optical resolution of the system is bound to be significantly affected.

Figure 1—figure supplement 7 does try to address this issue, but I remain unconvinced because column 3 of panel b shows no dip whatsoever in between the two particles. This indicates to me that the authors are not using the Rayleigh criterion. It is definitely possible to separate two identical Gaussians whose positions differ by less than two SDs (a situation which approximates the Rayleigh criterion), but this is not what is usually called resolution.

2) I am not sure I understand the authors' claims of a spatially invariant PSF. In Figure 1—figure supplement 9 they in fact show it is not spatially invariant even within the focal plane (which they attribute to variation in the magnification across microlenses).

[Editors' note: further revisions were requested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio)" for further consideration at eLife. Your revised article has been favorably evaluated by Eve Marder (Senior editor), a Reviewing editor, and one reviewer.

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below. Given that this would be a third round, we must now bring this process to a close with no possibility of further revisions.

Reviewer #3's concerns must be addressed more fully.

I appreciate the authors' comments on the points I raised, but unfortunately I do not feel they have been addressed. I still believe there are two points which need to be addressed: the resolution and the PSF. I think both these points could be addressed by rewriting the manuscript and making claims that are both theoretically sound and supported by the data presented. Given that the manuscript is a proof of concept for an exciting and promising new imaging technique, I think it is fundamental to be precise, as this paper will set the baseline for all future work involving this technique and other freely swimming whole-brain imaging approaches.

1) Resolution.

The authors have not addressed my argument or provided a counterargument. I still believe my argument is correct. I think the authors are under a misconception. As I mentioned in my previous report, the resolution of an optical system depends on only two things:

a) The optical resolution of the system, which results in the Rayleigh criterion.

b) The sampling resolution, which results in the Nyquist criterion.

Specifically, the resolution does not depend on either the reconstruction (as suggested by the authors in answer to my first review) or the sparseness (as the authors argue in the answer to my second and third review).

I repeat the basis of my argument. Given the dimensions of the chip, the number of resolvable points cannot be more than ~(2000/2)^2, which is about 1 million. This assumes that a diffraction-limited point in the sample is imaged onto 1 pixel of the camera chip and that the whole chip is used. The first point is not true for the current system (see Figure 1—figure supplement 9, in which a 0.5 µm bead results in at least 6 reconstructed pixels in the image; in fact, one such diffraction-limited spot will automatically be imaged onto 27 points just by construction). The second point is also not true for this system (see Figure 1—figure supplement 5, in which only about half the chip is used). My estimate is that at most one fifth of this upper limit of the number of points can be resolved, about 200,000, and probably much closer to 100,000.

Paragraph two of the Results is now very confusing. In the second sentence, the authors state that they can image a field of view of ~ 800 x 800 x 200 μm with 3.4 x 3.4 x 5 μm resolution. This corresponds to 2 million resolvable points. This is probably a factor of 20 out. Then later in the paragraph it is claimed that it is 500 x 500 x 100 μm at this resolution. This corresponds to 400,000 points. First of all I think this is still way too high. And secondly I do not understand the contradictory statements in this paragraph. The resolution does not depend on the sparseness. In addition, one can interpolate and reconstruct at whatever desired resolution to make pretty images, so this does not play a role either.

2) PSF.

The authors have developed a deconvolution algorithm that, to my understanding, calculates an effective PSF which they assume to be spatially invariant (in 3 dimensions) and which they then use to deconvolve their data. If this is true I think this is a statement that I am very happy with and support. In traditional light field microscopy it can be shown theoretically that the PSF is not spatially invariant. I cannot see why the setup presented by the authors would result in a spatially invariant PSF. If they claim this it should be shown theoretically (this is a methods paper). I agree that inhomogeneities arising from unequal microlens magnifications will contribute to worsen the spatial invariance of the PSF, as argued in Figure 1—figure supplement 8. But I do not believe this is the only source for them, so I do not agree with the statement in the figure legend. In Figure 1—figure supplement 9 the authors show that the PSF is not spatially invariant. And the data here relate to the focal plane z = 0. How does the PSF look at x = 400 µm and z = 100 µm? It will most likely look "worse" than that in the last panel of this figure. The authors seem to propose that the PSF of their imaging system is spatially invariant until it is not, away from the center of the field of view and the focal plane. This is not a rigorous scientific statement and not one which can be made in a methods paper that proposes a new imaging technique in a highly regarded journal such as eLife. I can definitely stand and support the argument put forward in the first sentence of this paragraph, but if the authors claim spatial invariance they will have to either theoretically prove it or measure the PSF throughout the field of view and show it (and Figure 1—figure supplement 9 contradicts this).

eLife. 2017 Sep 20;6:e28158. doi: 10.7554/eLife.28158.035

Author response


Reviewer #1:

1) The chamber in which the freely moving larva swims is ONLY 0.8 mm deep. Thus the animal is sandwiched between glass plates with no real ability to move in the z-direction. Essentially, it moves in two dimensions. The authors should address this limitation in their approach.

We agree with the reviewer that leaving more space in the z direction would be beneficial. Nevertheless, another important factor we must consider is the tracking speed. The traveling range and the moving speed in the z direction are two parameters that cannot easily be optimized simultaneously in commercially available products. Here, we chose a piezo scanner (Physik Instrumente, P-725KHDS) with a good combination of traveling range (400 μm) and moving speed (330 Hz resonance frequency in the absence of load). In our experimental setup, the larval zebrafish, which is typically ~400 μm thick, swam in an 800 μm deep chamber and thus had 400 μm of free space to explore along the z direction, which could be covered by the 400 μm traveling range of the stage. Prey capture behavior in the larval zebrafish appeared to be normal in such a semi-2D environment.

Tracking in the z direction can be improved in the future. A traveling range beyond 1 mm with sufficiently fast dynamics along the z direction requires a new motion control system; so far, we have not explored this direction. We have added related discussion to the manuscript and addressed the limitation of our approach (Discussion, fifth paragraph).

Reviewer #2:

In Cong et al., two advances are reported. First, a tracking system is introduced capable of keeping a freely swimming larval zebrafish in one location most of the time. Second, a new form of light field microscopy is reported capable of fast 3D imaging. Putting these together constitutes a system for whole-brain imaging in freely swimming zebrafish larvae, with a resolution slightly below single-cell.

In my opinion this is a major advance and I am supportive of publication in eLife with a few improvements.

For background, previous efforts to perform whole-brain imaging in behaving animals consisted of light-sheet imaging (slower than light-field) in head-restrained animals, or light-field imaging (a variant with, I believe, lower spatial resolution) in head-restrained animals. Imaging in freely behaving animals has been done in C. elegans, which move more slowly than zebrafish. Thus, compared to previous work, the advances of this manuscript are considerable. Furthermore, imaging most of the brain in freely swimming animals is really impressive.

Points that should be addressed:

1) The authors claim that the point spread function (PSF) is spatially invariant. This appears to be true if one considers the microscope an ideal optical system, but with non-ideal optics, it's unlikely. Even with a good objective, the entire system contributes to the PSF, and it's unlikely that all microlenses are diffraction limited over the entire field of view. Moreover, differential distortion between the sub-images would cause the full camera PSF to warp as the point source moves in the sample. So if Richardson-Lucy deconvolution only works with a spatially invariant PSF, and the true PSF is not fully spatially invariant, the question arises, What image artifacts do you get?

There may be multiple ways to answer this question. One path might be to (a) move a bead around a few x-y locations, including the extreme ones, and check how spatially invariant the PSF really is (and include the raw PSF volumes in the manuscript, e.g. by measuring them at two extreme points, shifting one by the predicted amount, and overlaying the two in different colors). (b) Next, assuming a spatially invariant PSF derived from one of the bead locations (e.g. the center), reconstruct a bead positioned at various points, including the edges of the volume, and quantify the spread of the point source in the reconstructed volume (this should have high brightness at the original bead position, plus dimmer pixel values spread at other locations, which should be quantified).

We agree with the reviewer that only an ideal imaging system would have a spatially invariant PSF and that it is informative to characterize the imaging performance under realistic conditions. We performed the calibration in the way the reviewer suggested and confirmed that spatial invariance is not perfectly conserved across the entire imaging volume. We further confirmed that the spatial variance of the PSF was mainly due to the focal length variation of the customized micro-lenses (Figure 1—figure supplement 8). We have clarified this point in the Results, second paragraph.

This problem can be solved either by employing more precisely machined micro-lenses or by a generalized reconstruction algorithm. The generalized reconstruction will take 27 PSFs measured from the 27 micro-lenses, instead of 2 PSFs measured from two groups of micro-lenses as in our current implementation. For accurate reconstruction, the magnification factor of each micro-lens should be characterized experimentally to account for the focal length variation. In this way, the reconstruction will be more accurate; however, the increased computational complexity cannot be handled by our current computing platform. Further optimization of XLFM will be the subject of future investigation.

We also reconstructed beads that were placed at extreme points in the imaging volume using a PSF that was measured by placing a fluorescent particle at the center of the field of view. We found that reconstructions resulted in nicely localized spots within a field of view of 500 μm in diameter but were distorted near the edge of the imaging volume. This is apparently due to the spatial variance of the PSF, as suggested by the reviewer.

In summary, we appreciate the reviewer's comments, which helped us identify the focal length variation of the customized micro-lenses as the major contributor to the spatial variance of the PSF. We envision two ways to solve this problem: (1) a more precisely machined micro-lens array; (2) a generalized reconstruction algorithm in which each micro-lens is characterized individually. Both directions will be investigated in the future.

2) A follow-up: if the PSF is not fully spatially invariant, what does this mean for the statement that overlapping sub-images are permitted (subsection “Image reconstruction of XLFM”)? My understanding is that the overlap is fine so long as the PSF is fully x-y invariant, and if not, then some artifacts will be introduced. The reasons and assumptions underlying this statement should be clarified in the text.

We agree that spatial invariance is important for correct reconstruction over the overlapping regions. However, our current implementation cannot produce a perfectly spatially invariant PSF. If the sample is not sparse, the spatially variant PSF could lead to reconstruction artifacts, which are not easy to quantify. The best way to address this would be to build the next-generation XLFM and make a direct comparison with the current one.

For clarity, points (1) and (2) are not criticisms of the system, only a call for characterization of the artifacts that the reconstruction algorithm introduces when using simplifying assumptions.

3) The reconstruction algorithm (subsection “Image reconstruction of XLFM”) contains confusing notation (if I understand it correctly). The coordinates (x,y) on the left hand side of Equation 5, refer to image coordinates. But (x,y,z_k) on the right hand side refers to coordinates in the 3D volume. That's confusing, x and y should not be used for both. Moreover, I believe that PSF_A,B(x,y,z_k) are each 2 dimensional objects. So in reality, spelled out with all the indices, using ^superscript for volume coordinates and _subscript for image coordinates, I believe the equation is

ImgEst_(x',y') = sum {ObjA^(x,y,z_k) conv^(x,y,z_k) PSFA^(x,y,z_k)_(x',y') +.…}

Explaining this equation better, e.g. by writing it out as above or stating that Img_est is a 2D object in image coordinates, and PSF_A,B(x,y,z_k) is 2D in image space contingent on x,y,z_k in volume space, will make this section more understandable.

Thanks for the comment. We believe that the reviewer has fully understood the equation. The notation may appear confusing, but we would like to follow the convention in the field of optical imaging: the PSF is conventionally defined in object space even though it is actually measured in image space, so that the size of the PSF reflects the imaging resolution. The image captured on the camera is also conventionally transformed and interpreted in object space, because the actual pixel size and the magnification factor are not important in the above equations. In this way, image formation can be conveniently written as the convolution between the object and the PSF in object space.

4) Can the lateral resolution be measured instead of estimated?

We have added experimental characterization (Figure 1—figure supplement 6).

5) The manuscript says the reconstruction algorithm is based on optical wave theory. What do the authors mean by this? The algorithm is based on the assumption of a spatially invariant PSF and observations of how to apply Richardson-Lucy to sets of microlenses of different focal lengths. Where does this rely on optical wave theory instead of just classical optics?

The optical wave theory we refer to here is to be distinguished from classical optics, which is often called light ray optics. In conventional light field microscopy (LFM), the reconstruction is based on the light ray assumption, which is implied in the name "light field". However, the light ray assumption cannot account for the limitations on resolution and depth of field in LFM. To take these effects into account accurately, optical diffraction described by wave theory needs to be incorporated into the reconstruction algorithm; Broxton et al. (2013) introduced a way of doing so in conventional LFM. In XLFM, we defined a spatially invariant PSF (assuming an ideal imaging system), which reflects the resolution limit and the beam diffraction effect. For this reason, we claimed that the XLFM reconstruction algorithm is based on optical wave theory, to distinguish it from the conventional LFM reconstruction algorithm.

6) I assume CAD models of the microlens holder and the autofocus system exist; can these files be made available?

We have added CAD models of the microlens holder for fluorescence imaging and autofocus.

7) Most of the code is said to be available (e.g. real-time behavioral analysis and 3D reconstruction), but in some cases it is not mentioned. Can the code for the tracking and autofocus system be made available?

The code is available in the supplementary software.

Reviewer #3:

My overall opinion of the manuscript is positive. I think being able to image neuronal activity in a freely moving larval zebrafish is an advance and the current paper serves as a satisfactory proof of principle.

I have some issues regarding the term "whole-brain" and the resolution claimed by the authors. The authors claim, or at least imply, that they can simultaneously (within 1 Orca camera frame, which has a 2048 x 2048 pixel sensor) image 800 x 800 x 200 microns at 3.4 x 3.4 x 5 micron resolution. I find this very difficult to believe. Imaging with this resolution requires imaging 800/(3.4/2) = 470 pixels in both x and y for 200/(5/2) = 80 planes (the factor of 2 arises from Nyquist sampling). Given the sensor dimensions one can fit at most 25 planes into it (5 x 5). The authors show that they are able to use their microlens distribution to image 27. I do not believe there is enough information on the chip to have the claimed resolution. The authors may be able to distinguish 2 fluorescent particles 6 microns apart as in Figure 1—figure supplement 7, but these are still sparse particles appearing in the center of their CMOS chip, not a densely fluorescent tissue such as a pan-neuronally fluorescent larval zebrafish. I think my argument is corroborated by the data shown in Figure 4: this data does not have the resolution claimed and does not show the whole brain of the larva.

The above argument assumes that the fish's head is also perfectly in focus. The z extent of a larval zebrafish head at this age is ~250 microns, which will already be larger than the z field of view. The axial shift shown in Figure 3, typically 20 microns but up to 80, will greatly affect this. The authors mention they use a 500 fps camera for the lateral tracking, but do not mention (or I missed) the speed of their auto-focus camera for axial tracking: how fast is this?

1) XLFM can cover a volume larger than 200 μm in the z direction. All data shown in the manuscript were reconstructed over a volume of 800 μm × 800 μm × 400 μm and were cropped afterwards to remove empty space for better display. As shown in the reconstruction algorithm, there was no constraint on the number of z planes to be reconstructed. Since we measured the PSF over 200 planes with 2 μm interspacing, the reconstruction was done over the same z range of 400 μm. Therefore, the whole brain of the larval zebrafish was indeed covered by XLFM. We have clarified this point in the Results, second paragraph.

2) We appreciate the reviewer's comment on the imaging resolution. Indeed, the sparseness of neuronal activities (or sparsely labeled neurons) is a prerequisite for obtaining both high resolution and a large field of view. The relationship between resolution and sparseness of neural activity was discussed, added to the manuscript, and summarized in Figure 1—figure supplement 11. In short, we introduced a sparseness index ρ, defined as the fraction of neurons that are activated at a given instant. Given the total number of neurons (~80,000) in the larval zebrafish brain, we performed computer simulation and identified a critical ρ = 0.11, below which neuronal activities can be resolved at optimal resolution (Figure 1—figure supplement 11B). When population activity is denser, XLFM would obtain a more coarse-grained neural activity map with reduced resolution (Figure 1—figure supplement 11C–D). We have clarified this point in the Results, third paragraph, and in the Discussion, third paragraph.

3) Thanks for pointing out the error. The correct acquisition parameters for the lateral tracking camera are 2 ms exposure time and 300 Hz (or higher) frame rate, which is consistent with the claimed lateral tracking update rate of 300 Hz. The axial tracking camera ran at 10 ms exposure time and 100 Hz frame rate, which is consistent with the claimed axial tracking update rate of 100 Hz. We have corrected this and included more information in the manuscript; see Materials and methods.

I do not think that these are "deal-breakers", but I think it is important for the authors to rewrite their claims and be explicit about what their system can't and can do (which is a lot). In the Discussion the authors claim to have developed a whole brain imaging setup: I am not sure what this means.

Figure 1 looks at an agarose-restrained larval zebrafish. The authors should be explicit about this in the text and the figure caption (for example, in the name of the figure). I do not think that panel d presents maximum intensity projections – they are far too clean for this (the bottom panel looks more like a snapshot of a 3D rendering of the stack). Can the authors correct this in the caption or be explicit about what they are showing?

Thanks for the comment. We explicitly mentioned the agarose-restrained condition in the main text as well as in the Figure 1 caption. Because the main theme of Figure 1 is to introduce the principle of XLFM and to demonstrate its capabilities for volume imaging in larval zebrafish, we think it might be better to keep the figure title unchanged.

Panel D in Figure 1 shows maximum intensity projections of time-series 3D volume images. In other words, we performed maximum intensity projections over space (top, top view; bottom, side view) and time. We have clarified this point in the figure caption accordingly.

Closed-loop systems have also been implemented in restrained fish with their tail free to move (e.g. Portugues and Engert 2011), which can remove the issue the authors mention relating to proprioception (Introduction paragraph two). The authors also mention improper vestibular feedback when fish are restrained, but in their setup, due to the closed loop, the fish would also experience reduced vestibular feedback: if the closed loop were perfect, the head would not move at all and the same vestibular deficits would be observed. If this is correct then the authors should comment on this.

We agree with the reviewer that the head-restrained and tail-free setting is a simple and elegant behavioral paradigm for incorporating multiple sensory cues, such as proprioception, and for studying sensorimotor transformation in larval zebrafish. We have included references to related works and added discussion on these in the manuscript (see Discussion, second paragraph).

We also agree that in our closed-loop setup, any interpretation of behaviors and neural activity associated with self-motion must take into account motion compensation driven by the tracking system. Indeed, the perception of linear acceleration, encoded by the vestibular feedback, would be significantly reduced. The perceptions of angular acceleration and the relative velocity of water flow may remain intact. We have added one paragraph discussing the limitations of our approach and future improvements of the tracking system. See Discussion, fourth paragraph.

The authors talk about "visual stimulation". What does this visual stimulation consist of? This should be explained clearly in the Materials and methods.

Thanks! Detailed information about visual stimulation has been added in the Materials and methods section.

The claim in Results paragraph seven is a strong one and I am not sure it is fully warranted. Given the resolution and the data shown I would omit the phrase "for the first time" and again, explain carefully what is meant (here and in other places) by the term "whole brain".

Thanks! We have deleted this phrase. We agree that our writing may be misleading. “Whole brain” means that our imaging volume reconstruction can cover the entire larval zebrafish head. However, to achieve close to single neuron resolution, the population neural activity must be sparse. We have clarified our essential claims in the introduction of XLFM.

[Editors' note: further revisions were requested prior to acceptance, as described below.]

Reviewer #2:

The manuscript has improved and I think that most of our comments have been addressed.

The changes include a new piece of useful information: the non-idealness of the point spread function (PSF) has been measured, and attributed, in part, to small differences in focal lengths of the microlenses.

I think this is great work and the revised version is even better. There are a few final comments I'd like to make:

In the subsection “Image reconstruction of XLFM”, "Furthermore, our system took practical conditions, such as imaging system and light properties, into consideration." What does this mean; can this be explained better?

Thanks for the comment. Compared with a theoretically derived PSF, the experimentally measured one accounts for practical conditions, such as a non-ideal imaging objective, the actual positions of individual micro-lenses, and the actual spectrum of the received fluorescence signal. We have added more description in the manuscript.

About the reconstruction algorithm: The authors opt for sticking with the conventions and use the same indices x,y on both sides of the equation. This is ok, but in that case, some explanation should be added. For example,

"The 2D convolution is over x and y" and "Per convention in optics, x,y on both sides represent object space, even though in practice, x,y at the left will refer to image space on the camera chip and x,y on the right to the sample coordinates."

Thanks for the comment. We have modified the manuscript accordingly.

In the same section the authors seem to agree that the statement that the algorithm can deal with overlapping fish images depends on the invariance of the PSF; this information should be explicitly stated, i.e. include a statement like "under the condition that the PSF is spatially invariant, which is satisfied apart from small aberrations, the algorithm can handle overlapping fish images".

Thanks for the comment. We have modified the manuscript accordingly.

Results paragraph one: "Therefore, a spatially invariant point spread function (PSF) of the entire optical imaging system could be defined and measured (Figure 1—figure supplement 2)". Here also it would be good to mention that it's an approximately spatially invariant PSF.

Thanks for the comment. We have modified the manuscript accordingly.

A reference to a recent paper from the Vaziri lab, Nobauer et al., 2017, should be included. (This is also about light field microscopy, but I want to emphasize that this does in no way diminish the impact of the current manuscript.)

Thanks for the comment. We have updated the manuscript.

Reviewer #3:

As I mentioned previously, I like the paper, and the authors have addressed most of the minor issues appropriately. There are two points which I am still not sure have been resolved.

1) I do not believe the authors have fully addressed my previous point relating to resolution, which I reproduce below. This argument may be wrong, and I would be very happy if the authors could explain to me where my logic fails.

"I have some issues regarding the term "whole-brain" and the resolution claimed by the authors. The authors claim, or at least imply, that they can simultaneously (within 1 Orca camera frame which has a 2048 x 2048 pixel sensor) image 800 x 800 x 200 microns at 3.4 x 3.4 x 5 micron resolution. I find this very difficult to believe. Imaging with this resolution requires imaging 800/(3.4/2) = 470 pixels in both x and y for 200/(5/2) = 80 planes (the factor of 2 arise from Nyquist sampling). Given the sensor dimensions one can fit at most 25 planes into it (5 x 5). The authors show that they are able to use their microlens distribution to image 27."

I am not worried about reconstructing the volume; that is not the point here. The issue is that of resolution and discriminability of points. This involves two aspects: the "optical resolution" of the imaging system and the sampling. The Rayleigh resolution criterion states that the minimal resolvable distance is half the width of the first diffraction order. This depends on the wavelength of the light and the NA of the system. Ideal sampling is then obtained by using the Nyquist criterion: the inverse image of a pixel should be half the optical resolution of the system, so that there is a "negative dip" in between two bright pixels.

Using this last sampling definition, the number of resolvable points can be estimated as follows:

- every bright pixel has to be surrounded (in the chip) by a dark set of pixels.

- the number of bright pixels you can have on the chip in this configuration is the number of resolvable points. This is a quarter of the number of pixels on the chip.

- ideally, the inverse image of a pixel should be a quarter of the first diffraction order peak width, for sampling and optical resolution to be perfectly matched.

With a 2048 by 2048 chip, there are at most ~ 1 million resolvable points.

The authors claim they can resolve ~ 2.2 million (800 x 800 x 200 microns at 3.4 x 3.4 x 5 micron resolution).
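
To make this counting concrete (a back-of-envelope Python calculation; the division by 4 is the bright-pixel-plus-dark-surround argument above):

    chip_pixels = 2048 * 2048                 # Orca sensor
    resolvable = chip_pixels // 4             # ~1.05 million points at most

    claimed = (800 / 3.4) ** 2 * (200 / 5)    # ~2.2 million claimed points
    print(resolvable, round(claimed))         # 1048576 vs 2214533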

Deconvolution is a linear process, so XLFM may "shuffle or combine" the intensities of different pixels, but must do so in a linear way.

In addition, as the authors state, the NA of the objective (nominally 1.05) is greatly decreased (to an effective 0.4) because a lot of light is blocked by the array casing. The optical resolution of the system is bound to be significantly affected.

Figure 1—figure supplement 7 does try to address this issue, but I remain unconvinced because column 3 of panel b shows no dip whatsoever in between the two particles. This indicates to me that the authors are not using the Rayleigh criterion. It is definitely possible to separate two identical Gaussians whose positions differ by less than two SDs (a situation which approximates the Rayleigh criterion), but this is not what is usually called resolution.

Thanks for the comment.

We agree with the reviewer's comment that the total number of resolvable points is limited by the total number of pixels on the image sensor. We thus responded that sparsity is a prerequisite for obtaining both high resolution and a large field of view. In simulations, we found that when the sparseness index ρ, defined as the fraction of the total neuron population (~80,000 in larval zebrafish) activated at a given instant, was less than 0.11, corresponding to 8,800 neurons distributed over a larval zebrafish brain, individual neurons could be resolved with an optimal resolution of 3.4 μm x 3.4 μm x 5 μm, as shown in Figure 1—figure supplement 11.

In the extreme case when there were only two particles separated in the z direction, as shown in Figure 1—figure supplement 7B, 27 sub-images of the same two particles were captured, which provided partially redundant information about these two particles. As a result, these two particles could be resolved at optimal resolution. The theoretical analysis of this optimal resolution is provided in the "Resolution characterization of XLFM" section of the Materials and methods. It was also experimentally confirmed, as shown in Figure 1—figure supplements 6 and 7.

To respond to the reviewer's concern about Figure 1—figure supplement 7, we characterized resolution by spatial frequency analysis (Figure 1—figure supplement 7B, column 4), which is a precise way to characterize resolution. But we agree that it would be more convincing to see a dip between the two particles. Because higher spatial frequency components have a much lower signal-to-noise ratio than lower spatial frequency components, as shown in Figure 1—figure supplement 7B column 4, it may not be easy to see a dip directly in raw images. Therefore, raw images are conventionally deconvolved to assist visualization. In the updated Figure 1—figure supplement 7B, we added an extra column to show the results after deconvolution using the linear Wiener filtering method. The expected dips between the two particles are much more evident in the deconvolved images than before deconvolution.
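
As an illustration of this Wiener step, consider a 1D toy example in Python (a Gaussian PSF and two points 6 units apart; this is a sketch of the method, not the exact code used for the figure):

    import numpy as np

    def wiener_deconvolve(raw, psf, nsr=1e-2):
        """Linear Wiener filter: conj(H) / (|H|^2 + NSR) in Fourier space."""
        H = np.fft.fft(np.fft.ifftshift(psf))
        G = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft(np.fft.fft(raw) * G))

    # Two points 6 units apart, blurred enough that the raw image shows no dip.
    x = np.arange(-64, 64)
    obj = np.zeros(128)
    obj[64 - 3] = obj[64 + 3] = 1.0
    psf = np.exp(-x**2 / (2 * 3.0**2))
    psf /= psf.sum()
    raw = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(np.fft.ifftshift(psf))))
    rec = wiener_deconvolve(raw, psf)
    # The dip between the two peaks is far deeper in `rec` than in `raw`.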

Our reconstruction method, which was developed from Richardson-Lucy deconvolution, inherited the property that there is no limit on the number of voxels to be reconstructed: all voxels are estimated with maximum likelihood. However, as the reviewer correctly pointed out, there is no information gain during the reconstruction. As a result, although we could reconstruct 2.2 million voxels of 1.7 μm × 1.7 μm × 2 μm over the imaging volume of Ø 800 μm × 200 μm, these voxels were not completely independent variables. However, when the sample was sparse, the small fraction of nonzero voxels could be treated independently, and by using Nyquist sampling and keeping the voxels small, we could achieve the optimal resolution of 3.4 μm x 3.4 μm x 5 μm. An extreme case was when there were only two particles separated in the z direction, as shown in Figure 1—figure supplement 7B and discussed above.
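
For reference, the core Richardson-Lucy update that our method builds on can be sketched as follows (a toy 2D Python version with a single spatially invariant PSF zero-padded to the image size; the actual XLFM reconstruction, which handles the measured multi-lens PSF, is provided as Source code 2):

    import numpy as np

    def fft_convolve(a, k):
        """Circular 2D convolution via FFT (kernel assumed centered)."""
        K = np.fft.fft2(np.fft.ifftshift(k))
        return np.real(np.fft.ifft2(np.fft.fft2(a) * K))

    def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
        """Multiplicative maximum-likelihood updates under Poisson noise.

        The update runs over every voxel of the estimate; nothing in the
        algorithm itself caps how many voxels are reconstructed -- the
        information content of the data does.
        """
        psf_flipped = psf[::-1, ::-1]                  # adjoint (mirrored) PSF
        estimate = np.full(image.shape, image.mean())  # flat initial guess
        for _ in range(n_iter):
            blurred = fft_convolve(estimate, psf)
            estimate *= fft_convolve(image / (blurred + eps), psf_flipped)
        return estimate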

When the sample was dense, nearby voxels were no longer independent because the captured information was insufficient to assign an independent value to each voxel. This resulted in degraded resolution, as shown in Figure 1—figure supplement 11. Together, given the limited number of pixels on the image sensor implemented in our setup, the optimal resolution could only be achieved when the sample was sparse. We apologize for possibly misleading statements. We have clarified this point in the main text.

Throughout the manuscript, we used the Abbe limit d = 0.5λ/NA for resolution characterization, which differs slightly from the Rayleigh criterion d = 0.61λ/NA in an optical system with circular apertures.

The effective NA of 0.4 mentioned in the manuscript is defined based on light collection efficiency: the light collection efficiency of this system is equivalent to that of a system using an objective of 0.4 NA. This collection efficiency could be improved by using more micro-lenses in the array, but doing so also requires more camera pixels to ensure a sufficient field of view for each micro-lens. This collection efficiency argument is not applicable to resolution comparisons.

2) I am not sure I understand the authors' claims of a spatially invariant PSF. In Figure 1—figure supplement 9 they in fact show it is not spatially invariant even within the focal plane (which they attribute to variation in the magnification across microlenses).

Our statement is that a spatially invariant PSF can be defined if the optical system is ideal. Our current implementation, however, is not perfect, as shown in Figure 1—figure supplement 9. We have modified the text to clarify this point.

[Editors' note: further revisions were requested prior to acceptance, as described below.]

Reviewer #3's concerns must be addressed more fully.

I appreciate the authors' comments to the points I raised, but unfortunately I do not feel they have been addressed. I still believe there are two points which need to be addressed: the resolution and the PSF. I think both these points could be addressed by rewriting the manuscript and making claims that are both theoretically sound and supported by the data presented. Given that the manuscript is a proof of concept about an exciting and promising new imaging technique I think it is fundamental to be precise, as this paper will set the baseline for all future work involving this technique and other freely swimming whole brain imaging approaches.

1) Resolution.

The authors have not addressed my argument or provided a counterargument. I still believe my argument is correct. I think the authors are under a misconception. As I mentioned in my previous report, the resolution of an optical system depends on only two things:

a) The optical resolution of the system, which results in the Rayleigh criterion.

b) The sampling resolution, which results in the Nyquist criterion.

Specifically, the resolution does not depend on either the reconstruction (as suggested by the authors in answer to my first review) or the sparseness (as the authors argue in the answer to my second and third review).

I repeat the basis of my argument. Given the dimensions of the chip, the number of resolvable points cannot be more than ~(2000/2)^2, which is about 1 million. This assumes that a diffraction-limited point in the sample is imaged onto 1 pixel of the camera chip and that the whole chip is used. The first point is not true for the current system (see Figure 1—figure supplement 9, in which a 0.5 μm bead results in at least 6 reconstructed pixels in the image; in fact, one such diffraction-limited spot will automatically be imaged onto 27 points just by construction). The second point is also not true for this system (see Figure 1—figure supplement 5, in which only about half the chip is used). My estimate is that at most one fifth of this upper limit of points can be resolved, about 200,000, and probably much closer to 100,000.

Paragraph two of the Results is now very confusing. In the second sentence, the authors state that they can image a field of view of ~ 800 x 800 x 200 μm with 3.4 x 3.4 x 5 μm resolution. This corresponds to 2 million resolvable points. This is probably a factor of 20 out. Then later in the paragraph it is claimed that it is 500 x 500 x 100 μm at this resolution. This corresponds to 400,000 points. First of all I think this is still way too high. And secondly I do not understand the contradictory statements in this paragraph. The resolution does not depend on the sparseness. In addition, one can interpolate and reconstruct at whatever desired resolution to make pretty images, so this does not play a role either.

Thanks for the comment. We would like to try our best to clarify our statement and avoid possible misunderstanding.

We believe that the major disagreement between us may be due to our different understandings of the claim, that is, that XLFM could achieve an optimal resolution of 3.4 x 3.4 x 5 μm within the volume of Ø 800 x 200 μm when the sample is sparse.

The reviewer argued that resolution is independent of sparsity, and that our claim thus means that all N ≈ 2 million voxels in this volume should be measured independently, as imposed by the Nyquist sampling theorem. On the other hand, we argue that the sparsity constraint is central to this claim. If there were no sparsity constraint, we would agree that Nyquist sampling is required. However, if the sparsity constraint is imposed, which means the number of non-zero voxels is far less than the total number of voxels in the imaging volume, our reconstruction algorithm can achieve the claimed resolution of 3.4 x 3.4 x 5 μm within the volume of Ø 800 x 200 μm. This claim can be illustrated by the following simplified example:

The designed XLFM has 27 micro-lenses. These micro-lenses image the same object from different view angles and form 27 sub-images on a single image sensor chip. Here, we simplify the image formation model and assume that each sub-image is captured by an individual camera (Author response image 1). Please note that this model is not a precise description of the actual XLFM system, but it draws a close analogy with XLFM and serves as a valid model to illustrate the design principle of XLFM. Additionally, the example is illustrated in two dimensions but can easily be generalized to three dimensions.

Author response image 1.


As shown in Author response image 1A, camera A captures a top view of three pairs of dots within a gray square area. The imaging system of camera A is designed to have relatively high resolution in the x direction, but very poor resolution in the z direction. The poor resolution in the z direction effectively means a very large depth of field in that direction. As a result, camera A can resolve two laterally aligned and closely spaced particles anywhere within the imaging field of view (gray square). This system, however, does not provide any capability to resolve two particles in the z direction.

To make the system capable of resolving particles in the z direction, we can add camera B to obtain a side view of the same object over the same field of view. Camera B provides high resolution in the z direction, but poor resolution in the x direction. If we combine information from cameras A and B, which are complementary to each other, we can see that this system can resolve two closely spaced particles anywhere within the field of view, as shown in Author response image 1B.

Therefore, we claim that the imaging system, which consists of two cameras A and B oriented orthogonally to each other, provides high resolution in both the x and z directions over the entire field of view when the sample is sparse.

In the above claim, the sparsity constraint is important because this system has trouble resolving denser samples. As shown in Author response image 1C, three different sample distributions i, ii, and iii can generate exactly the same observations on cameras A and B. In this case, it is not possible to distinguish whether four particles or two particles are present within the field of view.

The above problem can be alleviated by adding more cameras sampling from different perspectives. As shown in Author response image 1D, the images captured by an additional camera C provide the information needed to distinguish all three cases.

In the above example, each camera is analogous to one micro-lens in XLFM. In our XLFM, we have 27 micro-lenses sampling from 27 different view angles. As a result, the designed XLFM can handle much denser samples than those illustrated in Author response image 1C.
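
This ambiguity, and its removal by an additional view, can be reproduced in a few lines (a toy Python sketch on a 2 x 2 grid, treating each camera as reporting only where light is detected along its viewing axis):

    import numpy as np

    # Rows index z, columns index x; 1 marks a particle.
    case_i   = np.array([[1, 0], [0, 1]])   # two particles, one diagonal
    case_ii  = np.array([[0, 1], [1, 0]])   # two particles, other diagonal
    case_iii = np.array([[1, 1], [1, 1]])   # four particles

    def views_AB(s):
        top  = tuple(s.any(axis=0).astype(int))   # camera A: collapses z
        side = tuple(s.any(axis=1).astype(int))   # camera B: collapses x
        return top, side

    # Cameras A and B alone cannot tell the three cases apart:
    assert views_AB(case_i) == views_AB(case_ii) == views_AB(case_iii)

    def view_C(s):
        # Camera C: collapses along the diagonal (periodic wrap, a toy
        # stand-in for an oblique view).
        n = len(s)
        idx = np.arange(n)
        return tuple(int(s[idx, (idx + k) % n].any()) for k in range(n))

    # Camera C disambiguates all three cases:
    assert len({view_C(case_i), view_C(case_ii), view_C(case_iii)}) == 3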

To summarize, our claim that "XLFM could achieve optimal resolution of 3.4 x 3.4 x 5 μm within the volume of Ø 800 x 200 μm when the sample is sparse" is based on the following facts:

1) We showed that each micro-lens with an NA of 0.075 provided an in-plane resolution of 3.4 μm (a worked check of this number follows point (6) below). This resolution was experimentally confirmed to be preserved throughout the imaging volume of Ø 800 x 200 μm, as shown in Figure 1—figure supplement 6.

2) We theoretically showed that the micro-lenses at the edge of the objective's pupil provide resolution in the z direction because the PSFs generated by these micro-lenses are tilted, as shown in Figure 1—figure supplement 2 and discussed in the subsection "Resolution characterization of XLFM" in the manuscript. Combining this theoretical analysis with result (1), we expected that a z resolution of 4 μm can be achieved throughout the imaging volume of Ø 800 x 200 μm. Due to the limited signal-to-noise ratio under practical conditions, however, we experimentally obtained a resolution of 5 μm in the z direction, as shown in Figure 1—figure supplement 7.

3) Based on results (1) and (2), we concluded that our designed XLFM can resolve two closely spaced particles anywhere in the imaging volume of Ø 800 x 200 μm with an optimal resolution of 3.4 x 3.4 x 5 μm.

4) We expected that the capability of resolving objects at optimal resolution can be generalized from the simple case of having only two particles within the field of view to more complicated cases. A detailed theoretical analysis of such generalization is beyond the scope of this work. Instead, we performed computer simulations and found that sample sparseness is a proper indicator of when this optimal resolution can be achieved, as shown in Figure 1—figure supplement 11.

5) By combining the above results, we claimed that our designed XLFM, in the absence of the micro-lenses' focal length variation, can achieve a resolution of 3.4 x 3.4 x 5 μm within the volume of Ø 800 x 200 μm when the sample is sparse.

6) Due to the focal length variation, the PSF of the optical system is not fully spatially invariant (see below). As a result, the reconstruction performance is degraded compared with our initial design (Figure 1—figure supplement 9). Thus we claimed that our current implementation of XLFM achieves the optimal resolution of 3.4 x 3.4 x 5 μm within a reduced imaging volume of Ø 500 x 100 μm.
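
As a worked check of the in-plane figure in point (1), using the Abbe limit quoted earlier and assuming a GCaMP emission wavelength of ~510 nm (an assumed value for this illustration):

    d = 0.5λ/NA = 0.5 × 0.51 μm / 0.075 ≈ 3.4 μm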

We have now streamlined the structure of the main text following the logical flow of (1)-(6).

As a side note, our designed XLFM is not the only method that makes use of a sparsity constraint. In the field of compressed sensing, it has been demonstrated that image signals can be recovered with fewer measurements than required by the Shannon-Nyquist sampling theorem when the sample is sparse. Strictly speaking, our method is not compressed sensing, but our XLFM can likewise recover high resolution information about sparse objects within a large field of view, in the way illustrated in Author response image 1.

2) PSF.

The authors have developed a deconvolution algorithm that, to my understanding, calculates an effective PSF which they assume to be spatially invariant (in 3 dimensions) and which they then use to deconvolve their data. If this is true, I think this is a statement that I am very happy with and support. In traditional lightfield microscopy it can be shown theoretically that the PSF is not spatially invariant. I cannot see why the setup presented by the authors would result in a spatially invariant PSF. If they claim this, it should be shown theoretically (this is a methods paper). I agree that inhomogeneities arising from unequal microlens magnifications will contribute to worsening the spatial invariance of the PSF, as argued in Figure 1—figure supplement 8. But I do not believe this is the only source for them, so I do not agree with the statement in the figure legend. In Figure 1—figure supplement 9 the authors show that the PSF is not spatially invariant, and the data here relate to the focal plane z = 0. How does the PSF look at x = 400 μm and z = 100 μm? It will most likely look "worse" than that in the last panel of this figure. The authors seem to propose that the PSF of their imaging system is spatially invariant until it is not, away from the center of the field of view and the focal plane. This is not a rigorous scientific statement and not one which can be made in a methods paper that proposes a new imaging technique in a highly regarded journal such as eLife. I can definitely stand by and support the argument put forward in the first sentence of this paragraph, but if the authors claim spatial invariance they will have to either theoretically prove it or measure the PSF throughout the field of view and show it (and Figure 1—figure supplement 9 contradicts this).

Thanks for the comment. We would like to make our best effort to explain the underlying theory of XLFM. We have added a new section discussing the spatial invariance of the PSF in the Materials and methods of the manuscript.

Since the raw image is 2D, spatial invariance of the PSF is only required in 2D as well; this is implied in the reconstruction algorithm. To avoid misunderstanding, we have clarified this in the manuscript.

A spatially invariant PSF fundamentally means that, in an ideal optical microscopy system, the resulting image can be described as a convolution between the object and the PSF (Introduction to Fourier Optics, Goodman):

Image = Object ⊗ PSF

This equation forms the basis of our reconstruction algorithm, as shown in the subsection "Image reconstruction of XLFM" of the manuscript. If the PSF were far from spatially invariant, the image reconstruction would not yield any meaningful result.

One of the fundamental differences between XLFM and conventional LFM is the location of the microlens array. In XLFM, the microlens array is placed at the pupil plane and the image sensor at the imaging plane, whereas in conventional LFM, the microlens array is placed at the image plane and the image sensor at the pupil plane. Only in XLFM is it possible to define and measure a spatially invariant PSF. The reasons are as follows:

1) Spatially invariant PSFs can be defined for individual sub-imaging systems consisting of different micro-lenses.

As shown in Author response image 2, which is a simplified version of Figure 1A in the manuscript, the object under the imaging objective is first imaged onto an intermediate imaging plane by tube lens 1, and then relayed by tube lens 2 and the individual micro-lenses A1 and A2 onto the camera image sensor. By definition, the imaging process in an ideal imaging system is linear and spatially invariant, so spatially invariant PSFs for the sub-imaging systems consisting of micro-lenses A1 and A2 can be defined as:

Image_A1 = Object_A1 ⊗ PSF_A1

Image_A2 = Object_A2 ⊗ PSF_A2

Author response image 2. Point spread function of XLFM.


As discussed in the Materials and methods, the convolution can be performed either in object space or in image space. If we perform the convolution in image space, then the coordinates of Object_A1 and Object_A2 should be scaled by the magnification factors of their sub-imaging systems.

2) A spatially invariant PSF can be defined for the entire imaging system if the magnifications of all sub-imaging systems are the same.

Because the image captured by the camera is the summation of all sub-images, the summation of all PSFs formed by individual micro-lenses can be defined as a single PSF if the magnifications of the different sub-images are the same. Below is a simple proof:

Image_A = Image_A1 + Image_A2 = Object_A1 ⊗ PSF_A1 + Object_A2 ⊗ PSF_A2

If the magnifications of the sub-imaging systems A1 and A2 are the same, then Object_A1 = Object_A2 = Object, and the above equation can be rewritten as:

Image_A = Object ⊗ (PSF_A1 + PSF_A2) = Object ⊗ PSF_A

where PSF_A = PSF_A1 + PSF_A2.
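
This additivity is easy to verify numerically (a toy 1D Python check with random sub-PSFs, assuming equal magnifications so both sub-systems share the same Object):

    import numpy as np

    rng = np.random.default_rng(0)
    obj = rng.random(128)                               # shared object
    psf_a1, psf_a2 = rng.random(128), rng.random(128)   # two sub-PSFs

    def conv(a, k):
        """Circular convolution via FFT."""
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(k)))

    # The sum of the sub-images equals the object convolved with the summed PSF:
    assert np.allclose(conv(obj, psf_a1) + conv(obj, psf_a2),
                       conv(obj, psf_a1 + psf_a2))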

The variation of the individual micro-lenses' focal lengths indicates that PSF_A or PSF_B (see Materials and methods) is not fully spatially invariant, but this does not affect the image formation theory of XLFM. The spatial variance, as measured in Figure 1—figure supplement 8, leads to degraded reconstruction performance, as shown in Figure 1—figure supplement 9. This degradation is negligible near the center of the field of view but becomes more evident at the edge, because the PSF is measured near the center of the field of view. The reconstruction algorithm produces 27 estimates of the same object based on the 27 sub-images and, at the same time, tries to combine and align these estimates in a common coordinate system. The position where the PSF is measured determines the origin of this coordinate system. If the magnifications of all sub-images are the same, all estimates can be combined coherently to produce an accurate reconstruction. If the magnifications of different micro-lenses differ, the reconstruction will yield an image that is clear near the origin of the coordinate system but blurred at the edge, as shown in Author response image 3.

Author response image 3. Resolution degradation caused by magnification variation of micro-lenses in XLFM.


Associated Data


    Supplementary Materials

    Source code 1. Computer-Aided design files of mounting plates for micro-lenses array.
    elife-28158-code1.zip (16.4KB, zip)
    DOI: 10.7554/eLife.28158.029
    Source code 2. Source code for XLFM reconstruction.
    elife-28158-code2.zip (3.2KB, zip)
    DOI: 10.7554/eLife.28158.030
    Source code 3. Source code for real-time behavioral analysis.
    elife-28158-code3.zip (77.9MB, zip)
    DOI: 10.7554/eLife.28158.031
    Supplementary file 1. Acquisition parameters for fluorescence imaging.
    elife-28158-fig4.docx (15.2KB, docx)
    DOI: 10.7554/eLife.28158.032
    Transparent reporting form
    DOI: 10.7554/eLife.28158.033

