Abstract.
Functional optical imaging in neuroscience is rapidly growing with the development of optical systems and fluorescence indicators. To realize the potential of these massive spatiotemporal datasets for relating neuronal activity to behavior and stimuli and uncovering local circuits in the brain, accurate automated processing is increasingly essential. We cover recent computational developments in the full data processing pipeline of functional optical microscopy for neuroscience data and discuss ongoing and emerging challenges.
Keywords: fluorescence microscopy, calcium imaging, functional imaging, data analysis
1. Introduction
Modern neuroscience has been propelled forward by the development of new technologies that offer unique windows into the brain’s activity. These new techniques, including high-density electrodes,1,2 functional ultrasound imaging,3–6 high magnetic field functional Magnetic Resonance Imaging (fMRI),7–9 and optical imaging10,11 all stand to further our fundamental understanding of the brain and represent potential new tools for future therapies. One of the fastest growing directions in this push for advanced neural recording technologies is functional optical microscopy, in particular, in-vivo recordings using fluorescence microscopy.
Fluorescence microscopy holds a number of unique advantages over other imaging methods. Unlike fMRI and functional ultrasound, the activity measured is more directly related to neural activity and not as spatially and temporally blurred by the hemodynamic response.12,13 Furthermore, optical methods do not require inserting probes into the brain, making them typically less invasive than electrophysiology. The drawback to this advantage is the limited penetration depth of the photons: imaging deeper structures requires more invasive methods, such as implanting a gradient refractive index (GRIN) lens.14,15 Additionally, fluorescence microscopy provides entire images that capture not just the neural activity, but also the morphology of the cells. Thus fields of view can in theory be registered across days to enable chronic long-term recordings of the same identified neurons, e.g., during learning. However, these high-dimensional and spatiotemporally rich data require significant computational power to extract the core information contained within. Moreover, fluorescence microscopy is actively growing, and new volumetric imaging techniques10,11,16–18 promise to further increase the scale of such data and the spatiotemporal statistics that must be leveraged in the analysis. To cope with this “big data” explosion and process the ever-growing datasets, there is an ongoing need for robust automated algorithms that accurately extract information from these rich data.
In functional optical imaging of neurons, a fluorescing indicator (typically a protein) sensitive to a biomarker of neural activity is introduced to a cell. Examples of biomarkers include voltage, calcium, potassium, glutamate, sodium, etc.19–23 Out of these, calcium indicators have been the most widespread. When the level of a given biomarker changes, i.e., during or right after a neural firing event, a fluorescent property such as the brightness or emission color of the indicator changes as well. At each image frame, the tissue is illuminated with light at a specific wavelength, and any fluorescing indicators have a probability of being excited to a higher energy level. When the indicator falls back to the lower-energy state, light at a longer wavelength is emitted and collected by the microscope. The measured value of the collected light thus reflects the value of the biomarker at a given location.
Practically, the tissue can be illuminated in a number of ways. For example, in single photon widefield microscopy, an entire plane is illuminated at once, using a camera to collect full frames simultaneously. Widefield microscopy can thus acquire high-resolution videos at kilohertz framerates; however, it is limited in its ability to image deeper regions in highly scattering tissues. Specifically, scattering of light in brain tissue greatly blurs images, requiring optical methods that better localize fluorescence at depth. Multiphoton imaging, e.g., two- and three-photon microscopy, can penetrate deeper into tissue (400 μm and beyond) but relies on raster scanning technologies to sequentially measure small volumes throughout the plane of imaging.24–26 Hence, multiphoton imaging is often used to image smaller structures, such as axons, dendrites, and somas, whereas one-photon imaging is used at meso- and cortex-wide scales24,27–29 or at somatic resolution under challenging optical conditions, such as via endoscopes and miniscopes in freely moving animals.30,31
Here, we discuss the emerging line of work that has focused on the task of building the analysis tools required to realize the full potential of high-resolution large scale imaging. The primary goal is extracting time-courses of all the individual units (e.g., neurons, dendrites, brain regions, etc.) from the data so as to relate these to behavior and stimuli in downstream analyses. This is often accomplished by decomposing the movies into two sets of variables: the spatial profiles, representing the area in the field-of-view (FOV) that a unit occupies, and the corresponding temporal fluorescence traces. While this goal is simply stated, the unique properties of in-vivo fluorescence imaging create a number of challenges in misalignment of data, tissue aberrations and signal distortion, imaging-dependent noise levels, and severe lack of large-scale ground truth data for validation. These challenges have led to a myriad of approaches, from solving one specific step in the process, e.g., denoising,32 to whole pipeline implementations.33 Approaches also range from imaging of specific structures, e.g., widefield34 to methods aimed at many imaging classes.35 We aim to provide here a walkthrough of the basic challenges, the landscape of current approaches, and finally emerging challenges with no current solutions.
2. Functional Fluorescence Microscopy
Functional fluorescence microscopy in neuroscience is used to capture the dynamics of neural activity in a wide variety of animal models and targets. Improvements in bioengineering have allowed researchers to study targets ranging from nanometer-sized structures in individually labeled neurons to multiple brain regions spanning centimeter-wide fields of view. Recordings are taken at several different timescales and framerates, up to hours-long recordings at kilohertz rates. Accordingly, specialized microscopes have been developed to tackle these individual experimental requirements.
Most microscopes can be described by two properties: illumination and sampling, of which either one or both may be time-varying. The illumination is a structured light source that aims to excite fluorophores (indicators) in the targeted region of interest without needlessly exciting other areas. The sampling is a mapping of the light emitted from targeted regions onto the sensor. The two most common configurations used for functional imaging today are two-photon laser scanning microscopy (TPM)36 [Figs. 1(a) and 1(b)] and widefield microscopy [Figs. 1(c) and 1(d)].
In TPM, the illumination is a focused, pulsed near-infrared (NIR) laser that is raster-scanned in a two-dimensional (2D) pattern [Fig. 1(a)] and the sensor is a single pixel detector (typically a photo-multiplier tube) collecting all emitted light from the whole FOV. The fluorophores are excited via two-photon absorption,37 which has a quadratic relationship with the intensity of the excitation beam. This results in the absorption process being confined to a small, point-like focal volume [Fig. 1(b)]. As emitted light is typically bulk-collected in TPM, the excitation focal volume describes the point-spread function (PSF) for these systems. TPM has been widely adopted primarily due to its excellent optical sectioning properties and use of NIR wavelengths. Because scattering of NIR light is much lower in brain tissue than visible light and scattering of the fluorescence does not impact image quality, TPM achieves a practical imaging depth of several hundred micrometers, even in the highly scattering mammalian or avian brain.
In widefield microscopy [Fig. 1(c)], the illumination is a visible LED or laser that excites the imaged volume and the sensor is a camera that maps onto the target volume. The excitation light propagates through the whole imaging volume, strongest near the surface of the sample, and is attenuated with depth into the sample via tissue absorption. The emitted light is imaged onto the camera, with the contribution from the focal plane mapped most closely onto the sensor and out-of-focus light diffusely collected as well [Fig. 1(d)]. The PSF for the system is determined by the imaging optics, with light from the focal plane having the sharpest response and blurring away from the focus. The image quality is further degraded by scattering in the emission. In reasonably transparent animal models (such as C. elegans or larval zebrafish), this effect is minimal. In highly scattering tissues, like the mammalian brain, this effect limits the overall depth with acceptable imaging quality for neuron-scale recordings to the most superficial tissue. While the emitted light frequently comes from the whole volume, as in densely labeled samples, genetic targeting or microscopy techniques can isolate the signal to a smaller population of cells. A major advantage of widefield microscopy is the ease of setup in a variety of animal models and scalability to very large areas and framerates.
Several microscopes have been designed to optimize functional neural recordings. For TPM, development has focused on improving the acquisition speed and volume size. The illumination shape of the PSF has been transformed for imaging sparse tissues effectively38 or for volumetric imaging.16 Three-photon microscopy enables even deeper imaging.39 Changing the illumination time course with custom scanning paths can make fast jumps from cell to cell.40 Live updating of both the scanning pattern and timecourse is used for high-speed image acquisition.41 For collection, cameras have been used to replace the single-pixel detectors to improve acquisition rates.42 Recently, the combination of multiple techniques and technologies has reported cellular resolution imaging of up to a million neurons simultaneously.11
Changes to the techniques of widefield microscopy have focused on optical sectioning and volume scanning. Optical sectioning can be achieved by structuring the illumination such that only the plane mapping onto the camera is excited. This has been achieved primarily through light-sheet microscopy, which has been scanned to image three-dimensional (3D) volumes in larval zebrafish43 and also with a single objective in mice.10 Light field microscopy has been used to structure the mapping of the sampling from the imaged region onto the camera44 and in combination with confocal microscopy enables high-quality optical sectioning.45 One of the most significant developments has been the miniaturization of microscopes (miniscopes) for head-mounted functional imaging, which has greatly expanded the types of problems and brain areas researchers can explore.46–48
All of the advances outlined have enriched the space of acquired brain recordings, especially when intersected with the range of available indicators. Specifically, they create a range of spatiotemporal resolutions, signal quality, and imaged morphology, a subsection of which we depict in Fig. 2. To bridge the gap between the raw data and scientific discovery, these data must be analyzed to extract useful neuronal activity.
3. Fluorescence Microscopy in Space and Time
The current state of functional fluorescence microscopy analysis can be understood through the long history of developments that have led to today’s state-of-the-art. Initially, fluorescence microscopy had long scan times and was thus focused on imaging static tissue (see Ref. 54 for a review of the very early history of fluorescence microscopy for biology). Thus, all the relevant information was anatomical in nature and could be traced manually or, later on, identified automatically to discover complex biological structures. Fluorescence microscopy continues to play a vital role in imaging static anatomical structures and morphological fitting algorithms have evolved with the technology.55–57
As scan times improved and the ability to inject fluorescent indicators into live tissue emerged, fluorescence microscopy expanded to the imaging of time-varying biomarkers, e.g., voltage indicators in the 1970s58,59 and calcium indicators in the 1980s.60 With this shift came the added dimension of time, as now fluorescence was a temporal quantity. In neuroscience, this enabled the study of the activity of the neurons over time. Initially, however, labeling methods were in their nascent stage and the use of viruses to introduce fluorescing proteins into cells created a high variability in labeling strength and longevity. The result was often sparsely labeled tissue where overlapping neurons were rare. Without overlap, isolating a single cell’s activity could thus be accomplished by identifying each neuron's anatomical extent in the video (e.g., based on the temporal mean or variance of the imaging stack), and then averaging the pixels belonging to each neuron to extract its time-trace.
To see this, we consider the statistical assumption that each neuron is modulated in a linear way (i.e., the spatial shape and temporal fluctuations are independent). Each $N$-pixel frame at time $t$, stacked as an $N\times 1$ vector $\mathbf{y}_t$, can be thought of as the linear combination of the cell shapes $\mathbf{a}_i$ for $i=1,\ldots,K$ and their activations $s_{i,t}$ at time $t$:

$$ \mathbf{y}_t = \sum_{i=1}^{K} \mathbf{a}_i s_{i,t} + \boldsymbol{\epsilon}_t = \mathbf{A}\mathbf{s}_t + \boldsymbol{\epsilon}_t, \tag{1} $$

where $\mathbf{A}$ is an $N\times K$ matrix consisting of the spatial components and $\mathbf{s}_t$ is the activation of all cells at time $t$. Solving a least-squares optimization of the form

$$ \widehat{\mathbf{s}}_t = \arg\min_{\mathbf{s}_t} \left\| \mathbf{y}_t - \mathbf{A}\mathbf{s}_t \right\|_2^2 \tag{2} $$

for the activity of all cells at time $t$ can then be written via the pseudo-inverse $\widehat{\mathbf{s}}_t = (\mathbf{A}^\top\mathbf{A})^{-1}\mathbf{A}^\top\mathbf{y}_t$. With no overlap, the Gram matrix $\mathbf{A}^\top\mathbf{A}$ is simply a diagonal matrix with the squared norms of the cell shapes on the diagonal. Thus, the activity for the $i$'th cell at time $t$ is simply a weighted average of the pixel values at that time-point, which has been exactly the methodology with hand-drawn spatial profiles, sometimes also called regions of interest (ROIs).
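A minimal sketch of this trace-extraction step, assuming known spatial profiles (e.g., hand-drawn ROI masks); all variable names and the toy data are illustrative, not from any specific published pipeline. With non-overlapping profiles the Gram matrix is diagonal and the solve reduces to a per-ROI weighted average; with overlap the same linear solve demixes the traces.

```python
import numpy as np

def extract_traces(video, profiles):
    """video: (T, N) movie with frames flattened to N pixels.
    profiles: (N, K) spatial profiles (e.g., hand-drawn ROI masks).
    Returns: (K, T) time traces, one per component."""
    A = profiles                                  # N x K
    Y = video.T                                   # N x T
    gram = A.T @ A                                # K x K Gram matrix
    # Diagonal Gram (no overlap) -> weighted average; otherwise linear demixing.
    return np.linalg.solve(gram, A.T @ Y)         # K x T

# Toy usage: two overlapping Gaussian blobs with independent sparse activity.
rng = np.random.default_rng(0)
T, H, W, K = 500, 32, 32, 2
yy, xx = np.mgrid[0:H, 0:W]
blob = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 20.0)
A_true = np.stack([blob(14, 14).ravel(), blob(18, 18).ravel()], axis=1)
S_true = rng.poisson(0.05, size=(K, T)) * rng.exponential(1.0, size=(K, T))
movie = (A_true @ S_true).T + 0.05 * rng.standard_normal((T, H * W))
print(extract_traces(movie, A_true).shape)  # (2, 500)
```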
As labeling methods advanced and indicator designs provided brighter fluorescent tags, overlaps between neurons and with other processes (e.g., dendrites) became more common. The increased cell count vastly increased the burden of manual annotation, and the increased overlaps mathematically result in a nondiagonal Gram matrix $\mathbf{A}^\top\mathbf{A}$, removing the validity of direct averaging. Thus, new conceptual approaches were required for cell identification. Rather than solely relying on spatial cues to isolate neurons, activity had to be demixed in space and time. In fact, as opposed to purely anatomical studies, many modern systems neuroscience analyses abstract away from space, analyzing a neuron-by-time matrix irrespective of anatomy. Methods such as dimensionality reduction,61–63 dynamical systems analysis,64 statistical connectivity,65 etc. operate on the time-traces and thus suffer more from inexact estimation of neural activity than from inexact estimation of spatial shapes.
Under the statistical assumption that each neuron is modulated in a linear way, i.e., the spatial profile is constant over time with brightness controlled by a time-vector representing the cell’s activity, the problem of isolating neurons can be considered as a matrix factorization problem. If the data matrix $\mathbf{Y}\in\mathbb{R}^{N\times T}$ is the pixel-by-time fluorescence video matrix, each neuron contributes one rank-1 component

$$ \mathbf{Y} = \sum_{i=1}^{K} \mathbf{a}_i \boldsymbol{\phi}_i^\top + \mathbf{E} = \mathbf{A}\boldsymbol{\Phi}^\top + \mathbf{E}, \tag{3} $$

where $\mathbf{a}_i$ is again the $i$'th neuron’s spatial profile (how it appears visually in the data), and $\boldsymbol{\Phi}$ is the matrix where the $i$'th column, $\boldsymbol{\phi}_i$, is the $i$'th neuron’s time-trace (how the biomarker-driven fluorescence changes over time). To find the rank-$K$ decomposition, Mukamel et al.66 used a combination of PCA and ICA: first, PCA was used to reduce the overall dimension of the data and to obtain an initial guess $\mathbf{Y}\approx\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$, where $\mathbf{U}$ and $\mathbf{V}$ contain the left and right singular vectors, respectively. ICA was then performed on the set of vectors $\{[\mu\mathbf{u}_i^\top,\,(1-\mu)\mathbf{v}_i^\top]^\top\}$, i.e., identifying a rotation that makes the columns more independent, where the parameter $\mu$ trades off the importance of the spatial and temporal components obtained via PCA (i.e., the left and right singular vectors). The rotation in the ICA step transforms the PCA components into their independent components, demixing both the spatial and the temporal components at once. This procedure allowed for overlapping components and included temporal independence when identifying neurons.
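A minimal sketch in the spirit of this PCA-then-ICA strategy; it is a simplified variant (ICA is run on the temporal principal components only rather than on the weighted spatiotemporal concatenation of the original method), and the function names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def pca_ica_demix(movie, n_components=10):
    """movie: (T, N) array of flattened frames.
    Returns spatial maps (K, N) and time traces (T, K)."""
    pca = PCA(n_components=n_components)
    temporal_pcs = pca.fit_transform(movie)            # (T, K) temporal components
    ica = FastICA(n_components=n_components, max_iter=1000)
    traces = ica.fit_transform(temporal_pcs)            # (T, K) demixed time traces
    # Recover spatial maps by regressing the movie onto the demixed traces.
    maps, *_ = np.linalg.lstsq(traces, movie, rcond=None)   # (K, N)
    return maps, traces
```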
The matrix factorization approach continued to expand,67–69 primarily formulated as an optimization program:
$$ \{\widehat{\mathbf{A}},\widehat{\boldsymbol{\Phi}}\} = \arg\min_{\mathbf{A},\boldsymbol{\Phi}} \left\| \mathbf{Y} - \mathbf{A}\boldsymbol{\Phi}^\top \right\|_F^2 + \mathcal{R}_s(\mathbf{A}) + \mathcal{R}_t(\boldsymbol{\Phi}), \tag{4} $$

where $\mathcal{R}_s(\cdot)$ and $\mathcal{R}_t(\cdot)$ represent appropriate regularization terms over space and time, respectively, that can vary between methods, and often include terms such as the component norms, number of components, sparsity, non-negativity, and spatial cohesion. As direct optimization is often difficult for problems of this size, alternating descent type algorithms are often employed, i.e., iteratively solving

$$ \widehat{\mathbf{A}} \leftarrow \arg\min_{\mathbf{A}} \left\| \mathbf{Y} - \mathbf{A}\widehat{\boldsymbol{\Phi}}^\top \right\|_F^2 + \mathcal{R}_s(\mathbf{A}) \quad\text{and}\quad \widehat{\boldsymbol{\Phi}} \leftarrow \arg\min_{\boldsymbol{\Phi}} \left\| \mathbf{Y} - \widehat{\mathbf{A}}\boldsymbol{\Phi}^\top \right\|_F^2 + \mathcal{R}_t(\boldsymbol{\Phi}). $$
Positive aspects of this approach include that each of these optimization problems can be solved reasonably efficiently (conditioned on judicious choices of regularization) and that different assumptions on the time-traces and component shapes can be incorporated naturally via regularization.
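As an illustration of the alternating strategy, the sketch below uses only non-negativity as the regularizer (standard multiplicative non-negative matrix factorization updates); it is a simplified stand-in for the method-specific penalties discussed in the text, and the function name is illustrative.

```python
import numpy as np

def alternating_nmf(Y, K, n_iter=50, eps=1e-12):
    """Y: (N, T) pixel-by-time movie. Returns A (N, K) and Phi (T, K)."""
    rng = np.random.default_rng(0)
    N, T = Y.shape
    A = rng.random((N, K))
    Phi = rng.random((T, K))
    for _ in range(n_iter):
        # Multiplicative updates act as projected descent on each subproblem
        # of the alternating scheme, keeping A and Phi non-negative.
        A *= (Y @ Phi) / (A @ (Phi.T @ Phi) + eps)
        Phi *= (Y.T @ A) / (Phi @ (A.T @ A) + eps)
    return A, Phi
```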
Thus, in the current landscape of algorithms that extract neural signals from fluorescence microscopy data we now see a combination of approaches: one approach focuses primarily on anatomical identification, leaving the identification of functional traces as a later stage, and another places identification of the time-traces on equal ground (or more) with the spatial maps and extracts the two jointly.
3.1. Methods Focusing on Space
The class of methods that focus on anatomical identification have been mostly inspired by image segmentation, using both classical and modern approaches. Each of these methods relies on a spatial model of the data, either preset or learned from the data.
One such approach, dictionary learning, stems from the broader image processing literature.70 Dictionary learning assumes a sparse generative model for image patches71 and has been applied to calcium imaging to learn spatial dictionaries whose atoms represent neuronal shapes. The identified neuron shapes can then be used to estimate corresponding time courses.72–74 An early approach ignored temporal information entirely and developed a spatial generative model based on convolutional sparse block coding,72 applied to normalized mean projections of the imaging data. A dictionary of cellular shapes was learned with corresponding locations of these shapes in the mean image.
Modern deep learning methods for image segmentation have also been adapted to the ROI extraction problem. These are supervised approaches that require labeled training data. UNet2DS75 is a neural network based on the fully convolutional U-Net76 model, an architecture that was developed for biomedical image segmentation. UNet2DS takes the mean image as an input and outputs two probability masks of a pixel belonging to either a cell or the background. Since UNet2DS ignores temporal information, it has difficulty separating overlapping neurons and detecting sparsely firing cells. Such methods also cannot differentiate active cells, i.e., cells that exhibit deviations from baseline fluorescence consistent with spiking events, from nonactive cells.
3.2. Methods Using Time Information to Identify Spatial Masks
Another class of methods still emphasizes the identification of the spatial components; however, these methods rely on temporal information, e.g., calculating correlations between pixels in the FOV based on their temporal activity.
One approach is to view cell detection as a clustering problem.77,78 For example, HNCcorr,78 after selecting a set of seeds (superpixels), aims to find a cluster within a patch containing each seed by solving Hochbaum’s normalized cut (HNC) problem, which is similar to normalized cut. The HNC formulation balances a coherence term, which maximizes the total similarity of the pixels within each cluster, with a distinctness term that minimizes the similarity between the cluster and all others, where similarity is based on time-trace activity. Local selective spectral clustering (LSSC)79 also solves the cell identification problem as clustering pixels in a high-dimensional feature space. Following the construction of a sparse graph whose nodes are the pixels and whose weights are determined by pairwise similarity of the time-traces of the pixels, the nodes are embedded using the eigenvectors of the random-walk graph Laplacian. Pixels are then clustered together in an iterative approach based on selecting subsets of the eigenvectors that best separate a cluster from the rest of the data, enabling overlapping clusters. After detecting neuronal components (the graph construction allows for morphology-agnostic clustering), time traces are demixed and calculated by projecting to a low-rank space.
Activity-based level set segmentation (ABLE)80 is an image segmentation approach relying on active contours.81 ABLE defines multiple coupled active contours in the FOV where an active contour seeks to partition a local region into an interior corresponding to a neuronal component and a local exterior, such that the pixels within each region are similar to one another temporally. The evolution of the contours is performed by the level set method,82 where only neighboring cells directly affect a cell’s evolution. ABLE handles overlapping cells by coupling the evolution of active contours that are close to one another. ABLE automatically merges two cells if they are close and temporally correlated and prunes cells if their size is too small or too large. An advantage of this method is that it makes no assumption on a cell’s morphology or temporal activity, therefore it can potentially generalize to different indicators and spatial morphologies.
Convolutional deep learning networks taking into account temporal statistics or activity have also been developed in recent years.83–85 A (2+1)D convolutional neural network83 was trained on spatiotemporal sliding windows and the output represented the probability of a pixel belonging to an ROI centroid. Apthorpe et al.83 demonstrated that adding the temporal domain helps suppress noisy detections compared with a 2D network that only took as input the time-averaged image. However, the network they trained only learned spatial 2D kernels.
In comparison, STNeuroNet84 is a 3D convolutional neural network based on DenseVNet (similar to a U-Net) also trained on overlapping spatiotemporal blocks, and its output is a binarized probability map. By adding a temporal max-pooling layer to a typical DenseVNet architecture, the spatiotemporal features are reduced to a spatial output, which increases the speed of training and inference. This gain is important for network validation and low-latency inference in closed-loop experiments. Shallow U-net neuron segmentation (SUNS)85 aims to simplify the neural network architecture of STNeuroNet to further improve speed while still incorporating temporal information. To this end, a temporal matched filter tailored for shot noise is applied to the input video, thereby enhancing calcium transients occurring across multiple frames and reducing temporal information into individual 2D frames. Preprocessing also includes whitening, which normalizes the fluorescence time series of each pixel by the estimated noise of that pixel. This yields an SNR representation that highlights active neurons and obscures inactive neurons. The SNR video is then processed by a shallow U-Net, whose output is a probability map.
Postprocessing for both of these deep-learning methods includes clustering of the output probability maps to detect individual cells and aggregation of detected cells across the probability maps for merging and pruning. Finally, deep learning, instance segmentation, and correlations (DISCo)86 uses a combination of time-trace correlation-based pixel segmentation on a graph and a convolutional neural network to identify individual spatial profiles. Note that while these methods identify cells based on a spatiotemporal analysis, they do not address the issue of estimating time-traces.
3.3. Methods Focusing on Space and Time
Rather than leave time-trace estimation as a secondary step, spatiotemporal demixing methods aim to simultaneously identify the spatial maps with their corresponding fluorescence traces.
One such method builds off of the dictionary learning framework first used in spatially focused cell identification. Specifically, Diego et al.73 developed a space-time dictionary learning extension of convolutional sparse coding for video data with nonuniform and temporally varying background components. Petersen et al.74 approach the dictionary learning of spatial components via iterative merging and clustering. The former extracts sparse spike trains via the convolutional dictionary. The latter mainly emphasizes identifying the spatial maps but, as a by-product of the decomposition algorithm, computes the time traces as well.
Many algorithms that jointly estimate both spatial profiles and time traces aim to optimize the cost of the form of the optimization program in Eq. (4). In particular, many of these methods follow developments in analyzing somatic calcium imaging data, which are based on matrix factorization with non-negative constraints that represent the expected positive deviations from baseline: non-negative matrix factorization (NMF).35,67–69,87–89 The main difference between these methods is the specific choices of regularization and the specific implementation of the algorithmic steps. For example, multiple methods follow the aforementioned alternating optimization,35,67,69,87 whereas another approach instead uses semidefinite programming.68 Versions of matrix factorization methods have been perhaps the most widespread, with special versions being designed for one-photon endoscopy data,90 widefield data,34,35 and voltage data.91
Regularization has been applied both on the temporal and spatial dimensions. Temporally, sparsity over the time traces has been a popular constraint. Specifically, neurons often fire infrequently, relative to the number of frames in a video.16,67 Taking advantage of this sparsity requires accounting for the biophysically-induced exponential decay of fluorescence. While more sophisticated methods have been developed for postprocessing deconvolution (see Sec. 4.5), a simple way that has been applied to cheaply deconvolve data is based on the observation that exponentially decaying responses (i.e., a single-pole response function) with decay rate $\gamma$ can be expressed as the difference equation $\phi_t = \gamma\phi_{t-1} + s_t$, where $s_t$ represents the instantaneous impulses over time. Thus simply computing $\widehat{s}_t = \phi_t - \gamma\phi_{t-1}$ should give a reasonable estimate of a sparse impulse train. The parameter $\gamma$ can further be tuned to maximize the sparsity of $\widehat{s}_t$.
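A minimal sketch of this cheap difference-based deconvolution; the non-negativity step and the mapping from the indicator time constant to $\gamma$ are illustrative choices.

```python
import numpy as np

def difference_deconvolve(trace, gamma):
    """trace: 1D fluorescence time course; gamma: per-frame decay factor in (0, 1)."""
    impulses = np.empty_like(trace, dtype=float)
    impulses[0] = trace[0]
    impulses[1:] = trace[1:] - gamma * trace[:-1]   # phi_t - gamma * phi_{t-1}
    return np.maximum(impulses, 0.0)                # optional: keep only positive events

# gamma can be tied to an assumed indicator decay time constant tau (seconds):
# gamma = np.exp(-dt / tau) for frame interval dt.
```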
Spatial regularization is more complex and often requires a model for the neural processes of interest. Initial ideas included ensuring connected components or minimizing total variation (TV) norms to limit the complexity of shapes and remove spurious pixels.68 In more recent versions of matrix factorization algorithms, deep networks trained on human-annotated labeled data have been used to classify true and false cells,49 with the resulting system serving as a posthoc filter that effectively double-checks the spatial regularization.
As an alternative to shape-based regularization, recent work has leveraged sparsity and spatial correlations in a new way.35 Instead of using spatial locations of pixels to constrain the components’ spatial profiles, graph-filtered temporal (GraFT) dictionary learning builds a graph among all pixels that enables like-pixels to share time-trace decompositions. This approach effectively gives up completely on space and instead focuses on the learning of a dictionary of time traces over the data-driven pixel graph.
More recently, another important modification to the base optimization in Eq. (4) has been emerging. Rather than focusing on changing the regularization terms used, the least-squares form of the data fidelity term has instead been reconsidered. The least-squares cost enforces a linear-Gaussian data generation hypothesis; however, a number of nonlinearities in fluorescence dynamics, incompleteness in component discovery, and the non-Gaussian statistics of the photo-diodes all contribute to various extents of errors in demixing. Four main alternatives have emerged, including a robust-statistical approach leveraging a Huber cost function,87 a contamination-aware generative model approach,92 a zero-Gamma mixture model,93 and a deep-learning approach.94
4. Imaging Analysis Pipeline
The fundamental output of an optical functional imaging analysis pipeline is the identified spatial profiles and, more importantly, their corresponding time traces. While demixing thus serves as the core of analysis, multiple preprocessing and postprocessing steps are typically part of the pipeline to facilitate this output (Fig. 3). Motion correction is typically the first step in the analysis pipeline, to register all frames in the imaging stack such that the neuronal components to be extracted occupy the same spatial footprint in all frames. Denoising can be applied as a preprocessing step either temporally35 or spatially79 to improve the detection of ROIs. Normalizing per-pixel time-traces, e.g., by z-scoring, can enhance dim cells and improve cell detection. Following ROI extraction, postprocessing in the temporal domain can include the following: neuropil estimation, denoising of time traces (if not implicitly part of the extraction itself), and calculating normalized time traces ($\Delta F/F$). Postprocessing in the spatial domain can include automatic classification49 of identified components into true or false neuronal components or manual quality control. To identify single spiking events, deconvolution is a postprocessing step that aims to recover sparse firing events from the fluorescence time traces. Finally, in longitudinal studies, postprocessing in the spatial domain can include registration of imaging sessions and matching of ROIs across sessions.
4.1. Motion Correction
The majority of ROI extraction methods rely on neuronal components being within a fixed/consistent spatial footprint in the FOV – thus necessitating image registration of the individual frames. Motion between frames can be due to several factors:95–99 animal motion during imaging (e.g., locomotion), scanning artifacts, mechanical strain, drift relative to the objective, changes in the brain, e.g., due to hydration, etc. Given the length of imaging sessions (tens of thousands of time frames), computational complexity of the registration approach is an important consideration. For fast acquisition rates, interframe motion can be considered as a global constant offset of the FOV, therefore rigid registration of translational shifts is sufficient.100–102 For example, registration can be performed to a reference (template) image using cross-correlation or phase-correlation.69 The reference image is typically set as an average over the initial frames and then regularly updated as an average over later subsets of frames, or an average over the full stack. The reference frame can be made more precise by an iterative refinement procedure to reduce blurring.69,103 As an alternative to performing correlation-based translation registration, bright cells can be detected and tracked over time using particle tracking.104 A rigid (translation and rotation) transform is then calculated for each frame to the next by minimizing the residual displacements of all tracked cells. This approach can also be applied to volumetric imaging.
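A minimal translation-only registration sketch illustrating the template-based cross-correlation approach described above; the library calls are standard scikit-image and SciPy routines, and the template definition and upsampling factor are illustrative choices.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def register_rigid(movie, upsample_factor=10):
    """movie: (T, H, W) stack. Registers every frame to the mean of the first 100 frames."""
    template = movie[:100].mean(axis=0)
    registered = np.empty(movie.shape, dtype=float)
    for t, frame in enumerate(movie):
        # Subpixel translational offset between the template and this frame.
        drift, _, _ = phase_cross_correlation(template, frame,
                                              upsample_factor=upsample_factor)
        registered[t] = nd_shift(frame, drift)      # apply the subpixel translation
    return registered
```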
Nonrigid motion artifacts can occur due to lower acquisition speeds, which result in artifacts such as shearing in later scanned lines of the image, or slow distortions over long recording sessions due to mechanical (z-drift with respect to the objective) and biological issues (warping of brain tissue due to metabolic activity, dilation of blood vessels, and liquid reward consumption).105 Nonrigid motion correction usually relies on splitting the image into overlapping spatial patches and performing registration at the patch level. This registration can be rigid at subpixel resolution69,106 or use a more flexible affine transform.103
Most methods for motion correction target two-photon somatic imaging. However, such methods can struggle with nonsomatic neuronal components such as dendrites, due to the difference in size and the impact of z-drift. While a cell-body is typically on the order of 10 to 20 μm (with variation depending on brain area, species, etc.), the width of an axon or a dendrite is on the order of 1 μm. Thus, slight registration errors can have a significant effect on identifying the spatial footprint of these components. Furthermore, z-drift can cause segments of tuft dendrites to shift in and out of the FOV. This can lead to difficulties in aligning the dendrite to the reference image. From a computer vision perspective, this can be thought of as image registration under occlusions.
Another potential complication is the use of GRIN lenses to image deeper structures in the brain. Optical aberrations near the edges of GRIN lenses can significantly change the motion characteristics, requiring distortion-aware realignment. Finally, niche technologies can also create novel situations, such as the rotating platform developed for near-freely moving imaging.107,108
4.2. Denoising and Normalization
The absolute noise levels present in optical imaging create a challenging signal extraction environment. It is only because many sequential frames of the same population are recorded that individual cells can be identified. This process, however, can be improved by modest noise filtering as a preprocessing stage. A number of methods (with example applications) are used across the literature, including median or low-pass filtering,16,79 downsampling, PCA projection,69,79 z-scoring, wavelet denoising,35 other hierarchical models,109 and deep learning-based denoising.32,110 All these approaches make different noise and signal model assumptions and should be used judiciously.
For example, median/low-pass filtering and downsampling are simple, quick steps that can be run on the data at the early stages. Median filtering is effective at reducing shot noise common in low-photon environments, at the cost of making shapes in the image more “convex” (i.e., filling in corners). Low-pass filtering and downsampling reduce high-frequency noise; however, they blur the data via the convolutional filter. Downsampling83,111 has the further benefit of reducing the data size in space or time, thus reducing later computational costs; however, it effectively reduces the sampling resolution.
Other signal models are more complex. Hierarchical models can provide flexible ways of both incorporating different noise classes (e.g., Poisson) and flexible signal models (e.g., via interpixel correlations), however, at a heavy computational overhead.109 Wavelet-based denoising112,113 is perhaps the most versatile in this class, as both per-image and per-pixel time-trace denoising are computationally efficient, readily implemented across programming languages, and can handle sharp transitions, thereby reducing blurring.
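A small per-pixel wavelet-denoising sketch (soft thresholding of detail coefficients), one of the preprocessing options mentioned above; it uses PyWavelets, and the wavelet choice, decomposition level, and universal-threshold rule are illustrative.

```python
import numpy as np
import pywt

def wavelet_denoise_trace(trace, wavelet="sym4", level=4):
    """trace: 1D fluorescence time course. Returns a denoised trace of equal length."""
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    # Robust noise estimate from the finest-scale coefficients, universal threshold.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(trace)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(trace)]
```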
Yet other methods model the noise in ways based on the data itself. PCA-based projections identify a low-dimensional space that captures much of the data variance, effectively assuming that low-variance principal components represent noise. Sparsely firing cells or dim cells, however, can often end up in low-variance components and are thus removed with the noise. Penalized matrix decomposition91 denoises the imaging data by performing a patch-wise penalized low-rank decomposition. Recently, general advances from deep learning32,110,114 have been proposed for denoising calcium imaging.
For example, DeepInterpolation32 uses a clever design that trains a neural network to predict a movie frame based on the preceding and following frames. Independent noise, which cannot be predicted, is thus filtered out. A broader image-restoration method, CARE,115 uses a U-Net trained on high- and low-resolution image pairs to enhance signal quality. As with all deep-learning methods, the drawbacks include training the network (if the pre-trained options do not fit the application) and a general lack of knowledge as to the exact expected biases of the black-box system.
Usually performed at the same time as denoising, normalization plays an important role in the numerical stability of algorithms. Having data with large (or small) overall fluorescence values can cause problems in the condition numbers of the matrices used in optimizing costs in the demixing step. Normalizing, e.g., to unit-median or unit-max values, helps constrain these effects, and all normalization can be undone after demixing to return meaningful fluorescence values to the identified time-traces. As an example, methods based on deep networks typically employ preprocessing to ensure an appropriate dynamic range across the data, regardless of background or other inhomogeneities in the illumination (e.g., via homomorphic filtering85).
As a final note, a form of normalization that often is required in preprocessing is detrending to remove photobleaching116 (e.g., see Refs. 117 and 118). Photobleaching is the reduction in overall fluorescence over time stemming from the fluorescent proteins becoming trapped in an intermediate quantum state and becoming inactive. The result is a large imbalance over time in the dynamic range of the signal, which can hinder most demixing algorithms.
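A minimal detrending sketch for slow photobleaching: estimate the slow trend with a heavy temporal filter (here a long per-pixel median filter) and remove it. The window length and the choice to restore the mean level are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def detrend_movie(movie, window=1001):
    """movie: (T, N) array of per-pixel traces; window: trend window in frames (odd)."""
    # Slow trend per pixel, estimated by a long running median along time.
    trend = median_filter(movie, size=(window, 1), mode="nearest")
    # Subtract the trend but keep each pixel's mean level so values stay positive.
    return movie - trend + trend.mean(axis=0, keepdims=True)
```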
4.3. Neuropil Estimation
In 2p imaging, neuropil is a “background” signal that contains the fluorescence of neuronal elements that are out-of-focus (e.g., dendritic and axonal) and scattering. This signal can contaminate the estimated fluorescence of an ROI if it is not properly accounted for. Some methods estimate neuropil as part of the ROI extraction process by automatically identifying the signal of the exterior surrounding the cell,80 or explicitly or implicitly adding at least one background signal in the linear decomposition model,35,49,67,89 e.g.
$$ \mathbf{Y} = \mathbf{A}\boldsymbol{\Phi}^\top + \mathbf{b}\mathbf{f}^\top + \mathbf{E}, \tag{5} $$

where $\mathbf{b}$ and $\mathbf{f}$ are the spatial and temporal components of the background, respectively. In other methods, neuropil is estimated in a posthoc process by calculating the average time trace from the pixels within the ring surrounding each extracted spatial profile.69,119 The signal is then subtracted from the time trace of the spatial profile. Care should be taken in neuropil subtraction not to over-correct. Over-correction can often be identified by noting significant negative dips in the fluorescence baseline.
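A sketch of the ring-based posthoc correction described above; the ring radii and the contamination ratio r are illustrative (values of r that are too large produce the negative baseline dips mentioned in the text).

```python
import numpy as np
from scipy.ndimage import binary_dilation

def ring_mask(roi_mask, inner=2, outer=8):
    """Build an annular neuropil mask around a binary ROI mask (H, W)."""
    inner_region = binary_dilation(roi_mask, iterations=inner)
    outer_region = binary_dilation(roi_mask, iterations=outer)
    return outer_region & ~inner_region

def correct_neuropil(movie, roi_mask, r=0.7):
    """movie: (T, H, W); roi_mask: boolean (H, W); r: assumed contamination ratio.
    Returns the neuropil-corrected ROI trace and the ring trace."""
    ring = ring_mask(roi_mask)
    f_roi = movie[:, roi_mask].mean(axis=1)
    f_np = movie[:, ring].mean(axis=1)
    return f_roi - r * f_np, f_np
```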
4.4. Normalized Fluorescence ($\Delta F/F$)
Neurons have different concentrations of fluorescence indicator, e.g., GCaMP, which can result in varying levels of fluorescence both across cell populations and between individual neurons.120,121 Additional variation also occurs across the image field of view due to imaging technique, microscope optics, and sample variation.
Therefore, to obtain comparable signals from varying neural sources, extracted time-traces are normalized122 to remove the baseline fluorescence activity and adjust the amplitude as follows:
$$ \frac{\Delta F}{F}(t) = \frac{F(t) - F_0}{F_0}, \tag{6} $$

where $F(t)$ is the extracted time trace and $F_0$ is the baseline fluorescence.
Calculation of baseline fluorescence relies on the assumption that neurons have sufficiently sparse activity, and thus a baseline level that does not correspond to neuronal activity can be estimated using varying heuristics. A common method of estimating $F_0$ is averaging the activity of the time frames with the lowest activity, e.g., those below a low percentile of the fluorescence distribution (the exact percentile can vary from lab to lab; for widefield imaging, $\Delta F/F$ may instead be used to describe the percent change from the mean activity level). This method assumes neurons are quiescent for long enough periods that the change in fluorescence level is effectively zero at this percentile. The very lowest values are not used directly, to account for error caused by sources such as shot noise or axial motion.
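A ΔF/F sketch using a running-percentile baseline, one common heuristic for estimating F0 as described above; the percentile and window length are illustrative and in practice vary from lab to lab.

```python
import numpy as np
from scipy.ndimage import percentile_filter

def delta_f_over_f(trace, percentile=10, window=600):
    """trace: 1D fluorescence trace; window: baseline estimation window in frames."""
    f0 = percentile_filter(trace, percentile=percentile, size=window, mode="nearest")
    return (trace - f0) / f0
```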
The running percentile calculation additionally assumes there is little to no fluorescence from other sources. In datasets that are densely labeled or have high background fluorescence, the technique will consistently underestimate the true $\Delta F/F$ as contributions from other sources get added to the baseline estimate. Estimating the baseline fluorescence with overlapping sources requires separating the fluorescence contributions from individual sources and any background source, which is automatically achieved in most segmentation algorithms.67,69 This approach may be more reliable than the running percentile calculation by integrating additional information provided by activity transients. A test using synthetic data demonstrated that these estimates of baseline fluorescence were reliable for the brightest cells, and another estimation procedure making use of pixel-wise spatial information further improved estimates of baseline activity.123
4.5. Deconvolution
The product of the core demixing stage includes a set of time traces—one per component—that contain the temporal fluorescence fluctuations. Due to the complex interactions and resulting buffering of nonvoltage fluorescence-inducing biomarkers, the transient increases in fluorescence due to individual spiking events are stretched out over time. For example, calcium indicators can exhibit deviations from baseline in the recorded fluorescence from a single spiking event for seconds.
As spiking events are typically considered the primary source of neural communication, efforts to infer underlying spiking from fluorescent data have emerged, taking a number of forms.73,124–131 Fundamental to these efforts is inverting a generative model of fluorescence as a function of spike times. The full biophysical model dictates that at each time of a spike event, a stochastic influx of the biomarker (e.g., calcium) flows into the cell and drives the dynamics that determine the level of bound fluorescent protein over time. While the full biophysical model includes nonlinear differential equations and nonlinearities,123,132,133 this model can be linearly approximated as
$$ \phi(t) = \sum_{j} c_j\, h(t - t_j) + \epsilon(t), \tag{7} $$

where $h(t)$ is the response curve (e.g., exponential rise-and-decay) to a single event occurring at time $t_j$ and $c_j$ represents the stochasticity of the influx of biomarker at the $j$'th event.
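A forward-model sketch of Eq. (7): fluorescence simulated as spike times convolved with a rise-and-decay response kernel plus noise. The kernel time constants, frame interval, and noise level are illustrative assumptions.

```python
import numpy as np

def simulate_fluorescence(spike_times, amplitudes, T, dt=0.033,
                          tau_rise=0.05, tau_decay=0.5, noise_std=0.05):
    """spike_times: event times in seconds; amplitudes: event sizes c_j; T: frames."""
    t = np.arange(0, 10 * tau_decay, dt)
    kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)   # rise-and-decay h(t)
    kernel /= kernel.max()
    spikes = np.zeros(T)
    idx = (np.asarray(spike_times) / dt).astype(int)
    spikes[idx] = amplitudes                                   # stochastic event sizes
    rng = np.random.default_rng(0)
    return np.convolve(spikes, kernel)[:T] + noise_std * rng.standard_normal(T)
```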
Given the convolutional form of this model, identifying the time-points of events has taken the form of deconvolution. These algorithms have taken the form of assuming functional forms over $h(t)$, such as exponential67 or double exponential124 kernels, or taken more model-free approaches in a deep-learning framework.130,131,134 Specific algorithms have also varied widely, including exact $\ell_0$-regularized optimization,127 marked point processes,129 interior point optimization,135 active-set methods,136 and variational autoencoders.134
One of the core difficulties in deconvolution is the probabilistic relationship between spikes and fluorescence. In addition to the biomarker levels being variable, simultaneous recordings of electrophysiology and calcium imaging show that at times optical imaging can miss single events at significant levels (i.e., missing 70% to 80% of individual spikes).137 Thus, in general across noise, indicators, and other experimental conditions, deconvolution may have highly varying performance limits. In fact, for assessment, benchmarking attempts have avoided direct timing comparisons, opting instead for using local rate averages over bins as a validation metric.130 Interestingly, spike ambiguity in optical imaging has seeded another approach: to remove spiking events from the equation—literally—by marginalizing out the spiking events and directly estimating a latent spike rate.138
4.6. Multisession Registration
The ability to record from the same identified population of neurons with functional optical imaging in longitudinal experiments across multiple days is a major advance and enables understanding of long-term processes such as learning and memory. One of the crucial postprocessing steps of such longitudinal experiments is the alignment of the recorded imaging data across days to enable one-to-one mapping of neurons across all sessions. This is essential to understanding the changes in neural representation over time. However, alignment is challenging due to the 3D nonrigid transformations between imaging sessions. These are a result of, for example, day-to-day variance in the imaging angle due to slight changes in the angular placement of the microscope objective, day-to-day variance in optical clarity of cranial windows, and changes in the brain tissue over days. Specifically, in TPM, this can lead to differences in the shape of recorded cells since slight z-drift and tilts can result in relatively large changes in the cross-section of a cell. Semiautomated approaches to calculate the transform between imaging sessions and match neurons exist; however, these rely on user input to select matching ROIs and only align pairs of sessions.69 Recent methods propose registering imaging sessions based on fully affine invariant methods originally developed for natural image registration.139 A recent approach140 based on the classical SIFT141 algorithm enables fast automatic registration of calcium imaging sessions and one-to-one matching of ROIs, even if the neuron was not detected in all sessions.
In one-photon imaging, alignment is challenging due to light scattering and lack of optical sectioning, which increase the similarity between the time traces of neighboring neurons in the FOV.142 In addition, only active cells can be tracked, as opposed to multiphoton imaging. Sheintuch et al.142 developed a probabilistic method for automated registration across one-photon imaging sessions that is adaptive and optimized to different datasets. First, all cells are mapped to the same image by registering each session to a reference session using a rigid transformation based on the centroid locations of extracted ROIs. Next, the probability for any pair of neighboring cells from different sessions to be the same cell is calculated, given their spatial correlation and centroid distance and a probabilistic model for similar and dissimilar matched cells. Cells are finally aligned across sessions by an iterative procedure based on the estimated probabilities.
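An illustrative (greatly simplified) cross-session matching sketch: after sessions have been registered to a common reference, candidate ROI pairs are scored by centroid distance and spatial-footprint correlation and matched greedily. The thresholds are illustrative; the published methods cited above use probabilistic models rather than fixed cutoffs.

```python
import numpy as np

def match_rois(footprints_a, footprints_b, max_dist=8.0, min_corr=0.6):
    """footprints_*: lists of (H, W) spatial profiles from two registered sessions.
    Returns a list of (index_in_a, index_in_b) matches."""
    def centroid(fp):
        ys, xs = np.nonzero(fp)
        w = fp[ys, xs]
        return np.array([np.average(ys, weights=w), np.average(xs, weights=w)])

    cents_a = [centroid(f) for f in footprints_a]
    cents_b = [centroid(f) for f in footprints_b]
    matches, used_b = [], set()
    for i, (fa, ca) in enumerate(zip(footprints_a, cents_a)):
        best, best_corr = None, min_corr
        for j, (fb, cb) in enumerate(zip(footprints_b, cents_b)):
            if j in used_b or np.linalg.norm(ca - cb) > max_dist:
                continue  # too far apart or already matched
            corr = np.corrcoef(fa.ravel(), fb.ravel())[0, 1]
            if corr > best_corr:
                best, best_corr = j, corr
        if best is not None:
            matches.append((i, best))
            used_b.add(best)
    return matches
```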
5. Widefield Imaging Analysis
Multiphoton microscopy, which was the main focus of the previous sections, provides a way to record neuronal activity at the cellular and sub-cellular levels in a given FOV, typically limited to a few hundred micrometers across. These recordings allow one to thoroughly investigate local microcircuits comprising 100s to 1000s of cells at a time. At the other end of the spectrum, cortex-wide (i.e., widefield) imaging trades cellular resolution for increased FOV (millimeters) and enables exploration of the overall activity through imaging of the entire cortical surface with one-photon illumination143 [Figs. 4(a) and 4(b)]. The captured signal exhibits significantly different statistics from micron-resolution imaging both in time and space: the spatial resolutions are too coarse to isolate activity signals of individual neurons, but instead can capture brain-wide activity patterns.48 As a developing modality, it presents many open questions regarding processing and analysis. Current approaches are nascent and inconsistent across labs. Moreover, validation and comparisons between distinct approaches are limited. In this section, we review the arising challenges and leading technological and computational approaches for capturing and processing widefield calcium imaging.
The collection of widefield signals is performed using a scientific CMOS camera, capable of imaging hundreds of frames per second. To control the acquired data size and increase the framerate, most researchers reduce the spatial sampling (e.g., by binning camera pixels), resulting in an effective resolution of tens of micrometers per pixel.
The acquired time traces reflect an aggregated summary of the neuronal activity captured from thousands of cells, cellular compartments, and depths (although mostly superficial layers12). Estimation of spike rates is usually not performed for widefield signals as the captured signals may originate from various cell parts such as axons, dendrites, as well as somas, each related to a different response kernel. While this issue can be resolved by calcium indicators targeting specific parts of the cell at the expense of limited temporal resolution,145,146 most researchers consider the standard GCaMP indicators the (current) best in terms of spatial and temporal resolution for capturing synaptic activity, providing a valuable tool for exploration of the dynamics of large-scale networks and their relation to complex behavior, perception, and cognition.147,148
Preprocessing of widefield recordings typically includes four stages: alignment, normalization, hemodynamics correction, and parcellation. Below we describe the motivation for each stage and common practices. Imaging of the entire cortical surface naturally enables analysis/modeling across animals. To facilitate a one-to-one correspondence of data acquired from different animals, frames of each session are registered to align to a global template of the cortex, according to several anatomical control points using an affine transform.149 In many cases, the captured time traces exhibit a slow decrease in baseline activity, ascribed to bleaching. This effect is easily removed by subtracting the slow trend (evaluated by low-pass filtering) from each pixel.150 To equalize spatial differences of expression levels, each pixel is normalized with respect to its own overall variance post detrending.
The next stage of preprocessing aims to correct the hemodynamic artifacts, which are unique to widefield signals and not typically present in multiphoton imaging. Fluctuations in blood flow and oxygenation alter the excitation and emission of photons due to hemoglobin absorption. This phenomenon contaminates the captured signals with unwanted dynamic components.12 The most common approach to correct this artifact is to alternate the excitation light with an additional reference channel, for example, UV light (~405 nm). As GCaMP6 is isosbestic to UV light, the emitted photons are assumed to be independent of neuronal activity, whereas the data channel (typically blue, ~470 nm) will cause emission of photons affected by fluctuations of both neuronal activity and hemodynamic signals.
The Beer–Lambert law is an exponential model for the measured light intensity as a function of wavelength, absorption, and traveled path. Assuming that temporal deviations from the average signal are small (as they often are for widefield calcium imaging) this relation is simplified by taking a first-order estimation of the signal in a given pixel
$$ \mathbf{y}_B(t) \approx \mathbf{c}(t) + \operatorname{diag}(\boldsymbol{\beta})\,\mathbf{y}_{UV}(t), \tag{8} $$

where $\mathbf{y}_B(t)$ and $\mathbf{y}_{UV}(t)$ are $N$-dimensional vectors of the recorded signals at time $t$ through the blue and UV channels, respectively, and $N$ is the number of pixels in a frame. The corrected signal, $\mathbf{c}(t)$, is evaluated using a pixelwise linear regression.150,151 Recently, a computational approach for improving hemodynamics reduction based on a single reference wavelength was proposed.152 This approach exploits spatial dependencies between pixels by formulating a multivariate model
$$ \mathbf{y}_B(t) = \mathbf{c}(t) + \mathbf{H}\,\mathbf{y}_{UV}(t) + \boldsymbol{\epsilon}(t), \tag{9} $$

where $\mathbf{y}_B(t)$, $\mathbf{y}_{UV}(t)$, and $\mathbf{c}(t)$ are $N$-dimensional vectors, and $N$ is the number of pixels included in a certain local patch. The corrected signal is estimated using the optimal linear predictor, $\widehat{\mathbf{c}}(t) = \mathbf{y}_B(t) - \boldsymbol{\Sigma}_{BU}\boldsymbol{\Sigma}_{UU}^{-1}\,\mathbf{y}_{UV}(t)$, where the covariance matrices are evaluated from the signals at both wavelengths. Taking the patch size to be $N=1$, this approach reduces to the pixelwise regression of Eq. (8), whereas using $N>1$ leads to improved reduction of the hemodynamics artifact.152 An alternative approach relies on using two reference wavelengths (thus alternating three channels altogether) to obtain a more accurate correction of hemodynamic absorption.153
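A pixelwise regression sketch in the spirit of Eq. (8): for each pixel, the mean-subtracted blue-channel trace is regressed on the UV reference trace and the fitted component is subtracted. The channel names, shapes, and the assumption of mean-subtracted inputs are illustrative.

```python
import numpy as np

def hemodynamic_correction(blue, uv):
    """blue, uv: (T, N) mean-subtracted traces for the data and reference channels.
    Returns the corrected (T, N) signal."""
    # Per-pixel regression coefficient beta_n = cov(blue_n, uv_n) / var(uv_n).
    beta = (blue * uv).sum(axis=0) / (uv * uv).sum(axis=0)
    return blue - beta[None, :] * uv
```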
The signals at this stage are time-traces of the activity at individual pixels. The data are high-dimensional in space (typically tens of thousands of pixels or more) and subject to several noise sources (e.g., electronic noise, photonic shot noise). The final stage of widefield preprocessing is therefore to extract a compact representation of brain activity and filter out the noise component. As with spatiotemporal decomposition models in TPM, most methods for extracting this representation can be formulated as a linear decomposition model: $\mathbf{Y} \approx \boldsymbol{\Phi}\mathbf{A}$, where $\mathbf{Y}\in\mathbb{R}^{T\times N}$ is a matrix of the activity of $N$ pixels over $T$ time frames, $\mathbf{A}\in\mathbb{R}^{K\times N}$ is a matrix of spatial components, $\boldsymbol{\Phi}\in\mathbb{R}^{T\times K}$ is a matrix of temporal components, and $K \ll N$. Different methods for computing this decomposition vary from solely relying on anatomical features to being completely data-driven, which affects the spatial interpretability of $\mathbf{A}$ accordingly. Choosing one method over another should be done considering what downstream analyses will be used and the overall biological hypothesis of the research.
Using singular value decomposition (SVD) to reduce the dimension of widefield data relies on the assumption that the variance of neuronal activity within the widefield signal is significantly higher than the noise variance. The activity is decomposed as $\mathbf{Y} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$, where $\boldsymbol{\Sigma}$ is a diagonal matrix of the singular values and $\mathbf{U}$ and $\mathbf{V}$ are orthogonal matrices, and a low-dimensional representation is obtained by setting

$$ \widehat{\mathbf{Y}} = \sum_{i=1}^{K} \sigma_i\, \mathbf{u}_i \mathbf{v}_i^\top, \tag{10} $$

where $\mathbf{u}_i$ and $\mathbf{v}_i$ are the $i$'th columns of $\mathbf{U}$ and $\mathbf{V}$, respectively, and $\sigma_i$ is the $i$'th singular value. The number of components, $K$, is selected to capture at least 80% to 90% of the variance explained of the data, assuming that the remaining 10% to 20% relates to noise. The spatial components are not constrained to be localized or non-negative, and therefore the temporal components extracted in Eq. (10) do not indicate a trace of activity related to a specific brain region. Postprocessing is performed in the reduced-dimension domain and then projected back to the full dimension using the spatial singular vectors. For example, in Ref. 149, SVD components were used to measure how well external variables (behavior and stimuli) can predict brain activity. The trained regression parameters, each computed per temporal component, were projected to the brain-mask domain to biologically interpret the statistical findings.
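A truncated-SVD sketch for widefield dimensionality reduction as in Eq. (10), keeping enough components to explain a target fraction of the variance; the variance target and the time-by-pixel layout follow the convention above and are otherwise illustrative.

```python
import numpy as np

def truncated_svd(Y, var_target=0.9):
    """Y: (T, N) time-by-pixel widefield movie (mean-subtracted).
    Returns temporal (T, K) and spatial (K, N) components."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    K = int(np.searchsorted(explained, var_target) + 1)   # smallest K reaching target
    temporal = U[:, :K] * s[:K]          # (T, K) temporal components
    spatial = Vt[:K]                     # (K, N) spatial components
    return temporal, spatial
```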
A different approach is to take advantage of the spatial structure of widefield signals, where adjacent pixels are typically highly correlated while noise is uncorrelated. Therefore, a compact and (spatially) filtered representation can be obtained by dividing the brain into localized subregions (parcels) and extracting the average trace within each parcel. In this case, $\mathbf{A}$ is non-negative, where each row relates to a specific brain parcel with the corresponding column in $\boldsymbol{\Phi}$ as its activity time trace. Unlike imaging of local circuits, where identifying cell boundaries is a well-defined task (although not simple for automation), detection of parcel boundaries is not straightforward.
The most common approach for cortex-wide parcellation is to use a predefined atlas based on anatomical features, such as the common coordinate framework (CCFv3) proposed by the Allen Institute for Brain Science154 or the mouse brain atlas of Paxinos and Franklin155 [Fig. 4(c)]. Parcellation based on anatomical atlases presents many advantages—each brain parcel represents a well-known biological functionality (e.g., vision, motor, and sensory) and a straightforward way to compare neuronal activity across animals (and studies). However, it is often observed that the spatial patterns of activity in some regions are not well described by anatomical outlines [e.g., via principal components analysis, Fig. 4(d)]. Identification of cortical subregion borders can be performed experimentally by presenting sensory stimuli, e.g., for the auditory cortex156 or the visual cortex.157 These methods are highly efficient for detecting boundaries of subregions within a specific cortical area (visual cortex and auditory cortex), but cannot be used to detect regions that are not responsive to sensory stimuli. Therefore, computational methods for functional parcellation [Fig. 4(c)] have been a major target of research in recent years for calcium imaging as well as in the fMRI community.158
Localized semi-nonnegative matrix factorization (LocaNMF) is a recently proposed approach aiming to tackle this issue by formulating an optimization problem that minimizes the mean-squared error between the widefield signal and the estimated signal such that the spatial components are non-negative and localized according to anatomical clusters.34 LocaNMF produces, by nature, spatial patterns that are similar to the anatomical boundaries used by the optimization process and therefore typically does not deviate much from the anatomical atlas. Related to NMF, a linear one-hidden-layer autoencoder has also been used to identify parcellations in auditory cortex.159 Similar in spirit, GraFT,35 being agnostic to spatial morphology due to its underlying graph-based modeling, has also been applied to extracting (potentially overlapping) widefield spatial maps in rat and mouse macroscopic data. In a recent study, functional parcellation took a different turn by adding a temporal component to brain parcellation. This approach is based on finding repeated spatiotemporal patterns of activation termed motifs, which can be viewed as time-varying brain parcels. The overall activity is therefore represented as a sum of convolution terms between each motif and its corresponding time trace.160
A different approach for brain parcellation is to cluster the brain into regions of coactivity, with no regard to anatomical features. In this case, the spatial component matrix is composed of binary vectors, each corresponding to a specific brain parcel. Li et al.161 proposed an iterative greedy algorithm for parcellating the brain based on correlation similarity. Other approaches treat pixels as graph nodes and use correlations as the weights of the edges connecting them. Parcels are then obtained by clustering the graph using Ncut, where the number of parcels is a hyperparameter,162 or a greedy adaptation of spectral clustering, where the number of parcels is learned from the data.163
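A hedged sketch of this correlation-graph route, using scikit-learn's SpectralClustering on brain-mask pixels (the input array and the number of parcels are assumptions, and a full-resolution correlation matrix may require spatial downsampling to fit in memory):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def correlation_parcellation(traces, n_parcels=20):
    """traces: (n_pixels, n_frames) widefield data restricted to brain-mask pixels.
    Builds a correlation graph over pixels and cuts it into n_parcels clusters."""
    corr = np.corrcoef(traces)                 # pixel-by-pixel correlation matrix
    affinity = np.clip(corr, 0.0, None)        # keep positive correlations as edge weights
    labels = SpectralClustering(
        n_clusters=n_parcels,
        affinity="precomputed",
        assign_labels="discretize",
        random_state=0,
    ).fit_predict(affinity)
    return labels                              # one parcel label per pixel
```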
Overall, the outputs of functional parcellation methods describe the spatial distribution of coactivity in a given session (per animal). These patterns may be consistent across sessions163 but are not, in general, uniform across animals and naturally vary with the nature of the experiment (e.g., spontaneous activity versus task-directed behavior). In that regard, as with the SVD approach, postprocessing values produced per functional parcel (e.g., goodness of fit and modeling coefficients) can be projected onto the brain mask using the spatial components.
Mapping of cortical neuronal activity has also recently been addressed experimentally through multimodal imaging, in which widefield calcium imaging is performed simultaneously with fMRI,164 two-photon imaging of local circuits,150 or electrophysiology,165–168 and specific regions are identified as highly correlated with a specific cell (or subpopulation).
6. Modern Challenges in Optical Functional Imaging
While there remain numerous challenges in widespread multiphoton and widefield imaging, there are further emerging challenges that will necessitate new data processing solutions in the near future. We outline here a number of primary avenues that we believe show promise but have many outstanding problems to solve.
6.1. Imaging Morphologies Beyond the Soma
More recently, variants of optical imaging have aimed to expand the scope of accessible brain signals by imaging both larger and smaller neural structures. At one end of this spectrum, zooming in enables the imaging of dendritic and spine structures, which captures how individual neurons communicate.169–174 Dendritic175 and axonal176 imaging, while also having sparse temporal statistics, can have long, thin spatial profiles that span the entire FOV. We note that these types of neural morphologies are also important in some species, such as Drosophila melanogaster, where dendritic activity is vital to tracking neural processing.177,178 Approaches to dendritic/axonal imaging have largely followed the path of somatic imaging analysis. For example, recent versions of Suite2p69 can be run in “dendrite mode.” The long, stringy morphologies, however, are at odds with typical built-in assumptions of spatial locality. A more recent approach instead redefines pixel ordering on a data-driven graph to better identify irregular morphologies.35 Astrocytes also exhibit irregular morphologies and sizes, for which targeted approaches have been recently developed.179,180 Interestingly, larger-scale optical imaging of hemodynamic activity also captures components (i.e., blood vessels) that have spatial statistics more similar to dendrites than to somas.181
6.2. Voltage Imaging
Voltage imaging is a technique that has been used for decades to record changes in neural activity using voltage-sensitive dyes.182 Voltage imaging, as compared with recording changes in calcium, is a more direct way of measuring neural signals. A comparison of population responses to optical recordings using calcium indicators and voltage indicators showed major differences in the temporal response of the recorded calcium signal as compared with the voltage signal.183 Widefield voltage imaging has also been explored184 and faces many of the same challenges as widefield imaging with calcium sensors (see Sec. 4.6).
Technology for the optical recording of voltage signals has improved rapidly over the past few years. The development of improved voltage sensors in the form of bright genetically encoded voltage indicators (GEVIs) has enabled high-resolution voltage recordings at multiple spatial scales.185 Genetic targeting of these indicators to subcellular compartments isolates the signal to particular neuronal structures, further increasing the signal-to-background ratio (SBR). These improvements have allowed researchers to generate optical voltage recordings from populations of cells in awake, behaving animals.186
Several challenges still exist that prevent the generation of large-scale optical voltage recordings. Voltage indicators are membrane-bound, which limits the total concentration of the sensor as compared with calcium indicators, which may fill the whole cytoplasm. Generally, GEVIs are, at least for now, dimmer than their genetically encoded calcium indicator (GECI) counterparts. To take advantage of the improved temporal response function, recordings must be made at much higher frame rates than calcium imaging (on the order of kilohertz versus tens of hertz). The combination of these challenges reduces the overall spatial scale of current cellular- or subcellular-resolution recordings with voltage indicators.
Current voltage imaging analyses include both matrix factorization91,187 and deep learning52 demixing approaches similar to the methods used in calcium imaging. The potential for low SNR and non-Gaussian noise statistics can complicate demixing. Moreover, extremely high temporal resolutions create much larger datasets, increasing the computational cost of processing. Finally, non-negativity is a basic assumption built into many calcium imaging analysis methods (i.e., deviations from baseline are only positive); voltage traces have no such constraint.
6.3. Computational Imaging
One critical avenue that may completely upend much of how optical imaging data are processed is computational imaging. Computational imaging represents a paradigm wherein optical and algorithmic components are codesigned to compensate for and enhance each other, achieving superior results to advances in either area independently.188,189 Codesigned approaches are nascent in in-vivo functional imaging of the brain, with only a few examples aimed at faster imaging41,190 or volumetric imaging.16,45,191,192 The algorithmic designs for computational imaging tend to require specialized and often unique processing elements that invert the optical path of the codesigned microscope.189 Examples include tomography,41 light-field imaging,45,192–194 combined light-field imaging and tomography,191 stereoscopy,16 and computational systems for imaging through scattering tissue.195
While basic denoising or other discussed techniques might still be applicable, data from more highly coded optics may not even be amenable to the same motion correction, let alone to the other advances in demixing of individual neuronal signals. For example, light-field and stereoscopic methods need demixing approaches that decode projections of volumetric components onto 2D sensor arrays. Thus, new frontiers are constantly expanding and requiring novel advances in our handling of functional optical data.
7. Validation and Assessment
One of the most difficult tasks in creating widely applicable and robust calcium image processing methods is proper assessment.196 The mismatch between the necessarily simplified statistical assumptions of the signal processing models and the actual data properties must be explored in terms of its effect on the fidelity of extracted signals and, when possible, on the later scientific analyses. A prime example of such an effect is explored in Ref. 92, in which it is shown that the commonly used i.i.d. Gaussian noise assumption can create bleed-through between overlapping cells and additional fluorescent biological processes in the tissue. The result is that a high fraction (between 15% and 25%) of transient events in the time traces are false transients, i.e., they do not reflect the activity of that cell but rather of other, nearby fluorescing components in the tissue. While significant in and of itself, it is further noted that these errors can distort the interpretation of the neural activity, including skewing the discovery of location encoding in the hippocampus.92
To identify the accuracy of optical imaging processing algorithms, a number of avenues have emerged. Specifically, four currently available avenues are assessment based on (1) manual annotation, (2) biophysical simulation, (3) local self-consistency of global decompositions, and (4) consistency with external experimental variables.
We note that a fifth form of validating signals extracted from optical measurements takes the form of simultaneous electrophysiological recording and optical imaging.130,137,197 The number of cells that can be recorded electrophysiologically at the same time as imaging within an FOV, however, is limited. Thus, these recordings have primarily taken a role in assessing either the accuracy of estimated spikes from optically recorded calcium130,197 or the similarity between electrically and optically recorded neural signals.137 To date, this approach does not reach the scale required to completely assess the processing of a full FOV.
7.1. Manual Annotation
The most basic assessment of optical imaging analysis is the manual labeling of cells. This is typically done by annotating an image summarizing the structure in an imaging session, e.g., a temporal max projection, temporal average, local temporal correlation,198 or nonlinear embedding.199 We note that while manual labeling is typically done on the processed data, labels can also be obtained via anatomical imaging (e.g., z-stacks or nuclear labeling with activity-invariant fluorescent proteins). Comparison to manual annotation gives clear metrics: hits (manually identified cells that overlapped significantly with a match in the returned spatial profiles) and misses (those with no match) [Fig. 5(a)].
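A hedged sketch of how such hits and misses can be counted (the intersection-over-union threshold of 0.5 and the greedy matching rule are arbitrary illustrative choices, not a standard from the cited works):

```python
import numpy as np

def match_components(manual_masks, found_masks, iou_thresh=0.5):
    """Greedy matching of manual ROI masks to algorithmically found spatial profiles.
    Both inputs are lists of boolean (H, W) arrays. Returns matched pairs (hits)
    and indices of unmatched manual cells (misses)."""
    iou = np.zeros((len(manual_masks), len(found_masks)))
    for i, m in enumerate(manual_masks):
        for j, f in enumerate(found_masks):
            inter = np.logical_and(m, f).sum()
            union = np.logical_or(m, f).sum()
            iou[i, j] = inter / union if union else 0.0
    hits, used = [], set()
    for i in np.argsort(-iou.max(axis=1)):      # match the best-overlapping cells first
        j = int(np.argmax(iou[i]))
        if iou[i, j] >= iou_thresh and j not in used:
            hits.append((i, j))
            used.add(j)
    misses = [i for i in range(len(manual_masks)) if i not in {h[0] for h in hits}]
    return hits, misses
```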
However, there are limitations to assessment via manual annotation. Time traces are not available as ground truth and must be inferred from the data given the annotations. Additionally, the annotations are often incomplete, excluding sparsely firing cells or nonsomatic components. The effect is that found profiles that do not match well to the manual annotations could appear as “false alarms” but in fact fall into many categories. They may be actual cells in the data missed by the annotator, they may be algorithmic artifacts caused by merging or splitting parts of cells,92 or they may be overfitting to noise or neuropil. Thus, at best, manual annotations give a lower bound on true hits and missed detections but not much information on false positives. Some of the limitations stemming from human error can be removed in new datasets consisting of electron microscopy reconstructions of the imaged tissue.51 Such datasets provide anatomical ground truth in the form of a registrable volume against which to match spatial components, although time traces are still unavailable.
7.2. Biophysical Simulations
Another important form of assessment is in-silico simulation.123,201 Simulation plays a key role in assessing the fundamental limits of algorithms across signal processing and machine learning. In particular, simulations are vital when ground-truth information is difficult to obtain, either efficiently or at all. Functional fluorescence microscopy is exactly one such situation: the time, effort, and expense of simultaneously recording fluorescence imaging and electrophysiology yield only a limited portion of the ground-truth data. Simulations offer a potential solution by leveraging anatomical and physical knowledge of the system to generate data in which the underlying activity and anatomy driving the synthetic observations are completely known and can be used for comparison [Fig. 5(b)].
Simulations, however, pose their own risks. Simplistic simulations can miss complexities in the real-data imaging statistics, e.g., non-Gaussian or non-i.i.d. noise. Complex simulations run the opposite risk of being so detailed that the computational run-time and memory requirements become excessive, capturing details irrelevant to the assessment being conducted. Recent work on the neural anatomy and optical microscopy (NAOMi) simulator seeks to balance these competing needs.123 To date, NAOMi has been applied to a number of scenarios, such as testing different demixing algorithms,35 denoising methods,32 sensitivity to negative transients,202 and testing/training spike inference algorithms.131
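As a toy illustration of the simulation idea only (far simpler than NAOMi's biophysical and optical modeling; all parameters below are arbitrary), one can generate a synthetic movie with known cell shapes and spike times and then score any pipeline against that known ground truth:

```python
import numpy as np

def toy_calcium_movie(n_cells=30, size=64, n_frames=1000, rate=0.01, seed=0):
    """Toy simulation: Gaussian-blob cells, Poisson spikes, an exponentially decaying
    calcium kernel, and photon-count-like noise. Returns the movie plus the ground-truth
    spatial profiles and spike trains used to generate it."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:size, :size]
    A = np.zeros((n_cells, size, size))
    for k in range(n_cells):
        cy, cx = rng.uniform(5, size - 5, 2)
        A[k] = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.5 ** 2))
    spikes = rng.random((n_cells, n_frames)) < rate          # ground-truth spike times
    kernel = np.exp(-np.arange(50) / 10.0)                   # calcium decay (in frames)
    calcium = np.array([np.convolve(s.astype(float), kernel)[:n_frames] for s in spikes])
    clean = np.tensordot(calcium.T, A, axes=1) + 0.1         # (n_frames, size, size) + baseline
    movie = rng.poisson(100 * clean) / 100.0                 # shot-noise-like observations
    return movie, A, spikes
```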
7.3. Data Consistency
A third form of assessment uses no ground-truth data, manual or synthetic, and instead focuses on the self-consistency of the data model. The spatial profiles and time traces give a global decomposition of the data by minimizing data fidelity and regularization terms over the entire dataset. In this approach, one can focus on a smaller segment of the data and check whether the global decomposition matches the local statistics. Specifically, one can check whether, in the movie frames during which a given cell was purported to have fired (a transient burst in the time trace), the shape of the cell truly appears in the video. Recent work used this local averaging idea to show that many algorithms are not locally consistent, i.e., that activity from different sources bleeds into each other92 [Fig. 5(c)]. The resulting errors (termed false transients) can influence scientific findings and appear in the time-trace estimates of many different algorithms and across many datasets.92 To address this finding, additional work has sought to develop more robust time-trace estimators that prevent these errors.87,92,94
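A minimal sketch of this local consistency check (the z-score threshold and the use of a simple correlation score are assumptions for illustration, not the exact procedure of Ref. 92):

```python
import numpy as np

def transient_consistency(movie, profile, trace, thresh=2.5):
    """movie: (n_frames, H, W); profile: (H, W) reported spatial footprint;
    trace: (n_frames,) reported activity. Returns the correlation between the
    transient-triggered image and the reported profile (NaN if no transients)."""
    z = (trace - np.median(trace)) / (np.std(trace) + 1e-12)
    active = z > thresh                                   # frames flagged as transients
    if not active.any():
        return np.nan
    # difference between the average frame during transients and the overall average
    active_img = movie[active].mean(axis=0) - movie.mean(axis=0)
    return np.corrcoef(active_img.ravel(), profile.ravel())[0, 1]
```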
7.4. Consistency with External Measures
The final form of validation we discuss is with respect to external measures outside the imaged brain area. To explore the relation between brain activity and behavior, perception, and cognition, most experimental setups record, simultaneously with the brain activity, external variables such as spontaneous behavior (whisking, running, and pupil size), responses to sensory stimuli, and task-related behavior. A thorough examination of the extracted traces of activity with respect to external variables can be an important tool for the assessment of fluorescence microscopy processing. For example, aligning the activity traces to external onsets, such as repeated presentations of a sensory cue, specific trained behaviors, or even running onsets, allows one to examine brain activity in a behavioral context. Applying this strategy should be done with caution, as uninstructed behavior may cause significant cell-to-cell and trial-to-trial variability.149 Still, averaging across dozens of well-pronounced presentations (visual or auditory cues, for example) and across cells usually yields a significant increase of activity in the visual/auditory cortices203,204 or a distinct pattern of activation of the motor cortex in well-trained animals200 [Fig. 5(d)]. For widefield imaging, these same strategies can be applied with respect to the appropriate brain parcels. If no sensory cues are presented, averaging across running onsets should lead to a significant increase in the overall activity of the cortex.152 Alternatively, additional modalities of neuronal activity can be used for validation of calcium imaging signals, such as electrophysiology,115 fMRI,164 or dual widefield and two-photon imaging.150
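In practice, this alignment is an event-triggered average; a minimal sketch follows (window lengths in frames are arbitrary, and the inputs are hypothetical cell or parcel traces plus event onset frames):

```python
import numpy as np

def event_triggered_average(traces, onsets, pre=30, post=90):
    """traces: (n_components, n_frames) cell or parcel activity; onsets: iterable of
    event frame indices. Returns the mean response in a [-pre, +post) frame window,
    averaged across all events that fit entirely within the recording."""
    n, T = traces.shape
    segments = [traces[:, t - pre:t + post]
                for t in onsets if t - pre >= 0 and t + post <= T]
    return np.mean(segments, axis=0)   # (n_components, pre + post) event-aligned average
```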
8. Discussion
We have aimed to review here a large portion of the literature related to the analysis of functional optical microscopy data. The topics we have covered aim to provide a practical overview of how optics choices affect signal processing challenges, and the many methods that have been developed to solve these challenges. While broad categories, such as denoising, motion correction, and in particular signal extraction, have been explored in detail in the literature, there are many other challenges that have yet to be solved.
For one, robust alignment across sessions is an ongoing challenge. While matching cells based on anatomical morphology is currently possible, effects such as nonlinear shearing in the brain and axial drift can create situations where only portions of an FOV can be recovered and aligned across recording sessions. Even within long sessions, these effects can remove cells from parts of the recording or reduce the signal quality. Identifying the periods when cells are not visible is critical in determining how subsequent analyses should interpret zero values. Passing zero values for these times into typical analyses implicitly assumes that the cell is not firing, rather than that the neuron's activity is unavailable. Instead, missing-data methods must be employed, which require knowing exactly when the data are missing. As chronic recordings become more commonplace, we expect that cross-session alignment and axial-shift compensation will become critical hurdles to overcome.
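One simple convention (a hypothetical example, not a prescribed standard) is to mark unobserved periods explicitly rather than zero-filling them and then to use statistics that respect that mask:

```python
import numpy as np

def mean_activity_ignoring_missing(trace, visible):
    """trace: (n_frames,) activity of one cell; visible: boolean mask of frames in which
    the cell could actually be recovered from the FOV. Missing periods are marked as NaN
    instead of being zero-filled (which would read as 'not firing')."""
    masked = np.where(visible, trace, np.nan)
    return np.nanmean(masked)          # NaN-aware average over observed frames only
```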
Another growing area of interest is the real-time analysis of functional microscopy data. Real-time analysis enables closing of the loop, i.e., the ability to use estimates of neural activity to drive future experimental trials.205 While basic manual annotation is trivial to move to an online setting, full demixing algorithms that solve many of the aforementioned challenges are still in their nascent stages. Initial work is promising in the ability to motion correct104 and infer calcium activity.206,207 However, the ability to infer activity and cell shapes completely online for dense neural fields is still an ongoing research direction.
A critical aspect of analysis methods that we have not discussed here is the computational cost across methods. There are large variations in cost, from simple averaging to training entire deep neural networks. Unfortunately, in some regards these disparities mirror the high variability in the computational infrastructure available across labs. Moreover, as optical imaging of neurons continues to advance, computationally efficient techniques will only become more critical. Already, recordings at moderate fields of view can contain thousands to tens of thousands of neurons, and new methods leveraging the latest microscope designs record even larger populations at once.10,11 Furthermore, volumetric (3D) imaging complicates image analysis by rendering many planar segmentation methods inapplicable.16,38,208–213 In these cases, new methods will need to be developed, and computational efficiency will be key to analyzing these very high dimensional datasets.
Another topic we have not elaborated upon is the effect of indicators on signal analysis. We have focused primarily on generic properties of fast and slow calcium indicators and, as a faster point of comparison, voltage indicators. Most indicators share basic characteristics with these classes. In the spirit of universal analysis pipelines, one possible approach is to further abstract all indicators into basic quantities, e.g., quantum efficiency, coupling affinity, etc., which can be used to tune parameters in more generic versions of the current algorithms. This approach, which requires a precise estimate of these quantities and their variation, would greatly benefit the user base.
Another change with different indicators is the assumption of non-negativity. Calcium imaging analysis has largely assumed only positive deviations from baseline. Voltage imaging and widefield calcium imaging both display negative dips as well, representing additional degrees of variation that need to be accommodated in more general pipelines. We note that, interestingly, even for calcium imaging, new explorations discuss the presence of so-called negative transients, as well as the ability of various algorithms to cope with these unexpected dips.202
One increasingly popular group of approaches in the analysis of functional optical imaging is to leverage advances in deep learning. Critical to the success of deep learning approaches are (1) the availability of training data and (2) the generalization of the trained system to new datasets. For training data, most systems use NeuroFinder197 and/or the Allen Institute Mouse Brain Observatory.214 However, both datasets suffer from mislabeled data, missing labels, and erroneous time-trace estimates.92 Generalization is tougher to ensure, as image statistics can affect deep learning systems in myriad unexpected ways. Therefore, extensive experimentation is required, e.g., by testing a trained system on imaging from different depths to explore the effect of tissue distortion.84 Thus, although pretrained networks are fast to run, these challenges create a high level of uncertainty as to their accuracy on new data. Individual labs can instead choose to annotate their data and train networks from scratch, such that they are optimized for the imaging used locally. While this solves some of the aforementioned challenges (assuming care is taken in annotation), the training procedure can be intensive relative to a lab's computational capabilities.
One path to improving training data has been to augment the dataset, such as using random rotations and reflections.215 For optical imaging, augmentation should ideally reflect the degrees of variation known from characterizations of optical-tissue interactions. The use of biophysical simulators to generate synthetic training data, or even to modulate real data, can prove useful. In fact, recent work used the NAOMi simulation suite123 to generate data for training a spike estimation network.131 Finally, another consideration in deep learning approaches is the heavy class imbalance, both spatial and temporal, between neurons and background in functional fluorescence imaging datasets: background frequently dominates the acquired FOV, and the desired neural activity may be temporally sparse.
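A minimal sketch of the basic geometric augmentations mentioned above, applied jointly to a frame and its label mask (optics-informed augmentations, e.g., depth-dependent blur or varying noise levels, would go beyond this simple example):

```python
import numpy as np

def augment(image, mask, rng):
    """Random 90-degree rotations and flips applied identically to an imaging frame
    and its segmentation mask so that labels stay aligned with the transformed image."""
    k = rng.integers(4)                                   # 0-3 quarter turns
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)   # horizontal reflection
    if rng.random() < 0.5:
        image, mask = np.flipud(image), np.flipud(mask)   # vertical reflection
    return image.copy(), mask.copy()
```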
As we have noted, assessment of data segmentation quality is, in general, challenging. Evaluation datasets must reflect typical use cases of the community, and therefore they need to be constantly updated and expanded. Due to the rapid expansion of functional microscopy technology, the typical use case is already eclipsing standard datasets, requiring new benchmarks to be defined. New datasets are beginning to fill the void, e.g., in voltage imaging.52 However, the diversity and complexity of imaging data are only growing. For example, it would be immensely valuable to the community to provide benchmark data for especially difficult situations, such as cortex-wide widefield data. Even more powerful would be an amalgamation of different benchmark datasets, both real and synthetic, similar to what SpikeForest216 has done for spike sorting algorithms. Only with such a wide-ranging set of tests will the strengths and weaknesses of different algorithms become apparent.
Assessment is even more difficult in widefield data, where anatomy provides minimal aid in assessing the validity of spatial component shapes. Neighboring brain areas can share tracts of coordinated activity, and activity can likewise be constrained to a small portion of a brain area. Parcellations thus need to be carefully considered in the context of the behavior and other recordings. Along these lines is the nature of parcellations themselves. An unsolved question currently facing the community is whether parcels can overlap: current methods include both solutions that allow34,35,159 and solutions that prohibit161–163 overlapping parcels. The two choices confer different interpretations: nonoverlapping parcels provide a solution wherein each component represents a given brain area's activity, while overlapping parcels can enable the time traces to be better event- or behavior-locked when a brain region serves multiple roles. Future work should explore the interplay between these two windows into widefield data.
One final major concern is the reproducibility and accessibility of functional imaging analysis techniques (a concern that we note is not restricted to this modality217). Making code available and easier to use33,218 is only the first step on this path. Learning the intricacies of robustly running the software in new hardware environments, such as local clusters or desktops, requires hard-earned expertise. One option is to ensure that the software is robust and tested on many systems. This approach requires systems engineering skills that are not typically within the budget or scope of a research lab; instead, a concerted effort across an entire community is required, with the community coalescing around, and contributing actively to, a specific approach. An alternative being explored is to have individual labs containerize their software, i.e., create shareable virtual environments that are tested with the given software. In neuroscience, an emerging example is the NeuroCAAS system,219 which provides Dockerized implementations of algorithms to be run, e.g., on Amazon Web Services.
In conclusion, functional optical imaging in neuroscience is rapidly growing, and accurate, automated processing of the massive data being generated is becoming increasingly essential to continued progress in understanding the brain. Solving these challenges will require both methods that enable, e.g., real-time, robust, and fast analyses and well-engineered infrastructure that democratizes the current advances to the growing number of labs employing functional microscopy in their experiments. We thus expect that this area will continue to grow rapidly in the next decade, drawing on increased interest from labs across neuroscience, data science, imaging, and other related disciplines.
Acknowledgment
G.M. was supported by funding from the NIH Grant Nos. R01EB026936 and U19NS123717.
Biographies
Hadas Benisty is an associate research scientist in the Neuroscience Department at Yale University. She is an expert in developing interpretable models, aiming to investigate the dynamics of high-dimensional neuronal networks and how their organizational principles and plasticity relate to behavior. Before joining Yale University, she was a post-doc in the EE Department at Technion. She received her PhD in electrical engineering and BSc degrees in electrical engineering and physics from Technion.
Alexander Song received his BS degree from Cornell University in 2013 and his MS and PhD degrees from Princeton University in 2019. His doctoral research focused on developing microscopy techniques for neuroscience research. He joined the MPI for Intelligent Systems, Stuttgart, as a postdoctoral researcher in 2019 and is working on developing hardware for optical computation.
Gal Mishne is an assistant professor in the Halıcıoğlu Data Science Institute (HDSI) at UCSD, and affiliated with the ECE and CSE Departments and the Neurosciences Graduate Program. Before joining UCSD, she was a Gibbs assistant professor in Applied Math at Yale University, with Prof. Ronald Coifman’s research group. She received her PhD in electrical engineering from Technion in 2017. Her research interests include high-dimensional data analysis, manifold learning, and computational neuroscience.
Adam S. Charles is an assistant professor in the Department of Biomedical Engineering at Johns Hopkins University. He received his master’s and bachelor’s degrees from Cooper Union in NYC and his PhD in electrical and computer engineering from Georgia Tech. He then held a post-doc position at the Princeton Neuroscience Institute. His interests are at the intersection of computational and theoretical neuroscience and data science as well as focusing on next-generation algorithms for extracting meaning from complex neural data.
Disclosures
The authors declare no conflicts of interest in the preparation and publication of this work.
Contributor Information
Hadas Benisty, Email: hadas.benesti@yale.edu.
Alexander Song, Email: asong@is.mpg.de.
Gal Mishne, Email: gmishne@ucsd.edu.
Adam S. Charles, Email: adamsc@jhu.edu.
References
- 1.Jun J. J., et al. , “Fully integrated silicon probes for high-density recording of neural activity,” Nature 551(7679), 232 (2017). 10.1038/nature24636 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Steinmetz N. A., et al. , “Neuropixels 2.0: a miniaturized high-density probe for stable, long-term brain recordings,” Science 372(6539), eabf4588 (2021). 10.1126/science.abf4588 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Mace E., et al. , “Functional ultrasound imaging of the brain: theory and basic principles,” IEEE Trans. Ultrason. Freq. Control 60(3), 492–506 (2013). 10.1109/TUFFC.2013.2592 [DOI] [PubMed] [Google Scholar]
- 4.Urban A., et al. , “Real-time imaging of brain activity in freely moving rats using functional ultrasound,” Nat. Methods 12(9), 873–878 (2015). 10.1038/nmeth.3482 [DOI] [PubMed] [Google Scholar]
- 5.Takahashi D. Y., et al. , “Social-vocal brain networks in a non-human primate,” bioRxiv (2021).
- 6.Hady A. E., et al. , “Chronic brain functional ultrasound imaging in freely moving rodents performing cognitive tasks,” bioRxiv (2022). [DOI] [PMC free article] [PubMed]
- 7.Tik M., et al. , “Ultra-high-field fMRI insights on insight: neural correlates of the Aha!-moment,” Hum. Brain Mapp. 39(8), 3241–3252 (2018). 10.1002/hbm.24073 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Viessmann O., Polimeni J. R.. “High-resolution fMRI at 7 Tesla: challenges, promises and recent developments for individual-focused fMRI studies,” Curr. Opin. Behav. Sci. 40, 96–104 (2021). 10.1016/j.cobeha.2021.01.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Huber L., et al. , “Ultra-high resolution blood volume fMRI and BOLD fMRI in humans at 9.4 T: capabilities and challenges,” Neuroimage 178, 769–779 (2018). 10.1016/j.neuroimage.2018.06.025 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Bouchard M. B., et al. , “Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms,” Nat. Photonics 9(2), 113–119 (2015). 10.1038/nphoton.2014.323 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Demas J., et al. , “High-speed, cortex-wide volumetric recording of neuroactivity at cellular resolution using light beads microscopy,” Nat. Methods 18(9), 1103–1111 (2021). 10.1038/s41592-021-01239-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Ma Y., et al. , “Wide-field optical mapping of neural activity and brain hemodynamics: considerations and novel approaches,” Philos. Trans. R. Soc. B Biol. Sci. 371(1705), 20150360 (2016). 10.1098/rstb.2015.0360 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Nunez-Elizalde A. O., et al. , “Neural basis of functional ultrasound signals,” bioRxiv (2021).
- 14.Xie T., et al. , “GRIN lens rod based probe for endoscopic spectral domain optical coherence tomography with fast dynamic focus tracking,” Opt. Express 14(8), 3238–3246 (2006). 10.1364/OE.14.003238 [DOI] [PubMed] [Google Scholar]
- 15.Wang C., Ji N., “Characterization and improvement of three-dimensional imaging performance of GRIN-lens-based two-photon fluorescence endomicroscopes with adaptive optics,” Opt. Express 21(22), 27142–27154 (2013). 10.1364/OE.21.027142 [DOI] [PubMed] [Google Scholar]
- 16.Song A., et al. , “Volumetric two-photon imaging of neurons using stereoscopy (vTwINS),” Nat. Methods 14(4), 420 (2017). 10.1038/nmeth.4226 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Beaulieu D. R., et al. , “Simultaneous multiplane imaging with reverberation multiphoton microscopy,” arXiv:1812.05162 (2018). [DOI] [PMC free article] [PubMed]
- 18.Shao W., et al. , “Wide field-of-view volumetric imaging by a mesoscopic scanning oblique plane microscopy with switchable objective lenses,” Quant. Imaging Med. Surg. 11(3), 983–997 (2020). 10.21037/qims-20-806 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Badura A., et al. , “Fast calcium sensor proteins for monitoring neural activity,” Neurophotonics 1(2), 025008 (2014). 10.1117/1.NPh.1.2.025008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Kannan M., et al. , “Fast, in vivo voltage imaging using a red fluorescent indicator,” Nat. Methods 15(12), 1108 (2018). 10.1038/s41592-018-0188-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Shen Y., et al. , “Genetically encoded fluorescent indicators for imaging intracellular potassium ion concentration,” Commun. Biol. 2(1), 18 (2019). 10.1038/s42003-018-0269-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Chen T.-W., et al. , “Ultrasensitive fluorescent proteins for imaging neuronal activity,” Nature 499(7458), 295 (2013). 10.1038/nature12354 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Dana H., et al. , “High-performance calcium sensors for imaging activity in neuronal populations and microcompartments,” Nat. Methods 16(7), 649–657 (2019). 10.1038/s41592-019-0435-6 [DOI] [PubMed] [Google Scholar]
- 24.Denk W., Strickler J. H., Webb W. W., “Two-photon laser scanning fluorescence microscopy,” Science 248(4951), 73–76 (1990). 10.1126/science.2321027 [DOI] [PubMed] [Google Scholar]
- 25.Jung J. C, et al. , “In vivo mammalian brain imaging using one-and two-photon fluorescence microendoscopy,” J. Neurophysiol. 92(5), 3121–3133 (2004). 10.1152/jn.00234.2004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Rodriguez C., et al. , “Three-photon fluorescence microscopy with an axially elongated bessel focus,” Opt. Lett. 43(8), 1914–1917 (2018). 10.1364/OL.43.001914 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Sofroniew N. J., et al. , “A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging,” Elife 5, e14472 (2016). 10.7554/eLife.14472 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Yasuda R., et al. , “Supersensitive RAS activation in dendrites and spines revealed by two-photon fluorescence lifetime imaging,” Nat. Neurosci. 9(2), 283 (2006). 10.1038/nn1635 [DOI] [PubMed] [Google Scholar]
- 29.Homma R., et al. , “Wide-field and two-photon imaging of brain activity with voltage and calcium-sensitive dyes,” in Dynamic Brain Imaging,Hyder F., ed., pp. 43–79, Springer; (2009). [DOI] [PubMed] [Google Scholar]
- 30.Jacob A. D., et al. , “A compact head-mounted endoscope for in vivo calcium imaging in freely behaving mice,” Curr. Protoc. Neurosci. 84(1), e51 (2018). 10.1002/cpns.51 [DOI] [PubMed] [Google Scholar]
- 31.Zhang L., et al. , “Miniscope GRIN lens system for calcium imaging of neuronal activity from deep brain structures in behaving animals,” Curr. Protoc. Neurosci. 86(1), e56 (2019). 10.1002/cpns.56 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Lecoq J., et al. , “Removing independent noise in systems neuroscience data using deepinterpolation,” Nat. Methods 18(11),1401–1408 (2021). 10.1038/s41592-021-01285-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Cantu D. A., et al. , “Ezcalcium: open-source toolbox for analysis of calcium imaging data,” Front. Neural Circuits 14, 25 (2020). 10.3389/fncir.2020.00025 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Saxena S., et al. , “Localized semi-nonnegative matrix factorization (locanmf) of widefield calcium imaging data,” PLoS Comput. Biol. 16(4), e1007791 (2020). 10.1371/journal.pcbi.1007791 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Charles A. S., et al. , “Graft: graph filtered temporal dictionary learning for functional neural imaging,” IEEE Trans. Image Process. 31, 3509–3524 (2022). 10.1109/TIP.2022.3171414 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Stosiek C., et al. , “In vivo two-photon calcium imaging of neuronal networks,” Proc. Natl. Acad. Sci. U. S. A. 100(12), 7319–7324 (2003). 10.1073/pnas.1232232100 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Zipfel W. R., Williams R. M., Webb W.W., “Nonlinear magic: multiphoton microscopy in the biosciences,” Nat. Biotechnol. 21(11), 1369–1377 (2003). 10.1038/nbt899 [DOI] [PubMed] [Google Scholar]
- 38.Lu R., et al. , “Video-rate volumetric functional imaging of the brain at synaptic resolution,” Nat. Neurosci. 20(4), 620–628 (2017). 10.1038/nn.4516 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Ouzounov D. G., et al. , “In vivo three-photon imaging of activity of GCaMP6-labeled neurons deep in intact mouse brain,” Nat. Methods 14(4), 388–390 (2017). 10.1038/nmeth.4183 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.Katona G., et al. , “Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes,” Nat. Methods 9(2), 201–208 (2012). 10.1038/nmeth.1851 [DOI] [PubMed] [Google Scholar]
- 41.Kazemipour A., et al. , “Kilohertz frame-rate two-photon tomography,” Nat. Methods 16(8), 778–786 (2019). 10.1038/s41592-019-0493-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Zhang T., et al. , “Kilohertz two-photon brain imaging in awake mice,” Nat. Methods 16(11), 1119–1122 (2019). 10.1038/s41592-019-0597-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Vladimirov N., et al. , “Light-sheet functional imaging in fictively behaving zebrafish,” Nat. Methods 11(9), 883–884 (2014). 10.1038/nmeth.3040 [DOI] [PubMed] [Google Scholar]
- 44.Skocek O., et al. , “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018). 10.1038/s41592-018-0008-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Zhang Z., et al. , “Imaging volumetric dynamics at high speed in mouse and zebrafish brain with confocal light field microscopy,” Nat. Biotechnol. 39(1), 74–83 (2021). 10.1038/s41587-020-0628-7 [DOI] [PubMed] [Google Scholar]
- 46.Ghosh K. K., et al. , “Miniaturized integration of a fluorescence microscope,” Nat. Methods 8(10), 871–878 (2011). 10.1038/nmeth.1694 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Cai D. J., et al. , “A shared neural ensemble links distinct contextual memories encoded close in time,” Nature 534(7605), 115–118 (2016). 10.1038/nature17955 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Scott B. B, et al. , “Imaging cortical dynamics in GCaMP transgenic rats with a head-mounted widefield macroscope,” Neuron 100(5), 1045–1058.e5 (2018). 10.1016/j.neuron.2018.09.050 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Giovannucci A., et al. , “CaImAn an open source tool for scalable calcium imaging data analysis,” Elife 8, e38173 (2019). 10.7554/eLife.38173 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Aggarwal A., et al. , “Glutamate indicators with improved activation kinetics and localization for imaging synaptic transmission,” bioRxiv (2022). [DOI] [PMC free article] [PubMed]
- 51.Zhou P., et al. , “Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data,” Elife 7, e28728 (2018). 10.7554/eLife.28728 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Cai C., et al. , “VolPy: automated and scalable analysis pipelines for voltage imaging datasets,” PLoS Comput. Biol. 17(4), e1008806 (2021). 10.1371/journal.pcbi.1008806 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Chen X., et al. , “Brain-wide organization of neuronal activity and convergent sensorimotor transformations in larval zebrafish,” Neuron 100(4), 876–890.e5 (2018). 10.1016/j.neuron.2018.09.042 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Ellinger P, “Fluorescence microscopy in biology,” Biol. Rev. 15(3), 323–347 (1940). 10.1111/j.1469-185X.1940.tb00761.x [DOI] [Google Scholar]
- 55.Kayasandik C. B., Labate D., “Improved detection of soma location and morphology in fluorescence microscopy images of neurons,” J. Neurosci. Methods 274, 61–70 (2016). 10.1016/j.jneumeth.2016.09.007 [DOI] [PubMed] [Google Scholar]
- 56.Korfhage N., et al. , “Detection and segmentation of morphologically complex eukaryotic cells in fluorescence microscopy images via feature pyramid fusion,” PLOS Comput. Biol. 16(9), e1008179 (2020). 10.1371/journal.pcbi.1008179 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Xu Y. K. T., et al. , “Automated in vivo tracking of cortical oligodendrocytes,” Front. Cell. Neurosci. 15, 667595 (2021). 10.3389/fncel.2021.667595 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58.Cohen L. B., Salzberg B. M., “Optical measurement of membrane potential,” Rev. Physiol. Biochem. Pharmacol. 83, 35–88 (1978). 10.1007/3-540-08907-1_2 [DOI] [PubMed] [Google Scholar]
- 59.Grinvald A., “Real-time optical mapping of neuronal activity: from single growth cones to the intact mammalian brain,” Annu. Rev. Neurosci. 8(1), 263–305 (1985). 10.1146/annurev.ne.08.030185.001403 [DOI] [PubMed] [Google Scholar]
- 60.Tsien R. Y., “New calcium indicators and buffers with high selectivity against magnesium and protons: design, synthesis, and properties of prototype structures,” Biochemistry 19(11), 2396–2404 (1980). 10.1021/bi00552a018 [DOI] [PubMed] [Google Scholar]
- 61.Cunningham J. P., Byron M. Y., “Dimensionality reduction for large-scale neural recordings,” Nat. Neurosci. 17(11), 1500–1509 (2014). 10.1038/nn.3776 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 62.Mishne G., et al. , “Hierarchical coupled-geometry analysis for neuronal structure and activity pattern discovery,” IEEE J. Sel. Top. Signal Process. 10(7), 1238–1253 (2016). 10.1109/JSTSP.2016.2602061 [DOI] [Google Scholar]
- 63.Benisty H., et al. , “Rapid fluctuations in functional connectivity of cortical networks encode spontaneous behavior,” bioRxiv (2021). [DOI] [PMC free article] [PubMed]
- 64.Vyas S., et al. , “Computation through neural population dynamics,” Annu. Rev. Neurosci. 43, 249–275 (2020). 10.1146/annurev-neuro-092619-094115 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Pillow J. W., et al. , “Spatio-temporal correlations and visual signalling in a complete neuronal population,” Nature 454(7207), 995–999 (2008). 10.1038/nature07140 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66.Mukamel E. A, Nimmerjahn A., Schnitzer M. J., “Automated analysis of cellular signals from large-scale calcium imaging data,” Neuron 63(6), 747–760 (2009). 10.1016/j.neuron.2009.08.009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 67.Pnevmatikakis E. A., et al. , “Simultaneous denoising, deconvolution, and demixing of calcium imaging data,” Neuron 89(2), 285–299 (2016). 10.1016/j.neuron.2015.11.037 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68.Haeffele B. D, Vidal R., “Structured low-rank matrix factorization: global optimality, algorithms, and applications,” IEEE Trans. Pattern Anal. Mach. Intell. 42(6), 1468–1482 (2020). 10.1109/TPAMI.2019.2900306 [DOI] [PubMed] [Google Scholar]
- 69.Pachitariu M., et al. , “Suite2p: beyond 10,000 neurons with standard two-photon microscopy,” bioRxiv (2016).
- 70.Elad M., Figueiredo M. A. T., Ma Y., “On the role of sparse and redundant representations in image processing,” Proc. IEEE 98(6), 972–982 (2010). 10.1109/JPROC.2009.2037655 [DOI] [Google Scholar]
- 71.Olshausen B. A., Field D. J., “Emergence of simple-cell receptive field properties by learning a sparse code for natural images,” Nature 381(6583), 607 (1996). 10.1038/381607a0 [DOI] [PubMed] [Google Scholar]
- 72.Pachitariu M., et al. , “Extracting regions of interest from biological images with convolutional sparse block coding,” in Adv. Neural Inf. Process. Syst., pp. 1745–1753 (2013). [Google Scholar]
- 73.Diego F., Hamprecht F. A., “Sparse space-time deconvolution for calcium image analysis,” in NIPS, pp. 64–72 (2014). [Google Scholar]
- 74.Petersen A., Simon N., Witten D., “Scalpel: extracting neurons from calcium imaging data,” Ann. Appl. Stat. 12(4), 2430 (2018). 10.1214/18-AOAS1159 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 75.Klibisz A., et al. , “Fast, simple calcium imaging segmentation with fully convolutional networks,” Lect. Notes Comput. Sci. 10553, 285–293 (2017). 10.1007/978-3-319-67558-9_33 [DOI] [Google Scholar]
- 76.Ronneberger O., Fischer P., Brox T., “U-Net: convolutional networks for biomedical image segmentation,” Lect. Notes Comput. Sci. 9351, 234–241 (2015). 10.1007/978-3-319-24574-4_28 [DOI] [Google Scholar]
- 77.Kaifosh P., et al. , “Sima: Python software for analysis of dynamic fluorescence imaging data,” Front. Neuroinf. 8, 80 (2014). 10.3389/fninf.2014.00080 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 78.Spaen Q., et al. , “HNCcorr: a novel combinatorial approach for cell identification in calcium-imaging movies,” eNeuro 6(2), 1–19 (2019). 10.1523/ENEURO.0304-18.2019 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 79.Mishne G., et al. , “Automated cellular structure extraction in biological images with applications to calcium imaging data,” bioRxiv 313981 (2018).
- 80.Reynolds S., et al. , “ABLE: an activity-based level set segmentation algorithm for two-photon calcium imaging data,” eNeuro 4(5), 1–13 (2017). 10.1523/ENEURO.0012-17.2017 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 81.Chan T., Vese L., “An active contour model without edges,” Lect. Notes Comput. Sci. 1682, 141–151 (1999). 10.1007/3-540-48236-9_13 [DOI] [Google Scholar]
- 82.Osher S., Sethian J. A., “Fronts propagating with curvature-dependent speed: Algorithms based on hamilton-jacobi formulations,” J. Comput. Phys. 79(1), 12–49 (1988). 10.1016/0021-9991(88)90002-2 [DOI] [Google Scholar]
- 83.Apthorpe N., et al. , “Automatic neuron detection in calcium imaging data using convolutional networks,” in Adv. Neural Inf. Process. Syst., Vol. 29, pp. 3270–3278 (2016). [Google Scholar]
- 84.Soltanian-Zadeh S., et al. , “Fast and robust active neuron segmentation in two-photon calcium imaging using spatiotemporal deep learning,” Proc. Natl. Acad. Sci. U. S. A. 116(17), 8554–8563 (2019). 10.1073/pnas.1812995116 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 85.Bao Y., et al. , “Segmentation of neurons from fluorescence calcium recordings beyond real time,” Nat. Mach. Intell. 3, 590–600 (2021). 10.1038/s42256-021-00342-x [DOI] [PMC free article] [PubMed] [Google Scholar]
- 86.Kirschbaum E., Bailoni A., Hamprecht F. A., “Disco: deep learning, instance segmentation, and correlations for cell segmentation in calcium imaging,” Lect. Notes Comput. Sci. 12265, 151–162 (2020). 10.1007/978-3-030-59722-1_15 [DOI] [Google Scholar]
- 87.Inan H., Erdogdu M. A., Schnitzer M., “Robust estimation of neural signals in calcium imaging,” in NIPS, pp. 2905–2914 (2017). [Google Scholar]
- 88.Maruyama R., et al. , “Detecting cells using non-negative matrix factorization on calcium imaging data,” Neural Netw. 55, 11–19 (2014). 10.1016/j.neunet.2014.03.007 [DOI] [PubMed] [Google Scholar]
- 89.Mishne G., Charles A. S., “Learning spatially-correlated temporal dictionaries for calcium imaging,” in IEEE Int. Conf. Acoustics, Speech and Signal Process. (ICASSP-2019), IEEE, pp. 1065–1069 (2019). 10.1109/ICASSP.2019.8683375 [DOI] [Google Scholar]
- 90.Tran L. M., et al. , “Automated curation of cnmf-e-extracted roi spatial footprints and calcium traces using open-source automl tools,” Front. Neural Circuits 14, 42 (2020). 10.3389/fncir.2020.00042 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 91.Buchanan E. K., et al. , “Penalized matrix decomposition for denoising, compression, and improved demixing of functional imaging data,” arXiv:1807.06203 (2018).
- 92.Gauthier J. L, et al. , “Detecting and correcting false transients in calcium imaging,” bioRxiv 473470 (2018). [DOI] [PMC free article] [PubMed]
- 93.Wei X.-X., et al. , “A zero-inflated gamma model for deconvolved calcium imaging traces,” arXiv:2006.03737 (2020).
- 94.Denis J., et al. , “Deepcinac: a deep-learning-based python toolbox for inferring calcium imaging neuronal activity based on movie visualization,” eNeuro 7(4), 1–15 (2020). 10.1523/ENEURO.0038-20.2020 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 95.Stringer C., Pachitariu M., “Computational processing of neural recordings from calcium imaging data,” Curr. Opin. Neurobiol. 55, 22–31 (2019). 10.1016/j.conb.2018.11.005 [DOI] [PubMed] [Google Scholar]
- 96.Laffray S., et al. , “Adaptive movement compensation for in vivo imaging of fast cellular dynamics within a moving tissue,” PloS One 6(5), e19928 (2011). 10.1371/journal.pone.0019928 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 97.Chen J. L., et al. , “Online correction of licking-induced brain motion during two-photon imaging with a tunable lens,” J. Physiol. 591(19), 4689–4698 (2013). 10.1113/jphysiol.2013.259804 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 98.Collman F. C., High Resolution Imaging in Awake Behaving Mice: Motion Correction and Virtual Reality, Princeton University; (2010). [Google Scholar]
- 99.Sekiguchi K. J., et al. , “Imaging large-scale cellular activity in spinal cord of freely behaving mice,” Nat. Commun. 7(1), 1–13 (2016). 10.1038/ncomms11450 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 100.Guizar-Sicairos M., Thurman S. T., Fienup J. R., “Efficient subpixel image registration algorithms,” Opt. Lett. 33(2), 156–158 (2008). 10.1364/OL.33.000156 [DOI] [PubMed] [Google Scholar]
- 101.Dubbs A., Guevara J., Yuste R., “Moco: fast motion correction for calcium imaging,” Front. Neuroinf. 10, 6 (2016). 10.3389/fninf.2016.00006 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 102.Mitani A., Komiyama T., “Real-time processing of two-photon calcium imaging data including lateral motion artifact correction,” Front. Neuroinf. 12, 98 (2018). 10.3389/fninf.2018.00098 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 103.Hattori R., Komiyama T., “Patchwarp: corrections of non-uniform image distortions in two-photon calcium imaging data by patchwork affine transformations,” bioRxiv (2021). [DOI] [PMC free article] [PubMed]
- 104.Aghayee S., et al. , “Particle tracking facilitates real time capable motion correction in 2d or 3d two-photon imaging of neuronal activity,” Front. Neural Circuits 11, 56 (2017). 10.3389/fncir.2017.00056 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 105.Pachitariu M., Carandini M. S. C., Harris K. D., “Drift correction for electrophysiology and two-photon calcium imaging,” Comput. Syst. Neurosci. III–82 (2018). 10.25378/janelia.5946574.v1 [DOI] [Google Scholar]
- 106.Pnevmatikakis E. A., Giovannucci A., “Normcorre: an online algorithm for piecewise rigid motion correction of calcium imaging data,” J. Neurosci. Methods 291, 83–94 (2017). 10.1016/j.jneumeth.2017.07.031 [DOI] [PubMed] [Google Scholar]
- 107.Voigts J., Harnett M. T., “An animal-actuated rotational head-fixation system for 2-photon imaging during 2-d navigation,” bioRxiv 262543 (2018).
- 108.Voigts J., Harnett M. T., “Somatic and dendritic encoding of spatial variables in retrosplenial cortex differs during 2D navigation,” Neuron 105(2), 237–245.e4 (2020). 10.1016/j.neuron.2019.10.016 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 109.Charles A. S., et al. , “Stochastic filtering of two-photon imaging using reweighted l1,” in IEEE Int. Conf. Acoustics, Speech and Signal Process. (ICASSP), IEEE, pp. 1038–1042 (2017). 10.1109/ICASSP.2017.7952314 [DOI] [Google Scholar]
- 110.Li X., et al. , “Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising,” Nat. Methods 18(11), 1395–1400 (2021). 10.1038/s41592-021-01225-0 [DOI] [PubMed] [Google Scholar]
- 111.Friedrich J., et al. , “Multi-scale approaches for high-speed imaging and analysis of large neural populations,” PLoS Comput. Biol. 13(8), e1005685 (2017). 10.1371/journal.pcbi.1005685 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 112.Donoho D. L., Johnstone J. M., “Ideal spatial adaptation by wavelet shrinkage,” Biometrika 81(3), 425–455 (1994). 10.1093/biomet/81.3.425 [DOI] [Google Scholar]
- 113.Chang S. G., Yu B., Vetterli M., “Adaptive wavelet thresholding for image denoising and compression,” IEEE Trans. Image Process. 9(9), 1532–1546 (2000). 10.1109/83.862633 [DOI] [PubMed] [Google Scholar]
- 114.Weigert M., et al. “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018). 10.1038/s41592-018-0216-7 [DOI] [PubMed] [Google Scholar]
- 115.Wei Z., et al. , “A comparison of neuronal population dynamics measured with calcium imaging and electrophysiology,” PLoS Comput. Biol. 16(9), e1008198 (2020). 10.1371/journal.pcbi.1008198 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 116.Nolan R., “Algorithms for the correction of photobleaching,” PhD thesis, University of Oxford (2018). [Google Scholar]
- 117.Kubler S., et al. , “A robust and versatile framework to compare spike detection methods in calcium imaging of neuronal activity,” in IEEE 18th Int. Symp. Biomed. Imaging (ISBI), IEEE, pp. 375–379 (2021). 10.1109/ISBI48211.2021.9433951 [DOI] [Google Scholar]
- 118.Cutrale F., et al. , “Using enhanced number and brightness to measure protein oligomerization dynamics in live cells,” Nat. Protoc. 14(2), 616–638 (2019). 10.1038/s41596-018-0111-9 [DOI] [PubMed] [Google Scholar]
- 119.Keemink S. W., et al. , “Fissa: a neuropil decontamination toolbox for calcium imaging signals,” Sci. Rep. 8(1), 3493 (2018). 10.1038/s41598-018-21640-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 120.Dana H., et al. , “Thy1-GCaMP6 transgenic mice for neuronal population imaging in vivo,” PloS One 9(9), e108697 (2014). 10.1371/journal.pone.0108697 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 121.Daigle T. L., et al. , “A suite of transgenic driver and reporter mouse lines with enhanced brain-cell-type targeting and functionality,” Cell 174(2), 465–480.e22 (2018). 10.1016/j.cell.2018.06.035 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 122.Helmchen F., “Calibration of fluorescent calcium indicators,” Cold Spring Harbor Protoc. 2011(8), pdb–top120 (2011). 10.1101/pdb.top120 [DOI] [PubMed] [Google Scholar]
- 123.Song A., et al. , “Neural anatomy and optical microscopy (naomi) simulation for evaluating calcium imaging methods,” J. Neurosci. Methods 358, 109173 (2021). 10.1016/j.jneumeth.2021.109173 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 124.Pachitariu M., Stringer C., Harris K. D, “Robustness of spike deconvolution for neuronal calcium imaging,” J. Neurosci. 38(37), 7976–7985 (2018). 10.1523/JNEUROSCI.3339-17.2018 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 125.Friedrich J., Zhou P., Paninski L., “Fast online deconvolution of calcium imaging data,” PLoS Comput. Biol. 13(3), e1005423 (2017). 10.1371/journal.pcbi.1005423 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 126.Evans M. H., Petersen R. S., Humphries M. D., “On the use of calcium deconvolution algorithms in practical contexts,” bioRxiv 871137 (2019).
- 127.Jewell S. W., et al. , “Fast nonconvex deconvolution of calcium imaging data,” Biostatistics 21(4), 709–726 (2020). 10.1093/biostatistics/kxy083 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 128.Pnevmatikakis E., Paninski L., “Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions,” in Adv. Neural Inf. Process. Syst., pp. 1250–1258 (2013). [Google Scholar]
- 129.Shibue R., Komaki F., “Deconvolution of calcium imaging data using marked point processes,” PLoS Comput. Biol. 16(3), e1007650 (2020). 10.1371/journal.pcbi.1007650 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 130.Theis L., et al. , “Benchmarking spike rate inference in population calcium imaging,” Neuron 90(3), 471–482 (2016). 10.1016/j.neuron.2016.04.014 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 131.Rupprecht P., et al. , “A database and deep learning toolbox for noise-optimized, generalized spike inference from calcium imaging,” Nat. Neurosci. 24(9), 1324–1337 (2021). 10.1038/s41593-021-00895-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 132.Helmchen F., Tank D. W., “A single-compartment model of calcium dynamics in nerve terminals and dendrites,” Cold Spring Harbor Protoc. 2015(2), pdb–top085910 (2015). 10.1101/pdb.top085910 [DOI] [PubMed] [Google Scholar]
- 133. Lütcke H., et al., “Inference of neuronal network spike dynamics and topology from calcium imaging data,” Front. Neural Circuits 7, 201 (2013). 10.3389/fncir.2013.00201
- 134. Speiser A., et al., “Fast amortized inference of neural activity from calcium imaging data with variational autoencoders,” in Proc. 31st Int. Conf. Neural Inf. Process. Syst., pp. 4027–4037 (2017).
- 135. Vogelstein J. T., et al., “Fast nonnegative deconvolution for spike train inference from population calcium imaging,” J. Neurophysiol. 104(6), 3691–3704 (2010). 10.1152/jn.01073.2009
- 136. Friedrich J., Zhou P., Paninski L., “Fast active set methods for online deconvolution of calcium imaging data,” arXiv (2016).
- 137. Huang L., et al., “Relationship between simultaneously recorded spiking activity and fluorescence signal in GCaMP6 transgenic mice,” eLife 10, e51675 (2021). 10.7554/eLife.51675
- 138. Ganmor E., et al., “Direct estimation of firing rates from calcium imaging data,” arXiv:1601.00364 (2016).
- 139. Li C., et al., “Fully affine invariant methods for cross-session registration of calcium imaging data,” eNeuro 7(4), 1–12 (2020). 10.1523/ENEURO.0054-20.2020
- 140. Yadav S., et al., “Multi-session alignment for longitudinal calcium imaging,” in COSYNE (2021).
- 141. Lowe D. G., “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004). 10.1023/B:VISI.0000029664.99615.94
- 142. Sheintuch L., et al., “Tracking the same neurons across multiple days in Ca2+ imaging data,” Cell Rep. 21(4), 1102–1115 (2017). 10.1016/j.celrep.2017.10.013
- 143. Cardin J. A., Crair M. C., Higley M. J., “Mesoscopic imaging: shining a wide light on large-scale neural dynamics,” Neuron 108(1), 33–43 (2020). 10.1016/j.neuron.2020.09.031
- 144. Churchland A. K., et al., “Dataset from: Single-trial neural dynamics are dominated by richly varied movements,” http://repository.cshl.edu/id/eprint/38599 (2020).
- 145. Bengtson C. P., et al., “Nuclear calcium sensors reveal that repetition of trains of synaptic stimuli boosts nuclear calcium signaling in CA1 pyramidal neurons,” Biophys. J. 99(12), 4066–4077 (2010). 10.1016/j.bpj.2010.10.044
- 146. Kim C. K., et al., “Prolonged, brain-wide expression of nuclear-localized GCaMP3 for functional circuit mapping,” Front. Neural Circuits 8, 138 (2014). 10.3389/fncir.2014.00138
- 147. Cramer J. V., et al., “In vivo widefield calcium imaging of the mouse cortex for analysis of network connectivity in health and brain disease,” NeuroImage 199, 570–584 (2019). 10.1016/j.neuroimage.2019.06.014
- 148. Ren C., Komiyama T., “Characterizing cortex-wide dynamics with wide-field calcium imaging,” J. Neurosci. 41(19), 4160–4168 (2021). 10.1523/JNEUROSCI.3003-20.2021
- 149. Musall S., et al., “Single-trial neural dynamics are dominated by richly varied movements,” Nat. Neurosci. 22(10), 1677–1686 (2019). 10.1038/s41593-019-0502-4
- 150. Barson D., et al., “Simultaneous mesoscopic and two-photon imaging of neuronal activity in cortical circuits,” Nat. Methods 17(1), 107–113 (2020). 10.1038/s41592-019-0625-2
- 151. Tian L., et al., “Imaging neural activity in worms, flies and mice with improved GCaMP calcium indicators,” Nat. Methods 6(12), 875–881 (2009). 10.1038/nmeth.1398
- 152. Lohani S., et al., “Dual color mesoscopic imaging reveals spatiotemporally heterogeneous coordination of cholinergic and neocortical activity,” bioRxiv (2020).
- 153. Valley M. T., et al., “Separation of hemodynamic signals from GCaMP fluorescence measured with wide-field imaging,” J. Neurophysiol. 123(1), 356–366 (2020). 10.1152/jn.00304.2019
- 154. Oh S. W., et al., “A mesoscale connectome of the mouse brain,” Nature 508(7495), 207–214 (2014). 10.1038/nature13186
- 155. Paxinos G., Franklin K. B. J., The Mouse Brain in Stereotaxic Coordinates, pp. 1–93, Elsevier (2001).
- 156. Romero S., et al., “Cellular and widefield imaging of sound frequency organization in primary and higher order fields of the mouse auditory cortex,” Cereb. Cortex 30(3), 1603–1622 (2020). 10.1093/cercor/bhz190
- 157. Sit K. K., Goard M. J., “Distributed and retinotopically asymmetric processing of coherent motion in mouse visual cortex,” Nat. Commun. 11(1), 1–14 (2020). 10.1038/s41467-020-17283-5
- 158. Zhi D., King M., Diedrichsen J., “Evaluating brain parcellations using the distance controlled boundary coefficient,” bioRxiv (2021).
- 159. Liu J., et al., “Parallel processing of sound dynamics across mouse auditory cortex via spatially patterned thalamic inputs and distinct areal intracortical circuits,” Cell Rep. 27(3), 872–885.e7 (2019). 10.1016/j.celrep.2019.03.069
- 160. MacDowell C. J., Buschman T. J., “Low-dimensional spatiotemporal dynamics underlie cortex-wide neural activity,” Curr. Biol. 30(14), 2665–2680.e8 (2020). 10.1016/j.cub.2020.04.090
- 161. Li M., et al., “Density center-based fast clustering of widefield fluorescence imaging of cortical mesoscale functional connectivity and relation to structural connectivity,” Neurophotonics 6(4), 045014 (2019). 10.1117/1.NPh.6.4.045014
- 162. Lake E. M. R., et al., “Spanning spatiotemporal scales with simultaneous mesoscopic Ca2+ imaging and functional MRI,” bioRxiv 464305 (2018).
- 163. Mishne G., et al., “Calcium imaging data analysis (CIDAN): a multiscale approach for extraction of neuronal structures from calcium imaging data,” in preparation.
- 164. Lake E. M. R., et al., “Simultaneous cortex-wide fluorescence Ca2+ imaging and whole-brain fMRI,” Nat. Methods 17(12), 1262–1271 (2020). 10.1038/s41592-020-00984-6
- 165. Xiao D., et al., “Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons,” eLife 6, e19976 (2017). 10.7554/eLife.19976
- 166. Clancy K. B., Orsolic I., Mrsic-Flogel T. D., “Locomotion-dependent remapping of distributed cortical networks,” Nat. Neurosci. 22(5), 778–786 (2019). 10.1038/s41593-019-0357-8
- 167. Liu X., et al., “Multimodal neural recordings with Neuro-FITM uncover diverse patterns of cortical–hippocampal interactions,” Nat. Neurosci. 24(6), 886–896 (2021). 10.1038/s41593-021-00841-5
- 168. Peters A. J., et al., “Striatal activity topographically reflects cortical activity,” Nature 591(7850), 420–425 (2021). 10.1038/s41586-020-03166-8
- 169. Denk W., et al., “Imaging calcium dynamics in dendritic spines,” Curr. Opin. Neurobiol. 6(3), 372–378 (1996). 10.1016/S0959-4388(96)80122-X
- 170. Kerlin A., et al., “Functional clustering of dendritic activity during decision-making,” eLife 8, e46966 (2019). 10.7554/eLife.46966
- 171. Suratkal S. S., Yen Y.-H., Nishiyama J., “Imaging dendritic spines: molecular organization and signaling for plasticity,” Curr. Opin. Neurobiol. 67, 66–74 (2021). 10.1016/j.conb.2020.08.006
- 172. Graves A. R., et al., “Visualizing synaptic plasticity in vivo by large-scale imaging of endogenous AMPA receptors,” eLife 10, e66809 (2021). 10.7554/eLife.66809
- 173. Ali F., Kwan A. C., “Interpreting in vivo calcium signals from neuronal cell bodies, axons, and dendrites: a review,” Neurophotonics 7(1), 011402 (2020). 10.1117/1.NPh.7.1.011402
- 174. Sabatini B. L., Maravall M., Svoboda K., “Ca2+ signaling in dendritic spines,” Curr. Opin. Neurobiol. 11(3), 349–356 (2001). 10.1016/S0959-4388(00)00218-X
- 175. Xu N.-L., et al., “Nonlinear dendritic integration of sensory and motor input during an active sensing task,” Nature 492(7428), 247–251 (2012). 10.1038/nature11601
- 176. Broussard G. J., et al., “In vivo measurement of afferent activity with axon-specific calcium imaging,” Nat. Neurosci. 21(9), 1272–1280 (2018). 10.1038/s41593-018-0211-4
- 177. Seelig J. D., et al., “Two-photon calcium imaging from head-fixed Drosophila during optomotor walking behavior,” Nat. Methods 7(7), 535–540 (2010). 10.1038/nmeth.1468
- 178. Vajente N., et al., “Calcium imaging in Drosophila melanogaster,” Adv. Exp. Med. Biol. 1131, 881–900 (2020). 10.1007/978-3-030-12457-1_35
- 179. Srinivasan R., et al., “Ca2+ signaling in astrocytes from Ip3r2-/- mice in brain slices and during startle responses in vivo,” Nat. Neurosci. 18(5), 708–717 (2015). 10.1038/nn.4001
- 180. Wang Y., et al., “Accurate quantification of astrocyte and neurotransmitter fluorescence dynamics for single-cell and population-level physiology,” Nat. Neurosci. 22(11), 1936–1944 (2019). 10.1038/s41593-019-0492-2
- 181. O’Herron P., et al., “Neural correlates of single-vessel haemodynamic responses in vivo,” Nature 534(7607), 378–382 (2016). 10.1038/nature17965
- 182. Ebner T. J., Chen G., “Use of voltage-sensitive dyes and optical recordings in the central nervous system,” Prog. Neurobiol. 46(5), 463–506 (1995). 10.1016/0301-0082(95)00010-S
- 183. Zhu M. H., et al., “Population imaging discrepancies between a genetically-encoded calcium indicator (GECI) versus a genetically-encoded voltage indicator (GEVI),” Sci. Rep. 11(1), 1–15 (2021). 10.1038/s41598-021-84651-6
- 184. Mohajerani M. H., et al., “Mirrored bilateral slow-wave cortical activity within local circuits revealed by fast bihemispheric voltage-sensitive dye imaging in anesthetized and awake mice,” J. Neurosci. 30(10), 3745–3751 (2010). 10.1523/JNEUROSCI.6437-09.2010
- 185. Knöpfel T., Song C., “Optical voltage imaging in neurons: moving from technology development to practical tool,” Nat. Rev. Neurosci. 20(12), 719–727 (2019). 10.1038/s41583-019-0231-4
- 186. Piatkevich K. D., et al., “Population imaging of neural activity in awake behaving mice,” Nature 574(7778), 413–417 (2019). 10.1038/s41586-019-1641-1
- 187. Xie M. E., et al., “High-fidelity estimates of spikes and subthreshold waveforms from 1-photon voltage imaging in vivo,” Cell Rep. 35(1), 108954 (2021). 10.1016/j.celrep.2021.108954
- 188. Mait J. N., Euliss G. W., Athale R. A., “Computational imaging,” Adv. Opt. Photonics 10(2), 409–483 (2018). 10.1364/AOP.10.000409
- 189. Waller L., “Physics-constrained computational imaging,” Proc. SPIE 11469, 114690M (2020). 10.1117/12.2571478
- 190. Deb D., et al., “Programmable 3D snapshot microscopy with Fourier convolutional networks,” arXiv:2104.10611 (2021).
- 191. Wu J., et al., “Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3D subcellular dynamics at millisecond scale,” Cell 184(12), 3318–3332.e17 (2021). 10.1016/j.cell.2021.04.029
- 192. Xue Y., et al., “Single-shot 3D wide-field fluorescence imaging with a computational miniature mesoscope,” Sci. Adv. 6(43), eabb7508 (2020). 10.1126/sciadv.abb7508
- 193. Yoon Y.-G., et al., “Sparse decomposition light-field microscopy for high speed imaging of neuronal activity,” Optica 7(10), 1457–1468 (2020). 10.1364/OPTICA.392805
- 194. Zhang Y., et al., “Computational optical sectioning with an incoherent multiscale scattering model for light-field microscopy,” Nat. Commun. 12(1), 1–11 (2021). 10.1038/s41467-021-26730-w
- 195. Moretti C., Gigan S., “Readout of fluorescence functional signals through highly scattering tissue,” Nat. Photonics 14(6), 361–364 (2020). 10.1038/s41566-020-0612-2
- 196. Pnevmatikakis E. A., “Analysis pipelines for calcium imaging data,” Curr. Opin. Neurobiol. 55, 15–21 (2019). 10.1016/j.conb.2018.11.004
- 197. Berens P., et al., “Standardizing and benchmarking data analysis for calcium imaging,” in COSYNE (2017).
- 198. Smith S. L., Häusser M., “Parallel processing of visual space by neighboring neurons in mouse visual cortex,” Nat. Neurosci. 13(9), 1144–1149 (2010). 10.1038/nn.2620
- 199. Cheng X., Mishne G., “Spectral embedding norm: looking deep into the spectrum of the graph Laplacian,” SIAM J. Imaging Sci. 13(2), 1015–1048 (2020). 10.1137/18M1283160
- 200. Levy S., et al., “Cell-type-specific outcome representation in the primary motor cortex,” Neuron 107(5), 954–971.e9 (2020). 10.1016/j.neuron.2020.06.006
- 201. Li B. M., et al., “CalciumGAN: a generative adversarial network model for synthesising realistic calcium imaging data of neuronal populations,” arXiv:2009.02707 (2020).
- 202. Vanwalleghem G., Constantin L., Scott E. K., “Calcium imaging and the curse of negativity,” Front. Neural Circuits 14, 607391 (2021). 10.3389/fncir.2020.607391
- 203. Glickfeld L. L., Histed M. H., Maunsell J. H. R., “Mouse primary visual cortex is used to detect both orientation and contrast changes,” J. Neurosci. 33(50), 19416–19422 (2013). 10.1523/JNEUROSCI.3560-13.2013
- 204. Kato H. K., Gillet S. N., Isaacson J. S., “Flexible sensory representations in auditory cortex driven by behavioral relevance,” Neuron 88(5), 1027–1039 (2015). 10.1016/j.neuron.2015.10.024
- 205. Charles A. S., et al., “Dethroning the Fano factor: a flexible, model-based approach to partitioning neural variability,” Neural Comput. 30(4), 1012–1045 (2018). 10.1162/neco_a_01062
- 206. Giovannucci A., et al., “OnACID: online analysis of calcium imaging data in real time,” in Proc. 31st Int. Conf. Neural Inf. Process. Syst., pp. 2378–2388 (2017). 10.1101/193383
- 207. Giovannucci A., et al., “FIOLA: an accelerated pipeline for fluorescence imaging online analysis” (2021).
- 208. Botcherby E. J., Juskaitis R., Wilson T., “Scanning two photon fluorescence microscopy with extended depth of field,” Opt. Commun. 268(2), 253–260 (2006). 10.1016/j.optcom.2006.07.026
- 209. Thériault G., et al., “Extended two-photon microscopy in live samples with Bessel beams: steadier focus, faster volume scans, and simpler stereoscopic imaging,” Front. Cell Neurosci. 8, 139 (2014). 10.3389/fncel.2014.00139
- 210. Göbel W., Kampa B. M., Helmchen F., “Imaging cellular network dynamics in three dimensions using fast 3D laser scanning,” Nat. Methods 4(1), 73–79 (2007). 10.1038/nmeth989
- 211. Yang W., et al., “Simultaneous multi-plane imaging of neural circuits,” Neuron 89(2), 269–284 (2016). 10.1016/j.neuron.2015.12.012
- 212. Grewe B. F., et al., “Fast two-layer two-photon imaging of neuronal cell populations using an electrically tunable lens,” Biomed. Opt. Express 2(7), 2035–2046 (2011). 10.1364/BOE.2.002035
- 213. Duemani Reddy G., et al., “Three-dimensional random access multiphoton microscopy for functional imaging of neuronal activity,” Nat. Neurosci. 11(6), 713–720 (2008). 10.1038/nn.2116
- 214. “Allen Institute Brain Observatory whitepaper,” The Allen Institute, https://help.brain-map.org/display/observatory/Documentation (accessed 27 September 2019).
- 215. Mikołajczyk A., Grochowski M., “Data augmentation for improving deep learning in image classification problem,” in Int. Interdisciplinary PhD Workshop (IIPhDW), IEEE, pp. 117–122 (2018). 10.1109/IIPHDW.2018.8388338
- 216. Magland J., et al., “SpikeForest, reproducible web-facing ground-truth validation of automated neural spike sorters,” eLife 9, e55167 (2020). 10.7554/eLife.55167
- 217. Charles A. S., et al., “Toward community-driven big open brain science: open big data and tools for structure, function, and genetics,” Annu. Rev. Neurosci. 43, 441–464 (2020). 10.1146/annurev-neuro-100119-110036
- 218. Romano S. A., et al., “An integrated calcium imaging processing toolbox for the analysis of neuronal population dynamics,” PLoS Comput. Biol. 13(6), e1005526 (2017). 10.1371/journal.pcbi.1005526
- 219. Abe T., et al., “Neuroscience cloud analysis as a service,” bioRxiv (2021).