Abstract
Objective
Fluorescence imaging through head-mounted microscopes in freely behaving animals is becoming a standard method to study neural circuit function. Flexible, open-source designs are needed to spur evolution of the method.
Approach
We describe a miniature microscope for single-photon fluorescence imaging in freely behaving animals. The device is made from 3D printed parts and off-the-shelf components. These microscopes weigh less than 1.8 g, can be configured to image a variety of fluorophores, and can be used wirelessly or in conjunction with active commutators. Microscope control software, written in Swift for macOS, provides low-latency image processing capabilities for closed-loop or brain-machine interface (BMI) experiments.
Main results
Miniature microscopes were deployed in the songbird premotor region HVC (used as a proper name), in singing zebra finches. Individual neurons yield temporally precise patterns of calcium activity that are consistent over repeated renditions of song. Several cells were tracked over timescales of weeks and months, providing an opportunity to study learning-related changes in HVC.
Significance
3D printed miniature microscopes, composed entirely of consumer-grade components, are a cost-effective, modular option for head-mounted imaging. These easily constructed and customizable tools provide access to cell-type-specific neural ensembles over timescales of weeks.
Keywords: Calcium Imaging, 3D printing, Miniature microscopy, Chronic recording, Neural Interface
1. INTRODUCTION
Optical recording of neural activity in the brains of behaving animals has become an essential method in systems neuroscience. Through the use of genetically encoded calcium indicators, the principles of learning can be studied in large ensembles of neurons over timescales of weeks and months (Cai et al. 2016), (Ziv et al. 2013), (Liberti et al. 2016). An increasingly widespread and powerful method employs miniature head-mounted fluorescence microscopes to record cellular-resolution activity in freely moving animals (Ghosh et al. 2011), (Barbera et al. 2016), (Park et al. 2011). A variety of miniature head-mounted microscopes are available commercially, and the technique has been adopted by many labs (Betley et al. 2015), (Rubin et al. 2015), but these off-the-shelf devices currently lack a number of desirable features such as easy modification, wireless interfacing, color sensors, and flexible real-time analysis software. Some exciting custom-built and/or open-source options have emerged (Cai et al. 2016), (Barbera et al. 2016), but the need remains for simple, modular designs that use off-the-shelf parts and rapid prototyping tools such as a 3D printed microscope housing. In our particular application, none of the existing options were sufficiently lightweight for the small animal models employed in our lab. In response, we developed a flexible platform for miniature microscope design and fabrication that could address a variety of experimental needs. New features described in this project include an open-source 3D printed housing for easy experiment-specific reconfiguration, and wireless telemetry. For small animals such as juvenile mice or small songbirds that cannot easily carry the extra weight of the wireless transmitter and battery, we also describe a torque-sensing motorized commutator based on 3D printed parts and low-cost hardware. A wired configuration connected through the commutator enables an ultralight configuration for recording (Fee & Leonardo 2001).
In addition to the microscope, this project describes open-source control software capable of low-latency image processing and feedback for closed-loop experiments. Such experiments can trigger stimulation or other feedback in response to either patterns of recorded neural activity or external analog inputs. As a proof of concept, we demonstrate this capability by performing a closed-loop experiment in which auditory feedback is triggered with precise latency on particular syllables of a zebra finch’s song.
We intend for this project to act synergistically with other emerging open-source neurophotonic efforts, to bring these new tools to a larger user base and provide a platform for innovative optical recording techniques across the wider scientific community.
2. METHODS
2.1. General Miniature Microscope Design
The microscope design was partially based on a previously described optical pathway (Figure 1A) (Ghosh et al. 2011). These miniature microscopes use a gradient refractive index (GRIN) lens as a high-quality, short-focal-length objective. This design takes advantage of the off-axis focusing properties of these lenses, enabling fine focus adjustments post implant. We performed our original experiments with a green fluorescence indicator (GCaMP6) and chose our standard filters accordingly. A blue LED produces excitation light (470 nm peak, LUXEON Rebel), powered by a microcontroller. A drum lens (Edmund Optics, NT45-549) directs the LED emission, which passes through a 4 mm × 4 mm excitation filter (Chroma, bandpass filter, 480/40 nm, 4 mm × 4 mm × 1.05 mm), deflects off a dichroic mirror (Chroma, 500 BS, 4 mm × 4.8 mm) and enters the imaging pathway via the GRIN objective lens (GRINTech, GT-IFRL-200, 0.245 pitch length, 0.45 numerical aperture, or Edmund Optics 1.8 mm Dia, 670 nm DWL, 0.23 pitch length, VIS Coated, GRIN Lens). Fluorescence from the sample returns through the objective, the dichroic, the emission filter (Chroma, bandpass filter, 535/50 nm, 4 mm × 4 mm × 1.05 mm) and an achromatic doublet lens (Edmund Optics, NT45-207, f = 15, 12.5 or 10 mm) that focuses the image onto the CMOS sensor. This sensor is described in greater detail below. The frame rate of the camera is 30 Hz, and the field of view, when using the 15 mm achromat, is approximately 800 μm × 600 μm.
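As a sanity check, the optical numbers above can be tied together: with the 6 μm pixels and 640 × 480 sensor described in section 2.5, the reported ~800 μm × 600 μm field of view implies roughly 4.8× magnification and 1.25 μm sampling at the sample plane. A minimal sketch (the magnification figure is inferred from these numbers, not stated in the text):

```python
# Back-of-envelope check of the field of view reported for the 15 mm achromat.
# The 4.8x magnification is inferred, not taken from the text.
pixel_um = 6.0                       # on-chip pixel size (section 2.5)
sensor_px = (640, 480)               # sensor resolution
sensor_um = (pixel_um * sensor_px[0], pixel_um * sensor_px[1])  # 3840 x 2880 um
fov_um = (800.0, 600.0)              # reported field of view
mag = sensor_um[0] / fov_um[0]       # implied optical magnification
um_per_pixel = fov_um[0] / sensor_px[0]  # sample-plane sampling
```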
Figure 1. A custom 3D printed head-mounted fluorescence microscope.
A. Optical layout of the emission pathway for the miniature microscope. B. Microscope schematic. The microscope body is lightweight and robust; the CAD design is easily modified. C. A wide range of 3D printers and plastics were surveyed to maximize resolution and minimize autofluorescence. The red asterisk indicates our final choice of material: Formlabs' Black resin. Autofluorescence of the current design is half that of black Delrin, one material used for machined microscope designs. D. Photograph of the microscope produced on a consumer grade 3D printer (Form 2), with inset showing the 3D printed focusing threads with a pitch of 0.34 mm.
The microscope housing is provided in the common STL format. Various 3D printing resins were screened for their light blocking capacity, minimum print resolution, and auto-fluorescence in response to imaging wavelengths. We surveyed ten 3D printer and plastic combinations to find a low cost, high quality print option. We chose to build our microscope using a commercially available, consumer grade desktop 3D printer (Formlabs Form 2, resin type FGPBLK01 and FGPBLK02). The material has half the autofluorescence of Delrin in the green emission channel (480 nm excitation, 510 nm emission). Other materials surveyed include black acrylic-like resin (Shapeways), opaque acrylic-like resin (Shapeways), Black Delrin (DuPont), Black Nylon (Shapeways), ABS-black (Realize inc), ABS-P430 Ivory (UPrint) and VeroBlack Matte (Objet) (Figure 1C). With the Form 2 printer, parts can be printed with extremely small feature size (25 μm layers, 140 μm laser spot size), which allows for printing high-resolution threads used to adjust the focal length (Figure 1D).
2.2. Software Design
Custom image acquisition software running on the macOS operating system leverages native AVFoundation frameworks to communicate with a USB frame grabber and capture a synchronized audio-video stream (Figure 2). Video and audio are written to disk in MPEG-4 container files, with video encoded at full resolution using either the H.264 or lossless MJPEG OpenDML codec and audio encoded using the AAC codec at a 48 kHz sampling rate. Data on the regions of interest, fluorescence and triggered feedback are simultaneously written to a table (CSV) that includes corresponding frame numbers. In addition, the software communicates with a microcontroller (Arduino Mega 2560) via a USB-to-serial connection. The microcontroller board allows the software to control LED brightness, as well as interface with other analog and digital signals.
Figure 2. Performance of closed loop feedback based on near real-time audio or image processing using custom software and GUI.
A. Image of the user interface of the acquisition software, written in Swift for macOS. The software allows for low-latency ROI tracking and microscope control. It also interfaces with a microcontroller with 54 digital I/O pins, 16 10-bit analog inputs and an ADC with two tightly synchronized 16-bit 48 kHz analog inputs (typically for audio). B. Example of feedback contingent on features of the audio input: white noise is triggered at a specific syllable of song. White dotted lines mark a single motif of song; blue indicates the target syllable. Top: a ‘catch’ trial, where no feedback was delivered. Middle: a ‘hit’ trial, where a 50 ms white noise pulse was delivered. Bottom: spectral density image of all song-aligned trials (including hit and catch trials), demonstrating the reliability of the white noise pulse. C. Example of feedback contingent on ROI tracking. Black: voltage driving an LED light flash that is recorded in the field of the CMOS; blue: the cumulative probability density function (CDF) of a brief TTL pulse triggered by the software in response to the LED flash processed through the entire acquisition system. Event detection was based on ROI analysis on a Mac Mini computer. Latency of the full loop from camera to Arduino-based TTL output is approximately 23.9 ms ± 7.9 ms (95% confidence interval), with the jitter comparable to the frame interval of the camera. In this test, the LED was not synchronized to the onset of the frame, as would be the case for spontaneous video recording of neural activity. This represents the experimentally relevant performance of the system, intrinsically limited by the 33 ms frame interval of the camera. D. Of the total latency, image processing to extract fluorescence from ten cell-sized regions of interest contributes an average of only 0.17 ms; much of the ROI feedback latency is a reflection of the frame rate and acquisition time. E. Latency of auditory-based feedback, where a TTL pulse followed detection of a specific syllable structure (shown in B): 12 ms ± 6 ms.
2.3. Near-real-time Software Details
The software is able to perform near real-time analysis on both the video stream and two independent analog channels. Sampled at 48 kHz, these can be used for analog inputs such as audio, TTL pulses, electrophysiology or data about behavioral context. These channels are used for low-latency data collection and triggering, with precise video alignment. For our songbird experiments, one of these channels is used for audio recording and song detection. For example, we activate the microscope’s LED only during singing, limiting photobleaching and streamlining data collection. For this functionality, the audio stream is processed through a short-time Fourier transform to identify spectral content consistent with singing.
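The singing-detection step can be sketched as follows: compute a short-time Fourier transform of the audio and gate the LED on the fraction of power in a song-typical band. This is a minimal illustration; the band edges, threshold, and frame count below are placeholders, not the values used in the published software.

```python
import numpy as np

def song_power_ratio(audio, fs=48000, nfft=1024, band=(1000.0, 8000.0)):
    """Fraction of spectral power in a song-typical band, per STFT frame.
    Band edges are illustrative, not the published detector values."""
    hop = nfft // 2
    window = np.hanning(nfft)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    ratios = []
    for start in range(0, len(audio) - nfft + 1, hop):
        frame = audio[start:start + nfft] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        total = power.sum()
        ratios.append(power[in_band].sum() / total if total > 0 else 0.0)
    return np.array(ratios)

def is_singing(ratios, threshold=0.6, min_frames=3):
    """Declare singing when the band ratio stays high for several frames."""
    run = 0
    for r in ratios:
        run = run + 1 if r > threshold else 0
        if run >= min_frames:
            return True
    return False
```

In the real system, a positive detection would switch on the excitation LED and start low-latency acquisition.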
Either the analog input or the video stream can be the substrate for near real-time computational analysis. This allows feedback to be selectively triggered during activity of interest on the video or analog channels. In the case of video analysis, the uncompressed video stream is processed to extract fluorescence information for predefined regions of interest. Figure 2 shows an example of how the software can be used with a predefined triggering rule that applies feedback contingent on fluorescence signals measured in specific ROIs. In this paradigm, the camera can be used as a brain machine interface (BMI), in which songbirds control sounds directly through the measured calcium signals in the brain (Clancy et al. 2014). Other applications of the software could include closed-loop stimulation experiments that seek to electrically or optically disrupt patterns of activity in real time.
Figure 2C depicts the time between LED onset and feedback in our software test bed. The average time to trigger a feedback stimulus is 23.9 ms ± 7.9 ms (95% confidence interval), including frame exposure, digitization, transmission, data acquisition and software processing. This latency reflects the intrinsic limitation of the 30 FPS (33 ms) frame rate as well as time spent capturing and decoding video, and communicating an output TTL to the Arduino controller through the serial port. Data acquisition, region of interest processing, and feedback triggering all occur within an average of 1.7 ms after frame capture (95% confidence: <11 ms). This variability stems primarily from video encoding and storage, where the codec and write buffering necessitate increased processing time for a subset of frames. An additional source of variability in processing time, which has a smaller impact and is shown in Figure 2D, relates to the number and complexity of the defined regions of interest. As the number and size of the regions of interest increase, the processing time increases. In tests involving ten cell-sized regions of interest, the average processing time was 0.17 ms, with a range of 0.11–0.22 ms (95% confidence interval).
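To illustrate why ROI extraction contributes only a small fraction of the total loop latency, here is a minimal sketch of mean-intensity extraction over ten cell-sized circular ROIs on a 640 × 480 frame. The ROI shapes and layout are hypothetical; the published software's internals are not reproduced here.

```python
import numpy as np
import time

def roi_means(frame, masks):
    """Mean intensity inside each ROI mask, mirroring the per-frame
    averaging described in the text (hypothetical ROI layout)."""
    return np.array([frame[m].mean() for m in masks])

# Build ten cell-sized (~20 px diameter) circular ROIs on a 640 x 480 frame.
yy, xx = np.mgrid[0:480, 0:640]
rng = np.random.default_rng(0)
masks = []
for _ in range(10):
    cy, cx = rng.integers(30, 450), rng.integers(30, 610)
    masks.append((yy - cy) ** 2 + (xx - cx) ** 2 <= 10 ** 2)

frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
t0 = time.perf_counter()
values = roi_means(frame, masks)
elapsed_ms = (time.perf_counter() - t0) * 1e3  # small relative to 33 ms/frame
```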
2.4. Auditory Feedback
To test the acquisition pipeline for triggering on behavior, we used the integrated microphone on the head-mounted microscope to detect specific syllables of a bird’s song and deliver white noise auditory feedback at specific delays after the syllable was produced. White noise bursts are a common aversive stimulus for differential reinforcement in songbirds because they obscure the normal feedback that the bird experiences (Andalman & Fee 2009), (Tumer & Brainard 2007). Custom Swift software was used for online detection of target syllables. White noise bursts were delivered at precise times relative to the target syllable during the bird’s vocalization. The duration of white noise bursts (50 ms) was long enough to overlap with the remainder of the targeted syllable. This feedback was delivered with minimal trial-to-trial jitter, at a latency of 12 ms ± 6 ms from the target syllable, using a neural-network-based syllable detector (Pearre et al. 2016).
2.5. CMOS Sensor and Wireless Transmitter
The choice of CMOS imaging sensor is flexible. Benchtop systems and existing commercial solutions have employed a high-definition (HD) resolution sensor with on-board digitization in the microscope. However, due to out-of-plane fluorescence, the resolution of single-photon imaging in-vivo is typically much lower than the Nyquist sampling of an HD CMOS. At HD resolution, spatially oversampled images can result in a cumbersomely large file size that hinders data processing on conventional lab computers—as a result, it is common for processing pipelines to start with spatial downsampling. Given the limited value of HD resolution for our application, we chose to start with a lower resolution sensor having only 640 × 480 pixels, a design choice that provided other benefits. Specifically, this lower resolution frame can be transmitted through a variety of low-cost, compact wireless transmitters (Figure 3). The NTSC video protocol we use here is a mature technology incorporated into many low-cost devices such as wireless spy cameras or miniature cameras for remote control airplanes. The same format also allows for analog transmission via flexible tethers for use in small songbirds or juvenile mice. Minimal degradation over distances used in realistic experiments (Figure 3D,E) and normal behavior of animals wearing the system (Supplemental Video 5) demonstrate that the system can be appropriately implemented in-vivo. However, the greatest utility will be in animals where a wired option is not possible or feasible, such as behaving bats or crows, or where a data cable might restrict normal behavioral gestures.
Figure 3. Wireless microscope acquisition system and performance.
Signals from the camera can be relayed with an off-the-shelf wireless transmitter. A. Diagram of the system, mounted on a songbird. The microscope LED, CMOS and transmitter together draw approximately 100 mA at the typical input voltage of 3.5 V and the typical in-vivo imaging LED brightness. B. The wireless acquisition system uses a wireless receiver, frame-grabber and digitizer to acquire synchronized video and two channels of 48 kHz audio. C. Image of the wireless transmitter and 50 mAh battery. D. Comparison of pixel noise in the wired and wireless conditions (see methods 2.8.3). E. Histogram of per-frame PSNR values of the wireless condition, as compared to the wired condition (at 3 meters). F. Variance of the mean pixel value over 100 continuously recorded frames (per distance). High variance indicates signal degradation due to transmission loss or interference. Over distances relevant for typical neuroscience applications in an indoor laboratory setting, these transmitters are subject to minimal signal degradation.
The camera circuit employed for the microscopes in this study is an off-the-shelf integrated color CMOS camera system (3rd eye electronics, MC900). It uses a 1/3″ color CMOS (OV7960 TSV). Each on-chip pixel is 6 μm, the signal-to-noise ratio (SNR) of the camera is 52 dB, and the sensitivity is 0.008 Lux. The camera circuit is also available with an integrated microphone (MC900A). The circuit draws less than 70 mA at an operating voltage of 3.3–5 V. A total of five wires run from the microscope—one wire each for camera power, ground, audio, NTSC analog video, and LED power. These wires run through a custom flex-PCB interconnect (Rigiflex) up to a custom-built active commutator (described in greater detail below). The NTSC video signal and analog audio are digitized through a USB frame-grabber. The integrated video frame-grabber and analog-to-digital converter (AV2USB_V1.4, or DM420) converts two channels of 48 kHz analog data (in the case of songbirds, used for audio) and a synchronized NTSC video stream for reading by the host computer. While the color NTSC camera provides wireless and color imaging, it potentially results in lower SNR in the GFP band. This may not be ideal for some applications—fortunately, the CMOS is easily changed for users who do not require color or wireless recording.
Off-the-shelf wireless transmitters are available for NTSC-format audio/video and weigh less than 0.6 grams (for example, BOSCAM TX24019, among many others) (Figure 3C). These transmitters perform reliably over distances of 5 meters or more, a measure confirmed in Figure 3E. The small scale and relatively low power requirements of the transmitter make it an attractive modification for untethered recording in freely moving mice or other organisms. In our tests, we powered the device with a lightweight, consumer grade lithium polymer (LiPo) battery. The choice of battery depends on the weight constraints of the animal species: we have tested small, 1 gram 50 mAh batteries for functional use up to 30 minutes, and 3 gram 105 mAh batteries for up to an hour, at average imaging LED intensities. The microscope weighs 1.8 grams, and the 50 mAh battery and transmitter add an additional ~2 grams. This combination was burdensome for most zebra finches, although they were able to carry the complete system (see Supplemental Video 5). The wireless system is ideal for larger animals, such as bats, rats or crows, that can carry the additional weight. We have had success in using this system in mice (unpublished observations).
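The reported runtimes are consistent with a simple capacity calculation, assuming the ~100 mA total system draw given in Figure 3A (ideal discharge; real runtimes will be somewhat shorter):

```python
# Battery-life check against the reported ~100 mA draw (LED + CMOS + transmitter)
# and the two battery capacities quoted in the text.
system_draw_mA = 100.0

def runtime_minutes(capacity_mAh, draw_mA=system_draw_mA):
    """Ideal runtime in minutes, ignoring voltage sag and discharge limits."""
    return capacity_mAh / draw_mA * 60.0

# 50 mAh -> 30 min; 105 mAh -> 63 min (~1 h), matching the reported figures.
```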
2.6. Commutator Design
To image in songbirds, which were unable to carry the extra weight of the battery and wireless transmitter, we developed a torque-sensing 3D printed commutator (Figure 4A), as well as lightweight (~0.1 g), low-noise, flexible PCB cables. The commutator was loosely inspired by previous designs (Fee & Leonardo 2001). These commutators allowed for 24/7 longitudinal recording without battery replacement or frequent handling of the animals. The ability to record longitudinally may also prove useful in larger animals, such as mice, as it avoids the complexity of battery management and charging.
Figure 4. 3D printed active commutator system for chronic neural recording in small animals.
A. Schematic of the 3D printed commutator. B. An image of an assembled commutator. C. These devices use the deflection of the magnetic field of a disk magnet located on a flex PCB cable to detect torque via a Hall sensor. A feedback circuit mediated by a microcontroller corrects the deflection by rotating a slip ring via a servo-driven gearbox with a 1:1 ratio. D. Example of two different flex cable designs with 7–9 conductors, weighing under 0.25 grams. The additional wires are present for electrical/optical stimulation or other head-mounted accessories. Scale bar indicates 9 mm.
The active commutator consists of a gear assembly (ABS-P430 Ivory, UPrint), along with seven electronic components that can be purchased for a total cost of under $100. These components include a servo and a resistor, a slip ring commutator, a disk magnet and a Hall sensor (Figure 4C). The mechanical designs, editable stereolithography files, and software to operate the commutator are included in the supplement.
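The control loop in Figure 4C reduces to: read the Hall-sensor deflection of the tether-mounted magnet, then command the servo to rotate the slip ring against the measured twist. A minimal sketch of one proportional control cycle follows; the gain and deadband values are illustrative placeholders, not values from the published firmware.

```python
# Sketch of the torque-nulling feedback loop (Figure 4C). Gain and deadband
# are illustrative, not taken from the design files.

def commutator_step(hall_deflection_deg, gain=0.5, deadband_deg=2.0):
    """Corrective slip-ring rotation (degrees) for one control cycle."""
    if abs(hall_deflection_deg) < deadband_deg:
        return 0.0  # ignore small readings near zero torque (sensor noise)
    return -gain * hall_deflection_deg  # rotate against the measured twist

# Simulated bird twisting the tether by 20 degrees; the loop unwinds it:
deflection = 20.0
for _ in range(10):
    deflection += commutator_step(deflection)
```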
2.7. Surgical Procedure
To label neurons in the song-related premotor nucleus HVC with a calcium indicator, birds received three 250 nl injections of lentivirus packaged with either GCaMP6s or GCaMP6f under a Rous sarcoma virus (RSV) promoter into the song premotor cortex. Virus was prepared as described previously (Liberti et al. 2016), and constructs are available from Addgene (https://www.addgene.org/Darrell_Kotton/). To guide the injection of virus, the boundaries of HVC were determined by fluorescence targeting of a DiI retrograde tracer injected into the downstream song motor nucleus Area X.
2.8. Data Analysis
2.8.1. Song Alignments
For all neural recording modalities, trials were aligned to song using previously described methods (Poole et al. 2012), based on the Euclidean distance in spectral features between the data and a template song in a sliding window. Local minima in the Euclidean distance were considered candidate hits, which were then plotted in two dimensions for the user to perform a cluster cut. No time warping was applied to any data (Glaze & Troyer 2006).
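The alignment procedure can be sketched as a sliding-window template match over spectral features; the features themselves (e.g., spectrogram columns) are computed upstream, and the two-dimensional cluster cut is left to the user. This is a sketch of the distance computation only, not the published implementation.

```python
import numpy as np

def template_match(features, template):
    """Sliding-window Euclidean distance between a spectral-feature matrix
    (freq x time) and a template; lower values indicate better matches."""
    n = template.shape[1]
    return np.array([
        np.linalg.norm(features[:, i:i + n] - template)
        for i in range(features.shape[1] - n + 1)
    ])

def candidate_hits(dists):
    """Indices of local minima in the distance trace (candidate renditions)."""
    return [i for i in range(1, len(dists) - 1)
            if dists[i] < dists[i - 1] and dists[i] < dists[i + 1]]
```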
2.8.2. Calcium Imaging ROI Analysis
Calcium imaging data was analyzed as described previously (Markowitz et al. 2015). In brief, raw imaging data was motion corrected using a previously published algorithm (Guizar-Sicairos et al. 2008). Then, regions of interest (ROIs) were manually selected, and for each frame, pixel intensities were averaged over all pixels in the region. ROI traces were converted to ΔF/F0 by estimating F0 as the 12th percentile in an 800 ms sliding window.
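The ΔF/F0 conversion can be sketched directly from that description, assuming the camera's 30 Hz frame rate; the edge handling at the ends of the trace is a simple choice made here, not necessarily the published one.

```python
import numpy as np

def delta_f_over_f(trace, fs=30.0, window_s=0.8, percentile=12):
    """dF/F0 with F0 estimated as a running 12th percentile in an 800 ms
    window, per the text. Window edges are simply truncated here."""
    half = int(round(window_s * fs / 2))
    f0 = np.array([
        np.percentile(trace[max(0, i - half):i + half + 1], percentile)
        for i in range(len(trace))
    ])
    return (trace - f0) / f0
```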
2.8.3. Wireless imaging quality quantification
Wireless imaging data quality was assessed by splitting the analog NTSC video signal into two paths: one to a wired frame-grabber, and the other through a wireless transmitter and receiver to a second frame-grabber. This allows for explicit evaluation of image quality for the exact same video stream, enabling frame-by-frame comparison of the wireless and wired conditions. The difference between frames encompasses both signal degradation due to wireless encoding and noise introduced by interference from other devices, giving a practical measurement of signal degradation. These experiments were performed in realistic lab conditions with a clear path from transmitter to receiver, but no additional steps were taken to mitigate potential sources of noise.
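The per-frame comparison in Figure 3E can be sketched with a standard PSNR computation between time-matched wired (reference) and wireless (test) frames:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Per-frame peak signal-to-noise ratio in dB; the wired frame serves
    as the reference and the wireless frame as the test image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)
```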
3. RESULTS
3.1. Optical Recording of Neurons Expressing Genetically Encoded Calcium Indicators
To provide information about the stability of excitatory cells, we used our miniature microscopes to perform optical imaging of genetically encoded calcium indicators (Figure 5) (Markowitz et al. 2015), (Liberti et al. 2016). Electrophysiology methods are typically unable to track individual neurons over long time periods. Excitatory projection cells in HVC are extremely difficult to record continuously for timescales longer than a single day, perhaps due to limitations of recording from smaller cells (Guitchounts et al. 2013). As a proof of concept, we optically recorded fluorescence transients from neurons expressing the calcium indicator GCaMP6 in the songbird premotor cortex HVC. Consistent with previous extracellular recordings and calcium imaging studies in HVC, projection neuron calcium activity patterns were highly stereotyped and stable across song trials within a day of singing (Figure 5A). Microvasculature can be clearly seen on the surface above HVC (Figure 5B). Cells were found that produced stereotyped calcium transients at all time-points within song, and their spatiotemporal organization was defined by a 100 μm length-scale clustering (Markowitz et al. 2015). Within a song rendition, dozens of cells can be recorded simultaneously. Because these microscopes are lightweight and inexpensive, they can be chronically implanted and left in place for weeks at a time. This provides a paradigm for stable longitudinal recordings.
Figure 5. Images and in-vivo video collected from the microscope.
A. Image taken by microscope of a High-Frequency NBS 1963A Resolution Test Target, showing 228 lines per mm. B. In-vivo widefield image of blood vessels over premotor area HVC in a zebra finch. C. Maximum intensity projection of ΔF/F0 video from a bout of singing. Imaging depth is 150–200 μm below the surface of intact dura. D. Time-intensity plot, where each pixel is colored by its center of mass in time. E. Stereotyped single neuron calcium traces recorded in singing birds using GCaMP6, aligned to song. Top: spectrogram of a single song rendition; bottom: calcium traces from 18 ROIs over 50 song-aligned trials from a single bird. Vertical scale bar indicates standard deviation.
4. DISCUSSION
Cellular resolution optical imaging in behaving animals is a foundational method in modern neuroscience—allowing researchers to longitudinally track cells in sparsely active networks like the songbird premotor region HVC and the rodent hippocampus with high spatial resolution. Through the use of genetically encoded calcium indicators (Chen et al. 2013), (Dana et al. 2016), the principles of learning and information encoding can be studied in large ensembles of neurons at cellular resolution, over periods of weeks and months. Typically, optical experiments utilize either benchtop two-photon imaging systems in head-restrained animals (Dombeck et al. 2010), (Minderer et al. 2016), (Rickgauer et al. 2014), or single-photon imaging in freely moving animals through the use of a head-mounted miniature fluorescence (single-photon) microscope (Cai et al. 2016), (Ghosh et al. 2011), (Barbera et al. 2016), (Park et al. 2011). While the axial resolution of multiphoton microscopy is vastly superior (Helmchen & Denk 2005), head-mounted microscopes are often the only way to optically observe neural populations during naturalistic behaviors (Resendez et al. 2016).
The motivation for this project was that existing commercially available miniature microscopes proved too heavy to consistently evoke undirected song, a learning-intensive motor behavior in zebra finches. With extensive screening and training of birds, it is possible to evoke song in a head-fixed preparation in the presence of a mate (Picardo et al. 2016), but this approach can be low yield, since few birds will sing head-fixed, and the method may preclude the study of the mechanisms of motor maintenance that occur during undirected singing in the absence of a female. More generally, there are many applications in neuroscience where even a wire tether may restrict natural behavior and prevent interrogation of the underlying neural mechanisms (Wiltschko et al. 2015), (Yartsev & Ulanovsky 2013). Songbirds wearing commercially available microscopes rarely sing—possibly due to the microscope weight or the bulky cables used to stream data from the CMOS. With the torque-sensing commutator and lightweight tether, our present design is sufficiently unobtrusive for zebra finches; we routinely gather 400–1,000 song trials per day. This microscope system, in conjunction with behaviorally triggered acquisition, has allowed us to perform around-the-clock studies of brain activity in a substantial number of animals without constant human supervision, enabling us to gather densely sampled longitudinal recordings. The tool will enable further studies of learning at cellular resolution in zebra finches (Figure 5). This development history and outcome underscore a limitation of commercial solutions: while off-the-shelf versions may provide adequate functionality for some species or experimental paradigms, many experiments require user-defined customization.
It is likely that modifications described here, such as wireless interfacing, custom filter sets, and color sensors, as well as the modular design of this miniature microscope will inspire innovation and enable data collection in new experimental models or species.
4.1. Advantage of 3D Printed Housing for Rapid Microscope Assembly and Prototyping
Existing miniature microscope designs require assembly of carefully machined parts, typically milled from PEEK or Delrin plastic (Cai et al. 2016), (Ghosh et al. 2011). These machined parts have the advantage of precise tolerances, but the disadvantage of higher cost and longer design timescales relative to 3D printed materials. 3D printed microscopes can be produced at low cost with geometries that are not possible with CNC milling—we take advantage of a single-piece design that reduces weight by eliminating metal bolts and allowing thin walls. Because 3D printed parts avoid the constraints of machined parts, these microscopes can be lighter, more easily constructed, and readily reconfigurable to accommodate design variations.
4.2. All Optical Physiology with Multi-Channel Light Delivery and Recording
While we ran initial tests with filters optimized for imaging GCaMP6, different filter sets can be used to accommodate newer, red-shifted indicators. With some modifications, these cameras can be adapted to incorporate two independent color channels at the expense of SNR within each channel. Optimizing filter sets and incorporating LEDs with multiple spectral peaks can provide additional information about cells in the imaging plane, using fluorescent proteins or tracers with well-isolated spectral profiles to differentially label subpopulations of cells. Examples include GCaMP6 combined with infrared Alexa dyes for retrograde labelling. This additional channel can allow disambiguation of specific neuron types within an imaging field. Alternatively, adding a second passband to the dichroic and excitation filter can allow full-field optical stimulation outside of the imaging bands.
4.3. Low Latency, Open Source Acquisition Software
In our tests, the latency of triggering off fluorescence activity was limited by the frame rate of the microscope, at about 23.9 ms ± 7.9 ms (Figure 2C). For many experiments, this response latency is acceptable. In the songbird, for example, the sensory-motor latency from premotor neuron activity in HVC to auditory sensory consequences processed in the basal ganglia is a minimum of 32–50 ms (Andalman & Fee 2009). Based on these estimates and published accounts of conditional feedback experiments in songbirds, this time delay should provide a learnable brain machine interface (Olveczky et al. 2005), (Tumer & Brainard 2007), (Sakata & Brainard 2008), (Sober & Brainard 2009). However, it remains to be seen whether the timing jitter in the current system is acceptable for brain machine interface experiments in a system as precise as the songbird. For the zebra finch, relative jitter between premotor commands and auditory feedback is just a few milliseconds. For some experiments, lower latency and lower variability may be desirable; with the introduction of faster frame rate cameras and deterministic real-time operating systems, this latency and its variability will decrease.
4.4. CMOS Trade-Offs and Deficits
In our tests, we used an off-the-shelf CMOS sensor that outputs analog video. In terms of resolution, shot noise, and frame rate, the NTSC analog camera is less versatile than many newer digital sensors. The analog sensor does, however, have the advantage of being easily broadcast wirelessly with high signal fidelity and low power consumption, which permits real-time data streaming for wireless BMI experiments and on-the-fly adjustment of imaging parameters. This sensor has been adequate for our calcium imaging experiments, although other experiments may benefit from higher-resolution digital sensors (Deisseroth & Schnitzer 2013), (Cai et al. 2016).
4.5. Use of Off-the-Shelf Components for Ease of Construction
Designing and constructing optical equipment poses engineering challenges, and many researchers and educators, especially in resource-limited circumstances, may be intimidated by building their own systems. This is especially true of custom ASIC or chip designs, which require considerable technical skill. With this in mind, we have aimed to present a design built from low-cost, off-the-shelf components that requires minimal maintenance, enabling longitudinal use. This microscope design will allow labs with little electronics experience to enter the field of awake behaving imaging and build their own microscopes, while leaving room for electronics-savvy experimentalists to iterate and develop novel imaging and acquisition back ends.
The future development of these devices relies greatly on a combined multidisciplinary effort involving biological, chemical, mechanical, and materials engineering, among others. In particular, neuroscience stands to benefit from advances in consumer-grade electronics. The cost, availability, size, and quality of the electronics used in this project have been driven by the cellular and telecommunications industries (Deisseroth & Schnitzer 2013). This sector will likely continue to drive rapid innovation in the miniaturization of electronics, nanoscale 3D printed components (Sun & Kawata 2004), optics (Gissibl et al. 2016), and high-fidelity wireless technology, all of which stand to increase the quality of neuroscience instrumentation. We hope that the microscope presented here will be further enhanced by these efforts and will allow maximum integration with other emerging open-source neurophotonics projects.
Supplementary Material
Internal wiring schematic and pin mapping for data acquisition box, and wireless receiver.
Video of blood vessels directly above area HVC, taken on the surgery table and imaged in brightfield through a GRIN relay lens. Video was acquired at 30 frames per second, temporally smoothed by three frames, and spatially downsampled by a factor of two.
Wiring schematic for MC900A camera, and wireless transmitter.
Video of neural activity in HVC in a zebra finch during a bout of singing. Video was acquired at 30 frames per second, temporally smoothed by two frames, and spatially downsampled by a factor of two. Imaging depth was approximately 100 μm below the intact dura.
Video of neural activity in HVC in a zebra finch during a bout of singing. This is the same bird as in MOV2, taken several days later. Video was acquired at 30 frames per second, temporally smoothed by two frames, and spatially downsampled by a factor of two. Imaging depth was approximately 100 μm below the intact dura.
Video of neural activity in HVC in a zebra finch during a bout of singing. Video was acquired at 30 frames per second, temporally smoothed by two frames, and spatially downsampled by a factor of two. Imaging depth was 150–200 μm below the intact dura.
Video of zebra finch wearing the complete wireless microscope system.
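The video preprocessing described above (temporal smoothing over a few frames followed by 2× spatial downsampling) can be sketched as below; the moving-average smoother and 2×2 block-mean downsampler are plausible implementations assumed for illustration, not taken from the authors' pipeline.

```python
import numpy as np

def temporal_smooth(stack, n_frames=2):
    """Moving-average smoothing of a (T, H, W) stack over n_frames."""
    kernel = np.ones(n_frames) / n_frames
    # Convolve each pixel's time series; mode='same' keeps T unchanged.
    return np.apply_along_axis(
        lambda t: np.convolve(t, kernel, mode="same"), 0, stack)

def downsample2x(stack):
    """Spatial downsampling by a factor of two via 2x2 block averaging."""
    t, h, w = stack.shape
    return stack[:, :h - h % 2, :w - w % 2].reshape(
        t, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

# Example: a 90-frame stack, smoothed by two frames, downsampled by two.
stack = np.random.default_rng(2).random((90, 64, 80))
out = downsample2x(temporal_smooth(stack, n_frames=2))
print(out.shape)   # (90, 32, 40)
```

Block averaging trades spatial resolution for SNR, which suits the display of single-photon calcium videos like those above.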
Acknowledgments
This effort would not have been possible without the critical advice and support at the outset of the project by Daniel Aharoni and Peyman Golshani. The authors would like to thank the Gardner lab, especially Alejandro Eguren, Carlos Gomez and Ali Mohammed for their help with microscope assembly, as well as Ben Pearre for developing and providing the neural network based syllable detector. We would also like to thank Aurelien Begue and Bernardo Sabatini for assistance with ZEMAX modeling and useful discussions. We also would like to thank Derek Liberti and Darrell Kotton for constructing and designing the RSV-GCaMP lentiviral construct. Special thanks to D.S. Kim and L. Looger for providing the GCaMP6 DNA and to the GENIE project at Janelia Farm Research Campus, Howard Hughes Medical Institute. This work was supported by a grant from CELEST, an NSF Science of Learning Center (SBE-0354378) and by grants from NINDS R24NS098536 and R01NS089679.
Footnotes
ADDITIONAL SUPPLEMENTAL MATERIAL:
1. Microscope Body
2. Microscope Baseplate
3. Commutator Arm
4. Commutator Gear 1
5. Commutator Gear 2
A web resource containing assembly instructions, sample analysis code, and updated STL and build files can be found at: https://github.com/gardner-lab/FinchScope. This information is also available upon request.
References
- Andalman AS, Fee MS. A basal ganglia-forebrain circuit in the songbird biases motor output to avoid vocal errors. Proceedings of the National Academy of Sciences of the United States of America. 2009;106(30):12518–12523. doi: 10.1073/pnas.0903214106.
- Barbera G, et al. Spatially Compact Neural Clusters in the Dorsal Striatum Encode Locomotion Relevant Information. Neuron. 2016;92(1):202–213. doi: 10.1016/j.neuron.2016.08.037.
- Betley JN, et al. Neurons for hunger and thirst transmit a negative-valence teaching signal. Nature. 2015;521(7551):180–185. doi: 10.1038/nature14416.
- Cai DJ, et al. A shared neural ensemble links distinct contextual memories encoded close in time. Nature. 2016;534(7605):115–118. doi: 10.1038/nature17955.
- Chen T-W, et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature. 2013;499(7458):295–300. doi: 10.1038/nature12354.
- Clancy KB, et al. Volitional modulation of optically recorded calcium signals during neuroprosthetic learning. Nature neuroscience. 2014;17(6):807–809. doi: 10.1038/nn.3712.
- Dana H, et al. Sensitive red protein calcium indicators for imaging neural activity. eLife. 2016;5. doi: 10.7554/eLife.12727.
- Deisseroth K, Schnitzer MJ. Engineering approaches to illuminating brain structure and dynamics. Neuron. 2013;80(3):568–577. doi: 10.1016/j.neuron.2013.10.032.
- Fee MS, Leonardo A. Miniature motorized microdrive and commutator system for chronic neural recording in small animals. Journal of neuroscience methods. 2001;112(2):83–94. doi: 10.1016/s0165-0270(01)00426-5.
- Ghosh KK, et al. Miniaturized integration of a fluorescence microscope. Nature methods. 2011;8(10):871–878. doi: 10.1038/nmeth.1694.
- Gissibl T, et al. Two-photon direct laser writing of ultracompact multi-lens objectives. Nature photonics. 2016;10(8):554–560.
- Glaze CM, Troyer TW. Temporal structure in zebra finch song: implications for motor coding. The Journal of neuroscience: the official journal of the Society for Neuroscience. 2006;26(3):991–1005. doi: 10.1523/JNEUROSCI.3387-05.2006.
- Guitchounts G, et al. A carbon-fiber electrode array for long-term neural recording. Journal of neural engineering. 2013;10(4):046016. doi: 10.1088/1741-2560/10/4/046016.
- Guizar-Sicairos M, Thurman ST, Fienup JR. Efficient subpixel image registration algorithms. Optics letters. 2008;33(2):156. doi: 10.1364/ol.33.000156.
- Helmchen F, Denk W. Deep tissue two-photon microscopy. Nature methods. 2005;2(12):932–940. doi: 10.1038/nmeth818.
- Liberti WA 3rd, et al. Unstable neurons underlie a stable learned behavior. Nature neuroscience. 2016. doi: 10.1038/nn.4405.
- Markowitz JE, et al. Mesoscopic patterns of neural activity support songbird cortical sequences. PLoS biology. 2015;13(6):e1002158. doi: 10.1371/journal.pbio.1002158.
- Minderer M, et al. Neuroscience: Virtual reality explored. Nature. 2016;533(7603):324–325. doi: 10.1038/nature17899.
- Olveczky BP, Andalman AS, Fee MS. Vocal experimentation in the juvenile songbird requires a basal ganglia circuit. PLoS biology. 2005;3(5):e153. doi: 10.1371/journal.pbio.0030153.
- Park JH, et al. Head-mountable high speed camera for optical neural recording. Journal of neuroscience methods. 2011;201(2):290–295. doi: 10.1016/j.jneumeth.2011.06.024.
- Pearre B, Perkins N, Markowitz J, Gardner T. A Fast and Accurate Zebra Finch Syllable Detector. PloS one. 2016. doi: 10.1371/journal.pone.0181992.
- Picardo MA, et al. Population-Level Representation of a Temporal Sequence Underlying Song Production in the Zebra Finch. Neuron. 2016;90(4):866–876. doi: 10.1016/j.neuron.2016.02.016.
- Poole B, Markowitz JE, Gardner TJ. The song must go on: resilience of the songbird vocal motor pathway. PloS one. 2012;7(6):e38173. doi: 10.1371/journal.pone.0038173.
- Resendez SL, et al. Visualization of cortical, subcortical and deep brain neural circuit dynamics during naturalistic mammalian behavior with head-mounted microscopes and chronically implanted lenses. Nature protocols. 2016;11(3):566–597. doi: 10.1038/nprot.2016.021.
- Rickgauer JP, Deisseroth K, Tank DW. Simultaneous cellular-resolution optical perturbation and imaging of place cell firing fields. Nature neuroscience. 2014;17(12):1816–1824. doi: 10.1038/nn.3866.
- Rubin A, et al. Hippocampal ensemble dynamics timestamp events in long-term memory. eLife. 2015;4:e12247. doi: 10.7554/eLife.12247.
- Sakata JT, Brainard MS. Online contributions of auditory feedback to neural activity in avian song control circuitry. The Journal of neuroscience: the official journal of the Society for Neuroscience. 2008;28(44):11378–11390. doi: 10.1523/JNEUROSCI.3254-08.2008.
- Sober SJ, Brainard MS. Adult birdsong is actively maintained by error correction. Nature neuroscience. 2009;12(7):927–931. doi: 10.1038/nn.2336.
- Sun H-B, Kawata S. Two-Photon Photopolymerization and 3D Lithographic Microfabrication. In: NMR, 3D Analysis, Photopolymerization. Advances in Polymer Science. Berlin, Heidelberg: Springer; 2004. pp. 169–273.
- Tumer EC, Brainard MS. Performance variability enables adaptive plasticity of "crystallized" adult birdsong. Nature. 2007;450(7173):1240–1244. doi: 10.1038/nature06390.
- Wiltschko AB, et al. Mapping Sub-Second Structure in Mouse Behavior. Neuron. 2015;88(6):1121–1135. doi: 10.1016/j.neuron.2015.11.031.
- Yartsev MM, Ulanovsky N. Representation of three-dimensional space in the hippocampus of flying bats. Science. 2013;340(6130):367–372. doi: 10.1126/science.1235338.
- Ziv Y, et al. Long-term dynamics of CA1 hippocampal place codes. Nature neuroscience. 2013;16(3):264–266. doi: 10.1038/nn.3329.