Author manuscript; available in PMC: 2012 Oct 15.
Published in final edited form as: J Neurosci Methods. 2011 Jul 6;201(2):290–295. doi: 10.1016/j.jneumeth.2011.06.024

Head-mountable high speed camera for optical neural recording

Joon Hyuk Park 1, Jelena Platisa 2, Justus V Verhagen 2,4, Shree H Gautam 2, Ahmad Osman 2, Dongsoo Kim 1, Vincent A Pieribone 2,3, Eugenio Culurciello 1
PMCID: PMC3179772  NIHMSID: NIHMS310099  PMID: 21763348

Abstract

We report a head-mountable CMOS camera for recording rapid neuronal activity in freely moving rodents using fluorescent activity reporters. This small, lightweight camera is capable of detecting small changes in light intensity (0.2% ΔI/I) at 500 fps. The camera has a resolution of 32 × 32 pixels, a sensitivity of 0.62 V/lux·s, a conversion gain of 0.52 μV/e- and a well capacity of 2.1 Me-. The camera, which contains intensity offset subtraction circuitry within the imaging chip, is part of a miniaturized epi-fluorescent microscope and represents a first-generation, mobile, scientific-grade physiology imaging camera.

Keywords: CMOS image sensor, imaging, integrated circuit, voltage sensitive dye, calcium sensitive dye, genetically-encodable optical probe, optogenetic

1. Introduction

The study of neuronal activity in awake and freely moving animals offers the chance to examine neuronal function during natural behavior, without the stress and suppression of activity inherent in the use of anesthetics and physical restraints. Functional magnetic resonance imaging and positron emission tomography allow three-dimensional recording of brain metabolism; however, these techniques are indirect measures of neuronal activity, have low temporal and spatial resolution, and require head fixation. Conversely, implantable micro-electrode arrays allow high speed recording of neuronal activity in awake, unrestrained, and mobile animals. Such recordings have greatly expanded our understanding of behavior-related neuronal activity. The development of optical probes of cellular function has made it possible to image neuronal activity in freely moving animals [1, 2, 3, 4, 5, 6, 7, 8, 9].

Optical signals can provide high spatial and temporal resolution, are less invasive than traditional micro-electrode methods, allow in vivo studies of more “natural” neuronal function, and support the development of closed-loop optical neuronal prostheses. Fiber-optic-coupled laser-scanning imaging methods (confocal [10] and multiphoton [11]) have produced high resolution images of neurons in animals. However, these approaches currently lack the sensitivity, speed and field of view to record the rapid (500 Hz) and relatively small fluorescence intensity changes produced by optical probes across a functionally relevant domain of cortex (e.g. visual cortex, > 2 mm²). Conventional wide field epi-fluorescence microscopy with electronic imaging relies on components that have not been designed for miniaturization and head mounting in rodents.

Currently, head-mountable imaging systems are generally coupled by fiber optics to an immobile “desktop” imaging system. While such setups allow experiments with awake animals, they still do not allow the animal to move freely, limiting the types of realistic experiments that can be performed. For a truly autonomous head-mountable system for freely moving animals (Figure 1A), the long fiber optic cable (which is rigid and has a low numerical aperture) must be replaced with a complete imaging system composed of a miniature microscope (including lenses, filters, mirror and illumination source) and an image sensor mounted on the head of the animal. A similar system has been developed for reflected light imaging in rats [12]. For high speed, head-mounted fluorescence imaging, however, a new type of imaging system must be developed around a custom image sensor designed specifically for low power and light weight while performing at specifications similar to those of much larger, fixed imaging setups. Herein we describe the design and implementation of a custom complementary metal-oxide-semiconductor (CMOS) imager [13] designed specifically for wide field, high speed, mobile imaging of low signal-to-noise ratio (SNR) optical signals from neuronal tissue. This sensor will be deployed on a custom built, head-mountable microscope (Figure 1). The full specifications and performance of this microscope will be presented in a separate communication (Pieribone, Osman, Dickensheets, Park, Culurciello, in preparation).

Figure 1.

Figure 1

A - Future implementation of our sensor in a self-contained system for imaging fluorescent activity reporters in freely moving rodents. B - The camera described in this communication attached to a head-mountable microscope system; the umbilical cable is not shown. The microscope contains an excitation filter, emission filter, dichroic mirror and lenses (objective, condenser, relay). The light source contains an LED and a heatsink. C - The camera housing and umbilical cable. The camera head weighs 3.4 grams. D - Signal-to-noise ratio as a function of exposure.

2. Materials and methods

In this section, we describe the image sensors currently available for a variety of applications and why they are not suitable for a head-mountable imaging system for freely moving animals. We also describe our previous work on head-mountable systems and how we used those results to design a high-performance, compact imaging system.

2.1. Existing Methods

Scientific-grade image sensors such as the NeuroCCD (RedShirtImaging, Decatur, GA) and the dicam (PCO, Kelheim, Germany) have typically used charge-coupled devices (CCDs) because of their lower read noise (80 dB peak SNR for the NeuroCCD and 69.3 dB peak SNR for the dicam) and higher quantum efficiency (80% for the NeuroCCD and 50% for the dicam) compared with CMOS sensors [14]. However, CCDs consume tens of watts of power (51 W for the dicam) and the imaging systems include active cooling components weighing several pounds (total weight of 8 kg for the dicam). The main advantage of CMOS is the integration of peripheral circuitry at the pixel, column or chip level. This enables on-chip image processing while consuming only milliwatts of power and requiring only passive cooling. While scientific-grade CMOS imagers such as the MiCAM ULTIMA (BrainVision, Tokyo, Japan) are approaching the quality of CCD imagers, power and weight remain issues that need to be addressed. Commercial CMOS imagers are mainly designed for cellphones or professional photography: they are energy intensive (> 100 mW), slow (usually 30 fps), designed for color imaging, and do not have a high enough SNR to detect small changes in intensity (Table 1). Cellphone CMOS sensors are small (about 3 mm × 3 mm), but still consume tens of milliwatts of power, have SNRs of less than 40 dB and are limited to 60 fps. Professional photography CMOS sensors have a higher, yet still inadequate, SNR of less than 50 dB and require too much power (Table 1). Both classes of imagers also have higher resolution (> 1 Mpixel) than is necessary for biological applications; binning would have to be implemented to collect enough photons under typical biological conditions to increase the SNR, which reduces the effective resolution (see the sketch below). A custom pixel array based on the image sensor fabrication technology used for professional photography CMOS sensors would be preferable, but this technology is proprietary and biological imaging does not currently offer a large enough market to interest large CMOS image sensor companies.
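To illustrate the binning trade-off mentioned above, the sketch below (Python; not from the original paper, and the array sizes are hypothetical) sums non-overlapping 2 × 2 pixel blocks, quadrupling the photons collected per output pixel at the cost of halving the resolution in each dimension.

```python
# Illustrative 2 x 2 binning sketch (not the authors' code); sizes are hypothetical.
import numpy as np

def bin2x2(frame: np.ndarray) -> np.ndarray:
    """Sum each non-overlapping 2 x 2 block of pixels into a single output pixel."""
    h, w = frame.shape
    cropped = frame[:h - h % 2, :w - w % 2]          # drop odd edge rows/columns
    return cropped.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A simulated photon-count image: binning collects ~4x the photons per pixel,
# so the shot-noise-limited SNR improves by roughly 2x while resolution halves.
rng = np.random.default_rng(0)
frame = rng.poisson(lam=25.0, size=(1024, 1024))
print(frame.shape, bin2x2(frame).shape)  # (1024, 1024) (512, 512)
```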

Table 1.

A representative summary of different image sensors available today. Superscripts: A = Aptina MT9H004; B = Omnivision OV5690; C = > 1 lb; D = estimate; E = at full resolution.

| | DSLR^A | Mobile Camera^B | NeuroCCD-SMQ | NeuroCCD-SM256 | MiCAM ULTIMA | This Work |
|---|---|---|---|---|---|---|
| Weight (g) | NA | NA | NA^C | NA^C | 250 | 3.4 |
| Power Consumption | NA | 155 mW | 110 W^D | 110 W^D | > 60 W | 12 mW |
| Peak Signal-to-Noise Ratio (dB) | 47 | 36 | > 70 | > 70 | > 70 | 61 |
| Dynamic Range (dB) | NA | 71.6 | 85 - 97 | 87 - 98 | 80 | 83 |
| Max Speed (fps) | 10.5^E | 30 | 2000^E | 100^E | 10000 | 890 |
| Resolution (Mpixel) | 16 | 5 | 0.0064 | 0.066 | 0.01 | 0.001 |
| Photodiode Size (μm²) | 23 | 2 | 24 | 26 | NA | 2516 |
| Operational Quantum Efficiency | 35 - 45%^D | 35 - 45%^D | > 80% | > 80% | 61% | 12.5% |
| Full Well Capacity | 45 ke-^D | 4 ke-^D | 215 ke- | 600 ke- | 10 Me- | 2.1 Me- |
| Conversion Gain | 31 μV/e-^D | 195 μV/e-^D | NA | NA | NA | 0.52 μV/e- |
| Shutter | Rolling | Rolling | Global | Global | Global | Global/Rolling |
| Color Filter | Yes | Yes | No | No | No | No |

Previously, we developed low power image sensors for miniature imaging systems [14, 15, 16]. A high speed, low power sensor was developed in a silicon-on-sapphire process [15]. While this sensor offered good quantum efficiency in the UV range (325 nm to 450 nm), its quantum efficiency outside that range was poor. In a second version of the design, we used a bulk CMOS process, which gave acceptable performance for a head-mountable system [15, 16]. This design also implemented a temporal difference output. However, the complete imaging setup could not be miniaturized.

2.2. Imaging System

Our new imaging system improves on our previous designs [15, 16]. The temporal difference capability and the SNR at lower light intensities have been improved, allowing us to collect data from high speed voltage sensitive dye experiments. Table 1 compares the specifications of our system with other currently available image sensors.

The image sensor contains a 32 × 32 active pixel sensor array, a column/row control circuit and a global buffer to drive the analog output pad. All necessary clock, control and switching signals for the circuit is generated by an FPGA board and sent to the imaging system through a 20-wire bundle. Our pixel array size of 32 × 32 is a good compromise between coverage and amount of data that has to be streamed from this mobile-platform (for the required frame rate). Each 75 μm × 75 μm pixel contains a 74 μm × 34 μm photodiode. The n-well photodiode has a capacitance of 305 fF at reset, and a well capacity of 2.1 Me-. The large capacitance results in a low conversion gain, but reduces kT/C noise and increases the dynamic range.
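As a worked cross-check of that trade-off (our arithmetic, assuming room temperature, T ≈ 300 K), the reset noise set by this capacitance is

```latex
v_{kT/C} = \sqrt{\frac{k_B T}{C_{\mathrm{PD}}}}
         = \sqrt{\frac{1.38\times10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K}}{305\,\mathrm{fF}}}
         \approx 116\ \mu\mathrm{V},
```

which agrees with the 115 μV pixel reset noise reported in Section 3.1.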

The pixel (Figure 2A) is a modification of the 3-transistor active pixel sensor design commonly found in CMOS image sensors [18]. A PMOS reset transistor is used to minimize noise and to allow a larger voltage swing, resulting in a greater well capacity. An NMOS follower charges a 788 fF storage capacitor through an NMOS shutter switch. A full transmission gate is not necessary in this design because the voltage passed through this switch and stored on the capacitor ranges from 1.1 V to 2.1 V. A dummy switch reduces clock feedthrough and charge injection into the capacitor. For normal, global shutter operation (Figure 2B), the photodiode is initially reset to Vrst by closing the Reset switch. Photon integration on the photodiode is then started by reopening the Reset switch for the duration of the Exposure Time. After the Exposure Time, the voltage on the photodiode is stored onto the Storage Capacitor by pulsing Shutter. The pixel is then reset again and keeps integrating while the stored voltage is read out through the output line. The Reset Pixel, Exposure Time and Store operations can occur during the Read Out phase without affecting the voltage on the Storage Capacitor. Thus, the Read Out phase adds no additional delay between frames, and our system in global shutter mode offers frame rates comparable to rolling shutter imagers.
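A minimal timing sketch of this pipelining is given below (Python; our reading of Figure 2B, not the authors' FPGA controller). The readout time is an assumption: 32 × 32 pixel samples through the ~1 MSPS ADC described in Section 3.1 take roughly 1 ms, which is roughly consistent with the ~890 fps maximum listed in Table 1.

```python
# Pipelined global-shutter timing sketch (illustrative, not the authors' controller code).
# Because frame N is read from the storage capacitors while frame N+1 integrates on the
# photodiodes, the frame period is set by the longer of the two phases, not their sum.

EXPOSURE_US = 709.0            # integration time used in Section 3.1
READOUT_US = 32 * 32 / 1.0     # assumption: 1024 pixel samples at ~1 MSPS

def frame_period_us(exposure_us: float, readout_us: float) -> float:
    """Frame period when exposure and readout overlap (global shutter mode)."""
    return max(exposure_us, readout_us)

period = frame_period_us(EXPOSURE_US, READOUT_US)
print(f"~{period:.0f} us per frame -> ~{1e6 / period:.0f} fps before control overhead")
```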

Figure 2.

Figure 2

A - A simplified schematic and the operation of our image sensor. Each pixel contains an n-well/p-substrate photodiode, a Reset switch to reset the photodiode to Vrst at the beginning of the Exposure Time of each frame, a Shutter switch to store the photodiode voltage onto the Storage Capacitor, and a Rowsel switch to output the pixel value to the global amplifier and comparator. B - In normal, global shutter mode, the photodiode is reset to Vrst. The voltage on the photodiode then drops from Vrst in proportion to the light intensity during the Exposure Time. After the Exposure Time, the Shutter switch is pulsed to store the resulting photodiode voltage onto the Storage Capacitor. During the Read Out phase, the value of the Storage Capacitor is read out through a global buffer in parallel with the Reset and Exposure Time phases. C - In temporal difference mode, for the first image frame, the photodiode is reset to Vrst and then integrated for the duration of the Frame 1 Exposure Time. The photodiode voltage (Frame 1) is then stored onto the Storage Capacitor with the Shutter switch. After Reset Pixel, the second frame is integrated on the photodiode during the Frame 2 Exposure Time. Both the stored voltage of the first frame and the photodiode voltage of the second frame are then read out simultaneously to the global comparator by turning on the Rowsel switch during the Read Differential Out phase. With the Rowsel switch still on, the Shutter switch is pulsed again to store the Frame 2 voltage onto the Storage Capacitor during the Store Frame 2 phase. The pixel is then reset and the third frame is integrated; read out after the third frame, and after all subsequent frames, proceeds in the same manner.

2.3. Temporal difference operation

In order to reduce the performance requirements of the ADC, and to increase the frame rate, the image sensor can also perform temporal differencing at the chip level. In temporal difference operation (Figure 2C), the photodiode is reset to Vrst by turning on the Reset switch. Integration is then started by turning off the Reset switch for the duration of the Exposure Time. After the Exposure Time, the photodiode voltage is stored onto the Storage Capacitor by pulsing the Shutter switch. The pixel is then reset again and integrates for a second Exposure Time. After the second Exposure Time, the stored voltage from the previous frame and the voltage from the current frame are output simultaneously to a global comparator that takes the difference between the two frames for each pixel. After the difference is read out, the current voltage is stored onto the Storage Capacitor, and the pixel is reset again and integrates for the next Exposure Time. In temporal difference mode, the frame rate is lower than in normal mode, because the Read Out phase cannot be performed in parallel with the other phases. This also means that no global electronic shutter is available in this mode and a rolling shutter is used.
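For reference, the sketch below (Python) shows the off-chip software equivalent of this operation, i.e. the frame-by-frame subtraction used for comparison in Figure 3; it is illustrative and not the authors' code.

```python
# Off-chip equivalent of the on-chip temporal difference (illustrative; mirrors the
# software subtraction used for comparison in Figure 3, not the chip's comparator).
import numpy as np

def temporal_difference(frames: np.ndarray) -> np.ndarray:
    """Frame-to-frame differences for a (n_frames, 32, 32) stack of pixel values."""
    return np.diff(frames.astype(np.float64), axis=0)

def delta_i_over_i(frames: np.ndarray) -> np.ndarray:
    """Fractional intensity change per pixel (the quantity reported as dI/I)."""
    f = frames.astype(np.float64)
    return np.diff(f, axis=0) / f[:-1]

# Synthetic example: a 0.2 % step on a flat baseline shows up as dI/I of ~0.002.
stack = np.full((3, 32, 32), 1000.0)
stack[2] *= 1.002
print(delta_i_over_i(stack)[1].mean())  # ~0.002
```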

3. Results

A test platform to characterize the image sensor was built around an Opal Kelly XEM3001v2 FPGA board. The FPGA controlled a 16-bit ADC and two 12-bit digital-to-analog converters that provided the appropriate biases to the image sensor. A C++ control program was written to communicate with the FPGA board through a USB port.

Our custom-designed imaging chip is integrated onto a small, lightweight PC board containing only an ADC, a voltage regulator, four resistors and four bypass capacitors. The current camera head measures 22.25 mm × 22.25 mm × 6 mm and is connected to a remote image capture system by a wire bundle weighing 4 g/m.
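As a sketch of the host-side data path (the authors' C++ capture software is not reproduced here), the Python fragment below unpacks a raw 16-bit sample stream into 32 × 32 voltage frames; the straight-binary coding and the 2.5 V ADC reference are assumptions made only for illustration.

```python
# Hypothetical host-side unpacking of the raw pixel stream (illustrative only).
# Assumes little-endian, straight-binary 16-bit ADC codes and a 2.5 V reference;
# both are assumptions, not specifications from the paper.
import numpy as np

ADC_BITS = 16
VREF = 2.5                      # assumed ADC reference voltage
PIXELS_PER_FRAME = 32 * 32

def codes_to_frames(raw: bytes) -> np.ndarray:
    """Convert a raw uint16 byte stream into an (n_frames, 32, 32) array of volts."""
    codes = np.frombuffer(raw, dtype="<u2")
    n_frames = codes.size // PIXELS_PER_FRAME
    codes = codes[: n_frames * PIXELS_PER_FRAME].reshape(n_frames, 32, 32)
    return codes * (VREF / (2 ** ADC_BITS - 1))

# Example: ten frames of mid-scale codes decode to roughly VREF / 2.
demo = np.full(10 * PIXELS_PER_FRAME, 2 ** 15, dtype="<u2").tobytes()
print(codes_to_frames(demo).mean())   # ~1.25
```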

3.1. Image sensor characterization

The sensitivity of the pixel was calculated by measuring the voltage integrated at 50 lux with the sensor running at 500 fps. The integrated voltage was 288 mV and the integration time was 709 μs. Thus, the sensitivity is 0.62 V/lux·s. Since the full voltage swing of the pixel is 1.1 V, the full well capacity is 2.1 Me- and the conversion gain is 0.52 μV/e-.
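These last two figures follow directly from the photodiode capacitance quoted in Section 2.2 and the 1.1 V swing (our arithmetic, shown for clarity):

```latex
N_{\mathrm{well}} = \frac{C_{\mathrm{PD}}\,\Delta V_{\mathrm{swing}}}{q}
                  = \frac{305\,\mathrm{fF} \times 1.1\,\mathrm{V}}{1.602\times10^{-19}\,\mathrm{C}}
                  \approx 2.1\ \mathrm{Me^{-}},
\qquad
g = \frac{\Delta V_{\mathrm{swing}}}{N_{\mathrm{well}}} = \frac{q}{C_{\mathrm{PD}}}
  \approx 0.52\ \mu\mathrm{V/e^{-}}.
```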

To test the SNR of the image sensor, the photodiode in every pixel was integrated for 709 μs at different light intensities. The light was provided by a Luxeon Rebel LED controlled with a custom-built feedback-controlled drive circuit (an Arduino microcontroller and a National Instruments DAQ board). A set of 10,000 consecutive frames was collected at each light intensity, and the voltage drop of the photodiode was divided by the standard deviation of the photodiode values to calculate the SNR. The photodiode saturates at around 120 lux at 500 fps, where it has an SNR of 61 dB. The SNR needs to be between 55 and 72 dB for the voltage sensitive dye imaging (VSDI) application targeted by our experiments.
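A minimal sketch of this SNR computation (Python; the example numbers are synthetic, not the recorded data) is:

```python
# SNR computation as described above: signal is the mean photodiode voltage drop,
# noise is its temporal standard deviation over many frames (illustrative sketch).
import numpy as np

def snr_db(voltage_drops: np.ndarray) -> float:
    """SNR in dB for one pixel, given its voltage drop in each of N frames."""
    signal = voltage_drops.mean()
    noise = voltage_drops.std(ddof=1)
    return 20.0 * np.log10(signal / noise)

# Synthetic example: a 1.1 V drop (full swing) with ~1 mV of temporal noise
# works out to roughly 61 dB, the saturation figure quoted in the text.
rng = np.random.default_rng(1)
drops = rng.normal(loc=1.1, scale=1.0e-3, size=10_000)
print(round(snr_db(drops), 1))  # ~60.8
```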

The quantum efficiency of the photodiode was measured using light between 250 and 1000 nm produced by a HORIBA Jobin Yvon FluoroMax-3. A Newport 818-UV calibrated photodiode was used as a reference to determine the light power at each wavelength. The integration time required to drop the photodiode voltage of the sensor by 1 V was measured with this light source, and the quantum efficiency was derived from the collected data. The peak quantum efficiency was about 25%, reflecting the fact that the CMOS process we used is not optimized for optical imaging applications. With a fill factor of 50%, the operational quantum efficiency of each pixel is 12.5%.
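One way to express this measurement (a sketch of the relationship, not the authors' exact calibration procedure): the collected charge is inferred from the time needed to discharge the photodiode by 1 V, and is compared with the number of photons incident on the photodiode during that time:

```latex
QE(\lambda) \;=\; \frac{N_{e^{-}}}{N_{\mathrm{ph}}}
            \;=\; \frac{C_{\mathrm{PD}} \cdot 1\,\mathrm{V}/q}{P(\lambda)\,t_{1\mathrm{V}}\,\lambda/(hc)},
```

where P(λ) is the optical power falling on the photodiode, calibrated with the Newport reference diode, and t₁V is the measured discharge time.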

We analyzed the temporal noise of our image sensor and its SNR. The pixel reset noise, directly related to the capacitance of the photodiode, is 115 μV. The voltage drop due to dark current has a slope of 0.14 V/s, so the dark current is 162 fA, or 6.4 nA/cm². At 500 fps, the dark shot noise is 3.8 μV. The board noise was measured by connecting a very stable DC voltage to the output pin of the image sensor (the input pin of the ADC) on the board and collecting 12,000 frames of all 32 × 32 pixels. The ADC was running at 1 MSPS, the highest rate achievable with this specific model (Analog Devices AD7980). The standard deviation of all pixels across the 12,000 frames was 138 μV. The noise due to all other components on the chip was measured by running the image sensor with a very short integration time (approximately 1 μs) in the dark. Under these conditions, the total noise of the circuit was 300 μV. The only noise sources that contribute over such a short integration time in the dark are reset, board and readout noise; thus, the read noise is 260 μV. The fixed pattern noise (FPN) was calculated by measuring the standard deviation of each of the 32 × 32 pixels over 10,000 frames and then taking the mean of the resulting 1024 standard deviation values. The image sensor has a pixel FPN of 1.5%, a column FPN of 1.1% and a row FPN of 0.82%.
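The read-noise estimate relies on independent noise sources adding in quadrature, so the unmeasured component is obtained by subtracting the known contributions from the 300 μV total measured in the dark (a sketch of the bookkeeping only; we have not re-derived the exact terms removed):

```latex
v_{\mathrm{read}} \;\approx\; \sqrt{\,v_{\mathrm{total}}^{2} \;-\; \sum_{i} v_{\mathrm{known},i}^{2}\,},
\qquad v_{\mathrm{total}} = 300\ \mu\mathrm{V}\ \text{(dark, } \sim 1\ \mu\text{s integration)}.
```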

The temporal difference output was tested by recording a spinning wheel with two grayscale gradient wedges (a differential of 12.5%). The temporal difference mode was running at 333 fps. As seen in Figure 3, the performance of on-chip and off-chip temporal differencing is similar. However, this test was performed under ideal conditions, where the intensity of the light hitting the sensor was at an optimal level. Under normal (non-ideal) operating conditions, the discrepancy between data from the on-chip temporal differencing circuit and the off-chip software calculation can be as high as 0.65%, which is too large to allow detection of signals below 1% ΔF/F. This error is caused by the current op-amp design used in the differencing circuit: outside a certain operating voltage range, the output of the circuit becomes non-linear with respect to the light intensity.

Figure 3.

Figure 3

Data captured at 333 fps of a rotating disk. The left side of each panel is a single frame and the right side is the value of one pixel over time. In the single frame images, the contrast between the quadrants of the disk has been digitally enhanced for presentation. A - The imaging system running in temporal difference mode. B - Off-chip temporal difference calculated by subtracting two consecutive image frames. C - Normal output mode. The vertical scale bar represents a 1.3% change. The horizontal scale bar represents 0.2 seconds.

3.2. Olfactory bulb recordings

A calcium-sensitive dye, Oregon Green BAPTA 488 Dextran (10 kD), was loaded onto rat olfactory receptor neurons 7 days prior to imaging using a procedure similar to that developed for mice [19]. All animal experiments were approved by our Institutional Animal Care and Use Committee in compliance with all Public Health Service guidelines. Just prior to imaging, an optical window was installed over the dorsal surface of each olfactory bulb by thinning the overlying bone to 100 μm thickness and coating the thinned bone with ethyl 2-cyanoacrylate glue to increase its refractive index. Odorants were diluted from the saturated vapor of pure liquid chemicals using mass flow controllers. The linearity and stability of the olfactometer were verified with a photoionization detector. An odorant delivery tube (20 mm ID, 40 mm long), positioned 6 mm from the animal's nose, was connected to a vacuum flow controlled by a 3-way valve. Switching this vacuum flow away from the tube assured rapid onset and offset of the odorant. After creating the optical window, a double tracheotomy was performed to control sniffing. To draw odorants into the nasal cavity, a Teflon tube was inserted tightly into the nasopharynx. This tube communicated intermittently (250 ms inhalation at a 2 s interval) with a vacuum flow via a PC-controlled 3-way valve. A cyan LED was used as the light source. The Neuroplex software initiated a trial by triggering the onset of the light source, resetting the sniff cycle and triggering the olfactometer. Imaging started 0.5 s after the trigger and an odorant was presented from 4 to 9.5 s. Running at 25 fps, the change in intensity detected by our system with a 4% concentration of 2-hexanone delivered orthonasally was around 3.1%. This is comparable to the data collected with the NeuroCCD-SM256, a commercial camera (Figure 4).

Figure 4.

Figure 4

A - Image of Oregon Green BAPTA loaded olfactory nerve terminals in the rat olfactory bulb taken with the sensor described in this communication. The red spot indicates the location of a 2 × 2 pixel sub-region averaged to produce the trace in C. B - The same field of view captured with a NeuroCCD-SM256 camera (RedShirtImaging, LLC). The red spot indicates an 8 × 8 pixel sub-region averaged to produce the trace in D. C - Calcium transients recorded from olfactory nerve terminals within glomeruli of the rat olfactory bulb following presentation of an odorant (3.1% 2-hexanone), recorded with our sensor. D - Approximately the same ROI imaged with the NeuroCCD-SM256 camera. Both cameras produce similarly sized signals with comparable noise characteristics. Some of the repetitive noise in panel D arose from metabolic (i.e. heartbeat-related blood flow) sources. Both traces are single presentations and are unfiltered. Vertical and horizontal scale bars represent 1% ΔF/F and 1 second, respectively.

3.3. Imaging mouse somatosensory cortex during whisker deflection

A 30 g mouse was anesthetized with urethane (0.6 ml, 100 mg/ml, IP). The depth of anesthesia was monitored by tail pinch and heart rate. Body temperature was maintained at 37 °C using a heating pad and rectal thermometer. Additional anesthesia was given as needed (0.1 ml, IP). A dye well/head holder was affixed to the exposed skull using cyanoacrylate glue. A craniotomy (5 mm diameter) was performed over barrel cortex (2.1 mm lateral and 2 mm caudal to bregma) using a Gesswein drill with a 0.5 round head bit. The dura was left intact and a solution of RH1692 (100 μl, 0.1 mg/ml in ACSF) was applied for 1.5 hrs with stirring every 10-15 minutes to prevent the CSF from forming a barrier between the dye and the brain. A pressurized air puff (50 ms, Picospritzer) was used to deflect the whiskers contralateral to the side of imaging. The barrel cortex was imaged using a tandem-lens epifluorescence macroscope equipped with a 150 W xenon arc lamp (OptiQuip, Highland Mills, NY).

4. Discussion

We set out to develop a camera that is part of a fully autonomous fluorescence imaging system capable of being carried by a freely moving rat. We describe an image sensor that represents a compromise between small cellphone-style cameras and large research-grade imaging systems. Our sensor allows high-speed, low noise recording in a small, low power, lightweight package. We are currently testing the next generation of the sensor, which promises to improve several performance characteristics, including higher sensitivity, lower noise, greater spatial resolution and a higher fidelity temporal differencing mode. This miniaturized imaging and control system for wide field optical recording, combined with genetically-encoded optical probes, represents a first step towards wearable optical neural prosthetics. With the current design it will be possible to include a full imaging and storage system that can be carried by a rodent or larger animal.

Figure 5.

Figure 5

Data captured from a voltage sensitive dye (RH1692) stained barrel cortex following whisker deflection by a 50 ms air puff. A - 20-trial average of a 138 μm × 138 μm area of cortex imaged with our system (2 × 2 pixels). B - 20-trial average of the same area in the same experiment using a RedShirt Imaging NeuroCCD (4 × 4 pixels). Data were collected at 500 Hz for both systems. The vertical scale bar represents 0.5% ΔF/F and the horizontal scale bar represents 0.1 second.

Research Highlights.

  • - Custom low power, high SNR image sensor fabricated and housed in a lightweight system, because no currently available image sensor meets the design requirements

  • - Enables systems that can be head-mounted onto freely moving and awake rodents for wide field imaging of optical probes

  • - Sensor performs similarly to much larger, fixed experimental setups in detecting optical signals

  • - Also offers on-chip ΔI/I (temporal difference) output, reducing size and power requirements

Acknowledgments

This project was partly funded by ONR grant numbers 439471 and 396490, ARO contract W911NF-07-1-0597, and NIH grants R01NS065110-02 and U24NS057631.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

1. Ataka K, Pieribone VA. A genetically-targetable fluorescent voltage probe with fast kinetics. Biophysical Journal. 2002;82(1):509–516. doi: 10.1016/S0006-3495(02)75415-5.
2. Baker B, Lee H, Pieribone VA, Cohen L, Isacoff E, Knöpfel T, Kosmidis E. Three fluorescent protein voltage sensors exhibit low plasma membrane expression in mammalian cells. Journal of Neuroscience Methods. 2007;161(1):32–38. doi: 10.1016/j.jneumeth.2006.10.005.
3. Chanda B, Blunck R, Faria L, Schweizer F, Mody I, Bezanilla F. A hybrid approach to measuring electrical activity in genetically specified neurons. Nature Neuroscience. 2005;8(11):1619–1626. doi: 10.1038/nn1558.
4. Dimitrov D, He Y, Mutoh H, Baker B, Cohen L, Akemann W, Knöpfel T. Engineering and characterization of an enhanced fluorescent protein voltage sensor. PLoS ONE. 2007;2(5):e440. doi: 10.1371/journal.pone.0000440.
5. Knöpfel T, Tomita K, Shimazaki R, Sakai R. Optical recordings of membrane potential using genetically targeted voltage-sensitive fluorescent proteins. Methods. 2003;30(1):42–48. doi: 10.1016/s1046-2023(03)00006-9.
6. Mutoh H, Perron A, Dimitrov D, Iwamoto Y, Akemann W, Chudakov D, Knöpfel T. Spectrally resolved response properties of the three most advanced FRET based fluorescent protein voltage probes. PLoS ONE. 2009;4(2):e4555. doi: 10.1371/journal.pone.0004555.
7. Siegel M, Isacoff E. A genetically encoded optical probe of membrane voltage. Neuron. 1997;19(1):735–741. doi: 10.1016/s0896-6273(00)80955-1.
8. Tsien R. Indicators based on fluorescence resonance energy transfer (FRET). Cold Spring Harb Protoc. 2009; pdb.top57. doi: 10.1101/pdb.top57.
9. Tsutsui H, Karasawa S, Okamura Y, Miyawaki A. Improving membrane voltage measurements using FRET with new fluorescent proteins. Nature Methods. 2008;5(1):683–685. doi: 10.1038/nmeth.1235.
10. Flusberg B, Nimmerjahn A, Cocker E, Mukamel E, Barretto R, Ko T, Burns L, Jung J, Schnitzer M. High-speed, miniaturized microscopy in freely moving mice. Nature Methods. 2008;5(1):935–938. doi: 10.1038/nmeth.1256.
11. Helmchen F, Fee M, Tank D, Denk W. A miniature head-mounted two-photon microscope: high resolution brain imaging in freely moving animals. Neuron. 2001;31(1):903–912. doi: 10.1016/s0896-6273(01)00421-4.
12. Murari K, Etienne-Cummings R, Cauwenberghs G, Thakor N. An integrated imaging microscope for untethered cortical imaging in freely-moving animals. Conf Proc IEEE Eng Med Biol Soc. 2010:5795–5798. doi: 10.1109/IEMBS.2010.5627825.
13. Theuwissen A. CMOS image sensors: state-of-the-art. Solid-State Electronics. 2008;52(1):1401–1406.
14. Tian H, Fowler B, Gamal AE. Analysis of temporal noise in CMOS photodiode active pixel sensor. IEEE Journal of Solid-State Circuits. 2001;36(1):92–101.
15. Park J, Culurciello E. Back-illuminated ultraviolet image sensor in silicon-on-sapphire. IEEE International Symposium on Circuits and Systems. 2008:1854–1857.
16. Park J, Culurciello E, Kim D, Verhagen J, Gautam S, Pieribone VA. Voltage sensitive dye imaging system for awake and freely moving animals. IEEE Biomedical Circuits and Systems Conference. 2008:89–92.
17. Park J, Pieribone VA, von Hehn C, Culurciello E. High-speed fluorescence imaging system for freely moving animals. IEEE International Symposium on Circuits and Systems. 2009:2429–2432.
18. Fossum E. Digital camera system-on-a-chip. IEEE Micro. 1998;18(3):8–15.
19. Verhagen J, Wesson D, Netoff T, White J, Wachowiak M. Sniffing controls an adaptive filter of sensory input to the olfactory bulb. Nature Neuroscience. 2007;10(1):631–639. doi: 10.1038/nn1892.
