Published in final edited form as: Neuron. 2013 Oct 16;80(2). doi: 10.1016/j.neuron.2013.09.008

The Challenge of Connecting the Dots in the B.R.A.I.N.

Anna Devor 1,2,3, Peter A Bandettini 4,5, David A Boas 3, James M Bower 6, Richard B Buxton 2, Lawrence B Cohen 7,8, Anders M Dale 1,2, Gaute T Einevoll 9, Peter T Fox 6,10, Maria Angela Franceschini 3, Karl J Friston 11, James G Fujimoto 12, Marc A Geyer 13, Joel H Greenberg 14, Eric Halgren 1,2, Matti S Hämäläinen 3, Fritjof Helmchen 15,16, Bradley T Hyman 17, Alan Jasanoff 18,19, Terry L Jernigan 2,13,20,21, Lewis L Judd 13, Seong-Gi Kim 22, David Kleinfeld 23, Nancy J Kopell 24, Marta Kutas 1,21,25, Kenneth K Kwong 3, Matthew E Larkum 26, Eng H Lo 27, Pierre J Magistretti 28, Joseph B Mandeville 3, Eliezer Masliah 1, Partha P Mitra 29, William C Mobley 1, Michael A Moskowitz 27, Axel Nimmerjahn 30, John H Reynolds 31, Bruce R Rosen 3, Brian M Salzberg 32, Chris B Schaffer 33, Gabriel A Silva 34, Peter T C So 35, Nicholas C Spitzer 36, Roger B Tootell 3,37, David C Van Essen 38, Wim Vanduffel 3,39, Sergei A Vinogradov 40, Larry L Wald 3, Lihong V Wang 41, Bruno Weber 42, Arjun G Yodh 43
PMCID: PMC3864648  NIHMSID: NIHMS530745  PMID: 24139032

Abstract

The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative has focused scientific attention on the necessary tools to understand the human brain and mind. Here, we outline our collective vision for what we can achieve within a decade with properly targeted efforts, and discuss likely technological deliverables and neuroscience progress.

Introduction

What makes a student – or anyone – fall in love with neuroscience? For many, the life-long affair begins with an encounter with “cognitive neuroscience” – the phenomena of perception, learning, memory, language, emotions, and other marvels of the human mind. It grows from a desire to explore the biophysical substrates of these processes and to understand the mechanisms of brain function, from the activity of individual nerve cells to the emergence of conscious perception. These are among the biggest questions that capture the imagination of neuroscientists and society alike. No matter who we are, we cannot help but be excited when the spiking activity of a single neuron or a functional MRI response in humans allows us to predict actions, percepts, and memory retrieval. And yet, these glimpses of insight fall far short of an understanding of “how the brain works.”

Over the years, neuroscientists have gathered myriad mechanistic bits and pieces from studies of the brain in a range of model organisms, based on activity measured at varying spatial and temporal scales. This mosaic of knowledge, however, has not resolved into a clear picture of the functional organization of the brain. This is in part because large pieces are still missing. More importantly, it stems from the lack of a roadmap and of the necessary tools to connect the dots. Herein lies a challenge that human brain mapping does not share with the great mapping effort of the last decade, the Human Genome Project. There, while the task was daunting for the technology that existed at its inception, the initial target was clear: sequencing the DNA. With brain mapping, in contrast, neuroscientists face a key ingenuity test for this century: we need to discover new paradigms in order to solve the puzzle.

Last April, President Obama’s announcement of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative opened a debate within the scientific community as to what the scale and scientific scope of such a program should be. What holds us back from realizing our dream of figuring out how our brain “works”? More specifically, what is needed to enable a biologically-based description of behavior at the level of cellular and subcellular functional brain organization, without losing sight of the forest for the trees? What limits our ability to manipulate the brain’s activity on a microscopic scale while correctly predicting the outcome for higher cortical functions? What will it take to link neurological and neuropsychiatric diseases to the specific cellular and subcellular properties of the elements whose collective dysfunction results in altered perception, impaired learning, or memory loss?

Below, we outline our broad, multidisciplinary perspective on how to address these questions. We begin by examining the kinds of technologies that, collectively and within a valid theoretical framework, would facilitate the necessary quantum leap toward understanding brain function and its disruption in disease. We then revisit the concept of emergent properties of the brain’s functional organization, which arises time and again in the debates surrounding the BRAIN Initiative. Finally, we offer a prediction of the state of neuroscience in ten years. While acknowledging significant technological and theoretical challenges, we believe that a properly targeted, robust investment in the science of the brain today can transform our understanding of the human brain and mind and set a new course toward alleviating brain disorders. The views expressed herein are independent of, and may be complementary to, the recommendations proposed by the NIH-organized BRAIN working group.

Technology on and beyond the horizon

The micro- and nanotechnologies for experimentally measuring, labeling, and manipulating neuronal activity have been a focus of the debates around the BRAIN Initiative. The technologies gathered under this broad umbrella can be divided into three categories according to their stage of maturity.

The first category comprises tools that have already found neuroscience applications. Measurement modalities in this category include, for example, electrophysiological recordings using arrays of electrodes, multiphoton microscopy, photoacoustic and optical coherence tomography, voltage-sensitive dye imaging, and super resolution microscopy. For each of these technologies, enhancing both the quality of the measurement (resolution, speed, sampling efficiency, selectivity, and specificity) and the ability to quantify the underlying physiological parameter of interest could prove transformative. Enhancement/acceleration of existing tools typically involves combining advances from different fields, thereby requiring a transdisciplinary effort. For instance, one can imagine combining next-generation multicolor genetically-encoded voltage and calcium indicators (genetic engineering) with large-scale, parallel two-photon detection (instrumentation engineering) to achieve efficient sampling from neurons of many cell types simultaneously and reconstruction of the circuit behavior (computational modeling). Such efforts would come with only moderate technological risks: we have good reason to believe that the task is feasible and that the final product will meet the needs. Practical solutions have already been demonstrated for some of these elements (e.g., 3D scanning technologies) but industrial partnership is needed to facilitate broad adoption by the neuroscience community.

The second category includes tools where a proof of principle is available but application in the neurosciences is in its infancy (“on the horizon”) or non-existent. One example of this is the so-called “wide-field two-photon microscopy” technique that could revolutionize multiphoton imaging by relaxing the requirement of scanning one pixel at a time while retaining the optical sectioning inherent in nonlinear excitation. Novel technologies of this type are sometimes conceived and developed in laboratories outside the neurosciences that do not follow through in demonstrating their practical utility but rather move on to the next project as soon as the proof of principle has been achieved. Advancing these technologies to the next stage, therefore, would benefit from a multidisciplinary collaboration attuned to the specific biological questions to be addressed. In contrast to the first category, the potential risks are high in developing on-the-horizon tools, as are the potential rewards.

A final category of tools is best described as “beyond the horizon.” For example, it would be very useful to have a noninvasive version of optogenetics for use in humans with Parkinson’s disease. The objective is clear, but the existing technologies do not scale up; there is no obvious path. This is like sailing a ship toward a target beyond the horizon without a means of navigation: even with the most imaginative and innovative crew on board, we might not reach the destination. Making progress with such technologies would require a new invention, a discovery, a way to overcome an apparent fundamental limit. This is not impossible. Seemingly fundamental limits can be broken, as occurred with the recent arrival of super resolution microscopy, which shattered the conventional optical diffraction limit. The possible impact of innovations of this magnitude cannot be overestimated, of course. Yet discoveries do not adhere to a schedule, and an effort built around them may face the problem of unworkable or unachievable goals. In addition to the inherent technical and scientific risks, such efforts are typically disciplinary by nature and carried out by specialized laboratories.

From neurons to networks to behavior

In parallel with technological advances, we need theories to tie together measurements across the spatial and temporal scales and make predictions of the emergent properties of neurons connected in networks. The term emergent property is borrowed from the physics of complex systems, where it refers to phenomena that cannot be directly traced to their individual components, only to how those components interact. Consider the example of weather – the state of the atmosphere. The temperature of the air is not defined at the atomic scale; it is an emergent property of many atmospheric particles. A weather forecast requires a valid theoretical framework: a model. The model incorporates a set of rules worked out by studying interactions among particles; the actual forecast, however, is not predicted by simulating the position of every molecule. Rather, the forecast is made on the relevant practical scale by means of measurements of the current state of the atmosphere and models formulated with “coarse-grained” variables such as pressure and temperature and parameters such as the physical shapes of landforms. For the most part, this approach works: we can rely on the National Weather Service to predict tomorrow’s rain.
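To make the coarse-graining idea concrete, the toy sketch below (our illustration, not part of the original argument) tracks many random walkers, a stand-in for microscopic particles, and compares the resulting density profile with the prediction of a diffusion equation written in a single coarse-grained variable; the diffusion setting and all numbers are assumptions chosen purely for illustration.

```python
# A minimal, hypothetical sketch of "coarse-graining": the density profile of many
# random walkers (microscopic description) is well predicted by a diffusion
# equation in one coarse-grained variable, without tracking any individual walker.
# All parameter values are illustrative, not taken from the article.
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps, step = 50_000, 200, 1.0

# Microscopic simulation: every walker's position is tracked explicitly.
positions = np.zeros(n_walkers)
for _ in range(n_steps):
    positions += rng.choice([-step, step], size=n_walkers)

bins = np.arange(-60, 61, 4)
micro_density, _ = np.histogram(positions, bins=bins, density=True)

# Coarse-grained prediction: a Gaussian solution of the diffusion equation,
# parameterized only by the effective diffusion constant D = step**2 / 2.
centers = 0.5 * (bins[:-1] + bins[1:])
D = step**2 / 2
macro_density = np.exp(-centers**2 / (4 * D * n_steps)) / np.sqrt(4 * np.pi * D * n_steps)

print("max |micro - macro| density mismatch:",
      np.abs(micro_density - macro_density).max())
```

The point of the sketch is only that the macroscopic prediction requires no knowledge of any individual trajectory, which is the sense in which temperature, pressure, or an ensemble firing rate can be modeled without simulating every underlying element.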

While the separation of microscopic and macroscopic scales is less clear in neuroscience than in atmospheric physics, the analogy is nevertheless useful. Taking the ability to predict as a surrogate for understanding, grasping higher cortical functions – perception, for example – by quantifying the firing of a large number of individual neurons across the brain may be impractical; instead, it will likely be necessary to use intermediary measures and appropriate mathematical models. Moreover, statistical sampling from neurons of known cell type and connectivity would be preferable to merely increasing the number of simultaneously captured spikes. This is because our brains, in contrast to those of invertebrates, appear to be built from large populations of neurons performing the same function collectively and in a probabilistic way. We humans can lose neurons from the age of 20 or earlier without a noticeable effect on cognitive performance; for the nematode C. elegans, by contrast, the loss of a single neuron can be catastrophic for survival. Thus, intermediary measures reflecting the ensemble activity of neurons of similar types – which can be localized on the cortical sheet – would offer extremely valuable information.
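As a purely hypothetical numerical illustration of why sampling can stand in for exhaustive recording (the population size, rate distribution, and sample sizes below are assumptions, not data), one can ask how well a random sample of neurons of a single labeled type estimates the ensemble firing rate of the whole population:

```python
# If many neurons of one type carry the same signal probabilistically, the
# ensemble rate estimated from a modest random sample already tracks the true
# population rate closely. Synthetic numbers for illustration only.
import numpy as np

rng = np.random.default_rng(2)
population_rate = 12.0                    # assumed ensemble firing rate, spikes/s
n_population = 100_000                    # assumed number of neurons of this type
rates = rng.gamma(shape=4.0, scale=population_rate / 4.0, size=n_population)

for n_sampled in (10, 100, 1_000, 10_000):
    sample = rng.choice(rates, size=n_sampled, replace=False)
    err = abs(sample.mean() - rates.mean()) / rates.mean()
    print(f"sampled {n_sampled:>6} neurons -> relative error {100 * err:.2f}%")
```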

Further, a number of different types of measures might be required to provide the critical input to the model. For example, sleep spindles, Up and Down states, and cortical spreading depression could each be described by a set of parameters including those related to subthreshold polarization, intracellular calcium concentration in neurons and glia, blood flow, and energy consumption. As in the well-known story of the blind men and the elephant, access to only a single kind of measurement may be insufficient (or even misleading) for grasping the bigger picture.

The interactions between individual elements of the brain – neurons and glia – would need to be understood and factored into any general model. Oftentimes, this knowledge can be derived most efficiently from relatively simple model organisms, cultured neurons, or isolated preparations of brain tissue. For instance, one can study synapse formation and its genetic determinants in C. elegans or the fruit fly Drosophila melanogaster to understand the general rules of neuronal recognition and synaptic plasticity. These rules can then be validated in the intact mouse or non-human primate cortex (using statistical measures rather than exhaustive sampling) and implemented as building blocks in computational models.

Mapping the (human) brain in health and disease

Ultimately, the debate comes down to distinct perspectives as to what exactly we need to measure in order to understand what the brain is doing. One obvious target is spikes. But would efforts focused entirely on firing neurons deliver the promised breakthrough in understanding brain function in health and disease? Although most of the brain disorders that impose the greatest burden on American society (e.g., Alzheimer disease, Parkinson disease, Down syndrome, schizophrenia, bipolar illness, autism, migraine, stroke, and traumatic brain injury) involve disease processes that affect the generation of spikes, they cannot be described by the spike code alone. These processes include dysfunction of synaptic growth and communication, abnormal glial activity, release of inflammatory mediators, altered molecular signaling (neuro- and gliotransmission, growth factors), disruption of the neuroglial metabolic partnership, pathological neurovascular coupling, and premature cell death; some are also part of the repertoire underlying recovery or restoration of function. For these reasons, measurement of multiple electrical, molecular/chemical, and connectivity parameters in the working brain might prove at least as valuable as increasing the number of simultaneously captured spikes.

Animal models of brain diseases do not fully reproduce the range of human symptoms, but they do play an important role in studying the effects of specific genetic and experimental perturbations, testing potential treatments, and probing the processes involved in recovery. A comprehensive investigation of pathological mechanisms in these models entails the development of new technologies for quantitative measurements not just of voltage and calcium, but also of other ions, signaling molecules, metabolites, metabolic substrates, and blood perfusion and oxygenation. Ideally, these measurements would be performed in the intact brains of awake, behaving animals, in which the natural interactions between neurons, glia, and cerebral microvasculature are preserved.

Eventually, it will be necessary to translate the findings from animals to humans. Direct translation of tools to humans would be a false promise. For instance, simultaneous optical recording from hundreds of neurons within a cubic-millimeter volume has been demonstrated in the mouse cortex. Yet, extending these measurements to humans is precluded by the invasive nature of the method and other technical constraints. The available noninvasive measurements, however, provide only indirect information about the activity of brain cells and circuits, leaving a gap between the macroscopic activity patterns available in humans and the rich, detailed view achievable in model organisms. A concerted effort to bridge this gap is an important opportunity for the BRAIN Initiative.

Let’s examine the case of fMRI. Here, one obvious limitation is its relatively low resolution. Beyond this, there is an even more fundamental constraint: the indirect and uncertain relationship between the imaged signals and the underlying neuronal, metabolic, and vascular brain activity. To illustrate this, consider the technological achievements in imaging over the past decade, e.g., dramatic improvements in parallel imaging, enhanced performance of gradient and radiofrequency coils, and the move toward higher field strengths. On one hand, these improvements have enabled sub-millimeter resolution (comparable to the size of cortical layers and columns), which may be sufficient to capture brain phenomena manifested at this mesoscopic scale. On the other hand, the physiological interpretation of the imaged physical signals remains unclear. This limitation is particularly debilitating in disease because of the potential (and unknown) discrepancies between the activity of neuronal networks and the accompanying neuroglial, neurometabolic, and neurovascular interactions that collectively determine the fMRI response.

Connecting the dots from microscopic cellular activity to the dynamics of large neuronal ensembles, and to how these are reflected in noninvasive “observables,” is an ambitious and challenging task. As a foundation, we need a suite of micro- and nanoscopic technologies that, collectively, will allow precise and quantitative probing of large numbers of the relevant physiological parameters in appropriate “preclinical” animal models. Next, we must combine multimodal measurements and computational modeling to understand how specific patterns of microscopic brain activity (and their pathological departures) translate into noninvasive observables. In parallel, we need to explore novel (currently beyond-the-horizon) noninvasive contrasts that are more directly related to specific physiological quantities for human applications.

Skeptics may argue that this spectrum is too broad and that we instead need a focused program that would make a significant impact in a limited area. In our view, the focus should be not on a particular measurement (e.g., “we don’t have a way to record from every neuron in large networks; let’s fill this gap”) but on a technological roadmap for addressing the broader goal of the BRAIN Initiative: “to produce insights into brain disorders that will lead to better diagnosis, prevention, and treatment” (Insel et al., 2013).

Short-term objectives and deliverables

Now let’s try to envision what could be achieved within a decade of the “connecting the dots” effort described above, including both technological objectives and neuroscience questions that would become accessible with the advancement of the technology.

To illustrate this vision, consider an increasingly likely future in which we will be able to selectively manipulate one population of cortical neurons at a time: eliciting or suppressing firing and controlling the excitability of dendrites sequentially within a given neural system. This type of stimulation could be employed to obtain the corresponding space-resolved extracellular potentials recorded with high-density nano-arrays. These data could then be used to computationally deconstruct a natural (e.g., sensory stimulus-induced) extracellular potential as a combination of population-specific “primitives,” yielding information about cell type-specific activity. The resulting computational models will need to be validated using cellular- and subcellular-resolution measurements from a large number of neurons within the active cortical region throughout the cortical depth. Ideally, this would be done using genetically-encoded reporters (e.g., multiphoton imaging of optical voltage or calcium reporters with the color of the emitted light coding for the type of neuron) to attain statistically sound – but not necessarily exhaustive – sampling of activity across cell types. The number of individually considered neuronal types will be dictated by the model itself: it will need to be sufficient to solve the cell type-specific decomposition of extracellular potentials. Note that the low-frequency extracellular potential recorded at the cortical surface should correspond to the noninvasive EEG in human studies.
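A minimal sketch of what such a decomposition might look like computationally is given below; the waveforms, the three hypothetical populations, and the least-squares formulation are our own illustrative assumptions and are not prescribed by the text.

```python
# Hypothetical sketch of the "primitives" idea: if selective stimulation of each
# neuronal population yields a characteristic extracellular waveform (a
# "primitive"), a naturally evoked potential can be approximated as a weighted
# sum of those primitives by least squares. Synthetic placeholder data only.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 0.2, 400)                        # 200 ms at 2 kHz

# Population-specific primitives measured during selective stimulation
# (e.g., one column per optogenetically targeted cell type).
primitives = np.stack([
    np.exp(-((t - 0.03) / 0.01) ** 2),              # "population A"
    np.exp(-((t - 0.06) / 0.02) ** 2),              # "population B"
    np.sin(2 * np.pi * 10 * t) * np.exp(-t / 0.1),  # "population C"
], axis=1)                                           # shape: (time, populations)

# A "naturally evoked" potential: an unknown mixture of the primitives plus noise.
true_weights = np.array([1.5, -0.7, 0.4])
evoked = primitives @ true_weights + 0.05 * rng.standard_normal(t.size)

# Decompose the evoked potential into population-specific contributions.
est_weights, *_ = np.linalg.lstsq(primitives, evoked, rcond=None)
print("estimated population weights:", np.round(est_weights, 2))
```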

Such a future might include some or all of the following: genetically-encoded or synthetic probes that report the key physiological variables of the neuroglial, neurovascular, and neurometabolic processes accompanying neuronal activity, such as voltage, release of signaling molecules, receptor activation, second messenger signaling, increases in extracellular potassium and ATP/adenosine, vasodilation and vasoconstriction, glucose uptake, transcellular lactate fluxes, and intracellular oxygen dynamics including mitochondrial function. Combined with the ability to activate one population of cortical neurons at a time, these tools will open the door to addressing the population-specific vascular, metabolic, and hemodynamic “signatures.” They will also allow investigation of energetic compartmentalization and energy budgets. These efforts will not be limited to experimental work and will require extension of the neuronal model. Embedded in a realistic vascular architecture, this model will be used to predict the macroscopic vascular and hemodynamic response. A further step will be to incorporate nuclear spins diffusing in vessels and tissue and their responses to external magnetic fields. This will enable prediction of the decay of magnetization due to de-phasing of the spins induced by changes in blood oxygenation: the BOLD effect.
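As a rough, textbook-style sketch of this last step (not the authors’ model), the fractional BOLD signal change can be approximated from an assumed activation-induced change in the effective transverse relaxation rate R2*; the echo time and relaxation values below are illustrative assumptions.

```python
# Minimal BOLD sketch: transverse magnetization is assumed to decay as
# S = S0 * exp(-TE * R2star), so a small activation-induced decrease in R2star
# (more oxyhemoglobin, less dephasing) raises the signal. Illustrative values only.
import numpy as np

TE = 0.030            # echo time, s (typical for gradient-echo fMRI)
R2star_rest = 30.0    # 1/s, assumed resting effective transverse relaxation rate
dR2star = -0.5        # 1/s, assumed activation-induced change

S_rest = np.exp(-TE * R2star_rest)
S_act = np.exp(-TE * (R2star_rest + dR2star))

bold_change = (S_act - S_rest) / S_rest
print(f"predicted BOLD signal change: {100 * bold_change:.2f}%")
print(f"small-change approximation -TE*dR2*: {100 * (-TE * dR2star):.2f}%")
```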

We might also have the tools to manipulate synaptic connectivity and neurotransmitter systems such as dopamine or serotonin (e.g., by inactivating postsynaptic receptors). We will advance imaging technology to allow simultaneous measurements from multiple locations in awake, behaving animals and combine fine- and coarse-grained tools (again, both experimentally and in a computational framework) to fill in the gaps. Then we can begin to address questions of distributed computation (i.e., those arising from the interplay of multiple cortical areas) and the importance of the “modulatory” neurotransmission systems. We will have the tools to probe the factors that empower conscious behaviors such as successful retrieval of a memory trace or making a correct decision. Inclusion of non-neuronal measures (e.g., metabolic activity, chemical excitability, and structural plasticity of glia) will put questions of plasticity and development within reach.

A natural deliverable from these efforts will be a set of tools for preclinical studies in model organisms. This is because interpretation of noninvasive functional imaging and understanding of the mechanisms of brain disease require investigation of the same types of neuroglial, neurovascular, and neurometabolic interactions. Collectively, these new tools will enable translation to intact brains of the detailed and elegant mechanistic approaches that today are possible only in cell cultures and isolated neuronal tissue. In neurodegeneration, these tools will allow us to ask a range of critical questions, such as: does the breakdown in energy metabolism precede dysregulation of neuronal electrical activity? Does the initial pathology, manifested as altered ionic homeostasis or reduced production of ATP, originate in neurons or astrocytes? Are certain types of neurons more vulnerable than others? In mental disease, such tools will make it possible to modulate the excitability of dendritic trees and to track the resulting alterations in neurotransmitter release and synaptic connectivity. One could also ask whether the different models exhibiting abnormal sensorimotor gating converge upon a common endpoint of functional network organization. In the study of headaches, these tools could clarify the chain of events underlying the spontaneous initiation and propagation of cortical spreading depression. For instance, we could ask whether the stress accompanying cortical spreading depression can be explained by a metabolic failure in which oxygen demand exceeds supply, resulting in a shortage of the ATP required for effective neuronal repolarization.

To summarize, we envision a path that will accelerate progress on the “hard” neuroscience questions and produce significant deliverables along the way, provided that concrete short- and midterm objectives are spelled out. On the basic science level, a decade of intensified effort will bring us closer to understanding the code that operates on complex, multi-compartment, multi-parameter, multi-level systems to ensure robust and appropriate behavior. On the translational side, within a decade we will make considerable progress toward holistic evaluation of neurological damage in model organisms, open new avenues to guide the development of treatments, and build a strong foundation for human noninvasive imaging.

Conclusions

Connecting the dots from microscopic cellular activity to the dynamics of large neuronal ensembles, and to how these are reflected in noninvasive observables, is an ambitious and challenging task. However, the impact of such an effort in decades and even generations to come should not be underestimated. We can achieve this only through a large-scale, coordinated program with coherent technological, experimental, and theoretical efforts targeting the development of molecular probes and microscopic imaging with which to understand the meso- and macroscopic levels of brain organization. Such a program would naturally transcend the conventional boundaries of scientific disciplines, bringing together experts from multiple fields beyond the traditional neurosciences, including physics, mathematics, statistics, engineering, chemistry, nanotechnology, and computer science. Moving forward in the spirit of collaboration, we will accelerate basic and translational scientific discoveries and ultimately arrive at an understanding of how our brain constrains the way we experience the world around us and controls our behavior.

Acknowledgements

We thank Krastan Blagoev for helpful discussions.


References

  1. Insel TR, Landis SC, Collins FS. Research priorities. The NIH BRAIN Initiative. Science. 2013;340:687–688. doi: 10.1126/science.1239276.
