The Yale Journal of Biology and Medicine. 2019 Mar 25;92(1):145–149.

Focusing in on the Mechanisms Underlying Attention

An Interview with Anirvan Nandy, PhD

Huaqi Li 1,*
PMCID: PMC6430167  PMID: 30923482

You have a background in electrical engineering and computational modeling. Can you speak a bit about how you first became interested in neuroscience research and how you’ve applied your combined expertise in these fields to your current work?

My background was in signal processing and communications, a field that provides tools to analyze the flow of information between communicating devices. My wife (Dr. Monika Jadi, Assistant Professor of Psychiatry and Neuroscience, Yale School of Medicine) and I began our careers at Motorola, at the time of the new cellular technology boom. As it happens, once you’ve developed a system, after a while it goes into maintenance mode, where you’re satisfying the needs of the customers. That’s when we became disillusioned with working in industry, and my wife and I started thinking about going back to graduate school. Initially we thought about doing a PhD in electrical engineering, but then we started reading popular science books about the brain, and we found a lot of parallels between our background in electrical engineering and how one can study the brain as a system of communicating devices. We saw that neurons in the brain are really communicating devices that talk to each other, and that ideas developed in signal processing and communications can translate to neuroscience and be applied to understanding how our brains function. There are different levels at which one can study the brain, for example, the molecular and cellular levels. We were interested in studying the brain at the electrical level: how neurons talk to each other. That language is similar to how digital devices talk to each other, sending bits of information as either ones or zeros. Neurons talk through electrical pulses known as spikes. We can think of a spike as a “one” and the absence of a spike as a “zero.” That’s what got us interested in neuroscience, and we decided to quit our jobs and go to graduate school.

Anirvan Nandy, PhD

Assistant Professor in Neuroscience

Figure 1.


A snapshot of the whiteboard in Dr. Nandy’s office.

You can also study neural activity at the whole-brain level. Using fMRI, you can see how the whole brain acts as you are sensing information and making decisions. Thus, there are multiple levels of investigation, from the molecular level to the whole-brain level. My interests lie at an intermediate level between these two extremes: at the level of neural systems and circuits. We try to understand how groups of neurons communicate with each other, how they encode sensory information, and how this encoded information flows through the different parts of the cortex to mediate behavior. In order to do that, we train animals to perform visual tasks. When visual information comes in, it first falls on the retina at the back of our eyes. Visual signals are sent through the optic nerve to the back of our brains, where the primary visual areas are located. After that, visual information flows in parallel, with one path running along the side of your brain, through the temporal lobe, and another along the top, through the parietal lobe. These two parallel paths carry two different types of information. The temporal path carries information about what objects are and helps you identify them. For example, if you look at this object here, the temporal pathway helps you identify it as a bottle. But if I were to ask you to reach out for the bottle, then the other pathway helps you interact with it. The former pathway is known as the “what” pathway and the latter as the “where” pathway. These two pathways merge in the frontal areas of the brain, leading to a unified percept that helps us both identify and interact with objects. Neuroscientists learned about these two pathways by examining lesions. For example, if you had a lesion in the parietal pathway, you would be able to identify the object, but if I asked you to reach for it, you wouldn’t know where it is. Even more interesting, if you had a lesion in the temporal pathway, you would be blind for all practical purposes. You wouldn’t be able to see the bottle, but if I told you to trust me and reach out for it, you would be able to.

I work in a brain area in the temporal pathway called visual area V4. Neurons in this brain area are sensitive to elongated shapes such as curved contours. If you look at objects, they are composed of curves and patterns of curves. This area of the brain is also strongly modulated by attention. When you’re paying attention to something versus when you’re not paying attention, there are differences in brain activity. So, this one brain area allows us to study two different aspects of brain function. One is how visual information is encoded and second is how this information is encoded differently depending on the attentional state.

For example, most of the time when you’re driving, you are looking straight ahead but paying attention to whether a car is coming in from the sides. In general, primates and carnivores have the ability to dissociate their point of gaze from what they are paying attention to. So, we can voluntarily separate these two streams and look at one thing while paying attention to something else. This ability to dissociate gaze and attention is known as covert attention. Covert attention is ecologically relevant: in hierarchical societies, such as those of primates, the alpha may be sitting at the top of a tree with subordinates on the ground. These subordinates never look directly at the alpha, but they must still know what the alpha is up to, so they look at something else while covertly attending to what the alpha is doing. In contrast, when we directly look at an object and also pay attention to it, we are deploying what is known as overt attention. In our lab, we train animals to do covert attention tasks. We train them to pay attention to a particular stream of information over another, distracting stream, and we then record from neurons in area V4, for example. While the same visual information is being encoded by these neurons, we study how it is encoded differently when the animal is paying attention to that stream of information versus paying attention to a different stream.

In “Laminar organization of attentional modulation in macaque visual area V4,” there’s a description of a circuit for attention-mediated information processing in V4 across cortical layers. Would you elaborate on this a little and go into how this template for attention-mediated information processing might apply to other sensory modalities?

Let’s take a step back and look at what was previously known about the neural mechanisms of attention. In the last several decades of research, three primary mechanisms have been found by which attention boosts the signals you are paying attention to. The first is an increase in the firing rate of neurons. Neurons signal information in terms of spikes, and one idea is that more spikes mean that something interesting is happening. In keeping with this, it was found that neurons send out more spikes when attention is deployed to an object compared to when attention is deployed elsewhere; the neurons are receiving the same visual information about the object, but their firing rates change with attention. Aside from the number of spikes a neuron fires for a given visual stimulus, there is variability in the pattern of spikes; when the stimulus is presented multiple times, the pattern of spikes will be a little different each time. The second finding is that when you pay attention to something, there is a reduction in this variability. In other words, the firing pattern of neurons becomes more predictable. Pairs of neurons also tend to fire up and down together: if two neurons are neighbors and one neuron fires more spikes, the neighboring neuron will also tend to fire more spikes. The firing rates of neurons tend to fluctuate together. The third finding is that the deployment of attention reduces these shared fluctuations. The idea is that when two neurons are fluctuating together, they are not conveying independent information; they are co-dependent. So, this reduction in co-variability is thought to make the neurons more independent.
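The three signatures described above are routinely quantified from repeated presentations of an identical stimulus. As a toy illustration (a simulation sketched for this interview, not the lab's actual analysis code), the snippet below generates spike counts for two neighboring neurons that share a common slow gain signal, then computes the mean firing rate, the Fano factor (variance over mean, a standard variability measure), and the noise correlation between the pair:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spike counts for two neighboring neurons over repeated
# presentations of the same stimulus. A shared "gain" signal makes the
# two neurons fluctuate up and down together (co-variability).
n_trials = 2000
shared_gain = rng.normal(1.0, 0.2, n_trials)               # common fluctuation
counts_a = rng.poisson(10 * np.clip(shared_gain, 0.1, None))
counts_b = rng.poisson(12 * np.clip(shared_gain, 0.1, None))

# 1) Mean firing rate: average spike count per trial.
rate_a = counts_a.mean()

# 2) Trial-to-trial variability, summarized as the Fano factor
#    (variance / mean); 1.0 for an ideal Poisson neuron, higher when a
#    shared gain signal adds extra variability.
fano_a = counts_a.var() / counts_a.mean()

# 3) Co-variability: the "noise correlation" between the two neurons'
#    spike counts across repeats of the identical stimulus.
noise_corr = np.corrcoef(counts_a, counts_b)[0, 1]
```

With the shared gain signal present, the Fano factor rises above the Poisson value of 1 and the noise correlation is positive; reducing the shared gain, as attention is thought to do, would push both measures back down.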

An important fact about the sensory cortical areas, like visual area V4, is that they have a six-layered structure along the depth of the cortex. You can think of the sensory cortex as a stack of six pancakes; the layers are numbered from 1 through 6, with layer 1 being the most superficial and layer 6 the deepest. Layer 4 is the input layer, which receives information from other cortical areas. Information from layer 4 then flows to layers 2 and 3 and down to the deeper layers. Layers 2 and 3, in turn, project to layer 4 of other brain areas, and layer 5/6 neurons project to subcortical structures.

The particular motivation in our study was to look at what the different layers of the cortex were doing in terms of the key processes by which attention improves the quality of sensory information: increasing firing rate, decreasing variability, and decreasing co-variability. All the previous studies were done by sticking electrodes in the cortex without knowing which layer the neural signals were being recorded from. It’s like fishing in the dark: you’re blindfolded and you cast your line, but you don’t know how deep in the pond you caught your fish. What we did in this paper was cast a different kind of line. We recorded from all the cortical layers simultaneously using electrodes with multiple contact points along the entire depth of the cortex. With this technique, we could precisely identify layer boundaries and the locations of the neurons. We wanted to ask whether the effect of attention on neural activity was the same or different across the layers of the cortex. What we found is that there were indeed differences across the layers. For example, we found that the increase in firing rate with attention was true for all layers, but the decrease in co-variability was restricted to certain layers. We now have a better mechanistic picture of how neurons in the different layers of the cortex are modulated by attention.

Could you talk about the implications this work has in the field and the potential clinical impacts?

We need to first understand how the neural circuit works in a “normal” brain before we can delve into questions of clinical impact. Once we have a better understanding of how the different pieces of the machinery work, we can then identify which parts are at fault and more precisely target those parts for clinical interventions. As technology improves, we will gain more and more precise control of neural circuits.

What we’ve been discussing so far are simply measurements of neural activity. Right now, there is a revolutionary technique in neuroscience known as optogenetics, which gives us the ability to causally manipulate neural activity. Using this technique, we can insert light-sensitive molecules, known as opsins, into neurons, allowing us to control neuronal activity by shining light on the neuron directly. This technique was inspired by marine algae, which express these light-sensitive proteins and use them to navigate toward sunlight. Scientists have taken these proteins and found a way to express them in neurons. Once the proteins are expressed, the neurons themselves become light-sensitive. Depending on the type of protein, when you shine the right wavelength of light, you can make a neuron fire or make it go completely silent. Thus, you can causally manipulate the neuron. Let us say that, based on traditional measurements of neural activity, we have come to suspect that neurons in a particular layer of the cortex play a special role in attention. To test this hypothesis, we can express opsins in these neurons. If we silence these neurons while the animal is performing an attention-demanding task and this silencing affects the animal’s performance, then we’ve established a causal relationship between that neural activity and behavior. This will give us an even finer-grained understanding of the neural circuit and a possible path toward clinical interventions.

You recently published the article “Optogenetically induced low-frequency correlations impair perception.” Could you give a brief overview of this and how these impairments affect attention-demanding tasks?

Optogenetics ties into our recent paper. We employed opsins in a non-layer-specific manner to examine the role of co-variability in perception. Co-variability has a rhythm to it: normally, attention reduces the strength of slow fluctuations, those below 10 Hz, making neurons more independent at this slower time scale. We wanted to determine whether these slow fluctuations played a causal role in behavior. So we expressed opsins in V4 neurons and then, by shining laser light, artificially induced low-frequency fluctuations among V4 neurons while animals were performing an attention-demanding task. Our hypothesis was that if these slow fluctuations played a causal role in perception, then our laser-induced manipulation should negatively impact the performance of the animal. And this was indeed what we found. In the subset of trials in which we made these neurons more synchronized at low frequencies (below 10 Hz), the animals had a harder time performing the attention-demanding task. When we induced these fluctuations at a higher frequency, we did not see any impact on performance. Our study thus established the causal role of low-frequency fluctuations in sensory perception.
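To make the frequency specificity concrete, here is a toy simulation (illustrative only; the actual stimulation protocol and analysis in the paper differ): two Poisson-like spike trains share a common sinusoidal drive, and correlating their spike counts in coarse 100 ms bins shows that a slow (2 Hz) shared drive produces strong slow-timescale correlations, while a fast (50 Hz) drive averages out within each bin:

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1000                        # 1 ms time bins
t = np.arange(0, 20, 1 / fs)     # 20 s of simulated activity

def correlated_spikes(drive_freq_hz):
    """Two Poisson-like spike trains sharing a sinusoidal drive,
    mimicking externally induced co-fluctuations at a chosen frequency."""
    drive = 1 + 0.8 * np.sin(2 * np.pi * drive_freq_hz * t)
    p = (20 / fs) * drive        # 20 spikes/s baseline, modulated by the drive
    a = (rng.random(t.size) < p).astype(float)
    b = (rng.random(t.size) < p).astype(float)
    return a, b

def slow_correlation(a, b, win=100):
    """Correlation of spike counts in coarse (100 ms) bins, which is
    sensitive only to shared fluctuations slower than ~1/bin width."""
    ca = a[: a.size // win * win].reshape(-1, win).sum(axis=1)
    cb = b[: b.size // win * win].reshape(-1, win).sum(axis=1)
    return np.corrcoef(ca, cb)[0, 1]

r_slow = slow_correlation(*correlated_spikes(2))    # slow shared drive
r_fast = slow_correlation(*correlated_spikes(50))   # fast shared drive
```

In this toy model, `r_slow` should come out well above `r_fast`, mirroring the paper's logic that only low-frequency co-fluctuations degrade the slow-timescale independence of the neural code.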

Was the attention demanding task covert or overt?

It was a covert task. The animal sits in front of a computer monitor and is trained to look at the center of the monitor. While the animal maintains its gaze, two streams of information are presented at locations ‘A’ and ‘B’ on the monitor, both away from the center of gaze. In our particular experiment, the stimuli were oriented lines which flashed on and off in the two locations simultaneously. In one of the locations, the line would undergo a small change, and the animal has to detect the change and make an eye movement to the location of the change in order to earn a juice reward. This is covert in the sense that the animal directs its gaze at the center of the screen while monitoring the two other spatial locations to detect a change. In the experiment, we cued the animals to pay attention to one stream of information or the other. If the animal is cued to pay attention to location ‘A’, then it has to ignore the stream of information in location ‘B’ and detect a subtle change in location ‘A’. This makes the task attention-demanding.

A general principle of our sensory cortex is that there is an organized map of the visual world, and proximal neurons encode information about nearby portions of the visual world. In any given experiment, the neurons we were recording from encoded information about a small portion of the visual world. Let us imagine that this location coincides with location ‘A’ on the computer screen. If the animal is cued to attend to location ‘A’, then we are measuring neural activity when attention is directed to the location in visual space that the neurons care about (the “attend-in” condition). When the animal is cued to attend to location ‘B’, then we are measuring neural activity when attention is directed away from the location the neurons care about (the “attend-away” condition). Crucially, the visual information that the neurons receive is the same in both conditions. We are just manipulating the location of attention and studying how this modulates neural activity.

Are there circumstances in which you would be interested in an overt task?

The problem with an overt task is that humans and other primates have a very high concentration of cone photoreceptors in the part of the retina corresponding to the center of gaze, which allows us to examine the visual world in great detail. This part of the retina is called the fovea. If you were to hold your thumbs out at arm’s length, the extent of your two thumbs covers the only part of the visual world that you can examine in true detail. Neurons that encode information from near the center of gaze do so from very tiny areas of visual space, giving us high visual acuity near the center of gaze. Recording information from these neurons becomes difficult because it is hard to reliably estimate the visual information that these neurons are receiving. This is in part because our eyes are constantly moving; even when we fix our gaze on an object, our eyes make miniature movements. So these neurons near the center of gaze, which care about very small portions of the visual world, are constantly receiving ever-changing visual stimulation due to these miniature eye movements. As you move further away from the center of gaze, neurons receive information from larger patches of the visual world, which are not much affected by miniature eye movements.

You said you first developed an interest in neuroscience and research through reading. What were some books that you would recommend?

I would recommend a classic neuroscience book called “The Computational Brain” by Patricia Churchland and Terry Sejnowski. It presents the idea of brains as computers and communicating devices. Very interestingly, my wife did her postdoctoral fellowship with Terry at the Salk Institute. We read the book while we were working at Motorola and after our PhDs, we both ended up doing fellowships at the Salk Institute. In terms of popular science books, one book I have found fascinating was “The Blank Slate” by Steven Pinker. He talks about the idea of how much of the brain comes pre-wired versus how much we learn from the environment.

What will you be doing in terms of future research?

We are studying not just the mechanisms of attention, but also how neurons in different layers of the cortex participate in encoding visual information. Once we start understanding these circuits, we are going to target neurons in different layers using more advanced optogenetic strategies. The recent paper that we published was not layer-specific, so in the future we would like to target opsins to certain layers to see whether and how manipulating these subsets of neurons affects behavior. This will give us a clearer picture of the causal and mechanistic role of these neurons in supporting sensory processing and attentive behavior.


Articles from The Yale Journal of Biology and Medicine are provided here courtesy of Yale Journal of Biology and Medicine
