Summary
The hippocampal formation is linked to spatial navigation, but there is little corroboration from freely-moving primates with concurrent monitoring of head and gaze positions. We recorded neural activity across hippocampal regions in rhesus macaques during free foraging in an open environment while tracking head and eye movements. Theta activity was intermittently present at movement onset and modulated by saccades. Many neurons were phase-locked to theta, with few showing phase precession. Most neurons encoded a mixture of spatial variables beyond place and grid tuning. Spatial representations were dominated by facing location and allocentric direction, mostly in head, rather than gaze, coordinates. Importantly, eye movements strongly modulated neural activity in all regions. These findings reveal that the macaque hippocampal formation represents three-dimensional space using a multiplexed code, with head orientation and eye movement properties being dominant during free exploration.
eTOC Blurb
Leveraging wireless recordings and head and eye tracking in freely behaving macaques, Mao et al. find that hippocampal activity is most strikingly tuned to where the head points, 3D head orientation, and eye movements. These results demonstrate the peculiarities of spatial representations in this primate species under ethological conditions.
Introduction
The rodent hippocampal formation (HF) is implicated in spatial navigation. Hippocampal local field potential (LFP) shows prominent theta oscillations (6-10 Hz) during movement (Vanderwolf, 1969). Theta power and frequency are correlated with locomotion speed (McFarland et al., 1975; Sławińska and Kasicki, 1998), suggestive of a role in spatial navigation. The precise relationship between theta phase and neuronal firing reflects a temporal code that may support communication across regions and organize ensemble activity into sequences (Mizuseki et al., 2009; O’Keefe and Recce, 1993; Skaggs et al., 1996). In contrast to rodents, primate studies have consistently found either a lack of continuous theta oscillations or only occasional theta bouts (Ekstrom et al., 2005; Talakoub et al., 2019; Watrous et al., 2013), but spike-LFP phase coding appears to be conserved to some extent, at least in humans (Jacobs et al., 2007; Qasim et al., 2021).
Multiple cell types encoding position, head direction, speed and other spatial variables have been identified in the rodent HF (Hafting et al., 2005; McNaughton et al., 1983; O’Keefe and Dostrovsky, 1971; Taube et al., 1990), some with 3D properties (Angelaki et al., 2020; Grieves et al., 2020) that have also been found in the bat HF (Finkelstein et al., 2015; Yartsev and Ulanovsky, 2013). Recent studies have employed multimodal models to reveal multiplexed representations in the rodent HF (Hardcastle et al., 2017; Laurens et al., 2019; Ledergerber et al., 2021). A fundamental advantage of such models is that they can correctly identify multimodal responses even for variables that are correlated and interdependent, while remaining remarkably immune to pitfalls such as overfitting; traditional methods, in contrast, often fail to quantify neurons with mixed selectivity (Laurens et al., 2019).
The primate HF has scarcely been explored under comparable freely-moving paradigms. Human studies have mostly been limited to virtual navigation in patients and have focused on a limited number of spatial variables (Doeller et al., 2010; Ekstrom et al., 2003; Jacobs et al., 2013). A recent study also found LFP correlates of proximity to environmental boundaries in mobile humans (Stangl et al., 2021). Non-human primate studies have almost exclusively used head-fixed monkeys, either during cart navigation (Matsumura et al., 1999; O’Mara et al., 1994) or in a virtual environment (Furuya et al., 2014; Gulli et al., 2020; Wirth et al., 2017). In these studies, hippocampal neurons showed correlates of self-location, attended location, and head direction. Three limited investigations in freely-moving monkeys have reported location-specific hippocampal activity reminiscent of rodent place cells, albeit more dispersed and less prevalent (Courellis et al., 2019; Hazama and Tamura, 2019; Ludvig et al., 2004). Yet, these studies neither explored three-dimensional properties nor used contemporary multimodal models to quantify multi-variable spatial representations, leaving a large part of coding space unexplored.
The primate HF also shows neural correlates with visual space (Nau et al., 2018; Rolls, 2021). A rather provocative observation is the existence of putative gaze-centered spatial representations in the HF of head-fixed macaques (Killian et al., 2012; Meister and Buffalo, 2018; Wirth et al., 2017). Of particular relevance is the report that some hippocampal neurons showed correlates with the location that the animal was looking at (‘spatial view’ cells) but not where the animal was located (Georges-François et al., 1999; Rolls et al., 1997). This property, however, has yet to be interrogated by monitoring both 3D eye and head movements in truly freely-moving primates where flexible head movement is also a critical component of natural behavior.
Here, we combine precise head tracking, wireless eye tracking, and telemetric recordings across three HF regions – hippocampus, entorhinal cortex, and subicular complex – in freely-foraging macaques. We provide a full characterization of how the macaque HF encodes space under moderately ethological conditions.
Results
To investigate how the primate HF represents space during ambulatory navigation, we trained three macaques to freely forage in an open cylindrical arena endowed with salient cues (Fig. 1A). Using chronically implanted tetrodes or single electrodes (Fig. 1B,C, and S1), we recorded LFP and single neuron activity across three HF regions in both hemispheres: hippocampus (HPC), entorhinal cortex (EC), and subicular complex (SUB) (HPC, EC and SUB in monkeys K and L, only EC in monkey B). To reconstruct electrode locations, we co-registered preoperative magnetic resonance imaging (MRI) images and postoperative computed tomography (CT) images (Fig. 1D-G and S1), corroborated by histology in one monkey (Fig. S1 bottom right). To accurately track the monkeys’ head movement, we used a marker-based approach, combined with wireless eye tracking in one animal (Fig. 1H,I). Many spatial variables were extracted from the markers’ trajectories in 3D space (Fig. 1J and S2). Overall, the monkeys’ behavior covered a broad space, although salient landmarks had a strong influence (Fig. S3). Wireless eye tracking revealed that eye-in-head movements were predominantly within ±30° horizontally and ±20° vertically (Fig. S3A bottom right).
Fig. 1. Freely-moving monkey setup and electrode localization.
(A) The cylindrical arena.
(B) Monkey K’s drive and skull models.
(C) Segmented models of the brain, hippocampus (blue) and entorhinal cortex (green).
(D) Segmented models of the hippocampus, entorhinal cortex, and blood vessels. Electrodes are visible as white tracks in the co-registered CT images.
(E) CT, MRI, and co-registered images for sagittal (left) and axial (right) views. White tracks and dots are electrodes.
(F) Models of the hippocampus and entorhinal cortex color-coded by dorsal-ventral position. Four axial planes are highlighted.
(G) CT and MRI images corresponding to the 4 axial planes shown in (F). Hippocampus is circled in blue; entorhinal cortex is circled in green. Electrodes are shown as white dots. Deeper electrodes are also visible in more dorsal planes.
(H) Four markers were placed on the head cap. Monkey K was trained to wear a wireless eye tracker.
(I) Neural logger recordings and wireless transmission of eye tracking images.
(J) A short segment of the markers’ trajectories in 3D.
Speed-dependent low theta activity is linked to movement onset
We first examined the relationship between hippocampal LFP and behavior. The HPC showed stronger absolute power than the EC and SUB (Fig. 2A) (power comparison for 1-10 Hz and 10-50 Hz, both p < 0.001, one-way ANOVA). After removing aperiodic activity, the LFP showed two prominent oscillatory components: one peaking in the low theta band (1-4 Hz) and another in the beta band (12-30 Hz) (Fig. 2A) (we refer to the 1-4 Hz band as ‘low theta’ for consistency with a series of human studies). The HPC showed the strongest oscillatory power in the low theta band but the weakest in the beta band (Fig. 2A) (both p < 0.001, one-way ANOVA). In sharp contrast to rodents, hippocampal LFP lacked a sustained oscillatory pattern in the theta band during locomotion (Fig. 2B).
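For illustration, the separation of the oscillatory from the aperiodic component can be sketched in MATLAB as follows. The exact aperiodic parameterization is not spelled out above, so the simple log-log linear fit (and the pwelch windowing) below should be read as an illustrative assumption rather than the actual implementation.

% Minimal sketch: isolate the oscillatory LFP component by subtracting a
% 1/f-like aperiodic fit (straight line in log-log coordinates).
% Assumes lfp is a vector sampled at fs = 1000 Hz; window lengths are illustrative.
fs = 1000;
[pxx, f] = pwelch(lfp, 4*fs, 2*fs, [], fs);   % Welch power spectral density
keep = f >= 1 & f <= 50;                      % restrict to 1-50 Hz
f = f(keep);  pxx = pxx(keep);
p = polyfit(log10(f), log10(pxx), 1);         % aperiodic fit: log power vs. log frequency
aperiodic = 10.^polyval(p, log10(f));         % aperiodic component in linear units
oscillatory = pxx - aperiodic;                % residual oscillatory component (cf. Fig. 2A, right)
plot(f, oscillatory); xlabel('Frequency (Hz)'); ylabel('Oscillatory power');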
Fig. 2. Speed modulation of hippocampal local field potential.
(A) Left, absolute power spectrum; middle, aperiodic fit to the left; right, power spectrum of the oscillatory component (absolute - aperiodic). Shaded areas: SEM over sessions.
(B) Scalogram of a short segment of hippocampal LFP. Speed trace is shown in white.
(C) Mean partial correlation between speed and LFP power in different bands. Error bars: SEM over sessions.
(D) Examples of theta and beta activity.
(E) Colormaps of peak theta (top) and beta (bottom) power-triggered, normalized speed, shown for each session separately and as mean±SEM across sessions.
(F) Left, colormap of LFP power plotted in frequency vs. speed. Middle, relative power (normalized such that the power ranges from 0 to 1 in each speed bin) in the theta and beta bands. White dots indicate the peak frequency in each speed bin.
(G) Theta and beta frequencies as a function of acceleration shown as mean±SEM across sessions.
To assess the correlation between locomotion speed and LFP power in different bands, we employed a wavelet-based approach. Partial correlation analysis revealed that theta (4-8 Hz), low gamma (30-60 Hz) and high gamma (60-120 Hz) power showed an overall positive correlation with speed, whereas alpha (8-12 Hz) and beta (12-30 Hz) activity showed a negative correlation across HF regions (Fig. 2C).
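The band-power extraction and partial-correlation step can be sketched as follows. The control set (power in the remaining bands), the variable names, and the alignment of the speed trace to the LFP sampling rate are assumptions for illustration, not necessarily the exact procedure used.

% Minimal sketch: partial correlation between locomotion speed and band-limited
% LFP power. Assumes lfp (1 kHz) and speed (interpolated to the same time base)
% are aligned vectors of equal length.
fs = 1000;
[wt, f] = cwt(lfp, fs);                        % continuous wavelet transform
pow = abs(wt).^2;                              % time-frequency power
bands = [4 8; 8 12; 12 30; 30 60; 60 120];     % theta, alpha, beta, low/high gamma (Hz)
bp = zeros(size(pow, 2), size(bands, 1));
for b = 1:size(bands, 1)
    idx = f >= bands(b, 1) & f < bands(b, 2);
    bp(:, b) = mean(pow(idx, :), 1)';          % mean power per band over time
end
% Partial correlation between speed and theta power, controlling for the power
% in the other bands (the choice of control variables is an assumption here).
rho = partialcorr(speed(:), bp(:, 1), bp(:, 2:end));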
Intermittent theta activity (Fig. 2D) has been reported previously, but it is unclear what it correlates with during free behavior. By examining theta-triggered speed profiles, we found that theta power was linked to movement onset, in sharp contrast to the opposite relationship between speed and beta power (Fig. 2E). Both theta and beta frequencies increased with speed at low speeds, most obviously in the EC (Fig. 2F). We also found that theta frequency increased with positive but barely with negative acceleration, whereas beta frequency increased with both (Fig. 2G).
Hippocampal neurons are tuned to multiple spatial variables
Both recording methods, single electrodes and tetrodes, allowed good single unit isolation (Fig. S4A,B). We focused on the 599 well-isolated single units: HPC, 273 neurons; EC, 216 neurons; SUB, 110 neurons (monkey K: HPC 186, EC 49, SUB 27, n = 37 sessions; monkey L: HPC 87, EC 137, SUB 83, n = 86 sessions; monkey B: EC 30, n = 13 sessions). Spike amplitudes ranged from 64 to 124 μV (20th-80th percentiles) (Fig. S4C). Recordings were stable throughout the sessions (spike feature correlation between the 1st and 2nd half of sessions: Pearson’s r = 0.997, p < 0.001) (Fig. S4D). We used independent measures to verify spike sorting quality (Fig. S4E,F).
Consistent with previous studies (Barnes et al., 1990; Skaggs et al., 2007), the activity of HF neurons was in general sparse and the firing rates showed log-normal distributions with SUB neurons showing the least sparsity (p < 0.001, Kruskal-Wallis test) (Fig. 3A). Individual HF neurons exhibited tuning to diverse spatial variables, including horizontal position, head height, linear speed, azimuth head direction, head tilt, head facing location (where the head points, a 3D variable), egocentric boundary (relative position and direction to the arena boundary), and head angular velocity (Fig. 3B,C, S2 and S4G).
Fig. 3. Diverse spatial tuning across hippocampal regions.
(A) Cumulative distribution of the firing rates for neurons recorded in each HF region.
(B) Example raw tuning curves for head height, translational speed, azimuth head direction, and LFP phase (1-10 Hz).
(C) Example raw data (top) and raw tuning curves (bottom) for horizontal position (place fields), facing location (where the head points on the arena surface), egocentric boundary, head tilt, and angular velocity. Red dots: spikes; gray: behavioral variables. Colormaps (bottom) show the raw tuning curves, with peak firing rates (yellow) indicated. Tuning curves are all from different neurons.
(D) Fraction of neurons modulated by position, grid, azimuth head direction, and speed for each region, computed using traditional analyses. Chance level is at 0.01. Bottom, Venn diagram showing the number of neurons encoding each variable and combination of variables.
We first used traditional analyses to quantify neuronal tuning for the four most extensively studied variables in rodents: position, grid, azimuth head direction and linear speed. Based on such analyses, which compared the data against permuted distributions (Fig. S5; STAR Methods), hippocampal neurons showed prominent modulation by position (HPC, 26%; EC, 10%; SUB, 5%), speed (HPC, 22%; EC, 35%; SUB, 40%) and head direction (HPC, 41%; EC, 46%; SUB, 74%) (Fig. 3D). A large fraction of neurons (153/599, 26%) showed conjunctive tuning (Fig. 3D, bottom). The proportion of cells with significant grid modulation was rather low (at the 99% criterion: HPC, 1% (3/273); EC, 2% (4/216); SUB, 2% (2/110); see Fig. S5C,E for examples). Relaxing the significance criterion to 95% barely increased these numbers (HPC, 6%; EC, 6%; SUB, 3%). Indeed, when inspecting the raw raster plots, no cells showed rodent-like grid fields (also see Discussion).
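A typical shift-shuffle test of this kind, applied to position tuning, can be sketched as follows. The Skaggs spatial-information statistic and the shift range are common-practice assumptions rather than the exact criteria used (see STAR Methods and Fig. S5).

% Minimal sketch: permutation (circular-shift) test for place tuning.
% Assumes spkCount and posBin are vectors with one entry per 20 ms time bin,
% where posBin contains integer position-bin indices in 1..nBins.
nShuf = 1000;  T = numel(spkCount);
info = spatialInfo(spkCount, posBin, nBins);           % observed spatial information
infoShuf = zeros(nShuf, 1);
for s = 1:nShuf
    shift = randi([round(0.05*T), round(0.95*T)]);     % circularly shift spikes vs. behavior
    infoShuf(s) = spatialInfo(circshift(spkCount, shift), posBin, nBins);
end
isSig = info > prctile(infoShuf, 99);                  % e.g., 99th-percentile criterion

function I = spatialInfo(spk, posBin, nBins)
% Skaggs spatial information (bits/spike) from binned spike counts and occupancy.
occ  = accumarray(posBin(:), 1, [nBins 1]);            % occupancy (number of time bins)
rate = accumarray(posBin(:), spk(:), [nBins 1]) ./ max(occ, 1);
p    = occ / sum(occ);  R = sum(p .* rate);            % overall mean rate
v    = p .* rate .* log2(rate / R);
I    = sum(v(rate > 0 & p > 0)) / R;
end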
Mixed selectivity across hippocampal regions is dominated by head facing location
To quantify the simultaneous encoding of many spatial variables by HF neurons in an unbiased way, we used a model-based framework (Fig. 4A; STAR Methods) (Laurens et al., 2019). Each neuron could encode a single or combination of variables – or not be tuned to any spatial variable at all.
Fig. 4. Mixed selectivity of diverse spatial variables.
(A) A model-based statistical framework was used to quantify the spatial coding.
(B) An example cell showing tuning to facing location and position.
(C) Log-likelihood (LLH; goodness of fit) increase as a function of model variables for the example cell shown in (B).
(D) Pie charts showing, for each region, the fraction of neurons for which each variable was the best-encoded variable. The ‘None’ cluster corresponds to neurons not encoding any variable.
(E) Breakdown of the fraction of encoding neurons for each variable in each region. A single neuron can encode none, one, or more than one variable.
(F) Fraction of neurons tuned to a single variable (single) or to more than one variable (mixed) in each region.
(G) Circular graph representation of the degree of conjunction between variables. Line thickness and color correspond to how often two variables are co-coded in a single neuron.
(H) Fraction of putative interneurons and principal cells encoding each variable.
(I) Left, fraction of encoding neurons along the hippocampal long axis for variables FL, EB, and HT. Right, fraction of neurons showing single and mixed encoding along the long axis.
The main model included 8 variables: 3D position, which was decomposed into horizontal position (Pos) and head height (HH), angular velocity (AV), linear speed (Spd), egocentric boundary (EB), facing location (FL), as well as 3D orientation, which was decomposed into head tilt (HT) and azimuth head direction (HD) (Fig. 4A and S2; STAR Methods). Overall, this model provided a good fit to the data (Fig. 4B,C and S6), with the SUB exhibiting the strongest encoding (fraction of neurons encoding at least one of these variables: HPC: 45%, EC: 56%, SUB: 72%; Fig. 4D). In general, the HF population exhibited rich coding schemes (Fig. 4 and S6).
We first classified neurons based on the single variable that best explained their firing. Across HF regions, the most frequently coded variable was FL (22%). FL tuning was more prevalent in the SUB (31%) and EC (25%) than HPC (16%) (Fig. 4D and S7A). Other dominant variables represented were linear speed (HPC: 8%, EC: 6%, SUB: 9%), EB (HPC: 5%, EC: 8%, SUB: 5%) and 3D head orientation, comprised of HD and HT. HD was a dominant coded variable mainly in the SUB (13%), whereas HT (vertical orientation tuning) was more broadly coded in the EC and SUB. In contrast to the dominance of orientation variables, only 6% of HPC cells were tuned to either component of 3D position (horizontal Pos: 5%; HH: 1%). The proportion of position-tuned neurons was also low for EC (4%; Pos: 4%; HH: 0%) and SUB (6%; Pos: 5%; HH: 1%).
Considering simultaneous encoding of multiple variables, FL was again the most prominently coded variable across HF regions (HPC: 19%, EC: 33%, SUB: 35%) (Fig. 4E and S7). Notably, this variable was coded in a higher percentage of EC/SUB than HPC cells (Fig. 4E). Mixed selectivity was ubiquitous across all HF regions (single vs. mixed selectivity, HPC: 18% vs 27%, EC: 22% vs 34%, SUB: 25% vs 47%; Fig. 4F), and it was not explained by spike sorting quality (Fig. S4H,I).
Overall, we found that all spatial variables examined were represented in the macaque HF. When mixed selectivity was considered, the percentage of cells tuned to linear speed doubled. A similar percentage of neurons was also tuned to angular velocity (Fig. 4E). Thus, angular velocity and linear speed were often co-coded together with other spatial variables. This conclusion also applies to EB, which was represented in ~20% of the population and most prominently encoded by EC neurons (23%; Fig. 4E). Notably, even after considering mixed selectivity, position coding was relatively weak, particularly in the HPC (7%; EC: 14%; SUB: 12%). Considering position and head height together, 3D position tuning was also relatively infrequent compared to FL (HPC: 13%, EC: 19%, SUB: 17%). As in rodents, HD was most prominently coded in the SUB (38%). When HD and HT were considered together, a large proportion of cells was tuned to 3D head direction (HPC: 16%; EC: 28%; SUB: 48%).
Different HF regions carried different combinations of mixed selectivity (Fig. 4G). In both the HPC and EC, FL and EB were most often co-coded in the same neurons. In the EC, FL was also often coded with position and/or HD and AV. In contrast, speed tuning was often encountered in isolation in the HPC and EC. Finally, the dominant HD tuning was most often coded together with FL, AV and speed in the SUB (Fig. 4G). There was no geometric relationship in the preferred direction among FL, position, and HD tuning (Fig. S7D).
Spatial coding was present in both principal cells and interneurons, classified according to spike waveform (Barthó et al., 2004). We classified all neurons into narrow-spiking (NS, putative interneurons, n = 123, 21%) and broad-spiking (BS, putative principal cells, n = 476, 79%) cells (Fig. S4J,K). Speed and AV tuning were more abundant in interneurons, whereas FL and position tuning were more common for principal cells (Fig. 4H).
Spatial coding in the rodent HF is topographically organized along the dorsal-ventral axis (Hafting et al., 2005; Jung et al., 1994). The monkey hippocampal rostral-caudal axis is analogous to the ventral-dorsal axis in rodents. In one monkey, we recorded from nearly the entire hippocampus in the right hemisphere (Fig. 1G and S1). We observed a gradual increase from rostral to caudal in the fraction of spatially-tuned neurons for the most prominent spatial variables: FL, EB, and HT (FL: p = 0.005; EB: p = 0.018; HT: p = 0.005, chi-square test) (Fig. 4I left). Mixed selectivity also increased from rostral to caudal hippocampus (p = 0.029, chi-square test) (Fig. 4I right).
These results show that a large proportion of HF neurons in freely-moving macaques are tuned to spatial variables, and the majority show mixed selectivity. Furthermore, orientation-related variables dominate the macaque HF.
Heterogeneous spatial coding covers the entire space
Spatial representations in the macaque HF are heterogeneous with distributed preferred firing fields, as shown for FL and HT tuning examples (Fig. 5A,B). Preferred fields for FL, HT, place and EB covered a broad space, although salient arena cues influenced the clustering at certain locations (Fig. 5C-F). Two major clusters were found for speed tuning: monotonically increasing or decreasing (Fig. 5G). Preferred HD uniformly tiled the azimuth directional space (p = 0.9, Rayleigh test), even though occupancy was strongly biased toward the direction of the entrance/exit door (Fig. 5H). Head height tuning also spread across different elevations, whereas most AV-tuned neurons preferred firing at either low or high velocities (Fig. S7E).
Fig. 5. Spatial representations in the hippocampal formation are heterogeneous.
(A) Example model-based tuning curves for 10 neurons coding facing location (FL). Peak firing rates are indicated.
(B) Example model-based tuning curves for 9 neurons coding head tilt (HT).
(C) Preferred firing locations (red dots) for all neurons encoding FL superimposed on the average occupancy colormap across all monkeys. Dot size corresponds to neuron count.
(D) Preferred firing fields for all neurons encoding HT superimposed on the average occupancy colormap across all monkeys. Dot size corresponds to neuron count.
(E) and (F) The same as (C) and (D) for position and EB encoding neurons, respectively.
(G) Left, speed occupancy across monkeys. Shaded area: 1 SD across sessions. Middle, distribution of preferred speed. Right, clustering analysis of the speed tuning curves using PCA.
(H) The same as (G) for azimuth head direction. Gray bar indicates the direction of the exit/entrance door.
Spike-LFP phase-locking and phase precession
A large fraction of HF neurons were phase-locked to the LFP across frequency bands (Fig. 6A,B) (low theta: 56%, theta: 25%, alpha: 25%, beta: 38%, low gamma: 58%, high gamma: 80%). Except for low theta, for which preferred phases were more broadly distributed, neurons preferred firing around the LFP trough (phase: ±180°) in all other frequency bands (Fig. 6B). Phase-locking was more common for spatially-tuned neurons (Fig. 6C), with no difference among cells coding different variables.
Fig. 6. Spike-LFP phase coding.
(A) Spike-LFP phase distribution for 6 frequency bands for 3 example neurons.
(B) Distribution of preferred LFP phase for all significantly phase-locked neurons.
(C) Bar plot of the fraction of phase-locked neurons in each frequency band for spatially-tuned and other neurons. Out of all position tuned neurons, 54%, 21%, 28%, 44%, 61%, and 82% were phase tuned to each band, respectively.
(D) Example filtered LFP trace and spike raster.
(E) Spike phase autocorrelation for the example neuron shown in (D). Dashed lines indicate theta cycle. Red arrows indicate peak autocorrelation in consecutive theta cycles.
(F) Power spectrum of the autocorrelation shown in (E). Gray dashed line shows the 99% percentile of the shuffled distribution. Spike-LFP relative frequency larger than 1 indicates spikes oscillate at a higher frequency than theta.
(G) and (H) The same as (E) and (F) for an example phase-locking neuron without phase precession.
(I) Fraction of neurons showing significant phase precession for low theta and theta.
We also searched for theta phase precession (STAR Methods). Some neurons fired at gradually earlier theta phases in consecutive theta cycles (Fig. 6D), resulting in a spike-phase frequency faster than the corresponding LFP theta frequency (Fig. 6E,F). In contrast, for phase-locked neurons without precession, the spike-phase frequency matched the LFP frequency (Fig. 6G,H). Theta phase precession was present in only a handful of cells (12/599 neurons for low theta, 13/599 neurons for theta) (Fig. 6I), all of which were tuned to various spatial variables. These results suggest that spike-LFP phase coding is widespread in the macaque HF, whereas theta phase precession is much less prevalent.
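The phase-locking analysis can be sketched as follows: band-pass filter the LFP, extract the instantaneous phase with the Hilbert transform, read out the phase at each spike, and test for non-uniformity. The band edges, filter choice, and the approximate Rayleigh p-value formula are illustrative assumptions.

% Minimal sketch: spike-LFP phase locking in one frequency band.
% Assumes lfp sampled at fs = 1000 Hz and spkTimes in seconds on the same clock.
fs = 1000;  band = [4 8];                              % e.g., theta band (Hz)
lfpF   = bandpass(lfp, band, fs);                      % zero-phase band-pass filter
phi    = angle(hilbert(lfpF));                         % instantaneous phase (rad)
spkIdx = max(1, min(numel(phi), round(spkTimes * fs)));% nearest LFP sample per spike
spkPhi = phi(spkIdx);
n = numel(spkPhi);
R = abs(mean(exp(1i * spkPhi)));                       % resultant vector length
z = n * R^2;                                           % Rayleigh statistic
pval = exp(-z) * (1 + (2*z - z^2) / (4*n));            % approximate Rayleigh p-value
prefPhase = angle(mean(exp(1i * spkPhi)));             % preferred firing phase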
Facing location tuning reflects heading- but not viewing-related properties
Our FL variable is similar to ‘spatial view’ (SV) – where the monkey looks in the environment – described in head-fixed macaques (Georges-François et al., 1999; Rolls et al., 1997), but referenced to head rather than gaze. Combining eye-in-head movement and 3D head orientation allowed us to disambiguate between SV and FL (Fig. 7A). Only about 11% (28/244) of HF cells were significantly tuned to SV, as compared to 24% (58/244) for FL, when either variable was included in the model. Critically, including both SV and FL in the model revealed that neural activity was predominantly driven by FL rather than SV (46 vs. 8 tuned neurons; Fig. 7B). We performed a similar comparison for HD vs. gaze direction (GD) and found that tuning again reflected mostly head rather than gaze (15 vs. 5 tuned neurons; Fig. 7B).
Fig. 7. Allocentric facing location and head direction tuning predominantly reflects head but not gaze properties.
(A) Example raw traces showing eye-in-head position (EP), facing location (FL), and spatial view (SV).
(B) Fraction of neurons in each region encoding FL, SV, azimuth head direction (HD) and gaze direction (GD) when fitting all head- and gaze-related variables simultaneously.
(C) Colormap representation of firing as a function of EP for an example neuron.
(D) Colormap representation of the firing as a function of eye-in-head velocity (EV) for an example neuron.
(E) Fraction of neurons in each region (color-coded bars, same as (B)) encoding EP and EV.
(F) Mean firing rate from saccade onset for an example neuron. Error bars, SEM over saccade events.
(G) Colormap of the normalized firing rate for all saccade-tuned neurons (sorted).
(H) Fraction of neurons in each region encoding saccade event.
(I) Confusion matrix showing the number of neurons encoding any combinations of FL and EP.
(J) Average saccade onset-triggered LFP for an example session.
(K) Average saccade onset-triggered LFP for all regions. Shaded area, SEM over sessions.
(L) Theta band power vs. saccade magnitude for an example session. Error bars, SEM over saccade events.
(M) Average, normalized theta band power vs. saccade magnitude for all sessions. Error bars, SEM over sessions.
Data from monkey K.
A small proportion of EC cells have been reported to show grid-like tuning when head-fixed macaques visually explored novel images on a screen (Killian et al., 2012). We performed a similar analysis searching for grid tuning in the SV variable. Based on the same criterion used by Killian et al. (2012), 9% of EC neurons (4/45) showed grid-like tuning for SV as revealed using traditional grid cell analysis (Fig. S5F). But this effect was weak: no neurons showed grid tuning when we increased the significance level from 95% to 99% (Fig. S5F). Neither HPC nor SUB contained any significant grid neurons (at 99%). Similar results were obtained for FL tuning.
We also examined egocentric eye (eye-in-head) tuning. Consistent with studies in head-fixed monkeys (Meister and Buffalo, 2018), we found neurons tuned to eye position (EP) and/or eye velocity (EV) (Fig. 7C,D), with the EC showing the strongest tuning (22%; Fig. 7E). HF neurons (16%) were also modulated by saccadic eye movements, forming saccade-locked sequential activity (Fig. 7F,G). Saccade tuning was stronger in the HPC/SUB than EC (Fig. 7H). We then examined mixed selectivity between the dominant spatial variable (FL) and EP: 15 neurons encoded both FL and EP, but coding of FL only was more common than EP only (Fig. 7I).
In primates, saccades synchronize network activity across regions (Sobotka and Ringo, 1997). We found that hippocampal LFP showed a consistent saccade-locked response: a slight positive deflection, followed by a large negative deflection and another positive deflection (Fig. 7J), resonating with a previous report (Jutras et al., 2013). The first positive and negative deflections were smaller, but the second positive deflection was larger, in the HPC than in the EC and SUB (all p < 0.001, Kruskal-Wallis test) (Fig. 7K). Consistent with another study in head-fixed monkeys (Doucet et al., 2020), theta band power was positively correlated with saccade magnitude (Pearson’s r = 0.96, p < 0.001) (Fig. 7L,M). These results suggest that the relationship between theta activity and saccades in macaques may be analogous to that between theta and locomotion in rodents.
In summary, macaque hippocampal activity shows rich eye movement-related signals. Nevertheless, allocentric FL and HD tuning predominantly reflects head, but not gaze, properties.
Discussion
These results provide a comprehensive picture of novel spatial coding schemes in the macaque HF, obtained for the first time during free exploration with accurate tracking of 3D head motion and eye movements.
Behavioral correlates of theta activity and theta phase coding
A salient finding is the intermittent, lower frequency (~4 Hz) theta activity, restricted to movement onset in macaques, as compared to a continuous rhythm in locomoting rodents (~6-10 Hz) (Buzsáki and Moser, 2013). Nevertheless, as in rodents, this low theta activity showed speed-dependent frequency increases. Human studies have also consistently reported low frequency oscillations around 3-4 Hz (Bush et al., 2017; Ekstrom et al., 2005; Watrous et al., 2011), but other studies have reported higher theta frequencies (Aghajan et al., 2017; Bohbot et al., 2017; Goyal et al., 2020). Moreover, a recent macaque study reported that intermittent theta LFP predicted sleep rather than active states (Talakoub et al., 2019). Similar to findings in rodents (Kropff et al., 2021), theta frequency increased with positive acceleration in macaques. How speed and acceleration differentially influence theta frequency requires future, more controlled experiments.
Interestingly, despite the low power of theta oscillations in macaques, theta phase coding is conserved: a significant fraction of neurons were phase-locked to low theta (56%) and theta (25%). Similar to rodent studies (Csicsvari et al., 2003), the majority of HF neurons were tuned to gamma phase, providing a possible mechanism of population synchrony across HF subregions to facilitate readout in downstream areas. Future studies will need to examine the relationship between gamma phase tuning and behavioral performance in more detail. Nevertheless, theta phase precession existed only in a very small number of HF neurons of freely-behaving macaques. This low incidence of theta phase precession in the macaque HF contrasts with its abundance among rodent place cells (Mizuseki et al., 2009); notably, this property is also less prevalent among bat place cells (23%) (Eliav et al., 2018). It remains to be determined whether theta phase precession becomes more prominent when macaques engage in memory/decision tasks, as well as whether and how distinct theta bands and theta phase precession are tied to specific aspects of the task.
Head facing and 3D head direction tuning
The finding that facing location and head tilt represent the dominant tuning variables stresses the importance of accurately tracking 3D head orientation during free behavior. Whereas facing location has been similarly described previously in macaques in the form of ‘spatial view’ cells (Rolls et al., 1997), the strong head tilt correlates may appear surprising. Nevertheless, tilt tuning has been shown to be prevalent in the bat presubiculum (Finkelstein et al., 2015), monkey anterior thalamus (Laurens et al., 2016) and mouse anterior thalamus and retrosplenial cortex (Angelaki et al., 2020). Thus, orientation relative to gravity may be an important component in spatial coding and behavior across species.
FL was consistently the best predictor across regions and animals (Fig. S7A). However, some differences existed between monkeys. Specifically, monkey L showed stronger HD tuning than monkey K in the SUB. This likely arose from differences in the exact recording locations: monkey K’s SUB cells were mostly recorded from the more lateral prosubiculum and subiculum subregions, whereas monkey L’s were mostly from the more medial presubiculum (Fig. S1), which has been shown to contain many head direction cells across species (Finkelstein et al., 2015; Robertson et al., 1999; Taube et al., 1990). For monkey B’s EC, the best predictor was speed, unlike in the other two monkeys. For all three monkeys, EC neurons were recorded from the posterior half of the EC (analogous to the rodent medial entorhinal cortex), but we could not exclude the possibility that they were sampled from different locations along the medial-lateral axis or from different cortical layers.
It is currently unknown whether rodent HF cells encode facing location. The strong prevalence of facing location/spatial view tuning in monkeys represents the most notable difference in HF coding between the two species. Note that we could not distinguish whether the dominant variable encoded in the macaque HF truly represents facing location on the walls and ceilings of the open arena or, alternatively, egocentric head direction relative to environmental landmarks. We defined FL in a way analogous to ‘spatial view’ (Rolls et al., 1997), although we note that what this variable represents exactly must be further explored.
Eye movement encoding
Previous studies in which the animal’s head was fixed in a cart could not disambiguate the influences of head and eye movement on hippocampal responses (Georges-François et al., 1999; Rolls et al., 1997). When FL was substituted by SV, we found that some cells appeared to encode where the eyes look. However, considering both variables together resolved this ambiguity: head-based FL outperformed gaze-based SV. Similarly, the firing of most direction-tuned neurons was better explained by head, rather than gaze, direction. We propose that, because primates saccade continuously to explore their environment, head-anchoring may provide a more stable representation.
More important than the debate between FL vs. SV is the presence of eye movement signals in the macaque HF. Many neurons, particularly in the EC (22%), were tuned to egocentric eye-in-head position and/or velocity. This number is similar to results obtained in head-fixed monkeys viewing images (11.9% Grid + 9.3% Border in Killian et al., 2012; 20.3% in Meister and Buffalo, 2018; 22% in the HPC in Nowicka and Ringo, 2000). In addition, many cells, particularly in the HPC and SUB, were modulated by saccadic eye movements. Furthermore, head-fixed studies have found hippocampal LFP synchronization and amplitude modulation by saccades (Doucet et al., 2020; Jutras et al., 2013). We extended these findings to freely-behaving monkeys and further showed differential effects across HF regions. These results directly link eye movements with the navigation circuit during natural behavior.
Despite strong eye movement-related responses in the HF, we found that the macaque EC showed little grid tuning, both for allocentric location and for where the animal was looking in the environment. Previously, a small percentage (11.9%) of EC cells were reported to show grid tuning based on eye movements in head-fixed macaques (Killian et al., 2012). Using the same criterion, we found that 9% of EC neurons showed a significant grid score. However, the tuning was so weak that no neuron reached significance at the 99% level. Thus, visual grid tuning, even if present in a small number of EC cells, is far less robust, and possibly insignificant, compared to rodent grid cells. Alternatively, visual grid tuning may only be seen in head-fixed animals exploring a visual image and not in freely-foraging primates. Future studies should directly compare visual space grid tuning between head-fixed virtual navigation (or viewing a steady image) and free navigation in open environments on a cell-by-cell basis.
Combining present and previous findings, head facing may act as an anchor for firing fields in the HF, whereas eye movements may establish a finer-grained map of viewed items centered around that facing location. The eyes actively interrogate the environment and efficiently gather information. Subtle eye movements may reflect internal deliberation and the real-time prioritization of goals (Gottlieb and Oudeyer, 2018; Yang et al., 2016). The reported eye movement encoding in navigation circuits may also reflect task-relevant variables that covary with eye movements. It is well known, for example, that eye movements offer a window into cognition and memory, which could be mediated by the HF (Hayhoe and Ballard, 2005; Nau et al., 2018; Ryan and Shen, 2020). Eye movements have also been used to infer subjects’ beliefs during continuous tracking of an invisible target (Lakshminarasimhan et al., 2020); accurate goal-tracking by gaze was associated with improved task performance, and inhibiting eye movements impaired navigation precision. Thus, it is important that future experiments explore the precise advantage that active visual sampling confers upon navigational planning and the extent to which humans and monkeys exploit it.
Position (‘place’) coding in freely-moving monkeys
Since the discovery of place cells in rodents, it has been a long-standing question whether the primate HF represents self-location in a similar way. It is important to note that, using traditional tuning curve analysis, the proportion of place coding cells in the HPC is not different from previous monkey studies (present study: 26% in HPC, as compared to 30.6% in Matsumura et al., 1999; 25.4% in Hazama and Tamura, 2019; 32.1% in Ludvig et al., 2004 - squirrel monkeys; 14.1% in Courellis et al., 2019 - marmosets). Further, in addition to their low occurrence compared to rodents, ‘place fields’ have been reported to be more dispersed in moving marmosets than rodents (Courellis et al., 2019), a finding that is consistent with the present results. Thus, the difference in conclusions is not data-driven, but rather analysis-driven. Nevertheless, we note that we cannot exclude the possibility that the low fraction of place and grid coding is due to the restricted size of the arena or recording locations. In the wild, the natural habitat of rhesus macaques is much larger than the arena used in the present study, but our monkeys have always lived in small environments. Our arena diameter is approximately 6 times the animal’s body length. Rodent studies often use larger environments relative to their body size. Nonetheless, salient grid tuning has also been confirmed in proportionally similar environments (e.g. Fyhn et al., 2008; Hafting et al., 2005). It is also possible that stronger place and grid coding may be found in locations/cortical layers not sufficiently sampled in the present recordings.
Importantly, unlike previous studies, when other variables were accounted for quantitatively using a multimodal model, the percentage of true place cells dropped from 26% to 7% in the HPC, a percentage that is lower but still significant. This difference emphasizes the need to avoid traditional tuning curve analysis for neurons with mixed selectivity. Although the vertical coverage is limited in the present study, a sizable proportion (13%/19%) of HPC/EC cells were tuned to either horizontal or vertical head position. Thus, like head direction, place coding in freely-moving macaques is three-dimensional, as previously shown for both rats and bats (Grieves et al., 2020; Yartsev and Ulanovsky, 2013). The present study found a negligible proportion (2%) of grid coding. We note that allocentric border cells would also appear as position-coding cells; thus, such cells are also infrequently encountered in the macaque HF.
Putative place cells have been reported in human patients during virtual navigation (26% in Ekstrom et al., 2003; 25.6% in Miller et al., 2013). Single cell recordings have also confirmed grid-like tuning in the human EC in a virtual navigation task (14% in Jacobs et al., 2013; 18.4% in Nadasdy et al., 2017). Indirect measurements in fMRI studies have further corroborated grid-like activity in the human EC, that is also modulated by speed (Doeller et al., 2010). In a recent human study during a spatial memory task, hippocampal neurons showed selectivity to remote goal location, more so than self-location, with some cells also encoding heading direction and trial progression (Tsitsiklis et al., 2020). Whether the HF is engaged in spatial coding may be tightly linked to task demands (Ekstrom et al., 2003; Stangl et al., 2021; Wood et al., 1999, 2000). The free foraging paradigm used in the present study resembles that typically used in rodent studies. However, it is likely that free foraging per se may impose distinct cognitive demands between macaques and rodents, thus differentially engaging their navigation circuits. Future experiments should compare HF spatial representations during memory/goal-oriented navigation and free foraging.
Topography of spatial coding in hippocampal-neocortical networks
In primates, the posterior hippocampus is analogous to the dorsal hippocampus in rodents (Strange et al., 2014). The posterior hippocampus showed stronger spatial tuning to facing location, head tilt, and egocentric boundary than the anterior hippocampus. The posterior hippocampus is also highly connected with the posterior neocortex, including retrosplenial and parietal cortices, which are strongly linked to spatial processing (Clark et al., 2018). Therefore, the topographical differentiation of spatial tuning may reflect a general organization of hippocampal-neocortical networks across different aspects of cognition, in which the posterior part is more involved in detailed spatial representation and the anterior part in memory/emotion-related processing (although this remains controversial) (Strange et al., 2014). These technical innovations will now allow us to interrogate how different cognitive aspects segregate and interact across extended hippocampal-neocortical networks using large-scale wireless recordings in freely-moving primates.
STAR Methods
RESOURCE AVAILABILITY
Lead Contact
Further information and requests should be directed to and will be fulfilled by the lead contact, Dora E. Angelaki (da93@nyu.edu).
Materials Availability
This study did not generate new unique reagents.
Data and Code Availability
All data and code are available from the Lead Contact upon request. Any additional information required to reanalyze the data reported in this work is available from the Lead Contact upon request.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Three male rhesus macaques (Macaca mulatta, 8-10 years old) weighing 9-14 kg were used in this study. Animals were chronically implanted with a lightweight polyacetal head ring for head restraint (Meng et al., 2005). A head cap printed with carbon-fiber reinforced nylon (Utah Trikes, USA) was attached on the head ring to accommodate and protect the microdrives, recording devices, transmitters, and batteries. We used the standard pole-and-collar method to train the monkeys to move from their home cage to the primate chair using positive reinforcement. All animal experimental procedures and surgeries were approved by the Institutional Animal Care and Use Committee at Baylor College of Medicine and were in accordance with the National Institutes of Health guidelines.
METHOD DETAILS
Reconstruction and implant design
Pre-op computed tomography (CT) and magnetic resonance imaging (MRI; 3T, Siemens) images were collected to segment the regions of interest and to accurately design the titanium chambers (form-fitting to the skull) and the microdrives (form-fitting to the brain surface; Visijet-Clear or Ultem). CT and MRI images were registered in 3D Slicer software (Fedorov et al., 2012) (https://www.slicer.org). The hippocampus, entorhinal cortex, subicular complex, and the entire brain were segmented semi-manually from the T1-weighted MRI images in ITK-SNAP software (Yushkevich et al., 2006) (https://www.itksnap.org), with reference to standard atlases (Paxinos et al., 2008; Saleem and Logothetis, 2006). Major blood vessels were segmented from the T2-weighted MRI images. The skull model was segmented from the CT images. All rendered 3D models were exported from ITK-SNAP and then imported into 3D Slicer, where MRI and CT volumes were aligned with stereotaxic coordinates. Models of the chambers and microdrives were imported into 3D Slicer, and the positioning was chosen such that the electrodes covered the largest possible extent of the hippocampus and the posterior entorhinal cortex (homologous to the medial entorhinal cortex in rodents). Major blood vessels were avoided.
Freely moving monkey (FMM) arena
The FMM arena consisted of an open circular enclosure, 3.30 m in diameter and 2.12 m in height, with a single entrance/exit door and a drain on the floor (Figure S2). The enclosure was made of white composite material. Three feed/touch-screen boxes were evenly spaced around the perimeter of the arena. A motion capture system (Vicon Ltd, UK) consisting of 9 infrared cameras (Bonita) was mounted on the ceiling and the wall (Figure S2) to capture, at a rate of ~1 kHz, the 3D position of the 4 reflective markers placed on the monkey’s head cap. Nine video cameras (CM3-U3-13Y3M 1/2" Chameleon3 Monochrome Camera, FLIR Systems, Inc., USA) were mounted on the wall to capture videos at ~30 Hz from different angles through a transparent plastic window. A wide-angle camera was mounted in the center of the ceiling for surveillance. The arena was lit by an LED lamp mounted on the ceiling.
Implantation of chronic microdrives
We used two different technologies for recordings.
First, Monkeys B and L were implanted with NLX-18 and NLX-9 tetrode drives (Neuralynx, Inc., USA) using a tetrode-in-guide tube technique. Guide tubes (2” long 27G needles) were loaded and fixed inside a grid, which rested inside a chamber; their depths were planned such that they were positioned ~3-5 mm above the target regions. Next, polymicro capillary tubing (100 μm ID, 170 μm OD; Molex, LLC, USA) was loaded into the guide tubes and glued to the drive shuttles. Each tetrode (Platinum 10% Iridium, 0.0007”; California Fine Wire Co., USA) was then loaded into the polymicro capillary tubing and glued to it, with its tip extending ~2-3 mm beyond the bottom of the tubing. The NLX-18 drive was loaded with 16 tetrodes and the NLX-9 with 8 tetrodes. The tetrodes were plated with platinum to an impedance of 100-200 kΩ. The entire drives were gas-sterilized (EtO) before surgery. Monkey B received an NLX-18 drive in the right hemisphere. Monkey L received an NLX-18 drive in the right hemisphere and an NLX-9 drive in the left hemisphere. The assembled drive was implanted as a single unit, and only a single surgery was required.
Second, Monkey K was implanted with a 32-channel microdrive (SC32) in the left hemisphere and a 124-channel microdrive (LS124; Gray Matter Research, LLC, USA) in the right hemisphere (Dotson et al., 2017). Each channel was loaded with a glass-coated tungsten electrode (250 μm total diameter, 60 ° taper angle, ~1 MΩ impedance; Alpha Omega Co., USA). Each electrode had a travel distance of 42 mm from the brain surface. The implantation procedure breaks into 3 separate surgeries (Gray Matter Research) (https://www.graymatter-research.com/documentation-manuals/): Chamber implantation, craniotomy, and microdrive implantation. In stage 1, the chamber was implanted at the desired location using C&B Metabond (Parkell, Inc., USA) and dental cement; bone anchor screws were not required since everything was protected within the head cap. The chamber was hermetically sealed with a plug and O-rings. At 10 days after stage 1, we collected fluid sample from the inside of the chamber and cultured the sample. After no signs of infection and a negative culture, we moved on to stage 2. In stage 2, a craniotomy was made inside the chamber and the edge of the craniotomy was polished with 90 ° Kerrison rongeurs. A form-fitting plug (with the same shape as the microdrive) was installed and the plug & chamber unit were hermetically sealed. At 10 days after stage 2, we controlled for infection again by collecting and culturing a fluid sample from the inside of the chamber. After no signs of infection and a negative culture, we moved on to the final stage. In stage 3, the microdrive, loaded with electrodes, was inserted into the chamber and secured to the chamber walls using multiple screws. The entire assembly was then hermetically sealed with O-rings. All components were autoclaved or gas-sterilized prior to procedures.
For Monkeys B and L, all tetrodes were advanced to 1 mm out of the guide tubes on the day of implantation. For Monkey K, all electrodes targeting deep structures were moved to 8 mm below brain surface on the day of microdrive implantation and advanced to a position of 2-3 mm above target areas within 5 days following recovery from surgery. Electrodes were advanced by turning fine threaded screws, at a resolution of 125 μm per revolution for single electrodes and 250 μm per revolution for tetrodes. Electrodes were advanced by 50-250 μm per day and recording was performed the next day. A CT scan was performed every 2-3 months to reconstruct current electrode locations. Electrodes were visible as white tracks in the CT images (Figures 1 and S1). CT volumes were registered with pre-op MRI volumes to visualize electrode locations in the brain (Premereur et al., 2020). When combined with electrode advance history, electrode locations were reconstructed with good precision. Anatomical landmarks such as ventricles, white matter and electrophysiological signatures were used to further verify electrode locations.
We additionally verified electrode locations with histology in Monkey B. The monkey was perfused with electrodes in position. After post-perfusion fixation, the brain was removed and sunk in fixative with 30% sucrose for several days. The brain was sectioned at 60 μm and stained with cresyl violet.
Behavioral training and tracking
Monkeys were placed on a delayed feeding schedule. They were fed fewer biscuits than normal (~15%) in the morning, so that they were encouraged to forage for treats in the arena. The remaining daily food allotment, minus the amount received in the arena, was given after the monkeys were returned to the home cage after training, and was supplemented with vegetables and fruits. The monkeys were habituated to the FMM arena gradually. They were brought from their home cage to the arena in a transfer cage. They entered/exited the arena through a single door (the same one each time). They were trained to freely forage for randomly scattered food pellets or fruits on the floor throughout the session. Monkeys B and L also received rewards from the reward boxes with equal probability. Monkey K did not forage at the reward boxes. At the end of each session, the monkeys were allowed to return to the transfer cage and then back to the home cage, where they received food and additional treats. Monkey K was habituated to wear a wireless eye tracking device (ISCAN, Inc., USA). The monkey was initially trained to wear a dummy eye tracking device, which could be replaced inexpensively if destroyed, in the home cage. Once the monkey stopped interacting with it, it was trained to wear the actual eye tracking device in the FMM arena. The eye tracking device was housed in a rigid, 3D-printed case fixed to the head ring implant. The device consisted of a miniature infrared camera, an infrared emitter, a hot mirror, and a transmitter. The hot mirror was held rigidly in front of the left eye, and the camera faced downward at about 45 ° relative to the mirror.
To accurately track the monkeys’ head motion in 3D, we used a marker-based motion tracking system. Four markers were placed within a single plane on the head cap (Figure 1H): one in the front, one at the back, and the other two placed symmetrically on the left and right (closer to the back marker). The 3D position (x,y,z) of each marker was recorded (~1 kHz) in Spike2 with a data acquisition system (Power1401, CED Ltd., UK). The eye tracking device was powered by a lightweight 3.7 V Li-Po battery. Eye images were wirelessly transmitted to a receiver outside the arena. Horizontal/vertical eye position was monitored (~30 Hz) in Spike2.
Electrophysiological recordings
We used a 64-channel neural logger to record broadband (0.1-7000 Hz) electrophysiological signals at 32 kHz (Deuteron Technologies Ltd., Israel). The neural logger was powered by a lightweight 3.7 V Li-Po battery. The head-stage was connected to the drive connectors directly or via a 5 cm jumper cable. The head-stage utilized a preamplifier and a 16 bit analog-to-digital converter from Intan Technologies. The signal precision was 0.2 μV. Raw signals were digitized on board and stored on a 64 GB micro-SD card (SanDisk) plugged into the logger. To avoid clock drift over time, the logger was wirelessly synchronized with the computer clock every 5-10 minutes via a USB transceiver placed outside the arena; the transceiver was connected to the Spike2 system for synchronization between behavioral data and electrophysiological recordings. Each recording session usually lasted ~20-60 minutes. For Monkey K, the chamber was used as the ground. For Monkeys B and L, one guide tube or a separate screw held in the skull was used as the ground.
QUANTIFICATION AND STATISTICAL ANALYSIS
All data analysis was performed in MATLAB 2019b (The MathWorks, Inc., USA).
Extraction of behavioral variables
Three-dimensional behavioral variables were extracted from the raw marker position data, which were pre-processed to fill small gaps (< 1 s) that occurred occasionally. All raw data were re-sampled at 50 Hz.
Head-in-world position in the horizontal plane was defined as the x and y components of the average across all 4 markers. Translational speed was defined as the magnitude of its time derivative. Head height was defined as the z component of 3D position (arena floor at z = 0). Head tilt was defined as the projection of the earth gravity vector onto the head coordinate system; only pitch and roll angles (as a 2D variable) were considered. Azimuth head direction was calculated as in (Angelaki et al., 2020). Head tilt and azimuth head direction, together, define 3D orientation. We computed angular velocity about the head’s yaw, pitch and roll axes. We focused on the yaw and pitch angular velocities (as a 2D variable) since the head rotated mostly along these 2 dimensions. Egocentric boundary was defined in a polar coordinate system, with the radial component calculated as the distance from the head center to the nearest arena boundary and the polar angle defined as the angle between the arena center-to-head center vector and the azimuth head direction. To calculate where the monkey looked, we obtained 3D gaze direction by combining eye-in-head position and 3D head direction. The inner surface of the arena was treated as an enclosed cylinder composed of three surfaces: the ceiling, the floor, and the cylinder wall. Spatial view was then computed as the intersection between the 3D gaze direction and these surfaces. Facing location was computed similarly, i.e., as the intersection between the 3D head direction and the three surfaces. Note that facing location (FL) is different from 3D orientation: depending on the animal’s position, the same head orientation can yield distinct FLs, and different head orientations can point to the same FL.
Position (Pos) and head height (HH) define 3D position; head direction (HD) and head tilt (HT) define 3D orientation; facing location (FL) / spatial view (SV) correspond to the intersection of 3D head / gaze direction (a 3D vector) with arena surfaces.
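Computing SV and FL amounts to intersecting a ray (gaze or head direction, respectively) with the arena surfaces. A minimal MATLAB sketch, assuming the arena dimensions given in the Methods (radius 165 cm, height 212 cm, floor at z = 0) and row-vector inputs, is:

% Minimal sketch: facing location as the first intersection of the 3D
% head-direction ray with the cylindrical arena. Spatial view is obtained the
% same way by passing the 3D gaze direction instead.
% p0: [x y z] head position (cm); d: [dx dy dz] direction vector (need not be unit length).
function fl = facingLocation(p0, d)
R = 165;  H = 212;                                 % arena radius and height (cm)
% Wall: solve |p0(1:2) + t*d(1:2)| = R for the positive root (head is inside the arena).
a = d(1)^2 + d(2)^2;
b = 2 * (p0(1)*d(1) + p0(2)*d(2));
c = p0(1)^2 + p0(2)^2 - R^2;
tWall = (-b + sqrt(b^2 - 4*a*c)) / (2*a);          % NaN for a purely vertical ray
% Floor (z = 0) or ceiling (z = H), depending on where the ray points.
if d(3) > 0
    tCap = (H - p0(3)) / d(3);
elseif d(3) < 0
    tCap = -p0(3) / d(3);
else
    tCap = Inf;                                    % horizontal ray never hits floor/ceiling
end
t  = min(tWall, tCap);                             % first surface hit (min ignores NaN)
fl = p0 + t * d;                                   % 3D facing location on the arena surface
end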
For the model fitting, these behavioral variables were binned as follows: position (2D variable, −165-165 cm, 20 x 20 bins); facing location and spatial view (3D variable, x & y: −165-165 cm, z: 0-212 cm, 17 x 17 bins for the ceiling and floor, 53 x 11 bins for the wall); egocentric boundary (2D variable, x & y: −165-165 cm, 20 x 20 bins); head tilt (2D variable, pitch: −55-90 ° (up-down), roll: −18-18 ° (left-right), 18 x 6 bins); angular velocity (2D variable, −120-120 °/s, 16 x 16 bins); head height (1D variable, 10-70 cm, 12 bins); translational speed (1D variable, 0-120 cm/s, 15 bins); azimuth head direction (1D circular variable, −180-180 °, 18 bins). Sessions in which the monkeys occupied fewer than 80% of the valid position bins were excluded from further analysis.
Note that in this analysis, allocentric position-dependent spatial variables, such as border and grid tuning, are not distinguished from Pos tuning. Thus, Pos tuning encompasses not only place fields but also grid and border fields.
Preprocessing of electrophysiological data
We used a semi-automatic procedure to preprocess the electrophysiological data. To obtain the LFP, the raw data were low-pass filtered at 300 Hz and down-sampled to 1 kHz. We used established pipelines: the software Klusta for spike detection and automatic spike sorting, and the phy Kwik-GUI for manual curation (Rossant et al., 2016). We extracted spike waveforms 0.5 ms before and 1 ms after the trough. Therefore, some neurons with longer trough-to-peak latency may appear to have a latency close to 1 ms (Fig. S4K). Manual corrections were based on spike waveforms, waveform features, auto-correlograms, and cross-correlograms. Only well-isolated units were kept for further analysis. We used two independent measures to quantify spike sorting quality. First, we calculated the percentage of spikes falling within the refractory period (3 ms), i.e., the refractory period violation rate. Second, we calculated the isolation distance as the nth smallest Mahalanobis distance of the spikes in other clusters from the current cluster in the feature space, where n is the number of spikes in the current cluster (Schmitzer-Torbert et al., 2005). For this measure to be defined, the recording must contain more than n spikes belonging to other clusters.
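A minimal MATLAB sketch of the isolation distance computation (using the built-in mahal function, which returns squared Mahalanobis distances, as in the original definition; F, clusterID and id are illustrative names, not from the analysis code):

% Sketch: isolation distance (Schmitzer-Torbert et al., 2005).
% F is an M x D matrix of waveform features for all spikes in the session,
% clusterID labels each spike, and id is the cluster being evaluated.
function isoDist = isolation_distance(F, clusterID, id)
    inClu  = clusterID == id;
    n      = nnz(inClu);                  % number of spikes in the evaluated cluster
    others = F(~inClu, :);
    if size(others, 1) <= n
        isoDist = NaN;                    % undefined without > n spikes in other clusters
        return
    end
    d2 = mahal(others, F(inClu, :));      % squared Mahalanobis distance from the cluster
    d2 = sort(d2);
    isoDist = d2(n);                      % n-th smallest distance
end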
LNP model fitting
Several recent studies have emphasized the importance of using multimodal models to characterize mixed selectivity, which are agnostic to tuning curve shape and robust to the interdependence of encoded variables (Hardcastle et al., 2017; Laurens et al., 2019). Specifically, by comparing a multivariate linear-nonlinear Poisson (LNP) model framework with traditional classification methods based on tuning curves, Laurens et al. (2019) found that traditional analysis approaches often produce biased results. A fundamental advantage of the multivariate LNP model framework is that it can, by design, correctly identify multimodal responses even when various behavioral variables are correlated or interdependent. For example, position and egocentric boundary (EB) variables are typically strongly correlated, such that an EB-tuned cell may appear to be tuned as a place cell (Laurens et al., 2019).
Thus, to quantify the encoding of the behavioral variables by neuronal activity, we adapted this LNP framework, which has been used successfully to investigate the neuronal encoding of navigational variables in the rodent hippocampal formation (Hardcastle et al., 2017; Laurens et al., 2019; Ledergerber et al., 2021). The MATLAB code is available on GitHub (https://github.com/kaushik-l/neuroGAM). Spike trains were binned into 0.02 s time bins, matching the behavioral variables. To test the significance of the model fitting and to control for overfitting, we used a 5-fold cross-validation approach. To obtain training and test sets with minimal bias, the data were divided into 3 chunks; each chunk was further divided into 5 sub-chunks; for fold i (i ∈ {1, 2, 3, 4, 5}) of the cross-validation, the ith sub-chunk of each chunk was concatenated and used as the test set (20% of the data), with the remainder used as the training set (80% of the data). The measured spike train r was smoothed with a Gaussian kernel with a 0.06 s (3 time bins) standard deviation. The best-fit parameters were obtained using the MATLAB fminunc function (with the ‘trust-region’ algorithm). We used the log-likelihood to quantify model performance (i.e., goodness of fit). Model performance was compared to a null model (mean firing rate model) to test for significance (one-sided Wilcoxon signed rank test, alpha = 0.05).
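For reference, the Poisson negative log-likelihood minimized for each candidate model can be sketched as follows (a simplified version without the smoothness penalties that GAM-style implementations such as neuroGAM typically include; X denotes the binned design matrix, y the spike counts per 0.02 s bin, and all names are illustrative):

% Simplified Poisson negative log-likelihood (up to terms constant in w) and
% its gradient, as required by fminunc's 'trust-region' algorithm.
function [f, g] = lnp_negloglik(w, X, y, dt)
    u = X * w;              % linear predictor from the binned design matrix
    r = exp(u) * dt;        % expected spike count per time bin
    f = sum(r) - y' * u;    % negative Poisson log-likelihood
    g = X' * (r - y);       % analytical gradient
end
% Example use within one cross-validation fold (Xtrain, ytrain illustrative):
% opts = optimoptions('fminunc', 'Algorithm', 'trust-region', ...
%                     'SpecifyObjectiveGradient', true);
% w0   = zeros(size(Xtrain, 2), 1);
% wFit = fminunc(@(w) lnp_negloglik(w, Xtrain, ytrain, 0.02), w0, opts);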
We used an optimized forward search approach to select the best, simplest model. First, we fitted 1st-order models containing only one variable each. Second, if any of the 1st-order models performed better than the null model, we fitted 2nd-order models (2 variables) that included the best 1st-order variable (the one with the highest significant log-likelihood). Higher-order models containing the best significant lower-order variables were fitted in the same way until model performance no longer improved. The simplest model with the best performance was selected as the final best model.
By construction, the forward search ensures that any variable that significantly improves the performance of the best lower-order model is included in the final best model. Thus, if a combination of variables explains the activity better than the best single predictor, that combination will be selected; if not, the remaining variables do not contribute beyond the best predictor.
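The forward search can be summarized with the following illustrative MATLAB pseudocode (fitCV is a hypothetical helper returning the cross-validated log-likelihood gain over the null model and a significance flag; the variable abbreviations are ours):

vars = {'Pos','HH','HD','HT','FL','EB','AV','TS'};   % candidate variables (illustrative labels)
selected = {};
bestLL   = -Inf;                                     % performance of the current best model
improved = true;
while improved && numel(selected) < numel(vars)
    improved  = false;
    remaining = setdiff(vars, selected);
    for k = 1:numel(remaining)
        [ll, sig] = fitCV([selected, remaining(k)]); % hypothetical CV fitting helper
        if sig && ll > bestLL
            bestLL   = ll;
            bestVar  = remaining(k);
            improved = true;
        end
    end
    if improved
        selected = [selected, bestVar];              % grow the model by the best new variable
    end
end
% 'selected' holds the variables of the final best (simplest) model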
The neuron categorization (Figure 4D) was based on the best 1st-order model; for example, if the 1st-order model containing the position variable had the best significant performance, the neuron was categorized as a ‘position’ cell. The fraction of neurons encoding each variable (Figure 4E) was based on the final best model, in which a single neuron could encode more than one variable; for example, if the best model included egocentric boundary + translational speed + head direction, the neuron was counted as encoding all three variables. This approach avoided assigning variables that were significant in the 1st-order models but were not retained in the final best model, as can happen when behavioral variables are interdependent, a dependency usually not taken into account in traditional analyses.
To distinguish between tuning to head- and eye-related properties, in Fig. 7, we repeated the LNP model fitting after adding two eye-related variables (SV and GD) to the above model. Therefore, the final model included 10 variables (see the table above).
LFP analysis
We used the continuous wavelet transform (CWT) to compute the scalogram (Torrence and Compo, 1998). We used a model-based approach to separate the aperiodic and oscillatory components of the power spectrum (https://github.com/fooof-tools/fooof). We extracted the average power in 6 frequency bands from the scalogram: low theta, 1-4 Hz; theta, 4-8 Hz; alpha, 8-12 Hz; beta, 12-30 Hz; low gamma, 30-60 Hz; and high gamma, 60-120 Hz. To generate the plots in Fig. 2E, we used a threshold of 4 standard deviations (computed over the entire session) to detect local power peaks (at least 2 s apart) in the theta and beta bands. We then calculated the average speed profiles synchronized to these peak times across all sessions.
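A minimal sketch of the band-power extraction and peak detection (using MATLAB's cwt and findpeaks; the sketch assumes a threshold of the session mean plus 4 standard deviations, and lfp is an illustrative name for the 1 kHz LFP trace):

% Sketch: theta-band power from the CWT scalogram and peak detection.
fs = 1000;
[wt, f]  = cwt(double(lfp), fs);                       % scalogram (analytic wavelet)
thetaPow = mean(abs(wt(f >= 4 & f <= 8, :)).^2, 1);    % average theta-band power
thr = mean(thetaPow) + 4 * std(thetaPow);              % assumed form of the 4-SD threshold
[~, pkIdx] = findpeaks(thetaPow, 'MinPeakHeight', thr, ...
                       'MinPeakDistance', 2 * fs);     % peaks at least 2 s apart
pkTimes = pkIdx / fs;                                  % peak times in seconds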
To extract LFP phase, we first band-pass filtered the raw LFP into individual frequency bands using a 4th-order Chebyshev Type II filter. We then used the Hilbert transform to extract the instantaneous LFP phase. We used nearest-neighbor interpolation to identify the corresponding LFP phase for each spike. We used the Rayleigh test to identify significant spike-LFP phase tuning (alpha = 0.01).
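A corresponding sketch of the phase extraction and spike-phase locking test (the stopband attenuation and the use of zero-phase filtering with filtfilt are our illustrative choices; circ_rtest is from the CircStat toolbox, and lfp and spikeTimes are illustrative names):

% Sketch: instantaneous theta phase and spike-phase locking.
fs = 1000;
[b, a]   = cheby2(4, 20, [4 8] / (fs/2), 'bandpass');   % Chebyshev Type II band-pass
lfpTheta = filtfilt(b, a, double(lfp));                 % zero-phase filtering (assumed)
phase    = angle(hilbert(lfpTheta));                    % instantaneous phase (rad)
tLFP     = (0:numel(lfp)-1) / fs;
spkPhase = interp1(tLFP, phase, spikeTimes, 'nearest'); % phase at each spike
p = circ_rtest(spkPhase);                               % Rayleigh test for non-uniformity
isPhaseLocked = p < 0.01;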
To analyze LFP phase precession, the LFP phase in the theta bands was unwrapped so that the resulting phase was monotonic rather than cyclic (Mizuseki et al., 2009; Qasim et al., 2021). Each spike was assigned the corresponding unwrapped LFP phase via nearest-neighbor interpolation. We excluded spikes for which the filtered LFP amplitude fell in the lowest 10th percentile. We calculated the autocorrelogram (22.5° bins) of the spike phase trains and then calculated its power spectrum. If a neuron is phase-locked to the LFP, the power spectrum of the autocorrelogram peaks at the LFP frequency (relative frequency = 1). If the power spectrum peaks at a frequency faster than the LFP (relative frequency > 1), spikes shift to progressively earlier LFP phases, i.e., phase precession. To identify significant phase precession, we assigned each spike a random unwrapped LFP phase within the same theta cycle and repeated the analysis 500 times. Phase precession was considered significant if the actual power spectrum exceeded the shuffled distribution at a peak within the relative frequency range of 1 to 1.5 (alpha = 0.01/26 bins, corrected for multiple comparisons).
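The relative-frequency analysis can be sketched as follows (spkPhaseU denotes the unwrapped theta phases assigned to the spikes of one neuron; names and implementation details are illustrative):

% Sketch: relative-frequency test for phase precession.
binSize = deg2rad(22.5);                          % 22.5-degree phase bins
edges   = min(spkPhaseU):binSize:(max(spkPhaseU) + binSize);
cnt     = histcounts(spkPhaseU, edges);           % spike train on the phase axis
ac      = xcorr(cnt - mean(cnt), 'coeff');        % autocorrelogram of the phase train
nfft    = 2^nextpow2(numel(ac));
P       = abs(fft(ac, nfft)).^2;                  % power spectrum of the autocorrelogram
fRel    = (0:nfft-1) / nfft * (2*pi / binSize);   % frequency in cycles per theta cycle
% A peak at fRel ~ 1 indicates phase locking; a peak at 1 < fRel <= 1.5
% indicates that spikes drift to earlier phases (phase precession).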
Circular graph for conjunctive coding
The degree of conjunctive coding was defined as the similarity of the encoding between variables, i.e., the size of the intersection divided by the size of the union (Jaccard index). The circular graph in Figure 4G included only variable pairs with a Jaccard index above 0.1 (code at https://github.com/paul-kassebaum-mathworks/circularGraph).
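For a pair of variables, this reduces to the following (encodesA and encodesB are illustrative logical vectors indicating, for each neuron, whether its final best model included variable A or B):

% Sketch: Jaccard index between the sets of neurons encoding two variables.
jaccard  = nnz(encodesA & encodesB) / nnz(encodesA | encodesB);
keepEdge = jaccard > 0.1;      % only pairs above 0.1 enter the circular graph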
Tuning curve clustering analysis
Tuning curve clustering analysis (Figures 5G and 5H, right) was based on the model-based tuning curves. We first normalized the tuning curves so that the firing rate ranged from 0 to 1, and then performed principal component analysis on the correlation matrix of all tuning curves. The first two principal components were plotted (example code at https://github.com/PeyracheLab/Class-Decoding). The uniformity test for the data in Figure 5H used the Rayleigh test in the CircStat toolbox (Berens, 2009) (http://bethgelab.org/software/circstat/).
Traditional tuning curve-based analysis
The model-based, unbiased approach employed in the present study is well suited to disambiguating the contributions of many variables, whereas more traditional analyses that focus on a single variable or a limited set of variables often over-represent the coding of the variables they consider. Nevertheless, to allow comparison with previous studies, we also performed a more traditional tuning analysis of the most commonly used spatial variables.
Specifically, we used traditional methods to identify place tuning based on spatial information (SI) (Skaggs et al., 1993). We used grid score (GS) to quantify grid tuning (Langston et al., 2010). We set a speed threshold of 10 cm/s below which spikes were not considered. We first constructed the position activity map (firing rate as a function of position) using 6x6 cm binning (55x55 bins for x and y), which was then smoothed using a 2D Gaussian kernel with 9 cm standard deviation along each dimension. We used speed modulation depth to identify neurons tuned to speed. We first constructed speed tuning curves (firing rate as a function of speed, 30 speed bins, from 0-120 cm/s, 6 cm/s standard deviation Gaussian smoothing). We calculated speed modulation depth as the difference between the maximum and minimum speed tuning firing rate (Kropff et al., 2015). We used mean vector length to quantify azimuth head direction tuning. We first constructed circular head direction tuning curves (36 bins, 10°/bin, 15° standard deviation Gaussian smoothing). The mean vector length was calculated as the weighted sum of the tuning curve in each direction divided by the total firing rate across all directions using the CircStat toolbox (Berens, 2009). For grid analysis of the SV and FL variables, we computed the SV and FL activity map on the floor using 110x110 bins, smoothed with a 2D Gaussian kernel with standard deviation of 2x2 bins; and SV and FL activity map on the wall using 500x100 bins (perimeter x height), smoothed with a 2D Gaussian kernel with standard deviation of 5x1 bins along each dimension. We then computed GS to quantify SV and FL grid tuning either on the wall or on the floor.
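For illustration, the spatial information and head-direction mean vector length computations can be sketched as follows (rateMap, occMap and hdRate are illustrative names for the smoothed firing-rate map, the corresponding occupancy map, and the 36-bin head-direction tuning curve as a row vector):

% Sketch: spatial information (bits/spike) and mean vector length.
p    = occMap(:) / sum(occMap(:));            % occupancy probability per bin
r    = rateMap(:);                            % firing rate per bin
rBar = sum(p .* r);                           % occupancy-weighted mean rate
valid = p > 0 & r > 0;
SI = sum(p(valid) .* r(valid) / rBar .* log2(r(valid) / rBar));   % Skaggs et al., 1993

binCtr = deg2rad(5:10:355);                   % centers of the 10-degree HD bins
mvl = abs(sum(hdRate .* exp(1i * binCtr))) / sum(hdRate);         % mean vector length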
To obtain the shuffled distributions, we circularly shifted the spike trains relative to the behavioral data by a random interval between 10 seconds and the session duration minus 10 seconds, and re-calculated each measure described above. We repeated this process 500 times. If the raw measure was larger than the 99th percentile of the shuffled distribution, the neuron was considered tuned to the specific variable. We additionally set an SI threshold of 1 bit/spike to identify neurons tuned to position.
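A minimal sketch of the circular-shift shuffling procedure (computeMeasure and rawMeasure are hypothetical placeholders for any of the measures above; spikeTrain is the binned spike count vector at the behavioral sampling rate):

% Sketch: shuffled distribution via circular shifts of the spike train.
fsBeh  = 50;                                % behavioral sampling rate (Hz)
nBins  = numel(spikeTrain);
minLag = 10 * fsBeh;                        % shift at least 10 s away from zero
shuffled = nan(500, 1);
for s = 1:500
    lag = randi([minLag, nBins - minLag]);  % random circular shift
    shuffled(s) = computeMeasure(circshift(spikeTrain, lag));
end
isTuned = rawMeasure > prctile(shuffled, 99);   % 99th-percentile criterion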
One issue with the traditional analyses is that a neuron may appear to encode a particular variable when other, more relevant variables are excluded, thereby masking the actual, more multiplexed coding scheme. As noted above, several recent studies have emphasized the advantages of multimodal models for characterizing mixed selectivity, as they are agnostic to tuning curve shape and robust to the interdependence of encoded variables (Hardcastle et al., 2017; Laurens et al., 2019).
Eye movement analysis
We used a speed threshold of 150°/s to detect saccade events in either horizontal or vertical eye movements. We computed the slow-phase eye velocity by taking the derivative of the smoothed eye position (excluding saccade events). We fitted eye-in-head position (2D variable, horizontal: −40 to 40°, vertical: −30 to 30°, 20 x 20 bins) and eye velocity (2D variable, horizontal/vertical: −150 to 150°/s, 20 x 20 bins) with the LNP model (single-variable models) to identify neurons tuned to either of these two variables. To identify neurons tuned to saccade events, we identified neurons showing a significant difference between pre- and post-saccadic activity (0.4 s before and 0.4 s after saccade onset; paired t-test, alpha = 0.01). For eye-in-head movement, we quantified horizontal and vertical components only, which suffices because gaze has a 2D topology; the third dimension, torsion, is not particularly relevant for visual exploration.
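A minimal sketch of the saccade detection and the pre/post-saccadic comparison (eyeH and eyeV are illustrative names for the eye-in-head angles sampled at 50 Hz, spikeTimes for the spike times of one neuron in seconds):

% Sketch: saccade detection (150 deg/s threshold) and pre/post firing-rate test.
fsBeh  = 50;
isSac  = abs(gradient(eyeH) * fsBeh) > 150 | abs(gradient(eyeV) * fsBeh) > 150;
onsets = find(diff([0; isSac(:)]) == 1) / fsBeh;        % saccade onset times (s)

preRate  = arrayfun(@(t) sum(spikeTimes >= t - 0.4 & spikeTimes <  t), onsets) / 0.4;
postRate = arrayfun(@(t) sum(spikeTimes >= t       & spikeTimes <= t + 0.4), onsets) / 0.4;
[~, p] = ttest(preRate, postRate);                      % paired t-test across saccades
isSaccadeModulated = p < 0.01;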
All p values smaller than 0.001 were indicated as p < 0.001; otherwise exact p values were indicated.
Supplementary Material
KEY RESOURCES TABLE
REAGENT or RESOURCE | SOURCE | IDENTIFIER |
---|---|---|
Experimental models: Organisms/strains | | |
Macaca mulatta | Covance Research Products; Univ Texas MD Anderson | N/A |
Software and algorithms | | |
Klusta | Rossant et al., 2016 | https://github.com/kwikteam/klusta |
Phy | Rossant et al., 2016 | https://github.com/cortex-lab/phy |
3D Slicer | Fedorov et al., 2012 | https://www.slicer.org |
ITK-SNAP | Yushkevich et al., 2006 | https://www.itksnap.org |
MATLAB custom code | MathWorks | https://doi.org/10.5281/zenodo.5496420 |
Highlights.
Wireless recordings and accurate behavior tracking in freely moving macaques
Hippocampal theta activity is intermittent and modulated by eye movements
Facing location and 3D orientation, but not place, dominate spatial selectivity
3D facing and orientation tuning is mostly anchored to the head rather than gaze
Acknowledgments
We thank J. Lin and B. Kim for help with setting up the arena; G. DeAngelis for help with surgeries; M. Schartner for help with the video-based tracking system; N. Tataryn for veterinary care; L. Lu and K. Bohne for advice on the tetrode technique; B. Goodell and C. Gray for assistance with the microdrive design; K. Lakshminarasimhan for help with the model fitting. NIH BRAIN Initiative grant U01 NS094368, 1R01-AT010459 and Simons Collaboration on the Global Brain Grant 542949 (D.E.A.); NIH grant NIDCD DC014686 (J.D.D).
Footnotes
Declaration of Interests
The authors declare no competing interests.
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
References
- Angelaki DE, Ng J, Abrego AM, Cham HX, Asprodini EK, Dickman JD, and Laurens J (2020). A gravity-based three-dimensional compass in the mouse brain. Nature Communications 11, 1855. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Barnes CA, McNaughton BL, Mizumori SJY, Leonard BW, and Lin L-H (1990). Chapter 21: Comparison of spatial and temporal characteristics of neuronal activity in sequential stages of hippocampal processing. In Progress in Brain Research, Storm-Mathisen J, Zimmer J, and Ottersen OP, eds. (Elsevier), pp. 287–300. [DOI] [PubMed] [Google Scholar]
- Barthó P, Hirase H, Monconduit L, Zugaro M, Harris KD, and Buzsáki G (2004). Characterization of neocortical principal cells and interneurons by network interactions and extracellular features. J Neurophysiol 92, 600–608. [DOI] [PubMed] [Google Scholar]
- Berens P (2009). CircStat: A MATLAB Toolbox for Circular Statistics. Journal of Statistical Software 31, 1–21. [Google Scholar]
- Bohbot VD, Copara MS, Gotman J, and Ekstrom AD (2017). Low-frequency theta oscillations in the human hippocampus during real-world and virtual navigation. Nature Communications 8, 14415. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bush D, Bisby JA, Bird CM, Gollwitzer S, Rodionov R, Diehl B, McEvoy AW, Walker MC, and Burgess N (2017). Human hippocampal theta power indicates movement onset and distance travelled. PNAS 114, 12297–12302. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Buzsáki G, and Moser EI (2013). Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nature Neuroscience 16, 130–138. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Clark BJ, Simmons CM, Berkowitz LE, and Wilber AA (2018). The retrosplenial-parietal network and reference frame coordination for spatial navigation. Behav Neurosci 132, 416–429. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Courellis HS, Nummela SU, Metke M, Diehl GW, Bussell R, Cauwenberghs G, and Miller CT (2019). Spatial encoding in primate hippocampus during free navigation. PLOS Biology 17, e3000546. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Csicsvari J, Jamieson B, Wise KD, and Buzsáki G (2003). Mechanisms of Gamma Oscillations in the Hippocampus of the Behaving Rat. Neuron 37, 311–322. [DOI] [PubMed] [Google Scholar]
- Doeller CF, Barry C, and Burgess N (2010). Evidence for grid cells in a human memory network. Nature 463, 657–661. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dotson NM, Hoffman SJ, Goodell B, and Gray CM (2017). A Large-Scale Semi-Chronic Microdrive Recording System for Non-Human Primates. Neuron 96, 769–782.e2. [DOI] [PubMed] [Google Scholar]
- Doucet G, Gulli RA, Corrigan BW, Duong LR, and Martinez-Trujillo JC (2020). Modulation of local field potentials and neuronal activity in primate hippocampus during saccades. Hippocampus 30, 192–209. [DOI] [PubMed] [Google Scholar]
- Ekstrom AD, Kahana MJ, Caplan JB, Fields TA, Isham EA, Newman EL, and Fried I (2003). Cellular networks underlying human spatial navigation. Nature 425, 184–188. [DOI] [PubMed] [Google Scholar]
- Ekstrom AD, Caplan JB, Ho E, Shattuck K, Fried I, and Kahana MJ (2005). Human hippocampal theta activity during virtual navigation. Hippocampus 15, 881–889. [DOI] [PubMed] [Google Scholar]
- Eliav T, Geva-Sagiv M, Yartsev MM, Finkelstein A, Rubin A, Las L, and Ulanovsky N (2018). Nonoscillatory Phase Coding and Synchronization in the Bat Hippocampal Formation. Cell 175, 1119–1130.e15. [DOI] [PubMed] [Google Scholar]
- Fedorov A, Beichel R, Kalpathy-Cramer J, Finet J, Fillion-Robin J-C, Pujol S, Bauer C, Jennings D, Fennessy F, Sonka M, et al. (2012). 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magnetic Resonance Imaging 30, 1323–1341. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Finkelstein A, Derdikman D, Rubin A, Foerster JN, Las L, and Ulanovsky N (2015). Three-dimensional head-direction coding in the bat brain. Nature 517, 159–164. [DOI] [PubMed] [Google Scholar]
- Furuya Y, Matsumoto J, Hori E, Boas CV, Tran AH, Shimada Y, Ono T, and Nishijo H (2014). Place-related neuronal activity in the monkey parahippocampal gyrus and hippocampal formation during virtual navigation. Hippocampus 24, 113–130. [DOI] [PubMed] [Google Scholar]
- Fyhn M, Hafting T, Witter MP, Moser EI, and Moser M-B (2008). Grid cells in mice. Hippocampus 18, 1230–1238. [DOI] [PubMed] [Google Scholar]
- Georges-François P, Rolls ET, and Robertson RG (1999). Spatial View Cells in the Primate Hippocampus: Allocentric View not Head Direction or Eye Position or Place. Cereb Cortex 9, 197–212. [DOI] [PubMed] [Google Scholar]
- Gottlieb J, and Oudeyer P-Y (2018). Towards a neuroscience of active sampling and curiosity. Nature Reviews Neuroscience 19, 758–770. [DOI] [PubMed] [Google Scholar]
- Goyal A, Miller J, Qasim SE, Watrous AJ, Zhang H, Stein JM, Inman CS, Gross RE, Willie JT, Lega B, et al. (2020). Functionally distinct high and low theta oscillations in the human hippocampus. Nature Communications 11, 2469. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grieves RM, Jedidi-Ayoub S, Mishchanchuk K, Liu A, Renaudineau S, and Jeffery KJ (2020). The place-cell representation of volumetric space in rats. Nat Commun 11, 789. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gulli RA, Duong LR, Corrigan BW, Doucet G, Williams S, Fusi S, and Martinez-Trujillo JC (2020). Context-dependent representations of objects and space in the primate hippocampus during virtual navigation. Nature Neuroscience 23, 103–112. [DOI] [PubMed] [Google Scholar]
- Hafting T, Fyhn M, Molden S, Moser M-B, and Moser EI (2005). Microstructure of a spatial map in the entorhinal cortex. Nature 436, 801–806. [DOI] [PubMed] [Google Scholar]
- Hardcastle K, Maheswaranathan N, Ganguli S, and Giocomo LM (2017). A Multiplexed, Heterogeneous, and Adaptive Code for Navigation in Medial Entorhinal Cortex. Neuron 94, 375–387.e7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hayhoe M, and Ballard D (2005). Eye movements in natural behavior. Trends in Cognitive Sciences 9, 188–194. [DOI] [PubMed] [Google Scholar]
- Hazama Y, and Tamura R (2019). Effects of self-locomotion on the activity of place cells in the hippocampus of a freely behaving monkey. Neuroscience Letters 701, 32–37. [DOI] [PubMed] [Google Scholar]
- Jacobs J, Kahana MJ, Ekstrom AD, and Fried I (2007). Brain Oscillations Control Timing of Single-Neuron Activity in Humans. J. Neurosci. 27, 3839–3844. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jacobs J, Weidemann CT, Miller JF, Solway A, Burke JF, Wei X-X, Suthana N, Sperling MR, Sharan AD, Fried I, et al. (2013). Direct recordings of grid-like neuronal activity in human spatial navigation. Nature Neuroscience 16, 1188–1190. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jung MW, Wiener SI, and McNaughton BL (1994). Comparison of spatial firing characteristics of units in dorsal and ventral hippocampus of the rat. J. Neurosci 14, 7347–7356. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jutras MJ, Fries P, and Buffalo EA (2013). Oscillatory activity in the monkey hippocampus during visual exploration and memory formation. PNAS 110, 13144–13149. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Killian NJ, Jutras MJ, and Buffalo EA (2012). A map of visual space in the primate entorhinal cortex. Nature 491, 761–764. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kropff E, Carmichael JE, Moser M-B, and Moser EI (2015). Speed cells in the medial entorhinal cortex. Nature 523, 419–424. [DOI] [PubMed] [Google Scholar]
- Kropff E, Carmichael JE, Moser EI, and Moser M-B (2021). Frequency of theta rhythm is controlled by acceleration, but not speed, in running rats. Neuron 109, 1029–1039.e8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lakshminarasimhan KJ, Avila E, Neyhart E, DeAngelis GC, Pitkow X, and Angelaki DE (2020). Tracking the Mind’s Eye: Primate Gaze Behavior during Virtual Visuomotor Navigation Reflects Belief Dynamics. Neuron 106, 662–674.e5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Langston RF, Ainge JA, Couey JJ, Canto CB, Bjerknes TL, Witter MP, Moser EI, and Moser M-B (2010). Development of the Spatial Representation System in the Rat. Science 328, 1576–1580. [DOI] [PubMed] [Google Scholar]
- Laurens J, Kim B, Dickman JD, and Angelaki DE (2016). Gravity orientation tuning in macaque anterior thalamus. Nature Neuroscience 19, 1566–1568. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Laurens J, Abrego A, Cham H, Popeney B, Yu Y, Rotem N, Aarse J, Asprodini EK, Dickman JD, and Angelaki DE (2019). Multiplexed code of navigation variables in anterior limbic areas. BioRxiv 684464. 10.1101/684464 [DOI] [Google Scholar]
- Ledergerber D, Battistin C, Blackstad JS, Gardner RJ, Witter MP, Moser M-B, Roudi Y, and Moser EI (2021). Task-dependent mixed selectivity in the subiculum. Cell Reports 35, 109175. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ludvig N, Tang HM, Gohil BC, and Botero JM (2004). Detecting location-specific neuronal firing rate increases in the hippocampus of freely-moving monkeys. Brain Research 1014, 97–109. [DOI] [PubMed] [Google Scholar]
- Matsumura N, Nishijo H, Tamura R, Eifuku S, Endo S, and Ono T (1999). Spatial- and Task-Dependent Neuronal Responses during Real and Virtual Translocation in the Monkey Hippocampal Formation. J. Neurosci 19, 2381–2393. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McFarland WL, Teitelbaum H, and Hedges EK (1975). Relationship between hippocampal theta activity and running speed in the rat. J Comp Physiol Psychol 88, 324–328. [DOI] [PubMed] [Google Scholar]
- McNaughton BL, Barnes CA, and O’Keefe J (1983). The contributions of position, direction, and velocity to single unit activity in the hippocampus of freely-moving rats. Exp Brain Res 52, 41–49. [DOI] [PubMed] [Google Scholar]
- Meister MLR, and Buffalo EA (2018). Neurons in Primate Entorhinal Cortex Represent Gaze Position in Multiple Spatial Reference Frames. J. Neurosci 38, 2430–2441. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Meng H, Green AM, Dickman JD, and Angelaki DE (2005). Pursuit—Vestibular Interactions in Brain Stem Neurons During Rotation and Translation. Journal of Neurophysiology 93, 3418–3433. [DOI] [PubMed] [Google Scholar]
- Miller JF, Neufang M, Solway A, Brandt A, Trippel M, Mader I, Hefft S, Merkow M, Polyn SM, Jacobs J, et al. (2013). Neural Activity in Human Hippocampal Formation Reveals the Spatial Context of Retrieved Memories. Science 342, 1111–1114. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mizuseki K, Sirota A, Pastalkova E, and Buzsáki G (2009). Theta Oscillations Provide Temporal Windows for Local Circuit Computation in the Entorhinal-Hippocampal Loop. Neuron 64, 267–280. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nadasdy Z, Nguyen TP, Török Á, Shen JY, Briggs DE, Modur PN, and Buchanan RJ (2017). Context-dependent spatially periodic activity in the human entorhinal cortex. PNAS 114, E3516–E3525. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nau M, Julian JB, and Doeller CF (2018). How the Brain’s Navigation System Shapes Our Visual Experience. Trends in Cognitive Sciences 22, 810–825. [DOI] [PubMed] [Google Scholar]
- Nowicka A, and Ringo JL (2000). Eye position-sensitive units in hippocampal formation and in inferotemporal cortex of the Macaque monkey. European Journal of Neuroscience 12, 751–759. [DOI] [PubMed] [Google Scholar]
- O’Keefe J, and Dostrovsky J (1971). The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research 34, 171–175. [DOI] [PubMed] [Google Scholar]
- O’Keefe J, and Recce ML (1993). Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 3, 317–330. [DOI] [PubMed] [Google Scholar]
- O’Mara SM, Rolls ET, Berthoz A, and Kesner RP (1994). Neurons responding to whole-body motion in the primate hippocampus. J. Neurosci 14, 6511–6523. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Paxinos G, Petrides M, Huang X-F, and Toga AW (2008). The Rhesus Monkey Brain in Stereotaxic Coordinates (Elsevier Science). [Google Scholar]
- Premereur E, Decramer T, Coudyzer W, Theys T, and Janssen P (2020). Localization of movable electrodes in a multi-electrode microdrive in nonhuman primates. Journal of Neuroscience Methods 330, 108505. [DOI] [PubMed] [Google Scholar]
- Qasim SE, Fried I, and Jacobs J (2021). Phase precession in the human hippocampus and entorhinal cortex. Cell 184, 3242–3255.e10. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Robertson RG, Rolls ET, Georges-François P, and Panzeri S (1999). Head direction cells in the primate pre-subiculum. Hippocampus 9, 206–219. [DOI] [PubMed] [Google Scholar]
- Rolls ET (2021). Neurons including hippocampal spatial view cells, and navigation in primates including humans. Hippocampus. [DOI] [PubMed] [Google Scholar]
- Rolls ET, Robertson RG, and Georges-François P (1997). Spatial View Cells in the Primate Hippocampus. European Journal of Neuroscience 9, 1789–1794. [DOI] [PubMed] [Google Scholar]
- Rossant C, Kadir SN, Goodman DFM, Schulman J, Hunter MLD, Saleem AB, Grosmark A, Belluscio M, Denfield GH, Ecker AS, et al. (2016). Spike sorting for large, dense electrode arrays. Nature Neuroscience 19, 634–641. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ryan JD, and Shen K (2020). The eyes are a window into memory. Current Opinion in Behavioral Sciences 32, 1–6. [Google Scholar]
- Saleem KS, and Logothetis NK (2006). A Combined MRI and Histology Atlas of the Rhesus Monkey Brain in Stereotaxic Coordinates (Elsevier). [Google Scholar]
- Schmitzer-Torbert N, Jackson J, Henze D, Harris K, and Redish AD (2005). Quantitative measures of cluster quality for use in extracellular recordings. Neuroscience 131, 1–11. [DOI] [PubMed] [Google Scholar]
- Skaggs WE, McNaughton BL, Gothard KM, and Markus EJ (1993). An Information-Theoretic Approach to Deciphering the Hippocampal Code. In Advances in Neural Information Processing Systems 5 (Morgan Kaufmann), pp. 1030–1037. [Google Scholar]
- Skaggs WE, McNaughton BL, Wilson MA, and Barnes CA (1996). Theta phase precession in hippocampal neuronal populations and the compression of temporal sequences. Hippocampus 6, 149–172. [DOI] [PubMed] [Google Scholar]
- Skaggs WE, McNaughton BL, Permenter M, Archibeque M, Vogt J, Amaral DG, and Barnes CA (2007). EEG Sharp Waves and Sparse Ensemble Unit Activity in the Macaque Hippocampus. Journal of Neurophysiology 98, 898–910. [DOI] [PubMed] [Google Scholar]
- Sławińska U, and Kasicki S (1998). The frequency of rat’s hippocampal theta rhythm is related to the speed of locomotion. Brain Research 796, 327–331. [DOI] [PubMed] [Google Scholar]
- Sobotka S, and Ringo JL (1997). Saccadic eye movements, even in darkness, generate event-related potentials recorded in medial septum and medial temporal cortex. Brain Res 756, 168–173. [DOI] [PubMed] [Google Scholar]
- Stangl M, Topalovic U, Inman CS, Hiller S, Villaroman D, Aghajan ZM, Christov-Moore L, Hasulak NR, Rao VR, Halpern CH, et al. (2021). Boundary-anchored neural mechanisms of location-encoding for self and others. Nature 589, 420–425. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Strange BA, Witter MP, Lein ES, and Moser EI (2014). Functional organization of the hippocampal longitudinal axis. Nature Reviews Neuroscience 15, 655–669. [DOI] [PubMed] [Google Scholar]
- Talakoub O, Sayegh P, Womelsdorf T, Zinke W, Fries P, Lewis CM, and Hoffman KL (2019). Hippocampal and neocortical oscillations are tuned to behavioral state in freely-behaving macaques. BioRxiv 552877. 10.1101/552877 [DOI] [Google Scholar]
- Taube JS, Muller RU, and Ranck JB (1990). Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. J. Neurosci 10, 420–435. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Torrence C, and Compo GP (1998). A Practical Guide to Wavelet Analysis. Bulletin of the American Meteorological Society 79, 61–78. [Google Scholar]
- Tsitsiklis M, Miller J, Qasim SE, Inman CS, Gross RE, Willie JT, Smith EH, Sheth SA, Schevon CA, Sperling MR, et al. (2020). Single-Neuron Representations of Spatial Targets in Humans. Current Biology 30, 245–253.e4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Vanderwolf CH (1969). Hippocampal electrical activity and voluntary movement in the rat. Electroencephalography and Clinical Neurophysiology 26, 407–418. [DOI] [PubMed] [Google Scholar]
- Watrous AJ, Fried I, and Ekstrom AD (2011). Behavioral correlates of human hippocampal delta and theta oscillations during navigation. Journal of Neurophysiology 105, 1747–1755. [DOI] [PubMed] [Google Scholar]
- Watrous AJ, Lee DJ, Izadi A, Gurkoff GG, Shahlaie K, and Ekstrom AD (2013). A comparative study of human and rat hippocampal low-frequency oscillations during spatial navigation. Hippocampus 23, 656–661. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wirth S, Baraduc P, Planté A, Pinède S, and Duhamel J-R (2017). Gaze-informed, task-situated representation of space in primate hippocampus during virtual navigation. PLOS Biology 15, e2001045. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wood ER, Dudchenko PA, and Eichenbaum H (1999). The global record of memory in hippocampal neuronal activity. Nature 397, 613–616. [DOI] [PubMed] [Google Scholar]
- Wood ER, Dudchenko PA, Robitsek RJ, and Eichenbaum H (2000). Hippocampal Neurons Encode Information about Different Types of Memory Episodes Occurring in the Same Location. Neuron 27, 623–633. [DOI] [PubMed] [Google Scholar]
- Yang SC-H, Lengyel M, and Wolpert DM (2016). Active sensing in the categorization of visual patterns. ELife 5, e12215. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yartsev MM, and Ulanovsky N (2013). Representation of Three-Dimensional Space in the Hippocampus of Flying Bats. Science 340, 367–372. [DOI] [PubMed] [Google Scholar]
- Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, and Gerig G (2006). User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. Neuroimage 31, 1116–1128. [DOI] [PubMed] [Google Scholar]
Data Availability Statement
All data and code are available from the Lead Contact upon request. Any additional information required to reanalyze the data reported in this work is available from the Lead Contact upon request.