Author manuscript; available in PMC: 2025 Aug 7.
Published in final edited form as: Cell Rep. 2023 Oct 31;42(11):113238. doi: 10.1016/j.celrep.2023.113238

A neural code for time and space in the human brain

Daniel R Schonhaut 1, Zahra M Aghajan 2, Michael J Kahana 3,6, Itzhak Fried 2,4,5,6,7,*
PMCID: PMC12329751  NIHMSID: NIHMS2030449  PMID: 37906595

SUMMARY

Time and space are primary dimensions of human experience. Separate lines of investigation have identified neural correlates of time and space, yet little is known about how these representations converge during self-guided experience. Here, 10 subjects with intracranially implanted microelectrodes play a timed, virtual navigation game featuring object search and retrieval tasks separated by fixed delays. Time cells and place cells activate in parallel during timed navigation intervals, whereas a separate time cell sequence spans inter-task delays. The prevalence, firing rates, and behavioral coding strengths of time cells and place cells are indistinguishable, yet time cells selectively remap between search and retrieval tasks, while place cell responses remain stable. Thus, the brain can represent time and space as overlapping but dissociable dimensions. Time cells and place cells may constitute a biological basis for the cognitive map of spatiotemporal context onto which memories are written.

In brief

Schonhaut et al. record direct neural firing while subjects play a timed, virtual navigation game with object search and retrieval tasks separated by fixed delays. The authors find that neural codes for time and space are simultaneously active, context-specific, and dissociable, providing a putative mechanism for representing spatiotemporal context.

Graphical Abstract


INTRODUCTION

Time and space help to organize our experiences, allowing us to reconstruct the past and envision the future. Lesions to the medial temporal lobe (MTL) and prefrontal cortex (PFC) disrupt associations between events and their temporal1,2 and spatial3–5 contexts. Parallel lines of research have uncovered neurons in these regions that fire at specific locations within a given environment (“place cells”6) or specific moments within a stable interval (“time cells”7–10), providing a candidate biological basis for a cognitive map of time and space. Our understanding of these neurons, however, stems largely from studying them in isolation: place cells are usually recorded during exploratory or goal-directed navigation absent time constraints, while time cells are recorded at fixed locations under timed conditions.9 Thus, despite the long-standing assumption that the brain encodes experiences within their spatiotemporal contexts,11–13 we lack an understanding of how neuronal representations of time and space converge during experience.

Recordings of neuronal responses in the human brain have now established the existence of place-responsive cells that appear analogous in many ways to place cells in the rodent hippocampus.14–17 Recent studies also suggest the existence of neurons that appear selective to time in both verbal list learning18 and image sequence learning19 tasks. Yet it remains unclear if neurons in humans encode time during task-free conditions analogous to those studied in animals, or if time cells only appear in tasks requiring explicit attention to time or sequentially presented stimuli.

To address these questions, we recruited 10 neurosurgical patients with intracranially implanted depth electrodes (Figure 1) to play a time-constrained, spatial navigation computer game called Goldmine, in which they earned points by collecting gold in a visually sparse, underground mine (Figure 2 and Video S1). Each trial consisted of four timed events: first, subjects waited passively for 10 s at a fixed location, the mine base (Delay1). Next, they had 30 s to search for gold that appeared throughout the mine in randomized locations on every trial (Gold Search). Subjects then waited an additional 10 s in the mine base under identical conditions to the first delay (Delay2). Finally, they had 30 s to return to remembered gold locations and dig for gold, now invisible (Gold Dig). After each Gold Search and Gold Dig interval, subjects were instructed to navigate back to the mine base if they were not already there at the end of 30 s (untimed Return-to-base period, median duration = 1.6 s across subjects). This sequence repeated for 36 trials per session. All subjects performed capably, collecting 54% (range 35%–73%) of gold that was found during Gold Search while maintaining 47% (range 19%–71%) digging accuracy (STAR Methods).

Figure 1. Recording locations.


Approximate locations of 457 neurons across 10 participants are overlaid on a glass brain in Montreal Neurological Institute (MNI) space. Round markers indicate the positions of microwire bundles, each consisting of eight recording electrodes, from which one or more units was recorded. Marker size and color are proportional to the number of units recorded at a given location.

Figure 2. Goldmine task.


(A) Trial structure and timing.

(B) Top-down view of the mine layout. The yellow line shows an example route by a subject (red circle).

(C) Gameplay screenshots during Delay (top left), Gold Search (top right, bottom left), and Gold Dig (bottom right) events. See also Figures S1 and S2 and Video S1.

Using microwires that extended from implanted electrode tips, we recorded extracellular action potentials from a combination of 457 single and multiple units (13–73 units per session, hereafter referred to as neurons) that were primarily located in the MTL and medial PFC (mPFC) (Figure 1; Table 1). Here we investigated associations between these neurons’ firing rates and the spatiotemporal structure of the task.

Table 1.

Neurons by subject and region

Subject HPC AMY EC PHG/FSG HSGa mOCCa mPFC Other Sum

1b 19 6 5 23 0 13 0 0 66
2 5 0 0 19 0 0 0 0 24
3 11 5 1 0 0 0 1 7 25
4 9 6 10 0 0 0 8 0 33
5b 11 19 42 0 21 0 7 2 102
6 0 25 24 15 0 0 1 0 65
7 9 13 15 9 0 0 27 0 73
8 3 9 0 0 0 0 1 0 13
9 4 6 3 0 0 0 0 0 13
10 14 3 0 8 0 0 15 3 43
Sum 85 92 100 74 21 13 60 12 457

Number of neurons (single and multiple units) recorded from each subject in each region. HPC, hippocampus; AMY, amygdala; EC, entorhinal cortex; PHG/FSG, parahippocampal gyrus/medial bank of the fusiform gyrus; HSG, Heschl’s gyrus; mOCC, medial occipital cortex; mPFC, medial prefrontal cortex (including medial orbitofrontal, anterior cingulate, and pre-supplementary motor area). Subjects are listed in the order tested.

a Sample from one patient.

b Subjects with two sessions of data.

RESULTS

Time during task-free delays

We first analyzed neural activity during delay events (Figure 2A), which replicated conditions in which time cells are found in the rodent hippocampus.9 To prevent time from being confounded with other behavioral variables, we teleported subjects to the exact same location at the start of every delay, where they viewed a static image of a door to the mine that was identical across trials (Figure 2C). As in animal time cell studies, subjects were neither instructed nor incentivized to explicitly attend to time. The delays therefore provided a strict test of the hypothesis that neurons encode time within clearly defined, repeating intervals with fixed external context.

Figure 3A shows a selection of neurons that fired in a time-dependent manner consistently across trials, illustrating the range of typical responses. Among the 457 recorded neurons, 99 neurons (22%) exhibited a significant main effect of time (10 discrete, 1-s bins), independent of event (Delay1 or Delay2) and its interactions with time (permutation test against circularly shifted spikes; see STAR Methods; Figures 3A and 3C). These “delay time cells” were present at rates well above chance in the hippocampus and other recorded regions, with no difference in the proportion of significantly responding neurons between the hippocampus, surrounding MTL (combining amygdala, entorhinal cortex, and parahippocampal/fusiform gyrus), and mPFC (p > 0.05, permutation test controlling for between-subject differences in the number of neurons recorded per region; Table 2). In contrast, only 26 neurons (6%) exhibited a significant time × event interaction, not exceeding chance (Figure 3C). A majority of time-coding neurons during delays therefore did not distinguish between Delay1 and Delay2.

Figure 3. Time cells during task-free delays.


(A) Subpanels show trial-wise spike rasters and firing rates (mean ± SEM; solid red line: 500 ms moving average; dashed blue line: grand average) for six time cells in the hippocampus (top row), medial occipital cortex, parahippocampal gyrus, and amygdala (bottom row, L to R). The left subpanel for each neuron shows Delay1 activity, and the right subpanel shows Delay2 activity.

(B) Event-specific cells in orbitofrontal cortex (top) and parahippocampal gyrus.

(C) Percent of neurons with significant responses to each main effect and interaction (red line: type 1 error rate). ***p < 0.0001, binomial test with Bonferroni-Holm correction.

(D) Z-scored firing rates for all main effect time cells (each row = one neuron), sorted by time of maximum Z-scored firing.

(E) Mean ± SEM prediction errors, across trials, for classifiers that were trained and tested on the same delay event (e.g., both Delay1) to decode time from firing rates of: all neurons (solid blue line), the 125 neurons that responded to time as a main effect or interaction with event (dashed green line), 332 neurons that did not respond significantly to time (dash-dot red line), and chance-level results from null model classifiers (dotted gray line).

(F) Same as (E) but for classifiers that were trained and tested on different delay events (e.g., Delay1 → Delay2).

(G and H) Confusion matrices for same-delay (G) and cross-delay (H) time cell classifiers. Matrix rows sum to 1, with each value indicating the mean probability across held-out test trials.

Table 2.

Neuron responses by region

Trial events Behav. Variable HPC AMY EC PHG/FSG HSGa mOCCa mPFC Other Total

Delay1, Delay2 time 10 (12%) 22 (24%) 21 (21%) 22 (30%) 4 (19%) 5 (38%) 13 (22%) 2 (17%) 99 (22%)
Delay1, Delay2 event 11 (13%) 15 (16%) 25 (25%) 9 (12%) 0 (0%) 2 (15%) 6 (10%) 2 (17%) 70 (15%)
Delay1, Delay2 time × event 4 (5%) 7 (8%) 4 (4%) 1 (1%) 3 (14%) 2 (15%) 5 (8%) 0 (0%) 26 (6%)
Search, Dig time 2 (2%) 4 (4%) 4 (4%) 8 (11%) 0 (0%) 2 (15%) 2 (3%) 1 (8%) 23 (5%)
Search, Dig place 7 (8%) 6 (7%) 15 (15%) 16 (22%) 1 (5%) 3 (23%) 11 (18%) 1 (8%) 60 (13%)
Search, Dig event 8 (9%) 13 (14%) 12 (12%) 12 (16%) 1 (5%) 2 (15%) 10 (17%) 1 (8%) 59 (13%)
Search, Dig time × event 9 (11%) 10 (11%) 22 (22%) 8 (11%) 0 (0%) 2 (15%) 12 (20%) 1 (8%) 64 (14%)
Search, Dig place × event 4 (5%) 3 (3%) 6 (6%) 4 (5%) 1 (5%) 0 (0%) 6 (10%) 1 (8%) 25 (5%)
Search, Dig place × time 5 (6%) 8 (9%) 8 (8%) 8 (11%) 6 (29%) 4 (31%) 4 (7%) 0 (0%) 43 (9%)

Number and percentage of neurons in each region, across subjects, with significant responses to each behavioral variable.

a Sample from one patient.

Next we examined the distribution of mean firing rates over time for all main-effect time cells, sorted by their maximal firing time (Figure 3D). Individual neurons had highly variable firing rate peaks, and the number of neurons that peaked in each third of the delay was significantly above chance (p < 0.05, binomial tests with Bonferroni-Holm correction). Thus, time cells were not restricted to one portion of the delay but instead spanned the entire 10-s duration. However, similarly to time cells in animals,7,8,20 more time cells peaked near delay onset than at later times (Figure 3D), and time cells with earlier peaks also showed larger magnitude responses above their baseline activity (r = −0.47, p < 0.0001; peak firing time versus maximum Z-scored firing rate).
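For illustration, the following Python sketch performs this peak-time analysis on simulated data: it sorts Z-scored firing-rate curves by peak time and computes the peak-time versus peak-magnitude correlation reported above. The array `zrates` and all variable names are placeholders, not part of the released analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
zrates = rng.normal(size=(99, 10))  # placeholder: 99 delay time cells x 10 one-second bins

peak_bins = zrates.argmax(axis=1)   # time bin of maximal Z-scored firing for each neuron
peak_vals = zrates.max(axis=1)      # magnitude of that peak

# Figure 3D-style ordering: rows sorted by time of maximum Z-scored firing
sorted_rates = zrates[np.argsort(peak_bins)]

# Count how many neurons peak in each third of the 10-s delay
thirds = np.digitize(peak_bins, bins=[10 / 3, 20 / 3])
counts = np.bincount(thirds, minlength=3)

# Correlation between peak time and peak magnitude (reported above as r = -0.47)
r, p = stats.pearsonr(peak_bins, peak_vals)
print(counts, round(r, 2), round(p, 4))
```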

While time cells did not distinguish between Delay1 and Delay2, a distinct group of neurons (n = 70, 15%) responded to event as a main effect, independent of time (Figures 3B and 3C). Some of these “event-specific cells” had dramatically different firing rates between the two delays, as shown for an orbitofrontal cortex neuron that almost never fired during the 36 Delay1 trials yet was active in sustained bursts throughout Delay2 (Figure 3B, top). Roughly as many event-specific cells fired more during Delay1 than Delay2 (n = 39, 56%) as showed the opposite preference, and like time cells, these neurons appeared throughout the regions we recorded but did not differ significantly between regions (Table 2).

Given the prevalence of time cells at the unit recording level, we next asked if time could be decoded from neural activity patterns at the population level. Indeed, support vector machines trained on firing rates of all recorded neurons decoded time within 1.9 ± 0.1 s on held-out test trials, significantly outperforming the 3.2 ± 0.1 s error expected by chance (p < 0.0001, paired t test versus null model classifiers; see STAR Methods). Mirroring the clustering of time cell peaks near delay onset, classifier error was lowest at delay onset and increased steadily over time, while still remaining better than chance in every time bin (Figure 3E).

In spatial navigation studies, place decoding is informed both by place cells and “non-place” cells that lack individually interpretable responses, indicating that spatial location is represented by a distributed neural code.21 We compared classifiers that were trained to decode time from all neuron firing rates against classifiers trained only on time cells (n = 125, including time as a main effect or interaction) or only on non-time cells (n = 332), respectively. Whereas time-cell-only classifiers performed significantly better than all-neuron classifiers (p = 0.0070, paired t test; Figures 3E and 3G), non-time cell classifier predictions were no better than chance (p > 0.05, paired t test). Thus, the population code for time during delays depended on the activity of bona fide time cells, and without these neurons, there was no delay time code.

Finally, we considered whether time could be decoded from cross-classifiers that were trained and tested on different delays (train only on Delay1 intervals → test only on Delay2 intervals or vice versa), as suggested by the prevalence of main-effect time cells over time × event interaction cells. Successful decoding would depend on the existence of a stable population code for time that was similarly expressed between Delay1 and Delay2. We found that cross-delay classifiers (Figures 3F and 3H) performed comparably to classifiers that were trained and tested on the same delay (Figures 3E and 3G). In summary, population neural activity was sufficient to decode time, and the two delays shared an overlapping neural time code.

Time and place during navigation

Having observed time-coding neurons during delays, we next asked if similar time coding appeared during virtual navigation, when subjects alternated between searching and digging for gold in the mine (Figure 2A). To identify neural responses to time independent of place, we regressed each neuron’s firing rate against elapsed time (10 discrete, 3-s bins), place (12 regions; Figures S1 and S2), event (Gold Search or Gold Dig), and their first-order interactions. We then identified neurons for which removing the main effect of time or the interaction between time and another variable caused a significant decline in model performance, relative to a null distribution of circularly shifted firing rates (STAR Methods). Additionally, to ensure that time and place were sufficiently behaviorally decorrelated, we allowed subjects to exit the base through only one of three doors on a given trial (left, right, or center; counterbalanced across trials), requiring them to vary their routes through the mine. This manipulation eliminated all but weak correlations (r < 0.2) between temporal and spatial bins, with the exception that subjects always began navigation at the mine base (Figure S3). Models with additional covariates for head direction, movement, visible objects (gold) and landmarks (mine base), and dig times yielded qualitatively similar results (Figure S4; STAR Methods).

Holding place constant, we observed many neurons that fired in a time-dependent manner during navigation (Figure 4A). Most of these time-coding neurons were context-specific, with a significant number of neurons (n = 64, 14%) representing interactions between time and event, while the number of neurons with a significant main effect of time (n = 23, 5%) was at chance level (Figure 4C). In this respect, time cells during navigation differed markedly from delay time cells that fired analogously between Delay1 and Delay2. In addition, classifiers trained on neural firing during delays failed to predict time above chance during navigation (Figure S5), and population neural activity was negatively correlated between delay and navigation events, such that neurons that were more active during delays were usually less active during navigation and vice versa (Figure S6). Insofar as neurons encoded time during navigation, they therefore did not adhere to the delay time code.

Figure 4. Time and place cells during navigation.


(A) Spike raster and firing rate plots (mean ± SEM; solid red line: 500 ms moving average; dashed blue line: grand average) are shown for four time × event neurons in the entorhinal cortex (top left, bottom right), amygdala (top right), and orbitofrontal cortex.

(B) Four place cells in the fusiform gyrus (top left), entorhinal cortex (top right, bottom left), and anterior cingulate. Paths traveled (white lines) and spikes (red circles) are overlaid on firing rate heatmaps (color bar) in each mine region. The left subpanel for each neuron in (A) and (B) shows activity during Gold Search events, and the right subpanel shows activity during Gold Dig events.

(C) Percent of neurons that responded significantly to each main effect and interaction (red line: type 1 error rate). ***p < 0.0001, binomial test with Bonferroni-Holm correction.

(D) Correlated firing rates between Gold Search and Gold Dig events, computed: (left bar) across time bins, for neurons with a main effect of time or a time × event interaction (each point = one neuron); (right bar) across mine regions, for neurons with a main effect of place or a place × event interaction. Bars show the mean across neurons, and error bars show the standard error. ***p < 0.0001, Welch’s t test.

(E) Firing rates in each mine region for two place × time interaction cells in the parahippocampal gyrus (top) and entorhinal cortex, averaged across all Gold Search and Gold Dig events during the first, middle, and last 10 s of each event (left to right subpanels). See also Figures S3–S9.

Most time × event neurons fired in a time-modulated manner during one navigation event but had a flat or unrelated firing rate during the other, similar to the place cell phenomenon of “global remapping.”22 For example, the entorhinal cortex neuron shown in the bottom-right subpanel of Figure 4A fired at a uniform rate throughout Gold Search events but increased its firing rate more than 3-fold from the beginning to end of Gold Dig events. As during delays, firing rate peaks for time × event cells spanned entire event durations and were overrepresented near navigation onset (Figure S7). Time × event cell proportions did not differ significantly between regions, nor did we find regional differences between other behavioral variables during navigation (Table 2).

Classifiers trained on population neural activity during navigation echoed the unit-level results, decoding time within 3.8 ± 0.2 s on held-out test trials (chance: 10.1 ± 0.2 s), with increasing error at later times from event onset (Figures S8A and S8C). However, contrasting the delay results, time cell cross-classifiers that were trained and tested on different navigation events (e.g., trained to decode time on Gold Search trials and tested for time decoding ability on Gold Dig trials) failed to generalize, performing no better than chance (p > 0.05, paired t test; Figures S8B and S8D). Thus, while the two delays were represented by an overlapping time cell code, Gold Search and Gold Dig events used orthogonal codes.

As in untimed navigation studies,14–17,23 a significant number of neurons (n = 60, 13%) encoded place as a main effect during timed navigation, and these “place cells” exhibited a wide range of receptive fields in different regions of the mine (Figures 4B and 4C). In contrast, the number of neurons with a significant place × event interaction (n = 25, 6%) did not exceed chance. This result revealed a dissociation between time cell and place cell responses to changes in task context, with place cells remaining stable between Gold Search and Gold Dig events while time cells remapped. We confirmed this conclusion by comparing correlations between Gold Search and Gold Dig event firing rates: (1) across time bins, for each of the 85 neurons with a main effect of time or a time × event interaction (r = 0.0 ± 0.05 SEM across neurons); and (2) across spatial regions, for each of the 82 neurons with a main effect of place or a place × event interaction (r = 0.31 ± 0.04; Figure 4D). Despite this remapping dissociation, time cells and place cells did not differ significantly in prevalence (time cells: 19% of all neurons; place cells: 18%; p > 0.05, chi-squared test of equal proportions), firing rate during navigation (time cells: 3.5 ± 0.6 Hz SEM; place cells: 4.2 ± 0.7 Hz; p > 0.05, Welch’s t test), or Z-scored strength of time and place coding, respectively, relative to null distributions of circularly shifted firing rates (time cells: Z = 3.8 ± 0.4 SEM; place cells: Z = 3.6 ± 0.3; p > 0.05, Welch’s t test). These variables therefore could not explain the time cell-specific remapping effect.
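A minimal sketch of this stability comparison, using simulated tuning curves; the array shapes, names, and data are illustrative assumptions rather than the released code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder data: per-neuron mean firing rates during Gold Search and Gold Dig,
# binned by time (10 bins) for time-responsive neurons and by mine region
# (12 regions) for place-responsive neurons. Shapes: (n_neurons, n_bins).
search_time, dig_time = rng.poisson(3, (2, 85, 10)).astype(float)
search_place, dig_place = rng.poisson(3, (2, 82, 12)).astype(float)

def cross_event_corrs(a, b):
    """Pearson r between event A and event B tuning curves, one value per neuron."""
    return np.array([stats.pearsonr(x, y)[0] for x, y in zip(a, b)])

time_corrs = cross_event_corrs(search_time, dig_time)     # expected to center near 0 (remapping)
place_corrs = cross_event_corrs(search_place, dig_place)  # expected to be positive (stability)

# Welch's t-test comparing stability of place versus time tuning across events
t, p = stats.ttest_ind(place_corrs, time_corrs, equal_var=False)
print(time_corrs.mean(), place_corrs.mean(), t, p)
```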

Prior animal and human studies have found that place cells are often influenced by additional variables including head direction, goal location, and visual cues.14,15,24,25 We asked if time similarly modulates place representations by identifying neurons whose activity reflected interactions between place and time, controlling for their main effects. We identified 43 such place × time cells (9%) whose firing rates at a given location depended on the time it was visited relative to navigation onset (Figure 4E). These neurons were significantly more prevalent than chance, including in models that further controlled for head direction and visual landmarks (Figure S4). Thus, while time and place were generally represented by different neuronal populations, a small number of neurons conveyed information about joint spatiotemporal context, reflecting a higher level of feature abstraction.

Alongside neurons that encoded time and place, we identified a significant number of neurons (n = 59, 13%) that represented event information as a main effect. These neurons were approximately evenly divided between cells that fired preferentially during Gold Search and cells that were active more during Gold Dig. These navigation-event cells overlapped minimally with the delay-event cells described previously (Figure 3B), such that all four trial events were represented by distinct neural populations.

Having analyzed neural activity during delay and navigation periods separately, we considered whether some neurons represented different features in different phases of the trial, as has been observed in rodents.9 Indeed, we found that some neurons that acted as time cells or event cells during delays encoded a different variable, such as place, during navigation. However, these examples of cross-event coding were uncommon and did not differ significantly from the number expected given chance overlap, with one exception: 25 of 70 (36%) delay-event neurons (which fired preferentially during Delay1 or Delay2, respectively) also showed significant time × event interactions during navigation. This number was substantially higher than expected if these delay and navigation responses had occurred independently (p < 0.0001, chi-square test of independent proportions). These neurons tended to change their firing rates gradually over the course of each Goldmine trial, and their activity may be better characterized as being tuned at the trial level rather than at the level of delay and navigation events within the trial (Figure S9).

Representing time over long durations

During both delay and navigation events, the neural time code gradually erodes (Figures 3E, 3G, S8A, and S8C), as reported previously in animals.26–28 Given this loss of temporal information, how do we retain a sense of time over long durations? Behavioral studies of temporal memory suggest that some events might act as “landmarks” in time by realigning the internal clock with the external passage of time.29 Landmarks play a parallel role in spatial navigation, where they can correct cumulative path integration error and offer an alternative to direction-based navigation.30,31

The Goldmine task contained two levels of temporal structure: time within each delay and navigation event, and time across the four events in a trial (Figure 2A). If neurons used the boundaries between these events as temporal landmarks, we reasoned that (1) it should be possible to decode time across the whole trial at once, and (2) decoding accuracy should decrease steadily within each trial event but increase following the transition from one event to the next.

To evaluate these hypotheses, we trained classifiers to decode time (40, 2-s bins) from all neuron firing rates across the 80-s trial and then tested them on held-out trials (Figure 5). As classifiers merely selected the most probable time bin without knowing anything about the trial event structure, above-chance performance can only be explained by distinctive neural patterns in each time bin relative to others across the four timed trial events. Consistent with our first hypothesis, classifier accuracy on held-out trials was well above chance (observed: 25% ± 3% SEM across trials; chance: 2.5%), and classifier-predicted time was closely aligned with actual time throughout the trial duration (Figure 5B). Consistent with the second hypothesis, classifier accuracy increased sharply at the beginning of each delay and navigation event, then decreased at a predictable rate over time (Figure 5A). Population neural activity was therefore sufficient to decode time across sequential trial events.

Figure 5. Decoding time across the trial.


(A) Prediction accuracy by time (mean ± SEM across held-out test trials) is shown for classifiers trained to decode 2-s time bins from all neuron firing rates, using actual (solid red line) or circularly shifted (gray dotted line) time bins.

(B) Confusion matrix for classifiers trained on actual (non-shifted) time bins. Matrix rows sum to 1, with each value indicating the mean probability across held-out test trials.

DISCUSSION

Our study reveals neurons in the MTL and mPFC that encode time and space while subjects explore a virtual environment for fixed durations. Time cells activated at rest in the absence of movement or other external contextual change, while distinct time cells and place cells emerged during navigation and exhibited divergent responses to changing tasks. These results demonstrate a neuron-level code for spatiotemporal context in the human brain, in which time and space are simultaneously represented but not wholly conjoined.

Fixing the duration of navigation events allowed us to investigate concomitant temporal and spatial codes, and we find the brain maintains largely independent time and place representations within a given context. Our data moreover reveal a dissociation between place cell and time cell responses, with place cells firing similarly between gold searching and digging tasks, while time cells completely remapped. This result could reflect differences in how subjects perceived time and place in Goldmine, akin to differences in how people judge the time and place of events in daily life. That is, place cells were stable because subjects needed to return to the same locations during gold searching and digging, while time cells remapped to track the temporal progression of these events (first search, then dig) within each trial. This interpretation implies that different experimental conditions could elicit a reversed remapping effect in which place cell activity varies across contexts for which time cells are stable, with implications for how events within these contexts are later organized in memory.32

Two recent studies in humans described neurons with temporally correlated activity during verbal list learning18 and image sequence learning19 tasks. These studies provided initial evidence for neuronal time coding under conditions in which subjects had to attend to sequential information. Here we extend these findings to show that time codes are present even in the absence of task engagement, serial item presentation, or changing external stimuli. This finding suggests that neurons map time by default, providing a stable scaffold onto which events are bound to their times of occurrence across diverse contexts. Our task additionally enabled direct comparison between human and animal neuronal responses to time, revealing broadly conserved qualities across species. Specifically, we find that human time cells (1) span entire event durations7,8,20; (2) accumulate error in the absence of external cues26–28; (3) remap between events for which context discrimination (here gold searching versus digging) is behaviorally adaptive7,8,20,27; (4) are encoded independently of place10,33–35; and (5) reside in the MTL7,28,36–41 and mPFC.42,43 Consistent with the original time cell study in rodents,7 we also found an inverse correlation between population firing rates during delay and navigation intervals, denoting sharp contextual boundaries between these states. Lastly, we found some evidence for neural tuning at the trial level, consistent with prior literature in rodents.44

Episodic memory is distinguished from other forms of memory by the recall of events together with the unique, spatiotemporal contexts in which they occurred: the “what,” “when,” and “where” of experience.11,12 Neural representations of these features are thought to converge in the MTL, where neurons fire selectively to image categories and multimodal percepts (“what”),45–47 and to specific locations and orientations in an environment (“where”).14–17,23 Here we confirm a neuron-level basis for encoding “when” an event occurs, separably from “where.” The convergence of these time and place codes may provide a contextual framework for organizing the contents of our experiences into separable but associated memories, providing a biological mechanism for Tulving’s defining view of memory 50 years ago.11

Limitations of the study

Whereas time coding was robust to potentially confounding factors such as task, place, head direction, and visual cues, we cannot dismiss the possibility that latent factors (e.g., attention, planning, or anticipation) could have influenced neuronal responses. This concern is also applicable to animal time cell studies that informed our experimental design, and it may be resolved by comparing time cell properties across different timing paradigms. Secondly, while we attempted to decorrelate time from place during navigation by directing subjects along different starting routes, mild correlations remained in the behavioral data (Figure S3) that complicate interpretation of some results. In particular, stronger time-by-place correlations at the start of the task interval may have reduced our ability to identify early-responding time cells after regressing out place effects, while population neural decoders may have benefitted from leveraging both temporal and spatial information to inform time prediction early on. Finally, it is possible that task learning during Gold Search influenced neural activity during subsequent delays, and as such, exploring the presence or extent of neuronal reactivation during delays warrants future investigation.

Another limitation concerns the regional specificity of our results. Although we observed significant numbers of time cells and place cells in multiple regions within the MTL and mPFC, we were unable to resolve differences in the proportions of these neurons or their response properties between regions. We expect that regional differences between time cell and place cell codes likely do exist, but that we were underpowered to detect these effects in a limited sample size with high inter-subject variability in electrode placement due to clinical constraints, alongside highly variable numbers of time cells and place cells recorded in different subjects.

Finally, our time cell and place cell recordings may appear noisier than their rodent counterparts. However, factors aside from interspecies differences might explain this observation. First, tetrodes and high-density electrodes used in rodent studies enable better unit isolation, and consequently lower background firing, than can be attained with the single microwire electrodes that we used. Second, our participants were patients with epilepsy who were recorded during prolonged hospital stays. Third, whereas time cells and place cells in rodents are typically recorded in overtrained animals, our participants had ~10 min of tutorial training before beginning experimental trials in a de novo environment. Fourth, the Goldmine environment was more complex than spatial arenas in most rodent place cell studies and lacked continually visible landmarks, optimizing our ability to detect “pure” time and place responses but at a possible cost to individual neuron precision. Lastly, virtual navigation studies in rodents usually involve running on a stationary trackball,48 while our participants navigated using keyboard and mouse controls absent normal locomotive proprioception. Similarly noisy place cell recordings have been observed in other studies of hand- or eye-movement-based virtual navigation in humans and monkeys.14,15,17,49 Future studies that record intracranial activity while subjects navigate through real-world environments50 may permit more direct comparison between human studies and the wealth of neuroscientific knowledge in animal models from which they draw.

STAR★METHODS

RESOURCE AVAILABILITY

Lead contact

Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Itzhak Fried (IFried@mednet.ucla.edu).

Materials availability

This study did not generate any new reagents.

Data and code availability

  • All deidentified subject data are freely available for use upon request to the lead contact.

  • Original code used for all data preprocessing and analyses can be downloaded at Zenodo: https://doi.org/10.5281/zenodo.8333600.

  • Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.

EXPERIMENTAL MODEL AND STUDY PARTICIPANT DETAILS

Subjects

We analyzed behavioral and single-unit data from 10 neurosurgical patients with drug-resistant epilepsy who completed a total of 12 testing sessions. Clinical teams determined the location and number of implanted electrodes, based on clinical criteria. All testing was performed under informed consent after the nature and possible consequences of the experiment were explained, and experiments were approved by institutional review boards at the University of California, Los Angeles and the University of Pennsylvania.

METHOD DETAILS

Electrophysiological recording

Patients were stereotactically implanted with 7–12 Behnke-Fried electrodes with 40 μm diameter microwire extensions (eight high-impedance recording wires and one low-impedance reference wire per depth electrode) that capture local field potentials (LFPs) and extracellular spike waveforms.59 Microwire electrophysiology data were amplified and recorded at 30 kHz on a Blackrock Microsystems (Salt Lake City, UT) recording system or at 32 kHz on a Neuralynx (Tucson, AZ) recording system.

Spike sorting

Automated spike detection and sorting were performed using the WaveClus3 software package in Matlab.60 We then manually reviewed each unit for inclusion by evaluating waveform shape, amplitude, and consistency, along with spike time auto-correlation, inter-spike intervals, and firing consistency across the session, and we rejected units that were likely contaminated by artifacts or had excessively low amplitude waveforms relative to the noise floor, in keeping with field-standard spike evaluation criteria.61,62 For electrodes with multiple units that passed this inclusion check, we merged units whose waveform features could not be well-separated in principal components space, retaining for analysis a combination of single- and multi-units. We refer to these units as neurons throughout the text, but due to a limited ability to resolve truly single-unit from multi-unit activity using currently available recording technology in human subjects, we do not attempt to distinguish single- from multi-units in the present study and analyze all units identically. Spike sorting was performed by D.R.S., blinded to electrode recording region, and independently reviewed by I.F.

Task description

Subjects played a first-person, virtual navigation game called Goldmine, in which they explored an underground mine while alternating between searching for visible gold and then digging for this gold, now hidden from view, at remembered gold locations. Testing sessions lasted for approximately 1 h, and they consisted of a short tutorial sequence followed by 36 experimental trials. Every trial consisted of four timed and two untimed events in the following sequence.

  1. Delay1 (10s): Subject waits in the mine base. Game controls are turned off, and the subject sees a static image of the center base door. This image is identical across delays, and the subject is in exactly the same location, facing the same direction, on every delay.

  2. Gold Search (30s): A 1s beep denotes the start of the Gold Search event, during which the subject may freely search the mine for one or more golds that appear on the ground in randomized locations on every trial. Concurrent with the beep that signals the start of Gold Search, a 1s instruction message, centered onscreen, tells the subject how many golds there are to find. Game controls are also reactivated, and one of three doors (to the left, right, or center) opens to allow the subject to exit the base. If the left or right door opens, an arrow appears onscreen for 1s (overlapping with the instruction message) to indicate which way the subject should turn.

  3. Return-to-base1 (variable time): All gold vanishes, and a message onscreen instructs the subject to navigate back to the mine base. As soon as the subject re-enters the base, or if they were already in the base at the end of Gold Search, the screen goes black for 2s except for a message that instructs the subject to envision the route they will take during the upcoming Gold Dig event. Across all trials, the median duration for this event was 2s (the minimum duration), and the 75th percentile was 8.9s.

  4. Delay2 (10s): Subject waits in the mine base. As during Delay1, game controls are deactivated, and the subject sees a static image of the center base door. Delay1 and Delay2 are overtly identical events, differing only by their order within the trial sequence.

  5. Gold Dig (30s): A 1s beep denotes the start of the Gold Dig event, during which the subject attempts to return to gold locations from the preceding Gold Search event and dig for gold, now hidden from view. Concurrent with the beep that signals the start of Gold Dig, a 1s instruction message, centered onscreen, tells the subject how many golds there are to dig (equal to the number of golds to find during Gold Search). Game controls are reactivated, and the same door that opened during Gold Search is reopened, allowing the subject to exit the base.

  6. Return-to-base2 (variable time): Digging is disabled, and a message onscreen instructs the subject to navigate back to the mine base. As soon as the subject reenters the base, or if they were already in the base at the end of Gold Dig, the screen goes black for 2s except for a message that instructs the subject to prepare for the upcoming Gold Search event. Across all trials, the median duration for this event was 2s (the minimum duration), and the 75th percentile was 7.4s.

After 36 trials, a “game over” screen appeared and showed subjects their final score, the number of golds successfully dug, and their digging accuracy. Subjects were aware of the 30s time limits during Gold Search and Gold Dig and of a “short waiting time” between each navigation event, but they were never explicitly instructed to attend to time during the experiment. Instead, they were told that their goal was to maximize their score by digging up as many golds as possible, as accurately as possible. They were also asked to remain focused, still, and silent throughout testing – including during delays – unless they needed to ask the experimenter a clarifying question. Voluntary breaks were programmed after 12 and 24 completed trials. Subjects were also taught to press a ‘manual pause’ button if they needed to pause the game for any reason during testing. We did not analyze trials with manual pauses (1.9% of all trials; min = 0, max = 3 per session).

The game was played from a first-person perspective, with the (invisible) avatar being 2m tall and moving forward at a constant 4 m/s. Subjects rotated their view by moving the mouse, moved forward by clicking and holding the left mouse button, and dug (during Gold Dig only) by pressing the spacebar. Releasing the left mouse button caused movement to immediately stop, although head rotation was still possible.

Subjects could retrieve one gold on each of the first two trials. Thereafter, the number of golds, ngold, varied such that if a subject had successfully retrieved all golds on both of the last two trials, the next trial would have ngold + 1 golds. However, if the subject failed to retrieve all golds on both of the last two trials, the next trial would have max(ngold − 1, 1) golds. Otherwise, the number of golds stayed the same. Subjects received 10 points for each gold retrieved, with a correct dig occurring anywhere within 4m of the nearest gold. A crosshair on the ground in front of the subject indicated where their digging was targeted. Each gold could be retrieved only once, and only golds from the most recent Gold Search event could be retrieved. Subjects were not required to move, or dig, if they did not elect to do so. There was no limit on how many digs could be attempted, but every incorrect dig subtracted 2 points from the overall score. The current score was always visible in the top-right corner of the screen, and current task instructions (how many golds to find or dig) were always visible in the top-left corner of the screen.
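The gold-count staircase described above can be summarized in a few lines. This is an illustrative Python sketch (the task itself was implemented in Unity/C#); the function and argument names are hypothetical.

```python
def next_gold_count(n_gold, last_two_all_retrieved):
    """Update the number of golds for the next trial, per the staircase rule
    described above. `last_two_all_retrieved` holds booleans for the two most
    recent trials: True if every gold on that trial was retrieved."""
    if all(last_two_all_retrieved):
        return n_gold + 1            # two consecutive perfect trials -> add a gold
    if not any(last_two_all_retrieved):
        return max(n_gold - 1, 1)    # two consecutive imperfect trials -> remove one (floor of 1)
    return n_gold                    # otherwise unchanged

# Example: after two perfect trials with 2 golds each, the next trial has 3.
print(next_gold_count(2, [True, True]))    # -> 3
print(next_gold_count(2, [False, False]))  # -> 1
print(next_gold_count(1, [False, False]))  # -> 1
```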

Before beginning the main experiment, subjects completed a ~10min tutorial with 3 practice trials that taught them the game rules and controls and allowed them to practice moving through the virtual space. The tutorial occurred in a different environment than the one used during the main experiment, although trial events used the same timing (10s delay and 30s navigation events).

The virtual environment was designed to be moderately challenging for patients to learn, capable of being fully explored within 30s, and visually sparse so as to minimize behavioral confounds with time and place. The environment was 27m long × 27m wide and featured 531m2 of traversable space, including 477m2 in the mine and 54m2 in the base. The environment was vertically, horizontally, and diagonally symmetrical except for the base, which served as the sole orienting landmark. Tall rock walls surrounded the mine on all four sides, and inner rock walls were of uniform height (4m) and appearance. The floor of the mine was flat and evenly patterned. Each gold occupied 1m2 of space, and all golds were visually identical. Gold locations were selected by the computer at random at the start of every trial, with the only condition being that gold could not overlap with the base, any walls, or any already created golds.

Goldmine was programmed in Unity, with scripts written in C#. The paradigm ran on a Macbook Pro at 60 frames-per-second. A cord connected the laptop to a digital-analogue converter that sent patterned pulses to the recording system, for the purpose of later synchronizing electrophysiological and behavioral data.

QUANTIFICATION AND STATISTICAL ANALYSIS

Single-neuron responses to task variables

Delay events

For each neuron, we used ordinary least-squares regression to fit the number of spikes (unsmoothed) in 1s increments across the 36 Delay1 and 36 Delay2 events, as a function of time (10 discrete bins, each 1s in duration), delay-event (Delay1 or Delay2), and their interaction. We then calculated the likelihood ratio, LR, between this model and 3 reduced models that dropped the parameters for each main effect and interaction, respectively. For example, in the case of time, we compared the original model to a reduced model in which all time-bin parameters were removed while delay-event and time-bin × delay-event parameters were retained.

LR is calculated as:

LR = −2 ln(L_m1 / L_m2)

where L_m1 is the likelihood of the reduced model given the data, and L_m2 is the likelihood of the original model given the data. A higher LR indicates that the reduced model fit the data worse than the original model did, and LR can therefore be interpreted as a measure of the extent to which a set of parameters (e.g., time bins) improves prediction of the dependent variable (firing rate) over and above the variance explained by the remaining parameters (delay-event and time-bin × delay-event).

Next, we generated a null distribution for each neuron by shuffling event labels (i.e., permuting Delay1 and Delay2 labels, without replacement) and circularly-shifting spike counts by a uniform, random integer between 0 and 9, independently across delays. This manipulation served to decouple cross-trial associations between the behavioral parameters and a neuron’s firing rate while preserving both the number of spikes in each time bin and the autocorrelation in firing rates over time. We repeated this process 1,000 times per neuron, recalculating LRs between full model and reduced model fits with each iteration. We then compared these null distribution LRs to those obtained from the real data, calculating an empirical p-value as p = (r + 1)/(n + 1), where r is the number of null replicates with an LR greater than or equal to the real LR, and n is the total number of replicates.63 We considered a neuron significant for a given main effect or interaction if p < 0.05. Finally, we used binomial tests to determine if the number of significant neurons exceeded the 5% Type 1 error rate, Bonferroni-Holm corrected for multiple comparisons across the 2 main effects and 1 interaction term of interest. The results from these models are described in the text and shown in Figure 3 and Table 2.
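The following Python sketch illustrates this likelihood-ratio permutation test for the main effect of time, using statsmodels OLS on simulated delay data. The data layout, variable names, and simulated spike counts are illustrative assumptions rather than the released code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def delay_design(events, n_bins=10):
    """Dummy-coded design columns for time bins, delay-event, and their interaction.
    `events` holds one 0/1 label (Delay1/Delay2) per delay; design rows correspond
    to delays x time bins, with time bins varying fastest."""
    n_delays = len(events)
    time = np.tile(np.arange(n_bins), n_delays)
    ev = np.repeat(np.asarray(events), n_bins)
    T = pd.get_dummies(time, prefix="t", drop_first=True).to_numpy(float)
    E = ev.reshape(-1, 1).astype(float)
    return T, E, T * E

def lr_time_main_effect(spikes, events):
    """LR = -2 ln(L_reduced / L_full) for dropping the time-bin main effect while
    retaining delay-event and time-bin x delay-event terms."""
    y = spikes.reshape(-1).astype(float)
    T, E, TxE = delay_design(events, spikes.shape[1])
    full = sm.OLS(y, sm.add_constant(np.hstack([T, E, TxE]))).fit()
    reduced = sm.OLS(y, sm.add_constant(np.hstack([E, TxE]))).fit()
    return 2 * (full.llf - reduced.llf)

def time_effect_pvalue(spikes, events, n_perm=1000, seed=0):
    """Empirical p-value against the shuffled-label, circular-shift null."""
    rng = np.random.default_rng(seed)
    observed = lr_time_main_effect(spikes, events)
    n_delays, n_bins = spikes.shape
    exceed = 0
    for _ in range(n_perm):
        shifts = rng.integers(0, n_bins, size=n_delays)
        null_spikes = np.stack([np.roll(row, s) for row, s in zip(spikes, shifts)])
        null_events = rng.permutation(events)  # permute Delay1/Delay2 labels
        exceed += lr_time_main_effect(null_spikes, null_events) >= observed
    return (exceed + 1) / (n_perm + 1)

# Example with simulated data: 72 delays (36 Delay1 + 36 Delay2) x 10 one-second bins
rng = np.random.default_rng(1)
spikes = rng.poisson(2, size=(72, 10))
events = np.repeat([0, 1], 36)
print(time_effect_pvalue(spikes, events, n_perm=200))
```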

Navigation events

The same procedure was used to analyze firing rate correlations with behavior during navigation as during delays, but with a different combination of independent variables. Specifically, ordinary least-squares regression was used to model the number of spikes (unsmoothed) in 500ms increments across the 36 Gold Search and 36 Gold Dig events, as a function of time (10 discrete bins, each 3s in duration), place (i.e., subjects’ current location within the 12 mine regions in Figure S1), navigation-event (Gold Search or Gold Dig), and their first-order interactions (time × navigation-event, place × navigation-event, and time × place). For each neuron, we calculated LRs between this model and 6 reduced models that dropped the parameters for each main effect and interaction term, in turn. Empirical p-values were obtained relative to null distributions that shuffled navigation event labels and circularly-shifted spike count vectors at random within each navigation event, and neurons were considered significant for a given main effect or interaction if p < 0.05. Finally, we used binomial tests to determine if the number of significant neurons exceeded the 5% Type 1 error rate, Bonferroni-Holm corrected for multiple comparisons across the 3 main effects and 3 interaction terms of interest. The results from these models are described in the text and shown in Figure 4, Table 2, and Figures S7–S9.

We also tested models with additional covariates for virtual head direction (8 angles corresponding to North [the starting direction], Northeast, East, Southeast, South, Southwest, West, and Northwest), player movement (whether the in-game avatar was moving or rotating), base visibility (whether the base was currently visible from the player’s vantage), gold visibility (whether gold was currently visible from the player’s vantage; applied to Gold Search only), gold digging (whether the player had just performed a dig action; applied to Gold Dig only), head-direction × navigation-event, player-movement × navigation-event, and base-visibility × navigation-event. Gold visibility (i.e., object-in-view) and base visibility (landmark-in-view) were dummy-coded variables whose values were determined using a raycasting procedure centered on the main player camera and sampled several times per second. Figure S3 shows the mean correlations, across subjects, between all pairs of behavioral parameters.

Neural response differences by brain region

As electrode coverage and the number of recorded neurons per region varied between subjects, we used a permutation-based method that accounted for between-subject differences to analyze regional differences in the proportion of significantly-responding neurons to each behavioral variable of interest (see Table 2). For each behavioral variable, we first performed a chi-square independence test on the contingency table that listed the number of significantly-responding neurons by region, across all subjects. We then shuffled the neuron-to-region assignment at random within each subject and recalculated the chi-square statistic 1,000 times to obtain a null distribution. Lastly, an empirical p-value was obtained that quantified whether regional differences in the proportion of significantly-responding neurons were larger in the observed data than in the shuffled data.63 As no p-value passed the significance threshold after adjusting for multiple comparisons, no post-hoc tests were performed. We performed this analysis using 3 regions-of-interest: the hippocampus, surrounding medial temporal lobe (combining amygdala, entorhinal cortex, and parahippocampal gyrus/medial bank of the fusiform gyrus), and medial prefrontal cortex (combining orbitofrontal cortex, anterior cingulate, and supplementary motor area). We excluded from these analyses 46 neurons that were located in more sparsely-sampled neocortical regions due to insufficient sample size.
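A sketch of this within-subject shuffling procedure, assuming simple per-neuron arrays; the inputs and names are illustrative, not the released code.

```python
import numpy as np
from scipy import stats

def regional_permutation_test(region, significant, subject, n_perm=1000, seed=0):
    """Permutation chi-square test for regional differences in the proportion of
    significantly responding neurons. `region`, `significant` (boolean), and
    `subject` are per-neuron arrays (illustrative names); region labels are
    shuffled within subject to build the null distribution."""
    rng = np.random.default_rng(seed)
    regions = np.unique(region)

    def chi2_stat(reg):
        # Contingency table: regions x (significant, not significant)
        table = np.array([[np.sum(significant & (reg == r)),
                           np.sum(~significant & (reg == r))] for r in regions])
        return stats.chi2_contingency(table)[0]

    observed = chi2_stat(region)
    exceed = 0
    for _ in range(n_perm):
        shuffled = region.copy()
        for s in np.unique(subject):
            idx = np.flatnonzero(subject == s)
            shuffled[idx] = region[rng.permutation(idx)]  # shuffle regions within subject
        exceed += chi2_stat(shuffled) >= observed
    return (exceed + 1) / (n_perm + 1)
```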

Classifying time from population neural activity

We used the scikit-learn library to train multi-class, nonlinear (radial basis function) support vector machines to identify discrete, 2s time bins based on population neuron firing rates.57 We trained separate classifiers on Delay1, Delay2, Gold Search, and Gold Dig events, respectively (Figures 3E–3H and S8), as well as training classifiers across all four of these events combined (Figure 5).

For each of these conditions, we used the following procedure: First, missing firing rates from the 1.9% of discarded trials (see Task description) were replaced using median imputation. Next, we z-scored firing rates across all time bins and trials in a given analysis, separately for each neuron. Lastly, we trained support vector machines using a nested cross-validation (CV) procedure that split data into train/test/validate folds at the trial level (5 inner folds, 36 outer folds). The inner CV served to identify optimal values for two hyperparameters of the radial basis function kernel: C, which determines the strength of parameter regularization; and γ (gamma), which determines the radius of influence for each support vector. For each inner fold, we tested 100 pairs of hyperparameter values, each chosen at random from a continuous, log-uniform distribution between 1e−9 and 1e9. The best-performing hyperparameter values were then used to retrain a classifier across the 35 train/validate trials and generate predictions on the held-out test trial. This procedure was repeated over each fold of the outer CV, yielding predictions for each time bin, for all 36 trials.

To evaluate classifier performance, we trained null classifiers that replicated the above procedure, but with time bins being circularly-shifted by a uniform, random integer between 0 and 1 minus the number of time bins, independently on every trial. Paired t-tests were used to compare mean prediction errors (absolute value of the difference between actual and classifier-predicted times) across trials for classifiers trained on actual versus circularly-shifted time bins.
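The following condensed sketch shows this decoding approach with scikit-learn; it substitutes RandomizedSearchCV with a trial-grouped inner cross-validation for the custom nested procedure described above, omits median imputation of discarded trials, and assumes firing rates have already been z-scored per neuron. All function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold, LeaveOneGroupOut, RandomizedSearchCV

def decode_time(firing_rates, time_bins, trials, n_iter=100, seed=0):
    """Leave-one-trial-out decoding of discrete time bins from z-scored population
    firing rates (samples x neurons), with an inner, trial-grouped randomized
    search over the RBF hyperparameters C and gamma. Returns the mean absolute
    prediction error (in time bins) across held-out test samples."""
    errors = []
    for train_idx, test_idx in LeaveOneGroupOut().split(firing_rates, time_bins, groups=trials):
        search = RandomizedSearchCV(
            SVC(kernel="rbf"),
            param_distributions={"C": loguniform(1e-9, 1e9),
                                 "gamma": loguniform(1e-9, 1e9)},
            n_iter=n_iter,
            cv=GroupKFold(n_splits=5),
            random_state=seed,
        )
        search.fit(firing_rates[train_idx], time_bins[train_idx],
                   groups=trials[train_idx])
        pred = search.predict(firing_rates[test_idx])
        errors.append(np.mean(np.abs(pred - time_bins[test_idx])))
    return np.mean(errors)

def null_time_bins(time_bins, trials, seed=0):
    """Null labels for chance-level classifiers: circularly shift time bins by a
    random integer, independently within each trial."""
    rng = np.random.default_rng(seed)
    shifted = time_bins.copy()
    for t in np.unique(trials):
        idx = np.flatnonzero(trials == t)
        shifted[idx] = np.roll(time_bins[idx], rng.integers(0, len(idx)))
    return shifted
```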

Cross-event decoders were the classifiers that had been trained separately on each trial event, as described above, applied to predict times from neural firing rates during a different event than the one used for training (see the sketch after this list). The following cross-decoders were evaluated (train → test).

  1. Delay: Delay1 → Delay2, Delay2 → Delay1

  2. Navigation: Gold Search → Gold Dig, Gold Dig → Gold Search

  3. Delay to navigation: Delay1 → Gold Search, Delay1 → Gold Dig, Delay2 → Gold Search, Delay2 → Gold Dig. As the event durations differed, we tested three mappings for each of these combinations: First 10s of navigation, last 10s of navigation, and relative time as a percentage of event duration.
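A minimal sketch of this cross-event evaluation, reusing a fitted classifier; the hyperparameter values here are placeholders rather than the values chosen by the nested CV above, and the names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def cross_event_decode(train_rates, train_bins, test_rates, test_bins, C=1.0, gamma="scale"):
    """Train an RBF classifier on one event (e.g., Delay1) and test it on another
    (e.g., Delay2), returning the mean absolute prediction error in time bins."""
    clf = SVC(kernel="rbf", C=C, gamma=gamma).fit(train_rates, train_bins)
    pred = clf.predict(test_rates)
    return np.mean(np.abs(pred - test_bins))

def to_relative_bins(time_bins, n_bins_from, n_bins_to):
    """Map time bins from one event's duration onto another's as a fraction of
    event duration (the 'relative time' mapping for delay-to-navigation tests)."""
    return np.floor(np.asarray(time_bins) * n_bins_to / n_bins_from).astype(int)
```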

Code dependencies

Neural firing and behavioral data were analyzed using Python (version 3.9.7) and JupyterLab (version 3.1.7)45,51 along with the following open-source Python packages: Matplotlib (version 3.0.3),52 NumPy (version 1.19.1),53 pandas (version 1.1.5),54 SciPy (version 1.5.2),55 seaborn (version 0.11.1),56 Scikit-learn (version 0.23.2),57 and Statsmodels (version 0.12.1)58.

Supplementary Material

Neural code for time and space - supplemental material Fig1-9
Neural code for time and space - supplemental material - Video S1

Supplemental information can be found online at https://doi.org/10.1016/j.celrep.2023.113238.

KEY RESOURCES TABLE.

REAGENT or RESOURCE SOURCE IDENTIFIER

Software and algorithms

Original code This publication https://doi.org/10.5281/zenodo.8333600
JupyterLab 3.1.7 Kluyver et al.51 https://doi.org/10.3233/978-1-61499-649-1-87
https://jupyter.org
Matplotlib 3.0.3 Hunter52 https://doi.org/10.1109/MCSE.2007.55
https://matplotlib.org
NumPy 1.19.1 Harris et al.53 https://doi.org/10.1038/s41586-020-2649-2
https://numpy.org
pandas 1.1.4 McKinney54 https://pandas.pydata.org
Python 3.9.7 Python Software Foundation https://www.python.org
SciPy 1.5.2 Virtanen et al.55 https://doi.org/10.1038/s41592-019-0686-2
https://scipy.org
seaborn 0.11.1 Waskom56 https://doi.org/10.21105/joss.03021
https://seaborn.pydata.org
Scikit-learn 0.23.2 Pedregosa et al.57 https://scikit-learn.org
Statsmodels 0.12.1 Seabold and Perktold58 https://doi.org/10.25080/majora-92bf1922-011
https://www.statsmodels.org

Highlights.

  • Human medial temporal lobe and prefrontal cortex neurons encode time during task-free delays

  • Time and place are independently represented during timed navigation

  • Time cells remap between contextually similar events with stable place cell firing

  • Population neural activity represents time across multiple events in a sequence

ACKNOWLEDGMENTS

We are grateful to the research participants for their generous involvement. We thank Natalie Cherry, Chris Dao, Andreina Hampton, Guldamla Kalender, Connor Keane, and Emily Mankin for assisting in data collection. We are thankful to Marc Howard and members of the Fried and Kahana labs for helpful discussion and feedback. This work was supported by the National Science Foundation Graduate Research Fellowship (D.R.S.), National Institutes of Health grant U01-NS113198 (M.J.K.), National Institutes of Health grant U01-NS108930 (I.F.), National Institutes of Health grant R01-NS084017 (I.F.), and National Science Foundation grant 1756473 (I.F.).

Footnotes

DECLARATION OF INTERESTS

The authors declare no competing interests.

REFERENCES

  • 1.Noulhiane M, Pouthas V, Hasboun D, Baulac M, and Samson S (2007). Role of the medial temporal lobe in time estimation in the range of minutes. Neuroreport 18, 1035–1038. 10.1097/WNR.0b013e3281668be1. [DOI] [PubMed] [Google Scholar]
  • 2.Kurosaki Y, Terasawa Y, Ibata Y, Hashimoto R, and Umeda S (2020). Retrospective time estimation following damage to the prefrontal cortex. J. Neuropsychol. 14, 135–153. 10.1111/jnp.12171. [DOI] [PubMed] [Google Scholar]
  • 3.Petrides M (1985). Deficits on conditional associative-learning tasks after frontal- and temporal-lobe lesions in man. Neuropsychologia 23, 601–614. 10.1016/0028-3932(85)90062-4. [DOI] [PubMed] [Google Scholar]
  • 4.Astur RS, Taylor LB, Mamelak AN, Philpott L, and Sutherland RJ (2002). Humans with hippocampus damage display severe spatial memory impairments in a virtual Morris water task. Behav. Brain Res. 132, 77–84. 10.1016/S0166-4328(01)00399-0. [DOI] [PubMed] [Google Scholar]
  • 5.Bohbot VD, Kalina M, Stepankova K, Spackova N, Petrides M, and Nadel L (1998). Spatial memory deficits in patients with lesions to the right hippocampus and to the right parahippocampal cortex. Neuropsychologia 36, 1217–1238. 10.1016/S0028-3932(97)00161-9. [DOI] [PubMed] [Google Scholar]
  • 6.Moser EI, Moser MB, and McNaughton BL (2017). Spatial representation in the hippocampal formation: a history. Nat. Neurosci. 20, 1448–1464. 10.1038/nn.4653. [DOI] [PubMed] [Google Scholar]
  • 7.Pastalkova E, Itskov V, Amarasingham A, and Buzsáki G (2008). Internally generated cell assembly sequences in the rat hippocampus. Science 321, 1322–1327. 10.1126/science.1159775. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.MacDonald CJ, Lepage KQ, Eden UT, and Eichenbaum H (2011). Hippocampal “time cells” bridge the gap in memory for discontiguous events. Neuron 71, 737–749. 10.1016/j.neuron.2011.07.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Eichenbaum H (2014). Time cells in the hippocampus: a new dimension for mapping memories. Nat. Rev. Neurosci. 15, 732–744. 10.1038/nrn3827. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Kraus BJ, Robinson RJ, White JA, Eichenbaum H, and Hasselmo ME (2013). Hippocampal “time cells”: time versus path integration. Neuron 78, 1090–1101. 10.1016/j.neuron.2013.04.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Tulving E (1972). Episodic and semantic memory. In Organization of Memory, Tulving E and Donaldson W, eds. (Academic Press), pp. 301–403. [Google Scholar]
  • 12.Tulving E (1983). Elements of Episodic Memory (Oxford University Press). [Google Scholar]
  • 13.Howard MW, and Eichenbaum H (2015). Time and space in the hippocampus. Brain Res. 1621, 345–354. 10.1016/j.brainres.2014.10.069. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Ekstrom AD, Kahana MJ, Caplan JB, Fields TA, Isham EA, Newman EL, and Fried I (2003). Cellular networks underlying human spatial navigation. Nature 425, 184–188. 10.1038/nature01964. [DOI] [PubMed] [Google Scholar]
  • 15.Jacobs J, Kahana MJ, Ekstrom AD, Mollison MV, and Fried I (2010). A sense of direction in human entorhinal cortex. Proc. Natl. Acad. Sci. USA 107, 6487–6492. 10.1073/pnas.0911213107. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Jacobs J, Weidemann CT, Miller JF, Solway A, Burke JF, Wei XX, Suthana N, Sperling MR, Sharan AD, Fried I, and Kahana MJ (2013). Direct recordings of grid-like neuronal activity in human spatial navigation. Nat. Neurosci. 16, 1188–1190. 10.1038/nn.3466. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Miller JF, Neufang M, Solway A, Brandt A, Trippel M, Mader I, Hefft S, Merkow M, Polyn SM, Jacobs J, et al. (2013). Neural activity in human hippocampal formation reveals the spatial context of retrieved memories. Science 342, 1111–1114. 10.1126/science.1244056. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Umbach G, Kantak P, Jacobs J, Kahana M, Pfeiffer BE, Sperling M, and Lega B (2020). Time cells in the human hippocampus and entorhinal cortex support episodic memory. Proc. Natl. Acad. Sci. USA 117, 28463–28474. 10.1073/pnas.2013250117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Reddy L, Zoefel B, Possel JK, Peters J, Dijksterhuis DE, Poncet M, van Straaten ECW, Baayen JC, Idema S, and Self MW (2021). Human hippocampal neurons track moments in a sequence of events. J. Neurosci. 41, 6714–6725. 10.1523/JNEUROSCI.3157-20.2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Gill PR, Mizumori SJY, and Smith DM (2011). Hippocampal episode fields develop with learning. Hippocampus 21, 1240–1249. 10.1002/hipo.20832. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Stefanini F, Kushnir L, Jimenez JC, Jennings JH, Woods NI, Stuber GD, Kheirbek MA, Hen R, and Fusi S (2020). A distributed neural code in the dentate gyrus and in CA1. Neuron 107, 703–716.e4. 10.1016/j.neuron.2020.05.022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Colgin LL, Moser EI, and Moser MB (2008). Understanding memory through hippocampal remapping. Trends Neurosci. 31, 469–477. 10.1016/j.tins.2008.06.008. [DOI] [PubMed] [Google Scholar]
  • 23.Kunz L, Brandt A, Reinacher PC, Staresina BP, Reifenstein ET, Weidemann CT, Herweg NA, Patel A, Tsitsiklis M, Kempter R, et al. (2021). A neural code for egocentric spatial maps in the human medial temporal lobe. Neuron 109, 2781–2796.e10. 10.1016/j.neuron.2021.06.019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Wood ER, Dudchenko PA, Robitsek RJ, and Eichenbaum H (2000). Hippocampal neurons encode information about different types of memory episodes occurring in the same location. Neuron 27, 623–633. 10.1016/S0896-6273(00)00071-4. [DOI] [PubMed] [Google Scholar]
  • 25.Manns JR, and Eichenbaum H (2009). A cognitive map for object memory in the hippocampus. Learn. Mem. 16, 616–624. 10.1101/lm.1484509. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Mau W, Sullivan DW, Kinsky NR, Hasselmo ME, Howard MW, and Eichenbaum H (2018). The same hippocampal CA1 population simultaneously codes temporal information over multiple timescales. Curr. Biol. 28, 1499–1508.e4. 10.1016/j.cub.2018.03.051. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Taxidis J, Pnevmatikakis EA, Dorian CC, Mylavarapu AL, Arora JS, Samadian KD, Hoffberg EA, and Golshani P (2020). Differential emergence and stability of sensory and temporal representations in context-specific hippocampal sequences. Neuron 108, 984–998.e9. 10.1016/j.neuron.2020.08.028. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Bright IM, Meister MLR, Cruzado NA, Tiganj Z, Buffalo EA, and Howard MW (2020). A temporal record of the past with a spectrum of time constants in the monkey entorhinal cortex. Proc. Natl. Acad. Sci. USA 117, 20274–20283. 10.1073/PNAS.1917197117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Friedman WJ (1993). Memory for the time of past events. Psychol. Bull. 113, 44–66. 10.1037/0033-2909.113.1.44. [DOI] [Google Scholar]
  • 30.McNaughton BL, Barnes CA, Gerrard JL, Gothard K, Jung MW, Knierim JJ, Kudrimoti H, Qin Y, Skaggs WE, Suster M, and Weaver KL (1996). Deciphering the hippocampal polyglot: the hippocampus as a path integration system. J. Exp. Biol. 199, 173–185. 10.1242/jeb.199.1.173. [DOI] [PubMed] [Google Scholar]
  • 31.Peer M, Brunec IK, Newcombe NS, and Epstein RA (2021). Structuring knowledge with cognitive maps and cognitive graphs. Trends Cognit. Sci. 25, 37–54. 10.1016/j.tics.2020.10.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Deuker L, Bellmund JL, Navarro Schröder T, and Doeller CF (2016). An event map of memory space in the hippocampus. Elife 5, e16534. 10.7554/eLife.16534. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Robinson NTM, Priestley JB, Rueckemann JW, Garcia AD, Smeglin VA, Marino FA, and Eichenbaum H (2017). Medial entorhinal cortex selectively supports temporal coding by hippocampal neurons. Neuron 94, 677–688.e6. 10.1016/j.neuron.2017.04.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Sabariego M, Schönwald A, Boublil BL, Zimmerman DT, Ahmadi S, Gonzalez N, Leibold C, Clark RE, Leutgeb JK, and Leutgeb S (2019). Time cells in the hippocampus are neither dependent on medial entorhinal cortex inputs nor necessary for spatial working memory. Neuron 102, 1235–1248.e5. 10.1016/j.neuron.2019.04.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.MacDonald CJ, and Tonegawa S (2021). Crucial role for CA2 inputs in the sequential organization of CA1 time cells supporting memory. Proc. Natl. Acad. Sci. USA 118, e2020698118. 10.1073/pnas.2020698118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Naya Y, and Suzuki WA (2011). Integrating what and when across the primate medial temporal lobe. Science 333, 773–776. 10.1126/science.1206773. [DOI] [PubMed] [Google Scholar]
  • 37.Sakon JJ, Naya Y, Wirth S, and Suzuki WA (2014). Context-dependent incremental timing cells in the primate hippocampus. Proc. Natl. Acad. Sci. USA 111, 18351–18356. 10.1073/pnas.1417827111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Salz DM, Tiganj Z, Khasnabish S, Kohley A, Sheehan D, Howard MW, and Eichenbaum H (2016). Time cells in hippocampal area CA3. J. Neurosci. 36, 7476–7484. 10.1523/JNEUROSCI.0087-16.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Heys JG, and Dombeck DA (2018). Evidence for a subcircuit in medial entorhinal cortex representing elapsed time during immobility. Nat. Neurosci. 21, 1574–1582. 10.1038/s41593-018-0252-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Tsao A, Sugar J, Lu L, Wang C, Knierim JJ, Moser MB, and Moser EI (2018). Integrating time from experience in the lateral entorhinal cortex. Nature 561, 57–62. 10.1038/s41586-018-0459-6. [DOI] [PubMed] [Google Scholar]
  • 41.Kraus BJ, Brandon MP, Robinson RJ, Connerney MA, Hasselmo ME, and Eichenbaum H (2015). During running in place, grid cells integrate elapsed time and distance run. Neuron 88, 578–589. 10.1016/j.neuron.2015.09.031. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Tiganj Z, Jung MW, Kim J, and Howard MW (2017). Sequential firing codes for time in rodent medial prefrontal cortex. Cerebr. Cortex 27, 5663–5671. 10.1093/cercor/bhw336. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Cruzado NA, Tiganj Z, Brincat SL, Miller EK, and Howard MW (2020). Conjunctive representation of what and when in monkey hippocampus and lateral prefrontal cortex during an associative memory task. Hippocampus 30, 1332–1346. 10.1002/hipo.23282. [DOI] [PubMed] [Google Scholar]
  • 44.Liu Y, Levy S, Mau W, Geva N, Rubin A, Ziv Y, Hasselmo M, and Howard M (2022). Consistent population activity on the scale of minutes in the mouse hippocampus. Hippocampus 32, 359–372. 10.1002/hipo.23409. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Quiroga RQ, Mukamel R, Isham EA, Malach R, and Fried I (2008). Human single-neuron responses at the threshold of conscious recognition. Proc. Natl. Acad. Sci. USA 105, 3599–3604. 10.1073/pnas.0707043105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Quian Quiroga R, Kraskov A, Koch C, and Fried I (2009). Explicit encoding of multimodal percepts by single neurons in the human brain. Curr. Biol. 19, 1308–1313. 10.1016/j.cub.2009.06.060. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Gelbard-Sagiv H, Mukamel R, Harel M, Malach R, and Fried I (2008). Internally generated reactivation of single neurons in human hippocampus during free recall. Science 322, 96–101. 10.1126/science.1164685. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Aronov D, and Tank DW (2014). Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system. Neuron 84, 442–456. 10.1016/j.neuron.2014.08.042. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Wirth S, Baraduc P, Plante A, Pinède S, and Duhamel JR (2017). Gaze-informed, task-situated representation of space in primate hippocampus during virtual navigation. PLoS Biol. 15, e2001045. 10.1371/journal.pbio.2001045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Stangl M, Maoz SL, and Suthana N (2023). Mobile cognition: imaging the human brain in the ‘real world’. Nat. Rev. Neurosci. 24, 347–362. 10.1038/s41583-023-00692-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Kluyver T, Ragan-Kelley B, Pérez F, Granger B, Bussonnier M, Frederic J, Kelley K, Hamrick J, Grout J, Corlay S, et al. (2016). Jupyter Notebooks – a publishing format for reproducible computational workflows. In Positioning and Power in Academic Publishing: Players, Agents and Agendas, Loizides F and Schmidt B, eds. (IOS Press), pp. 87–90. 10.3233/978-1-61499-649-1-87. [DOI] [Google Scholar]
  • 52.Hunter JD (2007). Matplotlib: A 2D graphics environment. Comput. Sci. Eng. 9, 90–95. 10.1109/MCSE.2007.55. [DOI] [Google Scholar]
  • 53.Harris CR, Millman KJ, van der Walt SJ, Gommers R, Virtanen P, Cournapeau D, Wieser E, Taylor J, Berg S, Smith NJ, et al. (2020). Array programming with NumPy. Nature 585, 357–362. 10.1038/s41586-020-2649-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.McKinney W (2011). Pandas: A Foundational Python Library for Data Analysis and Statistics. [Google Scholar]
  • 55.Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, Burovski E, Peterson P, Weckesser W, Bright J, et al. (2020). SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods 17, 261–272. 10.1038/s41592-019-0686-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Waskom M (2021). seaborn: statistical data visualization. J. Open Source Softw. 6, 3021. 10.21105/joss.03021. [DOI] [Google Scholar]
  • 57.Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, et al. (2011). Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830. [Google Scholar]
  • 58.Seabold S, and Perktold J (2010). Statsmodels: econometric and statistical modeling with Python. Proc. 9th Python Sci. Conf, 92–96. 10.25080/majora-92bf1922-011. [DOI] [Google Scholar]
  • 59.Fried I, Wilson CL, Maidment NT, Engel J, Behnke E, Fields TA, Macdonald KA, Morrow JW, and Ackerson L (1999). Cerebral microdialysis combined with single-neuron and electroencephalographic recording in neurosurgical patients. J. Neurosurg. 91, 697–705. 10.3171/jns.1999.91.4.0697. [DOI] [PubMed] [Google Scholar]
  • 60.Quiroga RQ, Nadasdy Z, and Ben-Shaul Y (2004). Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Comput. 16, 1661–1687. 10.1162/089976604774201631. [DOI] [PubMed] [Google Scholar]
  • 61.Hill DN, Mehta SB, and Kleinfeld D (2011). Quality metrics to accompany spike sorting of extracellular signals. J. Neurosci. 31, 8699–8705. 10.1523/jneurosci.0971-11.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Harris KD, Quiroga RQ, Freeman J, and Smith SL (2016). Improving data quality in neuronal population recordings. Nat. Neurosci. 19, 1165–1174. 10.1038/nn.4365. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.North BV, Curtis D, and Sham PC (2002). A note on the calculation of empirical P values from Monte Carlo procedures. Am. J. Hum. Genet. 71, 439–441. 10.1086/341527. [DOI] [PMC free article] [PubMed] [Google Scholar]


Data Availability Statement

  • All deidentified subject data are freely available for use upon request to the lead contact.

  • Original code used for all data preprocessing and analyses can be downloaded at Zenodo: https://doi.org/10.5281/zenodo.8333600.

  • Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
