bioRxiv [Preprint]. 2025 Sep 3:2025.08.29.672705. [Version 1] doi: 10.1101/2025.08.29.672705

Evaluating Place Cell Detection Methods in Rats and Humans – Implications for Cross-Species Spatial Coding

Place Cell Detection and Cross-Species Spatial Coding

Weijia Zhang 1, Thomas Donoghue 1, Salman E Qasim 2, Joshua Jacobs 1
PMCID: PMC12424783  PMID: 40950016

Abstract

Place cells, first identified in the rat hippocampus as neurons that fire selectively at specific locations, are central to investigations of the neural underpinnings of spatial navigation. With recent work in human patients, identifying and characterizing place cells across species has become increasingly important for understanding the extent to which decades of rodent research generalize to humans and for uncovering principles of spatial cognition. One challenge, however, is that detection methods differ: rodent studies often rely on spatial information (SI), whereas human studies employ analysis of variance (ANOVA)-based approaches. These methodological differences may affect the identified place cell population, complicating both the interpretation of their properties and cross-species comparisons. To address this, we systematically applied multiple detection pipelines to human and rat datasets, supported by simulations that vary place-field properties. Our analyses and simulations demonstrate that spatial information and ANOVA-based approaches are responsive to distinct place field properties: spatial information primarily reflects the contrast between peak and average firing rates, while ANOVA emphasizes consistency across trials. Across species, rodent place cells revealed a broad spectrum of spatial tuning, including strongly tuned neurons with high spatial information (SI) and high ANOVA values. In contrast, human place cells lacked this strongly tuned population and exhibited a narrower distribution of tuning scores, concentrated at the lower end of both spatial tuning metrics. Despite these differences, both species contained an overlapping population of neurons with weaker yet consistent spatial tuning, which may support important functional roles such as generalization and mixed selectivity. Together, our study provides a roadmap showing how spatial tuning metrics shape place cell detection and interpretation, while underscoring the functional importance of weakly tuned neurons in cross-species comparisons.

Keywords: Spatial Navigation, Place Cells, Spatial Tuning Metrics, Cross-Species Comparison, Simulations

Author Summary

Place cells are neurons that become active in specific locations, and they play a critical role in how the brain supports navigation and memory. Place cells were first discovered in rats and later observed in humans; however, there have been few direct comparisons between species using comparable approaches. Part of the difficulty is that studies of rodent and human place cells have often relied on different analysis methods, making it difficult to determine if and how place-cell properties differ between species. To address this, we set out to understand how differences in place cell detection methods affect the identified place cell populations and the interpretation of spatial coding across species.

To do so, we compared the most prevalent detection methods used in rodent and human research side by side, applying them to datasets from both species and to simulations. We found that different methods emphasize different features of spatial responses, which changes which neurons are identified as place cells. Across species, rat recordings revealed a wide range of spatial responses, from neurons with sharply localized activity to those with broader but reliable patterns. Human recordings, by contrast, were more concentrated at weaker but consistent levels of tuning. Importantly, these weaker but consistent responses reflect an overlapping population of neurons found in both species, which may serve similar functional roles in supporting flexible spatial memory and generalization. By separating methodological effects from biological differences, we lay the groundwork for future cross-species studies of spatial coding.

Introduction

The hippocampus plays a central role in spatial navigation and memory by forming internal representations of the environment’s structure and the animal’s experiences. A hallmark of this function is the activity of place cells, neurons that fire selectively when an animal occupies a specific location[2, 3]. First discovered in rodents, place cells exhibit tuning properties that vary systematically with anatomical location and behavioral demands. In dorsal CA1, rat place cells exhibit sharply localized and stable firing fields that tile the environment with high spatial precision [4, 5, 6, 7]. Moving along the hippocampal axis towards the ventral regions, neurons tend to exhibit broader and more diffuse fields and greater responsiveness to motivational and contextual variables [8, 7, 9, 10].

In humans, analogous spatially modulated neurons have been identified through intracranial recordings from neurosurgical patients with drug-resistant epilepsy performing virtual navigation tasks. These neurons increase firing at specific locations, supporting a conserved role in spatial representation [11, 12, 13, 14]. Human neurons exhibit spatial tuning with relatively diffuse place-field boundaries, and their firing patterns are often also modulated by non-spatial factors such as task demands, contextual cues, and goal relevance [1, 15, 16, 17, 18, 19, 20]. While place cells have been described in both humans and rodents, the human literature is comparatively smaller, and few studies have directly examined their properties across species using comparable behavioral paradigms. As a result, it remains unclear to what extent place-cell properties differ between species and what factors give rise to these differences.

Methodological approaches used to identify place cells may play a critical role in shaping our understanding of their properties. Previous methodological work has shown that the choice of detection metric influences which neurons are considered spatially tuned, but these comparisons have largely been confined to a single species or a single analytic framework [21, 22]. Rodent studies often identify spatial tuning using spatial information (SI) scores, which quantify how much neural firing reduces uncertainty about position [23]. A fixed threshold (e.g., 0.25–0.5 bits/spike) or a permutation-based significance test is often combined with additional criteria, such as a minimum firing rate and trial-level stability, to classify place cells (Table 1) [24, 25, 26, 27, 28, 29, 30, 31]. Human studies typically rely on analysis of variance (ANOVA) measures, evaluated against permuted surrogates, to assess whether firing rates vary significantly across spatial bins, often incorporating additional task-related modulations (Table 2) [11, 32, 17, 19]. These methodological differences may influence which neurons are identified as place cells and, in turn, affect how their properties are interpreted, underscoring the need to clarify how methodological choices impact place-cell research across species.

Table 1: A sample of place cell identification methods in rodent studies.

Reference | Method | Classification | Modulation
Jung et al. 1994 | SI1 | N/A | Place
Schapiro et al. 1997 | Place Field2 (Adjacency Criterion) | N/A | Place
Wood et al. 2000 | Two-Way ANOVA3 | P7 < 0.05 | Place + Trial Type
Kjelstrup et al. 2008 | Place Field4 (Contiguity Criterion) | N/A | Place
Royer et al. 2010 | Place Field4 (Contiguity Criterion) & Stability5 (Split-Half) | N/A | Place
Langsten et al. 2010 | SI1 | P7 < 0.05 | Place
Deshmukh et al. 2011 | SI1 | P7 < 0.01 | Place
Chen et al. 2013 | SI1 | P7 < 0.05 | Place; Visual
Roux et al. 2017 | SI1 | SI1 > 0.25 & P7 < 0.05 | Place
Scalplen et al. 2017 | SI1 | SI1 > 0.25 | Place; Visual
Aronov et al. 2017 | SI1 | P7 < 0.01 | Place; Sound
Newman et al. 2017 | SI1 | SI1 ≥ 0.5 | Place
Jun et al. 2020 | SI1 | P7 < 0.05 | Place
Grieves et al. 2020 | SI1 | SI1 > 0.5 | Place
Shuman et al. 2020 | SI1; Stability6 (Even-Odd) | P7 < 0.05 | Place
Duvelle et al. 2021 | SI1 | SI1 > 0.5 & P7 < 0.05 | Place; Connectivity
Harland et al. 2021 | SI1 | SI1 > 0.5 | Place
Jin & Lee 2021 | SI1 | SI1 > 0.5 & P7 < 0.01 | Place; Motivation
Zhang et al. 2022 | SI1 | P7 < 0.05 | Place; Goal
Levy et al. 2023 | SI1 | Z8 > 1.96 | Place
Qian et al. 2025 | SI1 | P7 < 0.05 | Place
Heredi et al. 2025 | SI1 & Stability6 (Even-Odd) | P7 < 0.01 | Place

SI1: Spatial information score (bits/spike). Place Field2 (Adjacency Criterion): ≥3 adjacent bins with mean rate > 3× grand mean; in-field rate > 5× overall rate. Two-Way ANOVA3: A significant main effect of place, with no significant effect of task-related variables or the place × task-related variable interaction. Place Field4 (Contiguity Criterion): Stable, contiguous region with rate > 20% of peak. Stability5 (Split-Half): Pixel-wise correlation between first- and second-half maps. Stability6 (Even-Odd): Pearson correlation between even- and odd-lap tuning curves. P7: Significance was evaluated by comparing the real test statistic (as specified in the Method column) to a surrogate distribution generated by shuffling spike times. Z8: Significance was defined as the observed value exceeding the randomized mean by > 1.96 standard deviations.

Table 2: A summary of place cell identification methods in human studies.

Reference | Method | Classification | Modulation
Ekstrom et al. 2003 | ANOVA9 | P7 < 0.05 | Place + goal + view
Jacobs et al. 2010 | ANOVA9 | P7 < 0.1 | Place + direction; Path
Jacobs et al. 2013 | ANOVA9 / T-test10 | P7 < 0.05 | Place + direction
Miller et al. 2013 | ANOVA9 / Wilcoxon11 | P7 < 0.05 | Place + direction
Qasim et al. 2020 | ANOVA9 | P7 < 0.05 | Place + object
Tsitsiklis et al. 2020 | ANOVA9 | P7 < 0.05 | Place; Spatial target
Kunz et al. 2021 | ANOVA9 | P7 < 0.05 | Place + direction + egocentric
Qasim et al. 2021 | ANOVA9 | P7 < 0.01 | Place
Schonhaut et al. 2023 | OLS regression12 | P7 < 0.05 | Place + time / events
Donoghue et al. 2023 | ANOVA9 | P7 < 0.05 | Place; Spatial target
Kunz et al. 2024 | T-test10 | P7 < 0.05 | Place; Object; Place + object

ANOVA9: Analysis of Variance; tests for differences in firing rates across multiple spatial bins or conditions. T-test10: Firing rates were compared when the participant was in versus out of the place field. Wilcoxon test11: At each location, a one-tailed Wilcoxon rank-sum test compared the firing rates for all navigation epochs within a 10 VR-unit radius (nearby) to the firing rates for all other navigation epochs (far). OLS regression12: Ordinary least-squares regression modeled spike counts using time, location, event type, and their pairwise interactions.

In this study, we perform a systematic evaluation of place cell detection methods using both empirical datasets (rat and human single-neuron recordings) and simulated datasets with known ground truth. We focus on two widely used spatial tuning metrics: spatial information and ANOVA F-statistics. We assess how both raw tuning scores and downstream classification criteria, including fixed thresholds and permutation-based significance tests, influence the identification of place cells. To further investigate how each spatial tuning metric relates to the underlying properties of place fields, we develop a simulation framework that systematically varies key place field properties such as tuning width, baseline firing rate, and trial-to-trial variability. Finally, we extract estimated place-cell features from rat, human, and simulated data and project them into a low-dimensional representational space, revealing how spatial tuning metrics differentially reflect place-field features and their interactions.

Our results suggest that the commonly used spatial tuning metrics, spatial information (SI) and ANOVA, capture different features of place cell activity. SI is particularly responsive to sharply localized, high-contrast firing fields, whereas ANOVA emphasizes the trial-level consistency of spatial tuning curves. Importantly, identified place cell populations depend strongly on the choice of classification criteria: raw score thresholding and permutation-based significance testing often yield divergent results, with the latter offering more stable and data-adaptive criteria. We also find that rat place cells exhibit a broader dynamic range of spatial tuning scores than human place cells. Rodent neurons frequently display sharply localized place fields with high spatial specificity, resulting in cells that score highly on both SI and ANOVA metrics. In contrast, such sharply tuned, high-contrast cells are rare in human data, where hippocampal neurons more commonly exhibit spatially diffuse but trial-consistent tuning, leading to lower overall SI and F-statistic values. Notably, both species show overlapping distributions at the lower end of the SI and ANOVA spectra, suggesting an analogous population of neurons with weaker but reliable spatial modulation - cells that are often missed by SI thresholds but consistently detected by ANOVA. These ANOVA-identified neurons likely contribute to generalized representations and may reflect mixed selectivity for spatial and task-related variables, which are important for understanding spatial coding across species.

Together, these findings indicate that commonly used detection methods capture different but complementary aspects of spatial coding. By clarifying how methodological choices shape the identification of these cells, our work establishes a foundation for principled cross-species comparisons. Broadening our focus to include consistently tuned, weaker-modulation neurons, prevalent both in humans and rats, offers new insight into the flexible coding strategies of the hippocampus and contributes to our understanding of spatial cognition across species.

Results

In this study, we compare widely used place cell detection methods in rats and humans to understand how different analytical pipelines influence the identification of spatially tuned neurons, with the goal of informing future cross-species comparisons of hippocampal spatial coding (Fig. 1a). We focus on two analytical approaches for detecting spatial tuning: spatial information (SI) and ANOVA F-statistics. Based on previous research, we evaluate SI using both fixed thresholding and permutation-based significance testing, while ANOVA is assessed through permutation testing, allowing us to examine how different classification strategies influence detection outcomes (Fig. 1b). We systematically apply these detection pipelines to datasets from both humans and rats performing comparable linear track spatial navigation tasks to evaluate the extent to which rat and human place cells differ and to assess the role of methodological differences in contributing to these variations (Fig. 1c, top and bottom). To further understand how these analytical methods relate to place field properties, we develop a simulation framework that varies parameters such as tuning width, firing rate, and trial-to-trial variability. Additionally, we estimate place cell features from rat, human, and simulated place fields and analyze their structure in high-dimensional feature space using principal component analysis (PCA) to evaluate how spatial tuning metrics capture different place field properties and their interactions.
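To make the two metrics concrete, the following is a minimal sketch of how spatial information and the ANOVA F-statistic can be computed from a trials × bins firing-rate matrix. The uniform occupancy, toy data, and helper names are illustrative assumptions, not the exact implementation used in this study.

```python
import numpy as np
from scipy.stats import f_oneway

def spatial_information(rates, occupancy):
    """Skaggs spatial information (bits/spike) from a trial-averaged tuning
    curve `rates` (n_bins,) and per-bin occupancy (arbitrary units)."""
    p = occupancy / occupancy.sum()          # occupancy probability per bin
    mean_rate = np.sum(p * rates)            # overall mean firing rate
    nz = rates > 0                           # skip empty bins to avoid log2(0)
    return np.sum(p[nz] * (rates[nz] / mean_rate) * np.log2(rates[nz] / mean_rate))

def anova_f(trial_rates):
    """One-way ANOVA across spatial bins: each bin is one group of per-trial rates."""
    groups = [trial_rates[:, b] for b in range(trial_rates.shape[1])]
    return f_oneway(*groups)                 # returns (F-statistic, p-value)

# Toy example standing in for one neuron: 30 traversals x 26 spatial bins
rng = np.random.default_rng(0)
trial_rates = rng.poisson(2.0, size=(30, 26)).astype(float)
si = spatial_information(trial_rates.mean(axis=0), occupancy=np.ones(26))
f_stat, p_val = anova_f(trial_rates)
```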

Figure 1: Overview of place cells and detection methods in rats and humans.

Figure 1:

a) Simulated place cells with varying spatial tuning properties on a linear track. b) Place cell detection pipelines commonly used in rats (Spatial Information) and in humans (ANOVA). c) Examples of rat (top) and human (bottom) place cells, illustrating a range of tuning profiles from more prominent spatial selectivity (left) to weaker or more diffuse spatial modulation (right).

Method-Driven Variability in Rat Place Cell Detection

We first evaluate these place cell detection methods by reanalyzing data from four Long-Evans rats that ran back and forth along a linear track while single-neuron activity was recorded ([33]; Fig. 2a). We begin by comparing detection outcomes based on permutation testing (1,000 surrogates, p < 0.05) for both metrics. Overall and region-wise analyses reveal that permutation-corrected ANOVA identifies more spatially tuned neurons across hippocampal and entorhinal subfields than permutation-corrected spatial information (Fig. 2bc). For ANOVA, the proportion of significant neurons increases in a sigmoidal pattern as a function of the F-statistic. This sharp transition around the significance threshold suggests high consistency between the raw score and the permutation testing outcome (Fig. 2d). In contrast, the proportion of neurons classified as significant by SI-based permutation testing increases more gradually with spatial information scores, suggesting weaker alignment between raw scores and surrogate testing (Fig. 2e, Fig. S1a). While permutation-corrected results for ANOVA and spatial information show substantial agreement, they do not fully converge. As a function of the F-statistic, agreement between the two spatial tuning metrics was high at both low and high F-statistic values but dropped near the ANOVA cutoff (F = 2) before recovering, indicating that disagreement was concentrated around the cutoff where classification is most sensitive to the choice of metric (Fig. 2f). By contrast, agreement across SI values remained consistently high with only a modest upward trend and no clear dip at the conventional threshold (SI = 0.25), suggesting that discrepancies are not localized to the SI cutoff (Fig. 2g). When comparing permutation testing on both spatial tuning metrics, a majority of neurons (69.4%) were jointly identified, but notable subsets were uniquely classified by ANOVA permutation testing (14.1%) or SI permutation testing (4.6%) (Fig. 2h, top). Divergences became more pronounced when comparing ANOVA permutation testing with SI thresholding: only 57.5% of neurons were jointly identified, while substantial subsets were uniquely classified by ANOVA (26.0%) or by SI thresholding alone (9.4%) (Fig. 2h, bottom). Together, these results show that ANOVA yields stronger alignment between raw scores and permutation testing and consistently identifies a larger population of spatially tuned neurons than spatial information, highlighting the impact of methodological choices on place cell classification.
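The permutation procedure itself can be sketched as follows. Here the null distribution is built by circularly shifting each trial's binned firing rates by a random offset; the exact shuffling scheme used in this study (e.g., spike-time shifts) may differ, so treat this as an illustrative stand-in that reuses the anova_f helper sketched above.

```python
import numpy as np

def permutation_pvalue(trial_rates, metric_fn, n_surrogates=1000, seed=0):
    """Compare an observed tuning score against surrogates built by circularly
    shifting each trial's binned rates; metric_fn maps (n_trials, n_bins) -> score."""
    rng = np.random.default_rng(seed)
    observed = metric_fn(trial_rates)
    n_trials, n_bins = trial_rates.shape
    surrogates = np.empty(n_surrogates)
    for s in range(n_surrogates):
        shifts = rng.integers(1, n_bins, size=n_trials)
        shuffled = np.stack([np.roll(row, k) for row, k in zip(trial_rates, shifts)])
        surrogates[s] = metric_fn(shuffled)
    p = (np.sum(surrogates >= observed) + 1) / (n_surrogates + 1)
    return observed, p

# e.g., classify a neuron as spatially tuned if p < 0.05 for the chosen metric:
# _, p_anova = permutation_pvalue(trial_rates, lambda m: anova_f(m)[0])
```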

Figure 2: Place Cell Identification Methods in Rats.

Figure 2:

a) Task schematic. Rats traverse bidirectionally along a 250 cm linear track to obtain water rewards at each end (Directions A and B). b) Proportion of neurons identified as significant by ANOVA permutation testing across brain regions (CA1 and CA3: Hippocampus, DG: Dentate Gyrus, EC: Entorhinal Cortex). c) Proportion of neurons exceeding the SI threshold of 0.25 (light red) and those confirmed by SI permutation testing (dark red) across brain regions. d) Proportion of neurons classified as significant as a function of increasing ANOVA F-statistics. e) Same as in d), plotted against spatial information scores. f) Agreement between permutation-based SI classification and ANOVA classification as a function of increasing F-statistics. g) Same as f), plotted against spatial information scores. Dashed lines in d) and f) mark the observed threshold for ANOVA (F = 2.0); dashed lines in e) and g) indicate the commonly used spatial information threshold (SI = 0.25). h) Top: Comparison between SI permutation testing (p < 0.05) and ANOVA permutation testing (p < 0.05). Bottom: Comparison between SI thresholding (SI > 0.25) and ANOVA classification. Percentages indicate the proportion of neurons in each classification category.

Effects of Detection Methods on Human Place Cells

Building on our initial analyses in rodents, we next examine how place cell detection methods influence classification outcomes in humans. We reanalyze single-neuron recordings from patients with drug-resistant epilepsy using a dataset originally collected to study hippocampal and entorhinal function [1, 34]. Participants performed a virtual navigation task in which they were passively moved along a linear track and instructed to learn and recall the locations of four objects (Fig. 3a). The task closely resembles the linear track paradigm used in the rat dataset, supporting method comparisons under comparable behavioral conditions. Across brain regions, permutation testing on both spatial tuning metrics (ANOVA and SI) identified smaller proportions of place cells in humans than in rats, and SI thresholding similarly yielded reduced proportions (Fig. 3bc).

Figure 3: Place Cell Identification Methods in Humans.

Figure 3:

a) Task schematic. Subjects are moved along a 40 virtual-unit linear track to learn and recall the locations of four objects. b) Proportion of neurons identified as significant by ANOVA permutation testing across brain regions (H: Hippocampus, A: Amygdala, EC: Entorhinal Cortex, C: Cingulate). c) Proportion of neurons exceeding the SI threshold of 0.25 (light red) and those confirmed by SI permutation testing (dark red) across brain regions. d) Proportion of neurons classified as significant as a function of increasing ANOVA F-statistics. e) Same as in d), plotted against spatial information scores. f) Agreement between permutation-based SI classification and ANOVA classification as a function of increasing F-statistics. g) Same as f), plotted against spatial information scores. Dashed lines in d) and f) mark the observed threshold for ANOVA (F = 1.6); dashed lines in e) and g) indicate the commonly used spatial information threshold (SI = 0.25). h) Top: Comparison between SI permutation testing (p < 0.05) and ANOVA permutation testing (p < 0.05). Bottom: Comparison between SI thresholding (SI > 0.25) and ANOVA classification. Percentages indicate the proportion of neurons in each classification category.

Consistent with the rat findings, when comparing permutation-based significance testing to raw F-statistics, the proportion of neurons classified as significant by ANOVA exhibits a sigmoidal relationship with the F-statistic values (Fig. 3d). Compared to rats, however, SI-based classification in humans displayed greater variability and more pronounced divergence between fixed-threshold and permutation-based results (Fig. 3e; Fig. S1b). We next assessed the correspondence between the two spatial tuning metrics under permutation-based evaluation. When examined as a function of the ANOVA F-statistic (Fig. 3f), agreement between ANOVA- and SI-based classifications was generally high at both low and high values but exhibited a pronounced dip around the ANOVA significance boundary. This pattern indicates that neurons within this range of F-statistics are differentially classified by the two approaches, reflecting heightened sensitivity to the choice of spatial tuning metric. When examined as a function of spatial information (Fig. 3g), agreement remained comparatively stable across values, with only a modest decline near the conventional SI threshold (0.25), suggesting that discrepancies are less localized and less pronounced than those observed for ANOVA. Notably, these trends closely parallel the results observed in rats, highlighting consistent cross-species differences in how ANOVA and SI capture neurons with weak or intermediate levels of spatial tuning. Comparing permutation-based significance testing across spatial tuning metrics, we observe overlap between methods: 12.1% of neurons are jointly classified as spatially tuned, while 3.4% and 2% are uniquely identified by SI and ANOVA, respectively (Fig. 3h, top). In contrast, applying a fixed threshold to SI scores (SI > 0.25) led to substantial divergence: only 1% of neurons are jointly classified, while 9.1% and 13.1% are uniquely identified by SI and ANOVA, respectively (Fig. 3h, bottom). In summary, both permutation-based methods (ANOVA and SI) as well as fixed SI thresholding identified fewer human neurons as place cells compared to rats. As in rats, ANOVA showed strong concordance between raw F-statistics and permutation testing, whereas SI exhibited weaker alignment between raw and surrogate evaluations.

Thresholding vs. Permutation Test and Effect of Analysis Parameters

We then evaluate how methodological choices such as smoothing and bin resolution influence which neurons are classified as spatially tuned. We find that applying a smoothing kernel inflates ANOVA F-statistics while deflating spatial information scores (Fig. S2). Likewise, using higher spatial bin resolutions increases spatial information scores but decreases ANOVA F-statistic values (Fig. S3a-l). As a result, fixed-threshold classification is sensitive to analysis parameters such as bin size and smoothing kernel width and may classify different subsets of neurons under different settings, whereas permutation-based significance testing remains comparatively stable across these conditions (Fig. S3m-n) and shows greater consistency across species and analytical choices.
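This parameter sensitivity can be probed directly. The sketch below recomputes spatial information after Gaussian smoothing and after coarser binning, reusing the spatial_information helper sketched earlier; the kernel width and bin counts are arbitrary choices for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def rebin(curve, n_new):
    """Coarsen a tuning curve by averaging adjacent bins (assumes divisibility)."""
    factor = len(curve) // n_new
    return curve[: n_new * factor].reshape(n_new, factor).mean(axis=1)

rng = np.random.default_rng(1)
tuning = rng.poisson(2.0, size=26).astype(float)    # a trial-averaged tuning curve
si_raw = spatial_information(tuning, np.ones(26))                     # original bins
si_smooth = spatial_information(gaussian_filter1d(tuning, sigma=2),   # smoothed
                                np.ones(26))
si_coarse = spatial_information(rebin(tuning, 13), np.ones(13))       # half the bins
```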

Statistical distribution of spatial tuning metrics differs across species

To characterize variation in spatial tuning metrics across species, we examine the distributions of spatial information (SI) scores and ANOVA F-statistics for the rat and human single-neuron datasets (Fig. 4). Statistical significance for each metric was assessed independently using permutation testing (1,000 surrogates; p < 0.05). For rat neurons, ANOVA F-statistics ranged from 0.14 to 400.12, and SI scores from 0.01 to 4.54 (Fig. 4a). A substantial proportion of neurons met significance criteria under both SI and ANOVA, with additional neurons identified exclusively by ANOVA. Several neurons exceeded the permutation-based SI threshold but fell below the commonly used fixed cutoff of SI > 0.25, highlighting divergence between threshold-based and surrogate-based detection criteria. In comparison, human neurons exhibited a narrower range of tuning metrics, with ANOVA F-statistics from 0.43 to 3.12 and SI scores from 0.00 to 3.64 (Fig. 4b). Fewer neurons reached significance under either metric, and there was reduced overlap between SI- and ANOVA-identified neurons relative to rodents. Fixed SI cutoffs were more limiting in this context, while ANOVA identified a broader set of neurons across varying tuning strengths. Notably, while rat neurons spanned a substantially broader range of SI and F-statistic values, a subset exhibited values within the range observed in the human data: the two distributions overlapped at the lower end, indicating shared profiles of weak or intermediate spatial tuning across species that may subserve comparable functional roles.

Figure 4: Statistical distribution of place cell measures in rats and humans.

Figure 4:

a) Rats: ANOVA F-statistics 0.14–400.20; spatial information 0.01–4.54. b) Humans: ANOVA F-statistics 0.43–3.12; spatial information 0.01–3.64. Neurons are plotted by spatial information (x-axis) and ANOVA F-statistic (y-axis). Each point represents a single neuron. Dashed lines indicate thresholds (SI = 0.25; F = 2.0 for rats, 1.6 for humans). Marginal histograms show metric distributions and neuron counts by category. Bottom, Example firing rate maps for gray (low SI, low ANOVA), blue (high ANOVA), and red (high SI).

Modeling the Influence of Place Cell Properties on Detection Statistics

The analyses in the preceding sections reveal consistent differences between spatial information (SI) and ANOVA in evaluating the spatial tuning of place cells. However, it remains unclear how specific place cell properties influence these spatial tuning metrics. In empirical datasets, such effects are difficult to isolate because the ground-truth spatial tuning properties of individual neurons are unknown. To address this limitation, we developed a simulation framework that generates synthetic tuning curves with precisely controlled parameters, providing ground-truth benchmarks for systematically assessing how different detection metrics respond to variations in tuning properties (Fig. 5). Specifically, this approach allows us to independently vary features such as peak firing rate, field width, background firing, noise, place field consistency across trials, and presence ratio, while holding other variables constant to isolate the effect of each property on detection metrics. Modeling at the level of tuning curves offers a controlled and interpretable setting for assessing how spatial information and ANOVA F-statistics respond to variations in place cell activity.

Figure 5: Place Field Simulations.

Figure 5:

a) Place Field Components. Simulated place fields are generated by combining three components: Peak (Gaussian-shaped firing rate profile), Baseline (constant firing rate across space), and Noise (random fluctuations in firing rate). The final place field (right panel) is produced by summing these components. b) Field Properties. Place field tuning parameters are varied across neurons to model population-level diversity. Peak (modulation of peak firing rate), Base (adjustment of background firing rate), and Width (spread of the place field) define each neuron’s trial-averaged spatial tuning profile. In contrast, Noise (random fluctuations added to the firing rate) varies across trials, introducing realistic trial-to-trial variability in field expression. c) Trial-Level Field Consistency. To assess within-neuron reliability, we compute two metrics: Place Field Consistency (left) quantifies the spatial stability of peak firing across trials, while Presence Ratio (right) measures the fraction of trials in which a detectable place field is expressed, reflecting tuning persistence over time.

Peak Firing Rate Enhances Spatial Tuning

We examined how variation in peak firing rate (1 to 20 Hz), under biologically plausible conditions and with all other parameters held constant, affects spatial tuning detection metrics (Fig. 6a). As the peak firing rate increased, the spatial information score rose because it is sensitive to how much more the neuron fires in certain locations than in others, i.e., the contrast between peak and average firing rates (Fig. 6b). At the same time, ANOVA F-statistics also increased because the firing rates became more different across spatial bins (i.e., locations), which increases the variance between spatial bins (Fig. 6b). Importantly, the variability within each bin (i.e., trial-to-trial variability) was held constant, so the increase in the F-statistic reflects more structured, spatially organized firing rather than noise. These results indicate that both metrics are sensitive to peak firing rate, but through distinct mechanisms: spatial information quantifies the contrast between a neuron's peak firing rate and its overall mean rate across spatial locations, whereas ANOVA assesses the proportion of firing rate variance systematically explained by spatial position relative to variation across trials (Fig. 6c).

Figure 6: Impact of place field features on spatial information and ANOVA statistics.

Figure 6:

a) Simulated firing rate maps illustrating variation across six place field parameters: peak firing rate, width, baseline firing rate, noise, place field consistency, and presence ratio. b) Spatial information (SI, red) and ANOVA F-statistics (blue) as a function of each place field parameter. c) Joint distributions of SI (y-axis) and ANOVA F-statistics (x-axis), with grayscale indicating values of the corresponding parameter.

Variation in Place Field Width Exhibits Non-Monotonic Effects

We next investigated how spatial tuning width, defined as the standard deviation (σ) of a Gaussian place field and varied from 1 to 20 spatial bins (out of 50 total), influences spatial tuning detection while all other parameters are held constant (Fig. 6a). Both spatial information and ANOVA F-statistics exhibited non-monotonic dependencies on field width (Fig. 6b). Spatial information peaked at narrow widths (σ ≈ 2–3 bins), where sharply localized firing created strong contrast between active and inactive locations. In contrast, ANOVA F-statistics were maximized at broader widths (σ ≈ 6–8 bins), reflecting an optimal balance between structured variation across spatial bins and maintained trial-level variability. At larger field widths, both metrics declined: spatial information decreased as spatial contrast diminished, while ANOVA F-statistics dropped as the tuning profile became increasingly flat and spatial variance was reduced. Accordingly, the joint metric analysis (Fig. 6c) revealed a curved relationship, with both metrics elevated only within a restricted range of intermediate widths. This finding indicates that similar detection scores can emerge from distinct underlying spatial profiles.

Differential Sensitivity in Baseline Firing Rate

To assess the impact of background activity on detection sensitivity, we manipulated the baseline firing rate from 0.5 to 5 Hz, while keeping all other parameters fixed. Increasing the baseline elevated overall firing rates uniformly across trials and spatial bins without altering the underlying tuning shape (Fig. 6a). This manipulation had a marked effect on spatial information scores (Fig. 6b, red), which decreased steadily as the baseline rose. Since spatial information is sensitive to the ratio of peak to average firing, higher background activity compresses this contrast and diminishes the score. ANOVA F-statistics (Fig. 6b, blue), by contrast, exhibited an initial drop at low baseline values but remained relatively stable across the remainder of the range. Since ANOVA captures variance across spatial bins relative to trial-to-trial variability, uniform increases in baseline preserve the spatial structure of the tuning curve and exert a smaller influence on the metric (Fig. 6c, blue). We observed a similar trend in empirical recordings from putative place cells in humans and rodents. Human neurons, which typically show elevated baseline activity, had lower SI scores despite consistent spatial tuning, whereas example rodent neurons with sparse firing showed high spatial information scores despite similarly reliable tuning. These findings highlight that SI may overestimate tuning in sparse neurons and underestimate it in neurons with high background activity - patterns that are particularly common in human recordings, where place fields tend to be less sharply localized and background firing rates are often high.

ANOVA Is More Responsive to Trial Level Consistency Than Spatial Information

To evaluate how trial-level variability affects detection metrics, we manipulated three aspects of tuning stability: additive noise, trial-wise place field shifts, and the presence ratio (Fig. 6a). In all cases, the trial-averaged spatial tuning profile was preserved, while trial-to-trial reliability was systematically degraded.

For additive noise (Fig. 6a), independent Gaussian noise with increasing standard deviation (0.5–5 Hz) was added to each trial. Both metrics declined as noise increased, but ANOVA F-statistics dropped sharply due to rising trial-level variance relative to across-bin differences. Spatial information decreased more gradually, remaining moderately high even at large noise levels due to its reliance on trial-averaged firing (Fig. 6b).

For place field shifts (Fig. 6a), the location of each trial’s tuning curve was jittered by a random offset (0–5 spatial bins). ANOVA F-statistics decreased rapidly with increasing shift magnitude, reflecting diminished spatial alignment across trials. Spatial information, by contrast, remained relatively stable, again reflecting averaging across spatial locations (Fig. 6b).

For presence ratio manipulations (Fig. 6a), place fields were selectively silenced on a subset of trials, reducing their presence ratio from 1.0 to 0.1. ANOVA values increased steeply with higher presence ratios, indicating sensitivity to consistent expression of tuning. Spatial information, however, remained largely flat, failing to capture trial-wise field dropout (Fig. 6b).
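The three trial-level manipulations above can be generated with a few lines on top of the Gaussian field model formalized in the Methods; the particular offsets, dropout scheme, and non-negativity clip below are illustrative assumptions.

```python
import numpy as np

def trial_fields(n_trials=40, n_bins=50, center=25, width=3.0, peak=10.0,
                 noise_sd=0.0, shift_max=0, presence=1.0, seed=2):
    """Per-trial Gaussian tuning curves with additive noise, trial-wise field
    shifts, and field dropout (presence ratio)."""
    rng = np.random.default_rng(seed)
    x = np.arange(n_bins)
    rates = np.zeros((n_trials, n_bins))
    for t in range(n_trials):
        if rng.random() > presence:                     # silence field on this trial
            continue
        mu = center + rng.integers(-shift_max, shift_max + 1)  # trial-wise shift
        rates[t] = peak * np.exp(-(x - mu) ** 2 / (2 * width ** 2))
    rates += rng.normal(0.0, noise_sd, size=rates.shape)       # additive noise
    return np.clip(rates, 0.0, None)

noisy = trial_fields(noise_sd=3.0)       # degrade via additive noise
shifted = trial_fields(shift_max=5)      # degrade via trial-wise field shifts
sparse = trial_fields(presence=0.5)      # degrade via field dropout
```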

Metric–metric comparisons (Fig. 6c) revealed consistent dissociations across all three manipulations: spatial information scores stayed relatively high even when ANOVA values were low, especially under high noise, large shifts, or sparse field expression. These results demonstrate that spatial information is largely insensitive to trial-to-trial fluctuations, while ANOVA more directly captures trial-level consistency.

Together, these simulations demonstrate that spatial information and ANOVA F-statistics capture distinct, complementary aspects of spatial tuning in single neurons. Spatial information is most sensitive to sharp, high-contrast firing fields and is strongly modulated by the ratio of peak to baseline activity, but is relatively insensitive to trial-to-trial instability and field dropout. In contrast, ANOVA is maximally sensitive to structured spatial variance and to consistent, reliably expressed tuning across trials. Notably, both metrics exhibited non-monotonic dependencies on place field width, with maximal detection sensitivity occurring at intermediate tuning widths, underscoring the importance of field size in shaping metric outcomes. Our modeling further reveals that the raw scores of each metric are differentially sensitive to specific place field properties, indicating that variations in field characteristics can lead to divergent detection outcomes depending on the metric employed.

Place Cell Feature Estimates

We next estimated several features of place cell activity from rat, human, and simulated neurons, including peak firing rate, average firing rate, the peak-to-average rate ratio, place field width, number of place fields, presence ratio, and even-odd correlation (Fig. S4, Fig. S5). We found that spatial information (SI) correlates most strongly with the peak-to-average firing rate ratio across all datasets, indicating that SI is particularly sensitive to cells with sharply tuned, high-contrast firing fields. In contrast, ANOVA F-statistics correlate more strongly with even-odd correlation, suggesting that ANOVA captures the reliability and consistency of spatial firing patterns across repeated trials. These findings underscore that SI and ANOVA emphasize different aspects of spatial tuning and are influenced by distinct underlying neuronal firing properties (Fig. S4, Fig. S5).
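For reference, two of these feature estimates can be computed directly from a trials × bins matrix as sketched below; the exact definitions used in the study (e.g., how field width or presence ratio were estimated) may differ.

```python
import numpy as np

def peak_to_average_ratio(trial_rates):
    """Ratio of peak to mean rate of the trial-averaged tuning curve."""
    tuning = trial_rates.mean(axis=0)
    return tuning.max() / tuning.mean()

def even_odd_correlation(trial_rates):
    """Pearson correlation between tuning curves from even- and odd-numbered trials."""
    even = trial_rates[::2].mean(axis=0)
    odd = trial_rates[1::2].mean(axis=0)
    return np.corrcoef(even, odd)[0, 1]
```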

PCA of Tuning Features Reveals Divergent Structure for Spatial information and ANOVA

Having established how individual properties correlate with spatial tuning metrics, we next sought to examine how interactions among multiple place field features jointly influence spatial tuning metrics. Principal component analysis (PCA) was applied to characterize the low-dimensional structure of feature interactions and to identify dominant patterns underlying spatial tuning across datasets.

PCA of rat hippocampal neurons revealed that gradients in ANOVA F-statistics and Spatial Information score occupy distinct, though overlapping, axes in feature space (Fig. 7a-b). This organization is clarified by the feature loading vectors (Fig. 7c): the SI gradient aligns most strongly with peak-over-average firing rate (yellow) and runs opposite to place field width (cyan), reflecting SI’s sensitivity to firing contrast and field sharpness. In contrast, the F-statistic gradient is oriented along even–odd correlation (red) and place field consistency (orange), and is negatively associated with the number of place fields (green), highlighting ANOVA’s emphasis on trial-to-trial reliability. The ordering and orientation of these loadings indicate that SI and ANOVA capture distinct combinations of underlying place field properties.
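A minimal sketch of this PCA step, assuming the per-neuron feature estimates are stacked into a neurons × features matrix and z-scored before projection; scikit-learn is used here purely for illustration, with placeholder data standing in for the estimated features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# features: (n_neurons, n_features) matrix of place field feature estimates,
# e.g. columns = even-odd correlation, peak-to-average ratio, field width, ...
features = np.random.default_rng(3).normal(size=(300, 8))   # placeholder data

z = StandardScaler().fit_transform(features)    # z-score each feature
pca = PCA(n_components=2).fit(z)
projection = pca.transform(z)                   # neurons in PC1-PC2 space
loadings = pca.components_.T                    # per-feature loading vectors (arrows)
```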

Figure 7: Estimated tuning features reveal distinct structures captured by SI and ANOVA.

Figure 7:

Principal component analysis (PCA) was applied to tuning features derived from hippocampal neurons, including even–odd correlation, peak-to-average ratio, place field width, presence ratio, place field consistency, number of place fields, average firing rate, and peak firing rate. a) Top: Rat neurons projected onto the first two principal components, colored by spatial information (SI). Bottom: Example rat neurons with high, median, and low SI, shown with corresponding firing rate maps. b) Top: Same PCA projection as in a, but colored by ANOVA F-statistic. Bottom: Example rat neurons with high, median, and low F-statistics, shown with corresponding firing rate maps. (c–e) PCA projections and loading vectors of place cell features for rats (c), humans (d), and simulations (e). Points represent neurons projected into the first two principal components. Arrows indicate the loading vectors for each feature, with their direction and length representing the contribution and magnitude of each feature to the first two principal components.

These differences are further illustrated by example firing rate maps sampled from each distribution (bottom panels): neurons with high SI values exhibit sharply localized, high-contrast tuning profiles, while low SI neurons show flatter responses. High F-statistic neurons display strong trial-to-trial consistency in their spatial responses, whereas low F-statistic neurons exhibit irregular or noisy activity across bins.

Feature loadings and their ordering in humans and simulations were comparable to those in rats, with consistent contributions from key features such as even–odd correlation, peak-over-average firing rate, and place field width. Moreover, the principal axes defined by these features correspond to gradients in ANOVA F-statistics and spatial information (Fig. 7d-e; Fig. S6,S7).

Together, these results demonstrate that SI and ANOVA emphasize distinct axes of the neural feature space. SI is more influenced by tuning sharpness and signal contrast, while ANOVA is more closely aligned with trial-level stability. PCA thus reveals that these two commonly used spatial tuning metrics occupy partially overlapping but separable subspaces, highlighting their complementary roles in characterizing spatial coding in the hippocampus across species and in simulations.

Discussion

Overview and Significance of the Findings

Cross-species comparisons are critical for uncovering core computational principles of hippocampal function that are either conserved or divergent across biological systems. Despite rich literatures on place cells in rodents and humans, few studies have directly compared them under comparable behavioral paradigms and analytic frameworks [21, 22, 11, 2]. Methodological conventions differ across species: rodent researchers tend to use spatial information (SI)-based methods (Table 1), whereas human researchers more often employ ANOVA-based approaches (Table 2). This lack of alignment makes it difficult to determine whether observed differences reflect true biological variation, task-related factors, or methodological differences. To begin to fill this gap, this study addresses an important methodological and conceptual challenge in cross-species spatial coding: how detection metrics influence the classification and interpretation of place cells. We systematically evaluate two spatial tuning metrics, SI and ANOVA, across human and rat single-neuron datasets, applying both fixed thresholds and permutation-based significance tests to assess how different classification criteria influence place cell detection. To assess how spatial tuning metrics respond to place-field features, we used a simulation framework with systematically varied parameters and known ground truth.

Our findings show that spatial tuning metrics are differentially responsive to place cell features, and that both the choice of metric and classification criterion (permutation-based vs. thresholding) directly influence which neurons are classified as spatially tuned. Cross-species comparisons revealed that rat place cells span a broad spectrum of spatial tuning scores, ranging from sharply defined, high-contrast fields with elevated SI and ANOVA values to broad but consistent fields with lower scores on both metrics. In contrast, human place cells often exhibit broader and more consistent tuning, with a tendency to cluster in the low SI/low ANOVA quadrant that overlaps with the lower range of the rat distribution. Such cross-species differences in tuning properties may help to contextualize prevailing methodological choices. In rodents, the relative abundance of sharply localized, high-contrast place fields has likely contributed to the widespread use of SI-based approaches, whereas in humans, the broader tuning, often accompanied by conjunctive coding of spatial and non-spatial variables, appears more readily detected with ANOVA. Our results suggest that methodological choices shape place-field classification and interpretation across species, providing a foundation for more rigorous comparative research on spatial coding.

Methodological Findings and Recommendations

Spatial Tuning Metric

Our analyses indicate that spatial information scores and ANOVA F-statistics emphasize different place-field features. SI primarily reflects the peak-to-average firing rate ratio: neurons with sharply localized, high-contrast fields attain high SI values, whereas neurons with elevated baseline activity, more common in human recordings, exhibit lower scores. SI is less responsive to trial-level variability because it is computed from the average firing rate across trials. By contrast, ANOVA F-statistics measure how robustly a neuron’s firing varies with position across repeated traversals. Formally, the F-statistic is the ratio of the variance in firing rates between spatial locations to the residual variance across trials within each location. High values arise when positional differences are pronounced and trial-level responses are internally consistent. In summary, SI primarily captures field contrast whereas ANOVA F-statistics evaluate spatial differentiation and trial-level reliability, providing complementary views on spatial coding.
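In standard notation, these two quantities can be written as follows; the SI expression is the widely used Skaggs formulation (bits/spike) and the F-statistic is the usual one-way ANOVA ratio, with symbols introduced here for illustration rather than taken from the study's code.

\mathrm{SI} = \sum_i p_i \, \frac{\lambda_i}{\bar{\lambda}} \log_2\!\frac{\lambda_i}{\bar{\lambda}}, \qquad \bar{\lambda} = \sum_i p_i \lambda_i

F = \frac{\mathrm{MS}_{\text{between bins}}}{\mathrm{MS}_{\text{within bins}}} = \frac{\sum_i n_i (\bar{r}_i - \bar{r})^2 / (k - 1)}{\sum_i \sum_t (r_{it} - \bar{r}_i)^2 / (N - k)}

where p_i is the occupancy probability of bin i, λ_i the mean firing rate in bin i, r_it the rate in bin i on trial t, k the number of spatial bins, and N the total number of trial-by-bin observations.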

Place Cell Detection Pipelines

Place cell detection methods often rely on fixed thresholding of spatial tuning metrics and/or statistical testing [24, 25, 26, 27, 35, 29, 11, 12, 1]. While threshold-based classification remains widely used, these metrics are sensitive to quantitative analysis choices, such as spatial resolution (e.g., bin size) and smoothing kernel width (Fig. S2, S3). These parameters shape the distributions of raw scores and may introduce instability when fixed thresholds are applied across datasets with different firing statistics or behavioral structures. Permutation-based significance testing offers an alternative approach, generating a null distribution for each neuron through within-session shuffling. By adjusting for neuron-specific firing variability and session-level statistics, this method tends to yield more consistent classification outcomes across different analysis settings compared to fixed-threshold approaches (Fig. S3m-n).

Recommendations

Taken together, our findings support a set of best practices for reliable and comparable place cell detection. First, while both spatial information and ANOVA effectively identify strongly tuned place cells, they differ in their treatment of borderline or noisy cases. Spatial information tends to highlight neurons with high-contrast, sharply tuned firing fields, whereas ANOVA is more likely to detect neurons with weaker but consistent modulation across trials, often characteristic of the tuning profiles observed in human recordings. Thus, the key distinction between these metrics lies not in their ability to detect canonical place cells, but in how they classify ambiguous neurons near the threshold of significance. Studies should consider the goals of their analyses and the likely properties of the place cells in their dataset to choose appropriate methods and/or consider applying multiple methods to capture different variants of putative spatial tuning. Second, permutation-based significance testing provides greater robustness than fixed thresholds by generating neuron-specific null distributions that account for baseline variability and reduce dependence on analysis parameters such as bin size and smoothing. Together, these practices offer a more principled and reproducible framework for identifying spatially tuned neurons and enable more accurate comparisons of hippocampal spatial coding across species.

Relevance to Cross-Species Explorations

Our analyses revealed both differences and meaningful overlaps in the statistical distribution of spatial tuning across species. Rat neurons exhibited a broad dynamic range on both the SI and ANOVA axes, including sharply tuned cells with high spatial contrast as well as more broadly and weakly modulated neurons. Human neurons, by contrast, clustered within a narrower range and were largely absent from the extreme high-SI and high-ANOVA region that characterizes the upper end of the rodent distribution. Nevertheless, subsets of neurons from both species fell within overlapping SI–ANOVA ranges, indicating that neurons in both species can exhibit comparable spatial tuning profiles, even if their overall prevalence and sharpness differ. Notably, permutation-based ANOVA was more effective than fixed SI thresholds at identifying these overlapping neurons.

Importantly, these moderately tuned neurons may still play an important functional role. In rodents, similar broad tuning profiles are more commonly found in the ventral hippocampus, a region associated with spatial and non-spatial contextual coding [8, 36]. In humans, spatially modulated responses are typically examined in behavioral paradigms that also engage episodic memory, decision-making, and goal-directed behavior, so that recorded neurons may reflect integrated, conjunctive coding schemes rather than purely spatial tuning [11, 13, 1]. These findings emphasize that moderate spatial tuning is not necessarily indicative of noise or weak signal, but may reflect meaningful coding strategies adapted to the task demands of each species. Considering these potential ecological and task-related differences is important for interpreting hippocampal function across species and for building cross-species models of spatial representation.

Cross-Species Explorations

Beyond the difference in species, there are several additional differences between the empirical datasets compared here that are also potentially relevant to understanding the results. Notably, these differences are not specific to these particular datasets, but reflect typical differences between rodent and human experiments. Here we briefly note these differences, relate them to the methodological findings, and suggest them as avenues for future cross-species work.

Anatomical Differences

Anatomical sampling differences likely contribute to the observed variation in spatial coding between rodents and humans. In rodents, place field size and spatial precision vary systematically along the dorsoventral axis, with small, sharply tuned fields in dorsal hippocampus and larger, more diffuse fields in ventral regions [8, 36]. Human intracranial recordings are constrained by clinical needs, leading to limited and variable anatomical coverage. While the rodent dorsoventral axis is often proposed to correspond functionally to the human anterior-posterior axis, it remains unclear whether human place cells follow a similar gradient [37, 38].

Virtual Environments vs. Real World

Another important consideration when interpreting cross-species differences in place cell properties is the use of virtual environments in human experiments versus real-world navigation in most rodent studies. In rodents, previous work has shown that virtual reality (VR) leads to broader place fields, decreased stability, reduced spatial selectivity, and lower firing rates compared to freely moving conditions [39, 40]. By contrast, human intracranial studies are typically conducted in desktop-based ‘video game–like’ virtual environments due to practical and clinical constraints. As a result, some of the observed differences in place field properties across species may not solely reflect biological variation, but may also be shaped by differences in recording context. Future research using more immersive or naturalistic paradigms in humans may help clarify the extent to which VR-specific factors influence hippocampal spatial coding.

Training, Task Engagement, and Clinical Constraints

Differences in training protocols, attentional demands, task engagement, and clinical context likely contribute to cross-species variability in observed place cell properties. Rodent experiments typically involve repeated exposure and reward-driven behavior, under which place-field properties stabilize and exhibit experience-dependent refinements, particularly when animals attend closely to spatial context [41, 42, 43, 44]. Human intracranial recordings are conducted with epilepsy patients during clinical monitoring, where sessions are comparatively shorter and pre-training is limited. Factors such as attention, fatigue, alertness, and medication can vary considerably in this setting, potentially influencing place-field properties and spatial tuning in humans [45]. These contextual differences in training intensity, attentional state, and clinical constraints should therefore be considered when interpreting cross-species variability in hippocampal spatial representations.

Future Directions

In summary, our analyses indicate that methodological choices substantially influence place-cell detection. By introducing a statistically grounded detection framework, this work provides a foundation for disentangling methodological from biological variation and for developing a more generalizable account of spatial coding across species. Across species, both rats and humans contain overlapping populations of weakly tuned neurons that may contribute not only to spatial coding but also to broader representations of context, task structure, and memory. A key direction for future research is to determine whether these shared populations support comparable computational functions across species and, more broadly, whether humans exhibit the full spectrum of place-field properties observed in rodents. Addressing these questions will require accounting for differences in anatomical sampling, task design (e.g., virtual reality vs. real-world environments), training, and recording constraints. These efforts will be essential for studying cross-species principles of spatial coding.

Methods

Literature Search

To examine the existing literature on place cell analyses, we conducted a review combining automated and manual search strategies to identify studies of single-neuron recordings examining place cell activity across species, using the LISC Python tool [46] to support automated literature collection from the PubMed database. Our goal was to characterize and compare methodological approaches in both rodent and human studies (Tables 1, 2). For rodent research, which reflects a large literature, we sub-selected a sample of studies spanning a diversity of analytical pipelines. For human research, which is a much smaller literature, we included all published studies reporting intracranial single-unit recordings during (virtual) spatial or navigational tasks. Each paper was reviewed for key methodological features including recording site, behavioral task, spatial tuning analysis, and classification criteria.

Datasets

Rat Dataset

We analyzed a publicly available dataset comprising 119 recording sessions from four male Long-Evans rats (250–400 g) navigating a 250 cm linear track for water rewards at each end [47, 33]. Each animal was chronically implanted with silicon probes targeting the medial temporal lobe (MTL). Single-neuron activity was extracted from the microwire recordings using KlustaKwik for automated spike sorting, followed by manual curation with Klusters. Full details of the recording procedures and preprocessing steps are available in [33].

For spatial analyses, the track was divided into 26 evenly spaced bins. To reduce edge effects near reward sites, the first and last three bins were excluded. Quality control criteria were applied to an initial pool of 8,500 neurons to ensure data reliability. Sessions with fewer than 15 forward or backward trials were excluded, as were neurons with firing rates below 0.2 Hz or above 20 Hz, or with total spike counts below 50. Traversals were excluded if running duration deviated more than two standard deviations from the session mean. A minimum running speed threshold of 5 cm/s was enforced, and neurons with spike presence ratios below 50% in the direction of interest were removed. After applying these criteria, a total of 3,089 neurons were included for one running direction and 3,338 neurons for the opposite direction.
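For illustration, these exclusion criteria reduce to a set of boolean filters over per-neuron summary statistics. The sketch below applies them to randomly generated statistics; the arrays are placeholders standing in for values computed from the actual recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 8500  # size of the initial neuron pool

# Placeholder per-neuron summary statistics (random values for illustration only)
firing_rates = rng.uniform(0.0, 30.0, n_neurons)    # mean firing rate (Hz)
spike_counts = rng.integers(0, 5000, n_neurons)     # total spike count
presence_ratios = rng.uniform(0.0, 1.0, n_neurons)  # fraction of trials with spikes

# Apply the inclusion criteria described above
keep = (
    (firing_rates >= 0.2) & (firing_rates <= 20.0)  # firing rate between 0.2 and 20 Hz
    & (spike_counts >= 50)                          # at least 50 total spikes
    & (presence_ratios >= 0.5)                      # spike presence ratio of at least 50%
)
print(f"{keep.sum()} of {n_neurons} neurons pass the inclusion criteria")
```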

Human Dataset

We reanalyzed human single-neuron recordings from 306 neurons in the medial temporal lobe (MTL) of 19 neurosurgical patients undergoing intracranial monitoring for treatment of drug-resistant epilepsy from a previous experiment [1, 34]. Neuronal activity was recorded while participants performed a virtual object–location memory task. Participants navigated a virtual linear track to learn object locations during encoding (2 trials per object), followed by a retrieval phase where they recalled the locations from memory (14 trials per object). Each session included 4 unique objects, for a total of 64 trials. For analysis, we excluded neurons with mean firing rates below 0.1 Hz or above 20 Hz to ensure stable and physiologically plausible neuron activity. For spatial analyses, the virtual track was linearly binned into 40 equally spaced segments, and neuronal firing rates were aligned to the participant’s position along the track.

Simulation Framework

We developed a simulation framework to generate synthetic firing rates across spatially binned locations, with tunable place field properties, in order to systematically evaluate detection metrics. Notably, these simulations model trial-by-trial firing rates directly, rather than simulating individual spike trains, allowing precise control over key parameters such as field width, peak rate, baseline activity, and trial-level variability (Fig. 5).

Place Field Simulation

The activity of each neuron is modeled as a combination of a Gaussian tuning curve, a constant baseline, and additive noise on a linear spatial trajectory (Fig. 5a).

Place Field Peak

A Gaussian curve was generated to simulate the place field of a neuron. This curve represents the neuron’s firing rate as a function of spatial position along a linear track. The firing rate at position x was defined by:

F(x) = A \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right) \quad (1)

where F(x) is the firing rate at position x, A is the peak firing rate amplitude, μ is the center of the place field, and σ is the standard deviation that determines the width of the field. The spatial environment was discretized into 50 equally spaced spatial bins.

Baseline

A baseline firing rate was added to reflect background neural firing. This baseline, denoted B(x), was set to a constant value B0 across all spatial bins to represent the neuron’s firing rate in the absence of any spatial tuning or external stimuli:

B(x) = B_0 \quad (2)

Noise

To simulate variability in neuronal firing, we added independent noise to each spatial bin. The noise at position x, denoted N(x), was drawn from a Gaussian distribution:

N(x) \sim \mathcal{N}\left( 0, \sigma_n^2 \right) \quad (3)

where N(x) is the noise at spatial bin x, and σn is the standard deviation controlling the magnitude of the noise. This noise was applied uniformly across all spatial bins, introducing random fluctuations that capture the inherent variability of neuronal activity.

Simulated Place Field

The final simulated firing rate was obtained by summing the Gaussian place field peak, the constant baseline firing rate, and the spatially distributed noise. The resulting firing rate at position x, denoted G(x), was computed as:

G(x) = F(x) + N(x) + B_0 \quad (4)

This combined signal represents the neuron’s firing rate as a function of spatial position, incorporating both structured activity due to the place field and random fluctuations due to noise.
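As a minimal sketch of this generative model (Eqs. 1–4), the function below produces a single-trial firing rate profile. The function name and default parameter values are illustrative, not taken from the released simulation code.

```python
import numpy as np

def simulate_place_field(n_bins=50, peak=10.0, center=25.0, width=3.0,
                         baseline=1.0, noise_sd=1.0, rng=None):
    """Simulate one trial of firing rates: Gaussian field + baseline + noise."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.arange(n_bins)
    field = peak * np.exp(-(x - center) ** 2 / (2 * width ** 2))  # Eq. 1
    noise = rng.normal(0.0, noise_sd, n_bins)                     # Eq. 3
    return field + noise + baseline                               # Eq. 4

rates = simulate_place_field()
```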

Place Field Parameters

The tuning parameters (peak firing rate, place field width, baseline rate, and noise level) were independently manipulated across neurons (Fig. 5b). Trial-to-trial variability was introduced through jittered field locations and a presence ratio that controlled the probability of field expression (Fig. 5c).

Height

To model variability in peak firing rates across neurons, we varied the amplitude parameter A in the Gaussian tuning curve (Eq. 1), which sets the maximum firing rate at the place field center. We generated 1000 samples of A, each drawn from a uniform distribution over the range of [1, 20] Hz.

Width

To simulate variability in spatial selectivity, we varied the place field width by adjusting the standard deviation parameter σ in the Gaussian tuning curve (Eq. 1), defined over a 50-bin spatial track. Since σ controls the spread of the tuning curve, it effectively determines half the width of the place field. Larger σ values yield broader spatial tuning, while smaller values result in more sharply localized fields. σ was sampled from 1,000 evenly spaced values between 1 and 20 spatial bins.

Baseline

To simulate background neuronal activity, we added a constant baseline firing rate B0 across all spatial bins (Eq. 2). This represents non-selective, ongoing activity unrelated to spatial position. We drew 1,000 samples of B0 uniformly between 0.5 and 5 Hz across the simulated population.

Noise

To introduce trial-to-trial variability, we added independent Gaussian noise to the firing rate at each spatial bin on every trial. The noise term N(x) was drawn from a zero-mean Gaussian distribution with standard deviation σn (Eq. 3). This noise was applied uniformly across space, simulating random fluctuations in neuronal firing. The noise level σn was sampled uniformly between 0.5 and 5 Hz for 1000 neurons.

Place Field Consistency

To model variability in place field position across trials, we introduced random shifts across simulated trials to the location of the Gaussian tuning curve. For each of 1,000 simulated neurons, the place field center μ was offset on each trial by a value drawn from a uniform distribution between 0 and 5 spatial bins. This introduces realistic trial-by-trial variability in the field location, simulating fluctuations commonly observed in empirical recordings.

Presence Ratio

To simulate variability in how consistently a neuron expressed its place field across trials, we used a presence ratio parameter. This ratio defined the proportion of trials in which the place field was active. On trials where the field was inactive, firing rates were set to baseline plus noise only. Presence ratios were sampled uniformly between 0.1 and 1 across 1,000 simulated neurons.
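Combining the elements above, trial-level variability can be layered onto the single-trial model by jittering the field center and dropping the field on a subset of trials. The sketch below parallels the single-trial sketch earlier; the function name and default parameters are illustrative, not the released simulation framework.

```python
import numpy as np

def simulate_trials(n_trials=40, n_bins=50, peak=10.0, center=25.0, width=3.0,
                    baseline=1.0, noise_sd=1.0, max_jitter=5.0,
                    presence_ratio=0.8, rng=None):
    """Simulate a trials-by-bins firing rate matrix with field jitter and presence ratio."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.arange(n_bins)
    rates = np.empty((n_trials, n_bins))
    for trial in range(n_trials):
        # Jitter the field center by a uniform offset on each trial
        jittered_center = center + rng.uniform(0, max_jitter)
        field = peak * np.exp(-(x - jittered_center) ** 2 / (2 * width ** 2))
        # With probability (1 - presence_ratio), the field is not expressed on this trial
        if rng.random() > presence_ratio:
            field = np.zeros(n_bins)
        rates[trial] = field + baseline + rng.normal(0.0, noise_sd, n_bins)
    return rates

trial_rates = simulate_trials()
```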

Statistical Analysis

All firing rate computations and statistical analyses were performed using the spiketools Python package [48]. For both rodent and human datasets, firing rates were computed as spike counts per spatial bin normalized by occupancy time.

Spatial Information Score

To quantify spatial tuning across the dataset, we computed the spatial information (SI) score for each neuron, which measures how much information a neuron’s firing conveys about the subject’s position. The SI score was defined as:

SI = \sum_{x} p(x) \, \frac{\gamma(x)}{\bar{\gamma}} \, \log_2 \frac{\gamma(x)}{\bar{\gamma}} \quad (5)

where p(x) is the occupancy probability of spatial bin x, γ(x) is the mean firing rate in bin x, and γ¯ is the overall mean firing rate across all bins. SI was computed from each neuron’s trial-averaged firing rate across spatial bins.
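The computation in Eq. 5 can be sketched in a few lines of numpy. This illustrates the formula itself, using an occupancy-weighted mean rate; it is not the spiketools implementation used in the analyses, and the example rate map is synthetic.

```python
import numpy as np

def spatial_information(rate_map, occupancy):
    """Compute spatial information (bits/spike) from a trial-averaged rate map."""
    p = occupancy / occupancy.sum()       # occupancy probability p(x)
    mean_rate = np.sum(p * rate_map)      # overall mean firing rate, gamma-bar
    ratio = rate_map / mean_rate
    valid = ratio > 0                     # skip empty bins, where log2 is undefined
    return np.sum(p[valid] * ratio[valid] * np.log2(ratio[valid]))

# Example: uniform occupancy over 40 bins and a synthetic Gaussian-shaped rate map
occupancy = np.ones(40)
rate_map = 1.0 + 9.0 * np.exp(-(np.arange(40) - 20) ** 2 / (2 * 3.0 ** 2))
print(spatial_information(rate_map, occupancy))
```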

ANOVA F-Statistics

To assess whether neurons were spatially modulated, we performed a one-way ANOVA on trial-by-bin firing rates for each neuron. For each trial, we computed the neuron’s firing rate within each spatial bin, yielding a matrix of firing rates across trials and positions. The one-way ANOVA tested whether the mean firing rate differed significantly across spatial bins, quantifying how much of the firing rate variance could be attributed to spatial location versus trial-to-trial variability. A high F-statistic indicated that a neuron’s activity was reliably modulated by position across trials.

\text{firing rate} \sim C(\text{spatial bin}) \quad (6)
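A minimal sketch of this test using scipy's one-way ANOVA is shown below, treating each spatial bin as one group of per-trial firing rates. The example input is random (spatially untuned) data, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import f_oneway

def spatial_anova_f(trial_rates):
    """One-way ANOVA of firing rate across spatial bins (rows = trials, columns = bins)."""
    # Each spatial bin contributes one group of per-trial firing rates
    groups = [trial_rates[:, b] for b in range(trial_rates.shape[1])]
    f_stat, p_value = f_oneway(*groups)
    return f_stat, p_value

# Example: 40 trials x 50 bins of random rates with no spatial tuning
rng = np.random.default_rng(0)
f_stat, p_value = spatial_anova_f(rng.normal(2.0, 1.0, size=(40, 50)))
print(f_stat, p_value)
```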

Spatial Information Score Threshold

To evaluate place cell classification, we applied two criteria. First, we used a commonly adopted threshold from rodent studies: neurons with SI greater than 0.25 bits/spike [24, 28] were labeled as spatially selective. Second, we assessed the statistical significance of each neuron's spatial tuning with permutation testing, as described in the next section.

Permutation Testing

To assess statistical significance for both spatial information (SI) scores and ANOVA F-statistics, we performed permutation testing using circular shuffling of spike trains within each session. For each neuron, spike times were circularly shifted along the session-wide spike train 1,000 times. This procedure preserved the overall temporal structure, firing rate, and autocorrelation of the spike train, while disrupting the relationship between spike timing and spatial position.

For each permutation, the shuffled spike train was used to recompute firing rates across spatial bins. A null distribution was then generated by recalculating the SI score and ANOVA F-statistic across the 1,000 shuffled iterations. A neuron was considered significantly spatially modulated if its observed score exceeded the 95th percentile of the corresponding null distribution (permutation-corrected, p < 0.05).
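This procedure can be sketched as follows. Here compute_score stands in for either the SI or ANOVA computation, the occupancy normalization is simplified (counted in position samples rather than time), and all names are illustrative rather than drawn from the analysis code.

```python
import numpy as np

def circular_shuffle_test(spike_times, positions, pos_times, bin_edges,
                          compute_score, n_shuffles=1000, rng=None):
    """Circular-shuffle permutation test for a spatial tuning score."""
    rng = np.random.default_rng() if rng is None else rng
    start, duration = pos_times[0], pos_times[-1] - pos_times[0]

    def score_from_spikes(spikes):
        # Interpolate position at each spike time, bin spikes by position,
        # and normalize by occupancy (simplified: occupancy in position samples)
        spike_pos = np.interp(spikes, pos_times, positions)
        spike_counts, _ = np.histogram(spike_pos, bins=bin_edges)
        occupancy, _ = np.histogram(positions, bins=bin_edges)
        rate_map = spike_counts / np.maximum(occupancy, 1)
        return compute_score(rate_map)

    observed = score_from_spikes(spike_times)
    null = np.empty(n_shuffles)
    for i in range(n_shuffles):
        # Circularly shift all spike times by a random offset, wrapping at session end
        shift = rng.uniform(0, duration)
        shuffled = start + (spike_times - start + shift) % duration
        null[i] = score_from_spikes(shuffled)

    # Significant if the observed score exceeds the 95th percentile of the null
    p_value = np.mean(null >= observed)
    return observed, p_value
```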

Feature Estimation

To characterize tuning properties and population structure across neurons, we extracted a set of quantitative features from each neuron’s spatial firing profile. These features were designed to capture key aspects of spatial tuning strength, field structure, and trial-level consistency. For each neuron, we computed the following metrics:

  • Peak Firing Rate: the maximum firing rate observed across all spatial bins.

  • Average Firing Rate: the mean firing rate across all bins and trials.

  • Peak-to-average firing rate ratio: a normalized measure of tuning sharpness, calculated as the ratio between peak and average firing rate.

  • Place Field Width: the total number of spatial bins that exceeded a threshold firing rate (e.g., 20% of the peak), indicating contiguous spatial tuning.

  • Number of Place Fields: the number of distinct spatially contiguous regions (fields) meeting minimum width and firing threshold criteria.

  • Place Field Consistency: the proportion of trials in which the spatial bin of peak firing fell within a fixed window of ±3 bins around the neuron’s overall peak bin, used to assess the reliability and consistency of spatial tuning across trials.

  • Presence ratio: the proportion of trials in which the neuron exhibited non-zero firing in at least one bin within its identified place field region.

  • Even–Odd Correlation: The Pearson correlation between the neuron’s spatial firing rate maps computed separately from even- and odd-numbered trials, providing an estimate of trial-to-trial consistency in spatial tuning.

These features were concatenated into a feature vector for each neuron. The resulting matrix (neurons × features) served as input for dimensionality reduction using Principal Component Analysis (PCA).
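For illustration, a few of these features are sketched below for a trials-by-bins firing rate matrix. The 20% threshold follows the description above; the function name and the particular subset of features shown are illustrative.

```python
import numpy as np

def example_tuning_features(trial_rates):
    """Compute a few example tuning features from a trials-by-bins rate matrix."""
    mean_map = trial_rates.mean(axis=0)

    # Peak-to-average firing rate ratio
    peak_to_avg = mean_map.max() / mean_map.mean()

    # Place field width: number of bins exceeding 20% of the peak rate
    field_width = np.sum(mean_map > 0.2 * mean_map.max())

    # Even-odd correlation: Pearson r between rate maps from even vs. odd trials
    even_map = trial_rates[0::2].mean(axis=0)
    odd_map = trial_rates[1::2].mean(axis=0)
    even_odd_r = np.corrcoef(even_map, odd_map)[0, 1]

    return np.array([peak_to_avg, field_width, even_odd_r])
```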

Dimensionality Reduction with PCA

We performed Principal Component Analysis (PCA) on the neuron-by-feature matrix to identify low-dimensional structure in the tuning properties across the population. PCA finds a set of orthogonal axes (principal components) that explain the maximal variance in the data, enabling a compact representation of the dominant tuning patterns. The PCA projection allowed us to visualize and interpret the feature space, identify subpopulations with shared tuning characteristics, and explore the principal axes of variability in spatial encoding across neurons. Additionally, we examined the feature loadings on each component to estimate which place field properties most strongly contributed to each axis, and how these components related to standard spatial tuning metrics such as spatial information or ANOVA F-statistic.
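A minimal sketch of this step with scikit-learn is shown below. The feature standardization and the two-component projection are assumptions made for illustration, and the input matrix here is random rather than the actual neuron-by-feature matrix.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder neuron-by-feature matrix (random values for illustration only)
rng = np.random.default_rng(0)
feature_matrix = rng.normal(size=(300, 8))  # 300 neurons x 8 tuning features

# Standardize features before projection (an assumption; scaling is not specified above)
scaled = StandardScaler().fit_transform(feature_matrix)

# Project neurons onto the first two principal components
pca = PCA(n_components=2)
components = pca.fit_transform(scaled)

# Variance explained and feature loadings indicate which properties drive each axis
print(pca.explained_variance_ratio_)
print(pca.components_)
```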

Supplementary Material

Supplement 1

Acknowledgements

We would like to thank all participating patients for their time and contributions. We also acknowledge the members of the Jacobs Lab for helpful discussions and feedback throughout the project.

Funding Sources

This work was supported by NIH R01-MH104606.

Abbreviations

SI

Spatial Information

ANOVA

Analysis of Variance

Footnotes

Materials Descriptions and Availability Statements

Project Repository

This project is openly available through an online project repository, which includes all the code used for data pre-processing and analysis.

Project Repository: https://github.com/HSUPipeline/PlaceCellMethods

Dataset

This project uses electrophysiological data collected from neurosurgical patients, as well as an open-access dataset of rat recordings from CRCNS.org (http://dx.doi.org/10.6080/K09G5JRZ). The human data were collected as part of a previously published study and will be made available prior to publication [1]. A custom simulation framework was developed to evaluate place cell detection methods across species and will be released as part of the open-source SpikeTools repository prior to publication.

Software

All code used and developed for this project was written in the Python programming language. The code is openly available, licensed for reuse, and deposited in the project repository.

Management of the dataset was conducted using the Human Single Unit (HSU) Pipeline: https://github.com/HSUPipeline

Analyses of the single-neuron data were performed using the open-source SpikeTools toolbox: https://github.com/spiketools/spiketools

Literature searches and related resources were organized using LISC, an open-source Python module for literature analysis: https://github.com/HSUPipeline/Literature

Disclosures

Conflicts of Interest

The authors declare no competing interests.

References

  • 1. Qasim SE, Miller J, Inman CS, Gross RE, Willie JT, Lega B, Lin JJ, Sharan A, Wu C, Sperling MR, Sheth SA, McKhann GM, Smith EH, Schevon C, Stein JM, and Jacobs J. Memory retrieval modulates spatial tuning of single neurons in the human entorhinal cortex. Nature Neuroscience 2019; 22:2078–86. doi: 10.1038/s41593-019-0523-z
  • 2. O’Keefe J and Dostrovsky J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research 1971; 34:171–5. doi: 10.1016/0006-8993(71)90358-1
  • 3. Moser EI, Kropff E, and Moser MB. Place cells, grid cells, and the brain’s spatial representation system. Annual Review of Neuroscience 2008; 31:69–89. doi: 10.1146/annurev.neuro.31.061307.090723
  • 4. McNaughton BL, Barnes CA, and O’Keefe J. The contributions of position, direction, and velocity to single unit activity in the hippocampus of freely-moving rats. Experimental Brain Research 1983; 52:41–9. doi: 10.1007/BF00237147
  • 5. O’Keefe J and Nadel L. The Hippocampus as a Cognitive Map. Oxford, UK: Oxford University Press, 1978
  • 6. Moser MB, Rowland DC, and Moser EI. Place Cells, Grid Cells, and Memory. Cold Spring Harbor Perspectives in Biology 2015; 7:a021808. doi: 10.1101/cshperspect.a021808
  • 7. Henriksen EJ, Colgin LL, Barnes CA, Witter MP, Moser MB, and Moser EI. Spatial representation along the proximodistal axis of CA1. Neuron 2010; 68:127–37. doi: 10.1016/j.neuron.2010.08.042
  • 8. Jung M, Wiener S, and McNaughton B. Comparison of spatial firing characteristics of units in dorsal and ventral hippocampus of the rat. Journal of Neuroscience 1994; 14:7347–56. doi: 10.1523/JNEUROSCI.14-12-07347.1994
  • 9. Strange BA, Witter MP, Lein ES, and Moser EI. Functional organization of the hippocampal longitudinal axis. Nature Reviews Neuroscience 2014; 15:655–69. doi: 10.1038/nrn3785
  • 10. Jin DZ and Lee I. Differential Encoding of Place Value between the Dorsal and Intermediate Hippocampus. Current Biology 2021; 31:3147–3155.e4. doi: 10.1016/j.cub.2021.05.020
  • 11. Ekstrom AD, Kahana MJ, Caplan JB, Fields TA, Isham EA, Newman EL, and Fried I. Cellular networks underlying human spatial navigation. Nature 2003; 425:184–8. doi: 10.1038/nature01964
  • 12. Jacobs J, Kahana MJ, Ekstrom AD, Mollison MV, and Fried I. A sense of direction in human entorhinal cortex. Proceedings of the National Academy of Sciences (PNAS) 2010; 107:6487–92. doi: 10.1073/pnas.0911213107
  • 13. Miller JF, Neufang M, Solway A, Brandt A, Trippel M, Mader I, Hefft S, Merkow M, Polyn SM, Jacobs J, Kahana MJ, and Schulze-Bonhage A. Neural activity in human hippocampal formation reveals the spatial context of retrieved memories. Science 2013; 342:1111–4. doi: 10.1126/science.1244056
  • 14. Jacobs J, Weidemann CT, Miller JF, Solway A, Burke JF, Wei XX, Suthana N, Sperling MR, Sharan AD, Fried I, and Kahana MJ. Direct recordings of grid-like neuronal activity in human spatial navigation. Nature Neuroscience 2013; 16:1188–90. doi: 10.1038/nn.3466
  • 15. Tsitsiklis M, Miller J, Qasim SE, Inman CS, Gross RE, Willie JT, Smith EH, Sheth SA, Schevon CA, Sperling MR, Sharan A, Stein JM, and Jacobs J. Single-neuron representations of spatial targets in humans. Current Biology 2020; 30:245–253.e4. doi: 10.1016/j.cub.2019.11.048
  • 16. Kunz L, Brandt A, Reinacher PC, Staresina BP, Reifenstein ET, Weidemann CT, Herweg NA, Patel A, Tsitsiklis M, Kempter R, Kahana MJ, Schulze-Bonhage A, and Jacobs J. A neural code for egocentric spatial maps in the human medial temporal lobe. Neuron 2021; 109:2781–2796.e10. doi: 10.1016/j.neuron.2021.06.019
  • 17. Qasim SE, Fried I, and Jacobs J. Phase precession in the human hippocampus and entorhinal cortex. Cell 2021; 184:3242–3255.e10. doi: 10.1016/j.cell.2021.04.027
  • 18. Schonhaut DR, Aghajan ZM, Kahana MJ, and Fried I. A neural code for time and space in the human brain. Cell Reports 2023; 42:113238. doi: 10.1016/j.celrep.2023.113238
  • 19. Donoghue T, Cao R, Han CZ, Holman CM, Brandmeir NJ, Wang S, and Jacobs J. Single neurons in the human medial temporal lobe flexibly shift representations across spatial and memory tasks. Hippocampus 2023; 33:494–507. doi: 10.1002/hipo.23539
  • 20. Kunz L, Staresina BP, Reinacher PC, Brandt A, Guth TA, Schulze-Bonhage A, and Jacobs J. Ripple-locked coactivity of stimulus-specific neurons and human associative memory. Nature Neuroscience 2024; 27:674–83. doi: 10.1038/s41593-023-01550-x
  • 21. Souza BC, Pavão R, Belchior H, and Tort ABL. On Information Metrics for Spatial Coding. Neuroscience 2018; 375:62–73. doi: 10.1016/j.neuroscience.2018.01.066
  • 22. Grijseels DM, Shaw K, Barry C, and Hall CN. Choice of method of place cell classification determines the population of cells identified. PLOS Computational Biology 2021; 17:e1008923. doi: 10.1371/journal.pcbi.1008835
  • 23. Skaggs WE, McNaughton BL, Gothard KM, and Markus EJ. An information-theoretic approach to deciphering the hippocampal code. Advances in Neural Information Processing Systems 1992; 5:1030–7
  • 24. Roux L, Hu B, Eichler R, Stark E, and Buzsáki G. Sharp wave ripples during learning stabilize hippocampal spatial map. Nature Neuroscience 2017; 20:845–53. doi: 10.1038/nn.4543
  • 25. Langston RF, Ainge JA, Couey JJ, Canto CB, Bjerknes TL, Witter MP, Moser EI, and Moser MB. Development of the Spatial Representation System in the Rat. Science 2010; 328:1576–80. doi: 10.1126/science.1188214
  • 26. Deshmukh SS and Knierim JJ. Representation of non-spatial and spatial information in the lateral entorhinal cortex. Frontiers in Behavioral Neuroscience 2011; 5:69. doi: 10.3389/fnbeh.2011.00069
  • 27. Chen G, King JA, Burgess N, and O’Keefe J. How vision and movement combine in the hippocampal place code. Proceedings of the National Academy of Sciences 2013; 110:378–83. doi: 10.1073/pnas.1215834110
  • 28. Scaplen KM, Ramesh RN, Nadvar N, Ahmed OJ, and Burwell RD. Inactivation of the Lateral Entorhinal Area Increases the Influence of Visual Cues on Hippocampal Place Cell Activity. Frontiers in Systems Neuroscience 2017; 11:40. doi: 10.3389/fnsys.2017.00040
  • 29. Duvelle É, Grieves RM, Liu A, Summerfield C, Pezzulo G, and Spiers HJ. Hippocampal place cells encode global location but not connectivity in a complex space. Current Biology 2021; 31:4719–4730.e6. doi: 10.1016/j.cub.2021.01.005
  • 30. Harland B, Contreras M, Souder M, and Fellous JM. Dorsal CA1 hippocampal place cells form a multi-scale representation of megaspace. Current Biology 2021; 31:2178–2190.e6. doi: 10.1016/j.cub.2021.03.003
  • 31. Jin SW and Lee I. Differential encoding of place value between the dorsal and intermediate hippocampus. Current Biology 2021; 31:2421–2433.e4. doi: 10.1016/j.cub.2021.04.073
  • 32. Miller JF, Fried I, Suthana N, and Jacobs J. Repeating Spatial Activations in Human Entorhinal Cortex. Current Biology 2015; 25:1080–5. doi: 10.1016/j.cub.2015.02.045
  • 33. Mizuseki K, Sirota A, Pastalkova E, Diba K, and Buzsáki G. Multiple single unit recordings from different rat hippocampal and entorhinal regions while the animals were performing multiple behavioral tasks. CRCNS.org. 2013. doi: 10.6080/K09G5JRZ
  • 34. Goyal A, Miller J, Qasim SE, Watrous AJ, Zhang H, Stein JM, Inman CS, Gross RE, Willie JT, Lega B, Lin JJ, Sharan A, Wu C, Sperling MR, Sheth SA, McKhann GM, Smith EH, Schevon C, and Jacobs J. Functionally distinct high and low theta oscillations in the human hippocampus. Nature Communications 2020; 11. doi: 10.1038/s41467-020-15670-6
  • 35. Aronov D, Nevers R, and Tank DW. Mapping of a non-spatial dimension by the hippocampal–entorhinal circuit. Nature 2017; 543:719–22. doi: 10.1038/nature21692
  • 36. Kjelstrup KB, Solstad T, Brun VH, Hafting T, Leutgeb S, Witter MP, Moser EI, and Moser MB. Finite Scale of Spatial Representation in the Hippocampus. Science 2008; 321:140–3. doi: 10.1126/science.1157086
  • 37. Poppenk J, Evensmoen HR, Moscovitch M, and Nadel L. Long-axis specialization of the human hippocampus. Trends in Cognitive Sciences 2013; 17:230–40. doi: 10.1016/j.tics.2013.03.005
  • 38. Brunec IK, Bellana B, Ozubko JD, Man V, Robin J, Liu ZX, Grady CL, Rosenbaum RS, Winocur G, Barense MD, and Moscovitch M. Multiple Scales of Representation along the Hippocampal Anteroposterior Axis in Humans. Current Biology 2018; 28:2129–2142.e6. doi: 10.1016/j.cub.2018.05.016
  • 39. Aghajan ZM, Acharya L, Moore JJ, Cushman JD, Vuong C, and Mehta MR. Impaired spatial selectivity and intact phase precession in two-dimensional virtual reality. Nature Neuroscience 2015; 18:121–8. doi: 10.1038/nn.3884
  • 40. Chen G, King JA, Lu Y, Cacucci F, and Burgess N. Spatial cell firing during virtual navigation of open arenas by head-restrained mice. eLife 2018; 7:e34789. doi: 10.7554/eLife.34789
  • 41. Wilson MA and McNaughton BL. Dynamics of the hippocampal ensemble code for space. Science 1993; 261:1055–8. doi: 10.1126/science.8351520
  • 42. Mehta MR, Barnes CA, and McNaughton BL. Experience-dependent, asymmetric expansion of hippocampal place fields. Proceedings of the National Academy of Sciences 1997; 94:8918–21. doi: 10.1073/pnas.94.16.8918
  • 43. Kentros CG, Agnihotri NT, Streater S, Hawkins RD, and Kandel ER. Increased attention to spatial context increases both place field stability and spatial memory. Neuron 2004; 42:283–95. doi: 10.1016/s0896-6273(04)00192-8
  • 44. Monaco JD, Rao G, Roth ED, and Knierim JJ. Attentive scanning behavior drives one-trial potentiation of hippocampal place fields. Nature Neuroscience 2014; 17:725–31. doi: 10.1038/nn.3687
  • 45. Parvizi J and Kastner S. Promises and limitations of human intracranial electroencephalography. Nature Neuroscience 2018; 21:474–83. doi: 10.1038/s41593-018-0108-2
  • 46. Donoghue T. LISC: A Python Package for Scientific Literature Collection and Analysis. Journal of Open Source Software 2019; 4:1674. doi: 10.21105/joss.01674
  • 47. Diba K and Buzsáki G. Hippocampal network dynamics constrain the time lag between pyramidal cells across modified environments. Journal of Neuroscience 2008; 28:13448–56. doi: 10.1523/JNEUROSCI.3824-08.2008
  • 48. Donoghue T, Maesta-Pereira S, Han CZ, Qasim SE, and Jacobs J. spiketools: a Python package for analyzing single-unit neural activity. Journal of Open Source Software 2023; 8:5268. doi: 10.21105/joss.05268
