Abstract
Buttenfield (1988) pioneered research on multiple representations at the dawn of GIScience. Her efforts evoked inquiries into fundamental issues arising from the selective abstractions of infinite geographic complexity in spatial databases, cartography, and application needs for varied geographic details. These fundamental issues posed ontological challenges (e.g., entity classification) and implementational complications (e.g., duplication and inconsistency) in geographic information systems (GIS). Expanding upon Buttenfield’s line of research over the last three decades, this study reviewed multiple representations in spatial databases, spatial cognition, and deep learning. Initially perceived as a hindrance in GIS, multiple representations were found to offer new perspectives to encode and decipher geographic complexity. This paper commenced by acknowledging Buttenfield’s pivotal contributions to multiple representations in GIScience. Subsequent discussions synthesized the literature to outline cognitive representations of space in the brain’s hippocampal formation and feature representations in deep learning. By cross-referencing related concepts of multiple representations in GIScience, the brain’s spatial cells, and machine learning algorithms, this review concluded that multiple representations facilitate learning geography for both humans and machines.
Keywords: multiple representations, spatial cognition, spatial cells, deep learning
1. Introduction
In the context of this paper, the term “representation” refers to a surrogate for either a concrete entity (e.g., a road or a land parcel) or an abstract concept (e.g., heat waves or enclaves). Multiple representations (MR) suggest using multiple surrogates to represent the same thing. Both human brains and information systems commonly employ MR in computation and learning. What gives rise to MR? What roles do MR play in learning and knowledge production? Dr. Barbara Buttenfield pioneered MR research in GIScience, and since her early work, cartographers and geospatial scientists have made marked progress on MR-related issues in map generalization, automated cartography, and multi-scalar spatial databases. The fundamental concepts and frameworks developed in the 1990s and early 2000s (e.g., Buttenfield & McMaster, 1991; Muller et al., 1995; van Oosterom, 2009) remain relevant and continue to guide research development, including the use of machine learning for map generalization. While meritorious, comprehensive reviews of MR, map generalization, and machine learning applications in GIScience are beyond this paper’s scope. Instead, this review study aims to (1) relate the roles of MR across GIScience, neuroscience, and artificial intelligence, (2) highlight the latent functions of MR used in spatial information encoding and processing in human brains and computing algorithms, and (3) reconceive MR, from a problem in GIS databases to an effective approach for understanding the complexity and dynamics of geographic worlds.
MR research was active at the dawn of GIScience. In 1988, the US National Science Foundation funded the University of California at Santa Barbara, the State University of New York at Buffalo, and the University of Maine to establish the National Center for Geographic Information and Analysis (NCGIA). Over the subsequent years, NCGIA grounded and accelerated research and education in GIScience. More than 30 years later, NCGIA’s impacts remain reverberant. Buttenfield led NCGIA Research Initiative 3 (I3) on MR from 1988–1990 (Buttenfield, 1993). She brought together researchers across institutions and government agencies to investigate how a digital object should adjust geometry and topological structures to adequately capture real-world geographic entities at a given scale, and the subsequent implications for geographic information applications. These use-inspired inquiries aimed to improve digital representations of reality while delving into how humans perceive, assess, communicate, interact with, learn about, and make decisions in geographic worlds. The I3 deliberated on MR issues in database management, cartographic generalization, scale dependency, and the modifiable areal unit problem. Moreover, it extended to geospatial ontologies, cartographic knowledge formalization, and many other fundamental topics in GIScience.
Publications on map generalization and multiple representations proliferated in the mid-1990s and early 2000s, notably Brewer & Buttenfield (2010), Buttenfield & Frye (2006), Frank & Timpf (1994), van Oosterom (2009), Sester (1999), Spaccapietra et al. (2000), Sarjakoski (2007), and Weibel & Dutton (1999). Researchers working on map generalization were among the early adopters of Expert Systems and Artificial Intelligence (AI) applications in GIScience to formulate cartographic rules for automatically scaling geometric features, their topological relations, and label placements on maps (Buttenfield & McMaster, 1991). In recent years, the rise of machine learning (ML) in geospatial applications and, more broadly, Geospatial AI (GeoAI) has boosted MR-related research in algorithmic computing. In particular, Deep Learning (DL) algorithms have become popular for complex polyline simplification, such as Generative Adversarial Networks to simplify road networks (Courtial et al., 2023; Du et al., 2022) and Deep Convolutional Neural Networks to simplify building outlines (Feng et al., 2019).
Map generalization aims to display spatial features properly at a given scale but may not store the revised shapes in a database. Data generalization, on the other hand, needs a means to retain or trace versions of data at different levels of detail. Advanced algorithmic approaches to geospatial data generalization have received increasing attention in recent years (cf. Feng et al., 2019; Forghani et al., 2021; Gao et al., 2022; Grilli & Remondino, 2020; Touya et al., 2019; Yang et al., 2022). In essence, applications of convolutional neural networks (CNN) and other deep learning algorithms progressively transform the input data into more abstract features toward the deeper layers (Albawi et al., 2017; Yuan & McKee, 2022). In this line of studies, a neural network serves as the MR framework and enables algorithmic strategies of connections, convolution, pooling, and regularization to encode and decode the input data at multiple levels of abstraction to achieve the desired computational goals (e.g., image classification, image segmentation, speech recognition, natural language processing, etc.). By unpacking a mixture of data into MR, deep learning algorithms unearth and organize levels of detail to facilitate learning and understanding of infinite complexity in the geographic world. The process of knowledge production, by humans or machines, is an abstraction endeavor and is subject to the constraint that no representation can capture infinite geographic complexity.
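The layer-by-layer abstraction described above can be illustrated with a minimal sketch, assuming nothing beyond the core downsampling step (here, 2x2 average pooling) that CNNs use to derive coarser representations from a fine grid; each level is one representation of the same input at a lower level of detail. The function names are illustrative, not from any particular library.

```python
def avg_pool_2x2(grid):
    """Average non-overlapping 2x2 blocks; halves each dimension."""
    n = len(grid)
    return [[(grid[2 * i][2 * j] + grid[2 * i][2 * j + 1] +
              grid[2 * i + 1][2 * j] + grid[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(n // 2)] for i in range(n // 2)]

def multi_level_representations(grid, levels):
    """Return the input plus successively pooled (coarser) versions."""
    reps = [grid]
    for _ in range(levels):
        grid = avg_pool_2x2(grid)
        reps.append(grid)
    return reps

# An 8x8 "image" pooled twice yields 8x8, 4x4, and 2x2 representations,
# analogous to the same geography encoded at three levels of detail.
fine = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
reps = multi_level_representations(fine, levels=2)
print([len(r) for r in reps])  # [8, 4, 2]
```

Real deep networks interleave learned convolutions with such pooling steps, but the multi-resolution structure of the intermediate representations is the point of the analogy.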
Going beyond the existing MR research in GIScience, this paper posits new perspectives to shed light on the importance of MR to spatial learning and knowledge production for both humans and machines by cross-referencing concepts from data modeling in GIS, neural cells in the brain, and algorithmic architecture in AI. Geographic complexity gives rise to and demands multiple representations in GIS. The next section highlights studies on handling MR in maps and spatial databases. Besides advances in database modeling and cartographic generalization, these GIScience studies improve our understanding of scale dependency in geographic observations, features, and organizations. The following sections discuss MR in spatial cognition and MR in GeoAI. From human brains to computers, MR play prominent roles in encoding and decoding spatial information and knowledge. What can we learn from MR in spatial cognition and GeoAI to advance MR in GIS? This paper concludes with suggestions on opportunities and possibilities.
2. MR in GIS upon Buttenfield’s research
Geography embeds multiple levels of detail from which distinct features arise at different scales. Map generalization is commonly concerned with reducing geographic details. In Buttenfield’s first journal publication, she emphasized the importance of both the reduction and the introduction of details and elaborated mathematical algorithms from geometric approximation and fractal dimensions to perceptual expectations of cartographic lines (Buttenfield, 1985). Her synthesis of how increasing or decreasing details of cartographic lines serves different functions for map interpretation reflects the need for geospatial data at varying resolutions to study geographic phenomena at different scales. Her insights remain prominent almost 40 years later.
In the early GIS development, remote sensing imagery and digital maps were the primary sources of geospatial data. Even with the current expanded data sources, observatory apparatus and acquisition techniques prescribe specific data resolutions and hence encode geometric and topological characteristics of features at specific levels of detail. A higher-resolution image captures finer variances and smaller features on the ground than a lower-resolution image. A larger-scale map renders a higher level of geographic details than a smaller-scale map. These images and digital maps are often conveniently organized into GIS data layers with scale-dependent features. Hence, multiple representations of geography are intrinsic to GIS databases.
Multiple representations engender issues of data integrity and consistency in database management since data updates and analytical processes need to apply to all data objects that represent the same geographic entities. In the I3 closing report, Buttenfield (1993) summarized the three key GIS research directions in MR at the time: data models of MR, linkages between MR, and maintenance of materialized views for expedited responses to future queries (Figure 1). She directed research attention to generalization as an effective means to enable a single cartographic database of utmost detail supportive of geographic abstractions at multiple scales and for multiple purposes.
Figure 1:
Multiple representations in GIS databases: (A) Storing data layers representing geographies at different scales with no indication of data objects representative of the same geographic entities across layers; (B) Storing data at the largest scale and applying generalization rules or models to generate data objects representative of the same geographic entities at coarser scales; (C) Storing and explicitly linking data objects representative of the same geographic entities across different scales.
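Strategy (C) in Figure 1, explicit linking of data objects across scales, can be sketched with a toy data structure. This is a hypothetical illustration, not a published MRDB schema; the class and method names are invented for clarity.

```python
class LinkedMRDB:
    """Toy multiple-representation database: one geographic entity is
    explicitly linked to its data objects (isotypical objects) per scale."""

    def __init__(self):
        # entity_id -> {scale: geometry object}
        self.links = {}

    def add(self, entity_id, scale, geometry):
        self.links.setdefault(entity_id, {})[scale] = geometry

    def isotypical_objects(self, entity_id):
        """All stored representations of one geographic entity."""
        return self.links.get(entity_id, {})

    def at_scale(self, entity_id, scale):
        return self.links.get(entity_id, {}).get(scale)

db = LinkedMRDB()
# The same (hypothetical) road stored at two scales with different detail.
db.add("road_42", "1:10000", [(0, 0), (1, 0.1), (2, 0), (3, 0.1), (4, 0)])
db.add("road_42", "1:100000", [(0, 0), (4, 0)])
print(sorted(db.isotypical_objects("road_42")))  # ['1:10000', '1:100000']
```

An update to `road_42` must touch every linked object, which is precisely the consistency burden that the linking approach carries as data volume grows; strategy (B) instead stores only the largest-scale geometry and derives the coarser ones by generalization rules on demand.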
The International Cartographic Association (ICA) Commission on Generalization and Multiple Representation started annual workshops on Map Generalization in 1995, revised the workshop topic to Generalization and Multiple Representation in 2004, and has continued it to date. Over the years, a paradigm emerged with the idea of re-engineering geographic data in linked multiple-representation databases (MRDB) and developing continuous generalization algorithms capable of gradual adaptations to proper abstractions of semantics and computational geometry to support multiple uses at any level of detail (van Oosterom, 2009). Researchers proposed several MRDB architectures to manage and analyze multiple data objects representing the same geographic entities. Henceforth, this paper borrows the term isotypical from Group Theory and terms these data objects isotypical objects. Popular MRDB management strategies include linking isotypical objects or developing functions to transform spatial data objects into their isotypical counterparts at the spatial granularity for a given map (Vangenot et al., 2002). The linking approach, which manages isotypical objects across multiple database compartments, is vulnerable to inconsistency and errors during data revision as data volume and complexity increase.
Alternatively, the transformation approach includes strategies to equip MRDB with: deductive rules and inference mechanisms to support MR management and analysis (Jones, 1991); computational models for evaluating topological consistency (Egenhofer et al., 1994); a multiscale cartographic tree to hierarchically organize isotypical data at different levels of detail (Frank & Timpf, 1994); combined rule-based, geometric, and generalization procedures with object-oriented data models to connect isotypical objects and assure data integrity (Jones et al., 1996); a linked-list data structure with quad-tree-based encoding (Zhan & Buttenfield, 1996); scale-transition relationships and semantic integration of attribute values in data schemata to link isotypical objects and support interoperability (Devogele et al., 1996); model generalization to extract and link isotypical objects, followed by cartographic generalization of the extracted objects to create a cartographic database (Kilpeläinen, 2000); symbol change coordinated with geometry change for map production (Brewer & Buttenfield, 2010); and geographic-knowledge-guided generalization tools with considerations of geographic processes and regional characteristics (Buttenfield et al., 2011). An early large-scale project on MRDB engineering and implementation was the European Multiple Representation-Multiple Resolution (MurMur) project under the EEC 5th Framework Programme. The MurMur project developed a single multi-representation database with unification processes that ingested data from mono-representation databases into a global data schema, which adopted an entity-relation structure to connect isotypical objects from different perspectives and resolutions (Balley et al., 2004).
Fruitful research in many application domains followed Buttenfield’s (1986) call for attention to both perceptual and mathematical treatments in line generalization. Generalization encompasses processes of abstraction, selection, and reduction. There are three types of generalization: object generalization to acquire relevant information for an application; model generalization to reduce data through statistical or computational means; and cartographic generalization for graphical clarity through symbolization and spatial arrangement (Weibel & Dutton, 1999). Algorithmic solutions commonly address geometric, topological, and placement changes of linear features (Kolanowski et al., 2018; Monmonier, 1989; Stanislawski et al., 2020), polygonal features (Maruyama et al., 2019; Stum et al., 2017; Sun et al., 2016; Zhang et al., 2016), or annotations across scales. MRDB implementations appear popular in some applications, such as hydrography data (Buttenfield et al., 2011), road networks (Brewer et al., 2013), buildings and streets in the SWISS VECTOR25 dataset (Bobzien et al., 2008), and residential neighborhoods from the national topographic map series (Zhang et al., 2016). More GIS implementations link isotypical objects across multiple mono-scale data sets than use model generalization to automate derivations of isotypical data at multiple scales from a single high-resolution database. As geospatial data sources diversify and multiply with the rise of location-aware sensors, crowdsourcing technologies, and social media platforms, geospatial data at multiple scales are readily available for MRDB development to link geospatial data across scales (Zhang et al., 2018).
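A concrete instance of the line-generalization algorithms discussed above is the classic Douglas-Peucker procedure, which recursively drops vertices that lie within a tolerance of the chord between retained endpoints. The sketch below is a minimal pure-Python version for illustration; production GIS implementations add topology safeguards this sketch omits.

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Recursively keep only vertices farther than `tolerance` from the chord."""
    if len(points) < 3:
        return list(points)
    # Find the interior vertex farthest from the endpoint-to-endpoint chord.
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i_max - 1] <= tolerance:
        return [points[0], points[-1]]  # chord approximates this stretch
    left = douglas_peucker(points[:i_max + 1], tolerance)
    right = douglas_peucker(points[i_max:], tolerance)
    return left[:-1] + right  # merge, avoiding a duplicated split vertex

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(line, tolerance=1.0)
print(simplified)  # [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

Varying the tolerance yields a family of isotypical polylines at different levels of detail from one detailed source, the essence of model generalization from a single high-resolution database.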
Rather than having one most detailed database to generate representations at coarser scales, machine learning approaches have gained popularity in transferring learned model-generalization relationships from one pair of isotypical objects to construct missing counterparts for other objects at the target scales (Feng et al., 2019; Peng et al., 2021; Yang et al., 2022). Yet, Brewer and Buttenfield (2007) ran ScaleMaster exercises and demonstrated that changes to symbol design or symbol modification in map display with multiple isotypical objects in an MRDB could effectively reduce map production workloads, compared to model generalization from a single highly detailed database.
3. MR in spatial cells in the brain’s neural system
Growing evidence reveals multiple representations of space in the brain for cognition and navigation. Since the discovery of place cells in rats’ brains (O’Keefe & Conway, 1978; O’Keefe & Dostrovsky, 1971), scientists have uncovered a neural system of cells with functions for spatial representations in mammals, including humans. In addition to place cells, the brain’s spatial cells include head-direction cells (Taube et al., 1990), grid cells (Hafting et al., 2005), border cells (Solstad et al., 2008), boundary vector cells (Lever et al., 2009), goal direction cells (Sarel et al., 2017), reward cells (Gauthier & Tank, 2018), social place cells (Omer et al., 2018), and object vector cells (Høydal et al., 2019). These neural cells in the hippocampal formation (Figure 2) collaboratively encode space in allocentric or map-like representations that enable survey knowledge of the environment for spatial decision-making and navigation. Each type of spatial cell exhibits a distinct firing preference: at specific locations (place cells), in a specific hexagonal pattern across the observed space (grid cells), at a specific range of azimuthal angles in the allocentric heading direction (head-direction cells), near orientation-specific boundaries (border cells), towards a boundary in a specific range of distances and allocentric directions (boundary-vector cells), at a specific range of angles in the egocentric heading direction to the goal (goal-direction cells), near reward locations (reward cells), at locations of conspecifics (social place cells), and at locations and directions of discrete objects (object-vector cells).
Figure 2:
Spatial cells in the hippocampal formation with examples of firing patterns for place cells, boundary cells, head direction cells, and grid cells. Each square corresponds to the confined physical space where an animal subject moves freely and the subject’s locations where the observed neuron fires in an experiment. For example, the square labeled “Place cell” illustrates that the observed neuron cell fires when the subject is around a particular location (a dark center and its surrounding) in the space. The observed neuron cell is, therefore, a place cell. Likewise, the squares labeled “Boundary cell” and “Head direction cell” show that the neuron cells fire when a subject is along a boundary of the physical space or when the subject is facing around a specific direction, respectively. Each of the four squares labeled “Grid cells” corresponds to a neuron cell that fires in regular spacing across the physical space; that is, the neuron fires when the subject is at any dotted location in the physical space. The spacing distance (similar to spatial resolution) gradually decreases in grid cells located from the ventral toward the dorsal of the entorhinal cortex. See Behrens et al. (2018) for firing patterns of other spatial cells.
Multiple representations arise from these spatial cells individually and collaboratively in spatial information encoding and processing. Neural evidence shows that the hippocampus represents space in chunks, what neuroscientists call environments (Kubie & Muller, 1991). In this context, an environment is “the accessible space of a particular setting together with the cues available for orientation” (Kubie & Muller, 1991, page 240) and corresponds to a designed space where a mammal (usually a rat, monkey, or person) can move around during an experiment. While a geographic environment is more extensive and may be more complex than an environment in a controlled experiment, a research design that manipulates space shape and cues enables neural scientists to discover and reason about causal associations between the neural activities of spatial cells and the subject’s physical locations and actions. Neural experiments from references cited in the subsequent subsections suggest that spatial cells in the hippocampal formation exhibit at least three mechanisms of multiple representations: (1) many-to-many mapping between spatial cells of the same types and spatial information elements; (2) multiple associations of spatial cells of different types and spatial information elements; and (3) scale-dependent frames of reference.
3.1. Many-to-many mapping between spatial cells of the same types and spatial information elements
A many-to-many mapping exists between spatial cells of the same types and spatial information elements, such as location, direction, distance, boundary, etc., in an environment. For example, a place cell will fire and become active at many locations, and many place cells fire at a given location. Nevertheless, firing rates vary. Each place cell exhibits a firing field covering multiple locations with the highest firing rate at the center of the firing field, decreasing gradually outward. Many place cells have overlapping firing fields so that a location is encoded by multiple place cells. The overlapping firing location in the ensemble of co-active place cells represents the animal’s current position (Latuske et al., 2018). For example, suppose place cell #1 fires at locations A, B, and C; place cell #2 at locations B, C, and D; and place cell #3 at locations B, D, and E. Then all three place cells become active only when the animal subject is at location B. Likewise, a head-direction cell is sensitive to a specific range of azimuthal angles, and its firing range often overlaps with other head-direction cells. A heading will excite multiple head-direction cells. Despite overlaps, spatial cells exhibit sparse representation; that is, only a small portion of spatial cells of each type become active for each spatial information element. For example, approximately 20% of all place cells will become active to encode all locations in an environment (Muller & Kubie, 1987).
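The three-cell example above can be sketched directly: model each place cell as the set of locations in its firing field, and decode the animal's position as the intersection of the fields of all co-active cells. This is a toy illustration of the ensemble code, not a neural model.

```python
# Each place cell's firing field, taken from the example in the text.
place_cells = {
    "cell_1": {"A", "B", "C"},
    "cell_2": {"B", "C", "D"},
    "cell_3": {"B", "D", "E"},
}

def decode_location(active_cells):
    """Intersect the firing fields of all currently active cells."""
    fields = [place_cells[c] for c in active_cells]
    return set.intersection(*fields)

# When all three cells fire together, only location B is consistent.
print(decode_location(["cell_1", "cell_2", "cell_3"]))  # {'B'}
# Fewer co-active cells leave more ambiguity.
print(decode_location(["cell_1", "cell_2"]))  # {'B', 'C'}
```

The ambiguity in the two-cell case shows why overlapping, redundant fields matter: the population jointly pins down a location that no single cell can.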
Spatial cells appear free from topographic organization, so “cells that are neighbors in the physical space of the hippocampus do not have neighboring firing fields in the environment” (Kubie and Muller, 1991, p. 241). Closer place cells show no more similar firing fields than place cells that are farther away, and a location may excite distant place cells (O’Keefe & Speakman, 1987). Therefore, distant place cells may overlap firing fields, while proximate place cells may respond to faraway locations and have disjoint firing fields. Tobler’s law does not hold here (Tobler, 1970). Place cells in the CA1 region of the hippocampus (Figure 2) fire rapidly at correspondent locations and decline over repetitions, whereas place cells that also map to these locations but reside in the CA3 region (Figure 2) emerge gradually and appear stable over days (Dong et al., 2021). Therefore, the dynamics of firing activity patterns, which form multiple representations of every location, enable fast learning of a novel environment and stable location encoding once the environment becomes familiar.
Different environments evoke different location-specific groups of cells. When an animal subject moves to a new environment, the subject’s place cells change firing activity patterns and remap to new locations. In other words, if a place cell fires at the center of the old environment, the place cell may fire at a corner location in the new environment. The remapping process appears independent from cell to cell across environments (Muller & Kubie, 1987). While the brain’s spatial representations from multiple visits can be highly correlated in the same environment over a short period (e.g., two days), spatial firing patterns are orthogonal across different environments with minimal overlap in the population of active spatial cells (Alme et al., 2014). Nevertheless, repeated visits over 6–11 days trigger the remapping process to form distinct spatial representations of individual environments, which are stable, accessible, and informative for various recall tasks (Sheintuch et al., 2020). Consequently, multiple representations allow the firing of the set of spatial cells to signal the individual’s location within an environment and the identity of the environment (Kubie & Muller, 1991).
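The idea that one active population signals both location and environment identity can be sketched schematically: each environment recruits its own sparse subset of cells with independently assigned firing locations, so the observed active set is consistent with only one (environment, location) pair. The cell assignments below are invented for illustration, not measured data.

```python
firing_maps = {
    # environment -> {cell: location where that cell fires}
    # Note c2 participates in both environments but remaps to a new location.
    "env_box":   {"c1": "NW", "c4": "NW", "c2": "SE", "c7": "SE"},
    "env_track": {"c3": "start", "c6": "start", "c2": "end", "c9": "end"},
}

def active_cells(environment, location):
    """Cells that fire at a given location in a given environment."""
    return {c for c, loc in firing_maps[environment].items() if loc == location}

def identify(observed_active):
    """(environment, location) pairs consistent with the observed active set."""
    return [(env, loc)
            for env, cells in firing_maps.items()
            for loc in set(cells.values())
            if active_cells(env, loc) == observed_active]

# The active population alone recovers both the environment and the location.
print(identify(active_cells("env_box", "NW")))  # [('env_box', 'NW')]
```

Because the per-environment firing maps are nearly orthogonal, an active population rarely matches more than one environment, which mirrors the minimal overlap reported by Alme et al. (2014).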
3.2. Multiple associations of spatial cells of different types and spatial information elements
The second MR mechanism in the spatial neural system activates multiple types of spatial cells to learn and act in an environment through multiple associations of spatial cells of different types and spatial information elements. Place cells, grid cells, head-direction cells, border cells, and boundary-vector cells encode locations with different references to make sense of the environment. A set of place cells fires to encode the current location of an individual. The same location also activates the firing of some head-direction cells, and, if the individual is close to a wall (or any barrier), a set of border cells and boundary-vector cells. Furthermore, the presence of objects or other individuals of the same species in the environment can excite object-vector cells or social place cells, respectively, at the individual’s current location.
An example of MR in GIS is representing a space in vectors, rasters, or point clouds. Spatial cells of different types also represent a space differently. In an experiment environment, each place cell only represents a few locations in the environment, and it takes many place cells to cover the entire environment (analogous to a point-cloud representation). In contrast, each grid cell fires in regularly spaced locations that collectively cover the entire environment (analogous to a raster representation). As such, the environment is represented simultaneously by place cells and grid cells. Furthermore, border cells represent the environment by encoding only the environment’s boundaries (analogous to a vector representation). When learning and navigating an environment, these spatial cells represent locations and associations through multiple means. Before the discovery of these spatial cells, theoretical psychologist Tolman (1948) proposed the existence of an endogenous computing system responsible for building cognitive maps to represent geometric coordinates of the environment and support navigation even during the first visit. While Tolman’s theory suggests a stationary framework for mental maps, neural studies disclose the importance of circuits and synaptic mechanisms generating and organizing spatial representations of the environment through associating and tuning neural activities in the hippocampal formation (Danjo, 2020). For example, place cells are sensitive to changes in landmarks and contexts. Hence, place cells need to remap locations when one moves across environments, as discussed earlier. Border cells and object-vector cells also need to remap boundaries (or barriers) and landmarks in the new environment. As such, each location is represented by multiple spatial cells in the hippocampal formation, and each spatial cell contributes to encoding multiple spaces.
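The vector/raster/point-cloud analogy can be made concrete with three toy encodings of the same small space: a few discrete firing centers (place-cell-like), a regular lattice covering the space (grid-cell-like), and only the boundary (border-cell-like). The coordinates are arbitrary illustrations.

```python
SIZE = 4
space = [(x, y) for x in range(SIZE) for y in range(SIZE)]

# Place-cell-like: sparse, irregular firing centers (point-cloud analogy).
place_code = [(1, 2), (3, 0)]

# Grid-cell-like: regularly spaced locations covering the space (raster analogy).
grid_code = [(x, y) for (x, y) in space if x % 2 == 0 and y % 2 == 0]

# Border-cell-like: only the boundary of the space (vector analogy).
border_code = [(x, y) for (x, y) in space
               if x in (0, SIZE - 1) or y in (0, SIZE - 1)]

print(len(place_code), len(grid_code), len(border_code))  # 2 4 12
```

Each encoding trades coverage against specificity, and the same 4x4 space is represented three ways at once, which is the sense in which the environment is "represented simultaneously" by different cell types.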
The many-to-many associations of spatial cells and spatial information elements may be most intriguing in social place cells.
Social place cells observed in rats (Danjo et al., 2018) and bats (Omer et al., 2018) give awareness of others in the same environment. Social place cells facilitate social collaboration or spatial competition among conspecifics. In rats, approximately one-half of activated social place cells encode other individuals’ locations in allocentric coordinates, and the other half also encode the individual’s own location. In bats, the estimated proportions are 43% encoding others’ locations and 57% encoding both others’ and one’s own locations.
Like place cells, different sets of one’s social place cells fire when one observes others changing locations. Therefore, when both individuals move, different sets of place cells and social place cells change their excitatory and inhibitory states to encode and track both locations. Even when one is stationary, activities by other individuals can excite one’s social place cells. Mou and Ji (2016) positioned five rats in a box to observe a rat running in a linear track. They discovered similar firing patterns across the two environments (a box and a linear track) and noted a cross-activation mechanism: a common group of place cells fired in consistent sequences both while the box rats observed the running rat and when these observer rats eventually ran the track themselves. Furthermore, their data showed that the cross-activation appeared before the box rats’ first running experience on the track. As such, they claimed that their findings provided neural evidence for local enhancement and social learning (Buckley, 1997): an individual can learn from the behavior of other individuals through observation.
3.3. Scale-dependent frames of reference
The third means of MR in the spatial cells is the scale-dependent frames of reference. Based on Reference Frames Theory (Meilinger, 2008; Meilinger et al., 2014), spatial navigation applies independent reference frames to encode smaller spaces with connections that form a graph representing the bigger environment. There are two general categories of spatial reference frames: egocentric and allocentric. The aforementioned spatial neuron system in the hippocampal formation is mainly associated with allocentric reference frames based on external cues or landmarks, whereas neurons in cortical regions appear responsible for egocentric spatial codes referenced to one’s own body (e.g., the location or orientation of one’s head or gaze). An egocentric reference frame must update the distances and orientations of all locations and objects when one navigates an environment.
In contrast, the self-independent or observer-independent nature of an allocentric reference frame allows persistent location encoding and hence can maintain the spatial measures of cues, landmarks, objects, or conspecifics in the environment. Allocentric frames of reference are, therefore, more effective than egocentric ones for spatial representations in long-term memory. Neuroimaging studies showed greater hippocampus activity with allocentric referencing during wayfinding and route planning (or finding shortcuts) but greater cortical activity with egocentric referencing when traveling along familiar routes (Hartley et al., 2014). These findings agreed with neurophysiological differences between London taxi and bus drivers (Maguire et al., 2006). Nevertheless, the finding that time cells in the hippocampus encode episodic experiences (Eichenbaum, 2014) explains why hippocampal damage can impair memorizing successive body turns during route learning (Rondi-Reig, 2006), and hence egocentric learning of the environment, even though egocentric referencing is associated with neurons outside the hippocampal region.
Wolbers and Wiener (2014) reviewed findings from human-subject studies concerning figurative space, vista space, environmental space, and geographic space, a classification by spatial scale relevant to spatial perception, thinking, memory, and behavior (Montello, 1993). They argued that most neural experiments were carried out in vista or semi-vista spaces where animals could either observe the entire space or have limited options at a decision point. Hence, these experiments would directly correspond to allocentric mapping and remapping of the environment. However, Wolbers and Wiener (2014) noted numerous behavioral studies that gave evidence of hierarchical spatial representations of connected vista spaces to encode an environmental space. Since an environmental space cannot be observed instantaneously, one has to learn it over a period through exploration and experiences. The exploration discretizes the environmental space into connected vista spaces along the route that the individual takes. Each vista space is encoded with an independent local allocentric reference frame. Human subjects in these behavioral experiments exhibited higher accuracy of spatial judgments within a vista space than between vista spaces. Results from animal experiments were consistent with findings from human subjects and supported the idea of independent local reference frames and the remapping of place cells and other spatial cells when animals moved across vista spaces. The traversal from one vista space to another involves a perspective shift in connecting the prior- and post-remapped spaces; such connections might use an egocentric frame of reference because of the need for self-localization. Yet, neural studies showed no remapping of grid cells from one vista space to another.
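The graph-of-local-frames idea can be sketched with minimal geometry: each vista space keeps landmarks in its own local allocentric frame, and an edge between spaces stores the rigid transform (rotation plus translation) linking the frames. Cross-space judgments then require composing transforms, a plausible source of the extra error observed between vista spaces. All numbers and names here are invented for illustration.

```python
import math

def to_global(point, theta, translation):
    """Map a local-frame point into a shared frame via rotation + translation."""
    x, y = point
    tx, ty = translation
    return (x * math.cos(theta) - y * math.sin(theta) + tx,
            x * math.sin(theta) + y * math.cos(theta) + ty)

# Vista space A serves as the shared frame; vista space B is rotated 90
# degrees and shifted by (5, 0) relative to A (an invented edge transform).
landmark_in_B = (1.0, 0.0)
gx, gy = to_global(landmark_in_B, math.pi / 2, (5.0, 0.0))
print(round(gx, 6), round(gy, 6))  # 5.0 1.0
```

A within-space judgment uses one frame directly, while a between-space judgment chains such transforms along the graph, so any error in an edge transform accumulates, consistent with the lower between-space accuracy reported in the behavioral studies.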
Grid cells in the medial entorhinal cortex (MEC) of the hippocampal formation encode a vista or environmental space with equilateral triangular grids at multiple resolutions. Fyhn et al. (2004) recorded the firing activities of a grid cell while a rat moved freely in an experiment. An individual grid cell would fire at multiple locations coincident with the rat’s locations on its movement trajectory. The firing pattern formed a tessellation of equilateral triangles (a.k.a. a hexagonal lattice) across the experimental environment, and the firing locations were at the vertices of these triangles (Figure 2). Without remapping, grid cells encoded and preserved distinct representations of multiple environments in several sets of hexagonal patterns, which allowed grid cells to represent non-congruent spatial configurations for environments with different geometries and possibly conceptual and cognitive spaces (Spalla et al., 2019).
The firing pattern of a grid cell is characterized by the spacing, orientation, and spatial phase of the equilateral triangles that connect its firing locations. Figure 2 shows a set of grid cells with triangle spacing decreasing upward. Each grid cell maintains stable firing locations and a rigid spatial periodicity regardless of the rat’s moving speed, direction, and visual input -- even in complete darkness, without visual cues (Fyhn et al., 2007). Thus, the firing pattern of a grid cell provides a coordinate and metric system for spatial navigation. Grid cells are organized in modules along the dorsal-ventral axis of the medial entorhinal cortex. All grid cells within the same module share the same grid spacing and orientation but show shifts in the spatial phases of their firing tessellations (Stensola & Moser, 2016). Therefore, the spatial patterns of neighboring grid cells do not align with each other, which provides multiple efficient “coordinate systems” at the same spatial resolution to cover the entire space.
Moreover, grid cells at the dorsomedial MEC have smaller spacings; the spacings gradually increase with grid cells distributed towards the ventrolateral MEC (Hafting et al., 2005). The monotonic increase in grid spacing from dorsomedial to ventrolateral MEC is analogous to a decrease in spatial resolution and granularity. The multiple grids of varying spacing, orientation, and spatial phasing facilitate computing context-dependent metric relationships in an environment. Without remapping functions, grid cells rescale firing patterns in response to changes in the size and shape of the environment, and the grid deformation depends upon one’s familiarity with the original environment (Barry et al., 2007). Grid cells support path integration by providing coordinate systems to reference self-motion cues and external environmental cues encoded by place cells and other spatial cells to estimate travel distances and directions (Fyhn et al., 2004). Furthermore, through path integration, grid cells predict the next possible location in one’s movement and elevate the firing potential of corresponding place cells and other spatial cells at the projected locations (Bellmund et al., 2018).
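The hexagonal firing pattern described above is commonly idealized in computational neuroscience as a sum of three cosine gratings whose axes are 60 degrees apart. The sketch below illustrates this standard model; the spacing, orientation, and phase values are illustrative parameters, not measurements from the studies cited here.

```python
import numpy as np

def grid_cell_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate: a sum of three cosine gratings with
    axes 60 degrees apart, producing a hexagonal lattice of firing fields."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number for the chosen grid spacing
    rate = 0.0
    for i in range(3):
        theta = orientation + i * np.pi / 3          # three axes at 60-degree intervals
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        rate += np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    # rescale from [-1.5, 3] to [0, 1] so the output reads as a normalized rate
    return (rate + 1.5) / 4.5

# Peak rate at a firing-field vertex; lower rates between vertices
print(grid_cell_rate(0.0, 0.0))   # 1.0 at a lattice vertex
```

Changing `spacing` reproduces the dorsomedial-to-ventrolateral gradient in grid scale, while `orientation` and `phase` shifts mirror the within-module phase offsets described above.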
4. MR in deep learning
GIS researchers explored the use of expert systems to represent, extract, and apply rules for cartographic generalization and map design in early AI applications (Forrest, 1993). Rule-based and learning-based algorithms were implemented for cartographic or model generalization, such as rule-based line generalization (Brewer et al., 2013; Sester, 1999), learning-based label placement (Harrie et al., 2022), rule-based 2D model generalization (Forghani et al., 2021), and learning-based 3D model generalization (Grilli & Remondino, 2020). Rule-based approaches gained early popularity in representing cartographic knowledge. Despite the progress in cartographic generalization and model generalization, spatial analysis at multiple scales has taken little advantage of multiple representations. Numerous multi-scalar spatial studies consider scalar effects by aggregating or averaging data at different spatial and temporal scales. For example, a common multi-scalar population analysis would compare population patterns at the levels of census blocks or census tracts. Since data at each scale represent different spatial constructs in a multi-scalar analysis, the idea of multiple representations, that is, representing the same entity multiple times, does not apply.
AI embeds MR in algorithms, especially deep neural networks (DNN), convolutional neural networks (CNN), and graph convolutional neural networks (GCN). The MR embedment accounts for semantic complexity in the input data to make predictions and sets deep learning apart from linear regression or statistical models. A general linear regression is, in essence, an input-output neural network without a hidden layer: it applies known input data to optimize the weight for each input variable so that the linear combination of all input variables best approximates the output variable based on maximum likelihood methods. As such, the information about input and output variables is only represented once in the original data. Likewise, Geographically Weighted Regression and Random Forest Regression, which build a regression model for every data subset determined by spatial neighbors or attribute similarity, involve only a single representation of the input data. While regression-based statistics apply maximum likelihood and covariance to estimate the weights (i.e., coefficients), machine learning relies on forward and backward propagations to minimize a loss function for weight estimation. A DNN is a neural network with multiple hidden layers. Every node in each layer contributes to forming all new nodes in the next layer. Therefore, each hidden layer is a distinct combination of uniquely transformed input variables, giving rise to a unique representation of the original information.
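The contrast can be made concrete with a short sketch: a linear regression keeps exactly one representation of the input (the data matrix itself), whereas each hidden layer of a DNN forward pass forms a new representation. The shapes and random weights below are illustrative, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear regression: the input is represented exactly once (X itself)
X = rng.normal(size=(100, 3))          # 100 samples, 3 input variables
w = np.array([1.0, -2.0, 0.5])
y = X @ w                              # one linear combination -> output

# A DNN forward pass: every hidden layer re-represents the input
def forward(X, weights):
    reps = [X]                         # layer 0: the original representation
    h = X
    for W in weights:
        h = np.tanh(h @ W)             # nonlinear mix of ALL nodes in the prior layer
        reps.append(h)                 # each hidden layer = a new representation
    return reps

weights = [rng.normal(size=(3, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 1))]
reps = forward(X, weights)
print([r.shape for r in reps])         # [(100, 3), (100, 8), (100, 8), (100, 1)]
```

Backward propagation would then adjust each `W` to minimize a loss function, reshaping every intermediate representation at once.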
DNN, CNN, and GCN are examples of deep learning methods that transform high-dimensional data into similarity-preserving layers to expand the input data into multiple views (Kang et al., 2012). As such, nodes in each layer are view-specific but also shared and connected to nodes in the preceding, subsequent, or even disjoint layers. The connected multiple representations of input data fortify deep-learning performance. CNN adds convolutional functions to DNN when input data are spatially gridded, like pixels in an image. Yuan and McKee (2022) detailed how machine learning algorithms embed the concepts of scale in variable transformations and operation functions to determine the output variable. Using CNN as an example, they scrutinized 44 representations of input data in identifying archaeological features at the Hull Archaeological site (Figure 3). Functions used to progressively transform input variables included kernels, pools, and up-sampling to bound local analysis of neighboring details and generalize the local findings. Latent features and patterns emerged and became input to the next layer. From individual pixels to archaeological features, CNN in a U-Net structure encoded and decoded input variables functionally and spatially across the layers’ representations to identify archaeological features of varying sizes and shapes at 95% accuracy. With spatiotemporal data, machine learning methods may constitute DNN and CNN layers of time-centred, space-centred, and composite representations to make spatiotemporal predictions (Amato et al., 2020).
Figure 3:
An example of multiple representations in a deep learning algorithm. Three matrices represent spatial distributions of values for three input variables. Multiple convolutions with downscale pooling and upscale transposed convolutions generate intermediate representations of the input data for features and signals at different levels of detail. There are 64 representations in Conv1 and Conv2, 16 representations in Conv43, and seven representations in the final conv and predicted categories. The predicted output feature will be the category with the highest probability estimated by the softmax algorithm. Adapted from Figure 7 in Yuan and McKee (2022).
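As a minimal illustration of how convolution, pooling, and up-sampling generate intermediate representations at different levels of detail, the sketch below chains the three operations on a toy single-band raster. The sizes and kernel are illustrative, not the U-Net architecture of Yuan and McKee (2022).

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN practice)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(img, size=2):
    """Downscale: keep the strongest response in each size x size block."""
    H, W = img.shape
    return img[:H - H % size, :W - W % size] \
        .reshape(H // size, size, W // size, size).max(axis=(1, 3))

def upsample(img, size=2):
    """Upscale by nearest-neighbor repetition (a stand-in for transposed convolution)."""
    return img.repeat(size, axis=0).repeat(size, axis=1)

img = np.arange(64, dtype=float).reshape(8, 8)   # toy single-band input
edge = np.array([[1.0, -1.0]])                   # horizontal-edge kernel
rep1 = conv2d(img, edge)       # (8, 7): local detail bounded by the kernel
rep2 = max_pool(rep1)          # (4, 3): generalized, coarser representation
rep3 = upsample(rep2)          # (8, 6): re-expanded for decoding
print(rep1.shape, rep2.shape, rep3.shape)
```

Each intermediate array is one "representation" in the sense of Figure 3; a real CNN layer produces dozens of them in parallel, one per learned kernel.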
Classical CNN algorithms operate on data gridded regularly in Euclidean space (a.k.a. Euclidean data in the machine learning literature) and are therefore popular in image classification and image segmentation. Vector data in an irregular spatial structure are non-Euclidean data and incongruent with CNN data requirements. To account for the varying distances between data points, Bronstein et al. (2017) advocated for Geometric Deep Learning methods to work with network data and Riemannian manifolds. Topology is central to both graphs and manifolds. In a graph, the distance measure reflects the number of topological connections from one node to another (hence, non-Euclidean space). A manifold is a topological space that locally resembles Euclidean space near each point. For manifolds, calculations are based on the changes in d-dimensional volume induced by the Riemannian metric.
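The graph distance measure above can be sketched in a few lines: distance is the number of topological connections (edge hops) on the shortest path, independent of any coordinates. The node labels and network below are hypothetical.

```python
from collections import deque

def hop_distance(adj, src, dst):
    """Topological distance: number of edges on the shortest path in a graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return dist[node]
        for nbr in adj[node]:
            if nbr not in dist:           # first visit = shortest hop count (BFS)
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return None                           # dst unreachable from src

# A small road-like network: nodes A..E; edges define connectivity only
adj = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "E"],
       "D": ["B", "E"], "E": ["C", "D"]}
print(hop_distance(adj, "A", "E"))   # 3 edges: A-B-C-E (or A-B-D-E)
```

However far apart A and E might sit in Euclidean coordinates, their graph distance is fixed by the connectivity alone, which is what makes the space non-Euclidean.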
An early success in geometric deep learning is human action recognition. GCN algorithms construct multiple graphs of connected nodes to capture multiple views of the input data. Hence, multiple representations across a graph give rise to the predictability of actions or activities, just as multiple representations across an image engender the predictability of features. A three-stream GCN, for example, extracts regions of the focal person from video image frames and represents the human skeleton in graphs of temporal segment networks to capture the person’s high-level joint motions (Xu et al., 2018). The general idea is to temporally order the skeleton graphs of a person and apply convolutional functions to spatially connected nodes within individual skeleton graphs and across skeleton graphs over time. A variety of GCN algorithms implement spatial or spectral convolutions over temporal sequences of skeleton graphs for action recognition with diverse representation schemes, such as ST graphs, frame-wise skeletons, node trajectories, spatial graph routers, temporal graph routers, node streams, edge streams, and long or short memory graphs (Ahmad et al., 2021). As a CNN algorithm progressively transforms input imagery into many internal image layers (i.e., multiple representations) before predicting an output, GCN predicts actions or activities by transforming a time series of input graphs into multiple representations of internal graph layers.
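A single graph-convolution layer of the kind these algorithms stack can be sketched in the widely used symmetric-normalization form, where each node's features are mixed with those of its topological neighbors. The four-joint chain "skeleton," the feature dimensions, and the tanh activation below are illustrative choices, not a specific published architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: normalize the adjacency matrix (with self-loops),
    then mix each node's features with its neighbors' features."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric degree normalization
    return np.tanh(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy "skeleton" of 4 joints in a chain: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(1).normal(size=(4, 2))  # per-joint input features
W = np.random.default_rng(2).normal(size=(2, 5))  # learnable weights
H1 = gcn_layer(A, H, W)     # a new graph-structured representation
print(H1.shape)             # (4, 5): same joints, re-represented features
```

Stacking such layers over a temporal sequence of skeleton graphs yields the succession of internal graph representations from which actions are predicted.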
To date, GCN algorithms have been widely used with many kinds of graph-structured data, such as word connections, traffic flows, social networks, citation networks, and biochemical graphs, in natural language processing, traffic prediction, recommender systems, and chemistry (Wu et al., 2021). GCN methods show great promise in GIS applications, for example, zoning building layouts in a city (Yan et al., 2019), election prediction (Li et al., 2019), place characterization (Zhu et al., 2020), urban road management (Bi et al., 2021), urban scene classification (Xu et al., 2022), predicting the semantics of spatial entities (Iddianozie & McArdle, 2021), and many other applications. However, most GCN applications in GIS appear limited to context analysis or path analysis of subgraphs in a graph network without MR. Potential GCN applications with MR would be promising for predicting change, movement, flow, or dynamics, similar to using temporal skeleton graphs to predict actions or activities (Xu et al., 2018).
5. Conclusion
Buttenfield pioneered multiple representation research in GIS and stimulated rich scientific inquiries and technological reshaping. The infinite complexity of reality necessitates multiple representations to distill concepts, entities, and relationships in the input data from many perspectives or at various levels of detail across scale, granularity, and generalization in semantics, space, and time. GIS, as information systems to effectively store, process, analyze, and communicate information embedded in geospatial data, must be able to represent the multiplicity of geography. To date, most GIS technologies can host and handle multiscale databases with separate data layers at different scales. Multiple-representation databases (MRDB), however, require vertical integration with data links or generalization algorithms to relate data objects corresponding to the same geographic entity at different scales in space and time. Section 2 highlights fruitful research on data generalization, model generalization, and cartographic generalization to support MRDB through generalization from one highly detailed base data layer, linking data objects across scales, or generalization between data layers at different scales. Most, if not all, conventional GIS databases store data at discrete fixed scales (or resolutions) without clear indications that multiple data objects may represent the same real-world entity across scales. Hence, such a system does not know whether an entity appears and disappears across layers of data objects due to semantic, spatial, or temporal generalization.
In contrast, the brain’s spatial cells and deep learning algorithms exhibit explicit mechanisms that subserve the vertical integration of multiple representations. The neural systems collaboratively encode spatial information elements, including orientations, boundaries, and the locations of self, conspecifics, objects, and rewards, and integrate these elements in multi-scalar hexagonal lattices. Remapping functions allow the cells to perform spatial encoding across vista and environmental spaces, and stable reference grids enable path integration from one vista space to the next. Multiple yet sparse spatial cells overlap their firing ranges for orientations, boundaries, or various locations. Their overlapping firing fields, marked by the firing of time cells, represent one’s movement and show spatial arrangements in the environment across multiple levels of detail, simultaneously encoded by the varying spacing references projected by grid cells. Vertical integration of multiple spatial scales is innate in the hippocampal formation.
Deep learning algorithms, on the other hand, sequentially aggregate and abstract the most detailed input data into multiple intermediate layers of generalization to finalize the output. Classification algorithms generate categorical estimates, while regression algorithms produce numerical predictions. Both embed multiple representations in hidden layers, and it is the succession of multiple representations that scaffolds the forward and backward propagations to estimate model parameters and minimize the specified loss function. While the brain’s spatial cells and deep learning algorithms apply different strategies to develop vertical integration for multiple representations, both utilize multiple representations as an effective means of encoding and decoding spatial information.
What can we learn from the brain’s spatial cells and deep learning algorithms to help improve multiple representations in GIS? After over 30 years of development, GIS databases remain organized as data layers at fixed discrete scales. The brain’s spatial cells show that multiple correspondences of neurons to spatial information on simultaneous multi-scalar frames of reference can lead to effective, dynamic spatial representations of environments. Deep learning algorithms demonstrate that simple functions, like convolution, non-linear transformation, pooling, and upscaling, can utilize multiple representations for highly accurate predictions. Both the brain’s spatial system and deep learning algorithms institute vertical integration dynamically and effectively with multiple means to operationalize multiple representations. Can we develop biolithic GIS with functional elements that mimic spatial cells for spatial information encoding and knowledge production? Can the functional elements apply learning algorithms to spatial data objects? If so, will we transform GIS data layers into active data objects, biolithic encoding and decoding, algorithmic connections, and embedded multiple representations for a novel understanding of geographic complexity and dynamics?
GeoAI has gained accelerating popularity in GIScience research, with emphases on spatializing learning algorithms, such as geographically weighted neural networks (c.f., Dai et al., 2022; Hagenauer & Helbich, 2022), or applying algorithms to geographic analysis or modeling (c.f., Malik et al., 2023; Sun et al., 2022). Making artificial intelligence techniques spatially explicit is critical to improving the applicability and accuracy of spatial prediction and enriching the semantics of geographic information (Janowicz et al., 2020). However, few publications inspect and leverage the conceptual advances in AI to push GIScience forward. The computational needs of GeoAI algorithms drive new ways to encode locations and attributes into embedded, high-dimensional vectors for machine learning (Mai et al., 2022). Beyond the computational advances, GeoAI research should also attend to new conceptual and representational frameworks to enhance the computability and explainability of AI algorithms for geographic problems.
This review paper showed how GeoAI algorithms and the brain’s spatial cells utilize MR to learn about a spatial environment. Deep learning gradually unpacks a mixture of input data into intermediate layers of features for the eventual prediction of entity classes, individual entities, or activities. The brain’s spatial cells of various types fire both individually and collaboratively to encode an environment in multiple firing locations and patterns. With this overview of MR concepts and operations in GIScience, neuroscience, and deep learning, the review offered alternative ways of considering MR from outside GIScience and drew implications for new approaches to MR and other GIScience research topics.
Acknowledgment
This research is based upon work supported by (while serving at) the National Science Foundation and the (US) National Institutes of Health (NIH) grant R21 AG069267.
Data Availability Statement
This is a review paper and involves no original data.
References
- Ahmad T, Jin L, Zhang X, Lai S, Tang G, & Lin L (2021). Graph Convolutional Neural Network for Human Action Recognition: A Comprehensive Survey. IEEE Transactions on Artificial Intelligence, 2(2), 128–145. 10.1109/TAI.2021.3076974
- Albawi S, Mohammed TA, & Al-Zawi S (2017). Understanding of a convolutional neural network. 2017 International Conference on Engineering and Technology (ICET), 1–6. 10.1109/ICEngTechnol.2017.8308186
- Alme CB, Miao C, Jezek K, Treves A, Moser EI, & Moser M-B (2014). Place cells in the hippocampus: Eleven maps for eleven rooms. Proceedings of the National Academy of Sciences, 111(52), 18428–18435. 10.1073/pnas.1421056111
- Amato F, Guignard F, Robert S, & Kanevski M (2020). A novel framework for spatio-temporal prediction of environmental data using deep learning. Scientific Reports, 10(1), 22243. 10.1038/s41598-020-79148-7
- Balley S, Parent C, & Spaccapietra S (2004). Modelling geographic data with multiple representations. International Journal of Geographical Information Science, 18(4), 327–352. 10.1080/13658810410001672881
- Barry C, Hayman R, Burgess N, & Jeffery KJ (2007). Experience-dependent rescaling of entorhinal grids. Nature Neuroscience, 10(6), 682–684. 10.1038/nn1905
- Bellmund JLS, Gärdenfors P, Moser EI, & Doeller CF (2018). Navigating cognition: Spatial codes for human thinking. Science, 362(6415), eaat6766. 10.1126/science.aat6766
- Bi H, Shang W-L, Chen Y, Wang K, Yu Q, & Sui Y (2021). GIS aided sustainable urban road management with a unifying queueing and neural network model. Applied Energy, 291, 116818. 10.1016/j.apenergy.2021.116818
- Bobzien M, Burghardt D, Petzold I, Neun M, & Weibel R (2008). Multi-representation Databases with Explicitly Modeled Horizontal, Vertical, and Update Relations. Cartography and Geographic Information Science, 35(1), 3–16. 10.1559/152304008783475698
- Brewer CA, & Buttenfield BP (2007). Framing Guidelines for Multiscale Map Design Using Databases at Multiple Resolutions. Cartography and Geographic Information Science, 34(1), 3–15. 10.1559/152304007780279078
- Brewer CA, & Buttenfield BP (2010). Mastering map scale: Balancing workloads using display and geometry change in multiscale mapping. GeoInformatica, 14(2), 221–239. 10.1007/s10707-009-0083-6
- Brewer CA, Stanislawski LV, Buttenfield BP, Sparks KA, McGilloway J, & Howard MA (2013). Automated thinning of road networks and road labels for multiscale design of The National Map of the United States. Cartography and Geographic Information Science, 40(4), 259–270. 10.1080/15230406.2013.799735
- Bronstein MM, Bruna J, LeCun Y, Szlam A, & Vandergheynst P (2017). Geometric Deep Learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4), 18–42. 10.1109/MSP.2017.2693418
- Buckley NJ (1997). Spatial-Concentration Effects and the Importance of Local Enhancement in the Evolution of Colonial Breeding in Seabirds. The American Naturalist, 149(6), 1091–1112. 10.1086/286040
- Buttenfield B (1985). Treatment of the Cartographic Line. Cartographica: The International Journal for Geographic Information and Geovisualization, 22(2), 1–26. 10.3138/FWV8-3802-2282-6U47
- Buttenfield BP (1993). Multiple Representation (NCGIA Research Initiative 3 Closing Report No. 89-3; p. 28). http://ncgia.ucsb.edu/technical-reports/PDF/89-3.pdf
- Buttenfield BP, & Frye C (2006). The Fallacy of the “Golden Feature” in MRDBs: Data Modeling Versus Integrating New Anchor Data. 9.
- Buttenfield BP, & McMaster RB (Eds.). (1991). Map generalization: Making rules for knowledge representation. New York, NY: Wiley.
- Buttenfield BP, Stanislawski LV, & Brewer CA (2011). Adapting Generalization Tools to Physiographic Diversity for the United States National Hydrography Dataset. Cartography and Geographic Information Science, 38(3), 289–301. 10.1559/15230406382289
- Courtial A, Touya G, & Zhang X (2023). Deriving map images of generalised mountain roads with generative adversarial networks. International Journal of Geographical Information Science, 37(3), 499–528. 10.1080/13658816.2022.2123488
- Dai Z, Wu S, Wang Y, Zhou H, Zhang F, Huang B, & Du Z (2022). Geographically convolutional neural network weighted regression: A method for modeling spatially non-stationary relationships based on a global spatial proximity grid. International Journal of Geographical Information Science, 36(11), 2248–2269. 10.1080/13658816.2022.2100892
- Danjo T (2020). Allocentric representations of space in the hippocampus. Neuroscience Research, 153, 1–7. 10.1016/j.neures.2019.06.002
- Danjo T, Toyoizumi T, & Fujisawa S (2018). Spatial representations of self and other in the hippocampus. Science, 359(6372), 213–218. 10.1126/science.aao3898
- Devogele T, Trevisan J, & Raynal L (1996). Building a Multiscale Database with Scale-Transition Relationships. Proceedings of SDH’96, 10. https://www.academia.edu/download/70978535/Building_a_multi-scale_database_with_sca20211002-26873-1q1lvhb.pdf
- Dong C, Madar AD, & Sheffield MEJ (2021). Distinct place cell dynamics in CA1 and CA3 encode experience in new environments. Nature Communications, 12(1), Article 1. 10.1038/s41467-021-23260-3
- Du J, Wu F, Xing R, Gong X, & Yu L (2022). Segmentation and sampling method for complex polyline generalization based on a generative adversarial network. Geocarto International, 37(14), 4158–4180. 10.1080/10106049.2021.1878288
- Egenhofer MJ, Clementini E, & Di Felice P (1994). Evaluating Inconsistencies Among Multiple Representations. Advances in GIS Research, 901–920.
- Eichenbaum H (2014). Time cells in the hippocampus: A new dimension for mapping memories. Nature Reviews Neuroscience, 15(11), 732–744. 10.1038/nrn3827
- Feng Y, Thiemann F, & Sester M (2019). Learning Cartographic Building Generalization with Deep Convolutional Neural Networks. ISPRS International Journal of Geo-Information, 8(6), 258. 10.3390/ijgi8060258
- Forghani A, Kazemi S, & Bruce D (2021). A Machine-Learning Approach to Generalisation of GIS Data. International Journal of Geoinformatics, 41–59. 10.52939/ijg.v17i2.1757
- Forrest D (1993). Expert systems and cartographic design. The Cartographic Journal, 30(2), 143–148. 10.1179/caj.1993.30.2.143
- Frank AU, & Timpf S (1994). Multiple representations for cartographic objects in a multiscale tree—An intelligent graphical zoom. Computers & Graphics, 18(6), 823–829. 10.1016/0097-8493(94)90008-6
- Fyhn M, Molden S, Witter MP, Moser EI, & Moser M-B (2004). Spatial Representation in the Entorhinal Cortex. Science, New Series, 305(5688), 1258–1264.
- Gao X, Yan H, Lu X, & Li P (2022). Automated Residential Area Generalization: Combination of Knowledge-Based Framework and Similarity Measurement. ISPRS International Journal of Geo-Information, 11(1), 56. 10.3390/ijgi11010056
- Gauthier JL, & Tank DW (2018). A Dedicated Population for Reward Coding in the Hippocampus. Neuron, 99(1), 179–193.e7. 10.1016/j.neuron.2018.06.008
- Grilli E, & Remondino F (2020). Machine Learning Generalisation across Different 3D Architectural Heritage. ISPRS International Journal of Geo-Information, 9(6), 379. 10.3390/ijgi9060379
- Hafting T, Fyhn M, Molden S, Moser M-B, & Moser EI (2005). Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052), Article 7052. 10.1038/nature03721
- Hagenauer J, & Helbich M (2022). A geographically weighted artificial neural network. International Journal of Geographical Information Science, 36(2), 215–235. 10.1080/13658816.2021.1871618
- Harrie L, Oucheikh R, Nilsson Å, Oxenstierna A, Cederholm P, Wei L, Richter K-F, & Olsson P (2022). Label Placement Challenges in City Wayfinding Map Production—Identification and Possible Solutions. Journal of Geovisualization and Spatial Analysis, 6(1), 16. 10.1007/s41651-022-00115-z
- Hartley T, Lever C, Burgess N, & O’Keefe J (2014). Space in the brain: How the hippocampal formation supports spatial cognition. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1635), 20120510. 10.1098/rstb.2012.0510
- Høydal ØA, Skytøen ER, Andersson SO, Moser M-B, & Moser EI (2019). Object-vector coding in the medial entorhinal cortex. Nature, 568(7752), 400–404. 10.1038/s41586-019-1077-7
- Iddianozie C, & McArdle G (2021). Towards Robust Representations of Spatial Networks Using Graph Neural Networks. Applied Sciences, 11(15), 6918. 10.3390/app11156918
- Janowicz K, Gao S, McKenzie G, Hu Y, & Bhaduri B (2020). GeoAI: Spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. International Journal of Geographical Information Science, 34(4), 625–636.
- Jones CB (1991). Database architecture for multiscale GIS. AutoCarto 10 Proceedings, 1–14. https://cartogis.org/docs/proceedings/archive/auto-carto-10/index.html
- Jones CB, Kidner DB, Luo LQ, Bundy G. Ll., & Ware JM (1996). Database design for a multiscale spatial information system. International Journal of Geographical Information Systems, 10(8), 901–920. 10.1080/02693799608902116
- Kang Y, Kim S, & Choi S (2012). Deep Learning to Hash with Multiple Representations. 2012 IEEE 12th International Conference on Data Mining, 930–935. 10.1109/ICDM.2012.24
- Kilpeläinen T (2000). Maintenance of Multiple Representation Databases for Topographic Data. The Cartographic Journal, 37(2), 101–107. 10.1179/caj.2000.37.2.101
- Kolanowski B, Augustyniak J, & Latos D (2018). Cartographic Line Generalization Based on Radius of Curvature Analysis. ISPRS International Journal of Geo-Information, 7(12), 477. 10.3390/ijgi7120477
- Kubie JL, & Muller RU (1991). Multiple representations in the hippocampus. Hippocampus, 1(3), 240–242. 10.1002/hipo.450010305
- Latuske P, Kornienko O, Kohler L, & Allen K (2018). Hippocampal Remapping and Its Entorhinal Origin. Frontiers in Behavioral Neuroscience, 11, 253. 10.3389/fnbeh.2017.00253
- Lever C, Burton S, Jeewajee A, O’Keefe J, & Burgess N (2009). Boundary Vector Cells in the Subiculum of the Hippocampal Formation. Journal of Neuroscience, 29(31), 9771–9777. 10.1523/JNEUROSCI.1319-09.2009
- Li M, Perrier E, & Xu C (2019). Deep Hierarchical Graph Convolution for Election Prediction from Geospatial Census Data. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 647–654. 10.1609/aaai.v33i01.3301647
- Maguire EA, Woollett K, & Spiers HJ (2006). London taxi drivers and bus drivers: A structural MRI and neuropsychological analysis. Hippocampus, 16(12), 1091–1101. 10.1002/hipo.20233
- Mai G, Janowicz K, Hu Y, Gao S, Yan B, Zhu R, Cai L, & Lao N (2022). A review of location encoding for GeoAI: Methods and applications. International Journal of Geographical Information Science, 36(4), 639–673. 10.1080/13658816.2021.2004602
- Malik K, Robertson C, Roberts SA, Remmel TK, & Long JA (2023). Computer vision models for comparing spatial patterns: Understanding spatial scale. International Journal of Geographical Information Science, 37(1), 1–35. 10.1080/13658816.2022.2103562
- Maruyama K, Takahashi S, Wu H-Y, Misue K, & Arikawa M (2019). Scale-Aware Cartographic Displacement Based on Constrained Optimization. 2019 23rd International Conference Information Visualisation (IV), 74–80. 10.1109/IV.2019.00022
- Meilinger T (2008). The Network of Reference Frames Theory: A Synthesis of Graphs and Cognitive Maps. In Freksa C, Newcombe NS, Gärdenfors P, & Wölfl S (Eds.), Spatial Cognition VI. Learning, Reasoning, and Talking about Space (Vol. 5248, pp. 344–360). Springer Berlin Heidelberg. 10.1007/978-3-540-87601-4_25
- Meilinger T, Riecke BE, & Bülthoff HH (2014). Local and Global Reference Frames for Environmental Spaces. Quarterly Journal of Experimental Psychology, 67(3), 542–569. 10.1080/17470218.2013.821145
- Monmonier M (1989). Regionalizing and matching features for interpolated displacement in the automated generalization of digital cartographic databases. Cartographica: The International Journal for Geographic Information and Geovisualization, 26(2), 21–39.
- Montello DR (1993). Scale and multiple psychologies of space. In Frank AU & Campari I (Eds.), Spatial Information Theory: A Theoretical Basis for GIS (Vol. 716, pp. 312–321). Springer Berlin Heidelberg. 10.1007/3-540-57207-4_21
- Mou X, & Ji D (2016). Social observation enhances cross-environment activation of hippocampal place cell patterns. ELife, 5, e18022. 10.7554/eLife.18022
- Muller JC, :agramge JP, & Weibel R (Eds.). (1995). GIS and Generalization (1st ed.). Taylor & Francis. [Google Scholar]
- Muller R, & Kubie J (1987). The effects of changes in the environment on the spatial firing of hippocampal complex-spike cells. The Journal of Neuroscience, 7(7), 1951–1968. 10.1523/JNEUROSCI.07-07-01951.1987 [DOI] [PMC free article] [PubMed] [Google Scholar]
- O’Keefe J, & Conway DH (1978). Hippocampal place units in the freely moving rat: Why they fire where they fire. Experimental Brain Research, 31(4). 10.1007/BF00239813 [DOI] [PubMed] [Google Scholar]
- O’Keefe J, & Dostrovsky J (1971). The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34(1), 171–175. 10.1016/0006-8993(71)90358-1 [DOI] [PubMed] [Google Scholar]
- O’Keefe J, & Speakman A (1987). Single unit activity in the rat hippocampus during a spatial memory task. Experimental Brain Research, 68(1). 10.1007/BF00255230 [DOI] [PubMed] [Google Scholar]
- Omer DB, Maimon SR, Las L, & Ulanovsky N (2018). Social place-cells in the bat hippocampus. Science, 359(6372), 218–224. 10.1126/science.aao3474 [DOI] [PubMed] [Google Scholar]
- Oosterom P van. (2009). Research and development in geo-information generalisation and multiple representation. Computers, Environment and Urban Systems, 33(5), 303–310. 10.1016/j.compenvurbsys.2009.07.001 [DOI] [Google Scholar]
- Peng Q, Li Z, Chen J, & Liu W (2021). Complexity-based matching between image resolution and map scale for multiscale image-map generation. International Journal of Geographical Information Science, 35(10), 1951–1974. 10.1080/13658816.2021.1885674 [DOI] [Google Scholar]
- Rondi-Reig L (2006). Impaired Sequential Egocentric and Allocentric Memories in Forebrain-Specific-NMDA Receptor Knock-Out Mice during a New Task Dissociating Strategies of Navigation. Journal of Neuroscience, 26(15), 4071–4081. 10.1523/JNEUROSCI.3408-05.2006 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sarel A, Finkelstein A, Las L, & Ulanovsky N (2017). Vectorial representation of spatial goals in the hippocampus of bats. Science, 355(6321), 176–180. 10.1126/science.aak9589 [DOI] [PubMed] [Google Scholar]
- Sester M (1999). Acquiring transition rules between multiple representations in a GIS: An experiment with area aggregation. Computers, Environment and Urban Systems, 23(1), 5–17. 10.1016/S0198-9715(99)00006-X [DOI] [Google Scholar]
- Sheintuch L, Geva N, Baumer H, Rechavi Y, Rubin A, & Ziv Y (2020). Multiple Maps of the Same Spatial Context Can Stably Coexist in the Mouse Hippocampus. Current Biology, 30(8), 1467–1476.e6. 10.1016/j.cub.2020.02.018 [DOI] [PubMed] [Google Scholar]
- Solstad T, Boccara CN, Kropff E, Moser M-B, & Moser EI (2008). Representation of Geometric Borders in the Entorhinal Cortex. Science, 322(5909), 1865–1868. 10.1126/science.1166466 [DOI] [PubMed] [Google Scholar]
- Spaccapietra S, Parent C, & Vangenot C (2000). GIS Databases: From Multiscale to MultiRepresentation. In Choueiry BY & Walsh T (Eds.), Abstraction, Reformulation, and Approximation (Vol. 1864, pp. 57–70). Springer; Berlin Heidelberg. 10.1007/3-540-44914-0_4 [DOI] [Google Scholar]
- Spalla D, Dubreuil A, Rosay S, Monasson R, & Treves A (2019). Can Grid Cell Ensembles Represent Multiple Spaces? Neural Computation, 31(12), 2324–2347. 10.1162/neco_a_01237 [DOI] [PubMed] [Google Scholar]
- Stanislawski LV, Finn MP, & Buttenfield BP. (2020). Classifying physiographic regimes on terrain and hydrologic factors for adaptive generalization of stream networks. Internatoinal Journal of Cartography, 6(1), 4–21. [Google Scholar]
- Stensola T, & Moser EI (2016). Grid Cells and Spatial Maps in Entorhinal Cortex and Hippocampus. In Buzsáki G & Christen Y (Eds.), Micro-, Meso- and Macro-Dynamics of the Brain (pp. 59–80). Springer International Publishing. 10.1007/978-3-319-28802-4_5 [DOI] [PubMed] [Google Scholar]
- Stum AK, Buttenfield BP, & Stanislawski LV (2017). Partial polygon pruning of hydrographic features in automated generalization. Transactions in GIS, 21(5), 1061–1078. 10.1111/tgis.12270 [DOI] [Google Scholar]
- Sun Y, Guo Q, Liu Y, Ma X, & Weng J (2016). An Immune Genetic Algorithm to Buildings Displacement in Cartographic Generalization: Buildings Displacement in Cartographic Generalization. Transactions in GIS, 20(4), 585–612. 10.1111/tgis.12165 [DOI] [Google Scholar]
- Sun Z, Peng Z, Yu Y, & Jiao H (2022). Deep convolutional autoencoder for urban land use classification using mobile device data. International Journal of Geographical Information Science, 36(11), 2138–2168. 10.1080/13658816.2022.2105848 [DOI] [Google Scholar]
- Taube J, Muller R, & Ranck J (1990). Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. The Journal of Neuroscience, 10(2), 420–435. 10.1523/JNEUROSCI.10-02-00420.1990 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tiina Sarjakoski L (2007). Conceptual Models of Generalisation and Multiple Representation. In Generalisation of Geographic Information (pp. 11–35). Elsevier. 10.1016/B978-008045374-3/50004-1 [DOI] [Google Scholar]
- Tobler WR (1970). A Computer Movie Simulating Urban Growth in the Detroit Region. Economic Geography, 46, 234. 10.2307/143141 [DOI] [Google Scholar]
- Tolman EC (1948). COGNITIVE MAPS IN RATS AND MEN. Psychological Review, 55(4), 189–208. 10.1037/h0061626 [DOI] [PubMed] [Google Scholar]
- Touya G, Zhang X, & Lokhat I (2019). Is deep learning the new agent for map generalization? International Journal of Cartography, 5(2–3), 142–157. 10.1080/23729333.2019.1613071 [DOI] [Google Scholar]
- Vangenot C, Parent C, & Spaccapietra S (2002). Modelling and Manipulating Multiple Representations of Spatial Data. In Richardson DE & van Oosterom P (Eds.), Advances in Spatial Data Handling (pp. 81–93). Springer Berlin; Heidelberg. 10.1007/978-3-642-56094-1_7 [DOI] [Google Scholar]
- Weibel R, & Dutton G (1999). Generalising spatial data and dealing with multiple representations. In Longley Paul A., Goodchild Michael F., Maguire DJ, & Rhind David W. (Eds.), Geographical information systems: Principles, techniques, management and applications. (Vol. 1, pp. 125–155). Longman. [Google Scholar]
- Wolbers T, & Wiener JM (2014). Challenges for identifying the neural mechanisms that support spatial navigation: The impact of spatial scale. Frontiers in Human Neuroscience, 8. 10.3389/fnhum.2014.00571 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wu Z, Pan S, Chen F, Long G, Zhang C, & Yu PS (2021). A Comprehensive Survey on Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1), 4–24. 10.1109/TNNLS.2020.2978386 [DOI] [PubMed] [Google Scholar]
- Xu J, Tasaka K, & Yanagihara H (2018). Beyond Two-stream: Skeleton-based Three-stream Networks for Action Recognition in Videos. 2018 24th International Conference on Pattern Recognition (ICPR), 1567–1573. 10.1109/ICPR.2018.8546165 [DOI] [Google Scholar]
- Xu Y, Jin S, Chen Z, Xie X, Hu S, & Xie Z (2022). Application of a graph convolutional network with visual and semantic features to classify urban scenes. International Journal of Geographical Information Science, 1–26. 10.1080/13658816.2022.2048834 [DOI] [Google Scholar]
- Yan X, Ai T, Yang M, & Yin H (2019). A graph convolutional neural network for classification of building patterns using spatial vector data. ISPRS Journal of Photogrammetry and Remote Sensing, 150, 259–273. 10.1016/j.isprsjprs.2019.02.010 [DOI] [Google Scholar]
- Yang M, Yuan T, Yan X, Ai T, & Jiang C (2022). A hybrid approach to building simplification with an evaluator from a backpropagation neural network. International Journal of Geographical Information Science, 36(2), 280–309. 10.1080/13658816.2021.1873998 [DOI] [Google Scholar]
- Yuan M, & McKee A (2022). Embedding scale: New thinking of scale in machine learning and geographic representation. Journal of Geographical Systems. 10.1007/s10109-022-00378-6 [DOI] [Google Scholar]
- Zhan BF, & Buttenfield BP (1996). Multiscale Representation of a Digital Line. Cartography and Geographic Information Systems, 23(4), 206–228. 10.1559/152304096782438800 [DOI] [Google Scholar]
- Zhang X, Guo T, Huang J, & Xin Q (2016). Propagating Updates of Residential Areas in Multi-Representation Databases Using Constrained Delaunay Triangulations. ISPRS International Journal of Geo-Information, 5(6), 80. 10.3390/ijgi5060080 [DOI] [Google Scholar]
- Zhang X, Yin W, Yang M, Ai T, & Stoter J (2018). Updating authoritative spatial data from timely sources: A multiple representation approach. International Journal of Applied Earth Observation and Geoinformation, 72, 42–56. 10.1016/j.jag.2018.05.022 [DOI] [Google Scholar]
- Zhu D, Zhang F, Wang S, Wang Y, Cheng X, Huang Z, & Liu Y (2020). Understanding Place Characteristics in Geographic Contexts through Graph Convolutional Neural Networks. Annals of the American Association of Geographers, 110(2), 408–420. 10.1080/24694452.2019.1694403 [DOI] [Google Scholar]
Data Availability Statement
This is a review paper and involves no original data.