PLOS One. 2021 Nov 2;16(11):e0259031. doi: 10.1371/journal.pone.0259031

Change of human mobility during COVID-19: A United States case study

Justin Elarde 1, Joon-Seok Kim 1, Hamdi Kavak 2, Andreas Züfle 1, Taylor Anderson 1,*
Editor: Itzhak Benenson
PMCID: PMC8562789  PMID: 34727103

Abstract

With the onset of COVID-19 and the resulting shelter-in-place guidelines combined with remote working practices, human mobility in 2020 was dramatically impacted. Existing studies typically examine whether mobility in specific localities increases or decreases at specific points in time and relate these changes to certain pandemic and policy events. However, a more comprehensive analysis of mobility change over time is needed. In this paper, we study mobility change in the US through a five-step process using mobility footprint data. (Step 1) Propose the Delta Time Spent in Public Places (ΔTSPP) as a measure to quantify daily changes in mobility for each US county from 2019 to 2020. (Step 2) Conduct Principal Component Analysis (PCA) to reduce the ΔTSPP time series of each county to lower-dimensional latent components of change in mobility. (Step 3) Conduct clustering analysis to find counties that exhibit similar latent components. (Step 4) Investigate local and global spatial autocorrelation for each component. (Step 5) Conduct correlation analysis to investigate how various population characteristics and behavior correlate with mobility patterns. Results show that by describing each county as a linear combination of the three latent components, we can explain 59% of the variation in mobility trends across all US counties. Specifically, change in mobility in 2020 for US counties can be explained as a combination of three latent components: 1) long-term reduction in mobility, 2) no change in mobility, and 3) short-term reduction in mobility. Furthermore, we find that US counties that are geographically close are more likely to exhibit a similar change in mobility. Finally, we observe significant correlations between the three latent components of mobility change and various population characteristics, including political leaning, population, COVID-19 cases and deaths, and unemployment. Overall, our analysis provides a comprehensive understanding of mobility change in response to the COVID-19 pandemic.

1 Introduction

Human mobility plays a crucial role in spreading an infectious virus such as SARS-CoV-2 and has been instrumental in the onset of the COVID-19 pandemic. In response to this global crisis, non-pharmaceutical interventions (NPIs), including stay-at-home orders and social distancing guidelines, have been implemented [1], reducing physical contacts and resulting in significant changes to normal mobility patterns [2]. These changes can be observed using individual-level mobility data such as mobile phone data embedded with Bluetooth and global positioning system (GPS), collected actively through Call Detail Records (CDRs) and passively through the use of smartphone applications (apps).

Individual-level mobility data are typically anonymized and aggregated to various spatial resolutions to produce a range of different mobility indicators (see Table 1). As part of their COVID-19 Data Consortium efforts, SafeGraph Inc. [3] has made available a comprehensive set of mobility indicators ranging from mean distance traveled to median dwell time at and away from home for each census block group in the US. Descartes Labs Inc. [4] makes available the median of the maximum distance traveled by users at the national, state, and county levels. Through the Data for Good [5, 6] effort, Facebook makes available the fraction of users who stay put in a 60x60 meter tile.

Table 1. Publicly available mobility indicators and mobility change measures.

| Name | Mobility Indicators | Geography | US Spatial Granularity | Mobility Change Measure | Baseline |
| --- | --- | --- | --- | --- | --- |
| SafeGraph [7, 8] | candidate device count, origin CBG and POI destination, completely home device count, home dwell time, non-home dwell time, distance traveled from home | US | census block group | shelter in place index, relative foot traffic index | average time spent at home / foot traffic from Feb. 20–27, 2020 |
| Google [9] | N/A | global | county | relative time spent at various POI groups compared to baseline | median value for the corresponding day of the week during the 5-week period Jan. 3–Feb. 6, 2020 |
| Apple [10] | N/A | global | county, city | relative number of direction requests compared to baseline | volume of requests on Jan. 13, 2020 |
| Foursquare [11, 12] | N/A | US | national and select states | relative number of visits to different POIs compared to baseline | average number of visits from Feb. 13–19, 2020 |
| Descartes Labs Inc [4, 13] | number of samples, median of the max distance | US | national, state, and county level | relative median max distance compared to baseline | average median max distance from Feb. 17–Mar. 7, 2020 |
| Facebook [5, 6] | fraction of users that stay put in a region | global | national, state, county, and city level | relative number of trips to other 60 m tiles compared to baseline | average from Feb. 2–Mar. 29, 2020 |
| Unacast [14] | N/A | US | county level | relative distance traveled compared to baseline, relative number of visits to non-essential retail and essential services compared to baseline, relative number of unique human encounters compared to baseline | average from Feb. 2–Mar. 29, 2020 |

By using indicators such as these, we can derive a mobility change measure by comparing mobility as measured by an indicator for a specific point in time and place with a baseline representing normal mobility as measured by the same indicator (see Table 1).

To preserve the privacy of users, many companies like Google and Apple have not made available their mobility indicators, but instead release mobility change measures, which are based on some measured mobility indicator. Google [9] makes available the percent change in minutes spent at various POI groups like parks, residential, workplace, retail, and public transportation each day compared to a baseline. Similar mobility change measures based on POI visits are produced by Foursquare [11], SafeGraph [7], and Unacast [14]. Apple [10] makes available the percent change in volume of direction requests. Unacast also makes available the absolute change in distance traveled compared to a baseline and the absolute change in unique human encounters per sq km. SafeGraph [8] makes available its own mobility change measure, a shelter-in-place index, which measures the percent change in the population that stays at home each day compared to a baseline. Descartes Labs Inc. [4] makes available the percent change in max distance traveled relative to a baseline. In all of the above examples, the choice of baseline varies. In most cases, these baselines are static and usually reflect the average mobility measured by the indicator of choice over a short period representing normal mobility (usually January or February 2020).

Quantifying the spatial and temporal change in mobility has been critical for evaluating the effectiveness of NPIs [15–17], explaining behavior related to mobility patterns [18], supporting contact tracing efforts [19], and developing realistic models that predict trajectories of the disease [20]. However, due to the challenges associated with analyzing big data with both spatial and temporal dimensions, mobility change measures are typically either 1) mapped to show the increase or decrease in mobility at specific points in time [13] or 2) plotted to show the change in mobility as a time series for a specific study area [21]. In either case, studies tend to associate these changes at specific points in time with certain NPIs and, furthermore, attempt to determine the underlying explanations for the variation in social distancing behaviors [17]. However, a more comprehensive understanding of mobility changes is needed.

Therefore, the objectives of this study are to (1) develop a novel mobility change measure and (2) identify and describe common temporal and spatial trends that are observed across all US counties over the period of a year during the COVID-19 pandemic. We aim to explore three related hypotheses, as follows:

  • Our first hypothesis is that mobility behavior during the pandemic varies spatially and temporally, but that we can quantify general mobility trends across counties. To evaluate this hypothesis, we use a principal component analysis (PCA) approach based on truncated Singular Value Decomposition (SVD) to decompose the mobility time series of all counties into latent components of mobility behavior.

  • Our second hypothesis is that geographically close counties have similar mobility change trends. By representing each county as a linear combination of latent features of human mobility, we can map these features into geographic space and measure their spatial autocorrelation.

  • Our third hypothesis is that the strength of the mobility components correlates with other population characteristics such as population density, income, and political leaning. To evaluate this hypothesis, we test for a significant correlation between the mobility components and these explanatory variables for the same county.

By exploring these hypotheses, we aim to uncover hidden spatial and temporal patterns and provide a concise summary of human mobility behavior.

2 Methods

In this paper, we analyze mobility change in the US using high-resolution foot traffic data (Fig 1). We first propose the Delta Time Spent in Public Places (ΔTSPP), which measures changes in mobility for each US county from 2019 to 2020 (Section 2.2). In this study, any geographical region that has a FIPS code, including counties, cities, and boroughs, is designated as a county. Because of the high dimensionality of the data, we next use Principal Component Analysis (PCA) to reduce the data into three latent components, where each component is a time series representing a mode of change in mobility across US counties (Section 2.3). Third, we use clustering analysis to find which counties have similar weighted combinations of the three components (Section 2.4). Fourth, we investigate local and global spatial autocorrelation for each component (Section 2.5). Finally, we conduct correlation analysis to investigate how various population characteristics and behavior correlate with mobility patterns (Section 2.6).

Fig 1. Overview of methodology.


2.1 Mobility data source

The data used in this study was obtained “from SafeGraph, a data company that aggregates anonymized location data from numerous applications in order to provide insights about physical places, via the Placekey Community. To enhance privacy, SafeGraph excludes census block group information if fewer than five devices visited an establishment in a month from a given census block group” [3, 22].

As part of SafeGraph’s Data for Academics [23], SafeGraph offers a Social Distancing Metrics dataset [8] (archived on April 16, 2021). This dataset is available at no cost to academic researchers for non-commercial work. In general, SafeGraph obtains precise device location information from third-party data partners such as mobile application developers. This information is collected through APIs where app developers provide information about their users [24]. The device’s home census block group (CBG) is determined based on the nighttime location of devices over six weeks, so that Social Distancing Metrics including the CBG’s device count, the completely-home device count, the median distance traveled from home, the median home dwell time, the median non-home dwell time, and more can be calculated. It should be noted that SafeGraph’s Social Distancing Metrics are available from January 1, 2019, through April 16, 2021, and are no longer being updated. For this study, we use data from SafeGraph’s Social Distancing Metrics from January 1, 2019, to December 31, 2020.

2.2 Human mobility measures

As an indicator to assess changes in human mobility in the US, we selected SafeGraph’s measure of median_non_home_dwell_time from the Social Distancing Metrics dataset, which is defined as the median time (in minutes per day) that all devices in a census block group (CBG) spend visiting public points of interest (POIs) located outside the boundaries of their home geohash (using 153 m × 153 m hash buckets). This includes minutes spent at public POIs such as grocery stores, restaurants, bars, and movie theaters. We note that this measure only includes public POIs that are captured among the 6.5 million POIs in the SafeGraph Core Places database [7]. We aggregate this measure to the county level by averaging the median_non_home_dwell_time across all CBGs in each county, giving us an average_median_non_home_dwell_time for each county.

Our goal is to compare the average_median_non_home_dwell_time for each county for each day in 2019 and the corresponding day in 2020. Weekly patterns strongly affect mobility in the United States, where in general, lower mobility is observed on the weekends, especially on Sundays. Since corresponding days in 2019 and 2020 may not be corresponding days of the week, we use the seven-day rolling average of the average_median_non_home_dwell_time for both 2019 and 2020. That is, we calculate the mean average_median_non_home_dwell_time for that day and the three days before and after that day. We formally define this as Time Spent at Public Places (TSPP) calculated for both 2019 and 2020 as follows:

Definition 1 (Time Spent at Public Places (TSPP)) Let R be a region, and let $D^R = [d_1^R, \ldots, d_N^R]$ denote a time series of daily median_non_home_dwell_time for that region, where N is the number of days of interest. We define our Time Spent at Public Places measure at the i-th day as:

$$\mathrm{TSPP}(D^R)[i] = \frac{\sum_{j=-3}^{3} d_{i+j}^R}{7},$$

where i is an index of $D^R$ and $4 \le i \le N-3$. For example, for a 365-day time series, $\mathrm{TSPP}(D^R)[i]$ is defined from Day 4 to Day 362, since for the first and last three days there are not enough days before and after, respectively, to compute the centered weekly average. The denominator denotes the size of the sliding window, i.e., 7 days, used for calculating the mean.
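A minimal sketch of how this centered seven-day average could be computed with pandas is shown below; the DataFrame layout (one row per day, one column per county of average_median_non_home_dwell_time values) is an assumption for illustration.

```python
import pandas as pd

def tspp(daily: pd.DataFrame) -> pd.DataFrame:
    """Centered 7-day rolling mean of daily non-home dwell time (Definition 1).

    `daily` is assumed to hold one row per day of a year and one column per
    county with average_median_non_home_dwell_time values. Days without a
    full +/-3-day window (the first and last three days) are dropped.
    """
    smoothed = daily.rolling(window=7, center=True).mean()
    return smoothed.dropna()

# Hypothetical usage:
# tspp_2019 = tspp(daily_2019)   # rows: Day 4 .. Day 362
# tspp_2020 = tspp(daily_2020)
```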

Next, we look at the difference between the Time Spent at Public Places (TSPP) measure calculated for the i-th day in the time series for each county in 2019 and the i-th day in the time series for the same county in 2020. We define this as the Change in Time Spent at Public Places measure, or ΔTSPP, as follows:

Definition 2 (Change in Time Spent at Public Places (ΔTSPP)) Let R be a region and let $D_y^R$ be the time series of 365 daily median_non_home_dwell_time values in R for all days in year y. We define the daily change in TSPP as:

$$\Delta\mathrm{TSPP}(R, y_1, y_2) = \mathrm{TSPP}(D_{y_1}^R) - \mathrm{TSPP}(D_{y_2}^R),$$

where $y_1$ and $y_2$ are a target year and a reference year to compare, respectively. For short, ΔTSPP refers to ΔTSPP(R, 2020, 2019) in this paper, where R ranges over all counties of the US.
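Continuing the sketch above, ΔTSPP is the element-wise difference of the two smoothed series, assuming the 2019 and 2020 TSPP frames are aligned by day-of-year index (an assumption for illustration).

```python
import pandas as pd

def delta_tspp(tspp_target: pd.DataFrame, tspp_reference: pd.DataFrame) -> pd.DataFrame:
    """Definition 2: daily change in TSPP between a target and a reference year.

    Both inputs are assumed to be outputs of tspp(), indexed by day of year
    (4..362) with one column per county. Positive values indicate more time
    spent at public places in the target year.
    """
    return tspp_target - tspp_reference

# delta = delta_tspp(tspp_2020, tspp_2019)   # shape: (359 days, number of counties)
```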

We consider an increase in ΔTSPP, where TSPP is higher in 2020 than in 2019, as a proxy for increased mobility and thus increased risk of exposure. Although individual county time series capture the spatial and temporal heterogeneity of mobility behavior during the pandemic, there are thousands of counties in the US, each with a unique mobility trend. Thus, we seek an approach that can identify different mobility trends found commonly across all 3107 counties while handling both the dimensionality and variance of the data.

2.3 Mobility feature extraction: Principal Component Analysis

Principal Component Analysis (PCA) [25] is a commonly used technique to reduce the dimensionality of large multivariate data while retaining its variation, and is a generalization of eigendecomposition to non-square and non-maximum-rank matrices. We define a data matrix $R \in \mathbb{R}^{m \times n}$ as an m × n matrix, where m = 359 corresponds to the number of days of the year (excluding the first three and last three days due to the seven-day sliding window) and n = 3100 corresponds to the number of US counties (we remove seven outlier counties; see Section 2 in S1 File). Using singular value decomposition (SVD), R is factorized into the product of three matrices $R = U \Sigma V^T$, where $\Sigma$ is a diagonal matrix containing the square roots of the eigenvalues of $RR^T$, and the columns of U (V) are the eigenvectors of $RR^T$ ($R^T R$).

To reduce the dimensionality of R, we truncate the SVD and keep only the first K dimensions. Thus, $\Sigma_K$ is a K × K diagonal matrix containing the K largest singular values, $U_K$ is an m × K matrix describing each of the m days with K latent features, and $V_K$ is a K × n matrix describing each county with K latent features.
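The lines below sketch one way this truncation could be carried out with scikit-learn's TruncatedSVD; the matrix name delta (days as rows, counties as columns) and K = 3 continue the sketches above and are assumptions for illustration.

```python
from sklearn.decomposition import TruncatedSVD

# ΔTSPP matrix with days as rows and counties as columns, e.g. built from the
# sketch above: delta = delta_tspp(tspp_2020, tspp_2019).to_numpy()
# Expected shape: (359 days, 3100 counties).
K = 3
svd = TruncatedSVD(n_components=K, random_state=0)

days_k = svd.fit_transform(delta)    # each day described by K latent features (U_K scaled by singular values)
counties_k = svd.components_         # each county described by K latent features (V_K), shape (K, 3100)

print("explained variance ratio:", svd.explained_variance_ratio_)
```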

The idea of using SVD in this context is to decompose the time series of each county into a linear combination of K archetypal time series called principal components (PCs). SVD assumes that the ΔTSPP is a linear combination of latent features. This assumption holds since the average ΔTSPP that we observe is indeed derived from the mobility change of individual people. By applying SVD to the set of ΔTSPP time-series of all counties of the US, we can find components of individual human behavior as follows.

  • Reduced mobility during the entirety of March-December 2020 corresponding to counties with individuals who have the ability and obedience to stay at home for the remainder of 2020, such as people who worked remotely.

  • Reduced mobility only during Summer 2020 corresponding to individuals who stop isolation after the first wave of infections in the US, either due to having to go back to work or due to growing weary of mobility restrictions.

  • No reduced mobility, corresponding to individuals who cannot stay at home such as health professionals or individuals who are not willing to follow stay-at-home directions.

In addition to finding these three latent PCs (see Section 3.3 in S1 File), SVD further allows us to describe each county as a linear combination of these components, which can be interpreted as the corresponding mobility behavior. To identify counties that are not well explained by any of the latent components, we calculate the coefficient of determination for each county.
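As a sketch, the per-county coefficient of determination can be obtained by comparing the rank-K reconstruction of the ΔTSPP matrix against each county's own mean; the variable names continue the truncated-SVD sketch above and are assumptions.

```python
import numpy as np

# Rank-K reconstruction of the ΔTSPP matrix from the truncated factors.
delta_hat = days_k @ counties_k                      # shape (359, 3100)

# Coefficient of determination (R^2) per county. Negative values indicate that
# the county is better described by its own mean ΔTSPP than by the K components.
ss_res = ((delta - delta_hat) ** 2).sum(axis=0)
ss_tot = ((delta - delta.mean(axis=0)) ** 2).sum(axis=0)
r2_per_county = 1.0 - ss_res / ss_tot
```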

2.4 Clustering analysis

Due to the large number of counties, it is difficult to determine which counties exhibit similar mobility trends. Therefore, we cluster counties into groups that exhibit similar latent features of change of exposure. We first plot each county in the PCA space, where each point represents a county and each axis represents the weight of each PC in explaining the county’s ΔTSPP, normalized from 0 to 1. For clustering, we compared the K-means algorithm [26] against other clustering algorithms and determined K-means to be the most appropriate (see Section 3.4 in S1 File). The K-means algorithm partitions n observations into k clusters by randomly initializing k points (means or cluster centroids) and assigning each observation to its closest point. Each centroid is updated iteratively to reflect the mean center of the observations assigned to it. This approach requires the number of clusters k to be specified in advance. We choose k = 3 so that we can better visualize the counties that have similar weighted combinations of PCs (see Section 3.4 in S1 File for more details).
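A minimal sketch of this clustering step, assuming the county-by-component weight matrix counties_k from the SVD sketch above (a hypothetical name), could look like the following.

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

# One row per county, one column per principal-component weight, normalized to [0, 1].
weights = MinMaxScaler().fit_transform(counties_k.T)

# Partition counties into k = 3 clusters of similar weighted PC combinations.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(weights)     # one cluster label per county
```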

2.5 Spatial autocorrelation analysis

To test the impact of proximity in mobility change, we measure the spatial autocorrelation of counties and their corresponding weights for each PC using both Global Moran’s I [27] and Anselin’s Local Moran’s I [28]. The concept of spatial autocorrelation is based on Tobler’s First Law of Geography which states that things that are closer to each other are more similar than things that are far apart [29]. Moran’s I calculates the degree to which features in a dataset are positively spatially autocorrelated (neighboring features are alike), negatively spatially autocorrelated (neighboring features are not alike), or not spatially autocorrelated (attributes of features are independent of location).

First, we compute a matrix of spatial weights to define each county’s neighbors mathematically. We use Queen’s case contiguity, meaning counties are considered neighbors if their borders share at least one common vertex. After building this matrix, we discovered that the only neighboring county to Fairfax City is Fairfax County, Virginia, which was identified as an outlier in the PCA space and removed earlier in this analysis. To resolve this issue, Fairfax City was also removed. Next, we calculate the fraction of the total variation that is attributed to counties that are close together across the entire study area to give us a measure of global spatial autocorrelation (Moran’s I), and then decompose the measure for each feature to give us local spatial autocorrelation (Anselin’s Local Moran’s I).
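The following sketch shows how the Queen contiguity weights and Global Moran's I could be computed with libpysal and esda; the shapefile name and the pc1_weight column are assumptions for illustration.

```python
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran

# Hypothetical GeoDataFrame of county polygons with one column per PC weight.
counties_gdf = gpd.read_file("us_counties_with_pc_weights.shp")

w = Queen.from_dataframe(counties_gdf)   # neighbors share at least one vertex
w.transform = "r"                        # row-standardize the spatial weights

mi = Moran(counties_gdf["pc1_weight"], w)
print(mi.I, mi.z_norm, mi.p_norm)        # Moran's I, Z-score, and p-value
```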

2.6 Correlation analysis

We have covered both the spatial and temporal variation of mobility trends across all of the counties in the US in response to the COVID-19 pandemic. Next, we aim to identify population variables that may explain this variation. Thus, we use Pearson’s R coefficient to calculate the correlation between the weight of each PC and a variety of explanatory variables, including income, political leaning, employment, percent of the population over age 65, and COVID-19 cases and deaths for each county. Pearson’s R is a statistical measure of linear association that returns a value between -1 and 1 defining how strong the correlation is, where the further the value is from 0, the stronger the correlation. We test for significance using a p-value. Since we hypothesize that there is a linear relationship between the strength of the PCs in explaining county mobility and the different county variables, we did not investigate non-linear relationships.
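As a sketch, these correlations could be computed with scipy.stats.pearsonr; the DataFrame county_df and its column names are hypothetical placeholders for the county-level PC weights and explanatory variables.

```python
from scipy.stats import pearsonr

# county_df: one row per county, holding PC weights and explanatory variables.
variables = ["median_household_income", "per_capita_income",
             "dem_vote_pct_2020", "unemployment_rate"]

for var in variables:
    r, p = pearsonr(county_df["pc1_weight"], county_df[var])
    print(f"PC1 vs {var}: r = {r:.2f}, p = {p:.2e}")
```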

3 Results

3.1 General mobility trends

We calculate the TSPP for each of the 3107 counties to produce a time series representing mobility in each county in 2019 and 2020. These county time series can also be aggregated to the national level. Fig 2 shows the Time Spent in Public Places $\mathrm{TSPP}(D^R)$ for the region R corresponding to all counties aggregated to the United States level, excluding Alaska, Hawaii, and US territories, and for the sequence of days $D^R$ ranging from January to December of 2019 and 2020. Boxplots of the 2019 and 2020 TSPP can be found in Section 3.1 in S1 File.

Fig 2. TSPP calculated for the United States in 2019 and 2020.


We observe anomalously high mobility in January and February 2020. This is likely a combined effect of higher-than-average temperatures, 50% less snow depth, and panic-buying behaviors (see S1 File). Starting in March 2020, we observe a rapid drop in mobility due to the COVID-19 pandemic. Interestingly, we also observe that mobility swings back to normal by June 2020 and even exceeds 2019 mobility overall.

Next, we look at the Change in Time Spent at Public Places (ΔTSPP). Fig 3 shows the ΔTSPP measure for the US and for three counties. We can visually observe radically different mobility behavior among these counties. Arlington County, VA, exhibits a larger drop in mobility in March 2020 than the other counties. We also observe that this reduction in mobility persists throughout 2020. In contrast, Cambria County, PA, exhibits a less extreme drop in mobility and quickly returns to and exceeds normal mobility after June 2020, where ΔTSPP is greater than or equal to 0. Tulare County, California, exhibits a much less extreme drop in mobility but, in general, maintains this reduction in mobility.

Fig 3. ΔTSPP calculated for the US, Arlington County (VA), Tulare County (CA), and Cambria County (PA).


3.2 Principal components of ΔTSPP

3.2.1 Qualitative interpretation

We find that K = 3 principal components explain 59% of the variation across all of the included time series, where PC1 explains 35.6%, PC2 explains 15.3%, and PC3 explains 8.8% of the variance. Thus, using matrix $V_K$, each county in the U.S. is described as a linear combination of three PCs, with a loss of 100% - 59% = 41% of explained variance (see Section 3.3 in S1 File). This result shows that by describing each county by only three archetypal behaviors, we can explain more than half of the variance across the 359 dimensions that describe each county in the full space.

In the spirit of explainable machine learning, Fig 4 shows the three latent principal components describing each county in the U.S. mapped back into the full 359-day space.

Fig 4. Three latent features describing mobility in the US mapped back into the temporal space.


  • The mobility trend captured by PC1 (in red) begins with slightly higher mobility in January and February 2020 compared with 2019. In March 2020, there is a sharp deviation from the mobility observed in March 2019 as mobility declines in response to the pandemic. For the remainder of 2020, mobility remains consistently lower than in 2019. We explain PC1 by individuals who reduced their mobility in March 2020 and then maintained this stay-at-home and social distancing behavior for the rest of the year. Counties that are well explained by PC1 may be composed of individuals who were able to work from home in April and continued working from home throughout the year.

  • The mobility trend captured by PC2 (in green) begins with slightly lower mobility in January and February 2020 compared with 2019. In the spring, mobility steadily increases until April 2020, when mobility declines slightly in response to the COVID-19 pandemic. For the remainder of 2020, mobility remains higher than in 2019. The only time mobility is lower than in 2019 is before the pandemic. Counties that are well explained by PC2 may be composed of individuals who cannot or will not comply with stay-at-home orders (such as health care workers, essential workers, and other individuals).

  • Finally, the mobility trend captured by PC3 (in blue) begins with higher mobility in January and February 2020 than in 2019. In response to the pandemic, there is a sharp drop in mobility in March 2020, which returns to normal by June 2020. Mobility then increases in late summer and remains higher than in 2019 for the remainder of the year. We explain PC3 by individuals who reduced mobility directly after the onset of the pandemic (March–June 2020) but then returned to normal mobility. Counties that are well explained by PC3 may be composed of individuals who were unable to work due to a shutdown in March 2020 but returned to work in June 2020. Counties that are well explained by PC3 may also be composed of individuals who were fearful during the onset of the pandemic but experienced pandemic fatigue and became less compliant with stay-at-home orders as the pandemic continued.

Abstractly speaking, we can interpret these three latent components as “Long term mobility reduction” (PC1), “No mobility reduction” (PC2), and “Short term mobility reduction” (PC3).

In the Singular Value Decomposition, matrix V provides us with the K = 3 weights of these latent features for each county. In the following, Section 3.2.2 provides a qualitative spatial analysis of these three principal components to understand which parts of the United States exhibit strong weights for each of these latent features. Section 3.4 then provides quantitative analysis to show that the weights of these latent features are strongly spatially autocorrelated, with a number of significant spatial clusters.

3.2.2 Spatial analysis

We provide a spatial analysis of the principal components of change of exposure across all counties in the United States. Fig 5 shows the spatial distribution of each principal component, Fig 6 explores which counties are well modeled by these components and which counties still exhibit large unexplained variance using only three principal components, and Section 3.3 analyzes clusters of counties having similar principal components.

Matrix V describes each county as a linear combination of the three components. For example, Arlington County’s ΔTSPP (see Fig 3) is described as 98% PC1, 20% PC2, and 21% PC3, thus having a dominant first component. In contrast, Tulare County’s ΔTSPP (see Fig 3) is described as 51% PC1, 16% PC2, and 33% PC3, thus placing a stronger weight on PC3 than Arlington County.

Fig 5A to 5C map the strength of each PC in explaining the mobility trend of all 3100 counties. Based on visual analysis of the results, we find that counties on the east and west coasts have a higher weight on PC1, counties in the south-west and south-east have a higher weight on PC2, and counties in the mid-west have a higher weight on PC3. Stacking these three figures creates a Red-Green-Blue (RGB) composite map showing the linear combination of the components for each county, where red is PC1, green is PC2, and blue is PC3 (see Section 3.5 in S1 File).

Fig 5. Spatial distribution of the three principal components of change of public exposure.


(A) Principal Component 1: Reduced ΔTSPP March-December (B) Principal Component 2: Increased ΔTSPP March-December (C) Principal Component 3: Reduced ΔTSPP March-June. Maps produced in QGIS [30] using SafeGraph [3] derived data, shapefiles from data.gov [31].
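The RGB composite described above can be sketched with geopandas by painting each county with its normalized component weights interpreted as red, green, and blue; the GeoDataFrame and the pc1/pc2/pc3 column names are assumptions for illustration.

```python
import geopandas as gpd

# counties_gdf: county polygons with 'pc1', 'pc2', 'pc3' weights normalized to [0, 1].
rgb = counties_gdf[["pc1", "pc2", "pc3"]].to_numpy().clip(0, 1)

# One (R, G, B) color per county: red for PC1, green for PC2, blue for PC3.
ax = counties_gdf.plot(color=[tuple(c) for c in rgb], figsize=(12, 7))
ax.set_axis_off()
```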

Since the three principal components only explain 59% of the variation among the 359-dimensional representation of counties as a sequence of daily changes in mobility, an important open question is which counties of the U.S. are explained well by these components and which ones are not; that is, which counties may be better explained by the remaining 356 components that we truncated to reduce the dimensionality. Fig 6 shows the explained and unexplained variance using the coefficient of determination. The counties with positive values (green) are well explained by the three PCs. The counties with negative values (red) are not well explained by the three PCs and would be better described by simply taking the average of that county’s ΔTSPP.

Fig 6. Spatial distribution of the explained variance (R2) for all counties across the US using three principal components.


Map produced in QGIS [30] using SafeGraph [3] derived data, shapefile from data.gov [31].

3.3 Clustering of latent features of ΔTSPP

Fig 7 depicts the resulting feature vectors for all counties in the K = 3 dimensional latent feature space from two angles (Fig 7A and 7B) for easier interpretation. The colors in Fig 7 represent the result of the K-means clustering analysis. We map the results of K-means analysis to see the spatial distributions of the counties related to each cluster (Fig 8).

Fig 7. Counties plotted in PCA space.


(A) Angle 1. (B) Angle 2.

Fig 8. K-means analysis mapped geographically.


Map produced in QGIS [30] using SafeGraph [3] derived data, shapefile from data.gov [31].

We can see that counties in the southwest and southeast, excluding Arizona, New Mexico, Virginia, and West Virginia, belong primarily to Cluster 1 (in green) and thus have similar weighted combinations of PCs. Counties along the west and northeastern coasts, as well as the Florida coast and southern Texas, belong primarily to Cluster 2 (in pink). Counties in the Rocky Mountain region of the US, as well as Maine, Vermont, much of New York, West Virginia, Minnesota, and Wisconsin, belong primarily to Cluster 3 (in purple). Counties in many states in the midwest and southeast are split among different clusters.

3.4 Spatial autocorrelation of mobility behavior between counties

The results of the Global Moran’s I analysis for each of the three latent features are presented in Table 2. We found a strong positive spatial autocorrelation for all three features. The spatial autocorrelation of PC1 and PC2, at 0.556 and 0.544, respectively, is higher than that of PC3 at 0.489. For all three components, the positive spatial autocorrelation is highly significant, with p-values ≪ 10^-28 and Z-scores of 46 and greater. As we suspected from our qualitative analysis, this result confirms that the patterns of ΔTSPP that we observed in Fig 5 are indeed strongly positively spatially autocorrelated.

Table 2. Global Moran’s I measure of spatial autocorrelation for each principal component.

| Component | Moran’s I | Z-score |
| --- | --- | --- |
| PC1 | 0.556 | 52.4 |
| PC2 | 0.544 | 51.3 |
| PC3 | 0.489 | 46.1 |

Next, we calculate Anselin’s Local Moran’s I [28] to help visualize clusters of counties with similar neighbors and outlier counties with dissimilar neighbors (Fig 9). The results of Anselin’s Local Moran’s I for PC1 are presented in Fig 9A. We can identify clusters of counties with high weights on PC1 whose neighbors also have high weights. We refer to these patterns as High-High (HH) clusters, which are positively spatially autocorrelated. In addition, we can identify counties with low weights on PC1 whose neighbors also have low weights. We refer to these patterns as Low-Low (LL) clusters, which are positively spatially autocorrelated. Anselin’s Local Moran’s I also uncovers outliers, where we find counties with high or low weights on PC1 whose neighbors are oppositely weighted. We refer to these patterns as Low-High (LH) outliers and High-Low (HL) outliers, which are negatively spatially autocorrelated. Counties with a p-value greater than 0.05 are considered insignificant. We present the results of Anselin’s Local Moran’s I for PC2 and PC3 in Fig 9B and 9C.

Fig 9. Anselin’s Local Moran’s I (LISA) results.


(A) LISA for PC1. (B) LISA for PC2. (C) LISA for PC3. Maps produced in QGIS [30] using SafeGraph [3] derived data, shapefiles from data.gov [31].
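A brief sketch of this LISA classification with esda is shown below, reusing the Queen weights w and the hypothetical counties_gdf from the Global Moran's I sketch in Section 2.5.

```python
import numpy as np
from esda.moran import Moran_Local

lisa = Moran_Local(counties_gdf["pc1_weight"], w)

# Quadrants reported by esda: 1 = High-High, 2 = Low-High, 3 = Low-Low, 4 = High-Low.
labels = np.array(["HH", "LH", "LL", "HL"])[lisa.q - 1]

# Mark counties whose pseudo p-value exceeds 0.05 as not significant.
labels[lisa.p_sim > 0.05] = "ns"
counties_gdf["lisa_cluster"] = labels
```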

3.5 Explaining variation in mobility patterns

We examine the correlation between the weight of each PC and other variables for each county. The results are presented in Table 3. PC1 captures the mobility trends of US counties that maintain decreased mobility from the onset of the pandemic onward. We find a strong positive correlation between counties with higher income (median household income and per capita income) and a higher weight on PC1. This is supported by other studies in the literature, which find that higher-income counties and states are better able to follow social distancing guidelines and stay-at-home orders [32].

Table 3. Correlation analysis results showing Pearson’s correlation and respective p-values between each principal component (PC1-PC3) and other population characteristics.

| Correlation Variable | PC1 | PC1 P-value | PC2 | PC2 P-value | PC3 | PC3 P-value |
| --- | --- | --- | --- | --- | --- | --- |
| Median Household Income | 0.63 | 0.00e+00 | -0.11 | 1.84e-09 | 0.22 | 2.39e-36 |
| Per Capita Income | 0.55 | 1.02e-244 | -0.15 | 2.02e-16 | 0.19 | 5.13e-26 |
| 2020 Rep Vote Percent | -0.45 | 8.12e-152 | 0.38 | 2.90e-104 | -0.02 | 3.34e-01 |
| 2020 Dem Vote Percent | 0.44 | 3.61e-150 | -0.37 | 4.56e-99 | -3.23e-02 | 5.14e-01 |
| 2016 Rep Vote Percent | -0.42 | 5.79e-133 | 0.39 | 7.45e-112 | 0.02 | 1.72e-01 |
| 2016 Dem Vote Percent | 0.41 | 5.40e-125 | -0.33 | 1.93e-81 | -0.03 | 7.25e-02 |
| ACS 2019 Pop Est | 0.38 | 4.50e-106 | -0.18 | 3.32e-25 | -0.02 | 2.32e-01 |
| Percent Pop Over 65 | -0.34 | 1.23e-84 | -0.025 | 0.16 | -0.078 | 1.50e-05 |
| 2012 Dem Vote Percent | 0.28 | 2.42e-56 | -0.39 | 1.11e-115 | 0.02 | 3.38e-01 |
| 2012 Rep Vote Percent | -0.28 | 4.62e-55 | 0.42 | 1.07e-130 | -0.02 | 3.55e-01 |
| 2008 Dem Vote Percent | 0.26 | 2.18e-47 | -0.44 | 2.48e-143 | 0.05 | 5.09e-03 |
| 2008 Rep Vote Percent | -0.24 | 1.03e-41 | 0.45 | 1.32e-157 | -0.05 | 4.78e-03 |
| 2000 Rep Vote Percent | -0.19 | 7.16e-27 | 0.30 | 1.99e-64 | 0.03 | 1.01e-01 |
| 2004 Dem Vote Percent | 0.19 | 1.01e-25 | -0.35 | 2.06e-89 | -1.49e-02 | 4.07e-01 |
| 2004 Rep Vote Percent | -0.18 | 3.74e-25 | 0.36 | 1.65e-97 | 0.02 | 3.41e-01 |
| 2000 Dem Vote Percent | 0.17 | 1.95e-21 | -0.23 | 4.50e-40 | -0.03 | 9.62e-02 |
| Unemployment Rate | 0.11 | 1.42e-10 | -0.31 | 1.28e-69 | -0.02 | 2.65e-01 |
| Deaths Per Thousand | 0.09 | 1.36e-07 | 0.15 | 2.64e-16 | -0.17 | 1.31e-21 |
| Cases Per Thousand | -0.05 | 1.18e-02 | 0.25 | 5.15e-44 | -0.16 | 6.41e-20 |

We find a strong positive correlation between Democratic-leaning counties and a higher weight on PC1. In contrast, we find a strong negative correlation between counties that are Republican-leaning in the 2020 and 2016 elections and a higher weight on PC1. This is supported in the literature, where it has been found that Democratic-leaning counties and states better follow social distancing guidelines and stay-at-home orders [33]. Interestingly, we find that the strength of the correlations between PC1 weight and political leaning decreases as we use political data from 2012, 2008, 2004, and 2000.

We also find moderate positive correlations between counties with large populations and a higher weight on PC1, and moderate negative correlations between counties with a high percentage of the population over 65 and a higher weight on PC1. We do not find strong correlations between counties with a higher weight on PC1 and normalized numbers of COVID-19 cases and deaths. In any case, it is difficult to properly quantify the relationship between total and normalized COVID-19 cases and deaths and PC weight, given the uncertainty inherent in the data due to inconsistent reporting.

PC2 captures the mobility trends of US counties that increase their mobility. We find that the correlations between counties with a high weight on PC2 and the variables are the opposite of those for PC1. Thus, there is a strong positive correlation between counties that are Republican-leaning in the 2020 and 2016 elections and a higher weight on PC2. PC3 captures the average mobility trend across US counties. The correlations between counties with a high weight on PC3 and the variables are not nearly as strong.

4 Discussion and conclusions

In this study, we calculate a novel mobility indicator, which we call Time Spent at Public Places (TSPP), using SafeGraph data [3]. We quantify the change in mobility by calculating the day-by-day difference of this measure between 2019 and 2020 for each county, yielding a mobility change measure (ΔTSPP) that describes each county as a year-long time series of daily mobility change.

Confirming our first hypothesis, we find that mobility behavior during the pandemic varies spatially and temporally, but that there are three main mobility trends uncovered by our PCA analysis. PC1 captures the mobility trends of counties that drop their mobility at the onset of the pandemic and maintain reduced mobility through the end of the year, a behavior we refer to as “long-term mobility reduction”. PC2 captures the mobility trends of counties that increase their mobility in 2020, or, in other words, “no mobility reduction”. PC3 captures the mobility trends of counties that drop their mobility at the onset of the pandemic and then quickly return to normal mobility, which we call “short-term mobility reduction”. PC3 can be considered the average mobility trend across all counties. Confirming our second hypothesis, we find that mobility trends are positively spatially autocorrelated, meaning that counties that are geographically close exhibit similar mobility trends. Finally, we partly confirm our third hypothesis in that we find some correlations between the variation in mobility trends and underlying population characteristics and behavior.

While we obtained interesting results, there are certain limitations of the data relevant to this study, including sparse documentation of data collection, data completeness, bias, geographic coverage, and open access. First, although SafeGraph has gone to unprecedented lengths to make the data public, perhaps unsurprisingly for a corporate data provider, their documentation of the methods and sources used to collect device data and POI data is sparse. Detailed methods, sources of data, and ground-truth datasets are not available and thus cannot be independently evaluated [34].

Data completeness is also difficult to assess. Our mobility indicator is based on the median_non_home_dwell_time, which measures the median time that devices in the same CBG spend at public POIs included in SafeGraph’s Core Places database. SafeGraph represents the locations of over 6,300 distinct brands as POIs, and this number changes over time as new brands are added. These are chains of commercial POIs that include all major brands in the United States (McDonald’s, AMC, Macy’s, Chevrolet, Whole Foods Market). Of the brands that SafeGraph includes, they capture close to 100% of each brand’s locations [35]. About 80% of SafeGraph POIs have no brand associated, as they are single commercial locations (local restaurants, museums). It is not possible to assess the actual completeness of SafeGraph’s POIs, specifically the total number of all POIs in the US versus the total number of POIs represented in the SafeGraph Core Places dataset.

SafeGraph’s Social Distancing Metrics dataset is based on device users that make up approximately 10% of the United States population, which is significantly larger than typical household mobility surveys. Although there are concerns that the sample is not perfectly representative of the population, SafeGraph reports that their sample correlates very highly with true census populations [36]. SafeGraph finds a Pearson’s R correlation coefficient of 0.966 and a sum absolute bias of 24.77 between the real county population and the number of devices counted by SafeGraph in each county. SafeGraph finds little to no race-level, educational attainment-level, or household income-level sampling bias, with Pearson’s R correlation coefficients of 1, 0.999, and 0.997 and sum absolute biases of 3.70, 3.43, and 1.75, respectively. Code to run an independent sampling bias analysis on SafeGraph data is provided by SafeGraph [37].

We found that the geographic coverage of the data was complete at the county level, with 100% coverage: no counties were excluded from the dataset as a result of few POIs or low device counts (see Section 3.6 in S1 File). As part of SafeGraph’s Data for Academics, academic researchers have no-cost access to SafeGraph data for non-commercial work. The reliance on commercial data means that there are limited safeguards for the data, and changes to the data and data access may occur beyond our control. For example, SafeGraph recently stopped updating the Social Distancing Metrics dataset. However, SafeGraph provided ample notice, and the archived data are still available to academic researchers, ensuring the reproducibility of this study. Furthermore, we have made ΔTSPP available on this project’s GitHub repository (see Section 1 in S1 File).

Our results are only applicable to the United States; application of the same methodology to other countries has yet to be conducted. Finally, our PCA components capture ≈59% of the movement patterns, and the remaining 41% is unexplained by our approach. Additional work is needed to explain a greater percentage of the variation without significantly increasing the number of components. Future work will also focus on exploring the correlation between the PCs and additional variables, including commuting, weather, and policy guidelines. This study provides a more comprehensive and data-driven approach to examining how human mobility has changed in response to the pandemic.

Supporting information

S1 File

(PDF)

Acknowledgments

We thank the Editor and the two reviewers for their valuable feedback.

Data Availability

The data underlying the results presented in the study are available from the SafeGraph Data for Academics (https://www.safegraph.com/academics). No cost access to SafeGraph data is available for academic non-commercial work. Data derived and aggregated from the SafeGraph data is available at https://github.com/GMU-GGS-NSF-ABM-Research/Mobility-Trends. Please see the supplemental material for more details.

Funding Statement

Award 1: T.A. A.Z. H.K. Award no: 2030685 Funder: National Science Foundation Funder website: nsf.gov. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Award 2: T.A. A.Z. H.K. Award no: 2109647 Funder: National Science Foundation Funder website: nsf.gov. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

Decision Letter 0

Itzhak Benenson

12 Apr 2021

PONE-D-21-10041

Change of human mobility during COVID-19: A United States case study

PLOS ONE

Dear Dr. Anderson,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The attached review is comprehensive; please make edits in response to each reviewer's remark.

  

Please submit your revised manuscript by May 27 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Itzhak Benenson, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

  1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Acknowledgments Section of your manuscript:

This research is supported by National Science Foundation “RAPID: An Ensemble Approach to Combine Predictions from COVID-19 Simulations” grant DEB-2030685.

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

T.A. A.Z. H.K.

Award no: 2030685

Funder: National Science Foundation

Funder website: nsf.gov

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

3. We note that Figures 5, 6, 7, 9, 10 in your submission contain map images which may be copyrighted. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software (Google Maps, Street View, and Earth). For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

3a, You may seek permission from the original copyright holder of Figures 5, 6, 7, 9, 10 to publish the content specifically under the CC BY 4.0 license. 

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

3b, If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

The following resources for replacing copyrighted map figures may be helpful:

USGS National Map Viewer (public domain): http://viewer.nationalmap.gov/viewer/

The Gateway to Astronaut Photography of Earth (public domain): http://eol.jsc.nasa.gov/sseop/clickmap/

Maps at the CIA (public domain): https://www.cia.gov/library/publications/the-world-factbook/index.html and https://www.cia.gov/library/publications/cia-maps-publications/index.html

NASA Earth Observatory (public domain): http://earthobservatory.nasa.gov/

Landsat: http://landsat.visibleearth.nasa.gov/

USGS EROS (Earth Resources Observatory and Science (EROS) Center) (public domain): http://eros.usgs.gov/#

Natural Earth (public domain): http://www.naturalearthdata.com/

4. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is a well-constructed and useful paper. The authors present a robust methodology, which is essentially a large scale clustering project, and with which I have no quibbles. My comments pertain more to reconciliation of data science vs. epidemiology terminology, and the need to further caveat the data sources.

Abstract: "Mobility in 2020 for US counties can be explained as a combination of three trends"

It might be helpful to state these in the abstract.

Table 1: This is excellent and useful. I have made a nearly identical table in my own research, so glad to see this find its way into print.

Methods section 2: "Delta Measure of Public Exposure"

This is semantic: I understand the meaning of this within this context. However, as an epidemiologist, the term "exposure" has a more specific meaning, which is not adequately represented by gross mobility measures. Within epidemiology, "exposure" means direct exposure to the causative agent. Mobility is a proxy for that. The authors may consider modifying the term to avoid detracting from what it otherwise and excellent paper. Similarly, in the discussion "Long term exposure reduction" will likely raise the hackles of public health audiences. Consider "Long term mobility restriction" as a more accurate representation of what was measured.

Discussion: While the data providers have gone to unprecedented lengths to make these data public during the pandemic, as the authors note, the descriptions of methods are sparse. In addition, there are few safeguards that the data will continue to be available, or any real measures of completeness. As such, more in the discussion is warranted about the potential limitations scientific reliance on corporate free data.

2.2 "We note that this measure only includes public POIs that are captured among the 6.5 million POIs in the SafeGraph Core Places database."

This is a perfect example of my previous point. We have no idea how dynamic or complete these POI are. Granted SafeGraph has a more transparent and vibrant community of practice than most of the other providers. But still, the reliance on proprietary data has limitations that could be more clearly addressed.

Methods: Please describe geographic missingness in the dataset, e.g., arising from fewer POI or lower cell trace volume, or measurement uncertainty. What percent of US counties are covered?

"To reduce the dimensionality of R we truncate the SVD to obtain only the first K dimensions "

Please define K a little better to give a real-world sense of the truncation.

"SVD assumes that the ∆MoPE is derived from the sum of latent (individual) mobility changes."

Is it a sum or an average? The mobility measures usually report mean changes, so there is an additional assumption there.

Methods: How are the 3 latent PCs measured/differentiated in the aggregate mobility data? For example, how are health professionals identified?

Methods: Can you provide more information on SafeGraph's "footfall" metric? Does this correspond more closely to urban areas where foot traffic is more common?

Methods: "it was discovered that the only neighboring county to Fairfax City in Fairfax County,"

Virginia is an annoying case in county-city geography, in my experience. Were both city and county designations used?

Methods: "Pearson’s R coefficient "

This is a measure of linear association. Is there any a priori reason to believe the relationship to be linear?

Methods: "variety of explanatory variables, including income, political leaning, employment, and COVID-19 cases and deaths for each county "

Is this the complete list of variables? COVID cases and deaths were not consistently reported during the study period. For example, were antibody and rapid test positives included? Was repeat testing for the same individual accounted for? Despite being widely reported in data science and news media, the epidemiologic veracity of case counts is highly questionable. This should be caveated accordingly. Other journals have refused to publish results based solely on these numbers.

Figure 3 vs. 4 -- Suggest a different color scheme between graphs because casual readers may confuse the two legends since the lines are so similar.

Figure 6: I do not understand the color wheel and the colored lines extending from it.

I am curious why baseline commute times/distances, weather, and stay-at-home orders were not considered as explanatory variables?

The figure resolution and map projection were of low quality in the review PDF. I assume this will be corrected in the final version.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Nabarun Dasgupta

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Nov 2;16(11):e0259031. doi: 10.1371/journal.pone.0259031.r002

Author response to Decision Letter 0


16 Jun 2021

Manuscript ID PONE-D-21-10041

Change of human mobility during COVID-19: A United States case study

PLOSONE

We are grateful to the Editor and to the anonymous reviewer for their feedback. We have addressed all comments and indicated where in the manuscript the changes have been made, marked in blue and red font.

Our responses to the comments are as follows:

Response to Editor Comments:

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

The manuscript meets PLOS ONE’s style requirements.

2. Thank you for stating the following in the Acknowledgments Section of your manuscript:

This research is supported by National Science Foundation "RAPID: An Ensemble Approach to Combine Predictions from COVID-19 Simulations" grant DEB-2030685. We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

T.A. A.Z. H.K.

Award no: 2030685

Funder: National Science Foundation

Funder website: nsf.gov

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

There are two awards that should be acknowledged. Please update the funding statement as follows:

Award 1:

T.A. A.Z. H.K.

Award no: 2030685

Funder: National Science Foundation

Funder website: nsf.gov

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Award 2:

T.A. A.Z. H.K.

Award no: 2109647

Funder: National Science Foundation

Funder website: nsf.gov

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

3. We note that Figures 5, 6, 7, 9, 10 in your submission contain map images which may be copyrighted. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software (Google Maps, Street View, and Earth). For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We can confirm that the figures that contain map images do not contain copyrighted material.

4. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

We have ensured that the reference list is complete and correct.

Response to Reviewers Comments

1. Is the manuscript technically sound, and does the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

We have revised the manuscript and corrected any typographical and grammatical errors.

5. Review Comments to the Author

Reviewer #1: This is a well-constructed and useful paper. The authors present a robust methodology, which is essentially a large scale clustering project, and with which I have no quibbles. My comments pertain more to reconciliation of data science vs. epidemiology terminology, and the need to further caveat the data sources.

Thank you for your feedback. Please note we have itemized each comment as 5.1, 5.2, and so on to help organize our responses.

5.1. Abstract: "Mobility in 2020 for US counties can be explained as a combination of three trends". It might be helpful to state these in the abstract.

We have added a brief description of the three trends in the abstract.

5.2. Table 1: This is excellent and useful. I have made a nearly identical table in my own research, so glad to see this find its way into print.

Thank you very much.

5.3. Methods section 2: "Delta Measure of Public Exposure"

This is semantic: I understand the meaning of this within this context. However, as an epidemiologist, the term "exposure" has a more specific meaning, which is not adequately represented by gross mobility measures. Within epidemiology, "exposure" means direct exposure to the causative agent. Mobility is a proxy for that. The authors may consider modifying the term to avoid detracting from what is otherwise an excellent paper. Similarly, in the discussion "Long term exposure reduction" will likely raise the hackles of public health audiences. Consider "Long term mobility restriction" as a more accurate representation of what was measured.

Thank you for this feedback. We have renamed the Measure of Public Exposure (MoPE) and ΔMoPE to Time Spent in Public Places (TSPP) and ΔTSPP to more accurately reflect what was measured. We consider an increase in ΔTSPP as one proxy for an increase in mobility, and thus an increased risk of exposure. We have revised all parts of the paper to consistently reflect this change.

5.4. Discussion: While the data providers have gone to unprecedented lengths to make these data public during the pandemic, as the authors note, the descriptions of methods are sparse.

We have added additional details in Section 2.1, page 4 regarding SafeGraph's methods for data collection. We have added a discussion regarding SafeGraph's sparse documentation of its methods for collecting and processing the data in Section 4, page 12. See also our responses to comments 5.6 and 5.7.

5.5. In addition, there are few safeguards that the data will continue to be available, or any real measures of completeness. As such, more discussion is warranted about the potential limitations of scientific reliance on free corporate data.

As part of the SafeGraph Data for Academics, academic researchers have no-cost access to SafeGraph data for non-commercial work. We agree that there are no safeguards, since changes to data access may occur beyond our control. For example, SafeGraph recently stopped updating the Social Distancing Metrics dataset. However, the archived data is still available for academic researchers, so the median_non_home_dwell_time can be obtained from 01/01/2019 through 12/31/2020, ensuring the reproducibility of this study. We have included step-by-step instructions and the Python scripts for obtaining the data and transforming SafeGraph's median_non_home_dwell_time into the ΔTSPP on our GitHub repository (https://github.com/GMU-GGS-NSF-ABM-Research/Mobility-Trends); a simplified sketch of that transformation is given below. We have added these details in Section 2.1, page 4 and Section 4, page 13.

To overcome the potential issues with changes to data access, we have made our derived data, ΔTSPP (for each county and each day), fully available on the GitHub repository. We have clarified this in the Supplemental Material, Section 2.

We have added a discussion on data access and elaborated on the issues of relying on corporate data.

5.6. [Section] 2.2 "We note that this measure only includes public POIs that are captured among the 6.5 million POIs in the SafeGraph Core Places database."

This is a perfect example of my previous point. We have no idea how dynamic or complete these POIs are. Granted, SafeGraph has a more transparent and vibrant community of practice than most of the other providers. But still, the reliance on proprietary data has limitations that could be more clearly addressed.

We have added additional details in Section 4, pages 12 and 13 to quantify the completeness and bias of the data. See also our responses to comments 5.4 and 5.7.

5.7. Methods: Please describe geographic missingness in the dataset, e.g., arising from fewer POI or lower cell trace volume, or measurement uncertainty. What percent of US counties are covered?

We have performed additional analysis to quantify the geographic completeness of the data. We have added these details to the Discussion in Section 4, pages 12 and 13 and to the Supplemental Materials. Specifically, we have added Figure 2 to the Supplementary Materials to illustrate this coverage. See also our responses to comments 5.4 and 5.6.

5.8. "To reduce the dimensionality of R we truncate the SVD to obtain only the first K dimensions "

Please define K a little better to give a real-world sense of the truncation.

In PCA, k is the number of latent features we wish to retain; in our study, k = 3 (PC1, PC2, PC3). There was a gap in the text where we switched abruptly from k to 3 without explanation. We have clarified this in the text in Section 2.3, page 5. A brief illustration of the truncation is sketched below.

5.9. "SVD assumes that the ∆MoPE is derived from the sum of latent (individual) mobility changes."

Is it a sum or an average? The mobility measures usually report mean changes, so there is an additional assumption there.

We have corrected this in Section 2.3, page 5. SVD assumes that the true mobility is a linear combination of latent features that represent mobility change.

5.10. Methods: How are the 3 latent PCs measured/differentiated in the aggregate mobility data? For example, how are health professionals identified?

We make assumptions based on each latent feature or mobility pattern about the individuals that might contribute to these patterns. We have clarified this in Section 3.2.1., page 8.

5.11. Methods: Can you provide more information on SafeGraph's "footfall" metric? Does this correspond more closely to urban areas where foot traffic is more common?

SafeGraph identifies when a device visits a POI using a visit attribution algorithm (https://www.safegraph.com/blog/revealing-safegraphs-secret-method-for-getting-accurate-store-visits-from-gps-data). Foot traffic or footfall in this respect refers to setting foot in a POI rather than walking on foot to a POI.

5.12. Methods: "it was discovered that the only neighboring county to Fairfax City in Fairfax County,"

Virginia is an annoying case in county-city geography, in my experience. Were both city and county designations used?

Anything that has a FIPS code attached to it (counties, cities, and boroughs) is designated a county. Thus, Fairfax City is identified at the county level. This has been clarified in the text in Section 2, page 3.

5.13. Methods: "Pearson’s R coefficient "

This is a measure of linear association. Is there any a priori reason to believe the relationship to be linear?

Our hypothesis is that there is a linear relationship between the strength of the PCs in explaining county mobility and different county variables. We did not investigate non-linear relationships. We clarify this in Section 2.6, page 6.

5.14. Methods: "variety of explanatory variables, including income, political leaning, employment, and COVID-19 cases and deaths for each county "

Is this the complete list of variables? COVID cases and deaths were not consistently reported during the study period. For example, were antibody and rapid test positives included? Was repeat testing for the same individual accounted for? Despite being widely reported in data science and news media, the epidemiologic veracity of case counts is highly questionable. This should be caveated accordingly. Other journals have refused to publish results based solely on these numbers.

The complete list of variables can be found in Table 3. We have acknowledged the caveat of using COVID-19 cases and deaths in Section 3.5, page 11.

5.15. Figure 3 vs. 4 -- Suggest a different color scheme between graphs because casual readers may confuse the two legends since the lines are so similar.

We have changed the colors of Figure 3.

5.16. Figure 6: I do not understand the color wheel and the colored lines extending from it.

We have corrected Figure 6 so that the color wheel and the colored lines match.

5.17. I am curious why baseline commute times/distances, weather, and stay-at-home orders were not considered as explanatory variables?

Thank you. We will consider this in future work. We have added this to Section 4, page 13.

5.18. The figure resolution and map projection were of low quality in the review PDF. I assume this will be corrected in the final version.

We confirm that we have indeed submitted high resolution figures.

Attachment

Submitted filename: PONE-D-21-10041 Response to Reviewer and Editor.pdf

Decision Letter 1

Itzhak Benenson

19 Aug 2021

PONE-D-21-10041R1

Change of human mobility during COVID-19: A United States case study

PLOS ONE

Dear Dr. Anderson,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Academic editor: Reviewer #2 raises important methodological remarks; please react to these remarks.

Please submit your revised manuscript by Oct 03 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Itzhak Benenson, Ph.D.

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The edits and revision look good. Thanks for the attention to detail! The changes, both semantic and substantive, are adequate. This is a nice paper. I look forward to being able to cite it.

Reviewer #2: I have read the manuscript "Change of human mobility during COVID-19: A United States case study". The manuscript suggests a novel approach to identify the changes in human mobility between the years 2019 and 2020, using the SafeGraph measure of median non-home dwell time for each US county. The authors estimate the daily change in the measure between 2019 and 2020 in each county, find the principal components that explain most of the variance in the resulting time series, cluster the different counties according to their principal components, conduct a spatial autocorrelation analysis between the counties, and a general correlation analysis of socio-demographic variables. The manuscript is concise, clear, and presents a solid framework for analyzing the change in mobility across the US due to the COVID-19 pandemic. However, the manuscript includes major pitfalls that should be addressed in order for it to be published.

I divide my comments into general comments and specific comments.

General

1. In figure 2, it seems that TSPP in the summer of 2019 is lower than TSPP in January 2020. Although the authors do relate to the issue of the seasonal anomalies in the winters of both years, it seems highly unlikely that TSPP would be higher in January of 2020 than in the summer of 2019. I suggest the authors run another check over the calculation to catch any mistakes. However, if no mistakes are found, I suggest the authors provide accurate data regarding the weather in each county and show that the weather in each county was indeed extreme. Since counties are not equal in size, a bias towards more dense areas exists, and extreme weather conditions in one of the metropolitan areas can affect the results. However, because no data from previous years exist, and 2019 is presumed to be a baseline year, it is crucial to validate the weather conditions (for example, by using NOAA's GHCN data) against the TSPP in order to explain the anomalies in TSPP.

2. Regarding the principal component analysis – what led to the decision to choose only 3 PCs? It is important to show the distribution of the contribution of the PCs to the variability, and then explain what led to the decision to choose only the first three.

3. The choice of k = 3 in k-means clustering cannot be explained using the 3 significant PCs. I suggest the authors find a better explanation for using k = 3, such as the k-means elbow method or other methods that exist for hierarchical clustering.

Specific

1. In Definition 1, j should run between -3 and 3, in order to show that the moving average is centered on the middle day of the 7-day window.

2. In Definition 2, it should be noted that delta-TSPP is a per-county measure, and therefore it should be parameterized and indexed as well.

3. In figure 2, an error bar/boxplot should be added for each day, in order to get a sense of the distribution of TSPP in the different counties.

4. Section 2.5, page 6: what state is Fairfax County in?

5. Figure 6 is unreadable. I would remove it from the paper as it confuses the reader, and it is somewhat redundant given Figure 5.

6. I suggest adding the share of the population over 60 as one of the socio-demographic variables in the correlation analysis – it can help explain the positive coefficient of deaths per thousand in PC1, which is counterintuitive.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Nabarun Dasgupta, MPH, PhD

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Nov 2;16(11):e0259031. doi: 10.1371/journal.pone.0259031.r004

Author response to Decision Letter 1


7 Oct 2021

Manuscript ID PONE-D-21-10041R1

Change of human mobility during COVID-19: A United States case study

PLOSONE

We are grateful to the Editor and to the two reviewers for their feedback. We have addressed all comments and indicated where in the manuscript the changes have been made, marked in blue and red font.

Our responses to the comments are as follows (>>):

Reviewer #1

The edits and revision look good. Thanks for the attention to detail! The changes, both semantic and substantive, are adequate. This is a nice paper. I look forward to being able to cite it.

>>Thank you.

Reviewer #2

I have read the manuscript "Change of human mobility during COVID-19: A United States case study". The manuscript suggests a novel approach to identify the changes in human mobility between the years 2019 and 2020, using the SafeGraph measure of median non-home dwell time for each US county. The authors estimate the daily change in the measure between 2019 and 2020 in each county, find the principal components that explain most of the variance in the resulting time series, cluster the different counties according to their principal components, conduct a spatial autocorrelation analysis between the counties, and a general correlation analysis of socio-demographic variables. The manuscript is concise, clear, and presents a solid framework for analyzing the change in mobility across the US due to the COVID-19 pandemic. However, the manuscript includes major pitfalls that should be addressed in order for it to be published.

I divide my comments into general comments and specific comments.

General

In figure 2, it seems that TSPP in the summer of 2019 is lower than TSPP in January 2020. Although the authors do relate to the issue of the seasonal anomalies in the winters of both years, it seems highly unlikely that TSPP would be higher in January of 2020 than in the summer of 2019. I suggest the authors run another check over the calculation to catch any mistakes. However, if no mistakes are found, I suggest the authors provide accurate data regarding the weather in each county and show that the weather in each county was indeed extreme. Since counties are not equal in size, a bias towards more dense areas exists, and extreme weather conditions in one of the metropolitan areas can affect the results. However, because no data from previous years exist, and 2019 is presumed to be a baseline year, it is crucial to validate the weather conditions (for example, by using NOAA's GHCN data) against the TSPP in order to explain the anomalies in TSPP.

>>Based on NOAA's GHCN data, it is clear that the US as a whole experienced a mild winter season (January - April 2020) with warmer average temperatures and roughly 50% less snow on the ground than in the 2019 winter season. This, in combination with panic-buying behavior at the beginning of the pandemic, helps to explain the high mobility in January, February, and early March. We have updated the text and added Figure 3 (snow depth) and Figure 4 (average temperature) to Section 3.1 of the Supplemental Materials to present these findings. Any further analysis to explain the mobility data based on weather conditions goes beyond the objective and scope of the study.

Regarding the principal component analysis – what led to the decision to choose only 3 PCs? It is important to show the distribution of the contribution of the PCs to the variability, and then explain what led to the decision to choose only the first three.

>>The explained variance for each PC is presented in Figure 5 and Table 1 of Section 3.3 of the Supplemental Materials. We use the elbow method to determine that 3 PCs are sufficient.

The choice of k = 3 in k-means clustering cannot be explained using the 3 significant PCs. I suggest the authors find a better explanation for using k = 3, such as the k-means elbow method or other methods that exist for hierarchical clustering.

>>We use the k-means clustering as a visualization tool so that we can observe which counties have a similar weighted combination of the three PCs. Thus, we arbitrarily choose a value of k = 3. Although the silhouette method identifies k = 6 as a better fit (see Figure 6 in Section 3.4 of the Supplementary Materials), it is not as effective visually (see Figure 7 in the Supplementary Materials).

Specific

In Definition 1, j should run between -3 and 3, in order to show that the moving average is centered on the middle day of the 7-day window.

>>The reviewer is correct. In our implementation we indeed use a centered moving average of seven days. Definition 1 has been corrected so that the index j runs between -3 and 3, and we have clarified that the TSPP of a time series is only defined between the fourth and the fourth-to-last elements of the time series.

In Definition 2, it should be noted that delta-TSPP is a per-county measure, and therefore it should be parameterized and indexed as well.

>>The parameter R denotes the spatial region, such as a county. This was already parameterized in the previous manuscript. No changes were made to Definition 2.

In figure 2, an error bar/boxplot should be added for each day, in order to get a sense of the distribution of TSPP in the different counties.

>>We have added Figure 1 and Figure 2, presenting the boxplots for each week to the Supplementary Materials.

Section 2.5, page 6: what state is Fairfax County in?

>>We have updated the text with this information to clarify that Fairfax County is part of Virginia.

Figure 6 is unreadable. I would remove it from the paper as it confuses the reader, and it is somewhat redundant given Figure 5.

>>We have moved this figure to Section 3.5 of the Supplemental Materials.

I suggest adding the share of the population over 60 as one of the socio-demographic variables in the correlation analysis – it can help explain the positive coefficient of deaths per thousand in PC1, which is counterintuitive.

>>We have added the percentage of the population over 65 as one of the correlation variables.

Attachment

Submitted filename: PONE-D-21-10041R1 Response to Reviewers.pdf

Decision Letter 2

Itzhak Benenson

12 Oct 2021

Change of human mobility during COVID-19: A United States case study

PONE-D-21-10041R2

Dear Dr. Anderson,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Itzhak Benenson, Ph.D.

Academic Editor

PLOS ONE


Acceptance letter

Itzhak Benenson

25 Oct 2021

PONE-D-21-10041R2

Change of human mobility during COVID-19: A United States case study

Dear Dr. Anderson:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Itzhak Benenson

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File

    (PDF)

    Attachment

    Submitted filename: PONE-D-21-10041 Response to Reviewer and Editor.pdf

    Attachment

    Submitted filename: PONE-D-21-10041R1 Response to Reviewers.pdf

    Data Availability Statement

    The data underlying the results presented in the study are available from the SafeGraph Data for Academics (https://www.safegraph.com/academics). No cost access to SafeGraph data is available for academic non-commercial work. Data derived and aggregated from the SafeGraph data is available at https://github.com/GMU-GGS-NSF-ABM-Research/Mobility-Trends. Please see the supplemental material for more details.

