Abstract
Political responses to the COVID-19 pandemic led to changes in city soundscapes around the globe. From March to October 2020, a consortium of 261 contributors from 35 countries brought together by the Silent Cities project built a unique collection of soundscape recordings to document local acoustic changes in urban areas. We present this collection here, along with metadata including observational descriptions of the local areas from the contributors, open-source environmental data, open-source confinement levels and computed acoustic descriptors. We performed a technical validation of the dataset using statistical models run on a subset of manually annotated soundscapes. Results confirmed the large-scale usability of ecoacoustic indices and automatic sound event recognition in the Silent Cities soundscape collection. We expect this dataset to be useful for research in the multidisciplinary field of environmental sciences.
Subject terms: Urban ecology, Environmental impact
Background & Summary
In response to the rapid spread of the coronavirus disease 2019 (COVID-19) around the world, governments of many countries adopted physical distancing measures in early 2020, restricting individual travel with varying stringency and suspending many work and leisure activities deemed ‘non-essential’1–3. Incidentally, these public health policy decisions opened a window of opportunity for many environmental scientists to investigate the effects of such a reduction in human activity on ecosystems at multiple spatiotemporal scales4–8.
The modification of soundscapes, especially in urban and peri-urban areas, was among the most significant environmental changes observed during this period9–14. The sudden decrease in individual travel and motorized transport of people and goods shaped extraordinary soundscapes in most cities of the world for a few weeks. This revealed the richness of animal sounds in urban areas, previously hidden by a multitude of anthropogenic sounds. Such a change, directly perceptible by the population, even raised interest outside the academic sphere, as reflected in numerous articles in the general press. Among the thousands of press articles on the subject, we will particularly mention the interactive publications produced by The New York Times (see, for example: The Coronavirus Quieted City Noise. Listen to What’s Left; or: The New York City of Our Imagination).
From an academic point of view, several studies have already been carried out on these “soundscapes of a confined world”, at different scales and in different types of spaces and territories (sub-continents15, countries16–21, regions22–25, cities18,26–33, neighborhoods29,34, protected natural areas35, semi-anthropized environments36–38, tourist sites39). Among these studies, some benefited from sensor networks predating the COVID-19 crisis, mobilizing for example underwater acoustics and/or seismic monitoring networks16,22,40, permanent noise pollution monitoring networks in urban environments26,41,42, or devices installed for pre-existing research projects43. Beyond these physical measurement approaches, several studies have also investigated individual and subjective perceptions of changes in soundscape composition. More specifically, the perceived proportion between natural and anthropogenic events in the soundscape is regularly raised in investigations that more broadly address the changes induced by different periods of population containment on experiential relationships to nature21,29,44.
In this paper, we present a global acoustic dataset45, collected between March and October 2020 by 261 contributors, at 317 sites distributed in 35 countries (Fig. 1). Recordings were primarily collected using Open Acoustic Devices AudioMoth46 or Wildlife Acoustics Song Meter SM4 (www.wildlifeacoustics.com) programmable recorders, which are widely used within the professional and amateur naturalist communities. This dataset is unique in its international dimension, collaborative construction and open access availability. The acoustic data are accompanied by contextual information, such as climate classification and descriptions of the surrounding environment, offering a more comprehensive understanding of their significance and implications. In addition, we provide a set of descriptors based on ecoacoustic indices47 and on automatic recognition of sound categories using a pretrained deep neural network48. These descriptors were subsequently validated by collecting expert annotations on a small subset of the dataset, from which we derived statistical models to demonstrate their usability.
Methods
Silent Cities is a data collection project that involved programmable audio recorders worldwide. The global scale of the project required us not only to gather acoustic recordings, but also to contextualize them. We first describe the data collection procedures, the contributors’ network and contextual information related to the recording sites, such as location, urban density, climate classification and governmental policies related to human population containment in response to COVID-19. Next, we describe the processing of acoustic measurements computed on all recordings, including ecoacoustic indices, automatic sound event recognition, and voice activity detection.
Data collection
Recording protocol
On March 16, 2020, the French government announced the upcoming first containment of the population. A few days later, a first version of the Silent Cities protocol was circulated to professional networks. Feedback was received from researchers as well as from journalists, artists and biological conservation practitioners interested in contributing. In response, a more inclusive version was proposed, opening up the possibility of using different equipment and sampling efforts while preserving the requirements for robust statistical analyses (https://osf.io/m4vnw/). This second and final version of the protocol was shared on March 25, 2020 and is described below.
Each contributor provided their own recording equipment. To homogenize the collection, recording devices were configured to obtain a 1-minute recording every 10 minutes on a daily cycle schedule, with the sampling rate set at 48 kHz. All recorders were to be set to Coordinated Universal Time (UTC+00) with .wav as the output format. To ensure comparable data, the use of a Song Meter SM4 (Wildlife Acoustics) or an AudioMoth (Open Acoustic Devices), the two most popular programmable recorders at the time, was recommended. However, any device capable of high-quality recording under the requested configuration was accepted. To anticipate the return of high levels of anthropogenic sound after the end of containment measures, the gain was to be set at “low” for the AudioMoth and at 31 dB for the SM4 (gain at 5 dB and preamplification at 26 dB). The final dataset includes 216 sites monitored by an AudioMoth, 47 by an SM4 and 54 by another device.
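For reference, the sketch below generates the recording schedule implied by this protocol and checks a site’s file timestamps against it. It is a minimal illustration, not part of the project code; the function names and the 5-second clock-drift tolerance are our own assumptions.

```python
from datetime import datetime, time, timedelta

def expected_start_times(day):
    """The 144 expected recording start times (UTC) for one day:
    a 1-minute file every 10 minutes, around the clock."""
    midnight = datetime.combine(day, time(0, 0))
    return [midnight + timedelta(minutes=10 * k) for k in range(144)]

def follows_schedule(file_starts, day, tolerance_s=5):
    """True if every file start time for `day` falls within
    `tolerance_s` seconds of a slot in the expected schedule."""
    expected = expected_start_times(day)
    return all(
        min(abs((fs - e).total_seconds()) for e in expected) <= tolerance_s
        for fs in file_starts
    )
```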
The sampling duration varied between sites. The protocol recommended continuing to record for at least two weeks after the end of the total city shutdown and the restoration of “normal” activities. However, the return to “normal” activity extended well beyond initial predictions as the magnitude of the pandemic progressively became clear. As containment measures were being lifted in many countries during the summer, the acoustic sampling was officially ended on July 31, while allowing contributors to continue collecting data after this date based on local situations. In total, the recordings collection covers the period from March 16 to October 31, 2020, with the highest number of recordings between April and July (see Fig. 1d).
Originally, contributors were able to choose among three levels of sampling effort based on their ability to record during the entire or partial duration of the project. Hereafter, we refine the definition of those levels to better fit the diversity of recording profiles represented in the final dataset:
expert - The daily cycle schedule, duration of files and sampling rate were set according to the recommendations, and the duration of the sampling period was at least two months;
modified - The parameters are set as recommended but the sampling period is less than two months, or some parameters such as the file duration, the sampling rate or the daily cycle schedule differ (e.g. a recording every 3 hours), while maintaining a fixed recording pattern throughout the sampling period;
opportunistic - All other sites, which do not show any fixed recording pattern.
The expert protocol was applied at 228 sites, while the modified and opportunistic protocols were followed at 72 and 17 sites, respectively.
International contributor network
The dataset45 results from the collaborative work of 261 international contributors from various professional fields: 182 are academics, 37 are conservation practitioners, 12 are artists and 30 do not recognize themselves in the three previous groups. An Open Science Framework (OSF) project45 was created to organize the data collection and guarantee its open access with no restrictions. In addition, Framaforms (https://framaforms.org/abc/fr/) was used to collect metadata about sites and contributors from the consortium.
Site descriptions
The containment of a large number of citizens worldwide restricted the possible locations of the recorders. Contributors deployed their recorder on private land or a balcony at their residence (example in Fig. 2a). We encouraged those living in (peri-)urban areas to participate, even though recordings were also collected in rural areas. The soundscape recordings were collected from 317 sites located in or around 197 cities in 35 countries (see Fig. 1). In order to protect citizens’ privacy, the exact coordinates of the sites remain unknown; site locations were instead based on the coordinates of the corresponding cities and approximate neighborhoods. The sites cover four of the five climates defined by the Köppen climate classification49, with a majority of sites located in temperate and dry climates and a spatial sampling in favor of the European and American continents (see Fig. 1). For each site, we extracted information about the surrounding land cover (more specifically the percentage of built-up and tree cover within a 1 km radius buffer around the sites; 100 m resolution50), human footprint (from 0 to 50, with the lowest score depicting the least human influence; 1 km resolution51,52), and population density (number of inhabitants per square kilometer; 1 km resolution53) to document the degree of urbanization and human impact on the landscapes encompassing the recordings. In addition, contributors described in a few sentences the surroundings/context of their site. Thanks to the open data available on https://aa.usno.navy.mil/data/AltAz, we also extracted for each recording site the altazimuth coordinates of the Moon and Sun as well as the moon phase for each 10-second time interval during the days when soundscapes were collected. These data may prove important for analyses of temporal soundscape dynamics. Finally, containment measures3 per country and date, summarized by the University of Oxford in the Oxford COVID-19 Government Response Tracker dataset, were downloaded from the web portal https://ourworldindata.org/grapher/stay-at-home-covid. These stay-at-home requirements are organized into four levels:
0 - No measures;
1 - Recommended not to leave home;
2 - Not allowed to leave home, with exceptions for daily exercise, grocery shopping, and other activities considered as essential;
3 - Not allowed to leave home, with rare exceptions (e.g. allowed to leave only once every few days, or only one person at a time).
Due to the limited data collected during the strictest containment period (level 3, see Fig. 1c), we combined data from the two periods when leaving home was not permitted (levels 2 and 3) when performing the technical validation.
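As an illustration, the snippet below joins such a stay-at-home table with a recordings table by country and date, and collapses levels 2 and 3 as done for the technical validation. The file names and column headers are hypothetical, since the exact headers depend on the export used.

```python
import pandas as pd

# Hypothetical headers; the downloaded CSV exposes country, date and
# the 0-3 stay-at-home index under export-specific column names.
conf = pd.read_csv("stay-at-home-covid.csv", parse_dates=["date"])

# Collapse the two "not allowed to leave home" levels (2 and 3).
conf["level_grouped"] = conf["level"].replace({3: 2})

recs = pd.read_csv("recordings.csv", parse_dates=["date"])  # site, country, date
merged = recs.merge(conf[["country", "date", "level_grouped"]],
                    on=["country", "date"], how="left")
```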
Acoustic measurements
All computations described here were performed with open-source packages or code from GitHub, including scikit-maad (v1.4)54,55, librosa56 and pytorch57. The analysis code used to prepare this dataset is available for reference at https://github.com/brain-bzh/SilentCities.
Preprocessing audio
Audio preprocessing was divided into two steps. First, the file name, sample rate, date and relative sound pressure level were extracted from each audio recording. Then, each file (n = 2,701,378) was divided into 10-second segments (n = 16,252,373) in order to have a meaningful duration for both acoustic index calculation and automatic sound event recognition. The sampling rate of the audio segments was homogenised at 48 kHz for acoustic index calculation and resampled to 32 kHz for automatic sound event recognition. For acoustic indices, the signals were filtered using a bandpass filter from 100 Hz to 20 kHz to remove low-frequency electronic noise inherent to some recorders.
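A minimal sketch of these two preprocessing streams is given below, using librosa and scipy. It is our illustration rather than the project code; the function name and the filter order are assumptions.

```python
import librosa
from scipy.signal import butter, sosfiltfilt

SEG_S = 10  # segment duration in seconds

def preprocess(path):
    """Return 10-s segments in the two streams described above:
    48 kHz band-passed audio for acoustic indices, and unfiltered
    32 kHz audio for sound event recognition."""
    y, _ = librosa.load(path, sr=48000, mono=True)

    # Indices stream: 100 Hz - 20 kHz band-pass to remove
    # low-frequency electronic noise
    sos = butter(4, [100, 20000], btype="bandpass", fs=48000, output="sos")
    y_idx = sosfiltfilt(sos, y)

    # Sound event recognition stream: resample to 32 kHz
    y_ser = librosa.resample(y, orig_sr=48000, target_sr=32000)

    cut = lambda sig, sr: [sig[i:i + SEG_S * sr]
                           for i in range(0, len(sig) - SEG_S * sr + 1, SEG_S * sr)]
    return cut(y_idx, 48000), cut(y_ser, 32000)
```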
Acoustic indices calculation
Acoustic diversity indices aim to summarize the overall complexity of an acoustic recording in a single mathematical value. Numerous acoustic indices have been previously proposed47,58,59, considering the time, frequency and/or amplitude dimensions of the recorded sound wave. We selected and calculated eight indices on all recordings; these indices were chosen based on their complementarity and/or wide representation in the literature (a minimal computation sketch for two of them follows the list):
dB represents the relative acoustic energy of a signal;
dB Full Scale or dBfs represents the acoustic energy of a signal where the RMS value of a full-scale sine wave is defined as 0 dBfs60;
Acoustic Complexity Index or ACI61 measures the frequency modulation over the time course of the recordings. The value is calculated on a spectrogram (amplitude per frequency per time). ACI is described as sensitive to highly modulated sounds, such as bird song, and less affected by constant sounds, such as background noise;
Activity or ACT62 corresponds to the fraction of values in the noise-reduced decibel envelope that exceed the threshold of 12 dB above the noise level. This noise level was estimated for each site by seeking the audio file yielding the minimum dB value;
Bioacoustic index or BI63 measures the area under the frequency spectrum (amplitude per frequency) above a threshold defined as the minimum amplitude value of the spectrum. This threshold represents the limit between what can be considered acoustic activity (above threshold) and what could be considered background noise (under threshold);
Entropy of the Average Spectrum or EAS62 is a measure of the ‘concentration’ of mean energy within the mid-band of the mean-energy spectrum;
Entropy of the Spectrum of Coefficients of Variation or ECV62 is derived in a similar manner to EAS except that the spectrum is composed of coefficients of variation, defined as variance divided by the mean of the energy values in each frequency bin;
Entropy of the Spectral Peaks or EPS62 is defined as a measure of the evenness or ‘flatness’ of the maximum-frequency spectrum, maximal frequencies being measured along the time of the recording. A recording with no acoustic activity should show a low EPS value, as all spectral maxima are low and constant over time;
Normalized Difference Soundscape Index or NDSI64 measures a ratio between biophony and anthropophony. The value of this index is calculated on a spectrogram and varies between −1, meaning the entire acoustic energy of the recording is concentrated under the frequency threshold of 2 kHz and attributed to anthropophony only, and +1, meaning the entire acoustic energy of the recording is concentrated above the frequency threshold and attributed to biophony only.
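To make two of these definitions concrete, here is a minimal numpy/scipy sketch of dBfs and NDSI as described above. The band edges and Welch parameters are illustrative assumptions, not the exact values used to build the dataset.

```python
import numpy as np
from scipy.signal import welch

def dbfs(y):
    """dB full scale: RMS level relative to a full-scale sine wave
    (0 dBfs), whose RMS is 1/sqrt(2) for audio normalized to [-1, 1]."""
    rms = np.sqrt(np.mean(y ** 2))
    return 20 * np.log10(rms * np.sqrt(2) + 1e-12)

def ndsi(y, sr=48000, split_hz=2000, fmax=11000):
    """NDSI = (biophony - anthropophony) / (biophony + anthropophony),
    with spectral energy below `split_hz` counted as anthropophony
    and energy between `split_hz` and `fmax` as biophony."""
    f, pxx = welch(y, fs=sr, nperseg=4096)
    anthro = pxx[(f >= 100) & (f < split_hz)].sum()
    bio = pxx[(f >= split_hz) & (f <= fmax)].sum()
    return (bio - anthro) / (bio + anthro + 1e-12)
```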
Manual soundscape description
In order to have a more thorough description of the recorded soundscapes, some contributors manually performed sound identification on a subset of their recordings. Two non-consecutive days of recordings were randomly selected for each site, and each one-minute audio file recorded at the beginning of each hour was analysed (i.e. a total of 48 1-min files). Using software dedicated to sound analysis (e.g. Audacity: https://www.audacityteam.org/, Sonic Visualiser: https://www.sonicvisualiser.org/, and Kaleidoscope: https://www.wildlifeacoustics.com/uploads/user-guides/Kaleidoscope-User-Guide.pdf), contributors were to (i) listen to and view spectrograms of the recordings, (ii) estimate the percentage of time occurrence (0%, 1-25%, 25-50%, 50-75% and 75-100%) of geophonic, biophonic, and anthropophonic events in each audio file, and (iii) provide more information about the source/type (e.g. geophony: wind, rain and river; biophony: birds, mammals and insects; anthropophony: car, plane and music) of each event. They further indicated the strength/intensity (on a scale from 0 to 3) of the identified geophonic and anthropophonic events and provided for each biophonic event the number of different song/call/stridulation types visible on the spectrogram. Scoring per recording was associated with a confidence level on a scale from 1 to 5 (see Table 3 for an example of the identification table, inspired by the protocol proposed in65).
Table 3.
Variable | Definition | Possible value and range |
---|---|---|
Geophony_TempLevel | range of occupancy | 0%/1-25%/25-50%/50-75%/75-100% |
Wind | strength | 0/1/2/3 |
Rain | strength | 0/1/2 |
Wave | strength | 0/1/2 |
Thunder | strength | 0/1 |
Biophony_TempLevel | range of occupancy | 0%/1-25%/25-50%/50-75%/75-100% |
Bird | range of song types number | 0/1-3/4-6/7-8/9-11/>11 |
Amphibian | range of song types number | 0/1-3/4-6/7-8/9-11/>11 |
Insect | range of song types number | 0/1-3/4-6/7-8/9-11/>11 |
Mammal | range of song types number | 0/1-3/4-6/7-8/9-11/>11 |
Reptile | range of song types number | 0/1-3/4-6/7-8/9-11/>11 |
Anthropophony_TempLevel | range of occupancy | 0%/1-25%/25-50%/50-75%/75-100% |
Walking | presence/absence | 0/1 |
Cycling | presence/absence | 0/1 |
Beep | presence/absence | 0/1 |
Car | sound intensity | 0/1/2 |
Car honk | presence/absence | 0/1 |
Motorbike | sound intensity | 0/1/2 |
Plane | presence/absence | 0/1 |
Helicopter | presence/absence | 0/1 |
Boat | presence/absence | 0/1 |
Other_motors | sound intensity | 0/1/2 |
Shoot | presence/absence | 0/1 |
Bell | presence/absence | 0/1 |
Talking | presence/absence | 0/1 |
Music | presence/absence | 0/1 |
Dog bark | presence/absence | 0/1 |
Kitchen sounds | presence/absence | 0/1 |
Rolling shutter | presence/absence | 0/1 |
Confidence level | low (0) to high confidence (5) | 0/1/2/3/4/5 |
A total of 1351 minutes of sound were manually described from 30 sites. Contributors from Europe (Austria, Czech Republic, France, Germany, Ireland, Poland, Portugal, Serbia, and United Kingdom), the Americas (Canada, Colombia, Mexico, and United States of America), and Australia participated in the manual sound identification process. The number of audio files described varied between participants (min: 9, max: 96, median: 48, mean: 45). Most audio files manually analyzed were recorded using an AudioMoth (19 sites) or an SM4 (8 sites).
Recordings were dominated by geophonic, biophonic and anthropophonic events (i.e. time occurrence >75% within 1-min files) in 20, 34 and 51% of the 1351 minutes of sound, respectively. The most detected geophonic sounds were wind (26% of the total number of records, including 76 records with strong wind intensity) and rain (12%). Bird calls (63%) and insect stridulations (16%) were the most encountered biophonic sounds. Around one third of the recordings with bird calls contained at least four different bird call types. Noise from cars (61%) and people talking (26%) accounted for most of the anthropophonic sounds.
Automatic sound event recognition
Automatic sound event recognition (SER) was essential given the immense volume (around 20 terabytes) of the Silent Cities dataset. We adopted the AudioSet ontology and dataset66, which covers a wide range of everyday sounds, and used PANNs (pretrained audio neural networks) pretrained on the full AudioSet data (available online: https://github.com/qiuqiangkong/audioset_tagging_cnn). The choice of a pretrained model was driven by its generality: having been exposed to a wide range of sounds, it is suitable for recognizing various audio events. We employed a zero-shot inference approach, applying the pretrained model directly to the entirety of the Silent Cities recordings without additional training or fine-tuning. By doing so, we benefited from the model’s generalization capabilities and avoided the time-consuming process of manual annotation. To categorize the diverse audio events within our dataset, we leveraged the AudioSet ontology and made the necessary adaptations. Specifically, we classified the sounds into three main types: anthropophony (sounds produced by human activities), biophony (sounds originating from living organisms), and geophony (sounds resulting from non-living sources like weather or geological activity). The details of sound event grouping (i.e. audio tagging types) and corresponding labels are presented in Table 4 to provide clarity and consistency in the classification process. This grouping was also designed to match the categories used in the manual annotation described in the previous section.
Table 4.
Final tag name | Corresponding labels in AudioSet Ontology | Category |
---|---|---|
Wind | Wind | Geophony |
Rain | Rain | |
River | Stream/Waterfall | |
Wave | Ocean | |
Thunder | Thunderstorm | |
Bird | Bird vocalization, bird call, bird song/Pigeon, dove/Crow/Owl/Gull, seagull | Biophony |
Amphibian | Frog | |
Insect | Insect | |
Mammal | Rodents, rats, mice/Canidae, dogs, wolves | |
Reptile | Snake | |
Walking | Run/Walk, footsteps | Anthropophony |
Cycling | Bicycle/Bicycle bell | |
Beep | Reversing beeps | |
Car | Car passing by/Tire squeal | |
Car honk | Vehicle horn, car horn, honking | |
Motorbike | Motorcycle | |
Plane | Aircraft engine/Fixed-wing aircraft, airplane | |
Helicopter | Helicopter | |
Boat | Motorboat, speedboat/Ship/Sailboat, sailing ship | |
Other motors | Traffic noise, roadway noise | |
Shoot | Gunshot, gunfire | |
Bell | Chime/Jingle bell/Cowbell/Church bell/Change ringing (campanology) | |
Talking | Speech/Hubbub, speech noise, speech babble/ | |
Music | Music | |
Dog bark | Dog | |
Rolling shutter | Power windows, electric windows | |
Kitchen sounds | Door/Cupboard open or close/Drawer open or close/Dishes, pots, and pans/Cutlery, silverware/Chopping (food)/Sink (filling or washing)/Water tap, faucet/Kettle whistle/Microwave oven/Blender | |
Each tag is computed as the maximum probability output by the pretrained network among the corresponding AudioSet labels. Finally, the three tags Anthropophony, Geophony and Biophony are computed as the maximum tag probability in each category.
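The following sketch illustrates this two-level max-pooling on the per-segment AudioSet probabilities. The mapping shown is a hypothetical excerpt of Table 4, and the function name is our own.

```python
# Hypothetical excerpt of the Table 4 mapping: AudioSet class name -> final tag.
TAG_TO_LABELS = {
    "Bird": ["Bird vocalization, bird call, bird song", "Pigeon, dove",
             "Crow", "Owl", "Gull, seagull"],
    "Car": ["Car passing by", "Tire squeal"],
    "Wind": ["Wind"],
}
CATEGORY = {"Bird": "Biophony", "Car": "Anthropophony", "Wind": "Geophony"}

def pool_tags(clipwise_probs, class_names):
    """Collapse the 527 AudioSet class probabilities of one 10-s segment
    into final tags (max over member labels), then into the three
    categories (max over member tags)."""
    index = {name: i for i, name in enumerate(class_names)}
    tags = {tag: max(clipwise_probs[index[lbl]] for lbl in labels)
            for tag, labels in TAG_TO_LABELS.items()}
    categories = {}
    for tag, p in tags.items():
        cat = CATEGORY[tag]
        categories[cat] = max(categories.get(cat, 0.0), p)
    return tags, categories
```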
Voice activity detection
As many recordings in Silent Cities were made at home (e.g. on a balcony) during periods of containment, human voices are likely to be heard and speakers may be easily identified. In order to prevent issues related to privacy, we identified audio segments containing speech and only shared in open access the audio segments without speech. Voice activity detection was conducted using a general purpose voice activity detector (GP-VAD) pretrained on noisy, natural speech recordings in the wild67 (available online: https://github.com/RicherMans/GPV). We applied the GP-VAD to a subset of 250,000 one-minute recordings (approx. 24 weeks of audio). Detections on this subset were considered as ground truth speech labels, which we used as a reference to detect speech in the entirety of the Silent Cities dataset, for which we have a weak speech label from the AudioSet SER (described in the previous paragraph). More precisely, we used the GP-VAD predictions on the subset to estimate a receiver operating characteristic curve; by setting the true positive rate to detect 75% of speech recordings, we obtained an average false positive rate of 34% when using the AudioSet SER. The corresponding threshold was applied to the raw probability from the AudioSet SER on the entirety of the dataset, which eventually resulted in the rejection of 2,868,098 10-second audio segments, representing approximately 18% of the dataset.
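The threshold selection can be sketched as follows with scikit-learn, assuming binary GP-VAD labels and AudioSet speech probabilities for the annotated subset; variable names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_curve

def speech_threshold(gpvad_labels, audioset_speech_probs, target_tpr=0.75):
    """Pick the AudioSet speech-probability threshold that recovers
    `target_tpr` of the segments the GP-VAD flagged as speech; the
    corresponding false positive rate is read off the same curve."""
    fpr, tpr, thresholds = roc_curve(gpvad_labels, audioset_speech_probs)
    i = int(np.searchsorted(tpr, target_tpr))  # first point reaching the target
    return thresholds[i], fpr[i]

# Applied to the full dataset: reject every 10-s segment whose AudioSet
# speech probability exceeds the selected threshold.
```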
Data Records
The dataset45 comprises the entire collection of acoustic recordings in Free Lossless Audio Codec (FLAC) format and associated metadata spread across several Comma Separated Value (CSV) tables (see Table 1). In order to protect privacy, only the preprocessed 10-second audio files with no speech identified are in direct open access on the OSF website (10.17605/OSF.IO/H285U).
Table 1.
Name | Description | Type | Number of files |
---|---|---|---|
Collection of acoustic recordings | Preprocessed 10-second audio files from soundscape recordings collected for each site (compressed in tar.gz archives) | FLAC | 16,252,373 |
Glossary | Definitions of table elements | csv | 1 |
dB | List of readable files uploaded by the contributors and their dB level (archived in a single zip file) | csv | 317 |
Site | Information about each site, including the contributors’ descriptions of the recorder (e.g. type and serial number), the location (e.g. description of the surrounding area, city), and the containment measures in place at the time of deployment. Also contains the metadata describing the landscape (e.g. population density, climate) corresponding to the cities of the dataset, the extracted information about the protocol used (e.g. type, sampling rate, file duration), and the amount of data collected | csv | 1 |
ConfinementLevels | For each country and date covered by the acoustic collection, displays information about the levels of “stay-at-home requirements” according to the dataset built by the University of Oxford | csv | 1 |
SunMoon | Information about the sun and moon azimuth and altitude for the dates and times covered by the Silent Cities dataset, with a 10-second increment, for each city (197 csv files in a zip file) | csv | 197 |
AcousticMeasurements | List of preprocessed 10-second acoustic files and associated calculations of acoustic indices and categories of automatic sound event (all csv files compressed in a single zip file) | csv | 317 |
AcousticMeasurements_nospeech | Same as AcousticMeasurements but only for recordings without speech (all csv.gz files in a single zip file) | csv.gz | 317 |
ManualIdentification | Sound event identification made by contributors on a subsample of the original 1-min recordings | csv | 1 |
AverageCompleteTable | For each unique site at a unique date and hour, averaged values of acoustic indices and automatic sound event recognition categories. This table also includes the corresponding Site, SunMoon, and ConfinementLevels information. Finally, given the original recording date and time in UTC+0 and the associated timezone, local date and time information was calculated (see the sketch after this table) | csv | 1 |
AverageCompleteTable_nospeech | Same as AverageCompleteTable but the averaged values are only calculated on speech-filtered subsample of the acoustic collection | csv | 1 |
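The local-time derivation mentioned for AverageCompleteTable can be reproduced along these lines with pandas; the column names are hypothetical.

```python
import pandas as pd

# Hypothetical columns: a UTC timestamp per averaged row and an IANA
# timezone string per site (e.g. "Europe/Paris").
df = pd.DataFrame({
    "datetime_utc": pd.to_datetime(["2020-04-01 06:00:00"], utc=True),
    "timezone": ["Europe/Paris"],
})
df["datetime_local"] = [
    ts.tz_convert(tz) for ts, tz in zip(df["datetime_utc"], df["timezone"])
]
# 06:00 UTC -> 08:00 local (CEST), the morning peak hour used in the
# technical validation
```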
Technical Validation
To validate the Silent Cities dataset45, we verified the veracity of the metadata reported by the contributors and consolidated the acoustic recordings collection by checking for device malfunctions. We also verified that the automated acoustic measurements conducted on the recordings were coherent with aural human observations. Finally, we present evidence of the dataset’s validity in reflecting urban soundscape changes due to stay-at-home requirements. The three steps of this technical validation are detailed below.
First, we checked the quality of the data by manually verifying, with the help of the contributors, that the recordings were correctly attributed to their dedicated site. We also manually cleaned the information given by the contributors to remove any personal information, such as addresses or GPS coordinates, and corrected spelling mistakes to ensure interoperability between tables. In addition, we verified the conformity of the protocol by automatically extracting information from the recording collection (i.e. frequency range, schedule of recordings) and reported observed modifications of the protocol. We also automatically and manually verified the proper calculation of acoustic measurements and identified 10,724 files for which the calculation failed, probably due to file-related issues; these files were excluded from the dataset without affecting an entire site (i.e. no sites were excluded because of this issue). Finally, we checked for recorder malfunction by making sure the dB value varied over time for each recorder; only one site showed a flat dB response, leading to its exclusion from the dataset.
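The flat-response check amounts to flagging recorders whose dB values never vary; a minimal pandas sketch is given below, with column names and tolerance chosen for illustration.

```python
import pandas as pd

def flat_response_sites(db_table, tol=1e-6):
    """Flag sites whose per-file dB values show no temporal variation,
    indicating a recorder malfunction. `db_table` holds one row per
    file with 'site' and 'dB' columns."""
    spread = db_table.groupby("site")["dB"].agg(lambda x: x.max() - x.min())
    return spread[spread <= tol].index.tolist()
```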
Second, we confirmed that the automated soundscape measurements aligned with real soundscape events. More specifically, we investigated whether the acoustic indices and audio tagging categories were representative of the geophonic, biophonic and anthropophonic events detected manually by the contributors. To do so, we conducted a series of univariate generalized linear mixed-effect models (GLMMs; ‘glmmTMB’ package68) in R v4.2.1. We tested independently the presence/absence of geophonic, biophonic and anthropophonic events within the 1351 1-min recordings (i.e. response variables) in relation to acoustic indices and tagging types (i.e. explanatory variables). Models were fitted with a binomial error distribution and a logit link function. We considered the identity of contributors as a random effect to avoid pseudoreplication. We also implemented a first-order autoregressive function to account for serial autocorrelation in residuals. Statistical assumptions were visually assessed using model diagnostics (i.e. quantile-quantile plot, residuals vs fitted plot) with the DHARMa package69. The acoustic indices were linked to geophonic, biophonic, or anthropophonic events, albeit to varying degrees (Fig. 2b). For instance, the presence of biophonic events was associated with greater values of EAS and ECV and lower values of dB. Audio tagging categories effectively captured the soundscape components they were intended to portray (Fig. 2b).
Third, we assessed the validity of the dataset in evaluating the impact of stay-at-home requirements on soundscapes. In a first step, we plotted the mean values of biophony and anthropophony levels (here defined as the maximum probability of having a biophonic and an anthropophonic event in the 1-min recording, respectively) per site recorded at each hour (all protocols combined). As expected, we observed temporal patterns in biophony and anthropophony levels throughout the day (Fig. 2c). Regardless of the time of day, biophony levels were greater during the period when leaving home was not permitted (i.e. confinement level 2 or 3) compared to the other periods, while the opposite pattern was true for anthropophony. In a second step, we modeled changes in the values of acoustic indices as well as biophony and anthropophony levels (i.e. response variables) in relation to the containment measures (i.e. explanatory variables) using GLMMs with a beta distribution and a log link function. We aimed to provide a proof of validity and therefore limited the analysis to the expert protocol and all recordings collected at 8:00 am (i.e. the peak of biophonic and anthropophonic events; Fig. 2d). We focused on NDSI for the acoustic index and on the probabilities of bird calls and engine noise indicated by the automatic sound event recognition as proxies of biophony and anthropophony levels, respectively. We added as covariates in the models: (i) Julian day, to consider seasonal changes in biological and anthropogenic sounds, (ii) the first principal component analysis axis, depicting the level of anthropization in the landscape surrounding the recordings, and (iii) the climate type. Continuous covariates were scaled (mean = 0; SD = 1) to avoid convergence issues. We considered as random effects site identity nested within country, to account for hierarchical clustering within the data, and recorder type, due to potential sensitivity differences between devices. Due to the limited data collected during the strictest containment period, we combined data from the two periods when leaving home was not permitted. The same approach as outlined previously was employed for model validation (note that the validity of the statistical assumptions, assessed using quantile-quantile and residuals vs fitted plots, was only partially met for the engine noise model). Full models were more informative than the null ones, with differences in Akaike Information Criterion scores > 500. Finally, we conducted Tukey’s post hoc multiple comparison tests to investigate pairwise differences in NDSI values and in biophony and anthropophony levels between the three COVID-19 containment measures investigated. Overall, we found that the COVID-19 lockdown had positive effects on NDSI values and biophony levels and negative effects on anthropophony levels. After accounting for seasonal and landscape effects, our models suggest that NDSI values and biophony levels were significantly greater during the periods when leaving home was not recommended or permitted, compared to the period with no measures (Fig. 2d; Table 2). Biophony levels were also higher during the period when leaving home was not permitted than during the period when leaving home was not recommended. The opposite patterns were found for anthropophony levels, with significantly lower values measured during the periods when leaving home was not permitted compared to the other periods, albeit with differences of smaller magnitude (Fig. 2d; Table 5).
Altogether, our preliminary analysis revealed changes in soundscape patterns that can plausibly be attributed to containment policies, these changes being above the differences expected from climate alone.
Table 2.
Response variable | Explanatory variable | Estimate | SE | Z value | P value |
---|---|---|---|---|---|
NDSI | Intercept | 0.121 | 0.243 | 0.497 | 0.619 |
Confinement level 1 vs no measures | 0.035 | 0.012 | 2.832 | 0.005** | |
Confinement level 2 and 3 vs no measures | 0.051 | 0.014 | 3.535 | <0.001*** | |
PCA axis: degree of anthropization | −0.190 | 0.052 | −3.645 | <0.001*** | |
Julian day: season | −0.083 | 0.005 | −17.930 | <0.001*** | |
Climate: dry vs tropical | 0.498 | 0.366 | 1.358 | 0.174 | |
Climate: temperate vs tropical | 0.018 | 0.230 | 0.077 | 0.938 | |
Climate: continental vs tropical | 0.023 | 0.276 | 0.082 | 0.934 | |
Birds | Intercept | −2.408 | 0.264 | −9.137 | <0.001*** |
Confinement level 1 vs no measures | 0.083 | 0.013 | 6.191 | <0.001*** | |
Confinement level 2 and 3 vs no measures | 0.207 | 0.015 | 14.074 | <0.001*** | |
PCA axis: degree of anthropization | −0.115 | 0.044 | −2.595 | 0.009** | |
Julian day: season | −0.131 | 0.005 | −28.449 | <0.001*** | |
Climate: dry vs tropical | 0.381 | 0.353 | 1.079 | 0.281 | |
Climate: temperate vs tropical | −0.153 | 0.238 | −0.642 | 0.521 | |
Climate: continental vs tropical | −0.136 | 0.287 | −0.476 | 0.634 | |
Engine noise | Intercept | −4.844 | 0.207 | −23.445 | <0.001*** |
Confinement level 1 vs no measures | −0.033 | 0.011 | −2.887 | 0.004** | |
Confinement level 2 and 3 vs no measures | −0.216 | 0.014 | −15.130 | <0.001*** | |
PCA axis: degree of anthropization | 0.091 | 0.049 | 1.845 | 0.065 | |
Julian day: season | 0.054 | 0.005 | 10.839 | <0.001*** | |
Climate: dry vs tropical | −0.220 | 0.351 | −0.626 | 0.531 | |
Climate: temperate vs tropical | 0.022 | 0.220 | 0.100 | 0.920 | |
Climate: continental vs tropical | −0.152 | 0.265 | −0.573 | 0.566 |
Δ AIC values between the full and the null models are, from top to bottom, 593, 2056 and 1023, indicating that the full models were more informative than the null ones. SE: standard error of the estimate. ***P < 0.001, **P < 0.010, *P < 0.050. Confinement level 1 corresponds to “recommended not to leave home” and confinement levels 2 and 3 correspond to “not allowed to leave home”.
Table 5.
Response variable | Explanatory variable | Estimate | SE | t ratio | P value |
---|---|---|---|---|---|
NDSI | Confinement level 1 vs no measures | 0.035 | 0.012 | 2.832 | 0.013* |
Confinement level 2 and 3 vs no measures | 0.051 | 0.014 | 3.535 | 0.001** | |
Confinement level 2 and 3 vs Confinement level 1 | 0.016 | 0.011 | 1.464 | 0.309 | |
Birds | Confinement level 1 vs no measures | 0.083 | 0.013 | 6.191 | <0.001*** |
Confinement level 2 and 3 vs no measures | 0.207 | 0.015 | 14.074 | <0.001*** | |
Confinement level 2 and 3 vs Confinement level 1 | 0.124 | 0.011 | 11.272 | <0.001*** | |
Engine noise | Confinement level 1 vs no measures | −0.033 | 0.011 | −2.887 | 0.011* |
Confinement level 2 and 3 vs no measures | −0.216 | 0.014 | −15.130 | <0.001*** | |
Confinement level 2 and 3 vs Confinement level 1 | −0.183 | 0.01 | −16.999 | <0.001*** |
SE: standard error of the estimate. ***P < 0.001, **P < 0.010, *P < 0.050. Confinement level 1 corresponds to “recommended not to leave home” and confinement levels 2 and 3 correspond to “not allowed to leave home”.
Usage Notes
The Silent Cities dataset could be used for multiple applications. In the specific fields of bio/ecoacoustics, it could be used to study the effect of containment measures on urban soundscapes29, to improve the performance of acoustic indices in urban environments70 and to gain a deeper understanding of the interplay between biophony and urban environment characteristics71. In the field of machine learning (machine listening, deep learning), it will allow testing difficult cases of generalization in sound event recognition from one site to another, owing to the variety of sampled sites72. In the interdisciplinary field of territorial sciences (e.g. economic geography, territorial economics, spatial planning, urban engineering sciences), it will make it possible to analyze the links between a city’s level of economic activity and its level of noise pollution. Finally, for environmental sciences interested in well-being and in the relationships between humans and non-humans within urban socio-ecosystems (e.g. environmental and health psychology, landscape design, environmental geography), this dataset opens up opportunities for the qualitative study of individual and subjective perceptions of the different soundscape configurations collected. More broadly, we aim for this international and collaborative dataset to be mobilized in any research working toward better coexistence between humans and non-humans, and thus toward maintaining the Earth’s habitability for all.
The Silent Cities dataset45 is available under the terms of a Creative Commons Attribution 4.0 International license (CC-BY 4.0, https://creativecommons.org/licenses/by/4.0/). The CC-BY 4.0 license facilitates the discovery, re-use, and citation of the dataset. When using all or part of the dataset, we require users to cite both the dataset45 and this publication.
Acknowledgements
We would like to thank Aimee Johanssen for the English editing of the manuscript. We also thank Garett and Marlow Pignotti for testing the appropriate SM4 gain level in contexts of high anthropophony. JSPF was supported by the Leverhulme Trust through an early-career fellowship (award reference: ECF-2020-571). Finally, we dedicate this article to our colleague Didier Galop, one of the contributors to the Silent Cities dataset, who recently passed away during a field mission and whose unwavering support enabled us to launch this project. Since then, “even brook trout get the blues” in the Pyrenees.
Author contributions
S.C. introduced the concept; S.C., A.G., J.S.P.F. and N.F. designed the protocol; S.C., A.G., J.S.P.F., N.P., N.F. and the Silent Cities Consortium collected the soundscape recordings; A.G., N.P. managed the data; A.G., J.S.P.F., N.P. and N.F. conducted the data analysis; A.G. and J.S.P.F. conducted the technical validation. N.F. was in charge of high performance computing; S.C., A.G., J.S.P.F., N.P. and N.F. wrote the initial draft; all authors including the Silent Cities Consortium reviewed the manuscript.
Code availability
The recording manipulation and acoustic measurements were run using Python (https://github.com/brain-bzh/SilentCities) and the statistical analyses were run in R (https://github.com/agasc/SilentCities-R).
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These authors contributed equally: Samuel Challéat, Nicolas Farrugia, Jérémy S. P. Froidevaux, Amandine Gasc, Nicolas Pajusco.
A list of authors and their affiliations appears at the end of the paper.
Contributor Information
Samuel Challéat, Email: samuel.challeat@cnrs.fr.
Nicolas Farrugia, Email: nicolas.farrugia@imt-atlantique.fr.
Silent Cities project consortium:
Carlos R. Abrahams, Orlando Acevedo-Charry, Ludmilla M. S. Aguiar, Zachary R. Ahlin, Franz Aiple, Cécile H. Albert, Irene Alcocer, Ana Sofia Alves, Francisco Amorim, Ludmila B. Andrade, Pedro M. Araújo, Fernando Ascensão, Serge Aucoin, Elias Bader, Diego Balbuena, Luc Barbaro, Eder Barbier, Eliana Barona Cortés, Luis Emilio Barrie, José L. Bartheld, Henry Bates, Alice Baudouin, Richard D. Beason, Christa Beckmann, Amy Beeston, Gvan Belá, Kristen M. Bellisario, Simon Belshaw, Juan F. Beltrán, Raone Beltrão-Mendes, Enrico Bernard, Thierry Besche, Peter A. Biro, Cathie Boléat, Mathieu Bossaert, Ally Bradley, Paulo Branco, Wijnand Bredewold, Philip A. Briggs, Sylvio Romério Briglia-Ferreira, Emily Buckner, Ivana Budinski, Albane Burens, Rachel T. Buxton, Andrés Canavero, Paulo Cardoso, Farah Carrasco-Rueda, Paula C. Caycedo, Frédéric Cazaban, Lara R. Cerveira, Ada Ceuppens, Alain Challéat, Angela Chappa Larrea, Adrien Charbonneau, Mina Charnaux, Pooja Choksi, Jan Cibulka, Julián Clavijo-Bustos, Zuania Colón-Piñeiro, Sofia Conde, Maria João Costa, António Cotão, Clément Couturier, Marina D. A. Scarpelli, Luis P. da Silva, Tom Davis, Nathalie de Lacoste, Sarah L. Deans, Serge Dentin, Krzysztof Deoniziak, Sarah R. Dodgin, Ivo dos Santos, Tudor I. Draganoiu, Bruno Drolet, Marina H. L. Duarte, Gonçalo Duarte, Chloé Dubset, Frank Dziock, Alice Eldridge, Simon Elise, David R. Elliott, Arthur Enguehard, Karl Esztl, Darren M. Evans, Daniel M. Ferreira, Sonia A. F. Ferreira, Diogo F. Ferreira, Ana Margarida Ferreira, Penelope C. Fialas, Lauren Foster-Shaner, Bárbara Freitas, Nicholas R. Friedman, Susan Fuller, Didier Galop, Daniel Garside, Jean-Christophe Gattus, Sylvain Geoffray, Louis Godart, Laurent Godet, Inês Gomes Marques, Fernando González-García, Paul Griesberger, Bilal Habib, Madeline E. Hallet, Meena M. Haribal, Jennifer Hatlauf, Sylvain Haupert, José M. Herrera, Sierra E. Herzberger, Frederico Hintze Oliveira, Kathy H. Hodder, Isabelle Hoecherl, Mark F. Hulme, Emilia Hyland, Michel Jacobs, Akash Jaiswal, Laurent Jégou, Steve Jones, Hervé Jourdan, Tomáš Jůnek, Leili Khalatbari, Sarika Khanwilkar, James J. N. Kitson, Amanda H. Korstjens, Kim Krähenbühl-Künzli, Natalija Lace, Sébastien Laguet, Hedwig Lankau, Thiago O. Laranjeiras, Gregoire Lauvin, Samuel Lavin, Matthieu Le Corre, Monica León, Judah J. Levenson, Pavel Linhart, Juliette Linossier, Diego J. Lizcano, Diego Llusia, Marty Lockett, Pedro B. Lopes, Ricardo Jorge Lopes, José Vicente López-Bao, Adrià López-Baucells, David López-Bosch, Ricardo B. Machado, Claude Mande, Guillaume Marchais, Fabio Marcolin, Oscar H. Marín Gómez, Carina B. Marques, J. Tiago Marques, Tilla Martin, Vanessa Mata, Eloisa Matheu-Cortada, Vincent Médoc, Kirsten E. Miller, Basile Montagne, Allen Moore, JoMari M. A. Moreno, Felipe N. Moreno-Gómez, Sandra Mueller, Daniela Murillo-Bedoya, Luciano N. Naka, Adrian C. Newton, João T. Nunes, Pierrette Nyssen, Fionn Ó Marcaigh, Darren P. O’Connell, M. Teague O’Mara, David Ocampo, Meryem Ouertani, Jan Olav Owren, Vitor H. Paiva, Stéphane Paris, Marion Parisot, Swaroop Patankar, Jorge M. Pereira, Sílvia Pereira Barreiro, Cédric Peyronnet, Magali Philippe, Bryan C. Pijanowski, Nuno Pinto, Zach Poff, Jonathan M. Poppele, Andrew Power, Victoria Pratt, Darren S. Proppe, Raphaël Proulx, Laura Prugh, Sebastien J. Puechmaille, Xavier Puig-Montserrat, Lorenzo Quaglietta, John E. Quinn, Nancy I. Quiroga, Mariana Ramos, Rebecca Rasmussen, Georges Reckinger, Mimi Reed, Jean-Benoît Reginster, Vanesa Rivera, Clara F. Rodrigues, Patricia María Rodríguez-González, Eduardo Rodríguez-Rodríguez, Luke Romaine, Andrei L. Roos, Joao Rosa, Samuel R. P-J. Ross, Quentin Rouy, Alyssa M. Ryser, Sougata Sadhukhan, Robin Sandfort, José M. Santos, David Savage, Stéphanie C. Schai-Braun, Michael Scherer-Lorenzen, Mathilde Schoenauer Sebag, Pedro Segurado, Ana M. Serronha, Taylor Shaw, Brenda Shepherd, Cárol Sierra-Durán, Bruno M. Silva, Victoire Simon, Peter F. Sinclair, Carolina Soto-Navarro, Anne Sourdril, Jérôme Sueur, Larissa S. M. Sugai, Ian B. Tarrant, Fran Tattersall, Christopher N. Templeton, Michelle E. Thompson, Marcela Todd, Juan D. Tovar-García, Karina Townsend, Amaro Tuninetti, Paul A. Ullrich, Juan S. Vargas Soto, Kevin Vega, Gabriella Ventrice, Pierre J. Victor, Josep Vidal Oliveras, Sara Villén-Pérez, Olivier Vinet, Agnès Vivat, Jean-Do. Vrignault, William D. J. Walton, Christopher J. Watson, Oliver R. Wearn, Damion L. Whyte, Fredric M. Windsor, Yanchen Wu, Selena Xie, Ignacio Zeballos Puccherelli, and Vera Zina
References
- 1.Desvars-Larrive, A. et al. A structured open dataset of government interventions in response to covid-19. Scientific data7, 285 (2020). 10.1038/s41597-020-00609-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Porcher, S. Response2covid19, a dataset of governments’ responses to covid-19 all around the world. Scientific data7, 423 (2020). 10.1038/s41597-020-00757-y [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Hale, T. et al. A global panel database of pandemic policies (oxford covid-19 government response tracker). Nature human behaviour5, 529–538 (2021). 10.1038/s41562-021-01079-8 [DOI] [PubMed] [Google Scholar]
- 4.Gaiser, E. E. et al. Long-term ecological research and the covid-19 anthropause: A window to understanding social–ecological disturbance. Ecosphere13, e4019 (2022). 10.1002/ecs2.4019 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Rutz, C. et al. Covid-19 lockdown allows researchers to quantify the effects of human activity on wildlife. Nature Ecology & Evolution4, 1156–1159 (2020). 10.1038/s41559-020-1237-z [DOI] [PubMed] [Google Scholar]
- 6.Bates, A. E., Primack, R. B., Moraga, P. & Duarte, C. M. Covid-19 pandemic and associated lockdown as a “global human confinement experiment” to investigate biodiversity conservation. Biological conservation248, 108665 (2020). 10.1016/j.biocon.2020.108665 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Diffenbaugh, N. S. et al. The covid-19 lockdowns: a window into the earth system. Nature Reviews Earth & Environment1, 470–481 (2020). 10.1038/s43017-020-0079-1 [DOI] [Google Scholar]
- 8.Warrington, M. H., Schrimpf, M. B., Des Brisay, P., Taylor, M. E. & Koper, N. Avian behaviour changes in response to human activity during the covid-19 lockdown in the united kingdom. Proceedings of the Royal Society B289, 20212740 (2022). 10.1098/rspb.2021.2740 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Hasegawa, Y. & Lau, S.-K. A qualitative and quantitative synthesis of the impacts of covid-19 on soundscapes: A systematic review and meta-analysis. Science of The Total Environment 157223 (2022). [DOI] [PMC free article] [PubMed]
- 10.Aletta, F., Oberman, T., Mitchell, A., Tong, H. & Kang, J. Assessing the changing urban sound environment during the covid-19 lockdown period using short-term acoustic measurements. Noise mapping7, 123–134 (2020). 10.1515/noise-2020-0011 [DOI] [Google Scholar]
- 11.Aletta, F. & Van Renterghem, T. Associations between personal attitudes towards covid-19 and public space soundscape assessment: An example from antwerp, belgium. International Journal of Environmental Research and Public Health18, 11774 (2021). 10.3390/ijerph182211774 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Mitchell, A., Oberman, T., Aletta, F. & Kang, J. Development of a multi-level predictive soundscape model to assess the soundscapes of public spaces during the covid-19 lockdowns. The Journal of the Acoustical Society of America150, A293–A293 (2021). 10.1121/10.0008334 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Mitchell, A. et al. Investigating urban soundscapes of the covid-19 lockdown: A predictive soundscape modeling approach. The Journal of the Acoustical Society of America150, 4474–4488 (2021). 10.1121/10.0008928 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Barbaro, L. et al. Covid-19 shutdown revealed higher acoustic diversity and vocal activity of flagship birds in old-growth than in production forests. Science of The Total Environment 166328 (2023). [DOI] [PubMed]
- 15.Schrimpf, M. B. et al. Reduced human activity during covid-19 alters avian land use across north america. Science Advances7, eabf5073 (2021). 10.1126/sciadv.abf5073 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Xiao, H., Eilon, Z. C., Ji, C. & Tanimoto, T. Covid-19 societal response captured by seismic noise in china and italy. Seismological Research Letters91, 2757–2768 (2020). 10.1785/0220200147 [DOI] [Google Scholar]
- 17.Bartalucci, C., Bellomini, R., Luzzi, S., Pulella, P. & Torelli, G. A survey on the soundscape perception before and during the covid-19 pandemic in italy. Noise Mapping8, 65–88 (2021). 10.1515/noise-2021-0005 [DOI] [Google Scholar]
- 18.Montano, W. & Gushiken, E. Lima soundscape before confinement and during curfew. airplane flights suppressions because of peruvian lockdown. The Journal of the Acoustical Society of America148, 1824–1830 (2020). 10.1121/10.0002112 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Ulloa, J. S. et al. Listening to cities during the covid-19 lockdown: How do human activities and urbanization impact soundscapes in colombia? Biological Conservation255, 108996 (2021). 10.1016/j.biocon.2021.108996 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Mimani, A. & Singh, R. Anthropogenic noise variation in indian cities due to the covid-19 lockdown during march-to-may 2020. The Journal of the Acoustical Society of America150, 3216–3227 (2021). 10.1121/10.0006966 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Maggi, A. L. et al. Perception of the acoustic environment during covid-19 lockdown in argentina. The Journal of the Acoustical Society of America149, 3902–3909 (2021). 10.1121/10.0005131 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Piccinini, D. et al. Covid-19 lockdown and its latency in northern italy: seismic evidence and socio-economic interpretation. Scientific reports10, 1–10 (2020). 10.1038/s41598-020-73102-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Alsina-Pagès, R. M. et al. Soundscape of catalonia during the first covid-19 lockdown: Preliminary results from the sons al balcó project. Eng. Proc. 21, 77 (2020). [Google Scholar]
- 24.Alsina-Pagès, R. M., Bergadà, P. & Martínez-Suquía, C. Changes in the soundscape of girona during the covid lockdown. The Journal of the Acoustical Society of America149, 3416–3423 (2021). 10.1121/10.0004986 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Hentati-Sundberg, J., Berglund, P.-A., Hejdström, A. & Olsson, O. Covid-19 lockdown reveals tourists as seabird guardians. Biological Conservation254, 108950 (2021). 10.1016/j.biocon.2021.108950 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Zambon, G., Confalonieri, C., Angelini, F. & Benocci, R. Effects of covid-19 outbreak on the sound environment of the city of milan, italy. Noise Mapping8, 116–128 (2021). 10.1515/noise-2021-0009 [DOI] [Google Scholar]
- 27.Pagès, R. M. A. et al. Noise at the time of covid 19: The impact in some areas in rome and milan, italy. Noise Mapping7, 248–264 (2020). 10.1515/noise-2020-0021 [DOI] [Google Scholar]
- 28.Derryberry, E. P., Phillips, J. N., Derryberry, G. E., Blum, M. J. & Luther, D. Singing in a silent spring: Birds respond to a half-century soundscape reversion during the covid-19 shutdown. Science370, 575–579 (2020). 10.1126/science.abd5777 [DOI] [PubMed] [Google Scholar]
- 29.Lenzi, S., Sádaba, J. & Lindborg, P. Soundscape in times of change: Case study of a city neighbourhood during the covid-19 lockdown. Frontiers in psychology12, 570741 (2021). 10.3389/fpsyg.2021.570741 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Bonet-Solà, D., Martínez-Suquía, C., Alsina-Pagès, R. M. & Bergadà, P. The soundscape of the covid-19 lockdown: Barcelona noise monitoring network case study. International Journal of Environmental Research and Public Health18, 5799 (2021). 10.3390/ijerph18115799 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Manzano, J. V. et al. The “sound of silence” in granada during the covid-19 lockdown. Noise Mapping8, 16–31 (2021). 10.1515/noise-2021-0002 [DOI] [Google Scholar]
- 32.Sakagami, K. A note on variation of the acoustic environment in a quiet residential area in kobe (japan): Seasonal changes in noise levels including covid-related variation. Urban Science4, 63 (2020). 10.3390/urbansci4040063 [DOI] [Google Scholar]
- 33.Güler, G. A. & Bi˙len, A. Ö. Urban soundscape changes in turkey before and after covid-19: Eski˙şehi˙r, an anatolian city. ArtGRID-Journal of Architecture Engineering and Fine Arts4, 30–40 (2022). [Google Scholar]
- 34.Ross, S. R. J. A suburban soundscape reveals altered acoustic dynamics during the covid-19 lockdown. JEA6, 0–0 (2022). 10.35995/jea6010001 [DOI] [Google Scholar]
- 35.Terry, C., Rothendler, M., Zipf, L., Dietze, M. C. & Primack, R. B. Effects of the covid-19 pandemic on noise pollution in three protected areas in metropolitan boston (usa). Biological conservation256, 109039 (2021). 10.1016/j.biocon.2021.109039 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Smith, K. B. et al. Acoustic vector sensor analysis of the monterey bay region soundscape and the impact of covid-19. The Journal of the Acoustical Society of America151, 2507–2520 (2022). 10.1121/10.0010162 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Bertucci, F. et al. Changes to an urban marina soundscape associated with covid-19 lockdown in guadeloupe. Environmental Pollution289, 117898 (2021). 10.1016/j.envpol.2021.117898 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Leon-Lopez, B., Romero-Vivas, E. & Viloria-Gomora, L. Reduction of roadway noise in a coastal city underwater soundscape during covid-19 confinement. The Journal of the Acoustical Society of America149, 652–659 (2021). 10.1121/10.0003354 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.De Lauro, E., Falanga, M. & Lalli, L. T. The soundscape of the trevi fountain in covid-19 silence. Noise Mapping7, 212–222 (2020). 10.1515/noise-2020-0018 [DOI] [Google Scholar]
- 40.Lecocq, T. et al. Global quieting of high-frequency seismic noise due to covid-19 pandemic lockdown measures. Science369, 1338–1343 (2020). 10.1126/science.abd2438 [DOI] [PubMed] [Google Scholar]
- 41.Steele, D. & Guastavino, C. Quieted city sounds during the covid-19 pandemic in montreal. International Journal of Environmental Research and Public Health18, 5877 (2021). 10.3390/ijerph18115877 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Asensio, C. et al. A taxonomy proposal for the assessment of the changes in soundscape resulting from the covid-19 lockdown. International journal of environmental research and public health17, 4205 (2020). 10.3390/ijerph17124205 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Aumond, P., Can, A., Lagrange, M., Gontier, F. & Lavandier, C. Multidimensional analysis to monitor the effects of covid-19 lockdown on the urban sound environment of lorient. In European Congress on Noise Control Engineering (EuroNoise), Oct 2021, Madeira, Portugal (2021).
- 44.Vimal, R. The impact of the covid-19 lockdown on the human experience of nature. Science of the Total Environment803, 149571 (2022). 10.1016/j.scitotenv.2021.149571 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45. Challéat, S., Farrugia, N., Gasc, A., Froidevaux, J. & Pajusco, N. Silent Cities. OSF (2024). 10.17605/OSF.IO/H285U
- 46. Hill, A. P. et al. AudioMoth: evaluation of a smart open acoustic device for monitoring biodiversity and the environment. Methods in Ecology and Evolution 9, 1199–1211 (2018). 10.1111/2041-210X.12955
- 47. Sueur, J., Farina, A., Gasc, A., Pieretti, N. & Pavoine, S. Acoustic indices for biodiversity assessment and landscape investigation. Acta Acustica united with Acustica 100, 772–781 (2014). 10.3813/AAA.918757
- 48. Kong, Q. et al. PANNs: large-scale pretrained audio neural networks for audio pattern recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, 2880–2894 (2020). 10.1109/TASLP.2020.3030497
- 49. Beck, H. E. et al. Present and future Köppen-Geiger climate classification maps at 1-km resolution. Scientific Data 5 (2018). 10.1038/sdata.2018.214
- 50. Buchhorn, M. et al. Copernicus Global Land Service: Land Cover 100m: collection 3: epoch 2019: Globe (v3.0.1) [data set]. Zenodo (2020). 10.5281/zenodo.3939050
- 51. Venter, O. et al. Sixteen years of change in the global terrestrial human footprint and implications for biodiversity conservation. Nature Communications 7, 12558 (2016). 10.1038/ncomms12558
- 52. Venter, O. et al. Last of the Wild Project, version 3 (LWP-3): 2009 human footprint, 2018 release. Palisades, New York: NASA Socioeconomic Data and Applications Center (SEDAC) (2018). 10.7927/H46T0JQ4
- 53. Center for International Earth Science Information Network - CIESIN - Columbia University. Gridded Population of the World, version 4 (GPWv4): population density, revision 11. Palisades, New York: NASA Socioeconomic Data and Applications Center (SEDAC) (2018). 10.7927/H49C6VHW
- 54. Ulloa, J. S., Haupert, S., Latorre, J., Aubin, T. & Sueur, J. scikit-maad: an open-source and modular toolbox for quantitative soundscape analysis in Python. Methods in Ecology and Evolution 12, 2334–2340 (2021). 10.1111/2041-210X.13711
- 55. Haupert, S., Ulloa, J. S. & Latorre Gil, J. F. scikit-maad: an open-source and modular toolbox for quantitative soundscape analysis in Python. Zenodo (2021). 10.5281/zenodo.6129239
- 56. McFee, B. et al. librosa/librosa: 0.10.1. Zenodo (2023). 10.5281/zenodo.8252662
- 57. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Wallach, H. et al. (eds.) Advances in Neural Information Processing Systems 32, 8024–8035 (Curran Associates, Inc., 2019). http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf
- 58. Towsey, M., Wimmer, J., Williamson, I. & Roe, P. The use of acoustic indices to determine avian species richness in audio-recordings of the environment. Ecological Informatics 21, 110–119 (2014). 10.1016/j.ecoinf.2013.11.007
- 59. Buxton, R. T. et al. Efficacy of extracting indices from large-scale acoustic recordings to monitor biodiversity. Conservation Biology 32, 1174–1184 (2018). 10.1111/cobi.13119
- 60. Audio Engineering Society. AES17-2020: AES standard method for digital audio engineering - measurement of digital audio equipment (2020).
- 61. Pieretti, N., Farina, A. & Morri, D. A new methodology to infer the singing activity of an avian community: the Acoustic Complexity Index (ACI). Ecological Indicators 11, 868–873 (2011). 10.1016/j.ecolind.2010.11.005
- 62. Towsey, M. The calculation of acoustic indices derived from long-duration recordings of the natural environment. Tech. Rep., Queensland University of Technology, Brisbane (2018). https://eprints.qut.edu.au/110634/
- 63. Boelman, N. T., Asner, G. P., Hart, P. J. & Martin, R. E. Multi-trophic invasion resistance in Hawai'i: bioacoustics, field surveys, and airborne remote sensing. Ecological Applications 17, 2137–2144 (2007). 10.1890/07-0004.1
- 64. Kasten, E. P., Gage, S. H., Fox, J. & Joo, W. The Remote Environmental Assessment Laboratory's acoustic library: an archive for studying soundscape ecology. Ecological Informatics 12, 50–67 (2012). 10.1016/j.ecoinf.2012.08.001
- 65. Gasc, A. et al. Soundscapes reveal disturbance impacts: biophonic response to wildfire in the Sonoran Desert Sky Islands. Landscape Ecology 33, 1399–1415 (2018). 10.1007/s10980-018-0675-3
- 66. Gemmeke, J. F. et al. Audio Set: an ontology and human-labeled dataset for audio events. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 776–780 (IEEE, 2017). 10.1109/ICASSP.2017.7952261
- 67. Dinkel, H., Chen, Y., Wu, M. & Yu, K. Voice activity detection in the wild via weakly supervised sound event detection. In Proceedings of the Conference of the International Speech Communication Association (INTERSPEECH) (2020).
- 68. Brooks, M. E. et al. glmmTMB balances speed and flexibility among packages for zero-inflated generalized linear mixed modeling. The R Journal 9, 378–400 (2017). 10.32614/RJ-2017-066
- 69. Hartig, F. DHARMa: residual diagnostics for hierarchical (multi-level/mixed) regression models. R package version 0.3.3 (2020).
- 70. Fairbrass, A. J. et al. CityNet: deep learning tools for urban ecoacoustic assessment. Methods in Ecology and Evolution 10, 186–197 (2019). 10.1111/2041-210X.13114
- 71. Fairbrass, A. J., Rennert, P., Williams, C., Titheridge, H. & Jones, K. E. Biases of acoustic indices measuring biodiversity in urban areas. Ecological Indicators 83, 169–177 (2017). 10.1016/j.ecolind.2017.07.064
- 72. Lostanlen, V., Salamon, J., Farnsworth, A., Kelling, S. & Bello, J. P. Robust sound event detection in bioacoustic sensor networks. PLoS ONE 14, e0214168 (2019). 10.1371/journal.pone.0214168
Data Availability Statement
Audio file handling and acoustic measurements were performed in Python (https://github.com/brain-bzh/SilentCities); statistical analyses were performed in R (https://github.com/agasc/SilentCities-R).
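For illustration, the sketch below shows how an acoustic descriptor such as the Acoustic Complexity Index (ACI) defined by Pieretti et al.61 can be computed from a single recording using only NumPy and SciPy. It is a minimal example, not the project pipeline, which relies on the scikit-maad toolbox54,55; the filename recording.wav is a hypothetical placeholder.

```python
# Minimal, illustrative sketch (not the Silent Cities pipeline): computing
# the Acoustic Complexity Index (ACI) of Pieretti et al. (ref. 61) for one
# recording. "recording.wav" is a hypothetical placeholder file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("recording.wav")   # sampling rate (Hz) and samples
x = x.astype(np.float64)
if x.ndim > 1:                          # mix multi-channel files to mono
    x = x.mean(axis=1)

# Magnitude spectrogram: rows are frequency bins, columns are time frames.
f, t, Sxx = spectrogram(x, fs=fs, nperseg=512, mode="magnitude")

# ACI per frequency bin: summed absolute intensity differences between
# adjacent time frames, normalised by the bin's total intensity.
diff = np.abs(np.diff(Sxx, axis=1)).sum(axis=1)
aci_per_bin = diff / (Sxx.sum(axis=1) + 1e-12)  # guard against silent bins

aci_total = aci_per_bin.sum()           # total ACI of the recording
print(f"ACI = {aci_total:.2f}")
```

In practice, scikit-maad computes this and the other indices used here from the same spectrogram representation, with additional options (e.g., windowing the recording into short segments before summing) that this sketch omits.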