PLOS ONE. 2025 Jan 15;20(1):e0316187. doi: 10.1371/journal.pone.0316187

Hear it here: Built environments predict ratings and descriptions of ambiguous sounds

Brandon J Forys 1,*, Emily Qi 1, Rebecca M Todd 1,2, Alan Kingstone 1
Editor: Bruno Alejandro Mesz
PMCID: PMC11734970  PMID: 39813179

Abstract

The built environments we move through are a filter for the stimuli we experience. If we are in a darker or a lighter room or space, a neutrally valenced sound could be perceived as more unpleasant or more pleasant. Past research suggests a role for the layout and lighting of a space in impacting how stimuli are rated, especially on bipolar valence scales. However, we do not know how affective experiences and descriptions of everyday auditory stimuli are impacted by built environments. In this study, we examine whether listening to a series of ambiguously valenced sounds in an older, darker building leads these sounds to be rated as less pleasant—and described using more negatively valenced language—compared to listening to these sounds in a newer building with more natural light. In a between-subjects design, undergraduate participants at an older building or a newer building (nOld = 46, nNew = 46; nFemale = 71, nMale = 18, MAge = 21.18, RangeAge = 17-38) listened to ten sounds that had previously been rated as ambiguous in valence, then rated these sounds on a bipolar valence scale before being asked to describe, in writing, how they felt about each sound. Participants rated sounds as being more pleasant at the New site compared to the Old site, but the sentiment of their descriptions only differed between sites when controlling for collinearity. However, bipolar scale ratings and description sentiment were highly correlated. Our findings suggest a role for the features of built environments in impacting how we appraise the valence of everyday sounds.

Introduction

The built environments we inhabit in our daily lives affect how we perceive the world around us. If you are in a dark building with narrow hallways and hear a dog barking, you may feel more threatened than if you heard a dog barking in a bright, airy space—but why is this the case? Stimuli are specific sensory experiences that can guide how we interpret the global environment in which they are situated. Auditory stimuli, unlike many kinds of visual stimuli, are especially susceptible to misinterpretation because their sources are generally more difficult to localize [1]. Furthermore, the perceived valence of a sound can change depending on whether the person experiencing the sound can localize [2] and control [3] the sound.

If the valences of simultaneous auditory and visual stimuli differ, cross-modal effects can result in the perceived valence of one stimulus type priming the experienced valence of the other stimulus type [4, 5]—even if we are not selectively attending to one of the stimuli [6]. This transfer of valence between stimuli has previously been characterized through the Affect Misattribution Procedure (AMP) [7], an experimental paradigm capturing the misattribution of affective information based on contextual information [8]. Thus, in a space with unwelcoming features, we may perceive a neutral or even pleasant sound as unpleasant [9], relative to how we would perceive it in a space with pleasant features. While past research has established cross-modal priming when two stimuli with differing valences are co-presented, we do not yet know whether we would see priming from more global environmental features to a specific stimulus.

The features of a built environment beyond the lab—especially its architectural features, such as the size of a room or the amount of lighting available—can promote specific behaviours or cognitive states that differ from what can be characterized in the lab. We can form a unified representation of this environment that modulates how we experience stimuli and make decisions within it [10]. The cognitive ethology approach [11–13] evaluates how our ability to complete various tasks is impacted by these tasks’ context. By focusing on how stimuli are situated and interpreted in their specific environments, we can better understand how the environment influences our perception of these stimuli. To this end, past studies have identified ways in which spaces with distinct features can nudge us towards making specific behavioural decisions through subtle incentives and cues [14, 15]. For example, Wu et al. [16] found that being in a building that promoted environmental sustainability increased the likelihood that participants selected the correct bin in which to recycle products. In the domain of auditory stimulus valence, Tajadura-Jiménez et al. [17] found specific effects of architectural features on sound ratings, such that participants rated sounds heard behind them in smaller rooms as more pleasant than those heard in front of them in larger rooms—but not if the sounds were threatening. Similar effects can be ascribed to outdoor spaces. In a study where participants rated the valence of an outdoor soundscape on a university campus, they reported the soundscape as more positively valenced if people were not speaking loudly and traffic noise was not heard [18]. Additionally, a fountain noise was rated as being negatively valenced—in spite of the fountain’s pleasant appearance—as it compounded the negative experience of other unpleasant sounds. These studies exemplify how even subtle environmental features—such as architecture linked to sustainable design, or traffic noise heard in a courtyard—can shift people’s behaviours and change how sets of stimuli within these environments are interpreted. However, they do not examine whether stimuli with valences open to interpretation are differentially appraised depending on these environmental features.

These features can manifest in myriad ways depending on the architectural configuration of a space. For example, we might expect a room in an older building with no natural lighting to be less pleasant than a room in a newer building with plentiful natural lighting. Such an older building might be characterized by cramped spaces and insufficient lighting, making it more difficult to know how to interpret stimuli in the space and how they might affect us. Thus, we might respond to stimuli in the less pleasant space more negatively. These features could change how we determine the sources of auditory stimuli [19] and whether we find them threatening [20]. No single element of these environments drives this behaviour on its own; rather, the general representation of the built environment could impact how we appraise stimuli within it through a process of behavioural priming, such that an environment with more positively valenced features could prime positive stimulus ratings—with negative features priming negative ratings [4]. Past studies primarily focus on how features of the built environment shift behaviours, or how stimuli in the environment impact appraisals of one’s context. However, they do not capture how environmental features impact appraisals of specific stimuli that do not have a strong initial valence. Furthermore, existing studies typically rely on limited survey or scale-based measures of responses to these stimuli, whereas our experiences of these stimuli may draw on a variety of features that cannot be captured by these traditional measures alone.

Experiences of stimuli could also differ greatly depending on whether we are in a laboratory context or a more naturalistic setting [11]. Design features of a space, including lighting, room size, and materials used, can also reduce the unpleasantness of auditory stimuli with negatively valenced features, driving generalization of stimulus valence from the laboratory to naturalistic contexts [17]. Accordingly, the ways in which participants respond to ambiguous stimuli could differ greatly depending on whether they are in a laboratory context or a more naturalistic setting—and whether they experience the stimuli in a highly controlled space or in built environments with differing acoustic and lighting properties. However, we do not know the extent to which the effects of context on perceived stimulus valence are reflected in participants’ experiences of these stimuli. Such experiences could be captured by asking participants to describe how they feel about the stimuli they experience in each space, taking a phenomenological approach [21] to understanding how sounds may be differentially experienced depending on the features of the built environment in which they are heard.

The ways in which we interpret a sound could also be influenced not only by the sound’s present spatial environment, but also by our understanding of what the sound means to us. Hearing a certain sound that we associate with a negative experience may evoke a strong emotional response regardless of present context and may, in fact, evoke a past spatial and temporal context instead [22]. Additionally, our perception of an ambiguous sound could be an indicator for how we interpret the environment in which it is heard [23]. Thus, it is important to evaluate how past and present experiences of sounds impact how we rate and perceive them. Valence rating scales allow participants to rate the degree to which these sounds elicit positive or negative sentiments, offering important information about quantifiable negative or positive aspects of an experience. However, they may not fully capture qualitative experiences of sounds in different built environments. Furthermore, we do not yet know how experiences of auditory stimuli are reflected in the context in which they are presented, or in descriptions of features and affective states associated with these stimuli.

In the present study, we take an ethological approach [11, 12] to explore whether common stimuli are contextually rated and described as positive or negative in different architectural environments. Participants completed the study at one of two different sites—a windowless room in an older building and a windowed room in a newer building. In a windowless room, participants have less of a reference for the outside world compared to a room with windows, which could cause the space—and stimuli experienced within it—to be judged as less pleasant. Furthermore, if one has to walk through an older building to reach this study room, one will experience more antiquated infrastructure that might be more effortful to traverse (e.g., narrower staircases or a smaller elevator), potentially driving a more negative appraisal of stimuli experienced in that space as compared to in a newer building. Thus, these two spaces offer contrasts in the environments that could drive stimulus interpretations. At each site, participants listened to a series of everyday sounds that have previously been rated as neutral with a high standard deviation on a bipolar scale, indicating varied representations of valence [24]. Such sounds would not have acoustic content (e.g., frequencies) or semantic-affective associations that would make them strongly pleasant or painful to listen to by themselves [25], allowing their perceived valence to be shifted by the built environment in which they are heard. Furthermore, these sounds would be open to interpretation based on contexts in which they were previously experienced, and negatively or positively valenced experiences associated with these past contexts [20]. Participants first rated each sound’s valence on a bipolar scale, which has been shown to disentangle valence from arousal information in emotionally salient stimuli more effectively than unipolar scales for each valence dimension [26]. These sound ratings were used to establish a self-report baseline for how participants rated the valence of each sound in different built environments. Then, to explore how participants describe their experiences of each sound, we asked them to write out how they felt about each sound. Their responses could then be evaluated for sentiment information—whether they used more positive or negative language to describe sounds depending on the context [27]. Last, they indicated the route they used to enter the building where the study was located, to evaluate participants’ awareness of the features of their built environment. This allowed us to evaluate the role of movement through architectural spaces in impacting how participants experience ambiguous sounds.

Materials and methods

Participants

We powered our study for a medium effect of 0.4 for a difference in bipolar sound ratings between sites, based on the effect sizes observed in an online pilot study comparing ratings on unipolar vs. bipolar scales with the same stimulus set as used in the present study. This effect size is also congruent with that found by Tajadura-Jiménez et al. [17] when looking at the effect of architectural features and sound position on perceived sound valence. Using a bootstrapped power analysis for 1000 iterations in the Superpower package in R [28], we obtained a target sample size of N = 64 per site with a power level of 0.8. We recruited N = 97 psychology undergraduate participants from the Human Subject Pool at the University of British Columbia, between March 11th, 2022 and February 21st, 2024. All participants gave written, informed consent, and received bonus points towards their courses. Of these, N = 5 participants were unable to finish the task because of equipment errors. Therefore, we analyzed data from N = 92 participants (n = 18 male, n = 71 female, n = 3 other; Table 1). This study was approved by the research ethics board at the University of British Columbia with code H10-00527.
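As an illustration of this procedure, a simulation-based power analysis of this kind could be set up with the Superpower package roughly as follows; the factor labels, group means, and standard deviation below are illustrative assumptions, not the authors' exact inputs.

library(Superpower)

# Two-level between-subjects design ("site": old vs. new), n = 64 per group,
# with group means of 0 and 0.4 in standard-deviation units (a standardized effect of 0.4).
design <- ANOVA_design(design = "2b",
                       n = 64,
                       mu = c(0, 0.4),
                       sd = 1,
                       labelnames = c("site", "old", "new"))

# Simulation-based ("bootstrapped") power estimate over 1000 iterations
set.seed(1)
power_result <- ANOVA_power(design, nsims = 1000, verbose = FALSE)
power_result$main_results  # expected power for the effect of site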

Table 1. Demographic information for all participants, by site and sex.

Site Sex n M age SD age Min age Max age
New Female 35 21.91 4.09 18 38
New Male 10 20.70 1.70 18 24
New Other 1 21.00 — 21 21
Old Female 36 20.63 3.34 17 36
Old Male 8 21.62 1.69 19 24
Old Other 2 19.00 0.00 19 19

Stimulus presentation

All visual stimuli were presented on a Lenovo ThinkPad P52s laptop with a 15.6 inch display (resolution: 1920x1080) running Windows 10, via the Pavlovia online study platform using PsychoPy 2021.2.3 (RRID: SCR_006571) [29]. Auditory stimuli were presented through a pair of Dell A215 stereo speakers. In order to create the impression that the auditory stimuli originated from the room as opposed to from the computer—and were situated within the architectural context—the left channel speaker was placed directly behind the participant, while the right channel speaker was placed directly to the right of the participant. The volume on these speakers was set to maximum, while the computer volume was set to a level of 32 in Windows.

Stimuli

The stimuli in our study were a series of 20 audio recordings of everyday sounds, each with a uniform original duration of 6000 ms (Table 2). We selected these auditory stimuli from the International Affective Digitized Sounds system (IADS-2; [24]). These sounds were presented at the same volume with no fading in or out. All sounds selected had a mean rating between 5 (neutral point) and 5.9 on a 9-point Self-Assessment Manikin scale ranging from completely unhappy to completely happy [24], and all but one sound (“Sink”) had a standard deviation (SD) in their pleasure rating of at least 1.5. Each sound was played twice during the task, in a different random order each time: once for participants to rate how they felt about the sound on an onscreen bipolar scale, and again for participants to describe how they felt about the sound.
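As a rough sketch of these selection criteria, the code below filters a hypothetical table of IADS-2 norms for near-neutral means and high rating variability, then centres the neutral point at zero as in Table 2. The object and column names are assumptions, since the IADS-2 norms are distributed separately with the stimulus set, and the one sound ("Sink") that fell just below the SD cutoff would have been retained by hand.

library(dplyr)

# Hypothetical data frame `iads_norms` with one row per IADS-2 sound;
# column names (pleasure_mean, pleasure_sd, ...) are assumed for illustration.
candidate_sounds <- iads_norms %>%
  filter(pleasure_mean >= 5, pleasure_mean <= 5.9,  # near-neutral mean valence
         pleasure_sd >= 1.5) %>%                    # wide disagreement across raters
  mutate(pleasure_centred = pleasure_mean - 5)      # centre the neutral point at 0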

Table 2. Summary data for all IADS-2 sounds used, from original IADS study and from the current study.

IADS ratings have been centred from an original range of 1-10 to a range of -5 to 5, to match the scale used in the study. IADS-2: International Affective Digitized Sounds.

Sound ID Description M IADS rating SD IADS rating M IADS arousal SD IADS arousal M rating SD rating
107 Dog 0.47 2.22 5.85 2.22 0.04 1.43
109 Carousel 1.40 2.13 5.64 2.13 -0.04 1.73
113 Cows 0.45 1.71 4.88 1.71 0.11 1.31
114 Cattle 0.01 1.85 6.04 1.85 -0.45 1.54
120 Rooster 0.20 2.10 5.41 2.10 0.32 1.55
152 Tropical 0.23 2.28 5.51 2.28 0.58 1.78
170 Night 0.31 2.12 4.60 2.12 0.02 2.00
171 CountryNight 0.59 1.79 3.71 1.79 0.17 1.65
262 Yawn 0.26 1.58 2.88 1.58 -0.23 1.38
322 TypeWriter 0.01 1.82 4.79 1.82 -0.57 1.48
361 Restaurant 0.36 1.62 5.01 1.62 -0.21 1.50
373 Paint 0.09 1.55 4.65 1.55 -0.22 1.41
374 Sink 0.60 1.35 4.23 1.35 0.29 1.47
378 Doorbell 1.06 2.01 6.15 2.01 -0.23 1.63
403 Helicopter1 0.57 1.83 5.56 1.83 -0.70 1.29
425 Train 0.09 1.42 5.15 1.42 0.44 1.47
602 Thunderstorm 0.99 2.23 3.77 2.23 0.99 2.00
698 Rain2 0.18 1.94 4.12 1.94 0.67 1.94
704 Phone1 0.49 1.98 6.54 1.98 -1.40 1.41
724 Chewing 0.34 1.97 4.91 1.97 -0.90 1.97

Procedure

Participants came to the study room in person to complete the task. In order to provide two distinct architectural environments in which the sounds would be heard, the study took place in two locations on the campus of the University of British Columbia (Table 3). In the old building condition (Old; Fig 1A), participants completed the study in a 2.00m x 3.00m room in a 1980s-era building on campus with a fully covered window and no natural lighting. In the new building condition (New; Fig 1B), participants completed the study in a 2.00m x 3.00m room in a 2010s-era building on campus with two large windows and plentiful natural lighting. The laptop and speakers used, as well as the orientation of the participant relative to the door in the room, were kept consistent between sites. In both conditions, the participant was seated at a desk in front of the laptop. After reviewing the consent form and filling out demographic information and a questionnaire about COVID-related stress, participants began the experimental task. In the first block of the task (Fig 2A), participants listened to a series of ten sounds presented in a random order. After hearing each sound, participants were instructed to rate the sound on how pleasant or unpleasant it was on a scale from -4 (most unpleasant) to 4 (most pleasant), with a neutral point at 0. In the second block of the task (Fig 2B), participants heard the same ten sounds in a differently randomized order and were instructed to describe how they felt about each sound in 1-2 sentences by typing in an on-screen text box. Participants were not reminded of their sound ratings in this stage. Afterwards, participants were sent to a debrief survey in which they were asked to indicate how pleasant the room they were currently in was, on a 7-point Likert scale from “Very unpleasant” to “Very pleasant” with a neutral point. They were then asked how familiar they were with the building they were currently in, on a 7-point Likert scale from “Not familiar at all” to “Very familiar” with a neutral point. They were further asked which entrance they used to enter the building (north, south, or side entrance) and, once in the building, whether they used the stairs or elevator to reach the study room. Participants then received credit towards their psychology courses.

Table 3. Information about each site used in the study.

Site Room dimensions (m) Site construction date Room features
Old 2.00 x 3.00 1984 Windowless
New 2.00 x 3.00 2013 Windowed

Fig 1. Images of the sites used in each condition.


(A) The site used in the Old condition; (B) the site used in the New condition.

Fig 2. Diagram of the trials in each phase of the task.


In the first phase (A), participants listen to and rate the valence of 20 sounds on a bipolar scale. In the second phase (B), participants listen to the same 20 sounds and describe in text how they feel about the sound.

Analyses

We conducted all analyses using R 4.4.2 “Pile of Leaves” [30] through RStudio [31].

Our primary predictor variable was site—that is, whether the participant completed the study in an older, windowless room in a darker, concrete building (Old) or a newer, windowed room in a brighter, more modern building (New). Our secondary predictors were room pleasantness, operationalized as the participant’s rating of the room’s pleasantness; site familiarity, operationalized as the participant’s familiarity with the building in which they were completing the study; entrance used, operationalized as the entrance participants used to arrive at the study; and path, operationalized as whether participants used the elevator or stairs to arrive at the study. Our primary dependent variables were 1) bipolar scale rating, operationalized as the participant’s rating of each sound’s valence on a bipolar scale; and 2) sentiment score, operationalized as the sentiment of each body of text that participants used to describe how they felt about each sound. We calculated sentiment scores by running the VADER sentiment analysis R package [27] on each body of text that participants wrote, and obtaining the overall sentiment of each sentence. VADER calculates sentiment scores by evaluating the valence of each word in each text response—accounting for the order of, and modifiers on, each word—to output a compound sentiment score for each participant’s response to each sound.
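As a rough sketch of this scoring step, the vader R package exposes get_vader() and vader_df() for computing compound sentiment scores; the data frame and column names below are assumptions for illustration, not the study's actual variable names.

library(vader)

# `responses` is a hypothetical data frame with one row per participant per sound
# and a `text` column holding the written description.
sentiments <- vader_df(responses$text)            # word-level scores plus a compound score
responses$sentiment_score <- sentiments$compound  # compound sentiment in [-1, 1]

# Scoring a single response
get_vader("This sound is calming, like rain on a window.")["compound"]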

We compared bipolar scale and sentiment analysis ratings by site using two mixed-effects linear models, fit with the lmerTest package in R [32], predicting bipolar ratings and sentiment scores, respectively, from site, with participant and sound as random effects. We followed these analyses with a multi-level model predicting bipolar ratings and sentiment scores from rating type (bipolar rating vs. sentiment score), site, room pleasantness, site familiarity, entrance used, and path as fixed effects and participant as a random effect (illustrated in Fig 3). We further evaluated the correlation between bipolar ratings and sentiment scores to explore whether both scales captured information about perceptions of sound valence. Additionally, to control for collinearity in bipolar scale and sentiment score results, we conducted a multivariate ANOVA using the manova function in R [33] with valence ratings and sentiment scores as distinct dependent variables.
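The model structure described above could be expressed roughly as follows with lmerTest and base R. This is a sketch under assumed data layouts: the data frames d and d_means and their column names are hypothetical, and the code is not the authors' analysis script.

library(lmerTest)

# `d` is assumed to be a long-format data frame with one row per participant per sound
# (columns: rating, sentiment, site, sound, participant); `d_means` is assumed to hold
# participant-level mean scores with one row per participant per rating type.
m_rating    <- lmer(rating    ~ site + (1 | participant) + (1 | sound), data = d)
m_sentiment <- lmer(sentiment ~ site + (1 | participant) + (1 | sound), data = d)
summary(m_rating)

# Multi-level model combining both rating types with the contextual predictors
m_full <- lmer(score ~ rating_type + site + room_pleasantness + familiarity +
                 entrance + path + (1 | participant), data = d_means)

# Correlation between the two measures, and a MANOVA treating them as joint
# dependent variables to account for their collinearity
cor.test(d$rating, d$sentiment)
summary(manova(cbind(rating, sentiment) ~ site + sound, data = d), test = "Pillai")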

Fig 3. Schematic of the factor structure of the statistical analyses used.


Results

We first asked whether participants rated sounds as being significantly more pleasant on the bipolar scale at the New site relative to the Old site, to see if built environment can impact how participants rate ambiguously valenced sounds on a conventional self-report scale. A linear mixed-effects model evaluating sound ratings by site, controlling for sound as a random effect, revealed that participants rated all sounds as being significantly more pleasant in the New site compared to the Old site (Table 4; Fig 4A).

Table 4. Linear mixed effects analysis results for sound ratings by sound and site.

AIC = Akaike information criterion, BIC = Bayesian information criterion, ICC = intraclass correlation, RMSE = root mean squared error.

Multi-level model: Rating
(Intercept) 0.14 (0.15)
Site -0.40** (0.13)
SD (Intercept, participant) 0.54
SD (Intercept, sound) 0.54
SD (Observations) 1.51
Num.Obs. 1822
R2 Marg. 0.014
R2 Cond. 0.217
AIC 6844.1
BIC 6871.6
ICC 0.2
RMSE 1.47

+ p < 0.1, * p < 0.05, ** p < 0.01, *** p < 0.001

Fig 4. Mean sound ratings from (A) bipolar scale and (B) sentiment analysis scores by site.


We next evaluated whether the sentiment of participants’ descriptions of how they rated sounds was significantly more positively valenced at the New site compared to the Old site, to determine whether built environment shifted the content of unprompted descriptions of sounds by participants. A linear mixed-effects model evaluating sound sentiment scores by site, controlling for sound as a random effect, revealed that the sentiment of sound descriptions did not significantly differ between sites (Table 5; Fig 4B).

Table 5. Linear mixed effects analysis results for sentiment scores by sound and site.

AIC = Akaike information criterion, BIC = Bayesian information criterion, ICC = intraclass correlation, RMSE = root mean squared error.

Multi-level model: Sentiment
(Intercept) 0.10* (0.04)
Site -0.04 (0.03)
SD (Intercept, participant) 0.11
SD (Intercept, sound) 0.16
SD (Observations) 0.50
Num.Obs. 1820
R2 Marg. 0.001
R2 Cond. 0.136
AIC 2739.4
BIC 2766.9
ICC 0.1
RMSE 0.49

+ p < 0.1, * p < 0.05, ** p < 0.01, *** p < 0.001

Participants’ experience of each site may have differed on a variety of factors, including their route to the study room and their familiarity with the site itself, which could shape their experience of the built environment in which the sounds were heard. We therefore conducted a multi-level model analysis to investigate whether experiences of the built environment directly explained sound ratings and sentiment scores. This analysis revealed that site, rating scale type, and whether the south vs. north entrance to the building was used significantly predicted sound ratings (Table 6; Fig 5), such that bipolar sound ratings, but not sentiment scores, were lower in the Old site compared to the New site, and participants who used the south entrance to both sites generally rated and described the sounds as being more pleasant. Room pleasantness, site familiarity, whether the side entrance to the building was used, and whether stairs or an elevator were used did not significantly predict sound ratings. Given that the site influenced ratings but not the valence of descriptions, we further examined the degree to which ratings and sentiments were correlated. A correlation analysis revealed that participant ratings on bipolar scales were significantly correlated with the sentiment of their descriptions of the sounds (t(92) = 8.31, p < 0.001, r = 0.65). Furthermore, a multivariate ANOVA predicting bipolar scale ratings and sentiment scores revealed that, taken together, both measures significantly differed by site and by sound (Site: F(2, 1785) = 16.51, p < 0.001, Pillai = 0.02; Sound: F(38, 3572) = 7.37, p < 0.001, Pillai = 0.15).

Table 6. Multi-level model analysis results for sound ratings.

AIC = Akaike information criterion, BIC = Bayesian information criterion, ICC = intraclass correlation, RMSE = root mean squared error.

Multi-level model: Rating
(Intercept) 0.09 (0.19)
Pleasantness of room -0.01 (0.03)
Familiarity with building -0.02 (0.03)
Side entrance used 0.11 (0.10)
South entrance used 0.22* (0.11)
Stairs vs. elevator 0.07 (0.10)
Rating type 0.13* (0.06)
Site -0.23* (0.10)
SD (Intercept, participant) 0.24
SD (Observations) 0.41
Num.Obs. 184
R2 Marg. 0.102
R2 Cond. 0.338
AIC 282.8
BIC 315.0
ICC 0.3
RMSE 0.36

+ p < 0.1, * p < 0.05, ** p < 0.01, *** p < 0.001

Fig 5. Linear regression of mean sound ratings from bipolar scale and sentiment analysis scores by site.


Discussion

We investigated whether the architectural environment in which an ambiguously valenced sound is heard influences how participants rate the valence of the sound and the sentiment of how they describe it. We found that, in an older building with a windowless, artificially lit room (Old), participants rated all sounds as being significantly less pleasant than did participants in a newer building with a naturally lit room (New). When participants described how they felt about the sounds, the valence of the sentiment of these descriptions only differed between sites in our multivariate ANOVA analysis—which controlled for collinearity—but not in our linear mixed model analyses. Participants thus used moderately more negatively valenced language to describe sounds at the older site compared to the newer site.

Our findings align with past work showing that qualities of the built environment—such as the features and design of a building—can impact how we approach stimuli and make decisions—including findings by Tajadura-Jiménez et al. [17] showing that sounds were rated as more negatively valenced in smaller compared to larger rooms. The sites for our study did not have explicit cues meant to nudge specific behaviours [14, 15]—there was no signage or interventions that promoted more positive or negative appraisals of sounds. Nevertheless, differences between sites still had a significant effect on how participants rated sounds as negative or positive. Each site had distinct architectural, acoustic, and affective features, which may have impacted the decisions that participants made about the stimuli they experienced at each site.

Participants may have offloaded some of the cognitive effort required to make a judgment about a sound to how they interpreted its context [34]. In the Old site, participants navigated a predominantly concrete building to reach a study room that was highly enclosed and had no natural lighting. In contrast, to reach the naturally lit study room in the New site, participants navigated a bright and airy space. The less pleasant Old site may thus have drawn attention to the more negative attributes or associations of an otherwise neutral, everyday sound, with the New site comparatively enhancing the positive aspects of sounds heard. This is consistent with enactive views in cognitive science [35], which highlight the notion that cognition is a process of valenced sensemaking, tracking events that are either good or bad for the organism’s well-being in continuous engagement with a complex environment.

Further to this point, participants rated and described sounds more positively when using the south compared to the north entrance of the building. The south entrance of each building was generally further away from the test site than the north entrance. Participants entering the building by this route may have experienced more of the built environment that would impact their ratings of the sounds. Information that is encoded about architectural features could differ depending on the entrance participants use to reach the study. Through the cognitive ethology approach [11] adopted in this study, we thus show the impacts of naturalistic experiences of the environment around us on interpretations of everyday stimuli.

As shown in the AMP [9], this information can prime attributions of the valence that participants assign to stimuli they experience in the space—even if participants are unaware of the effects of the space on their sound ratings and descriptions. Thus, participant ratings of sounds could be proxies for their experience of the architectural context in which the sounds were heard—enhanced by the amount of time they spent moving through the space. It is important to note that—contrary to earlier findings—recent evaluations of the AMP suggest that cross-modal priming is dependent on our awareness of being primed [36]. However, as participant perceptions of room pleasantness were not linked to sound ratings, it is reasonable to speculate that the effects observed in the present study were implicitly primed. Our findings highlight potential latent roles for the combined features of an architectural context in impacting not only behaviours but also experiences of stimuli.

Attributing differences in sound interpretation to the site in which the sounds were heard may be challenged by the absence of a site difference in the sentiment of participants’ descriptions in our linear mixed model analyses. In contrast, the bipolar rating scale invites interpretations of a sound on an axis of negative to positive valence. These scales have been shown to be dissociable from arousal [26], while participant descriptions of their experiences of sounds could range much more widely and be more nuanced. As the sounds used were not strongly positively or negatively valenced and had a wide range of ratings when originally normed [24] (Table 2), it is not surprising that this wide range of experiences would be reflected in how they were described. In spite of our finding that only bipolar sound ratings differed between sites in the linear mixed model analyses, sentiment scores were significantly correlated with bipolar ratings—a result supported by our multivariate ANOVA showing that site predicted both sound ratings and sentiment scores. The contrast between this MANOVA result and the non-significant linear mixed model result may reflect the fact that the linear mixed models do not account for collinearity between bipolar ratings and sentiment scores, weakening their power, relative to the MANOVA, to detect site differences in aspects of the auditory experience. The participant descriptions—from which sentiment scores were derived—captured a wider variety of experiences than valence information; however, the valence information that was present was congruent with that of bipolar ratings. This finding highlights the importance of using both quantitative and qualitative research methods in complementary ways—as each method can capture different aspects of how a built environment can impact perceptions of ambiguous sounds.

Further to this point, many participants drew on their memories of past experiences with the sounds, beyond merely describing how they experienced the sound in their present context. For example, one of the sounds was of a dog barking. While some participants described the sound outside of any temporal context (e.g., “This sound is not that pleasant because it feels like the dog is sensing danger”), others drew on past experiences with dogs to inform their present appraisal of the experience (e.g. “[I] quite like this one. it sounds like what i would hear from my childhood bedroom.”). Other sounds had ambiguous sources and meanings to different participants. For example, one sound evoked nighttime in a rainforest for some participants (“This sound feels like home, during nighttime at a tropical country, it could be soothing to me.”); to others, it sounded like an unpleasant dental tool (“[This] sound makes me uncomfortable. it reminds me of the drying device used at the dentist. [The] sound makes my mouth and skin feel uncomfortably dry listening to it.”). Participants did not reference the architectural context in which they heard the sounds at any point. However, this diverse array of participant responses reflects the variety of reasons that participants may have had for rating sounds as more or less pleasant—ratings that did interact with the architectural context in which they were presented. Queries about the qualitative experiences of stimuli in our environment augment information from self-report scales and offer rich phenomenological information [21] about whether participants situate these stimuli in their present context or, instead, draw on past contexts and experiences, when interpreting how they feel about these stimuli.

We must consider a number of qualifications when interpreting these results. First, because of the nature of the sites used, their room configuration could not be kept perfectly consistent. In the Old site, although the participant and speaker positions were held constant across all participants, the arrangement of other furniture and objects in the room underwent minor changes over the course of the study (e.g., a chair was pulled out rather than tucked under a desk). However, these changes did not affect the lighting, space available, or layout in the room, and the qualitative impression of the space was maintained throughout the study. Similarly, in the New site, the furniture stayed relatively constant; however, changes in natural lighting throughout the day during different runs of the study may have impacted the room’s character. Furthermore, for n = 24 participants, the room partition on one wall in the New site was locked in an open position, causing other parts of the room to be visible. However, the visible parts of the space matched the study room in appearance. Second, in both sites, loud conversations and sounds outside the room were reported by n = 5 participants. However, such experiences were typical of these sites and did not interrupt the study. Third, the sounds used were highly open to interpretation in terms of valence. Although this was by design, it may have weakened potential effects of site compared to using sounds that had an established positive or negative valence—especially as previous studies have investigated reductions in the aversiveness of negatively valenced sounds depending on properties of the built environment [2].

Last, the order of the sound ratings and descriptions was not counterbalanced across participants. Participant descriptions of sounds may therefore have been informed by their memory of how they rated the sounds. However, the lack of a significant difference in the sentiment of sound descriptions between sites—even when sound ratings on the bipolar scale significantly differed—suggests that the qualitative component of the study still captured distinct information about participants’ sound experience beyond merely recapitulating the sound ratings. On the other hand, the robustness of the site difference in sound ratings in spite of these environmental differences speaks to how the overall feel of an architectural space can strongly drive perceptions of stimuli experienced within that space, and highlights the insights that can be derived from evaluating how stimuli are interpreted in real-world, uncontrolled environments.

Future studies could investigate whether and how individual attributes of a built environment drive quantitative and qualitative evaluations of participant experiences of sounds. For example, increasing the level of illuminance in a room or on a computer screen could increase the perceived positivity of a sound’s valence [37], while reducing one’s ability to attribute a sound to a source could increase the negativity of its valence [2, 38]. Furthermore, given the increasing relevance of virtual and augmented reality in our everyday lives, future work should also explore similarities and differences in how sounds are rated and described in virtual vs. real-life spaces [39]. These projects would help us better understand whether and how we interpret everyday experiences beyond the computer screen and beyond the lab, in more naturalistic settings. These results could also have implications for a wide variety of fields, including designing theatre sound effects and music to elicit specific emotional responses in audiences and maximize the impact of a performance [40, 41].

Our study’s findings emphasize the importance of considering the role of the architectural environment when we judge the valence and meaning of everyday sounds. In two buildings that frequently serve as experiment testing spaces, we show significant differences in how sounds are rated while controlling for sound identity as well as familiarity and perceived pleasantness of the built environment. Furthermore, our findings suggest a potential role for past experiences interacting with present context when participants give rich descriptions of their experiences of everyday sounds. Experiences of stimuli within architectural contexts could tell us about how the contexts themselves are perceived, shedding light on the role of features like lighting and acoustic qualities in determining how comfortable we feel in real-world spaces—as well as in how we can create artistic experiences that capture our imagination and inspire us. These results could inform future conversations on how to create built environments that are amenable to pleasant experiences, and emphasize the importance of context when interpreting stimuli both inside and outside the lab.

Data Availability

The data is available on OSF at https://osf.io/wysdb/, DOI: 10.17605/OSF.IO/WYSDB.

Funding Statement

Natural Sciences and Engineering Research Council of Canada (NSERC), 170077-2011, to Alan Kingstone; NSERC, F19-05182, to Rebecca M. Todd; NSERC, 569604, to Brandon J. Forys.

References

  • 1. Cavanna AE, Seri S. Misophonia: Current perspectives. Neuropsychiatric Disease and Treatment. 2015;11: 2117–2123. doi: 10.2147/NDT.S81438 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Samermit P, Saal J, Davidenko N. Cross-sensory stimuli modulate reactions to aversive sounds. Multisensory Research. 2019;32: 197–213. doi: 10.1163/22134808-20191344 [DOI] [PubMed] [Google Scholar]
  • 3. Wang KS, Delgado MR. The Protective Effects of Perceived Control During Repeated Exposure to Aversive Stimuli. Frontiers in Neuroscience. 2021;15: 625816. doi: 10.3389/fnins.2021.625816 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Goerlich KS, Witteman J, Schiller NO, Van Heuven VJ, Aleman A, Martens S. The Nature of Affective Priming in Music and Speech. Journal of Cognitive Neuroscience. 2012;24: 1725–1741. doi: 10.1162/jocn_a_00213 [DOI] [PubMed] [Google Scholar]
  • 5. Greene AJ, Easton RD, LaShell LSR. Visual-Auditory Events: Cross-Modal Perceptual Priming and Recognition Memory. Consciousness and Cognition. 2001;10: 425–435. doi: 10.1006/ccog.2001.0502 [DOI] [PubMed] [Google Scholar]
  • 6. Evans KK. The Role of Selective Attention in Cross-modal Interactions between Auditory and Visual Features. Cognition. 2020;196: 104119. doi: 10.1016/j.cognition.2019.104119 [DOI] [PubMed] [Google Scholar]
  • 7. Payne K, Lundberg K. The Affect Misattribution Procedure: Ten Years of Evidence on Reliability, Validity, and Mechanisms. Social and Personality Psychology Compass. 2014;8: 672–686. doi: 10.1111/spc3.12148 [DOI] [Google Scholar]
  • 8. Kurdi B, Melnikoff DE, Hannay JW, Korkmaz A, Lee KM, Ritchie E, et al. Testing the automaticity features of the affect misattribution procedure: The roles of awareness and intentionality. Behavior Research Methods. 2024;56: 3161–3194. doi: 10.3758/s13428-023-02291-2 [DOI] [PubMed] [Google Scholar]
  • 9. Gawronski B, Ye Y. What Drives Priming Effects in the Affect Misattribution Procedure? Personality and Social Psychology Bulletin. 2014;40: 3–15. doi: 10.1177/0146167213502548 [DOI] [PubMed] [Google Scholar]
  • 10. Markowitz S, Fanselow M. Exposure Therapy for Post-Traumatic Stress Disorder: Factors of Limited Success and Possible Alternative Treatment. Brain Sciences. 2020;10: 167. doi: 10.3390/brainsci10030167 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Kingstone A, Smilek D, Eastwood JD. Cognitive Ethology: A new approach for studying human cognition. British Journal of Psychology. 2008;99: 317–340. doi: 10.1348/000712607X251243 [DOI] [PubMed] [Google Scholar]
  • 12. Kingstone A. Everyday human cognition and behaviour. Canadian Journal of Experimental Psychology / Revue canadienne de psychologie experimentale. 2020;74: 267–274. doi: 10.1037/cep0000244 [DOI] [PubMed] [Google Scholar]
  • 13. Ma HL, Dawson MRW, Prinsen RS, Hayward DA. Embodying cognitive ethology. Theory & Psychology. 2023;33: 42–58. doi: 10.1177/09593543221126165 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Egner T. Principles of cognitive control over task focus and task switching. Nature Reviews Psychology. 2023;2: 702–714. doi: 10.1038/s44159-023-00234-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15. Yin Z, Zhao M, Wang Y, Yang J, Zhang J. Recognition of emotions using multimodal physiological signals and an ensemble deep learning model. Computer Methods and Programs in Biomedicine. 2017;140: 93–110. doi: 10.1016/j.cmpb.2016.12.005 [DOI] [PubMed] [Google Scholar]
  • 16. Wu DWL, DiGiacomo A, Kingstone A. A sustainable building promotes pro-environmental behavior: An observational study on food disposal. PLoS ONE. 2013;8: 1–4. doi: 10.1371/journal.pone.0053856 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Tajadura-Jiménez A, Larsson P, Väljamäe A, Västfjäll D, Kleiner M. When room size matters: Acoustic influences on emotional responses to sounds. Emotion. 2010;10: 416–422. doi: 10.1037/a0018423 [DOI] [PubMed] [Google Scholar]
  • 18. D’Alessandro F, Evangelisti L, Guattari C, Grazieschi G, Orsini F. Influence of visual aspects and other features on the soundscape assessment of a university external area. Building Acoustics. 2018;25: 199–217. doi: 10.1177/1351010X18778759 [DOI] [Google Scholar]
  • 19. Yang W, Moon HJ. Combined effects of sound and illuminance on indoor environmental perception. Applied Acoustics. 2018;141: 136–143. doi: 10.1016/j.apacoust.2018.07.008 [DOI] [Google Scholar]
  • 20. Schreiber CA, Kahneman D. Determinants of the remembered utility of aversive sounds. Journal of Experimental Psychology: General. 2000;129: 27–42. doi: 10.1037/0096-3445.129.1.27 [DOI] [PubMed] [Google Scholar]
  • 21. Irarrazaval L. A Phenomenological Paradigm for Empirical Research in Psychiatry and Psychology: Open Questions. Frontiers in Psychology. 2020;11: 1399. doi: 10.3389/fpsyg.2020.01399 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Schmidt K, Patnaik P, Kensinger EA. Emotion’s influence on memory for spatial and temporal context. Cognition & Emotion. 2011;25: 229–243. doi: 10.1080/02699931.2010.483123 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Fowler M. Sounds in space or space in sounds? Architecture as an auditory construct. Architectural Research Quarterly. 2015;19: 61–72. doi: 10.1017/S1359135515000226 [DOI] [Google Scholar]
  • 24. Bradley MM, Lang PJ. International affective digitized sounds (2nd edition; IADS-2): Affective ratings of sounds and instruction manual. 2007.
  • 25. Kumar S, Forster HM, Bailey P, Griffiths TD. Mapping unpleasantness of sounds to their auditory representation. The Journal of the Acoustical Society of America. 2008;124: 3810–3817. doi: 10.1121/1.3006380 [DOI] [PubMed] [Google Scholar]
  • 26. Kron A, Goldstein A, Lee DHJ, Gardhouse K, Anderson AK. How are you feeling? Revisiting the quantification of emotional qualia. Psychological Science. 2013;24: 1503–1511. doi: 10.1177/0956797613475456 [DOI] [PubMed] [Google Scholar]
  • 27. Hutto CJ, Gilbert E. VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media. 2014; 10. [Google Scholar]
  • 28. Lakens D, Caldwell AR. Simulation-based power-analysis for factorial ANOVA designs. Pre-Print. 2019; 1–8. [Google Scholar]
  • 29. Peirce J, Gray JR, Simpson S, MacAskill M, Hochenberger R, Sogo H, et al. PsychoPy2: Experiments in behavior made easy. Behavior Research Methods. 2019;51: 195–203. doi: 10.3758/s13428-018-01193-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. R Development Core Team. R: A language and environment for statistical computing. 2011. [Google Scholar]
  • 31. RStudio Team. RStudio: Integrated development environment for R. 2018. [Google Scholar]
  • 32. Kuznetsova A, Brockhoff PB, Christensen RHB. {lmerTest} package: Tests in linear mixed effects models. 2017;82. 10.18637/jss.v082.i13 [DOI] [Google Scholar]
  • 33. Venables WN, Ripley BD. Modern applied statistics with S. 2002. Available: https://www.stats.ox.ac.uk/pub/MASS4/.
  • 34. Wilson M. Six views of embodied cognition. Psychonomic Bulletin & Review. 2002;9: 625–636. doi: 10.3758/BF03196322 [DOI] [PubMed] [Google Scholar]
  • 35. Thompson E. Sensorimotor subjectivity and the enactive approach to experience. Phenomenology and the Cognitive Sciences. 2005;4: 407–427. doi: 10.1007/s11097-005-9003-x [DOI] [Google Scholar]
  • 36. Hughes S, Cummins J, Hussey I. Effects on the Affect Misattribution Procedure are strongly moderated by influence awareness. Behavior Research Methods. 2023;55: 1558–1586. doi: 10.3758/s13428-022-01879-4 [DOI] [PubMed] [Google Scholar]
  • 37. Wardhani IK, Janssen BH, Boehler CN. Investigating the relationship between background luminance and self-reported valence of auditory stimuli. Acta Psychologica. 2022;224: 103532. doi: 10.1016/j.actpsy.2022.103532 [DOI] [PubMed] [Google Scholar]
  • 38. Niessen ME, Maanen L van, Andringa TC. 2008 Second IEEE International Conference on Semantic Computing (ICSC). Santa Monica, CA, USA: IEEE; 2008. pp. 88–95.
  • 39. Gallup AC, Vasilyev D, Anderson N, Kingstone A. Contagious yawning in virtual reality is affected by actual, but not simulated, social presence. Scientific Reports. 2019;9: 1–10. doi: 10.1038/s41598-018-36570-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40. Hillman N, Pauletto S. The Craftsman: The use of sound design to elicit emotions. The Soundtrack. 2014;7: 5–23. doi: 10.1386/st.7.1.5_1 [DOI] [Google Scholar]
  • 41. Aly L, Bota P, Godinho L, Bernardes G, Silva H. Acting emotions: Physiological correlates of emotional valence and arousal dynamics in theatre. New York, NY, USA: Association for Computing Machinery; 2022. pp. 381–386. [Google Scholar]

Decision Letter 0

Bruno Alejandro Mesz

28 Aug 2024

PONE-D-24-27494: Hear it here: Built environments predict ratings and descriptions of ambiguous sounds (PLOS ONE)

Dear Dr. Forys,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Two experts in the field have carefully reviewed the interesting manuscript entitled “Hear it here: Built environments predict ratings and descriptions of ambiguous sounds“. You can find their comments below.

On my side, I would like also to have means and std of the ratings to see if the stimuli have remained neutral with wide dispersion or not.

In light of these reviews and my own reading, I am requesting a major revision and resubmission, in which you will need to respond to each point in each review. 

Please submit your revised manuscript by Oct 12 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Bruno Alejandro Mesz, Ph.D.

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following financial disclosure:

“BJF: NSERC Canada Graduate Scholarship - Doctoral (CGS D) (569604)

AK: NSERC (170077-2011)

RMT: NSERC (F19-05182)”

Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed.

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: It would be interesting, perhaps for a future publication, to have a more detailed description of the audio signals used as stimuli for the study.

Furthermore, some parts of the article repeat concepts already developed.

Reviewer #2: The submitted manuscript addresses an interesting question: does the immediate (built) environment (i.e., a modern vs an old room) influence valence judgements about neutral auditory stimuli (drawn from the IADS stimulus set). The authors consider additionally some qualitative data which is subject to sentiment analysis, and nested independent variables such as route taken to the room. Although, in a broad sense, the study is adequately put together and reaches a logical conclusion, there are a number of reasons why the study is not publishable at present. Below I present some quite detailed comments - I hope that they are helpful to the authors in formulating future iterations of the manuscript for resubmission to PLOS or elsewhere.

Keywords: embodied cognition is mentioned only once in the manuscript and isn't really used as the conceptual framework here. Valence scales and stimulus ratings aren't particularly informative. Also, this paper does not particularly take an ethological approach.

Introduction:

The purpose of the article is clear. However, the literature review draws on all sorts of sources, and I think it could be more focused on environmental influences on auditory experience. There are some priming studies that consider cross-modal relationships between visual stimuli and sounds, e.g. Goerlich et al. (2012) (https://pubmed.ncbi.nlm.nih.gov/22360592/), and there are plenty of others. Tajadura-Jiménez et al. (2010) (https://psycnet.apa.org/buy/2010-09991-010) and D'Alessandro et al. (2018) (https://journals.sagepub.com/doi/abs/10.1177/1351010X18778759) might also be useful in the literature review. It will be really important to build a rationale for the study here - there is useful material in the auditory perception, soundscapes and architecture literature - as it is, some of the points seem a bit tangential.

If this phenomenon is conceptualised as transfer of affect from a valenced prime to a neutral target, the Affect Misattribution Procedure is probably the place to look for the cognitive underpinning of the effect - I was surprised not to see this mentioned, especially as priming is mentioned, albeit briefly, in the discussion.

Also, there are some instances where the authors cite something but don't capture accurately what the cited article is saying, e.g., Son et al. (2015) isn't an auditory study and doesn't directly bear on the present manuscript.

As with the keywords, the text leans on embodied cognition in the first paragraph, and claims to take an ethological approach later on - neither of these are really reflective of the manuscript as a whole.

There are some phrases that seem a little odd to me in academic writing, for instance "as such" is used four times. Additionally, there seems to be a mixture of registers in the writing. Most readers are happy with 'passive' scientific writing or with first person writing, but it would be better to stick with one or the other rather than mix.

There is a lot of explanation of the benefits of rating scales, and some variation of the phrase 'bipolar rating scales' is used several times. This reads more like a student trying to convince a marker that they know what they are doing, rather than something written for an interested and scientifically literate audience.

Methods:

Line 85 - there is more to sounds than frequencies. It really is the semantic-affective content of the IADS sounds that is important. Psychoacoustics (and frequencies are not the only part of this) is important as a determinant of affect, but not the only thing.

More details of the stimuli would be good - which IADS sounds did you use? What were the valence/arousal ratings in the original Bradley et al. study? Were they uniformly 6000 ms, or did you allow variation for ecological validity of the sounds? What about volume? Were the onsets/offsets ramped/faded, or did the sounds just start/stop?

The main manipulation is the sites. Perhaps it would be worth including details of the sites in the method somewhere, e.g. a table comparing the two sites. Optionally, you could perhaps include photographs of the two sites?

Line 104 - 0.4 seems like a sensible effect size to base power analysis on, but how did you arrive at it? Is it based on previous research?

Overall, the procedure seems sound. Perhaps a schematic of the route*site*elevator design would be easier for some readers to digest than the text, helping them understand the design quickly.

Results:

I have some concerns about parts of the analysis. A simple t-test between old and new sites is not quite the right thing here. There needs to be some control for how participants are using the scale, and I can't see how the t-test can handle the different IADS sounds. One option is to normalise the ratings. An even better option would be something like a linear mixed effects model with rating~site+(1|Participant)+(1|Sound), which would be easy enough to implement in R (e.g. using lmer).

Line 175 - a simple outline of how VADER calculates sentiment would be helpful to some readers, as there are a few different ways to calculate sentiment. From memory, the VADER GitHub has a neat description.
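As background for this comment: VADER scores a piece of text by summing the valence values of lexicon words (with heuristics for negation, intensifiers, punctuation, and capitalization) and then normalizing that sum to a compound score bounded between -1 and 1. Below is a minimal R sketch of that normalization step, assuming the default alpha of 15 from the reference implementation; it is an illustration only, not the analysis code used in the manuscript.

    # Illustration of VADER's compound-score normalization (assumed default alpha = 15).
    # 'valence_sum' stands in for the summed, heuristic-adjusted lexicon valences of one description.
    normalize_compound <- function(valence_sum, alpha = 15) {
      valence_sum / sqrt(valence_sum^2 + alpha)  # bounded in (-1, 1)
    }
    normalize_compound(2.1)   # a mildly positive description
    normalize_compound(-3.4)  # a clearly negative description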

Line 184 - Bonferroni corrected for multiple comparisons - I can't really see where the multiple comparisons are. There are basically two separate t-tests.
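For reference, a Bonferroni correction simply multiplies each p-value by the number of tests performed (capped at 1), so with only two t-tests the adjustment is a factor of two. A minimal R illustration with hypothetical p-values, not values taken from the manuscript:

    p_raw <- c(0.012, 0.180)                # hypothetical p-values for the two t-tests
    p.adjust(p_raw, method = "bonferroni")  # returns 0.024 and 0.360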

Also, given that there are two correlated dependent variables, might MANOVA remove some of the confusion?

Incidentally, the headings of the subsections refer to ratings but include analysis of the sentiments as well. I think the results section is short enough to be delineated by paragraphs rather than subsections.

Discussion:

'Nudge' is quite a specific concept in behavioural psychology. This should be introduced earlier or not at all, and discussed more fully if you use it. Same goes for 'priming' (line 252).

Line 233/236 - the switch to 'aversive' from negative seems like you're talking about something slightly different from before. It would be easier for the reader if you were consistent in terminology.

As with the Introduction, the Discussion hops around a lot of areas. I'd suggest keeping it really focused on environmental/architectural influences on affective ratings of auditory stimuli. This has implications for instance in theatre design, and is interesting as a question in basic science.

Finally, I hope the authors are not disheartened by the review - this is an interesting manuscript, and has the potential to contribute to architectural/auditory psychology.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Jan 15;20(1):e0316187. doi: 10.1371/journal.pone.0316187.r002

Author response to Decision Letter 0


9 Oct 2024

We would like to thank the editor and reviewers for their thoughtful and constructive comments and their positive assessment of our manuscript. We have responded to all the suggestions, which we itemize below.

As a brief overview, our responses to the reviewers' comments involve several important additions. We strengthened the theoretical grounding of the Introduction, especially with regards to cognitive ethology and the priming literature. In the Methods section, we provided more details on the previous and current ratings of the auditory stimuli used. In the Results section, we conducted two mixed-effects models to control for the random effects of individual sounds, as well as a multivariate ANOVA to simultaneously predict bipolar ratings and sentiment scores.

Our responses to Editor and reviewer comments are marked as “Response:”; changes in the paper are in bold face.

Editor:

Two experts in the field have carefully reviewed the interesting manuscript entitled “Hear it here: Built environments predict ratings and descriptions of ambiguous sounds”. You can find their comments below.

On my side, I would also like to have the means and SDs of the ratings, to see whether the stimuli have remained neutral with wide dispersion or not.

Response: Thank you. We have now added a table (please see Table 2) with the means and SDs of our study’s bipolar scale ratings, alongside the original IADS valence and arousal ratings. These results show that, in our study, sounds were rated similarly close to the neutral point of the scale, and had a slightly smaller range of dispersion, compared to how they were rated in the original IADS norming of these sounds.

In light of these reviews and my own reading, I am requesting a major revision and resubmission, in which you will need to respond to each point in each review.

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Response: We have now revised the manuscript’s formatting and naming scheme to match these style requirements.

2. Thank you for stating the following financial disclosure:

“BJF: NSERC Canada Graduate Scholarship - Doctoral (CGS D) (569604)

AK: NSERC (170077-2011)

RMT: NSERC (F19-05182)”

Please state what role the funders took in the study. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed.

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

Response: The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We now state this in the revised cover letter along with a correction to AK's grant number: NSERC (2022-03079).

Reviewers' comments:

Reviewer #1: It would be interesting, perhaps for a future publication, to have a more detailed description of the audio signals used as stimuli for the study.

Furthermore, some parts of the article repeat concepts already developed.

Response: We appreciate this comment. We have now more thoroughly discussed the features of the auditory stimuli used for the study on page 6:

“The stimuli in our study were a series of 20 audio recordings of everyday sounds, each with a uniform original duration of 6000 ms (Table 2). We selected these auditory stimuli from the International Affective Digitized Sounds system (IADS-2; [24]). These sounds were presented at the same volume with no fading in or out. All sounds selected had a mean rating between 5 and 5.9 (neutral point) on a 9-point Self-Assessment Manikin scale rating from completely unhappy to completely happy [24], and all but one sound ("Sink") had a standard deviation (SD) in their pleasure rating of at least 1.5.”

We have also indicated the specific sounds used from the IADS stimulus set, as well as the valence and arousal ratings provided in the original IADS study, in Table 2. In general, we have revised the manuscript to reduce repetition.

Reviewer #2: The submitted manuscript addresses an interesting question: does the immediate (built) environment (i.e., a modern vs an old room) influence valence judgements about neutral auditory stimuli (drawn from the IADS stimulus set). The authors consider additionally some qualitative data which is subject to sentiment analysis, and nested independent variables such as route taken to the room. Although, in a broad sense, the study is adequately put together and reaches a logical conclusion, there are a number of reasons why the study is not publishable at present. Below I present some quite detailed comments - I hope that they are helpful to the authors in formulating future iterations of the manuscript for resubmission to PLOS or elsewhere.

Response: We thank the reviewer for their thoughtful comments and feedback, which we have sought to address below.

Keywords: embodied cognition is mentioned only once in the manuscript and isn't really used as the conceptual framework here. Valence scales and stimulus ratings aren't particularly informative. Also, this paper does not particularly take an ethological approach.

Response: We have now revised our choice of keywords to better reflect the themes discussed in the paper, as follows: “Auditory psychology”, “Architectural psychology”, “Auditory perception”, and “Priming”.

Introduction:

The purpose of the article is clear. However, the literature review draws on all sorts of sources, and I think it could be more focused on environmental influences on auditory experience. There are some priming studies that consider cross-modal relationships between visual stimuli and sounds, e.g. Goerlich et al. (2012), and there are plenty of others. Tajadura-Jiménez (2010) and D'Alessandro et al. (2018) might also be useful in the literature review. It will be really important to build a rationale for the study here - there is useful material in the auditory perception, soundscapes and architecture literature - as it is, some of the points seem a bit tangential.

Response: We have now revised the Introduction section (pages 2-5) to build a stronger rationale for environmental influences on auditory experience; cited and discussed the recommended priming studies as well as additional studies to support this rationale; and restructured the literature review such that it is more focused and cohesive.

If this phenomenon is conceptualised as transfer of affect from a valenced prime to a neutral target, the Affect Misattribution Procedure is probably the place to look for the cognitive underpinning of the effect - I was surprised not to see this mentioned, especially as priming is mentioned, albeit briefly, in the discussion.

Response: We now discuss the Affect Misattribution Procedure, together with a more in-depth discussion of behavioural priming and transfer of valence from the environment to the auditory stimuli, in the Introduction on pages 2-3:

“This transfer of valence between stimuli has previously been characterized through the Affect Misattribution Procedure (AMP) [7], an experimental paradigm capturing the misattribution of affective information to stimuli based on contextual information [8]. Thus, in a space with unwelcoming features, we may perceive a neutral or even pleasant sound as unpleasant [9] compared to if we heard that sound in a space with pleasant features. While past research has established cross-modal priming given the co-presentation of two stimuli with differing valences, we do not yet know whether we would see priming from more global environmental features to a specific stimulus.”

We also discuss the AMP in the Discussion on pages 13-14:

“As shown in the AMP [9], this information can prime attributions of the valence that participants assign to stimuli they experience in the space - even if participants are unaware of the effects of the space on their sound ratings and descriptions. Thus, participant ratings of sounds could be proxies for their experience of the architectural context in which the sounds were heard - enhanced by the amount of time they spent moving through the space. It is important to note that - contrary to earlier findings - recent evaluations of the AMP suggest that cross-modal priming is dependent on our awareness of being primed [36]. However, as participant perceptions of room pleasantness were not linked to sound ratings, it is reasonable to speculate that the effects observed in the present study were implicitly primed.”

Also, there are some instances where the authors cite something but don't capture accurately what the cited article is saying, e.g., Son et al. (2015) isn't an auditory study and doesn't directly bear on the present manuscript.

Response: If the reviewer is referring to our citation of Song et al. (2019), we have replaced references to this paper with more appropriate, auditory-focused references to Tajadura-Jiménez et al. (2010) and Wardhani et al. (2022) in the Introduction on page 3 and in the Discussion on page 15, respectively.

As with the keywords, the text leans on embodied cognition in the first paragraph, and claims to take an ethological approach later on - neither of these are really reflective of the manuscript as a whole.

Response: We have revised the Introduction to remove the focus on embodied cognition in the first paragraph on page 2.

We have also introduced the ethological literature earlier and more thoroughly, on page 3:

“The cognitive ethology approach [11–13] evaluates how our ability to complete various tasks is impacted by these tasks’ context. By focusing on how stimuli are situated and interpreted in their specific environments, we can better understand how the environment influences our perception of these stimuli.”

There are some phrases that seem a little odd to me in academic writing, for instance "as such" is used four times. Additionally, there seems to be a mixture of registers in the writing. Most readers are happy with 'passive' scientific writing or with first person writing, but it would be better to stick with one or the other rather than mix.

Response: We thank the reviewer for their feedback; we have revised the Introduction section to increase the variety of phrases used and revised the overall paper to use a consistent first-person register.

There is a lot of explanation of the benefits of rating scales, and some variation of the phrase 'bipolar rating scales' is used several times. This reads more like a student trying to convince a marker that they know what they are doing, rather than something written for an interested and scientifically literate audience.

Response: We have now revised the discussion of rating scales on pages 3-4 to be more concise and less repetitive to improve readability.

Methods:

Line 85 - there is more to sounds than frequencies. It really is the semantic-affective content of the IADS sounds that is important. Psychoacoustics (and frequencies are not the only part of this) is important as a determinant of affect, but not the only thing.

More details of the stimuli would be good - which IADS sounds did you use? What were the valence/arousal ratings in the original Bradley et al. study? Were they uniformly 6000 ms, or did you allow variation for ecological validity of the sounds? What about volume? Were the onsets/offsets ramped/faded, or did the sounds just start/stop?

Response: We have now added additional information about the auditory stimuli used in the study. In Table 2, we list all sounds used and the original valence and arousal ratings reported for each sound from the IADS study. Furthermore, on page 6, we have now revised our description of the sounds to be more precise:

“The stimuli in our study were a series of 20 audio recordings of everyday sounds, each with a uniform original duration of 6000 ms (Table 2). We selected these auditory stimuli from the International Affective Digitized Sounds system (IADS-2; [24]). These sounds were presented at the same volume with no fading in or out. All sounds selected had a mean rating between 5 and 5.9 (neutral point) on a 9-point Self-Assessment Manikin scale rating from completely unhappy to completely happy [24], and all but one sound ("Sink") had a standard deviation (SD) in their pleasure rating of at least 1.5.”

Additionally, we now mention in the Introduction on page 5 that the sounds did not have strong initial semantic-affective associations.

The main manipulation is the sites. Perhaps it would be worth including details of the sites in the method somewhere, e.g. a table comparing the two sites. Optionally, you could perhaps include photographs of the two sites?

Response: In Table 3, we have now included details on the spatial configuration and characteristics of each site. In Figure 1, we have now included pictures of the study room at each site.

Line 104 - 0.4 seems like a sensible effect size to base power analysis on, but how did you arrive at it? Is it based on previous research?

Response: We thank the reviewer for their feedback. On page 5, we have now clarified that this effect size was selected based on results from an online pilot study focusing on sound ratings for unipolar vs. bipolar scales, and is congruent with the effect size observed in an existing study comparing sound ratings:

“We powered our study for a medium effect of 0.4 for a difference in bipolar sound ratings between sites, based on the effect sizes observed in an online pilot study comparing ratings on unipolar vs. bipolar scales with the same stimulus set used in the present study. This effect size is also congruent with that found by Tajadura-Jiménez et al. [17] when looking at the effect of architectural features and sound position on perceived sound valence.”

Overall, the procedure seems sound. Perhaps a schematic of the route*site*elevator design would be easier for some readers to digest than the text, helping them understand the design quickly.

Response: We thank the reviewer for their comments and have now included a schematic of the design in Figure 3.

Results:

I have some concerns about parts of the analysis. A simple t-test between old and new sites is not quite the right thing here. There needs to be some control for how participants are using the scale, and I can't see how the t-test can handle the different IADS sounds. One option is to normalise the ratings. An even better option would be something like a linear mixed effects model with rating~site+(1|Participant)+(1|Sound), which would be easy enough to implement in R (e.g. using lmer).

Response: We have revised the Results section to replace the t-tests with the recommended linear mixed effects models (separated for predictors of sound ratings and sentiment scores), the results for which we now report in Tables 4 and 5, and described as follows on page 9:

“A linear mixed-effects model evaluating sound ratings by site, controlling for sound as a random effect, revealed that participants rated all sounds as being significantly more pleasant in the New site compared to the Old site (Table 4, Figure 4A).”

“A linear mixed-effects model evaluating sound sentiment scores by site, controlling for sound as a random effect, revealed that the sentiment of sound descriptions did not significantly differ between sites (Table 5, Figure 4B).”

We have also updated the Methods section on page 8 to introduce this method:

“We compared bipolar scale and sentiment analysis ratings by site using two mixed-effects linear models using the lmerTest package in R [32] predicting bipolar ratings and sentiment scores, respectively, from each site with participant and sound used as random effects.”
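The model structure described in this quoted Methods text can be sketched in R as follows. This is a minimal illustration using simulated data and hypothetical column names (rating, sentiment, site, participant, sound); the study data are available on OSF (https://osf.io/wysdb/), and the authors' actual analysis code may differ in detail.

    library(lmerTest)  # loads lme4 and adds p-values for fixed effects

    # Simulated stand-in for the study data: one row per participant x sound.
    # Column names and values are hypothetical, not taken from the authors' files.
    set.seed(1)
    dat <- expand.grid(participant = factor(1:92), sound = factor(1:10))
    dat$site      <- ifelse(as.integer(dat$participant) <= 46, "Old", "New")
    dat$rating    <- rnorm(nrow(dat), mean = ifelse(dat$site == "New", 0.2, 0))
    dat$sentiment <- rnorm(nrow(dat), mean = ifelse(dat$site == "New", 0.1, 0))

    # Bipolar ratings: fixed effect of site, random intercepts for participant and sound
    m_rating    <- lmer(rating ~ site + (1 | participant) + (1 | sound), data = dat)
    # Sentiment scores: same random-effects structure, different outcome
    m_sentiment <- lmer(sentiment ~ site + (1 | participant) + (1 | sound), data = dat)

    summary(m_rating)     # fixed effect of site on bipolar ratings
    summary(m_sentiment)  # fixed effect of site on sentiment scores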

Line 175 - a simple outline of how VADER calculates sentiment would be helpful to some readers, as there are a few different ways to calculate sentiment. From memory, the VADER GitHub has a neat description.

Attachment

Submitted filename: author_rebuttal_rev1_final.docx

pone.0316187.s001.docx (33.6KB, docx)

Decision Letter 1

Bruno Alejandro Mesz

3 Dec 2024

PONE-D-24-27494R1

Hear it here: Built environments predict ratings and descriptions of ambiguous sounds

PLOS ONE

Dear Dr. Forys,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Dear authors, thanks for your careful attention to reviewers' comments; I find the manuscript has greatly improved.  There is still one point that is confusing for me: linear mixed models and MANOVA give different results with respect to the significance of sentiment description differences between both sites. However, in the abstract and discussion you state that there were no differences. Please clarify this choice.

Please submit your revised manuscript by Jan 17 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Bruno Alejandro Mesz, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Jan 15;20(1):e0316187. doi: 10.1371/journal.pone.0316187.r004

Author response to Decision Letter 1


4 Dec 2024

We would like to once again thank the editor and reviewers for their thoughtful and constructive comments and recommendations to our manuscript. We have addressed this additional comment below and feel that the manuscript is stronger as a result of their feedback.

In our response to the editor’s comments, we now discuss the differences in the results given by the linear mixed models and the MANOVA, with regards to potential differences in how well each model accounts for collinearity. Changes in the paper are in bold face.

We address the editor’s comment below.

Dear authors, thanks for your careful attention to reviewers' comments; I find the manuscript has greatly improved.

There is still one point that is confusing for me: linear mixed models and MANOVA give different results with respect to the significance of sentiment description differences between both sites. However, in the abstract and discussion you state that there were no differences. Please clarify this choice.

Response:

We thank the editor for their suggestions. We have now revised the abstract and discussion section to explain the differences in the MANOVA and linear mixed model results. The linear mixed-effects models did not account for potential collinearities in the regressors used, while the multivariate ANOVA was able to capture the contribution of site and sound identity to both bipolar ratings and sentiment levels. The changes in the abstract are as follows:

“Participants rated sounds as being more pleasant at the New site compared to the Old site, but the sentiment of their descriptions only differed between sites when controlling for collinearity”.

The changes in the discussion section are as follows. On page 10:

“When participants described how they felt about the sounds, the valence of the sentiment of these descriptions only differed between sites in our multivariate ANOVA analysis - which controlled for collinearity - but not in our linear mixed model analyses. Participants thus used moderately more negatively valenced language to describe sounds at the older site compared to the newer site.”

On pages 11-12:

“The attribution of differences in sound interpretation to the site in which they were heard may be challenged by the lack of difference in sentiment ratings between sites in participant descriptions of sounds in our linear mixed model analyses.”

“In spite of our finding that only bipolar sound ratings differed between sites with linear mixed model analyses, sentiment scores were significantly correlated with bipolar ratings - a result supported by our multivariate ANOVA analysis showing that site predicted both sound ratings and sentiment scores. This MANOVA result’s contrast with the non-significant linear mixed model results could speak to how the linear mixed models do not account for collinearity among bipolar ratings and sentiment scores, thus weakening the power of this analysis to detect differences between aspects of the auditory experience by site as compared to the MANOVA.”
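For context, the multivariate analysis referred to here can be run in R with the base manova() function, which tests whether site predicts the two correlated outcomes jointly and then allows univariate follow-ups. Below is a minimal sketch with simulated, hypothetical values; it is not the authors' code or data.

    set.seed(2)
    site      <- factor(rep(c("Old", "New"), each = 46))
    rating    <- rnorm(92, mean = ifelse(site == "New", 0.3, 0))
    sentiment <- 0.6 * rating + rnorm(92, sd = 0.8)  # correlated outcomes, mirroring the reported correlation

    m_manova <- manova(cbind(rating, sentiment) ~ site)
    summary(m_manova, test = "Pillai")  # joint multivariate test of the site effect
    summary.aov(m_manova)               # follow-up univariate ANOVAs for each outcome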

Attachment

Submitted filename: author_rebuttal_rev2.docx

pone.0316187.s002.docx (20.7KB, docx)

Decision Letter 2

Bruno Alejandro Mesz

8 Dec 2024

Hear it here: Built environments predict ratings and descriptions of ambiguous sounds

PONE-D-24-27494R2

Dear Dr. Forys,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Bruno Alejandro Mesz, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Bruno Alejandro Mesz

11 Dec 2024

PONE-D-24-27494R2

PLOS ONE

Dear Dr. Forys,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Bruno Alejandro Mesz

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: author_rebuttal_rev1_final.docx

    pone.0316187.s001.docx (33.6KB, docx)
    Attachment

    Submitted filename: author_rebuttal_rev2.docx

    pone.0316187.s002.docx (20.7KB, docx)

    Data Availability Statement

    The data is available on OSF at https://osf.io/wysdb/, DOI: 10.17605/OSF.IO/WYSDB.

