PLOS One. 2025 Feb 12;20(2):e0316047. doi: 10.1371/journal.pone.0316047

Effects of music advertised to support focus on mood and processing speed

Joan Orpella 1,2,#, Daniel Liu Bowling 3,4,#, Concetta Tomaino 5,6, Pablo Ripollés 2,7,8,*
Editor: Bruno Alejandro Mesz
PMCID: PMC11819607  PMID: 39937723

Abstract

While music’s effects on emotion are widely appreciated, its effects on cognition are less understood. As mobile devices continue to afford new opportunities to engage with music during work, it is important to understand associated effects on how we feel and perform. Capitalizing on potential benefits, many commercial music platforms advertise content specifically to support attentional focus and concentration. Although already in widespread use, the effects of such content remain largely untested. In this online behavioral study, we tested the effects of music advertised to support “work flow” and “deep focus” on mood and performance during a cognitively demanding psychological test (the flanker task). We additionally included a sample of popular hit music representing mainstream musical stimulation and a sample of office noise representing typical background stimulation in a social working environment. Our findings show that, despite similar marketing, only the work flow music gave rise to significant and positively correlated improvements in mood and performance (i.e., faster responses over time, with similar accuracy). Analyses of objective and perceived musical features indicate consistency with the “arousal-mood theory” of music’s cognitive impact and provide new insights into how music can be structured to regulate mood and cognition in the general population.

Introduction

Music is at the core of what it means to be human. It is present in all societies and often reported as one of the most engaging, enjoyable, moving, and satisfying human activities [1–3]. People use music to self-regulate their mood and emotions, engaging with it intentionally to improve wellbeing [4–8]. A large body of research shows that music can increase positive affect [9, 10], reduce stress, mitigate anxiety, and decrease depression [11–14] in clinical (e.g., stroke, dementia, and Parkinson’s disease among others; [15, 16]) and healthy populations across the lifespan [7, 17–19]. Neurally, there is compelling evidence that music stimulates brain networks critical to emotion, including core dopaminergic and opioidergic regions within the brain’s reward system [20–28] and an extended network of cortical and subcortical structures that likely supports the upregulation of positive mood [22, 29, 30].

Although the positive effects of music on mood and wellbeing are well established, music’s impact on cognition is less clear [31]. There is increasing evidence that music can enhance memory via reward-related mechanisms [32–35], but the effects of music on attention are more equivocal, with evidence supporting enhancement, impairment, or no effect across clinical and healthy populations [36–43]. Understanding how music modulates selective attention in particular—the ability to enhance important signals while at the same time suppressing distracting information—is critical, given that people often listen to music while engaged in other tasks (e.g., working, studying, exercising, cooking, driving, operating heavy machinery, etc.), a fact that differentiates music listening from wellness practices like exercise or meditation [44]. Anecdotally, this type of “on-task” music listening can be an important source of support during tasks that are perceived as mundane or unpleasant, buffering against emotional and/or psychological stress.

Research examining the effects of background music on selective attention [37–39, 45] has made progress using standard psychological tests, such as the flanker task, which requires participants to selectively attend and respond to the features of a central “target” stimulus while inhibiting responses to nearby “flanking” distractors [46, 47]. For example, in two recent laboratory studies, listening to “joyful” classical music during a flanker task resulted in faster reaction times (RTs) in response to target stimuli flanked by both similar (congruent) and dissimilar (incongruent) distractors [38, 39]. These results suggest that certain types of music can have a general positive effect on processing speed during tasks that require selective attention and distractor conflict resolution.

However, when engaging in real-world tasks that require selective attention, listeners do not restrict their listening preferences to classical music. And, while success in modulating mood can be achieved with music of many kinds (e.g., from pop, classical, or rock genres; [4–6]), listeners are typically unaware of the ways in which their music choices may impact selective attention, and potentially, overall performance. This is problematic because many musical compositions, especially those created mainly for entertainment or self-expression, arguably aim to capture as much attention as possible. Addressing this problem, many commercial music platforms advertise selections of music for the specific purpose of enhancing attentional focus and concentration while engaged in cognitively demanding tasks. While these tracks are becoming widely used, especially with the success of music streaming platforms, research assessing their effects on cognition is lacking. Here, we address this gap in the literature by testing the effects of music advertised to support focus on mood and performance while participants complete a flanker task.

We focus on two audio conditions: a sample of tracks aimed at enhancing “work flow” offered by a commercially available music therapy app, and a sample of tracks aimed at enhancing “deep focus” offered by a commercially available music streaming platform. These were selected because, despite similar marketing, they exhibit pronounced differences in objective musical features (see “Stimuli” below in Materials and Methods) that can be expected to drive different neural and behavioral responses. For context, we also included a third audio condition comprising popular music to represent mainstream musical stimulation (hypothesized to be distracting), and a fourth audio condition comprising simulated office noise to represent background acoustic stimulation in a typical social working environment (as an ecologically valid baseline for sound during work).

In an online behavioral study (N = 196), we compared the effects of the above audio conditions on mood and performance during a flanker task, as a proxy for straightforward but cognitively demanding work. In this experimental context, we hypothesized that either work flow or deep focus music would positively modulate mood and performance relative to popular music and office noise. Based on previous literature, we also hypothesized that any positive effects on performance would primarily be reflected in generally increased processing speed [38, 39].

Materials and methods

Experimental design

Participants

Online participants were randomized to one of four audio conditions (“work flow”, “deep focus”, “pop hits”, and “office noise”) during which they completed a six-minute flanker task in the middle of ten minutes of listening. The experiment was coded using PsychoPy [48] and participants were recruited online using Amazon Mechanical Turk. Recruitment was restricted to individuals located in the United States with a record of success completing Mechanical Turk tasks (i.e., at least 100 tasks completed with a 95% approval rate from task requesters). Experiments using participants from Mechanical Turk have been previously demonstrated to replicate results from more traditional participant populations across a variety of cognitive tasks, including flanker tasks [49] and tasks involving complex auditory stimuli [50–53]. A power analysis using MorePower [54] indicated a sample size of at least 44 participants per group for 80% power to find a medium effect size (partial eta2 = 0.06; [37]) in a between-subjects one-way ANOVA with four conditions (the analysis required to assess the effect of audio condition on mood). This statistical test was selected among other possible ones (e.g., effect of audio condition on Flanker accuracy) as it was the one requiring the largest sample size. In addition, previous research using a similar paradigm, assessing different groups as we do here, and finding significant effects has used smaller sample sizes than the one we employed ([37]: two groups, N = 19 and N = 21; [38]: two groups, N = 19, N = 33; [39]: two groups, N = 15, N = 15). We recruited 70 participants per group to account for attrition. This experiment was approved by the Institutional Review Board at New York University. Participants were recruited between March 1st 2022 and May 31st 2022. Written consent was obtained online. To do so, participants clicked on an “accept” button after reading the consent form. Participants were paid for their participation in the study.
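The sample-size calculation can be approximated from first principles. The authors used MorePower; the noncentral-F sketch below (using SciPy, our assumption) may differ from MorePower's result by a participant or two per group:

```python
from math import sqrt
from scipy.stats import f as f_dist, ncf

# Convert the medium partial eta-squared from the text to Cohen's f (~0.253)
eta_sq = 0.06
f_eff = sqrt(eta_sq / (1 - eta_sq))

k, alpha, target_power = 4, 0.05, 0.80   # four groups, one-way between-subjects ANOVA
n_per_group = 2
while True:
    N = k * n_per_group
    df1, df2 = k - 1, N - k
    crit = f_dist.ppf(1 - alpha, df1, df2)           # critical F under H0
    power = ncf.sf(crit, df1, df2, f_eff ** 2 * N)   # noncentrality lambda = f^2 * N
    if power >= target_power:
        break
    n_per_group += 1
```

With these inputs the loop lands at roughly 44 participants per group, matching the figure reported in the text.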

Stimuli

In this experiment, we aimed to study commercially available music advertised to support attentional focus and concentration. Stimuli were selected to contrast two acoustically distinct types of “focus” music. The first was “work flow” music sampled from a synonymous playlist on a music therapy app [55]. Overall, the work flow music included here was characterized by strong rhythm (e.g., moderately fast tempo with high pulse clarity and low rhythmic complexity), simple tonality (e.g., predominantly major with high key clarity and low melodic and harmonic variation), broadly distributed spectral energy below ~6000 Hz (e.g., centroid, spread, and roll-off), and moderate dynamism (e.g., moderately steep attacks and moderate event density; see Table 1). The second type was “deep focus” music, sampled from a synonymous playlist on a music streaming platform [56]. Overall, this music was relatively minimalistic, being characterized by similarly simple tonality, but weaker rhythm (e.g., slower tempo with lower pulse clarity and higher complexity), lower and more restricted spectral energy (e.g., lower centroid, spread, and roll-off), and more reserved dynamism (e.g., gentler attacks and lower event density; see Table 1). Neither work flow nor deep focus music had lyrics. Further context for evaluating the effects of these distinct types of “focus” music was provided by two additional audio conditions, also implemented to reflect real-world listening. One was popular hit music, sampled from the “Hot 100” playlist published by an American music magazine [57] in the second week of October 2021. Given prior research on the effects of listening to popular music—especially with lyrics—during work and other complex tasks [58–60], we expected this condition to negatively impact on-task performance. The other condition was simulated “calm office noise”, sampled from a synonymous sound generator on a website offering noise stimulation [61].
Our intention with this last audio condition was to represent acoustic stimulation in a naturalistic social working environment. It was preferred to silence because people rarely work in acoustic isolation. Audio files of the work flow tracks are included in supplementary material; the deep focus and pop hits tracks are available commercially.

Table 1. Musical features of the music stimuli.
MUSICAL FEATURE MUSIC CONDITION STATISTICS
Type Name Work Flow (WF) Deep Focus (DF) Pop Hits (PH) ANOVAs (DF = 2,33) Tukey’s HSDs (α = 0.05)
F p WF vs. DF WF vs. PH DF vs. PH
Rhythm Tempo 119.250 70.250 111.438 19.365 2.73e-6 ** ***
Pulse clarity 0.545 0.219 0.483 9.073 7.00e-4 * *
Fluctuation entropy 0.963 0.985 0.979 41.480 9.87e-10 *** *** *
Fluctuation maximum 4077.285 1010.193 2573.015 15.899 1.46e-5 *** . **
Tonality Key (% major) 75.000 100.000 56.250
Key clarity 0.689 0.866 0.693 8.051 1.40e-3 . *
Mode 0.117 0.19 0.045 4.598 1.73e-2 *
Chromatic complexity 4.750 5.625 8.688 8.672 9.00e-4 * *
HCDF mean 0.195 0.174 0.257 53.87 4.04e-11 *** ***
Spectrum Flux 33.282 15.362 33.682 90.874 3.79e-14 *** ***
Entropy 0.778 0.763 0.838 111.516 2.08e-15 *** ***
Centroid 2273.033 1015.235 3431.681 175.817 2.52e-18 *** *** ***
Spread 3964.343 1712.401 4029.667 61.767 6.99e-12 *** ***
Flatness 0.130 0.038 0.247 73.788 6.62e-13 * ** ***
Roll-off 5613.889 1730.855 7783.217 139.447 8.02e-17 *** * ***
Brightness 0.278 0.177 0.525 234.807 3.06e-20 *** * ***
Zero crossing rate 492.950 418.910 1199.087 97.762 1.36e-14 *** ***
Dynamics Attack time 0.115 0.142 0.104 3.556 3.99e-2 *
Attack slope 3.174 1.908 4.018 19.756 2.28e-6 * ***
Decay time 0.179 0.253 0.133 2.885 7.00e-2
Decay slope -1.976 -1.499 -3.140 22.865 5.88e-7 * ***
Event density 1.931 1.085 2.348 14.186 3.58e-5 . *
RMS amplitude 0.058 0.047 0.054 8.807 8.61e-4 ** *

Values represent means for each condition. HCDF = Harmonic Change Density Function; RMS = Root Mean Square; DF = Degrees of Freedom; HSD = Honest Significant Difference. See Supplementary Material and S1 Table for further details.

. = p<0.1

* = p<0.05

** = p<0.001

*** = p<0.0001 (Tukey corrected).

For each of the three musical audio conditions, we obtained enough music to create four unique stimuli, each comprising at least ten minutes of stimulation. For the work flow condition, our sample comprised four tracks, each just over ten minutes in duration and each composed by a different artist. Each participant in this condition heard only one of the tracks, with the specific track counterbalanced across participants. For the deep focus condition, our sample comprised sixteen tracks, each approximately three minutes in duration and each composed by a different artist. These were sorted into four sets of four tracks each (satisfying the required minimum of ten minutes; note that even though these sets were approximately twelve minutes long, playback was terminated at ten minutes; see “Procedure” below). Within each set, the four tracks were concatenated end-to-end with two seconds of intervening silence to simulate their occurrence in the source playlist. The sorting of tracks into sets was made so that their average musical features were representative of the full deep focus sample (see S1 Table). The same procedure was followed for the pop hits condition, also resulting in four representative sets of four tracks each. Each participant in the deep focus and pop hits conditions heard only one of the track sets, with the specific set counterbalanced across participants. Finally, in the office noise condition, our sample comprised a single ten-minute track.

Work flow tracks were provided to us as .wav files by the music therapy app from the “anxious-to-energized” portion of their workflow playlist (sampling rate = 44.1 kHz, bit depth = 16; see Discussion for further details). Out of the twelve tracks provided, four were selected for this study based on having musical features that were broadly similar and representative of the entire set. Deep focus and pop hits tracks were purchased and downloaded as .m4a files from Apple Music (sampling rate = 44.1 kHz, bit rate = 256 kbps). For both of these conditions, we shuffled each source playlist until the first sixteen tracks were each created by a different artist, and then selected these sixteen for inclusion. The loudness of each track in the work flow condition and each set in the deep focus and pop hits conditions was normalized to -23 LUFS (Loudness Units relative to Full Scale) using the “integratedloudness.m” function from Matlab’s Audio toolbox (Matlab Version R2022a; Audio toolbox Version 3.2; The Mathworks Inc.), with the result saved as a .wav file for streaming (44.1 kHz/16 bit). The office noise track was generated with the following settings on the sound generator web page: “Room Tone” at 38% of maximum; “Air Co” at 50%; “Chatty Colleagues” at 89%; “Copy machine” at 66%; “Printing & Scanning” at 74%; “Office Noises” at 100%; “Keyboards & Mouse” at 58%; “Keyboards” at 73%; “Writing” at 68%; and “Office Clock” at 45% (these settings were determined by ear with the goal of simulating a typical open office environment; see supplementary materials S5 Audio for the recording used for this condition). The resulting audio stream was recorded for ten minutes as a .wav file (sampling rate = 44.1 kHz, bit depth = 16) using Audio Hijack software [62]. The loudness of the resulting track was set at -33 LUFS (i.e., 10 dB quieter than the music), with the result also saved as a .wav file for streaming (44.1 kHz/16 bit).
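Once integrated loudness has been measured (here via Matlab's integratedloudness.m, which implements the ITU-R BS.1770 K-weighted, gated measurement), the normalization step reduces to a simple dB gain. A NumPy sketch of that final step only (the loudness measurement itself is not reimplemented here, and the function names are ours):

```python
import numpy as np

def gain_to_target(measured_lufs: float, target_lufs: float) -> float:
    """Linear amplitude gain shifting integrated loudness to the target.
    A change of x LU corresponds to a factor of 10**(x/20) in amplitude."""
    return 10 ** ((target_lufs - measured_lufs) / 20)

def normalize(samples: np.ndarray, measured_lufs: float,
              target_lufs: float = -23.0) -> np.ndarray:
    """Scale samples so their integrated loudness lands on target_lufs."""
    return samples * gain_to_target(measured_lufs, target_lufs)
```

Setting the office noise 10 dB below the music (-33 vs. -23 LUFS) corresponds to an amplitude factor of 10**(-10/20), roughly 0.316, relative to the music tracks.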

An objective analysis of musical features for the music used in this study is shown in Table 1. This analysis was conducted using MIRtoolbox Version 1.8.1 [63, 64] in Matlab. We chose a list of 23 features designed to capture rhythmic, tonal, spectral, and dynamic aspects of the musical stimuli that are broadly associated with ratings of emotions expressed in music [65–67], but note that the best approach to feature-based music description remains a matter of debate [68]. Explanations of each feature and how they were extracted are provided in the Supplementary Materials. See S1 Table for specific track names, arrangement of tracks into sets (for deep focus and pop hits), track-by-track breakdowns of musical features, and the results of a clustering/silhouette analysis indicating that the work flow and deep focus tracks are generally well separated into two different groups.
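As an illustration of one of the 23 features, the spectral centroid is simply the amplitude-weighted mean frequency of the magnitude spectrum. A NumPy sketch (not the MIRtoolbox implementation, which may differ in framing and weighting details):

```python
import numpy as np

def spectral_centroid(x: np.ndarray, sr: int) -> float:
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

# Sanity check: a pure 1 kHz tone has its centroid at ~1000 Hz
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
c = spectral_centroid(tone, sr)
```

Broadband material (like the pop hits, centroid ~3432 Hz in Table 1) pulls this weighted mean upward relative to the spectrally restricted deep focus tracks (~1015 Hz).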

Procedure

The experiment was conducted online. An overview of the experimental procedure is shown in Fig 1. Participants first read the informed consent form; written consent was obtained online by clicking an “accept” button. Participants then filled out a brief demographic survey (age, education level, and gender) and completed three well-validated questionnaires relevant to music and mental health: the Goldsmiths Musical Sophistication Index (Gold-MSI) as a continuous measure of musical skills and expertise [69], the Barcelona Music Reward Questionnaire (BMRQ) as a continuous measure of sensitivity to musical reward [70], and the 21-question version of the Depression, Anxiety, and Stress Scale (DASS-21), which measures core components of basal psychological distress [71]. To ensure data quality, all questionnaires included an item that served as an attentional check (e.g., Please, select the option “Agree”). Additionally, participants could not advance to the next questionnaire unless all answers had been provided.

Fig 1. Overview of the experimental procedure.


After completing the baseline questionnaires, participants were asked to adjust their volume to a comfortable level while listening to a noise stimulus calibrated to the experimental stimuli. Participants were then asked to record the approximate volume, as represented by their computer’s operating system, by matching it to a visual analogue scale ranging between 0 and 1. After volume calibration, each participant underwent a headphone screen to determine if they were wearing functioning headphones as instructed. This screen, described in [72], is based on correctly identifying the occurrence of Huggins pitches in noise stimuli. These are very difficult to hear unless the left and right audio channels are presented dichotically.

After volume calibration and the headphone screen, the pre-task mood state of participants was measured using the Positive And Negative Affect Scales (PANAS; [73]), a well-validated measure of acute emotional status that has been successfully used to assess music-induced mood changes in pre/post listening designs like that used here [16, 74]. It comprises ten adjectives describing positive affect (e.g., “interested”, “enthusiastic”, etc.) and ten describing negative affect (e.g., “irritable”, “upset”, etc.), each rated on a five-point Likert scale. After the pre-task PANAS, participants read brief instructions on how to complete the flanker task and completed a series of practice trials (detailed below). Playback of the assigned audio track or set started after completion of the practice trials. For the first minute of audio playback, on-screen text instructions “Stay tuned”, “Focus on the sounds”, “Enjoy the sounds”, or “Listen to the sounds” were presented, allowing familiarization in the absence of an experimental task. During this period, participants were occasionally prompted with brief attention checks to ensure that they were still engaged. These consisted of responding by pressing an arrow key on the keyboard (left, right, up, or down) to describe the direction of an arrow image presented at the center of the screen. Each attention check prompt lasted 1.5 seconds at maximum, with responses accepted anywhere within a two-second window after presentation onset. To prevent anticipation, the time interval between attention checks was varied according to the formula interval + 2 − RT, where “interval” was selected (without replacement) from the set [10, 14, 16] and “RT” was the participant’s RT to the previous attention check. The on-screen text between checks was randomly selected from the list provided above. After approximately one minute (depending on the intervals between attention checks), the flanker task was initiated.
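The anti-anticipation timing rule can be sketched in a few lines of Python (a minimal illustration; the function and variable names are ours, not from the study code):

```python
import random

def attention_check_schedule(rts, base_intervals=(10, 14, 16), seed=0):
    """Seconds from one attention check to the next: a base interval drawn
    without replacement, plus the unused part of the 2 s response window
    (2 - RT on the previous check), per the formula interval + 2 - RT."""
    pool = list(base_intervals)
    random.Random(seed).shuffle(pool)   # sample without replacement
    return [base + 2 - rt for base, rt in zip(pool, rts)]

# e.g., with previous-check RTs of 0.6 s and 1.1 s
waits = attention_check_schedule([0.6, 1.1])
```

Adding back the unused response time keeps the check-to-check spacing tied to the base interval rather than to how quickly the participant happened to respond.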

On each trial of the flanker task, participants were presented with a stimulus consisting of a central right- or left-pointing arrow flanked by arrows pointing in the same direction (congruent condition), the opposite direction (incongruent condition), squares (neutral condition), or crosses (no-go condition). The task was to respond to the direction of the middle arrow by pressing the corresponding arrow key on the keyboard as soon as possible, except in trials with flanking crosses, in which case the correct response was to refrain from responding (see Fig 2). Each trial started with a black fixation dot presented in the middle of the screen for two seconds, followed by a flanker stimulus, which lasted for a maximum of two seconds. Responses were allowed any time after presentation of the stimulus within a three second window. RTs for each trial were computed from the beginning of stimulus presentation. The 24 practice trials comprised six trials per flanker condition presented in randomized order with feedback ("Correct!” or “Incorrect”) given immediately after a response, or after the three second window if no response was made. The structure of the test trials was the same except no feedback was provided. Each participant completed 72 test trials (18 per condition, half with the center arrow pointing right, half with it pointing left). Additionally, test trials were made to occur exactly 5 seconds apart by introducing variable duration intervals after participant responses such that RT + interval was always equal to three seconds (plus two seconds for the pre-stimulus fixation dot equals five seconds of total trial time). Standardizing trial duration allowed us to fix the duration of the flanker task at 6 minutes across all participants, regardless of differences in RT.
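The fixed five-second trial structure described above can be verified with a small sketch (the names are ours; the constants come from the text):

```python
FIXATION = 2.0          # pre-stimulus fixation dot, seconds
RESPONSE_WINDOW = 3.0   # stimulus onset to end of response window, seconds

def post_response_interval(rt: float) -> float:
    """Pad after the response so RT + pad always totals the 3 s window."""
    return RESPONSE_WINDOW - rt

def trial_duration(rt: float) -> float:
    """Fixation + response window is constant at 5 s regardless of RT."""
    return FIXATION + rt + post_response_interval(rt)

total_seconds = 72 * trial_duration(0.45)   # 72 trials -> 360 s = 6 minutes
```

Because every trial is padded to the same length, the 72-trial task lasts exactly six minutes for every participant, and trial number becomes directly proportional to elapsed time (a property the RT models below exploit).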

Fig 2. Flanker task conditions.


Note that there are two trial types per condition, central arrow pointing left or right.

After the flanker task (approximately seven minutes after initiating audio playback), participants completed a post-task PANAS, allowing us to compute changes in mood. Finally, participants completed a short end survey about their listening experience. This survey examined subjective perceptions of pleasure (“How much did you like the soundtrack that you heard?”), groove (“How much did the soundtrack make you want to move? For example, by tapping your foot, bobbing your head, or rocking back and forth”), and familiarity (“How familiar are you with the soundtrack you heard?”). For the office noise condition, we replaced the word “soundtrack” with “audio”. These final questions were answered using a slider with values ranging between 0 and 1. Audio playback was stopped precisely ten minutes after initiation. The experiment was terminated at this point unless the end survey was not yet complete. In cases where the end survey was completed before ten minutes of audio had elapsed, an on-screen message instructed participants to continue listening for the remaining time.

Statistical analysis

To determine whether participants assigned to the four audio conditions were balanced in terms of demographic variables (age, gender, level of education), musical skills and expertise (Gold-MSI), sensitivity to musical reward (BMRQ), and acute psychological distress (DASS-21), we used Bayes Factors (BFs) as implemented in JASP using default priors [75–78]. We also tested whether there were differences between the groups for the self-reports of volume. We report BF01, the ratio of the probability of the data under H0 to that under H1, which in our case quantifies the strength of the evidence supporting the hypothesis that the groups are equal on the indicated variables (H0), relative to the strength of the evidence supporting the hypothesis that the groups are different (H1). For example, a BF01 = 4 can be interpreted as the data being 4 times more likely under H0 than under H1 [79]. For continuous variables (age, Gold-MSI, BMRQ, and DASS-21) we used Bayesian one-way ANOVAs to compare the groups in the different audio conditions. For categorical variables (education, gender) we used Bayesian Contingency Tables with independent multinomial sampling to compare conditions, as participants were randomized to one of four audio conditions with the aim of collecting approximately the same number of participants in each condition [80].
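As a minimal illustration of the reported quantity (the evidence-category labels follow one common rubric, e.g. Lee and Wagenmakers' adaptation of Jeffreys' scale; they are our addition, not the authors'):

```python
def bf01(bf10: float) -> float:
    """Evidence for H0 relative to H1 is the reciprocal of BF10."""
    return 1.0 / bf10

def evidence_label(bf01_value: float) -> str:
    """One common rubric for evidence favouring H0 (our addition)."""
    if bf01_value < 1:
        return "favours H1"
    if bf01_value < 3:
        return "anecdotal"
    if bf01_value < 10:
        return "moderate"
    return "strong"
```

Under this rubric, most of the balance checks reported in the Results (BF01 values above 10) would count as strong evidence for group equivalence, while the musical-training BF01 of 2.11 is only anecdotal.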

To test the effects of audio condition on mood (PANAS), we used one-way between-participants ANOVAs with four levels (work flow, deep focus, pop hits, office noise), one for each dependent variable, implemented in JASP. For an overall assessment of mood effects, we examined total change in PANAS scores, calculated as change-in-PANAS-positive + (-1)*change-in-PANAS-negative. To assess effects on positive and negative affect separately, we subsequently analyzed changes in the PANAS positive and negative scales. For significant effects of audio condition, post-hoc t-tests were used to compare each possible pairing of groups, with Tukey correction for multiple comparisons. One-way between-participants ANOVAs with four levels (work flow, deep focus, pop hits, office noise) followed by post-hoc t-tests with Tukey correction were used to analyze the self-reported musical familiarity, pleasure, and groove ratings made during the end survey. Partial eta2p2) and Cohen’s d were used as measures of effect size.
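The total mood-change score reduces to a one-line function (a sketch; the argument names are ours):

```python
def total_panas_change(pre_pos: int, post_pos: int,
                       pre_neg: int, post_neg: int) -> int:
    """Overall mood change: gain in positive affect plus drop in negative
    affect, i.e. delta-positive + (-1) * delta-negative."""
    return (post_pos - pre_pos) - (post_neg - pre_neg)

# e.g., positive affect 28 -> 33 and negative affect 16 -> 12
change = total_panas_change(28, 33, 16, 12)
```

A positive score thus reflects any combination of rising positive affect and falling negative affect; the two scales are also analyzed separately in the Results.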

To test the effects of the audio condition on performance, we conducted several analyses. First, we examined accuracy, using the number of correct trials in the flanker task as the dependent variable, and a mixed between-within 4 × 4 repeated measures ANOVA with audio condition (work flow, deep focus, pop hits, office noise) as the between-subjects factor and flanker condition (congruent, incongruent, neutral, no-go) as the within-subjects factor. For significant effects, we used post-hoc tests with Holm correction for multiple comparisons. Second, we examined reaction times (RTs), which were analyzed only for correct trials. We opted for linear mixed models rather than ANOVAs for analyzing RTs [81, 82]. We employed the lme4 package [83] within R (version 4.2.0) and RStudio (version 2022.02.2.485) for this analysis. We discarded any trial with an RT faster than 150 ms, which was deemed too fast to be a real response (typical reaction times vary between 400–500 ms; [46, 47]). For the final sample of participants (N = 196; see Results) only four trials were discarded for this reason, out of a total of 10,152 correct trials across all participants and conditions.
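The RT inclusion rule (correct trials only, RT of at least 150 ms) can be sketched as follows (the trial structure and field names are hypothetical, for illustration only):

```python
def usable_rts(trials: list[dict]) -> list[float]:
    """RTs retained for mixed-model analysis: correct trials only,
    excluding responses faster than 150 ms (deemed too fast to be real)."""
    return [t["rt"] for t in trials if t["correct"] and t["rt"] >= 0.150]

trials = [
    {"rt": 0.42, "correct": True},   # kept
    {"rt": 0.10, "correct": True},   # implausibly fast, dropped
    {"rt": 0.55, "correct": False},  # error trial, dropped
]
kept = usable_rts(trials)
```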

For linear mixed modeling, we first generated a null model with a random intercept for participant but no experimental effects. We then defined the minimal model as that containing a random intercept for participant, and fixed effects for: i) audio condition (work flow, deep focus, pop hits, office noise); ii) flanker condition (congruent, incongruent, and neutral; no-go trials do not have RTs); iii) the BMRQ, to control for individual variation in sensitivity to musical reward (this scale is known to correlate with neurophysiological and behavioral responses to music; [27, 84, 85]); iv) the Music Training subscale of the Gold-MSI, to control for individual variation in musical skills and expertise; and v) the three subscales of the DASS-21 (Depression, Anxiety, and Stress), to consider effects of basal psychological distress. The minimal model included no interactions. Next, we generated a set of models by adding trial number to assess the effect of time on performance (note that trial number is directly proportional to time, since each trial was spaced exactly 5 seconds apart; see “Procedure” above) and modeling all possible interactions between audio condition, flanker condition, and trial number. Finally, we compared the Akaike information criterion (AIC) of each model to determine which model explained the most variance in the data with the most efficient effect structure. In accord with established standards for comparing AICs, we considered a model superior to another if its AIC was two or more points smaller [34, 86, 87]. Each effect in the best model was further examined using Type III Wald chi-square tests, as implemented in the car package. Pairwise contrasts for significant effects were carried out using the emmeans package and Tukey correction for multiple comparisons. Predicted effects were plotted using the ggpredict function of the ggeffects package. Data are available as supplementary material files. Analysis code followed standard pipelines in R and JASP.
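The AIC decision rule used above can be captured in a small helper (a sketch with hypothetical names; the two-point threshold is the convention cited in the text):

```python
def prefer(aic_a: float, aic_b: float, threshold: float = 2.0) -> str:
    """Model selection by AIC: a model wins only if its AIC is at least
    `threshold` points smaller; otherwise the models are treated as
    equivalent (and the simpler effect structure would be retained)."""
    if aic_b - aic_a >= threshold:
        return "a"
    if aic_a - aic_b >= threshold:
        return "b"
    return "tie"
```

For example, a model with AIC 1520.3 beats one with AIC 1525.0, but a gap of 1.2 points would be treated as a tie.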

Results

Participants

Out of the total 280 participants recruited, 84 were excluded for the following reasons: 18 had musical anhedonia, as defined by a score of less than 63 on the BMRQ [70, 84]; 32 failed the headphone screen; 17 responded in fewer than 50% of the flanker test trials (i.e., fewer than 26 responses on the 54 congruent, incongruent, or neutral trials in which a response was expected); 15 failed one or more of the attention checks in the baseline questionnaires or during the first minute of audio playback; and 2 responded on more than 80% of the flanker test trials with implausibly fast RTs (i.e., <150 ms). The final sample thus consisted of 196 participants (mean age = 38.14 years, SD = 9.72 years; 80 female, 116 male; 160 with a college degree or higher, 36 with a high-school diploma). This included 50 assigned to the work flow condition, 50 to deep focus, 49 to pop hits, and 47 to office noise. The attrition rate of 30% is consistent with other online behavioral studies using participants from Amazon Mechanical Turk completed by the authors, including studies with similarly complex tasks and auditory stimuli [50, 51, 53, 87].
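The exclusion accounting above is internally consistent, as a quick tally shows (all numbers taken directly from the text):

```python
recruited = 280
excluded = {
    "musical anhedonia (BMRQ < 63)": 18,
    "failed headphone screen": 32,
    "responded in <50% of flanker trials": 17,
    "failed attention checks": 15,
    ">80% of responses faster than 150 ms": 2,
}
final_n = recruited - sum(excluded.values())   # 280 - 84

per_condition = {"work flow": 50, "deep focus": 50,
                 "pop hits": 49, "office noise": 47}
```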

Group characterization

Bayes factors show that the groups were well balanced (see S1 Fig in S1 File) in terms of demographic variables (age BF01 = 30.00; gender BF01 = 56.94; education BF01 = 83.46), sensitivity to musical reward (BMRQ: total BF01 = 35.98; seeking BF01 = 15.67; emotion BF01 = 11.11; mood BF01 = 24.09; sensorimotor BF01 = 19.44; social BF01 = 13.05), basal mental health status (DASS-21: depression BF01 = 15.26; anxiety BF01 = 5.11; stress BF01 = 9.40), and computer volume (BF01 = 13.80). For musical training the BF01 was 2.11. This lower BF is driven by a higher score displayed by the participants in the pop hits group (see S1D Fig in S1 File). However, there was no significant effect of group on musical training (F(3,192) = 2.38, p = 0.07), with all post-hoc tests comparing each group score against each other showing no significant differences (all corrected ps>0.092). Music genre preferences varied within groups but were comparable across them. Among the four groups, the two most popular genres were Pop and Rock (see S2 Table in S1 File). An analysis of the genres selected as favorites by at least 10 participants—namely Rock, Pop, Classical, Rap, and Jazz—revealed no significant differences in their distribution across the groups (BF01 = 1081).

Effect of audio condition on mood

There was a significant effect of audio condition on the total change in PANAS score (F3,192 = 11.22, p<0.001, ƞp2 = 0.149; Fig 3A), with post-hoc tests showing that participants in the work flow condition experienced greater improvements in mood than those in the deep focus condition (t98 = 4.15, pholm<0.001, d = 0.83), pop hits condition (t97 = 5.03, pholm<0.001, d = 1.01), and office noise condition (t95 = 4.79, pholm<0.001, d = 0.97). No other between-group comparisons were significant (all ps>0.080). Furthermore, one-sample t-tests applied to each condition indicated that work flow was the only audio condition in which the total change in PANAS score from before to after the flanker task was significantly different than zero (work flow, t49 = 7.0, p<0.007, d = 0.99, this p-value survives Bonferroni correction for the 4 tests computed; deep focus, t49 = -0.879, p = 0.38, d = 0.124; pop hits, t48 = -1.56, p = 0.12, d = 0.224; office noise, t46 = -1.55, p = 0.12, d = 0.227). At the individual level, 76% of participants in the work flow condition reported an increase in total PANAS score, reflecting a net improvement in mood; the corresponding percentages were below half in each of the other three conditions (44% for deep focus, 36.7% for pop hits, and 36.17% for office noise).
Similar results were obtained when separately analyzing changes in the PANAS positive scale (F3,192 = 8.31, p<0.001, ƞp2 = 0.115)—with participants in the work flow condition experiencing greater increases in positive affect than those in the deep focus (t98 = 3.62, pholm = 0.002, d = 0.72), pop hits (t97 = 4.23, pholm<0.001, d = 0.85), and office noise conditions (t95 = 4.22, pholm<0.001, d = 0.85)—as well as the PANAS negative scale (F3,192 = 6.40, p<0.001, ƞp2 = 0.091)—with participants in the work flow condition experiencing greater decreases in negative affect than those in the deep focus (t98 = 3.06, pholm = 0.010, d = 0.61), pop hits (t97 = 3.92, pholm<0.001, d = 0.79), and office noise conditions (t95 = 4.79, pholm = 0.003, d = 0.71).
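The omnibus-test-plus-Holm-corrected-post-hoc pattern used throughout these analyses can be sketched as follows. This is an illustrative Python version on simulated data (the authors used JASP); group labels, effect sizes, and sample values are hypothetical.

```python
import numpy as np
from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
# Hypothetical PANAS change scores: a positive mood shift for work flow only
groups = {
    "workflow": rng.normal(0.5, 0.5, 50),
    "focus": rng.normal(0.0, 0.5, 50),
    "hits": rng.normal(0.0, 0.5, 49),
    "office": rng.normal(0.0, 0.5, 47),
}

# Omnibus one-way ANOVA across the four audio conditions
F, p = stats.f_oneway(*groups.values())
print(f"omnibus: F = {F:.2f}, p = {p:.2g}")

# Pairwise t-tests with Holm correction, mirroring the post-hoc comparisons
pairs = list(combinations(groups, 2))
raw = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_holm, *_ = multipletests(raw, alpha=0.05, method="holm")
for (a, b), ph in zip(pairs, p_holm):
    print(f"{a} vs {b}: p_holm = {ph:.4f}")
```

With six pairwise comparisons, Holm's sequential procedure controls the family-wise error rate less conservatively than a flat Bonferroni correction.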

Fig 3. Effects of audio condition on mood.

Fig 3

A. Participants in the work flow (WF) condition exhibited greater improvements in mood than participants in the deep focus (Focus), pop hits (Hits), and office noise (Office) conditions (all ps<0.001, corrected for multiple comparisons). Bar plots show means with standard deviations. B. Exploratory analysis indicating that mood improvement in the work flow condition was independent of individual variation in basal levels of anxiety as assessed with the DASS-21 (i.e., audio condition did not interact with the DASS-21 anxiety subscale). The same was true for the DASS-21 depression and stress subscales; only results for anxiety are shown here. Solid lines show predicted values; dashed lines show 95% confidence intervals. For reference, DASS-21 anxiety scores of 0–7 are considered normal, 8–9 mild, 10–14 moderate, 15–19 severe, and 20+ extremely severe.

Given the significant main effect of audio condition on mood, an exploratory analysis was performed to determine whether the effect of the work flow condition on mood interacted with our basal measures of psychological distress. Using R, we computed three simple linear models predicting total change in PANAS score as a function of the interaction between audio condition and each of the DASS-21 subscale scores (e.g., Total change in PANAS score = AudioCondition*DASS-21Anxiety). We used F-tests implemented in the function anova to compare each linear model to a version of itself specified without the interaction (e.g., Total change in PANAS score = AudioCondition+DASS-21Anxiety). These tests indicated no significant interactions between audio condition and anxiety (p = 0.589; see Fig 3B), depression (p = 0.351), or stress (p = 0.544), suggesting that the beneficial effects of work flow music on mood during task performance observed here are robust to individual variation in basal levels of depression, anxiety, and stress.
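The nested-model F-test described above (a model with the interaction term compared against its additive counterpart) can be sketched in Python with statsmodels' `anova_lm`, which mirrors R's `anova()` for nested linear models. The data below are simulated and hypothetical; the study fit these models in R on the actual PANAS and DASS-21 scores.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 196
df = pd.DataFrame({
    "group": rng.choice(["workflow", "focus", "hits", "office"], n),
    "anxiety": rng.integers(0, 21, n),  # hypothetical DASS-21 anxiety scores
})
# Hypothetical outcome: a group effect but no group x anxiety interaction
df["panas_change"] = (df["group"] == "workflow") * 0.5 + rng.normal(0, 0.5, n)

additive = smf.ols("panas_change ~ C(group) + anxiety", df).fit()
interaction = smf.ols("panas_change ~ C(group) * anxiety", df).fit()

# F-test comparing the nested models, as with anova(additive, interaction) in R
table = anova_lm(additive, interaction)
p_interaction = table["Pr(>F)"].iloc[1]
print(f"interaction p = {p_interaction:.3f}")
```

A non-significant p-value here supports dropping the interaction, which is exactly the inference the study draws for anxiety, depression, and stress.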

The results of the end survey are shown in Fig 4 (these data were missing for two participants in the work flow condition, three in the deep focus condition, five in the pop hits condition, and three in the office noise condition). There was a significant effect of audio condition on pleasure (F3,179 = 12.52, p<0.001, ƞp2 = 0.17), with post-hoc tests showing that participants in the office noise condition reported liking the audio less than those in the work flow (t90 = 5.01, pholm<0.001, d = 1.04), deep focus (t89 = 5.54, pholm<0.001, d = 1.16), and pop hits conditions (t86 = 2.92, pholm = 0.016, d = 0.62). In addition, participants in the pop hits condition reported liking the audio less than those in the deep focus condition (t89 = 2.57, pholm = 0.033, d = 0.54), although the difference with the work flow condition was not significant (t90 = 2.02, pholm = 0.088, d = 0.423). There was also a significant effect of audio condition on self-reported ratings of groove (F3,179 = 18.85, p<0.001, ƞp2 = 0.24), with post-hoc tests showing that participants in the work flow and pop hits conditions reported that the audio stimulated greater desire for movement than those in the deep focus condition (work flow vs. deep focus: t93 = 2.78, pholm = 0.017, d = 0.57; pop hits vs. deep focus: t89 = 2.80, pholm = 0.017, d = 0.58). Not surprisingly, participants in the office noise condition reported lower groove scores than those in each of the other three audio conditions (work flow: t90 = 6.53, pholm<0.001, d = 1.36; pop hits: t86 = 6.46, pholm<0.001, d = 1.37; deep focus: t89 = 3.76, pholm = 0.001, d = 0.79). 
Finally, there was a significant effect of audio condition on self-reported ratings of familiarity (F3,179 = 4.55, p = 0.004, ƞp2 = 0.071), with participants in the work flow condition reporting that the audio was less familiar than those in the deep focus (t93 = 2.99, pholm = 0.016, d = 0.61), pop hits (t90 = 2.54, pholm = 0.048, d = 0.53), and office noise conditions (t90 = 3.22, pholm = 0.008, d = 0.68).

Fig 4. End survey results.

Fig 4

Self-reported pleasure (A), groove (B), and familiarity (C) associated with the audio in the work flow (WF), deep focus (Focus), pop hits (Hits), and office noise (Office) conditions. Bars show mean with standard deviation. *** p<0.001, **p<0.05.

Given these significant differences in pleasure and familiarity between audio conditions, both of which are generally related to effects of music on mood [88], we performed a further exploratory analysis to determine whether mood changes were related to pleasure or familiarity. We computed two simple linear models predicting total change in PANAS score as a function of the interaction between audio condition and pleasure or familiarity (e.g., Total change in PANAS score = AudioCondition*Pleasure). We again used F-tests implemented in the function anova to compare each linear model to a version of itself specified without the interaction (e.g., Total change in PANAS score = AudioCondition+Pleasure). No significant interactions were observed between audio condition and pleasure (p = 0.108) or audio condition and familiarity (p = 0.745), suggesting that differences in mood change between audio conditions are not a result of associated differences in either of these variables. That said, these analyses did indicate a significant main effect of pleasure on total change in PANAS score (t178 = 7.2, p<0.001; see S5 Fig in S1 File), suggesting that the more participants liked the audio in any condition, the more their mood improved from before to after the flanker task. We did not observe a main effect of familiarity on total change in PANAS score (t175 = 0.042, p = 0.96).

Effect of background audio on performance

The results of the flanker task accuracy analysis are shown in S2 Fig in S1 File. A mixed between-within repeated measures ANOVA showed a main effect of flanker condition on accuracy (F3,576 = 38.46, p<0.001, ƞp2 = 0.167), with post-hoc tests showing that participants made significantly more errors in the no-go condition than in the congruent (t195 = 9.62, pholm<0.001, d = 0.82), incongruent (t195 = 5.81, pholm<0.001, d = 0.49), and neutral conditions (t195 = 8.93, pholm<0.001, d = 0.76). In addition, participants made significantly more errors in the incongruent condition than in the congruent (t195 = 3.80, pholm<0.001, d = 0.32) or the neutral conditions (t195 = 3.11, pholm = 0.004, d = 0.26). These results replicate standard accuracy effects for the flanker task [46, 47]. However, the effect of audio condition was not significant (F3,192 = 1.07, p = 0.36, ƞp2 = 0.016), nor was its interaction with flanker condition (F9,576 = 6.81, p = 0.65, ƞp2 = 0.012). This implies that flanker task accuracy was not significantly modulated by the different audio conditions tested in this study.

We next examined flanker task RTs (i.e., speed). The candidate models used to assess the effect of audio condition on flanker task RT are shown in Table 2. Examination of the best model, as determined by AIC comparisons [FlankerCondition + AudioCondition*TrialNumber + BMRQ + GOLD-MSI + DASS21 + (1|ID); see S3 Table in S1 File for a full list of parameter estimates with 95% confidence intervals], revealed a main effect of flanker condition (χ22 = 361.62, p < 0.001). Post-hoc tests showed that participants responded faster in the congruent than in the incongruent (congruent minus incongruent: -89.5±4.97 ms, t9961 = -18.01, ptukey<0.001) and neutral conditions (congruent minus neutral: -17.4±4.89 ms, t9959 = -3.55, ptukey = 0.001), and slower in the incongruent than in the neutral condition (incongruent minus neutral: +72.1±4.97 ms, t9960 = 14.48, ptukey<0.001; see S3 Fig in S1 File). These results are also consistent with typical flanker task performance [46, 47].

Table 2. Candidate linear mixed models for flanker task RT.

Linear mixed model predicting RT Ki AIC Δ(AIC)
FlankerCondition + AudioCondition*TrialNumber + BMRQ + GOLD-MSI + DASS21 + (1|ID) 17 137434.8 0.00
FlankerCondition + AudioCondition + TrialNumber + BMRQ + GOLD-MSI + DASS21 + (1|ID) 14 137445.0 10.23
FlankerCondition*TrialNumber + AudioCondition + BMRQ + GOLD-MSI + DASS21 + (1|ID) 16 137446.0 11.22
FlankerCondition + AudioCondition + BMRQ + GOLD-MSI + DASS21+(1|ID) 13 137448.0 13.19
FlankerCondition*TrialNumber*AudioCondition + BMRQ + GOLD-MSI + DASS21 + (1|ID) 31 137448.9 14.09
FlankerCondition*AudioCondition + TrialNumber+ BMRQ+GOLD-MSI +DASS21 + (1|ID) 20 137449.2 14.42
FlankerCondition*AudioCondition + BMRQ + GOLD-MSI + DASS21 + (1|ID) 19 137452.2 17.37
Empty Model 3 137881.8 446.97

All models included random intercepts for participants (1|ID).

* indicates an interaction. DASS21 stands for three fixed factors: DASS21Anxiety+DASS21Depression+DASS21Stress. Ki = the number of estimated parameters in each model. AIC = corrected Akaike information criterion. Δ(AIC) = difference between the AIC of each model and that of the best model. The best model appears in bold.
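The AIC-based model selection summarized in Table 2 can be sketched as follows. For brevity this illustrative Python version uses ordinary least squares on simulated trial-level data and omits the random participant intercept that the authors' R mixed models included; all variable names, effect sizes, and candidate formulas are hypothetical simplifications of the table's models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000  # hypothetical trial-level rows
df = pd.DataFrame({
    "flanker": rng.choice(["congruent", "incongruent", "neutral"], n),
    "audio": rng.choice(["workflow", "focus", "hits", "office"], n),
    "trial": rng.integers(1, 55, n),
})
# Hypothetical RTs: slower on incongruent trials, and RT decreasing over
# trials in the work flow condition only (an audio x trial interaction)
df["rt"] = (600
            + 80 * (df["flanker"] == "incongruent")
            - 1.0 * df["trial"] * (df["audio"] == "workflow")
            + rng.normal(0, 50, n))

candidates = {
    "flanker + audio*trial": "rt ~ C(flanker) + C(audio) * trial",
    "flanker + audio + trial": "rt ~ C(flanker) + C(audio) + trial",
    "empty": "rt ~ 1",
}
aics = {name: smf.ols(formula, df).fit().aic for name, formula in candidates.items()}
best = min(aics, key=aics.get)
for name, a in sorted(aics.items(), key=lambda kv: kv[1]):
    print(f"{name}: AIC = {a:.1f}, dAIC = {a - aics[best]:.1f}")
```

Because the simulated data contain a genuine audio-by-trial interaction, the interaction model attains the lowest AIC, paralleling the winning model in Table 2.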

Intriguingly, there was a main effect of basal anxiety level on flanker task RT, with participants who scored higher on the DASS-21 anxiety subscale responding more slowly overall (β = 18.22, i.e., for each one-point increment in DASS-21 anxiety score, RTs increased by 18 ms; χ21 = 30.82, p<0.001; see S4A Fig in S1 File). A main effect was also observed for baseline sensitivity to musical reward, with participants who scored higher on the BMRQ responding faster overall (β = -6.02, i.e., for each one-point increment in BMRQ score, RTs decreased by 6 ms; χ21 = 7.79, p = 0.005; see S4B Fig in S1 File). This result remained significant when excluding data from participants in the office noise condition (χ21 = 5.45, p = 0.019). Finally, there was a significant interaction between audio condition and trial number (χ23 = 16.26, p = 0.001; see Fig 5A). Post-hoc tests showed that participants in the work flow condition became faster over time, regardless of flanker condition, with the rate of decrease in RT (work flow slope: β = -0.421) being significantly steeper than in each of the other conditions (deep focus slope: β = 0.615, difference in slope with work flow = 1.03, t9959 = 3.80, ptukey<0.001; pop hits slope: β = 0.340, difference in slope with work flow = 0.76, t9959 = 2.79, ptukey = 0.027; office slope: β = 0.356, difference in slope with work flow = 0.77, t9959 = 2.83, ptukey = 0.024). That is, despite similar levels of accuracy, participants in the work flow condition responded significantly faster to flanker stimuli over time.

Fig 5. Effect of audio condition on flanker task RT over time.

Fig 5

A. RT slopes with increasing trial number (time) for each audio condition, as predicted by the best RT model (see Table 2). Participants in the work flow (WF) condition performed significantly faster over time, as compared to those in the deep focus (Focus), pop hits (Hits), and office noise (Office) conditions (see main text for stats). B. Participants in the work flow condition performed faster over time regardless of basal anxiety status, as indicated by the similarity of RT slopes with increasing trial number for DASS-21 anxiety scores corresponding to “normal” (score = 5), “moderate” (score = 12), and “severe” (score = 19) levels. That is, the interaction between audio condition, trial number, and basal anxiety status was non-significant (see main text for stats). Dashed lines represent 95% confidence intervals.

Given this effect of work flow music on RT over time, combined with the fact that RT was negatively impacted by high levels of basal anxiety, an exploratory analysis was performed to assess the potential relationship between these effects. Specifically, we aimed to test whether the effect of work flow music on RT over time was dependent on anxiety level. Accordingly, using R, we computed a linear mixed model to predict RT that was the same as the best model described above (see also Table 2) except that it also included the three-way interaction between audio condition, trial number, and DASS21 anxiety subscore [i.e., FlankerCondition + AudioCondition*TrialNumber*DASS21Anxiety + DASS21Depression + DASS21Stress + BMRQ + GOLD-MSI + (1|ID)]. The results showed that this three-way interaction was not significant (χ23 = 3.01, p = 0.389). This suggests that the observed effect of work flow music on task performance over time was independent of anxiety severity as assessed by the DASS-21 (see Fig 5B), despite the overall slower performance of individuals with high levels of anxiety (see S4A Fig in S1 File).

In a final exploratory analysis, and for the work flow condition only, we assessed the relationship between the significant observed effects on mood and task performance. Specifically, we aimed to test whether the magnitude of mood improvement was related to the magnitude of RT decrease over time. To do this we computed a linear mixed model predicting RT as a function of fixed effects for trial number, total change in PANAS score, and their interaction, as well as a random intercept effect for participant [i.e., RT ~ TrialNumber*TotalPANAS+ (1|ID)]. The results (see S4 Table in S1 File for a full list of parameter estimates with 95% confidence intervals) showed that the interaction term was significant (χ21 = 4.96, p = 0.0258), suggesting that for participants in the work flow condition (N = 50), larger improvements in mood were associated with increasingly faster performance over time (Fig 6).
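The final exploratory model [RT ~ TrialNumber*TotalPANAS + (1|ID)] can be sketched with statsmodels' mixed-model interface. The authors fit this model in R; the Python version below runs on simulated data in which the per-trial speed-up grows with each participant's (hypothetical) mood improvement, so all names and values are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_subj, n_trials = 50, 54
ids = np.repeat(np.arange(n_subj), n_trials)
trial = np.tile(np.arange(1, n_trials + 1), n_subj)
panas = rng.normal(0.4, 0.3, n_subj)    # hypothetical mood change per subject
intercepts = rng.normal(0, 40, n_subj)  # random participant intercepts
# Hypothetical RTs: the per-trial speed-up scales with mood improvement
rt = (600 + intercepts[ids]
      - (0.2 + 1.5 * panas[ids]) * trial
      + rng.normal(0, 40, n_subj * n_trials))
df = pd.DataFrame({"id": ids, "trial": trial, "panas": panas[ids], "rt": rt})

# Random-intercept mixed model mirroring RT ~ TrialNumber*TotalPANAS + (1|ID)
model = smf.mixedlm("rt ~ trial * panas", df, groups=df["id"])
res = model.fit()
print(res.summary())
p_int = res.pvalues["trial:panas"]
print(f"trial x panas interaction p = {p_int:.4f}")
```

A significant negative `trial:panas` coefficient is the pattern reported in the study: the larger the mood improvement, the steeper the RT decrease over trials.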

Fig 6. Relationship between effects on mood and performance in the work flow condition.

Fig 6

Each line shows the model prediction for RT over time for a different change in total PANAS score (from -0.25 to 1). For participants who listened to work flow music, larger improvements in mood (black to blue gradient) were associated with increasingly rapid responses on flanker trials with increasing trial number (i.e., steeper slopes; p = 0.0258).

Discussion

In this study we investigated the effect of two acoustically distinct forms of music advertised to support attentional focus and concentration on mood and performance during a flanker task. The “work flow” music we tested was characterized by strong rhythm with simple tonality, broadly distributed spectral energy below ~6000 Hz, and moderate dynamism. By contrast, the “deep focus” music that we tested was more minimalistic, with similarly simple tonality, but weaker rhythm, lower and more restricted spectral energy, and more reserved dynamism (see Table 1 and S1 Table). The effects of these types of music were compared with those of popular music and office noise, representing mainstream musical stimulation and the acoustic background in a naturalistic social working environment, respectively. The experiment was conducted between-subjects, with approximately 50 participants per condition. Groups were well-balanced in terms of basic demographics, musical training, sensitivity to musical reward, basal mental health status, and stimulus volume.

Regarding mood, the results showed that only the work flow music had a significant effect, driving improvements in overall mood from before to after the flanker task (see Fig 3A), underpinned by increases in positive affect and decreases in negative affect. While there was no significant interaction between audio condition and pleasure, there was a main effect of pleasure on total change in PANAS score, with greater improvements in mood being associated with greater music liking across conditions. Importantly for applications of music in mental health and wellness, the observed effect of work flow music on mood was independent of individual variation in baseline levels of self-reported anxiety, depression, or stress over the past week, as measured by the DASS-21 (see Fig 3B). This suggests that work flow music may be effective for mood management even when people are suffering from emotional distress (approximately 46%, 30%, and 20% of the work flow sample scored in the moderate to extremely severe range for depression, anxiety, and stress, respectively). Regarding flanker task performance, the results showed that, while response accuracy was not affected by audio condition, response speed was. Specifically, we observed a significant interaction between audio condition and trial number, such that participants listening to work flow music responded more quickly over time. Also of importance for applications of music in mental health and wellness, this effect was independent of individual differences in self-reported anxiety, which otherwise negatively impacted response speed. This suggests that work flow music may be useful for people losing focus due to high levels of anxiety (approximately 30% of the work flow sample scored in the moderate to extremely severe range for anxiety).
Finally, in the work flow condition, effects on mood and performance were correlated such that greater improvements in mood were associated with increasingly faster responses, regardless of flanker stimulus condition (see Fig 6).

The work flow music used here was composed by professional musicians commissioned by a music therapy app for the purpose of supporting on-task functioning at work. In creating these tracks, composers were asked to follow detailed acoustic guidelines specific to the listening category “work flow” and an emotional trajectory called “anxious-to-energized”. These guidelines were created by the music therapy app in consultation with co-authors D.L.B. and C.T., who also reviewed each track in an iterative editorial process to ensure adherence to the acoustic guidelines. In broad terms, these guidelines specified that the majority of each track comprise strong rhythmic features to support groove and simple melodic and harmonic features, with explicit avoidance of features that readily divert attentional focus (e.g., concentrated high-frequency energy, highly articulated attacks, lyrics; [29, 59, 89, 90]). By contrast, the deep focus music used here can be expected to exhibit greater diversity in compositional process, having instead been deemed suitable to support on-task functioning after composition took place, by the personnel and/or algorithms that curated the source playlist.

The PANAS results showing significant changes in mood after 7–10 minutes of listening to work flow music (see Fig 3A) add to the growing literature showing that listening to music can improve mood on very short timescales [13, 91, 92]. This raises the question of why no significant changes in mood were observed in the other music conditions. As already mentioned, no significant differences in self-reported liking were observed between work flow and deep focus music, suggesting that the absence of mood effects in the deep focus condition is not explained by the music being inherently less pleasurable. Instead, we speculate that the minimalistic nature of the deep focus music, especially its relatively low energy level (as implied by, e.g., lower spectral flux and entropy; see Table 1; [67]), may have been responsible for its failure to inspire a mood shift. Another, non-mutually exclusive possibility relates to the fact that the work flow stimuli were coherent wholes, each comprising one evolving track with a duration of approximately 10 minutes. By contrast, the deep focus tracks were only approximately three minutes each, requiring the combination of multiple tracks into sets to achieve the same duration (dosage) for comparison. While these sets were assembled with the intention of maintaining uniformity in musical features (see S1 Table), inevitable musical differences and inter-track transitions may nonetheless have interfered with the consistency of mood effects. It is plausible, for example, that entrainment to a continuous, coherent track helped participants perform the task faster, whereas adapting to a dynamically changing auditory environment could have increased cognitive load and impaired performance. In the pop hits condition, interference of this kind can be expected to have been even greater, as variability between tracks was more pronounced (see S1 Table).
Further, given that pop hits come from different music genres, that individual genre preferences vary widely, and that different genres can have very different mood effects [93], the mix of genres in the pop hits condition may have further precluded any consistent mood effects across listeners.

Turning to the observed effect of work flow music on task performance, our results are consistent with a large body of research showing that listening to music can have beneficial effects on various aspects of cognition, including verbal learning, memory, and semantic fluency, as well as attention in some cases [33–39, 60, 94–96]. The effect of work flow music on flanker task performance observed here was a general increase in response speed across flanker stimulus conditions (i.e., congruent, incongruent, and neutral). While much research using the flanker task has traditionally focused on the flanker interference effect (i.e., the difference in RT between incongruent and congruent trials; [97]) as an index of selective attention, recent research assessing the effect of background music found results consistent with ours. As mentioned in the introduction, two recent studies reported a similar general increase in response speed, independent of flanker stimulus condition, while participants listened to “joyful” classical music [38, 39].

The results of this study appear to be in good accord with arousal-mood theory, which posits that positive effects of music listening on cognition can be understood in terms of music’s well-documented capacities to upregulate arousal level and positive affect [98]. Although we did not explicitly measure subjective perceptions of arousal, there is good reason to expect that both the work flow tracks and pop hits tracks used here were relatively arousing, while the deep focus music was not. The perception of groove is related to arousal, whether measured physiologically or perceptually [90, 99], and was comparable between the work flow and pop hits conditions (means = 0.724±0.246 and 0.729±0.277, respectively; BF01 = 4.55; see Fig 4). Perceived groove in these conditions was also significantly higher than it was in the deep focus condition (mean = 0.549±0.337; see Fig 4). Moreover, the work flow and pop hits tracks were relatively high in musical features closely associated with perceived arousal, such as spectral flux, spectral entropy, pulse clarity, and fluctuation maximum (see Table 1; [65, 67]), whereas the deep focus tracks exhibited lower values for these features. Regarding positive affect, only the work flow tracks increased positive affect, as measured by the PANAS survey. That said, both the work flow and deep focus tracks were reasonably well-liked (mean pleasure = 0.829±0.177 for work flow, 0.860±0.150 for deep focus; BF01 = 3.25), and relatively more so than the pop hits tracks, which were rated lower on average (mean = 0.720±0.294; Fig 4), and more variably overall (see error bars). In the context of arousal-mood theory, the beneficial effects of work flow music on performance speed can thus be rationalized by its capacity to stimulate both high arousal and positive affect simultaneously.
By contrast, the other music conditions may have failed to modulate response speed because of lesser capacities to do so: the deep focus tracks were pleasurable but not arousing; the pop hits tracks were arousing but not consistently pleasurable.

Additional support for the arousal-mood theory comes from our exploratory analysis showing that for participants in the work flow condition, greater improvements in mood were associated with more rapid improvements in response speed over time (see Fig 6). These results are of particular importance when paired with our finding that individuals with higher anxiety levels had slower RTs in the flanker task (see S4A Fig in S1 File). Anxiety is well-known to hinder cognition [100–103], including selective attention [104]. This raises the possibility that, under some circumstances, music modulates cognition via emotional effects that downregulate anxiety and its detrimental effect on cognition, leading to an increase in performance speed. In this context, we also note our finding that individuals with higher sensitivity to musical reward had faster RTs in the flanker task (see S4B Fig in S1 File), which is consistent with previous research showing that patients with higher sensitivity to musical reward benefit the most from music-based interventions [105]. If high sensitivity to musical reward confers heightened sensitivity to music’s emotional effects, such individuals may also benefit the most from using music to improve task performance. Finally, in considering our results in the context of arousal-mood theory, it should be noted that the theory was developed to explain cognitive effects driven by listening to music before performing a particular task [98], whereas our results (and those of other recent studies, e.g., [38, 39]) primarily concern music listening during task performance. These results suggest that the theory may hold equally well in both cases.

Our study has a few limitations. First, data were collected online, and while multiple attention checks were implemented (including a test ensuring that participants were wearing headphones), online experiments are inevitably less well-controlled than laboratory studies. Second, the study involved four different groups of participants, and while we confirmed that these groups were comparable in terms of basic demographic, music-related, and mental health variables, there may have been relevant group-level differences that our surveys did not capture. Third, we did not explicitly measure subjective impressions of arousal, which may (or may not) have provided more direct support for our interpretation of the results in terms of arousal-mood theory. Future research examining the effects of music on cognitive performance should collect behavioral and physiological measures of arousal alongside mood to further evaluate this theory. Fourth, the time scale of this experiment was short compared to that of many real-world work tasks, and the music tested here may have different effects at longer time scales. Finally, despite placing consistent and measurable demands on cognition, the flanker task is relatively simple. Indeed, most participants made few errors across the flanker conditions (mean correct answers ± SD for all participants, regardless of audio condition: no-go, 15.12 ± 4.93; congruent, 17.66 ± 1.08; incongruent, 16.65 ± 3.13; neutral, 17.48 ± 1.63). This ceiling effect might account for the lack of significant effects of audio condition on flanker accuracy. Future studies should examine music’s cognitive effects with a greater variety of tasks and/or more naturalistic work settings (e.g., real-world problem solving, creative writing or design, critical analysis) to more precisely define how and when listening to music can be beneficial.

In conclusion, here we show that instrumental music intentionally composed to support attentional focus and concentration during work—comprising strong rhythm, simple tonality, broad spectral energy, and moderate dynamism—improves mood and increases processing speed during a cognitively demanding task that requires selective attention. These effects were demonstrated in comparison to more minimalistic music similarly advertised to improve attentional focus, popular music, and typical background noise in a calm office environment. This work has real-world implications for providing the general population with effective and affordable strategies to regulate mood and performance during routine work tasks often experienced as emotionally and physically taxing.

Supporting information

S1 File. Supplementary text that includes (i) a brief description of each of the 23 musical features used to characterize the music used in this work along with the MIRtoolbox / Matlab function calls used to extract them; (ii) S2-S4 Tables; and (iii) S1-S5 Figs.

(PDF)

S1 Table. Table with the track names and artists, set arrangements, and musical features for each track in each audio condition.

(XLSX)

S1 Data. All behavioral data analyzed using JASP.

(XLSX)

S2 Data. All behavioral data analyzed using R.

(XLSX)

S1 Audio. Track used for the work flow condition.

(M4A)

S2 Audio. Track used for the work flow condition.

(M4A)

S3 Audio. Track used for the work flow condition.

(M4A)

S4 Audio. Track used for the work flow condition.

(M4A)

S5 Audio. Track used for the office noise condition.

(M4A)


Acknowledgments

We thank David Poeppel for early feedback on this manuscript.

Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

This work was funded by a grant from the company that provided the tracks for the work flow condition to Dr. Ripolles. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Dubé L, Le Bel J. The content and structure of laypeople’s concept of pleasure. Cogn Emot. 2003;17(2):263–95. doi: 10.1080/02699930302295 [DOI] [PubMed] [Google Scholar]
  • 2.Juslin PN, Västfjäll D. Emotional responses to music: The need to consider underlying mechanisms. Behav Brain Sci. 2008;31(5):559–75. doi: 10.1017/S0140525X08005293 [DOI] [PubMed] [Google Scholar]
  • 3.Mehr SA, Singh M, Knox D, Ketter DM, Pickens-Jones D, Atwood S, et al. Universality and diversity in human song. Science. 2019;366(6468):eaax0868. doi: 10.1126/science.aax0868 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Weinberg MK, Joseph D. If you’re happy and you know it: Music engagement and subjective well-being. Psychol Music. 2017;45(2):257–67. [Google Scholar]
  • 5.Sloboda JA, O’Neill SA, Ivaldi A. Functions of music in everyday life: An exploratory study using the Experience Sampling Method. Music Sci. 2001;5(1):9–32. [Google Scholar]
  • 6.Baltazar M, Saarikallio S. Strategies and mechanisms in musical affect self-regulation: A new model. Music Sci. 2019;23(2):177–95. [Google Scholar]
  • 7.Greasley AE, Lamont AM. Music preference in adulthood: Why do we like the music we do. In: Proceedings of the 9th International Conference on Music Perception and Cognition. Bologna, Italy: University of Bologna; 2006. p. 960–6. [Google Scholar]
  • 8.Schäfer T, Sedlmeier P, Städtler C, Huron D. The psychological functions of music listening. Front Psychol. 2013;4:511. doi: 10.3389/fpsyg.2013.00511 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.van Goethem A, Sloboda J. The functions of music for affect regulation. Music Sci. 2011;15(2):208–28. [Google Scholar]
  • 10.Chin T, Rickard NS. Emotion regulation strategy mediates both positive and negative relationships between music uses and well-being. Psychol Music. 2014;42(5):692–713. [Google Scholar]
  • 11.Mas-Herrero E, Singer N, Ferreri L, McPhee M, Zatorre R, Ripollés P. Music is negatively correlated to depressive symptoms during the COVID-19 pandemic via reward-related mechanisms. Ann N Y Acad Sci. 2023;00:1–13. [DOI] [PubMed] [Google Scholar]
  • 12.Ferreri L, Singer N, McPhee M, Ripollés P, Zatorre RJ, Mas-Herrero E. Engagement in music-related activities during the COVID-19 pandemic as a mirror of individual differences in musical reward and coping strategies. Front Psychol. 2021;12. doi: 10.3389/fpsyg.2021.673772 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Witte M, Spruit A, Hooren S, Moonen X, Stams GJ. Effects of music interventions on stress-related outcomes: a systematic review and two meta-analyses. Health Psychol Rev. 2020;14(2):294–324. doi: 10.1080/17437199.2019.1627897 [DOI] [PubMed] [Google Scholar]
  • 14.Sakka LS, Juslin PN. Emotion regulation with music in depressed and non-depressed individuals: Goals, strategies, and mechanisms. Music Sci. 2018;1:2059204318755023. [Google Scholar]
  • 15.Sihvonen AJ, Särkämö T, Leo V, Tervaniemi M, Altenmüller E, Soinila S. Music-based interventions in neurological rehabilitation. Lancet Neurol. 2017;16(8):648–60. doi: 10.1016/S1474-4422(17)30168-0 [DOI] [PubMed] [Google Scholar]
  • 16.Ripollés P, Rojo N, Grau-Sánchez J, Amengual JL, Càmara E, Marco-Pallarés J, et al. Music supported therapy promotes motor plasticity in individuals with chronic stroke. Brain Imaging Behav. 2016;10(4):1289–307. doi: 10.1007/s11682-015-9498-x [DOI] [PubMed] [Google Scholar]
  • 17.Juslin PN, Laukka P. Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. J New Music Res. 2004;33(3):217–38. [Google Scholar]
  • 18.Kratus J. A developmental study of children’s interpretation of emotion in music. Psychol Music. 1993;21(1):3–19. [Google Scholar]
  • 19.Hays T, Minichiello V. The meaning of music in the lives of older people: A qualitative study. Psychol Music. 2005;33(4):437–51. [Google Scholar]
  • 20.Goldstein A. Thrills in response to music and other stimuli. Psychobiology. 1980;8:126–9. [Google Scholar]
  • 21.Mallik A, Chanda M, Levitin D. Anhedonia to music and mu-opioids: Evidence from the administration of naltrexone. Sci Rep. 2017;7:41952. doi: 10.1038/srep41952 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Koelsch S. Brain correlates of music-evoked emotions. Nat Rev Neurosci. 2014;15(3):170–80. doi: 10.1038/nrn3666 [DOI] [PubMed] [Google Scholar]
  • 23.Koelsch S. A coordinate-based meta-analysis of music-evoked emotions. NeuroImage. 2020;223:117350. doi: 10.1016/j.neuroimage.2020.117350 [DOI] [PubMed] [Google Scholar]
  • 24.Mas-Herrero E, Maini L, Sescousse G, Zatorre RJ. Common and distinct neural correlates of music and food-induced pleasure: A coordinate-based meta-analysis of neuroimaging studies. Neurosci Biobehav Rev. 2021;123:61–71. doi: 10.1016/j.neubiorev.2020.12.008 [DOI] [PubMed] [Google Scholar]
  • 25.Blood AJ, Zatorre RJ. Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc Natl Acad Sci. 2001;98(20):11818–23. doi: 10.1073/pnas.191355898 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Salimpoor VN, Benovoy M, Larcher K, Dagher A, Zatorre RJ. Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nat Neurosci. 2011;14(2):257–62. doi: 10.1038/nn.2726 [DOI] [PubMed] [Google Scholar]
  • 27.Ferreri L, Mas-Herrero E, Zatorre RJ, Ripollés P, Gomez-Andres A, Alicart H, et al. Dopamine modulates the reward experiences elicited by music. Proc Natl Acad Sci. 2019;116(9):3793–8. doi: 10.1073/pnas.1811878116 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Mas‐Herrero E, Ferreri L, Cardona G, Zatorre RJ, Pla‐Juncà F, Antonijoan RM, et al. The role of opioid transmission in music‐induced pleasure. Ann N Y Acad Sci. 2023 Feb;1520(1):105–14. doi: 10.1111/nyas.14946 [DOI] [PubMed] [Google Scholar]
  • 29.Bowling DL. Biological principles for music and mental health. Transl Psychiatry. 2023;13:374. doi: 10.1038/s41398-023-02671-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Hou J, Song B, Chen AC, Sun C, Zhou J, Zhu H, et al. Review on neural correlates of emotion regulation and music: implications for emotion dysregulation. Front Psychol. 2017;8:501. doi: 10.3389/fpsyg.2017.00501 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Moreno S. Can music influence language and cognition? Contemp Music Rev. 2009;28(3):329–45. [Google Scholar]
  • 32.Ferreri L, Verga L. Benefits of music on verbal learning and memory: How and when does it work? Music Percept Interdiscip J. 2016;34(2):167–82. [Google Scholar]
  • 33.Ferreri L, Rodriguez-Fornells A. Music-related reward responses predict episodic memory performance. Exp Brain Res. 2017;235(12):3721–31. doi: 10.1007/s00221-017-5095-0 [DOI] [PubMed] [Google Scholar]
  • 34.Ferreri L, Mas-Herrero E, Cardona G, Zatorre RJ, Antonijoan RM, Valle M, et al. Dopamine modulations of reward-driven music memory consolidation. Ann N Y Acad Sci. 2021;1502(1):85–98. doi: 10.1111/nyas.14656 [DOI] [PubMed] [Google Scholar]
  • 35.Cardona G, Rodriguez-Fornells A, Nye H, Rifà-Ros X, Ferreri L. The impact of musical pleasure and musical hedonia on verbal episodic memory. Sci Rep. 2020;10(1):16113. doi: 10.1038/s41598-020-72772-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Kämpfe J, Sedlmeier P, Renkewitz F. The impact of background music on adult listeners: A meta-analysis. Psychol Music. 2010;39:424–48. [Google Scholar]
  • 37.Cloutier A, Fernandez NB, Houde-Archambault C, Gosselin N. Effect of background music on attentional control in older and young adults. Front Psychol. 2020;11:557225. doi: 10.3389/fpsyg.2020.557225 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Fernandez NB, Trost WJ, Vuilleumier P. Brain networks mediating the influence of background music on selective attention. Soc Cogn Affect Neurosci. 2019;14:1441–52. doi: 10.1093/scan/nsaa004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Fernandez NB, Vuilleumier P, Gosselin N, Peretz I. Influence of Background Musical Emotions on Attention in Congenital Amusia. Front Hum Neurosci. 2021;14:566841. doi: 10.3389/fnhum.2020.566841 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Davies DR, Lang L, Shackleton VJ. The effects of music and task difficulty on performance at a visual vigilance task. Br J Psychol. 1973;64(3):383–9. doi: 10.1111/j.2044-8295.1973.tb01364.x [DOI] [PubMed] [Google Scholar]
  • 41.Küssner MB. Eysenck’s theory of personality and the role of background music in cognitive task performance: A mini-review of conflicting findings and a new perspective. Front Psychol. 2017;8. doi: 10.3389/fpsyg.2017.01991 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Särkämö T, Ripollés P, Vepsäläinen H, Autti T, Silvennoinen HM, Salli E, et al. Structural changes induced by daily music listening in the recovering brain after middle cerebral artery stroke: a voxel-based morphometry study. Front Hum Neurosci. 2014;8:245. doi: 10.3389/fnhum.2014.00245 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Särkämö T, Tervaniemi M, Laitinen S, Forsblom A, Soinila S, Mikkonen M, et al. Music listening enhances cognitive recovery and mood after middle cerebral artery stroke. Brain. 2008;131(3):866–76. doi: 10.1093/brain/awn013 [DOI] [PubMed] [Google Scholar]
  • 44.Darrow AA, Johnson C, Agnew S, Fuller ER, Uchisaka M. Effect of preferred music as a distraction on music majors’ and nonmusic majors’ selective attention. Bull Counc Res Music Educ. 2006;21–31. [Google Scholar]
  • 45.Rowe G, Hirsh JB, Anderson AK. Positive affect increases the breadth of attentional selection. Proc Natl Acad Sci. 2007;104:383–8. doi: 10.1073/pnas.0605198104 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Eriksen BA, Eriksen CW. Effects of noise letters upon the identification of a target letter in a nonsearch task. Percept Psychophys. 1974;16:143–9. [Google Scholar]
  • 47.Sanders AF, Lamers JM. The Eriksen flanker effect revisited. Acta Psychol (Amst). 2002;109(1):41–56. doi: 10.1016/s0001-6918(01)00048-8 [DOI] [PubMed] [Google Scholar]
  • 48.Peirce J, Gray JR, Simpson S, MacAskill M, Höchenberger R, Sogo H, et al. PsychoPy2: Experiments in behavior made easy. Behav Res Methods. 2019;51:195–203. doi: 10.3758/s13428-018-01193-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Crump MJ, McDonnell JV, Gureckis TM. Evaluating Amazon’s Mechanical Turk as a tool for experimental behavioral research. PLoS One. 2013;8(3):e57410. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Lizcano-Cortés F, Gómez-Varela I, Mares C, Wallisch P, Orpella J, Poeppel D, et al. Speech-to-Speech Synchronization protocol to classify human participants as high or low auditory-motor synchronizers. STAR Protoc. 2022;3(2):101248. doi: 10.1016/j.xpro.2022.101248 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Assaneo MF, Ripollés P, Orpella J, Lin WM, Diego-Balaguer R, Poeppel D. Spontaneous synchronization to speech reveals neural mechanisms facilitating language learning. Nat Neurosci. 2019;22(4):627–32. doi: 10.1038/s41593-019-0353-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Assaneo MF, Ripollés P, Tichenor SE, Yaruss JS, Jackson ES. The Relationship Between Auditory-Motor Integration, Interoceptive Awareness, and Self-Reported Stuttering Severity. Front Integr Neurosci. 2022;16:869571. doi: 10.3389/fnint.2022.869571 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Pei Y, Li Y, Ripollés P. Automotive audio system evaluation over headphones based on the biaural vehicle impulse responses of different listening positions: A case study of a specific audio system. J Acoust Soc Am. 2022;152(4):120–120. [Google Scholar]
  • 54.Campbell JI, Thompson VA. MorePower 6.0 for ANOVA with relational confidence intervals and Bayesian analysis. Behav Res Methods. 2012;44:1255–65. doi: 10.3758/s13428-012-0186-0 [DOI] [PubMed] [Google Scholar]
  • 55.Spiritune. (n.d.). Retrieved October 10, 2021, from http://www.spiritune.com
  • 56.Spotify. (n.d.). Retrieved October 10, 2021, from http://www.spotify.com
  • 57.Billboard. (n.d.). Retrieved October 10, 2021, from http://www.billboard.com
  • 58.Vasilev MR, Kirkby JA, Angele B. Auditory Distraction During Reading: A Bayesian Meta-Analysis of a Continuing Controversy. Perspect Psychol Sci J Assoc Psychol Sci. 2018;13(5):567–97. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Sun Y, Sun C, Li C, Shao X, Liu Q, Liu H. Impact of background music on reading comprehension: influence of lyrics language and study habits. Front Psychol. 2024;15:1363562. doi: 10.3389/fpsyg.2024.1363562 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Shih YN, Huang RH, Chiang HY. Background music: Effects on attention performance. Work. 2012;42(4):573–8. doi: 10.3233/WOR-2012-1410 [DOI] [PubMed] [Google Scholar]
  • 61.Pigeon, S. (n.d.). myNoise. Retrieved October 10, 2021, from http://mynoise.net
  • 62.Rogue Amoeba Software. (n.d.). Audio Hijack version 3.2.7 [software]. Retrieved from https://rogueamoeba.com/audiohijack/
  • 63.Lartillot O, Eerola T, Toiviainen P, Fornari J. Multi-feature modeling of pulse clarity: design, validation, and optimization. In: International conference on music information retrieval. 2008. p. 521.
  • 64.Lartillot O, Toiviainen P. A Matlab toolbox for musical feature extraction from audio. In: Proceedings of the 9th International conference on digital audio effects. 2007. p. 244.
  • 65.Eerola T, Lartillot O, Toiviainen P. Prediction of multidimensional emotional ratings in music from audio using multivariate regression models. In: Proceedings of the 10th International Society for Music Information Retrieval Conference, ISMIR. 2009. p. 621–6. [Google Scholar]
  • 66.Alluri V, Toiviainen P, Jääskeläinen IP, Glerean E, Sams M, Brattico E. Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm. NeuroImage. 2012;59:3677–89. doi: 10.1016/j.neuroimage.2011.11.019 [DOI] [PubMed] [Google Scholar]
  • 67.Gingras B, Marin MM, Fitch WT. Beyond intensity: spectral features effectively predict music-induced subjective arousal. Q J Exp Psychol. 2014;67:1428–46. doi: 10.1080/17470218.2013.863954 [DOI] [PubMed] [Google Scholar]
  • 68.Lange EB, Frieler K. Challenges and opportunities of predicting musical emotions with perceptual and automatized features. Music Percept. 2018;36:217–42. [Google Scholar]
  • 69.Müllensiefen D, Gingras B, Musil J, Stewart L. The musicality of non-musicians: An index for assessing musical sophistication in the general population. PLoS One. 2014;9:e89642. doi: 10.1371/journal.pone.0089642 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Mas-Herrero E, Marco-Pallares J, Lorenzo-Seva U, Zatorre RJ, Rodriguez-Fornells A. Individual differences in music reward experiences. Music Percept Interdiscip J. 2013;31(2):118–38. [Google Scholar]
  • 71.Lovibond SH, Lovibond PF. Manual for the depression, anxiety, and stress scales. Psychology Foundation of Australia; 1995. [Google Scholar]
  • 72.Milne AE, Bianco R, Poole KC. An online headphone screening test based on dichotic pitch. Behav Res. 2021;53:1551–62. doi: 10.3758/s13428-020-01514-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: The PANAS scales. J Pers Soc Psychol. 1988;54(6):1063–70. doi: 10.1037//0022-3514.54.6.1063 [DOI] [PubMed] [Google Scholar]
  • 74.Bowling DL, Gahr J, Ancochea PG, Hoeschele M, Canoine V, Fusani L, et al. Endogenous oxytocin, cortisol, and testosterone in response to group singing. Horm Behav. 2022;139:105105. doi: 10.1016/j.yhbeh.2021.105105 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Wagenmakers EJ, Marsman M, Jamil T, Ly A, Verhagen J, Love J, et al. Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications. Psychon Bull Rev. 2018;25:35–57. doi: 10.3758/s13423-017-1343-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Wagenmakers EJ, Love J, Marsman M, Jamil T, Ly A, Verhagen J, et al. Bayesian inference for psychology. Part II: Example applications with JASP. Psychon Bull Rev. 2018;25:58–76. doi: 10.3758/s13423-017-1323-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Rouder JN, Morey RD. Default Bayes factors for model selection in regression. Multivar Behav Res. 2012;47(6):877–903. doi: 10.1080/00273171.2012.734737 [DOI] [PubMed] [Google Scholar]
  • 78.Morey RD, Rouder JN, Jamil T. Package ‘BayesFactor’. 2015. [Google Scholar]
  • 79.Lee MD, Wagenmakers EJ. Bayesian cognitive modeling: A practical course. Cambridge University Press; 2013. [Google Scholar]
  • 80.Jamil T, Ly A, Morey RD, Love J, Marsman M, Wagenmakers EJ. Default “Gunel and Dickey” Bayes factors for contingency tables. Behav Res Methods. 2017;49(2):638–52. doi: 10.3758/s13428-016-0739-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81.Yu Z, Guindani M, Grieco SF, Chen L, Holmes TC, Xu X. Beyond t test and ANOVA: applications of mixed-effects models for more rigorous statistical analysis in neuroscience research. Neuron. 2022;110(1):21–35. doi: 10.1016/j.neuron.2021.10.030 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82.Krueger C, Tian L. A comparison of the general linear mixed model and repeated measures ANOVA using a dataset with multiple missing data points. Biol Res Nurs. 2004;6(2):151–7. doi: 10.1177/1099800404267682 [DOI] [PubMed] [Google Scholar]
  • 83.Bates D, Mächler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. J Stat Softw [Internet]. 2015. [cited 2024 Oct 3];67(1). Available from: http://www.jstatsoft.org/v67/i01/ [Google Scholar]
  • 84.Mas-Herrero E, Zatorre RJ, Rodriguez-Fornells A, Marco-Pallarés J. Dissociation between musical and monetary reward responses in specific musical anhedonia. Curr Biol. 2014;24(6):699–704. doi: 10.1016/j.cub.2014.01.068 [DOI] [PubMed] [Google Scholar]
  • 85.Martínez-Molina N, Mas-Herrero E, Rodríguez-Fornells A, Zatorre RJ, Marco-Pallarés J. Neural correlates of specific musical anhedonia. Proc Natl Acad Sci. 2016;113(46):7337–45. doi: 10.1073/pnas.1611211113 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Symonds MRE, Moussalli A. A brief guide to model selection, multimodel inference and model averaging in behavioural ecology using Akaike’s information criterion. Behav Ecol Sociobiol. 2011;65:13–21. [Google Scholar]
  • 87.Abrams EB, Namballa R, He R, Poeppel D, Ripollés P. Elevator music as a tool for the quantitative characterization of reward. Ann N Y Acad Sci. 2024 May;1535(1):121–36. doi: 10.1111/nyas.15131 [DOI] [PubMed] [Google Scholar]
  • 88.Bosch I, Salimpoor VN, Zatorre RJ. Familiarity mediates the relationship between emotional arousal and pleasure during music listening. Front Hum Neurosci. 2013;7:534. doi: 10.3389/fnhum.2013.00534 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Souza AS, Barbosa LCL. Should We Turn off the music? Music with lyrics interferes with cognitive tasks. J Cogn. 2023;6:1–18. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 90.Bowling DL, Graf Ancochea P, Hove MJ, Fitch WT. Pupillometry of groove: evidence for noradrenergic arousal in the link between music and movement. Front Neurosci. 2018;12:01039. doi: 10.3389/fnins.2018.01039 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91.Mok E, Wong KY. Effects of music on patient anxiety. AORN J. 2003;77(2):396–410. doi: 10.1016/s0001-2092(06)61207-6 [DOI] [PubMed] [Google Scholar]
  • 92.Wu PY, Huang ML, Lee WP, Wang C, Shih WM. Effects of music listening on anxiety and physiological responses in patients undergoing awake craniotomy. Complement Ther Med. 2017;32:56–60. doi: 10.1016/j.ctim.2017.03.007 [DOI] [PubMed] [Google Scholar]
  • 93.McCraty R, Barrios-Choplin B, Atkinson M, Tomasino D. The effects of different types of music on mood, tension, and mental clarity. Altern Ther Health Med. 1998;4(1):75–84. [PubMed] [Google Scholar]
  • 94.Shih YN, Chien WH, Chiang HS. Elucidating the relationship between work attention performance and emotions arising from listening to music. Work. 2016;55(2):489–94. doi: 10.3233/WOR-162408 [DOI] [PubMed] [Google Scholar]
  • 95.Jiang J, Scolaro AJ, Bailey K, Chen A. The effect of music-induced mood on attentional networks. Int J Psychol. 2011;46(3):214–22. doi: 10.1080/00207594.2010.541255 [DOI] [PubMed] [Google Scholar]
  • 96.Thompson RG, Moulin CJA, Hayre S, Jones RW. Music enhances category fluency in healthy older adults and Alzheimer’s disease patients. Exp Aging Res. 2005;31(1):91–9. doi: 10.1080/03610730590882819 [DOI] [PubMed] [Google Scholar]
  • 97.Wild-Wall N, Falkenstein M, Hohnsbein J. Flanker interference in young and older participants as reflected in event-related potentials. Brain Res. 2008;1211:72–84. doi: 10.1016/j.brainres.2008.03.025 [DOI] [PubMed] [Google Scholar]
  • 98.Thompson WF, Schellenberg EG, Husain G. Arousal, mood, and the Mozart effect. Psychol Sci. 2001;12:248–51. doi: 10.1111/1467-9280.00345 [DOI] [PubMed] [Google Scholar]
  • 99.Senn O, Bechtold TA, Jerjen R, Kilchenmann L, Hoesl F. Three Psychometric Scales for Groove Research: Inner Representation of Temporal Regularity, Time-Related Interest, and Energetic Arousal. Music Sci. 2023;6:1–16. [Google Scholar]
  • 100.Shi R, Sharpe L, Abbott M. A meta-analysis of the relationship between anxiety and attentional control. Clin Psychol Rev. 2019; doi: 10.1016/j.cpr.2019.101754 [DOI] [PubMed] [Google Scholar]
  • 101.Ajilchi B, Nejati V. Executive Functions in Students With Depression, Anxiety, and Stress Symptoms. Basic Clin Neurosci. 2017;8(3):223–32. doi: 10.18869/nirp.bcn.8.3.223 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 102.Ursache A, Raver CC. Trait and state anxiety: Relations to executive functioning in an at-risk sample. Cogn Emot. 2014;28(5):845–55. doi: 10.1080/02699931.2013.855173 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103.Zainal NH, Newman MG. Executive function and other cognitive deficits are distal risk factors of generalized anxiety disorder 9 years later. Psychol Med. 2018;48(12):2045–53. doi: 10.1017/S0033291717003579 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 104.Chen S, Yao N, Qian M, Lin M. Attentional biases in high social anxiety using a flanker task. J Behav Ther Exp Psychiatry. 2016;51:27–34. doi: 10.1016/j.jbtep.2015.12.002 [DOI] [PubMed] [Google Scholar]
  • 105.Grau‐Sánchez J, Duarte E, Ramos‐Escobar N, Sierpowska J, Rueda N, Redón S, et al. Music‐supported therapy in the rehabilitation of subacute stroke patients: a randomized controlled trial. Ann N Y Acad Sci. 2018;1423(1):318–28. doi: 10.1111/nyas.13590 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Bruno Alejandro Mesz

26 Aug 2024

PONE-D-24-26875
Effects of Music Advertised to Support Focus on Mood and Processing Speed
PLOS ONE

Dear Dr. Ripolles,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Two experts in the field have carefully reviewed the manuscript entitled "Effects of Music Advertised to Support Focus on Mood and Processing Speed". Both reviewers have made observations that need to be addressed (see below).

In light of these reviews, I am requesting a minor revision and resubmission, in which you will need to respond to each point made in the reviews.

Please submit your revised manuscript by Oct 10 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Bruno Alejandro Mesz, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. Thank you for stating the following in the Acknowledgments Section of your manuscript:

"This work was funded by a grant from the company that provided the tracks for the work flow condition to Dr. Ripolles."

We note that you have provided additional information within the Acknowledgements Section that is not currently declared in your Funding Statement. Please note that funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

"This work was funded by a grant from the company that provided the tracks for the work flow condition to Dr. Ripolles."

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match.

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

5. Thank you for stating the following in the Competing Interests section:

"Dr. Tomaino serves as the music therapy advisor at the company that provided the tracks for the work flow condition. Dr. Bowling serves as the neuroscience advisor at the company that provided the tracks for the work flow condition. This work was funded by a grant from the company that provided the tracks for the work flow condition to Dr. Ripolles. Dr. Orpella has no competing interests."

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

6. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

7. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: ### Summary of paper

The current paper evaluates the effects of music advertised to support attentional focus on the mood and performance of participants in a cognitively demanding task (the flanker test). The study stands out from previous work in that it focuses on evaluating the effect of music on task performance. It is also one of the few studies where the effect concerns music that is listened to while the task is being performed (instead of before). The study also investigates the effect of the music on mood and on how much participants liked it, in order to evaluate possible mechanisms through which changes in task performance may be explained (in particular, the arousal-mood theory). In a between-subjects design, the authors compared two music sets advertised to improve performance (“work flow” and “deep focus”) with two control conditions: popular music and office background noise. Groups were assessed to be comparable in terms of basic demographic, music-related, and mental health variables. Results showed that only the “work flow” music yielded a performance improvement, in the form of decreased RT. Music condition had no effect on task accuracy. The paper clearly distinguishes the set of a priori statistical analyses, introduced in the methods section, from the exploratory analyses introduced in the results section. The a priori analyses were: similarity of participant groups, effect of audio condition on mood change, effect of audio condition on performance accuracy, and effect of audio condition on performance speed considering sensitivity to musical reward, musical training, and basal psychological distress. Results showed a main effect of anxiety and of sensitivity to musical reward on RT, as well as decreased RT for the “work flow” condition. Exploratory analyses focused on possible modulators of the performance-improvement effect. Tests were performed for the effects of basal levels of depression, anxiety, and stress, as well as mood change.

Basal levels of depression, anxiety, and stress did not show an effect on performance improvement, but mood change did. Greater mood change correlated with lower RTs over time. Additionally, the authors verified that changes in mood were not predicted by an interaction of music condition with music familiarity or pleasure. Yet changes in mood were related to musical pleasure. These results are used to hypothesize that the speed improvement in the task may be due to the music’s ability to upregulate arousal and positive affect simultaneously, which seemed to happen mainly with “work flow” music, as it yielded high pleasure as well as high groove ratings, in contrast to the other music conditions.

The conclusions seem to be well supported by the results and the analysis. The three supplementary tables contain all the raw data mentioned in the paper (acoustic features of the stimuli, background data of the participants and individual trial responses).

### Main comments

Here I detail main concerns regarding presentation.

1. One of the main concerns is that I was unable to find the supplementary figures (Figures S1-S4).

2. The explanation of how the stimuli were selected lacks detail. In the Stimuli section (l117 - l161) it is said that the tracks were sampled from a larger set, but no detail is provided on the sampling method. Later on, in the discussion (l489), it is said: “While these sets were assembled with the intention of maintaining uniformity in musical features”. This should be explained in the stimuli section. Moreover, some further analysis of how similar the music sets were (e.g.: Silhouette score) could better illustrate the picture.

3. In the results for Group categorization (l324), it is stated: "Music genre preferences varied within groups but were comparable across them". Here, it is not clear what the music genre preferences refer to or how they are comparable. The text references Table S2, which contains the Pleasure, Familiarity, and Groove ratings. If this is what the text refers to, then this is explained in the discussion section, when the Bayesian statistics are presented (l513-l517). These results should be presented in the results section.

4. In the discussion, a main difference is stated between the “Work flow” condition and the other conditions based on musical attributes; namely, “musical features closely associated with perceived arousal, such as spectral flux”, as well as strong rhythmic features such as pulse clarity. Another important difference is that the “Work flow” sets comprised a single coherent track, while the other musical conditions (excluding “Office”) contained multiple tracks with silence in between. I would argue that the entrainment allowed by a coherent continuous track could be part of the reason for the improved RT, or that the cognitive load of adapting to a changing auditory environment could hinder the benefits of music.

### Minor comments

- p6, 155: "The office noise track was generated with the following 155

settings on the sound generator web page" - I would add an explanation on the criterion by which these settings were selected.

- p7, 164: I would report the MATLAB version used, as MIRToolbox has been shown to work differently with different MATLAB versions.

- p11, 267: "The same procedure was used..." - I am confused on which procedure this is as this

is for only post-measured variables instead of pre-post (which is what the last analysis referred to). Is it the same analysis as the previous paragraph?

- p17: "it also included the three-way interaction between audio condition,

trial number, and DASS21 anxiety subscore." - Does this model also contain

the previous terms with these variables (i.e.: AudioCondition*TrialNumber +

DASS21)?

- p20, 469: The initials DB and CT are introduced without prior definition.

Reviewer #2: General comments

In this study, the authors wanted to test if music advertised as able to improve work flow or to engage listeners in deep focus affects a cognitively demanding task (flanker test). They also tested if this music affected the listeners’ mood and state.

They test this by running an online experiment on MTurk (n~200). The experiment consisted of a series of music, mental health, and mood-related questionnaires followed by a flanker task. This flanker task had to be solved while listening to three types of background music (work flow, deep focus, and pop hits) and a control condition (calm office noise).

They found that the Work Flow music significantly improved mood (as measured by PANAS) and differentially decreased RTs with time on the flanker task. They did not find any effect on participants’ accuracy on the flanker task.

The manuscript is clear in presenting the research question and all the relevant previous work in the area. The stimulus selection and the experimental procedure are clearly and thoroughly described in the manuscript. The experimental design and the data collected were consistent with the hypothesis the authors wanted to test. The statistical tools used were generally correct and well interpreted by the authors.

However, there are several points the authors should address for this manuscript to be ready for publication. The main concern is that the significant triple interaction (shown in Figure 6) seems to show a more negative slope of the RTs for the work flow condition, but the predicted RTs are larger for most of the trials (~60 out of 72).

The authors properly disclosed their conflict of interest.

Particular comments

P4-L85: Remove the “very” in “very different neural and behavioral responses” unless there is a quantification of this expectation.

P4-L94: I would not consider N=196 a large-scale experiment. Please remove this adjective or add references that support this claim.

P5-L110: Why would the authors want to be able to detect a medium effect size with the given statistical power? Please provide a rationale for this decision. What was the dependent variable in your power analysis? The manuscript contains many statistical tests, and it should be clear for which one the power was planned.

P5-L110: How did the authors estimate the variability for the power analysis? Did they use data from a pilot study or an estimation based on previous studies? Please provide additional information on this matter to the manuscript.

P5-L131: Add a reference to support this affirmation: “Given prior research on the effects of listening to pop music during work, we expected this condition to negatively impact on-task performance.“

P7-L168: Supplementary material with details on how the musical features were extracted was not provided in the manuscript. Please add this document to the revised version.

P11-L271: ANOVAs assume that residuals are normally distributed, something that rarely happens with count data, especially in the presence of floor or ceiling effects. In the particular case of this study's data, this assumption is not met mainly because the number of correct responses is not only a count variable but is also bounded between 0 and 18, with most of its mass (for most subjects) close to 18. The authors could explore models for bounded count data (generalized linear mixed-effects models for Poisson or Negative Binomial distributions) to properly model the participants' responses.

A more direct approach could be to replicate the model structure of the RTs but using correct/incorrect as the dependent variable and “logit” as the link function of a generalized linear mixed effect model (although this seems to have convergence issues with your data). This could be the reason for the non-significant results on page 16 (L388 to L397).
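The ceiling effect behind this comment can be seen in a short simulation. The data below are hypothetical, not the study's: 18 flanker trials per participant with an assumed true accuracy of 95%, chosen only to mimic a near-ceiling bounded count.

```python
import random

random.seed(1)

# Hypothetical simulation: 18 trials per participant, true p(correct) = 0.95,
# so scores pile up near the ceiling of 18 that the reviewer describes.
n_participants, n_trials, p_correct = 200, 18, 0.95
scores = [sum(random.random() < p_correct for _ in range(n_trials))
          for _ in range(n_participants)]

mean = sum(scores) / len(scores)
m2 = sum((s - mean) ** 2 for s in scores) / len(scores)
m3 = sum((s - mean) ** 3 for s in scores) / len(scores)
skewness = m3 / m2 ** 1.5

# Counts pressed against the ceiling are left-skewed, so residuals from a
# model assuming normality (e.g., ANOVA) will not be normally distributed.
print(f"mean = {mean:.2f} / 18, skewness = {skewness:.2f}")
```

A binomial model with a logit link, as the reviewer suggests, treats each trial as a Bernoulli outcome, so the bounded, ceiling-heavy distribution is modeled directly rather than forced through a normality assumption.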

P11-L276: I do not think that substantial variability is a reason to use mixed-effects models. If you think that is the case, please provide a quantifiable definition of "substantial" and references that justify the relationship between it and the recommendation to use linear mixed models.

P13-L314: There is an extra “)” after “noise”.

P13-L319: Figure S1 was not in the manuscript. Please add this document to the revised version.

P13-L320: A BF of 2.11 is usually not considered strong evidence for a given hypothesis. Please better explain the group differences in musical training.

P13-L324: Table S2 was not in the manuscript. Please add this document to the revised version.

P13-L327: Add the individual data points to Figure 3A.

P13-L327: It is not clear whether work flow was the only condition with a statistically significant difference. I assume that when a result is not mentioned it was nonsignificant, but please, if that is the case, make it explicit.

P13-L330: Clarify how you corrected for multiple comparisons when running the one-sample t-tests.

P14-L355: Add the individual data points to Figure 4.

P14-L361: When using null-hypothesis statistical testing, the results are dichotomous: differences are either different from zero (given a significance level set by the choice of alpha) or not. The authors should remove the "marginally less" statement from the text and the use of * for p=0.88 from the figure. This last issue is especially problematic since it could be mistaken for the more common use of * for p<0.05.

P14-L369: How did the authors phrase the question about familiarity with the “office noise” condition?

P16-L384: Figure S5 was not in the manuscript. Please add this document to the revised version.

P16-L388: Figure S2 was not in the manuscript. Please add this document to the revised version.

P17-L414: Figure S4 was not in the manuscript. Please add this document to the revised version.

P17-L418: Authors should provide a table with all the estimated parameters of the fitted model (with CIs). I would recommend they use the modelsummary R package.

P17-L421: Please provide a p-value after “(slope; β=-0.421)”.

P17-L423: The use of the wording “task performance” could be misleading, since what the authors observed is a differential effect of time on response times but not on response accuracy. Please rephrase the last sentence of that paragraph.

P18-L425: A non-significant result could mean that there is no difference only if the sample size was determined with an a priori power analysis; otherwise, it could be attributed to a lack of statistical power. The non-significant three-way interaction does not “indicate” that the observed effect of work flow music on task performance over time was independent of anxiety severity as assessed by the DASS-21.

P18-L432: Authors should add a title to the color bar on Figure 6.

P18-L432: Two things in Figure 6 are interesting and not addressed in the manuscript: 1- Although larger PANAS scores are associated with a faster improvement of the RTs, they also seem to make participants respond more slowly in the work flow condition during the first trials. 2- The predicted RTs cross around trial number 60, and at trial number 72 the difference in RTs is quite small compared to the differences at the beginning of the experiment. The authors should explain both these effects in more detail.

P18-L447: If I am not mistaken, the authors could not affirm that “These mood effects were not explained by differences between audio conditions in self-reported musical pleasure” since there is a main effect of pleasure on PANAS change. I assume the previous statements refer to the interaction being non-significant, but given that the mean pleasure level is not the same for all the musical stimuli, music's effect on mood could, in fact, be partially mediated by pleasure.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Martin Alejandro Miguel

Reviewer #2: Yes: Ignacio Spiousas

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Feb 12;20(2):e0316047. doi: 10.1371/journal.pone.0316047.r002

Author response to Decision Letter 0


13 Oct 2024

We thank the editor and reviewers for their comments. We have provided answers to all of them.

Attachment

Submitted filename: 01_AnswerToReviewers_final.docx

pone.0316047.s010.docx (589.7KB, docx)

Decision Letter 1

Bruno Alejandro Mesz

28 Nov 2024

PONE-D-24-26875R1

Effects of Music Advertised to Support Focus on Mood and Processing Speed

PLOS ONE

Dear Dr. Ripolles,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Both reviewers have now accepted your manuscript. However, one of them believes that there are still two points that require further clarification. After you address these points, I will submit my decision without sending the manuscript out for further review.

Please submit your revised manuscript by Jan 12 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Bruno Alejandro Mesz, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: All comments have been properly addressed. The missing supplementary material is sound and complements the main text. The clustering analysis performed on the selected music is satisfactory for understanding that the sets in the "deep focus" playlist are similar to each other. The same holds for the sets in the "work flow" playlist, with the exception of one track, which is nonetheless still best clustered with the "work flow" set.

Reviewer #2: The authors have thoroughly addressed all comments, concerns, and suggestions. They accepted the majority and, in cases where they did not, provided a sufficiently robust rationale.

However, I believe there are still two points that require further clarification:

1- The rationale for fitting mixed-effects models remains unclear. Random effects should only be included when residuals are correlated, typically due to the way units are sampled (i.e., randomly). I recommend that the authors either justify the use of mixed-effects models appropriately or refrain from providing a justification if it cannot be done accurately.

2- The model and analysis of the triple interaction (Figure 6) should be included as supplementary material, with a reference to it added in the main text.
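On point 1, the sampling rationale can be sketched with a small simulation (hypothetical participants and made-up variance parameters): repeated trials from the same randomly sampled participant share that participant's offset, so under a single-level model their residuals are correlated.

```python
import random

random.seed(2)

# Hypothetical simulation: each participant contributes 20 RTs drawn around a
# participant-specific offset (a random intercept). Sampling participants this
# way is what makes residuals within a participant correlated.
n_sub, n_trial = 50, 20
rts_by_sub = []
for _ in range(n_sub):
    offset = random.gauss(0, 50)  # between-participant variation (ms)
    rts_by_sub.append([500 + offset + random.gauss(0, 30)  # within-participant noise
                       for _ in range(n_trial)])

all_rts = [rt for sub in rts_by_sub for rt in sub]
grand_mean = sum(all_rts) / len(all_rts)
sub_means = [sum(sub) / n_trial for sub in rts_by_sub]

# Intraclass correlation: the share of total variance attributable to
# participants, i.e., the residual correlation a random intercept absorbs.
var_between = sum((m - grand_mean) ** 2 for m in sub_means) / n_sub
var_within = sum((rt - m) ** 2 for sub, m in zip(rts_by_sub, sub_means)
                 for rt in sub) / len(all_rts)
icc = var_between / (var_between + var_within)
print(f"ICC = {icc:.2f}")
```

A nonzero intraclass correlation of this kind, rather than "substantial variability" per se, is the standard justification for including a participant-level random effect.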

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Martin Alejandro Miguel

Reviewer #2: Yes: Ignacio Spiousas

**********


Decision Letter 2

Bruno Alejandro Mesz

5 Dec 2024

Effects of Music Advertised to Support Focus on Mood and Processing Speed

PONE-D-24-26875R2

Dear Dr. Ripolles,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Bruno Alejandro Mesz, Ph.D.

Academic Editor

PLOS ONE


Acceptance letter

Bruno Alejandro Mesz

10 Jan 2025

PONE-D-24-26875R2

PLOS ONE

Dear Dr. Ripolles,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Bruno Alejandro Mesz

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Supplementary text that includes (i) a brief description of each of the 23 musical features used to characterize the music used in this work along with the MIRtoolbox / Matlab function calls used to extract them; (ii) S2-S4 Tables; and (iii) S1-S5 Figs.

    (PDF)

    pone.0316047.s001.pdf (427.4KB, pdf)
    S1 Table. Table with the track names and artists, set arrangements, and musical features for each track in each audio condition.

    (XLSX)

    pone.0316047.s002.xlsx (34.8KB, xlsx)
    S1 Data. All behavioral data analyzed using JASP.

    (XLSX)

    pone.0316047.s003.xlsx (41.3KB, xlsx)
    S2 Data. All behavioral data analyzed using R.

    (XLSX)

    pone.0316047.s004.xlsx (651.3KB, xlsx)
    S1 Audio. Track used for the work flow condition.

    (M4A)

    Download audio file (9.6MB, m4a)
    S2 Audio. Track used for the work flow condition.

    (M4A)

    Download audio file (9.4MB, m4a)
    S3 Audio. Track used for the work flow condition.

    (M4A)

    Download audio file (9.5MB, m4a)
    S4 Audio. Track used for the work flow condition.

    (M4A)

    Download audio file (9.5MB, m4a)
    S5 Audio. Track used for the office noise condition.

    (M4A)

    Download audio file (9.3MB, m4a)
    Attachment

    Submitted filename: 01_AnswerToReviewers_final.docx

    pone.0316047.s011.docx (21KB, docx)

    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files.

