PLOS ONE. 2022 Nov 17;17(11):e0277220. doi: 10.1371/journal.pone.0277220

The hidden cost of a smartphone: The effects of smartphone notifications on cognitive control from a behavioral and electrophysiological perspective

Joshua D Upshaw 1,*, Carl E Stevens Jr 1, Giorgio Ganis 2, Darya L Zabelina 1
Editor: Francesco Di Russo
PMCID: PMC9671478  PMID: 36395335

Abstract

Since their release in 2007, smartphones and their use have seemingly become a fundamental aspect of life in Western society. Prior literature has suggested a link between mobile technology use and lower levels of cognitive control when people engage in a cognitively demanding task. This effect is more evident for people who report higher levels of smartphone use. The current study examined the effects of smartphone notifications on cognitive control and attention. Participants completed the Navon letter paradigm, which paired visual (frequent and rare target letters) and auditory (smartphone and control sounds) stimuli. We found that, overall, participants responded slower on trials paired with smartphone notification (vs. control) sounds. They also demonstrated a larger overall N2 ERP and a larger N2 oddball effect on trials paired with smartphone (vs. control) sounds, suggesting that people generally exhibited greater levels of cognitive control on the smartphone trials. In addition, people with higher smartphone addiction proneness showed a smaller P2 ERP on trials with smartphone (vs. control) sounds, suggesting lower attentional engagement. These results add to the debate on the effects of smartphones on cognition. Limitations and future directions are discussed.

Introduction

“My favorite things in life don’t cost any money. It’s really clear that the most precious resource we all have is time.” - Steve Jobs

Acknowledging the global prevalence of smartphone use requires little convincing, given the ubiquity of text messaging, selfie liking, and unlimited access to information at the touch of a button [1]. According to the Pew Research Center, 94% of adults in advanced economies own a smartphone or a similar device [2]. Since the release of the first iPhone, a large body of research has investigated the social and psychological impacts of mobile technology. Smartphones are undoubtedly beneficial in many ways, such as connecting with loved ones and supporting our productivity goals [3]. However, smartphone use has also been demonstrated to have negative influences on a number of important life outcomes, such as “real-world” social interactions [4], walking and driving abilities [5, 6], and educational outcomes [7, 8]. Other research demonstrates a negative association between smartphone overuse (i.e., more than 2 hours per day) and psychological well-being [9]. For example, frequent social media use was found to be associated with a greater likelihood of developing severe anxiety [10], behavior and attention problems [11], and an increased risk of suicide-related outcomes [12]. However, after accounting for other lifestyle factors (e.g., sleep, exercise, diet), the negative associations between smartphone use and well-being were rather small [13].

Cognition and smartphone use

What may be common to all the aforementioned influences of smartphones described in the literature is their influence on people’s executive functions, namely their attention and cognitive control. Within the framework of the cognitive load theory of attentional control [14], people with better cognitive control should be better at maintaining focus on the task at hand when exposed to task-irrelevant smartphone notifications. A recent review outlined multiple studies that have examined the relationships between mobile technology use and cognitive functions, with the majority of studies indicating that increased smartphone use is associated with reduced performance on tasks assessing cognitive control and attention [15]. In one study, for example, heavy smartphone users were found to have a lower capacity for sustained attention during an arithmetic task [16]. Other studies found an increase in error rates on cognitive control tasks in people who use smartphones and social media more frequently [17–19]. Other work has demonstrated that people with higher media multitasking behavior (i.e., engaging in multiple forms of media concurrently) are worse at filtering out irrelevant distractor stimuli [20, 21] and show heightened attentional impulsivity [22, 23]. In terms of smartphone notifications, Stothart and colleagues [23] found that receiving a notification resulted in decreased sustained attention, similar to when people were actively using their devices. Beyond actually using smartphones and hearing notifications, prior studies have found that even the mere presence of smartphones can negatively affect performance on attention tasks [24, 25].

Although there is some behavioral evidence for the link between smartphone use and cognitive control, the neurophysiological mechanisms of this association are poorly understood. Thus, the present study aimed to examine the effects of smartphone notifications on behavioral and neural markers of top-down executive functions, namely cognitive control, and attentional processes known to play a role in stimulus orienting and categorization. Recent findings from neuroimaging research provide a broad understanding of the structural and spatial neural activity of cognitive control processes associated with smartphone use. For example, cognitive control functions and smartphone use have been separately associated with activity in similar reward processing regions of dopaminergic neural pathways, such as the ventromedial prefrontal cortex and dorsolateral prefrontal cortex [26, 27]. Another study found that heavy multimedia users had reduced gray matter volume in the anterior cingulate cortex, known for its involvement in higher order cognitive control processes [28]. Furthermore, higher media multitaskers are reported to recruit more neural resources from top-down control networks during a sustained attention task when they are in the presence of distractor stimuli [19]. Though these findings offer insight into potential neural mechanisms involved in smartphone use, they do not provide causal explanations, nor do they employ the temporally specific measures necessary for understanding millisecond-level changes in cognitive processes as a function of smartphone use and exposure to notifications.

Present study

The present study aimed to examine the extent to which smartphone notifications influence cognitive control and attention on an adapted Local/Global hierarchical letter three-stimulus oddball paradigm, using event-related potentials (ERPs) and behavioral performance. The Local/Global task requires an individual to attentionally reorient to and update working memory in order to accurately respond to the presence of a target letter while monitoring for rare distractor letters presented at the opposite level of visual attention [29, 30]. The rare target letter presents an exogenous, salient singleton requiring increased recruitment of attentional and cognitive control resources, specifically those necessary for conflict monitoring [31]. Cognitive control paradigms such as the Stroop or Eriksen flanker task also measure conflict monitoring; however, the paradigm in the current study was chosen to heighten the engagement of early attentional orienting mechanisms. While monitoring for conflict between frequent, rare, and non-target trials, participants were required to ignore inconsistent visual information between hierarchically nested visual stimuli presented at opposing levels of local or global attention. Therefore, this paradigm allowed us to simultaneously measure the effects of smartphone notifications on attention and cognitive control.

Cognitive control was measured using the oddball effect, which is calculated by subtracting reaction times (RT) and ERP amplitudes on frequent target trials from rare target trials. Better cognitive control is considered to be reflected by a smaller RT oddball effect and a larger ERP oddball effect [32]. We examined three ERPs, the P200 (P2), N200 (N2), and P300 (P3), which are commonly accepted in the literature as underlying neural markers of electrical cortical activation associated with attention and cognitive control processes [30, 33, 34].
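For concreteness, the oddball effect described above can be written as a simple difference score; the expressions below restate these definitions in our own notation.

```latex
% Oddball effect as a rare-minus-frequent difference score (notation is ours).
\[
\mathrm{Oddball}_{RT} = \overline{RT}_{\mathrm{rare}} - \overline{RT}_{\mathrm{frequent}},
\qquad
\mathrm{Oddball}_{ERP} = \bar{V}_{\mathrm{rare}} - \bar{V}_{\mathrm{frequent}}
\]
% A smaller RT difference and a larger ERP difference (in the component's dominant
% polarity) are taken to index better cognitive control.
```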

This paradigm and various adaptations have been used in prior studies, indicating a family of frontocentral N2 components related to cognitive control [33]. The N2 ERP component is the second negative peak along the average ERP waveform, which generally occurs between 200–350 ms after stimulus onset near frontocentral and central electrode sites. N2 is considered to be involved in strategy regulation, feedback processing, immediate action control, novel stimuli detection, and visual attention orienting [33]. Though multiple N2 subcomponents exist, the current study focused on a frontocentral N2 component related to the cognitive control processes of response inhibition, response conflict, and error monitoring. This anterior N2 component is said to be generated in the anterior cingulate gyrus [35] and is associated with top-down control of attention [36]. Based on previous literature, we expected that participants would respond more slowly and would show a smaller N2 oddball effect (i.e., worse cognitive control) on trials with smartphone notification (vs. control) sounds.

We examined the P2 ERP as it is likely to be affected by exposure to smartphone notifications. The P2 ERP is the second positive peak along the average ERP waveform which generally occurs between 150–250 ms after stimulus onset near frontal electrode sites [34]. P2 is considered to reflect stimulus monitoring and early attention classification processes and has been shown to demonstrate differential activation between target stimuli conditions in oddball paradigms assessing capacities to withdraw attentional resources away from stimuli [37, 38]. P2 is said to be generated largely as a result from activation within the reticular activating system [39] as a response to input from sensory modalities [34]. If it is the case that smartphone notifications “capture” people’s attention, trials with smartphone notification (vs. control) sounds should elicit a larger P2 ERP.

Among oddball paradigms, an anterior N2 component to frequent targets is often observed in combination with a posterior P3 component to distractor targets, suggesting cognitive processes involved in contextual and memory updating [40, 41]. The P3 ERP component is the third positive peak along the average ERP waveform, which generally occurs between 250–500 ms after stimulus onset over large frontoparietal scalp electrode networks [30]. P3 can be divided into two subcomponents. The first is said to originate from frontal lobe activation for attention-driven stimulus processing, particularly for task-irrelevant neural activity elicited during target stimulus processing [30]. The second has been considered a late cognitive component involved in endogenous decision-making and stimulus categorization, originating from the dorsolateral prefrontal cortex (DLPFC), which communicates with the cingulate cortex and parietal structures [35]. P2 and N2 reflect early cognitive processes and are likely to be influenced by exogenous smartphone notifications. P3, on the other hand, should not be affected by smartphone notifications, as P3 is thought to reflect late cognitive processes involved in endogenous decision-making and stimulus categorization [32].

We also considered the role of individual differences in self-reported smartphone addiction. According to Folk and colleagues [42], attentional orienting depends on one’s internal attentional control settings, which are dictated by people’s current behavioral goals and are subject to being influenced by their individual cognitive biases. Thus, people who use smartphones more frequently should be more likely to have an underlying cognitive bias to orient their attention to their devices when hearing smartphone notification sounds. In contrast, people who use their phones less frequently should demonstrate a greater ability to complete behavioral goals, as they are less cognitively biased to orient to their devices when hearing notifications.

In fact, studies have shown that people with excessive (vs. moderate) smartphone use show higher N2 during a Go-NoGo task in which participants were asked to view smartphone-relevant (vs. control) images [43]. The authors suggested that excessive (vs. moderate) smartphone users recruited higher levels of cognitive control necessary for inhibition processes in order to maintain similar levels of goal-directed behavioral performance when exposed to smartphone related stimuli. Additional work found that people higher (vs. lower) in problematic smartphone use, measured by the Smartphone Addiction Proneness Scale, showed smaller N2 amplitudes, delayed response latencies, and higher error rates on a Go-NoGo task [44]. This effect was found to be stronger when participants were exposed to smartphone notification vibration sounds. Thus, based on previous studies, we expected that individual differences in proneness to smartphone addiction would be associated with slower responses and worse cognitive control (larger RT and smaller N2 oddball effect) on trials with smartphone (vs. control) sounds.

Methods

Ethics statement

The study was approved by the local Institutional Review Board at the University of Arkansas and was assigned the protocol number 1807134340. All participants provided written informed consent to participate. Participants were compensated with course credit.

Participants

College students (N = 73; 38 males, 35 females; mean age = 19.78, SD = 0.32; 80% white) participated for course credit. Participants were daily smartphone users, had normal or corrected-to-normal hearing and vision, and were fluent English speakers. Participants had no history of brain damage or concussions and were not currently diagnosed with any psychiatric disorders. Participants were instructed not to be under the influence of excessive caffeine, unprescribed medication, or alcohol at the time of testing.

Materials

Oddball paradigm

A Local-Global Navon letter task [45] was used to measure the oddball effect. This task is identical to the one employed in earlier work [32]. The letter stimuli were designed to elicit approximately equal response speed and accuracy for both global and local letters [46]. In addition, all letter stimuli possessed similar perceptual features [47], reducing the concern that stimulus orienting would be contingent on attentional control settings [42]. Further, this paradigm was designed such that motor-related confounds for ERPs are reduced, as both target and non-target responses require similar motor activity [48]. The visual stimuli consisted of twelve composite letters, each of one global letter composed of several uniform (never mixed) local letters (Fig 1A). The local letters (subtending 0.43 by 0.86 degrees of visual angle) were arranged within a 5 cm x 3 cm rectangular grid to form the global letters (subtending approximately 2.57 by 4.27 degrees of visual angle).

Fig 1. Modified Navon letter oddball paradigm.


A): Twelve composite letters: a global A made of local E’s, S’s, or H’s; a global E made of local A’s, S’s, or H’s; a global H made of local E’s, S’s, or A’s; and a global S made of local E’s, H’s, or A’s. B): In this example (white background used for example), participants were instructed to determine if the target letter E is present (either at the global or local level, 80% and 10% of trials, respectively), or not present (10% of trials). C): Single trial structure of the oddball task. ITI = inter-trial interval.

Participants were asked to indicate the presence of a target letter by pressing either Yes (1 key) for “Target letter is present,” or No (2 key) for “Target letter is not present” using their right hand on a standard keyboard digit pad. Participants were instructed to detect the presence or absence of the target letter regardless of the size of the letter. Participants completed two practice blocks consisting of 9 trials each. Visual feedback was provided for response accuracy on practice trials (“Correct” or “Incorrect”). Following the practice trials, participants completed 16 experimental blocks (960 trials). Before each block, a “target letter” screen was displayed on the computer monitor for 12 seconds. The “target letter” screen used a single red letter (twice the size of the local letters) to identify the specific target letter participants would be aiming to detect in the following block of trials.

Each block of the task consisted of 60 trials presented in pseudo-random order, ensuring that an equal number of sound stimuli were presented on frequent, rare, and non-target trials. On a given block, target letters were displayed on the screen at either the local or global attentional level on 80% of trials, referred to as frequent trials. During the same task block, target letters at the opposite level of attention were displayed on 10% of trials, referred to as rare trials. The final 10% of trials did not include a target letter and are referred to as non-target trials (Fig 1B). Global and local letter stimuli were counterbalanced such that big “H”s, “E”s, “S”s, and “A”s were composed of uniform (never mixed within a single letter) sets of smaller letters for an equal presentation of hierarchical letter combinations across the four possible letters. For example, on a given trial, a big “H” would be composed of all small “S”s, but never small “S”s and small “E”s. Each block of 60 individual trials was followed by a self-paced break period.

Each composite letter stimulus was preceded by one of three auditory stimuli: a smartphone notification sound, a lawnmower sound, and a computer-generated control sound. The smartphone notification sound was the sound of a smartphone vibrating when it receives an incoming notification. In a separate pilot study, 97% of participants correctly identified this sound as a smartphone vibration. For the control sound, 14% indicated it was a smartphone sound, 26% said it was a food processor, and 60% said they could not identify the sound. Furthermore, 63% of participants indicated that their smartphone’s notification setting was typically set to vibrate (3% on sound, 33% on silent, 1% other), and 70% reported their text notification setting was set to vibrate most of the time (1% on sound, 29% on silent). Finally, 1% of participants said the smartphone notification was exactly the same as their own, 30% said it was nearly the same, 55% said it was somewhat similar, and 14% of participants said it was not at all similar.

The control sound was created in Audacity (v. 2.2.2, [49]) and was a square wave tone closely matched to the smartphone sound in duration, volume, and overall sound similarity. The sounds were delivered via noise-canceling insert earphones at 70 percent maximum volume within safe listening levels (~60 dB). The lawnmower sound was included as an additional control stimulus for sound familiarity (i.e., smartphone vs. ambiguous control sound compared to smartphone vs. lawnmower sound). Subsequent examination of the lawnmower sound’s acoustic waveform spectrum revealed unintended technical confounds (e.g., stereoscopic inconsistency creating a perception of spatial movement; Fig 2C), so this sound was not used in subsequent analyses. Sound stimuli were presented pseudo-randomly.

Fig 2. Frequency spectral densities and signal waveforms of the auditory stimuli.


There were 960 trials in total: 768 frequent target trials (256 per sound condition), 96 rare target trials (32 per sound condition), and 96 non-target trials (32 per sound condition). On each trial, a sound was played (~1250 ms) concurrently with a randomly jittered fixation cross (2650–2850 ms), followed by the letter stimulus (700 ms), during which time participants indicated the presence or absence of the target letter. If participants failed to respond within the 700 ms period, the task timed out and moved on to the next trial. A uniform gray screen appeared for 500 ms during the inter-trial interval before the next trial began (Fig 1C).

Four participants were excluded from behavioral data analyses due to technical issues or poor performance on the oddball task (i.e., errors or RTs exceeding +/- 2.5 SD). The final sample for behavioral analyses included 69 participants. ERP data for 19 participants were excluded because of technical issues, or for having uncorrectable artifacts greater than 25% of total trials [48]. The final sample for ERP analyses included 54 participants.

Smartphone addiction

The Smartphone Addiction Proneness Scale (SAPS) [50] is a 15-item self-report questionnaire that assesses individual differences in smartphone addiction proneness. The scale consists of four factors: disturbance of adaptive functions (e.g., “My school grades dropped due to excessive smartphone use.”), virtual life orientation (e.g., “When I cannot use a smartphone, I feel like I have lost the entire world.”), withdrawal (e.g., “It would be painful if I am not allowed to use a smartphone.”), and tolerance (e.g., “I try cutting my smartphone usage time, but I fail.”). Responses are made on a 4-point Likert-type scale ranging from 1 (strongly disagree) to 4 (strongly agree). The total SAPS score is the average of the four factor scores, with higher scores reflecting higher levels of proneness to smartphone addiction. The SAPS has been reported to have high construct validity (NFI = .943, TLI = .902, CFI = .902, RMSEA = .034) and reliability (α = .88).
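As a point of reference, the sketch below shows this scoring in R; it assumes the four factor scores have already been computed as item means, and the data frame and column names are ours, not part of the published scale.

```r
library(dplyr)

# Minimal SAPS scoring sketch (hypothetical column names): the total score is the
# mean of the four factor scores, which is then mean-centered for use as a
# continuous predictor in the regression models reported below.
saps <- saps %>%
  mutate(
    saps_total    = rowMeans(across(c(adaptive_disturbance, virtual_life_orientation,
                                      withdrawal, tolerance))),
    saps_centered = saps_total - mean(saps_total, na.rm = TRUE)
  )
```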

Electrophysiological recordings

Continuous EEG data were sampled at 2048 Hz using a BioSemi ActiveTwo system. EEG data were collected from 32 active Ag/AgCl electrodes arranged according to the 10–20 system. Two loose lead electrodes below both eyes monitored eye blinks, two on the outer canthi of the eyes monitored horizontal eye movements, and two on the mastoids were used as reference. EEG data were preprocessed in MATLAB (2018b) using the EEGLAB (v13.6.5b) toolbox before further processing [51]. Continuous EEG data were down-sampled off-line to 512 Hz and high-pass filtered at 0.1 Hz (basic FIR filter).

ERP waveform and component analysis

EEGLAB and ERPLAB were used to process the EEG data offline. ERPs were averaged off-line over a 1000 ms total epoch (200 ms pre-stimulus and 800 ms post-stimulus). Artifact detection was performed to identify trials contaminated with eye blinks, horizontal eye movements, muscle activity, or other signal noise. First, a moving window peak-to-peak artifact detection threshold was applied to the vertical eye channels (voltage threshold = 75 μV, moving window width = 200 ms). Second, a step-like artifact detection analysis was applied to the lateral eye channels (voltage threshold = 30 μV, moving window width = 400 ms, window step = 10 ms). Third, a moving window peak-to-peak threshold was applied to all channels (voltage threshold = 200 μV, moving window width = 1000 ms). ERPs were then low-pass filtered at 30 Hz with a 2nd order (IIR) Butterworth filter (12 dB/octave roll-off).

Participants with greater than 25% overall artifact rejections were reprocessed using independent component analysis (ICA), and bad components were inspected using ICLabel, an EEGLAB plugin [52]. Components identified as non-brain with greater than 94.5% confidence were removed from the data. ICA-corrected data were re-processed and artifact detection was repeated. After reprocessing, ICA-corrected participants with greater than 25% overall rejections were excluded from data analysis. Mean ERP amplitudes of ICA-corrected and non-ICA-corrected participants did not differ (independent samples t-tests, ps > .064).

Past literature provides reasonable consensus on the ERP latencies implicated in cognitive control. However, the defined time windows can, and arguably should, vary across studies depending on variations in study design, stimuli, tasks, participants, conditions, and unknown noise [53, 54]. Thus, a data-driven approach to defining ERP latency windows provides an optimal solution for accounting for these variations in the temporal and spatial location of condition effects, reducing Type I and II error rates. Precise parameters for the ERP component latency windows were determined from the grand average of conditions across participants. Peak latency analyses were performed for each component at its maximal channel site. P2, N2, and P3 mean amplitudes were measured at channel sites Fz (150–210 ms), F3 (220–380 ms), and Pz (260–700 ms), respectively [34, 33, 30]. Latency windows based on prior literature were applied to detect peak amplitudes and then visually inspected to capture the entire ERP component. To avoid component overlap, a 10 ms time window was used to separate component latencies [55]. A mean amplitude value between these latencies was calculated for subsequent analyses.
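For clarity, the mean amplitude measure used here is simply the average voltage within each component's latency window; the expression below is a standard restatement in our own notation, with the windows and sites taken from the text.

```latex
% Mean amplitude of component c over its latency window [t_1, t_2] (notation is ours).
\[
\bar{V}_{c} = \frac{1}{t_{2} - t_{1}} \int_{t_{1}}^{t_{2}} V(t)\,\mathrm{d}t
\]
% (t_1, t_2) = (150, 210) ms at Fz for P2, (220, 380) ms at F3 for N2,
% and (260, 700) ms at Pz for P3.
```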

The oddball effect was computed for each ERP component by subtracting the mean amplitude on frequent target trials from that on rare target trials. P2 is a positive-going component, thus larger positive difference values indicate a larger oddball effect, reflecting better stimulus monitoring and early classification processes. The N2 is a negative-going component, thus smaller (more negative) difference values indicate a larger oddball effect, reflecting better cognitive control. P3 is a positive-going component, thus larger positive difference values indicate a larger oddball effect, reflecting better task-relevant stimulus categorization processing.

Procedure

Eligible participants volunteered to participate in the study. Participants were positioned 67 cm from the center of the computer screen and received instructions for the oddball task. They were instructed to respond quickly and accurately on all trials. To reduce EEG artifacts, participants were asked to minimize blinking and facial and bodily movement throughout the task. After two practice blocks (18 trials), participants completed 16 blocks of the oddball task while EEG data were recorded continuously. Each block lasted approximately 5 minutes, for an approximate total task time of 75 minutes. After the task, participants completed questionnaires via Qualtrics. The entire session lasted approximately 150 minutes.

Analytical strategy

Data were analyzed in RStudio [56]. For RTs, analyses were conducted using linear mixed-effects regression (LMER) models with random slopes for the trial frequency condition (rare vs. frequent) and random intercepts for each participant to account for within-subject variance in RT across all trials (lmerTest v. 3.1.1) [57]. This model provided the best fit to the data relative to simpler LMER models. An intraclass correlation (ICC) of 0.14 for RT within participants was found, warranting the use of mixed linear models. We assessed differences in RT on rare vs. frequent target trials to evaluate the presence of an oddball effect, then determined whether RT varied as a function of trials with smartphone notification vs. control sounds. Then, we assessed whether differences in the oddball effect were present as a function of the sound condition. A three-way interaction regression was then performed to assess the moderating role of smartphone addiction in the effects of smartphone notifications on cognitive control.
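As an illustration, a minimal sketch of these RT models in lmerTest is shown below; the data frame and variable names are ours, and the contrast coding follows the table notes (-.5/.5).

```r
library(lmerTest)  # v. 3.1.1, as reported above

# Contrast-coded predictors (hypothetical variable names):
# trial_freq: -0.5 = frequent, 0.5 = rare; sound: -0.5 = control, 0.5 = smartphone.
# Random slopes for trial frequency and random intercepts for each participant.
rt_model <- lmer(rt ~ trial_freq * sound + (1 + trial_freq | participant),
                 data = trials)
summary(rt_model)

# Three-way moderation model adding mean-centered SAPS scores as a continuous predictor.
rt_saps_model <- lmer(rt ~ trial_freq * sound * saps_centered +
                        (1 + trial_freq | participant),
                      data = trials)
summary(rt_saps_model)
```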

For the accuracy analyses, we conducted a generalized linear mixed-effects regression model with error rate as the dependent variable, random slopes for trial frequency, and random intercepts for participant. The model used a binomial family distribution (0 = correct, 1 = incorrect) and was fit with the bound optimization by quadratic approximation (BOBYQA) optimizer.
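A minimal sketch of this accuracy model, under the same hypothetical data frame and variable names as above:

```r
library(lme4)

# Binomial GLMM for accuracy: errors coded 1 = incorrect, 0 = correct,
# fit with the BOBYQA optimizer as described above.
acc_model <- glmer(error ~ trial_freq * sound + (1 + trial_freq | participant),
                   data    = trials,
                   family  = binomial,
                   control = glmerControl(optimizer = "bobyqa"))
exp(fixef(acc_model))  # exponentiated coefficients give odds ratios, as reported in Table 5
```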

For ERPs, paired sample t-tests were conducted to compare the grand averaged mean amplitudes of the P2, N2, and P3 components on rare and frequent targets (oddball effect), and on trials with smartphone and control sounds (trial to trial data were not available). Linear models were carried out to determine whether smartphone addiction scores predicted overall ERPs, overall oddball ERPs, and oddball ERPs between sound conditions (smartphone vs. control sounds).
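A minimal sketch of these ERP analyses, assuming a hypothetical participant-level data frame of mean amplitudes (one row per participant, column names ours):

```r
# Paired t-tests on participant-level mean amplitudes, shown here for the N2 component.
t.test(erp$n2_rare, erp$n2_frequent, paired = TRUE)       # oddball effect (rare vs. frequent)
t.test(erp$n2_smartphone, erp$n2_control, paired = TRUE)  # smartphone vs. control sounds

# Linear model testing whether smartphone addiction predicts the N2 oddball effect.
erp$n2_oddball <- erp$n2_rare - erp$n2_frequent
summary(lm(n2_oddball ~ saps_centered, data = erp))
```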

Results

Behavioral findings

Reaction time

Mixed model regression analyses revealed that, across all trials (N = 37,769), participants responded slower on rare (M = 476.99 ms, SD = 115.57) than on frequent trials (M = 426.83 ms, SD = 87.53), Cohen’s d = 0.49, demonstrating a reliable 50.96 ms oddball effect with a medium effect size (Table 1). As predicted, participants responded significantly slower on trials with smartphone sounds (M = 432.97 ms, SD = 92.10) than on trials with control sounds (M = 429.70 ms, SD = 90.95), Cohen’s d = .04, a small effect size. There was no significant interaction between trial frequency and sound condition, indicating that the RT oddball effect (i.e., cognitive control) did not differ as a function of the sound condition.
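As a rough check on the reported effect size (our computation, assuming Cohen’s d was taken as the mean difference divided by the average of the two condition SDs):

```latex
\[
d \approx \frac{476.99 - 426.83}{(115.57 + 87.53)/2} = \frac{50.16}{101.55} \approx 0.49
\]
```

The same computation for the sound conditions, (432.97 - 429.70) / 91.53 ≈ .04, matches the reported small effect size.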

Table 1. Regression with reaction times predicted by trial frequency and sound condition.
Predictor b SE 95% CI Lower 95% CI Upper df t p
Trial Frequency a 50.96 3.74 43.53 58.39 68 13.63 < .001
Sound b 3.18 1.51 0.23 6.13 37670 2.11 .035
Trial Frequency x Sound -0.73 3.01 -5.17 6.63 37700 -0.24 .808

Reaction times on the oddball task as a function of trial frequency (rare vs. frequent) and sound condition (smartphone vs. control).

a Trial frequency was contrast coded at -.5 for frequent and .5 for rare target trials.

b Sound condition was contrast coded at -.5 for control sound and .5 for smartphone sound. Dependent variable = Reaction time in ms. CI = Confidence Interval.

Additional regression models were conducted separately for frequent (N = 34,372) and rare (N = 3,397) target letter trials to determine the extent, if any, to which trial frequency played a role in the effects of the sound conditions on RTs. On frequent trials, participants responded significantly slower on trials with smartphone sounds (M = 428.51, SD = 87.95) than control sounds (M = 425.15, SD = 87.08), Cohen’s d = .04 (Table 2). On rare trials, however, participants did not differ in their speed on smartphone (M = 477.53, SD = 117.29) and control (M = 476.46, SD = 113.80) sound trials, Cohen’s d = .01. These findings suggest that participants slowed down on smartphone-sound trials when targets were frequent, but not when they were rare.

Table 2. Regressions for reaction times for frequent and rare trials predicted by sound condition.
Predictor b SE 95% CI Lower 95% CI Upper df t p
Frequent Trials
Sound b 3.54 0.90 1.79 5.31 37632 3.95 < .001
Rare Trials
Sound b 2.81 2.87 -2.82 8.45 37699 0.98 .327

Dependent variable: RT (ms). Reaction times on the oddball task for frequent and rare target trials separately as a function of sound condition (smartphone vs. control).

b Sound condition was contrast coded at -.5 for control sound and .5 for smartphone sound. Dependent variable = Reaction time in ms. CI = Confidence Interval.

Individual differences in proneness to smartphone addiction

We then sought to determine the role of individual differences in smartphone addiction in response speed to smartphone notifications. We conducted a mixed linear model with RT as the outcome variable, trial frequency (frequent vs. rare trials), sound condition (smartphone vs. control sounds), and smartphone addiction (continuous, mean centered) as predictor variables. Results revealed that individual differences in smartphone addiction did not predict overall response speed (Table 3). There was no significant interaction between trial frequency and smartphone addiction, revealing that the RT oddball effect (i.e. cognitive control) was not significantly different as a function of smartphone addiction levels. RT did not significantly differ between smartphone and control sound conditions as a function of individual differences in smartphone addiction. Finally, the three-way interaction between trial frequency, sound condition, and smartphone addiction was not significant.

Table 3. Regression for reaction time predicted by trial frequency, sound condition, and smartphone addiction.
Predictor b SE 95% CI Lower 95% CI Upper df t p
Trial Frequency a 50.97 3.74 43.54 58.41 69 13.64 .001
Sound b 3.19 1.51 0.24 6.13 37700 2.12 .034
SAPS c 6.89 14.12 -21.18 34.96 69 0.49 .627
Trial Frequency x Sound -0.72 3.01 -6.62 5.28 37700 -0.24 .811
Trial Frequency x SAPS -2.41 10.90 -24.09 19.24 69 -0.22 .826
Sound x SAPS 1.94 4.38 -6.64 10.52 37696 0.44 .658
Trial Frequency x Sound x SAPS -8.53 8.76 -25.69 8.63 37696 -0.97 .330

Reaction times on the oddball task as a function of trial frequency (rare vs. frequent), sound condition (smartphone vs. control), and smartphone addiction levels.

a Trial frequency was contrast coded at -.5 for frequent and .5 for rare target trials.

b Sound condition was contrast coded at -.5 for control sound and .5 for smartphone sound.

c SAPS = Smartphone Addiction Proneness Scale (mean centered). Dependent variable = Reaction time in ms. CI = Confidence Interval.

Additional regression analyses were conducted separately for frequent and rare trials. On frequent trials, there was a significant two-way interaction between sound condition (smartphone vs. control) and smartphone addiction (Table 4 and Fig 3). Specifically, people with higher levels of smartphone addiction were significantly slower to respond on frequent trials with smartphone (vs. control) sounds. On rare trials, the two-way interaction between sound condition (smartphone vs. control) and smartphone addiction was not significant, indicating that response speed did not differ between smartphone (vs. control) sounds as a function of smartphone addiction levels on rare trials.

Table 4. Regressions for reaction times on frequent and rare trials predicted by sound condition and smartphone addiction.
Predictor b SE 95% CI Lower 95% CI Upper df t p
Frequent Trials
Sound a 3.54 0.90 1.79 5.31 37631 3.95 .001
SAPS b 8.10 11.74 -15.24 31.43 69 0.49 .493
Sound x SAPS 6.20 2.62 1.07 11.34 37632 -0.24 .018
Rare Trials
Sound a 2.83 2.87 -2.80 8.46 37700 0.98 .325
SAPS b 5.69 17.90 -29.89 41.25 69 0.32 .752
Sound x SAPS -2.32 8.34 -18.70 14.05 37698 -0.28 .781

Reaction times on the oddball task as a function of sound condition (smartphone vs. control) and smartphone addiction with separate results for frequent and rare trials.

a Sound condition was contrast coded at -.5 for control sound and .5 for smartphone sound.

b SAPS = Smartphone Addiction Proneness Scale (mean centered). Dependent variable = Reaction time in ms. CI = Confidence Interval.

Fig 3. Interaction analysis for reaction times predicted by sound condition and smartphone addiction.


Plot of reaction times on trials with smartphone sounds (dashed line) vs. control sounds (solid line) as a function of individual differences in the proneness to smartphone addiction. Plotted separately for frequent and rare target trials.

Accuracy

A generalized linear regression revealed that people made significantly more errors on rare (N = 4,065, M = 16.43%, SD = 0.37) compared to frequent trials (N = 34,806, M = 1.25%, SD = 0.11), while there were no significant differences in the number of errors made on trials with smartphone notifications (N = 19,442, M = 2.72%, SD = 0.16) compared to control sound trials (N = 19,429, M = 2.95%, SD = 0.17; Table 5). These results indicate that accuracy on rare trials was worse than on frequent trials, as expected; however, the presence of smartphone notifications did not impair accuracy relative to control sounds.

Table 5. Logistic regression of error rates predicted by trial frequency and sound condition.
Predictor Odds Ratio z 95% CI Lower 95% CI Upper
Trial Frequency a 17.83 27.41*** 2.68 3.10
Sound b 0.92 -1.38 -0.22 0.04
Trial Frequency x Sound 0.84 -1.36 -0.43 0.08

Binomial logistic regression for the likelihood of an incorrect response.

a Trial frequency was contrast coded at -.5 for frequent and .5 for rare target trials.

b Sound condition was contrast coded at -.5 for control sound and .5 for smartphone sound. Dependent variable = Count of incorrect trials, (0 = Correct, 1 = Incorrect). CI = confidence interval.

*** p < .001.

Individual differences in smartphone addiction levels had no significant effect on error rates overall, p = .688, or in terms of responses between frequent and rare trials, p = .534, sound conditions, p = .793, nor their interaction, p = .391, showing that individual differences in smartphone addiction did not affect task performance in terms of accuracy.

Neural findings

ERP oddball effect

Paired sample t-tests were conducted to examine overall ERP differences between rare and frequent trials (Fig 4). P3 amplitude was significantly larger on rare versus frequent trials, Cohen’s d = .29 (small to medium effect size). There were no significant differences in P2 or N2 amplitudes between rare and frequent trials (Table 6).

Fig 4. Overall ERP waveforms and scalp maps.


A. Aggregated ERP waveforms for frequent (black lines) and rare trials (red lines). B. Aggregate ERP scalp distribution maps for P2 at 200 ms, N2 at 350 ms, and P3 at 450 ms latencies. Red color reflects activity for P2 and P3. Blue color reflects activity for N2. (Darker colors reflect increased activity).

Table 6. Paired samples t-tests for ERP amplitudes between trial frequency.
ERP Frequent Trials M (SD) Rare Trials M (SD) Mdiff 95% CI Lower 95% CI Upper df t p
P2 at Fz 8.45 μV (3.66) 8.88 μV (3.98) 0.42 -0.06 0.91 53 1.74 .088
N2 at F3 a 4.24 μV (3.32) 4.08 μV (3.94) -0.16 -0.72 0.40 53 -0.58 .567
P3 at Pz 10.66 μV (3.96) 12.15 μV (6.10) 1.50 0.58 2.42 53 3.26 .002

Grand averaged ERP amplitudes (in microvolts) on the oddball task for frequent and rare trials. ERPs were time-locked to the presentation of the visual stimulus.

a N2 is a negative-going potential, thus smaller values indicate larger ERP amplitude. Dependent Variable = ERP mean amplitude. CI = confidence interval.

ERPs between sound conditions

Paired sample t-tests were conducted to examine overall ERP differences between trials with smartphone and control sounds. P2 was marginally smaller on smartphone versus control sound trials, Cohen’s d = .12 (Table 7). N2 was significantly larger on smartphone versus control sound trials, Cohen’s d = .13 (small effect size; Fig 5), suggesting that people generally recruited greater levels of cognitive control on trials with smartphone sounds. P3 did not significantly differ on smartphone versus control sound trials. These results indicate that, overall, people had greater neural activation implicated in cognitive control on trials with smartphone notification (vs. control) sounds, while early and late attentional processes linked to stimulus orienting and categorization were not affected.

Table 7. ERP amplitudes between the smartphone and control sound trials.
ERP Smartphone M (SD) Control Sound M (SD) Mdiff 95% CI Lower 95% CI Upper df t p
P2 at Fz 8.44 μV (3.80) 8.89 μV (3.83) -0.46 -0.92 0.01 53 -1.98 .053
N2 at F3 a 3.94 μV (3.38) 4.39 μV (3.78) -0.45 -0.89 -0.02 53 -2.09 .041
P3 at Pz 11.42 μV (4.84) 11.39 μV (5.08) 0.03 -0.53 0.58 53 0.09 .927

Grand averaged ERP amplitudes (in microvolts) on the oddball task for trials with smartphone and control sounds. ERPs were time-locked to the presentation of the visual stimulus.

a N2 is a negative-going potential, thus smaller values indicate larger ERP amplitude. Dependent Variable = ERP mean amplitude. CI = confidence interval.

Fig 5. N2 amplitude on smartphone and control sound trials.


N2 ERP mean amplitudes on trials with smartphone and control sounds. N2 is a negative-going potential, thus smaller values indicate larger ERP amplitudes.

ERP oddball effect between sound conditions

Paired sample t-tests were conducted to examine differences in the ERP oddball effect (ERPs on rare minus frequent trials) between trials with smartphone and control sounds. The N2 oddball effect was significantly larger on smartphone versus control sound trials, Cohen’s d = .28 (small to medium effect size; Fig 6), pointing to the increased recruitment of cognitive control processes on smartphone trials. There was no significant difference in P2 or P3 oddball effect between the sound conditions (Table 8).

Fig 6. N2 ERPs for trial frequency and sound conditions.


N2 ERP amplitudes on rare and frequent trials with smartphone and control sounds. The oddball effect is the difference between rare and frequent trials. N2 is a negative-going potential, thus smaller values indicate larger ERP amplitude. A) Bar chart of mean N2 amplitudes. B) ERP waveforms for frequent trials with control sounds (black line), frequent trials with smartphone sounds (red line), rare trials with control sounds (blue line), and rare trials with smartphone sounds (green line). C) Scalp maps of N2 at 300 ms for frequent and rare trials after delivery of the sound stimulus.

Table 8. ERP oddball effect between the smartphone and control sound trials.
ERP Smartphone Oddball Effect M (SD) Control Sound Oddball Effect M (SD) Mdiff 95% CI Lower 95% CI Upper df t p
P2 at Fz 0.09 μV (2.83) 0.76 μV (2.48) -0.67 -1.75 0.40 53 -1.26 .213
N2 at F3 a -0.67 μV (2.62) 0.35 μV (2.88) -1.02 -2.03 -0.01 53 -2.03 .047
P3 at Pz 1.47 μV (3.46) 1.52 μV (4.53) -0.05 1.16 0.58 53 -0.08 .936

ERP oddball effect amplitudes (in microvolts) on the oddball task for trials with smartphone and control sounds. ERPs were time-locked to the presentation of the visual stimulus.

a N2 is a negative-going potential, thus smaller values indicate larger ERP amplitude. Dependent Variable = ERP mean amplitude oddball effect. CI = confidence interval.

Overall ERPs and smartphone addiction

Linear models examining overall ERP mean amplitudes with smartphone addiction as a continuous predictor variable revealed that people higher in smartphone addiction had a significantly smaller overall P2, ƒ2 = .09 (small effect size; Table 9 and Fig 7). N2 and P3 did not significantly differ as a function of individual differences in smartphone addiction.

Table 9. Overall P2, N2, and P3 as a function of individual differences in smartphone addiction.
ERP Predictor b SE 95% CI Lower 95% CI Upper df t p
P2 at Fz SAPS b -3.04 1.44 -5.94 -0.15 52 -2.11 .039
N2 at F3 SAPS -2.47 1.37 -5.22 0.28 52 -1.80 .077
P3 at Pz SAPS -2.26 1.94 -6.14 1.63 52 -1.16 .250

ERP amplitudes (in microvolts) on the oddball task as a function of individual differences in smartphone addiction. ERPs were time-locked to the presentation of the visual stimulus. a N2 is a negative-going potential, thus smaller values indicate larger ERP amplitude.

bSAPS = Smartphone Addiction Proneness Scale (mean centered). Dependent variable = ERP mean amplitude. CI = Confidence Interval.

Fig 7. Correlation between smartphone addiction and P2 ERP.


Correlation between smartphone addiction proneness and the overall P2 mean amplitude (P2 at site Fz within the 150–210 ms time window), indicating that people with higher levels of smartphone addiction show reduced neural activation implicated in early attentional mechanisms.

ERP oddball effect and smartphone addiction

Linear models revealed no significant differences in P2, N2, or P3 oddball effects as a function of individual differences in smartphone addiction (Table 10).

Table 10. Regressions for ERP oddball effect predicted by smartphone addiction.
Oddball ERP Predictor b SE 95% CI Lower 95% CI Upper df t p
P2 at Fz SAPS b 0.78 0.72 -0.65 2.22 52 1.10 .278
N2 at F3a SAPS 0.90 0.81 -0.73 2.54 52 1.11 .273
P3 at Pz SAPS -0.51 1.36 -3.24 2.22 52 -0.38 .708

P2, N2, and P3 oddball effect as a function of individual differences in smartphone addiction. ERPs were time-locked to the presentation of the visual stimulus.

a N2 is a negative going potential, thus smaller values indicate larger ERP amplitude.

b SAPS = Smartphone Addiction Proneness Scale (mean centered). Dependent Variable = ERP mean amplitude oddball effect. CI = Confidence Interval.

Overall ERPs by sound condition and smartphone addiction

Linear models examining differences in mean ERP amplitudes as a function of trials with smartphone versus control sounds and smartphone addiction revealed no significant interaction between smartphone addiction and sound trials for P2, b = 1.01, SE = 0.66, t(54) = -1.52, p = .133, 95% CI [-2.33, 0.31], N2, b = 0.23, SE = 0.63, t(54) = 0.37, p = .715, 95% CI [-1.03, 1.49], or P3, b = 0.16, SE = 0.81, t(54) = 0.20, p = .841, 95% CI [-1.45, 1.77].

ERP oddball effects by sound condition and smartphone addiction

A linear model with the mean ERP amplitude oddball effect as a dependent variable and trials with smartphone versus control sounds and smartphone addiction as a continuous predictor variable revealed no significant interactions for P2, b = 2.31, SE = 1.47, t(54) = 1.57, p = .119, 95% CI [-0.59, 5.20], N2, b = 0.54, SE = 1.46, t(54) = 0.37, p = .710, 95% CI [-2.37, 3.48], or P3, b = 2.09, SE = 1.73, t(54) = 1.21, p = .232, 95% CI [-1.36, 5.54].

Discussion

The current study investigated the effects of smartphone notifications on behavioral and neural markers of top-down executive functions, namely cognitive control and attentional processes known to play a role in stimulus orienting and categorization. The study further aimed to examine whether these effects varied as a function of individual differences in self-reported proneness to smartphone addiction.

General effects of smartphone notifications on cognitive control

In line with our predictions, people responded slower on trials paired with smartphone (vs. control) sound notifications, although this effect was small. This finding is in line with recent work demonstrating, across two studies, that reaction times on trials with (vs. without) phone notifications were significantly slower [58].

These findings also offer partially contrasting evidence to previous literature, which found that exposure to smartphone notifications on a sustained attention task [23] and a Go/No-Go task [44] increased the speed of responding to target stimuli, which was also linked to an increase in participant error rates. We, however, found that error rates did not differ between the sound conditions. An important distinction between the current study and Stothart et al. [23] is that the current study employed a generic notification sound, whereas participants in Stothart and colleagues’ study received notifications from their personal devices. Though people may well react differently to their own smartphone notifications, the current study sought to understand the general influence of smartphone notifications on staying focused on the task at hand.

We also found that the oddball effect did not differ between the smartphone notification and control sound trials. Prior work has demonstrated similar null effects of smartphone notifications on behavioral measures of cognitive control [44, 59]. In addition, recent findings demonstrate a habituation effect on reaction times during a cognitive task following the presentation of a phone notification [58]. It could be the case that the behavioral effects of smartphone notifications on cognitive control were lost due to habituation to the auditory stimuli.

To assess habituation, we examined whether the null effects were present separately in frequent and rare trials. Frequent trials were presented 768 times, whereas rare trials were presented 96 times. Thus, if habituation did occur, a more robust null effect should have been observed for frequent trials. Interestingly, results indicated that smartphone notification sounds were associated with delayed response speed on frequently presented, but not rarely presented, trials. This suggests that habituation to the sound stimuli may not explain the null effect for cognitive control. It could be that greater cognitive demand on rare trials served to prevent distraction from the smartphone notifications. However, during less cognitively demanding frequent trials, smartphone notifications did appear to affect how quickly people responded. This finding is in line with the cognitive load theory of selective attention [14], which posits that when cognitive load is high, such as for rare trials, distractor interference is less likely to occur. Yet, when cognitive load is low, such as for frequent trials, distractor interference is more likely.

Contrary to our predictions, we found that N2 amplitudes, as well as the N2 oddball effect, were larger on trials with smartphone compared to control sounds, suggesting an upregulation of cognitive control when people were exposed to smartphone notifications. This finding is in contrast to prior work, which found N2 amplitudes on a cognitive control task to be lowest when a smartphone notification was delivered during the task compared to before the task or not at all. We did not find this effect for early (P2) and late (P3) attentional processes, which are typically associated with stimulus orienting and categorization. N2 is considered to be involved in cognitive control processes including strategy regulation, immediate action control, novel stimuli detection, and orienting of visual attention [31]. It appears that, regardless of the varying cognitive load of trial frequency (i.e., rare vs. frequent), trials with smartphone notifications were associated with greater neural activation underlying cognitive control.

The role of individual differences in smartphone addiction

In this study we found that individual differences in smartphone addiction did not moderate the effects of smartphone notifications as a function of trial type overall. However, examining frequent and rare trials separately, we found that higher levels of smartphone addiction were associated with significantly slower responses on trials with smartphone sounds only on frequent, but not on rare trials, suggesting that for contexts involving greater cognitive load, such as perceptually novel stimuli, smartphone notifications may impact cognitive capacities to a lesser extent.

We also found that higher levels of smartphone addiction were associated with decreased P2 activation overall. P2 activation is considered to reflect a capacity to withdraw attentional resources away from the stimulus [36, 42]. One explanation for this result may be that these individuals are more frequently exposed to notifications on their smartphones, and thus may be less likely to withdraw attention away from the smartphone notifications.

Finally, we found no support for our hypothesis that higher levels of smartphone addiction would be associated with worse cognitive control, reflected by a smaller ERP oddball effect. This suggests that cognitive control assessed as the oddball effect, or the difference in neural activation between frequent and rare trials, did not vary according to levels of reported smartphone addiction. This finding aligns with neither the decreased [44] nor the increased [43] N2 amplitudes reported in prior studies for people higher in smartphone addiction.

Furthermore, our results revealed that higher levels of smartphone addiction were not significantly associated with changes in neural activity between sound conditions. Trials with smartphone notification sounds compared to control sounds did not differ in terms of ERP amplitude as a function of smartphone addiction. Nor were smartphone addiction levels associated with a difference in the oddball effect between sound conditions. Thus, levels of smartphone addiction had no effect on neural activity associated with cognitive control for smartphone notification sounds versus control sounds.

In conclusion, we found partial support for the proposal that cognitive control may be the mechanism for the effects of smartphone notifications reported in the literature, such as their potential unwanted interruption of social interactions [4], walking and driving [5, 6], and educational activities [7, 8]. Further research is needed to clarify the extent to which cognitive control serves as the primary underlying neural mechanism by which these smartphone interruptions negatively impact day-to-day outcomes. It may be that an alternative cognitive process, such as working memory, is more impacted by smartphone notifications and could thus provide a more complete explanation of how these interruptions influence people’s lives.

From the perspective of cognitive resource allocation (i.e., cognitive load theory [14]), one explanation of the findings could be that participants developed a mental framework associating the amount of cognitive resources necessary to attend to smartphone notifications with the amount of top-down control available to sacrifice during task performance. As such, the results suggest that attentional control resources were more easily sacrificed on simple (i.e., frequent) trials, and less so on more difficult (i.e., rare) trials. The lack of a habituation effect, reflected by the null findings for smartphone addiction, further supports this interpretation.

Limitations and future directions

One limitation of this study is that smartphone addiction was assessed with a self-report measure, although this measure has good psychometric properties [50]. Future work may examine the link between cognitive control and more objective measures of smartphone use, such as data from smartphone use tracking apps. Furthermore, reported levels of smartphone addiction proneness in the current sample were on the lower end of the possible range. Future work should investigate the cognitive effects of smartphone notifications in people who report higher levels of problematic smartphone use. In addition, the present study included a single unfamiliar control sound. Future studies may add additional control sounds or include trials without sounds for comparison.

Although the N2 is a common approach for measuring electrophysiological markers of conflict monitoring processes, it is not the only ERP component worth examining. The current study did not assess alternative conflict monitoring ERPs, such as the error-related negativity (ERN) component, which is said to capture ERP activity associated with incorrect responses [60, 61]. We were primarily interested in neural activity during correct task responses in order to measure the neural effects of smartphone notifications without the influence of error-related neural activity. Future work, however, should examine these alternative ERP components to further elucidate the way smartphone notifications influence top-down control processes.

The current study employed mixed linear model analyses for behavioral measures, but relied on grand-averaged ERPs because trial-level ERP data were not available, which reduces the within-participant variance that can be explained. Future ERP analyses may be improved by including within-subject variance in the regression models. Doing so may provide a more complete picture of how smartphone use affects neural processes underlying attention and cognitive control. Along a similar line, many of the effect sizes were small, thus interpretation of these findings should be approached with caution.

Finally, the generalizability of the findings from this study is limited to mostly white undergraduate college students in the Midwest/Southern United States. Though it is critically important to understand the cognitive effects of smartphone use in college students, more diverse samples are needed. It may be that college students in general have relatively greater levels of attention and cognitive control, whereas people in less cognitively demanding career fields may show a different pattern of results. For example, prior work has found that responses on a cognitive task following smartphone notifications were slower for adolescents (vs. mid-life adults) [58]. This suggests that younger populations may be particularly vulnerable to the distractions of smartphone notifications. Future work should investigate how smartphone notifications, and smartphone use in general, influence the cognitive capacities of participants from different age groups and sociocultural backgrounds. The digital age is characterized by the seemingly constant use of modern technologies, more so now than ever as a result of social distancing and isolation. We must strive to more fully understand how these technologies influence our cognitive functioning. By doing so, we can attempt to maximize the benefits, while minimizing the costs, of using these incredibly powerful technologies.

Data Availability

We have created a publicly available OSF repository that includes the R scripts and data files to improve transparency. It can be accessed here: https://osf.io/bj7zf/. In addition, we created a publicly available GitHub repository for the project where the data and other coding script materials will be made available upon acceptance of this manuscript for publication: https://github.com/jupshaw/SMARTPHONES-AND-COGNITIVE-CONTROL.

Funding Statement

The authors received no specific funding for this work.

References

  • 1.Montag C, Diefenbach S. Towards Homo Digitalis: Important Research Issues for Psychology and the Neurosciences at the Dawn of the Internet of Things and the Digital Society. Sustainability. 2018. Feb 6;10(2):415. [Google Scholar]
  • 2.Taylor K, Silver L. Smartphone Ownership is Growing Rapidly Around the World but Not Always Equally. Pew Research Center; 2019 Feb 5. https://www.pewresearch.org/global/wp-content/uploads/sites/2/2019/02/Pew-Research-Center_Global-Technology-Use-2018_2019-02-05.pdf
  • 3.Jung Y. What a smartphone is to me: understanding user values in using smartphones: User-value perspective on a smartphone. Info Systems J. 2014. Jul;24(4):299–321. [Google Scholar]
  • 4.Przybylski AK, Weinstein N. Can you connect with me now? How the presence of mobile communication technology influences face-to-face conversation quality. Journal of Social and Personal Relationships. 2013. May;30(3):237–46. [Google Scholar]
  • 5.Hyman IE Jr, Boss SM, Wise BM, McKenzie KE, Caggiano JM. Did you see the unicycling clown? Inattentional blindness while walking and talking on a cell phone. Applied Cognitive Psychology. 2010;24(5):597–607. [Google Scholar]
  • 6.Strayer DL, Johnston WA. Driven to Distraction: Dual-Task Studies of Simulated Driving and Conversing on a Cellular Telephone. Psychol Sci. 2001. Nov;12(6):462–6. doi: 10.1111/1467-9280.00386 [DOI] [PubMed] [Google Scholar]
  • 7.Lepp A, Barkley JE, Karpinski AC. The relationship between cell phone use, academic performance, anxiety, and Satisfaction with Life in college students. Computers in Human Behavior. 2014. Feb;31:343–50. [Google Scholar]
  • 8.Rosen LD, Carrier LM, Pedroza JA, Elias S, O’Brien KM, Karina Kim JL, et al. The Role of Executive Functioning and Technological Anxiety (FOMO) in College Course Performance as Mediated by Technology Usage and Multitasking Habits. Psicología Educativa. 2018;24(1):14–25. doi: 10.5093/psed2018a3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Twenge JM. More Time on Technology, Less Happiness? Associations Between Digital-Media Use and Psychological Well-Being. Curr Dir Psychol Sci. 2019. Aug;28(4):372–9. [Google Scholar]
  • 10.Vannucci A, Flannery KM, Ohannessian CM. Social media use and anxiety in emerging adults. Journal of Affective Disorders. 2017. Jan;207:163–6. doi: 10.1016/j.jad.2016.08.040 [DOI] [PubMed] [Google Scholar]
  • 11.Rosen LD, Lim AF, Felt J, Carrier LM, Cheever NA, Lara-Ruiz JM, et al. Media and technology use predicts ill-being among children, preteens and teenagers independent of the negative health impacts of exercise and eating habits. Computers in Human Behavior. 2014. Jun;35:364–75. doi: 10.1016/j.chb.2014.01.036 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Twenge JM, Joiner TE, Rogers ML, Martin GN. Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time. Clinical Psychological Science. 2018. Jan;6(1):3–17. [Google Scholar]
  • 13.Orben A, Przybylski AK. Screens, Teens, and Psychological Well-Being: Evidence From Three Time-Use-Diary Studies. Psychol Sci. 2019. May;30(5):682–96. doi: 10.1177/0956797619830329 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Lavie N, Hirst A, de Fockert JW, Viding E. Load Theory of Selective Attention and Cognitive Control. Journal of Experimental Psychology: General. 2004;133(3):339–54. doi: 10.1037/0096-3445.133.3.339 [DOI] [PubMed] [Google Scholar]
  • 15.Wilmer HH, Sherman LE, Chein JM. Smartphones and Cognition: A Review of Research Exploring the Links between Mobile Technology Habits and Cognitive Functioning. Front Psychol. 2017. Apr 25;8:605. doi: 10.3389/fpsyg.2017.00605 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Hadar A, Hadas I, Lazarovits A, Alyagon U, Eliraz D, Zangen A. Answering the missed call: Initial exploration of cognitive and electrophysiological changes associated with smartphone use and abuse. Weinstein AM, editor. PLoS ONE. 2017. Jul 5;12(7). doi: 10.1371/journal.pone.0180094 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Abramson MJ, Benke GP, Dimitriadis C, Inyang IO, Sim MR, Wolfe RS, et al. Mobile telephone use is associated with changes in cognitive function in young adolescents. Bioelectromagnetics: Journal of the Bioelectromagnetics Society, The Society for Physical Regulation in Biology and Medicine, The European Bioelectromagnetics Association. 2009;30(8):678–86. doi: 10.1002/bem.20534 [DOI] [PubMed] [Google Scholar]
  • 18.Alloway TP, Alloway RG. The impact of engagement with social networking sites (SNSs) on cognitive skills. Computers in Human Behavior. 2012. Sep;28(5):1748–54. [Google Scholar]
  • 19.Moisala M, Salmela V, Hietajärvi L, Salo E, Carlson S, Salonen O, et al. Media multitasking is associated with distractibility and increased prefrontal activity in adolescents and young adults. NeuroImage. 2016. Jul;134:113–21. doi: 10.1016/j.neuroimage.2016.04.011 [DOI] [PubMed] [Google Scholar]
  • 20.Ophir E, Nass C, Wagner AD. Cognitive control in media multitaskers. PNAS. 2009. Sep 15;106(37):15583–7. doi: 10.1073/pnas.0903620106 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Sanbonmatsu DM, Strayer DL, Medeiros-Ward N, Watson JM. Who Multi-Tasks and Why? Multi-Tasking Ability, Perceived Multi-Tasking Ability, Impulsivity, and Sensation Seeking. Chambers C, editor. PLoS ONE. 2013. Jan 23;8(1):e54402. doi: 10.1371/journal.pone.0054402 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Uncapher MR, Thieu MK, Wagner AD. Media multitasking and memory: Differences in working memory and long-term memory. Psychon Bull Rev. 2016. Apr;23(2):483–90. doi: 10.3758/s13423-015-0907-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Stothart C, Mitchum A, Yehnert C. The attentional cost of receiving a cell phone notification. Journal of Experimental Psychology: Human Perception and Performance. 2015. Aug;41(4):893–7. doi: 10.1037/xhp0000100 [DOI] [PubMed] [Google Scholar]
  • 24.Thornton B, Faires A, Robbins M, Rollins E. The Mere Presence of a Cell Phone May be Distracting: Implications for Attention and Task Performance. Social Psychology. 2014. Nov 1;45(6):479–88. [Google Scholar]
  • 25.Ward AF, Duke K, Gneezy A, Bos MW. Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity. Journal of the Association for Consumer Research. 2017. Apr;2(2):140–54. [Google Scholar]
  • 26.Wilmer HH, Hampton WH, Olino TM, Olson IR, Chein JM. Wired to be connected? Links between mobile technology engagement, intertemporal preference and frontostriatal white matter connectivity. Social cognitive and affective neuroscience. 2019;14(4):367–79. doi: 10.1093/scan/nsz024 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.van Schouwenburg M, Aarts E, Cools R. Dopaminergic Modulation of Cognitive Control: Distinct Roles for the Prefrontal Cortex and the Basal Ganglia. CPD. 2010. Jun 1;16(18):2026–32. doi: 10.2174/138161210791293097 [DOI] [PubMed] [Google Scholar]
  • 28.Loh KK, Kanai R. Higher Media Multi-Tasking Activity Is Associated with Smaller Gray-Matter Density in the Anterior Cingulate Cortex. Watanabe K, editor. PLoS ONE. 2014. Sep 24;9(9):e106698. doi: 10.1371/journal.pone.0106698 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Katayama J, Polich J. Stimulus context determines P3a and P3b. Psychophysiology. 1998;35(1):23–33. [PubMed] [Google Scholar]
  • 30.Polich J. Updating P300: An integrative theory of P3a and P3b. Clinical Neurophysiology. 2007. Oct;118(10):2128–48. doi: 10.1016/j.clinph.2007.04.019 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Theeuwes J. Stimulus-driven capture and attentional set: Selective search for color and visual abrupt onsets. Journal of Experimental Psychology: Human Perception and Performance. 1994;20(4):799–806. doi: 10.1037//0096-1523.20.4.799 [DOI] [PubMed] [Google Scholar]
  • 32.Zabelina DL, Ganis G. Creativity and cognitive control: Behavioral and ERP evidence that divergent thinking, but not real-life creative achievement, relates to better cognitive control. Neuropsychologia. 2018. Sep;118:20–8. doi: 10.1016/j.neuropsychologia.2018.02.014 [DOI] [PubMed] [Google Scholar]
  • 33.Folstein JR, Van Petten C. Influence of cognitive control and mismatch on the N2 component of the ERP: a review. Psychophysiology. 2008. Jan;45(1):152–70. doi: 10.1111/j.1469-8986.2007.00602.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Crowley KE, Colrain IM. A review of the evidence for P2 being an independent component process: age, sleep and modality. Clinical Neurophysiology. 2004. Apr;115(4):732–44. doi: 10.1016/j.clinph.2003.11.021 [DOI] [PubMed] [Google Scholar]
  • 35.Bocquillon P, Bourriez JL, Palmero-Soler E, Molaee-Ardekani B, Derambure P, Dujardin K. The spatiotemporal dynamics of early attention processes: a high-resolution electroencephalographic study of N2 subcomponent sources. Neuroscience. 2014. Jun 20;271:9–22. doi: 10.1016/j.neuroscience.2014.04.014 [DOI] [PubMed] [Google Scholar]
  • 36.Eimer M, Kiss M, Press C, Sauter D. The roles of feature-specific task set and bottom-up salience in attentional capture: an ERP study. Journal of Experimental Psychology: Human Perception and Performance. 2009. Oct;35(5):1316. doi: 10.1037/a0015872 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.García-Larrea L, Lukaszewicz A-C, Mauguiére F. Revisiting the oddball paradigm. Non-target vs neutral stimuli and the evaluation of ERP attentional effects. Neuropsychologia. 1992;30(8):723–41. doi: 10.1016/0028-3932(92)90042-k [DOI] [PubMed] [Google Scholar]
  • 38.Novak G, Ritter W, Vaughan HG. Mismatch detection and the latency of temporal judgments. Psychophysiology. 1992;29(4):398–411. [DOI] [PubMed] [Google Scholar]
  • 39.Jang SH, Kwon HG. The direct pathway from the brainstem reticular formation to the cerebral cortex in the ascending reticular activating system: a diffusion tensor imaging study. Neuroscience Letters. 2015. Oct 8;606:200–3. doi: 10.1016/j.neulet.2015.09.004 [DOI] [PubMed] [Google Scholar]
  • 40.Debener S, Makeig S, Delorme A, Engel AK. What is novel in the novelty oddball paradigm? Functional significance of the novelty P3 event-related potential as revealed by independent component analysis. Cognitive Brain Research. 2005. Mar 1;22(3):309–21. doi: 10.1016/j.cogbrainres.2004.09.006 [DOI] [PubMed] [Google Scholar]
  • 41.Spencer KM, Dien J, Donchin E. Spatiotemporal analysis of the late ERP responses to deviant stimuli. Psychophysiology. 2001. Mar;38(2):343–58. [PubMed] [Google Scholar]
  • 42.Folk CL, Remington RW, Johnston JC. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human perception and performance. 1992. Nov;18(4):1030. [PubMed] [Google Scholar]
  • 43.Chen J, Liang Y, Mai C, Zhong X, Qu C. General deficit in inhibitory control of excessive smartphone users: Evidence from an event-related potential study. Frontiers in psychology. 2016. Apr 14;7:511. doi: 10.3389/fpsyg.2016.00511 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Kim S-K, Kim S-Y, Kang H-B. An Analysis of the Effects of Smartphone Push Notifications on Task Performance with regard to Smartphone Overuse Using ERP. de Albuquerque VHC, editor. Computational Intelligence and Neuroscience. 2016. Jun 5;2016. doi: 10.1155/2016/5718580 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Navon D. Forest before trees: The precedence of global features in visual perception. Cognitive psychology. 1977;9(3):353–83. [Google Scholar]
  • 46.Bultitude JH, Rafal RD, List A. Prism adaptation reverses the local processing bias in patients with right temporo-parietal junction lesions. Brain. 2009. Jun;132(6):1669–77. doi: 10.1093/brain/awp096 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Becker SI, Folk CL, Remington RW. Attentional capture does not depend on feature similarity, but on target-nontarget relations. Psychological science. 2013. May;24(5):634–47. doi: 10.1177/0956797612458528 [DOI] [PubMed] [Google Scholar]
  • 48.Luck SJ. An introduction to the event-related potential technique. MIT press; 2014. Jun 20. [Google Scholar]
  • 49.Audacity Team. Audacity(R): Free audio editor and recorder [Computer software]. Version 2.3.2. Retrieved Jun 2019.
  • 50.Kim DI, Chung YJ, Lee JY, Kim MC, Lee YH, Kang EB, et al. Development of smartphone addiction proneness scale for adults: Self-report. The Korean Journal of Counseling. 2012;29:629–44. [Google Scholar]
  • 51.Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods. 2004. Mar;134(1):9–21. doi: 10.1016/j.jneumeth.2003.10.009 [DOI] [PubMed] [Google Scholar]
  • 52.Pion-Tonachini L, Kreutz-Delgado K, Makeig S. ICLabel: An automated electroencephalographic independent component classifier, dataset, and website. NeuroImage. 2019. Sep;198:181–97. doi: 10.1016/j.neuroimage.2019.05.026 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Brooks JL, Zoumpoulaki A, Bowman H. Data-driven region-of-interest selection without inflating Type I error rate: Safe data-driven ROI selection. Psychophysiology. 2017. Jan;54(1):100–13. [DOI] [PubMed] [Google Scholar]
  • 54.Zhang W, Luck SJ. Feature-based attention modulates feedforward visual processing. Nat Neurosci. 2009. Jan;12(1):24–5. doi: 10.1038/nn.2223 [DOI] [PubMed] [Google Scholar]
  • 55.Ouyang G, Herzmann G, Zhou C, Sommer W. Residue iteration decomposition (RIDE): A new method to separate ERP components on the basis of latency variability in single trials: RIDE: A new method to separate ERP components. Psychophysiology. 2011. Dec;48(12):1631–47. [DOI] [PubMed] [Google Scholar]
  • 56.RStudio Team. RStudio: Integrated development for R. RStudio, Inc., Boston, MA; 2015. [Google Scholar]
  • 57.Kuznetsova A, Brockhoff PB, Christensen RH. lmerTest package: tests in linear mixed effects models. Journal of statistical software. 2017. Dec 6;82:1–26. [Google Scholar]
  • 58.Whiting WL, Murdock KK. Notification alert! Effects of auditory text alerts on attention and heart rate variability across three developmental periods. Quarterly Journal of Experimental Psychology. 2021. Nov;74(11):1900–13. doi: 10.1177/17470218211041851 [DOI] [PubMed] [Google Scholar]
  • 59.Johannes N, Veling H, Verwijmeren T, Buijzen M. Hard to Resist?: The Effect of Smartphone Visibility and Notifications on Response Inhibition. Journal of Media Psychology. 2019. Oct;31(4):214–25. [Google Scholar]
  • 60.Falkenstein M, Hohnsbein J, Hoormann J, Blanke L. Effects of crossmodal divided attention on late ERP components. II. Error processing in choice reaction tasks. Electroencephalography and clinical neurophysiology. 1991. Jun 1;78(6):447–55. doi: 10.1016/0013-4694(91)90062-9 [DOI] [PubMed] [Google Scholar]
  • 61.Gehring WJ, Goss B, Coles MG, Meyer DE, Donchin E. A neural system for error detection and compensation. Psychological science. 1993. Nov;4(6):385–90. [Google Scholar]

Decision Letter 0

Emily Chenette

14 Jul 2022

PONE-D-22-03238 The hidden cost of a smartphone: The effects of smartphone notifications on cognitive control from a behavioral and electrophysiological perspective. PLOS ONE

Dear Dr. Upshaw,

Thank you for submitting your manuscript to PLOS ONE; I sincerely apologise for the unusually delayed review timeframe. Your manuscript has been assessed by one reviewer, whose comments are appended below. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Although the reviewer comments that "The use of N2/P2/P3 components to index cognitive control changes in response to smartphone notifications was interesting and original" and "The statistical analyses were careful and accurate", they raise a number of concerns regarding the description of the rationale and methodology, as well as the discussion of the results. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please note that we have only been able to secure a single reviewer to assess your manuscript. We are issuing a decision on your manuscript at this point to prevent further delays in the evaluation of your manuscript. Please be aware that the editor who handles your revised manuscript might find it necessary to invite additional reviewers to assess this work once the revised manuscript is submitted. However, we will aim to proceed on the basis of this single review if possible.

Please submit your revised manuscript by Aug 27 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Emily Chenette

Editor in Chief

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper adds to the increasing number of publications on the cognitive effects of smartphone use. Specifically, the authors investigate its putative effects on attention and cognitive control. The authors use a clever design coupling a global/local task with an oddball paradigm to examine whether trials preceded by smartphone notification sounds would be subjected to greater involuntary distraction than trials preceded by control sounds. Changes in ERP components (N2, P2, P3) that underlie attentional orienting and cognitive control, as well as changes in behavioral responses (accuracy and response times), were collected and carefully analyzed through multilevel linear models. In addition, a smartphone addiction questionnaire was collected and included in the analysis.

The results suggest that smartphone notifications do not impair cognitive control processes. Rather, they are associated with deploying less control only on the easiest conditions, while control on the hardest trials seems unimpaired, as shown by both behavioral data and P2 and P3 findings. This result seems to be confirmed by the lack of reliable interactions between scores on the smartphone addiction scale and cognitive control effects (behavioral or ERPs). Even the few significant effects (effects of notification on common targets, effects on the N2 component) seem small in terms of effect sizes.

Overall, I found the paper interesting and well-written. The use of N2/P2/P3 components to index cognitive control changes in response to smartphone notifications was interesting and original. The statistical analyses were careful and accurate. However, I do have some concerns, which I outline below.

1. The paper does not seem to want to speculate about what these results mean. A tentative explanation would be that participants have learned an association between smartphone sounds and the amount of attentional control they can remove from the task, so that easier task condition can be sacrificed (to pay attention to the smartphone) but difficult trials cannot. The lack of habituation effects suggests that such an association is also well ingrained.

2. The justification for the neural (ERP) effects is somewhat thin. The various ERPs seemed to be introduced as a series of “this seems relevant, and this seems relevant also…”. But why were they chosen? Other ERPs could be chosen and justified. For instance, one could argue that ERNs and FRNs, which are associated to on-line adjustments of control, could have been equally relevant. Thus, the selection must be based on some assumptions of what facet of cognitive control the authors intend to focus on. I am imagining a section, in the introduction, that sounds more or less like “we considered three ERPs that are commonly accepted in the literature as being related to attention. Each of them [or all of them] tackles one specific aspect (...). Other ERPs were not included because (...)”.

3. A more general question is the nature of cognitive control that is examined in this paper. Like "Attention", "Cognitive control" is a bit of a slippery term, and different authors imply different things. It would be helpful if the authors could situate their ideas about control a bit better in the larger control/executive function literature, which would also help justify the choice of the Global/Local task (as opposed to, for example, the Flanker task, the Stroop task, the CPT task, all of which could have been mixed with the oddball paradigm).

4. Speaking of which, the experimental paradigm is described insufficiently well. Without the figure, it is impossible to understand whether the composing letters are all the same or not, and which letters were used. Even with the figure, it is not explicit what the exact task was (to identify the letter among the small, the big, or both?), although the reader can make some reasonable assumptions.

5. The DOI link to the data (https://doi.org/10.7910/DVN/0NIJWN) does not seem to exist or to be linked to any online resource. Thus, the data cannot be accessed.

Minor points:

6. “The control sound was created in Audacity (v. 2.2.2, [41]), and was a square wave tone closely matched to the smartphone sound in duration, volume, and sound similarity.” (page 15). Is it possible to include a figure of the spectral densities or waveforms of the three sounds used in the experiment?

7. I cannot understand the sentence “or not at all 10% of the time (non-targets; Fig 1B).”??? [Page 14]

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Nov 17;17(11):e0277220. doi: 10.1371/journal.pone.0277220.r002

Author response to Decision Letter 0


25 Aug 2022

Dr. Emily Chenette August 25, 2022

PLOS ONE

Dear Dr. Chenette,

Thank you for your consideration of our manuscript entitled “The hidden cost of a smartphone: The effects of smartphone notifications on cognitive control from a behavioral and electrophysiological perspective” for publication in PLOS ONE. We were very pleased with the reviews we received and have made changes according to your and the reviewers’ suggestions (in blue font in the manuscript). We detail these changes below.

Response to Editor and Journal Requirements

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

Author Response:

We have examined our manuscript and made several adjustments; to the best of our knowledge, the revised submission meets PLOS ONE’s style requirements and file naming conventions.

2. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

Author Response:

We have developed a private github repository that will store the data to be shared and will be made public upon publication acceptance. This is the link to the github repository: https://github.com/jupshaw/SMARTPHONES-AND-COGNITIVE-CONTROL

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

Author Response:

We have included an Ethics statement subsection in the methods section.

“Ethics statement

The study was approved by the local Institutional Review Board at the University of Arkansas and was assigned the protocol number 1807134340. All participants provided written informed consent to participate. Participants were compensated with course credit.” (pp. 9)

Reviewer 1 Comments to the Author

Reviewer #1: The paper adds to the increasing number of publications on the cognitive effects of smartphone use. Specifically, the authors investigate its putative effects on attention and cognitive control. The authors use a clever design coupling a global/local task with an oddball paradigm to examine whether trials preceded by smartphone notification sounds would be subjected to greater involuntary distraction than trials preceded by control sounds. Changes in ERP components (N2, P2, P3) that underlie attentional orienting and cognitive control, as well as changes in behavioral responses (accuracy and response times), were collected and carefully analyzed through multilevel linear models. In addition, a smartphone addiction questionnaire was collected and included in the analysis.

The results suggest that smartphone notifications do not impair cognitive control processes. Rather, they are associated with deploying less control only on the easiest conditions, while control on the hardest trials seems unimpaired, as shown by both behavioral data and P2 and P3 findings. This result seems to be confirmed by the lack of reliable interactions between scores on the smartphone addiction scale and cognitive control effects (behavioral or ERPs). Even the few significant effects (effects of notification on common targets, effects on the N2 component) seem small in terms of effect sizes.

Overall, I found the paper interesting and well-written. The use of N2/P2/P3 components to index cognitive control changes in response to smartphone notifications was interesting and original. The statistical analyses were careful and accurate. However, I do have some concerns, which I outline below.

1. The paper does not seem to want to speculate about what these results mean. A tentative explanation would be that participants have learned an association between smartphone sounds and the amount of attentional control they can remove from the task, so that the easier task trials can be sacrificed (to pay attention to the smartphone) but difficult trials cannot. The lack of habituation effects suggests that such an association is also well ingrained.

Author response:

We appreciate the reviewer comments on the need for a stronger speculation about the meaning of our results. We added a new paragraph to the conclusion section to address this concern.

“From the perspective of cognitive resource allocation (i.e., cognitive load theory; Lavie et al., 2014), one explanation of the findings could be that participants developed a mental framework of association between the amount of cognitive resources necessary to attend to smartphone notifications and the amount of top-down control available to sacrifice during task performance. As such, the results suggest that attentional control resources were more easily sacrificed on simple (i.e., frequent) trials, and less so on more difficult (i.e., rare) trials. The lack of a habituation effect, reflected by the null findings for smartphone addiction, further supports this interpretation.” (pp. 30)

2. The justification for the neural (ERP) effects is somewhat thin. The various ERPs seemed to be introduced as a series of “this seems relevant, and this seems relevant also…”. But why were they chosen? Other ERPs could be chosen and justified. For instance, one could argue that ERNs and FRNs, which are associated to on-line adjustments of control, could have been equally relevant. Thus, the selection must be based on some assumptions of what facet of cognitive control the authors intend to focus on. I am imagining a section, in the introduction, that sounds more or less like “we considered three ERPs that are commonly accepted in the literature as being related to attention. Each of them [or all of them] tackles one specific aspect (...). Other ERPs were not included because (...)”.

Author response:

We appreciate the reviewer bringing this to our attention and we agree that further discussion for the justification of using the specific ERPs is warranted. We included the following pieces of text in the Introduction and Discussion sections, under “The Present Study” and “Limitations and future directions” subheadings, respectively, to address this issue.

“Cognitive control was measured using the oddball effect, which is calculated by subtracting reaction times (RT) and ERP amplitudes on frequent target trials from those on rare target trials. Better cognitive control is considered to be reflected by a smaller RT oddball effect and a larger ERP oddball effect [32]. We examined three ERPs, the P200 (P2), N200 (N2), and P300 (P3), which are commonly accepted in the literature as underlying neural markers of electrical cortical activation associated with attention and cognitive control processes [30,33,34].

This paradigm and various adaptations have been used in prior studies indicating a family of frontocentral N2 components related to cognitive control [33]. The N2 ERP component is the second negative peak along the average ERP waveform which generally occurs between 200-350 ms after stimulus onset near frontocentral and central electrode site. N2 is considered to be involved in strategy regulation, feedback processing, immediate action control, novel stimuli detection, and visual attention orienting [33]. Though multiple N2 subcomponents exist, the current study focused on a frontocentral N2 component related to cognitive control processes of response inhibition, response conflict, and error monitoring. This anterior N2 component is said to be generated from the anterior cingulate gyrus [35] and is associated with top-down control of attention [36]. Based on previous literature, we expected that participants would respond more slowly and would show a smaller N2 oddball effect (i.e., worse cognitive control) on trials with the smartphone notification (vs. control) sounds.

We examined the P2 ERP as it is likely to be affected by exposure to smartphone notifications. The P2 ERP is the second positive peak along the average ERP waveform which generally occurs between 150-250 ms after stimulus onset near frontal electrode sites [34]. P2 is considered to reflect stimulus monitoring and early attention classification processes and has been shown to demonstrate differential activation between target stimuli conditions in oddball paradigms assessing capacities to withdraw attentional resources away from stimuli [37, 38]. P2 is said to be generated largely as a result of activation within the reticular activating system [39] as a response to input from sensory modalities [34]. If it is the case that smartphone notifications “capture” people’s attention, trials with smartphone notification (vs. control) sounds should elicit a larger P2 ERP.

Among oddball paradigms, an anterior N2 component to frequent targets is often observed in combination with a posterior P3 component to distractor targets, suggesting cognitive processes involved in contextual and memory updating (Debener et al., 2005; Spencer et al., 2001). The P3 ERP component is the third positive peak along the average ERP waveform which generally occurs between 250-500 ms after stimulus onset near large frontoparietal scalp electrode networks [30]. P3 can be divided into two subcomponents. P3 is said to originate from frontal lobe activation for attention-driven stimuli processing, particularly for task-irrelevant neural activity elicited during target stimulus processing [30]. P3 has been considered to be a late cognitive component involved in endogenous decision-making and stimulus categorization originating from the dorsolateral prefrontal cortex (DLPFC) which communicates with the cingulate cortex and parietal structures [35]. P2 and N2 reflect early cognitive processes and are likely to be influenced by exogenous smartphone notifications. P3, on the other hand, should not be affected by smartphone notifications, as P3 is thought to reflect late cognitive processes involved in endogenous decision-making and stimulus categorization [32].” (pp. 6-8)

“Although the N2 is a common approach for measuring electrophysiological markers of conflict monitoring processes, it is not the only ERP component worth examining. The current study did not assess alternative conflict monitoring ERPs, such as the error-related negativity (ERN) component, which is said to capture ERP activity associated with incorrect responses [60,61]. We were primarily interested in neural activity during correct task responses to measure the neural effect smartphone notifications without the influence of error-related neural activity. Future work, however, needs to examine these alternative ERP components to further elucidate the way smartphone notifications influence top-down control processes.” (pp. 31)
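
For illustration only, the RT oddball effect described earlier in this response can be computed per participant as a rare-minus-frequent difference. The brief R sketch below uses simulated data and hypothetical column names, not the study's actual dataset or analysis code.

```r
# Illustrative sketch only: per-participant RT oddball effect computed as the
# rare-trial minus frequent-trial mean RT (simulated data, not the study's).
set.seed(2)
trials <- data.frame(
  participant = rep(1:20, each = 100),
  frequency   = sample(c("frequent", "rare"), 2000, replace = TRUE, prob = c(0.8, 0.2)),
  rt          = rnorm(2000, mean = 550, sd = 80)
)

means <- aggregate(rt ~ participant + frequency, data = trials, FUN = mean)
wide  <- reshape(means, idvar = "participant", timevar = "frequency", direction = "wide")
wide$rt_oddball <- wide$rt.rare - wide$rt.frequent  # smaller values reflect a smaller RT oddball cost
head(wide)
```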

3. A more general question is the nature of cognitive control that is examined in this paper. Like "Attention", "Cognitive control" is a bit of a slippery term, and different authors imply different things. It would be helpful if the authors could situate their ideas about control a bit better in the larger control/executive function literature, which would also help justify the choice of the Global/Local task (as opposed to, for example, the Flanker task, the Stroop task, the CPT task, all of which could have been mixed with the oddball paradigm).

Author response:

We now include the following information in the introduction of the manuscript to further situate our ideas about cognitive control:

“The present study aimed to examine the extent to which smartphone notifications influence cognitive control and attention on an adapted Local/Global hierarchical letter three-stimulus oddball paradigm using event-related potentials (ERPs) and behavioral performance. The Local/Global task requires an individual to attentionally reorient to and update working memory to accurately respond to the presence of a target letter while monitoring for the presence of rare distractor letters presented at opposite levels of visual attention [29,30]. The rare target letter presents an exogenous salient singleton requiring increased recruitment of attentional and cognitive control resources, specifically those necessary for conflict monitoring [31]. Cognitive control paradigms such as the Stroop or Eriksen flanker task also measure conflict monitoring; however, the paradigm in the current study was chosen to heighten engagement of early attentional orienting mechanisms. While monitoring for conflict between frequent, rare, and non-target trials, participants were required to ignore inconsistent visual information between hierarchically nested visual stimuli presented at opposing levels of local or global attention. Therefore, this paradigm allowed us to simultaneously measure the effects of smartphone notifications on attention and cognitive control.” (pp. 5-6)

4. Speaking of which, the experimental paradigm is described insufficiently well. Without the figure, it is impossible to understand whether the composing letters are all the same or not, and which letters were used. Even with the figure, it is not explicit what the exact task was (to identify the letter among the small, the big, or both?), although the reader can make some reasonable assumptions.

Author response:

We now present a more clear description of the task to address this issue:

“Participants were asked to indicate the presence of a target letter by pressing either Yes (1 key) for “Target letter is present,” or No (2 key) for “Target letter is not present” using their right hand on a standard keyboard digit pad. Participants were instructed to detect the presence or absence of the target letter regardless of the size of the letter. Participants completed two practice blocks consisting of 9 trials each. Visual feedback was provided for response accuracy on practice trials (“Correct” or “Incorrect”). Following practice trials, participants completed 16 experimental blocks (960 trials). Before each block, a “target letter” screen was displayed on the computer monitor for 12 seconds. The “target letter” screen used a single red letter (twice the size of the local letters) to identify the specific target letter participants would be aiming to detect in the following block of trials.

Each block of the task consisted of 60 trials presented in pseudo-random order, ensuring that an equal number of sound stimuli were presented on frequent, rare, and non-target trials. On a given block, target letters were displayed on the screen at either the local or global attentional level on 80% of trials, referred to as frequent trials. During the same task block, target letters of the opposite level of attention were displayed on 10% of trials, referred to as rare trials. The final 10% of trials did not include a target letter, referred to as non-target trials (Fig 1B). Global and local letter stimuli were counterbalanced such that big “H”s, “E”s, “S”s, and “A”s were composed of uniform (never mixed within a single letter) sets of smaller letters for an equal presentation of hierarchical letter combinations across the four possible letters to choose from. For example, on a given trial, a big “H” would be comprised of all small “S”s, but never small “S”s and small “E”s. Each block of 60 individual trials was followed by a self-paced break period.” (pp. 10-11)
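
As an illustrative sketch only (not the actual experiment script, and assuming three sound conditions for the example), a block with the 80/10/10 trial proportions described above could be generated and pseudo-randomized as follows.

```r
# Illustrative sketch only (not the experiment script): one 60-trial block with
# 80% frequent, 10% rare, and 10% non-target trials, with three hypothetical
# sound conditions balanced across trial types before shuffling.
set.seed(3)
trial_type <- c(rep("frequent", 48), rep("rare", 6), rep("non-target", 6))
sound      <- rep(c("smartphone", "control", "none"), times = 20)

block <- data.frame(trial_type, sound)
block <- block[sample(nrow(block)), ]   # pseudo-random trial order within the block
table(block$trial_type, block$sound)    # counts remain balanced after shuffling
```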

5. The DOI link to the data (https://doi.org/10.7910/DVN/0NIJWN) does not seem to exist or to be linked to any online resource. Thus, the data cannot be accessed.

Author response:

We created a publicly available github repository for the project where the data and other coding script materials will be made available upon acceptance of this manuscript for publication.

https://github.com/jupshaw/SMARTPHONES-AND-COGNITIVE-CONTROL

Minor points:

6. “The control sound was created in Audacity (v. 2.2.2, [41]), and was a square wave tone closely matched to the smartphone sound in duration, volume, and sound similarity.” (page 15). Is it possible to include a figure of the spectral densities or waveforms of the three sounds used in the experiment?

Author response:

We added the text below and created the following figure to illustrate the spectral densities and waveforms of the auditory stimuli.

“Consequent examination of the lawnmower sound acoustic waveform spectrum revealed unintended technical confounds (e.g., stereoscopic inconsistency creating a perception of spatial movement; Fig 2C), and this sound was not used in subsequent analyses. Sound stimuli were presented pseudo-randomly.” (pp. 12)

“Fig 2. Frequency spectral densities and signal waveforms of the auditory stimuli.” (pp. 12)
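
For readers who want to reproduce this kind of inspection, the following minimal R sketch (assuming an arbitrary 440 Hz square wave at a 44.1 kHz sampling rate and 500 ms duration, not the actual stimulus parameters) plots a tone's waveform and its spectral density with base R only.

```r
# Minimal sketch with assumed parameters (440 Hz square wave, 44.1 kHz sampling
# rate, 500 ms duration): inspect a control-tone waveform and its spectral
# density using base R.
fs   <- 44100
t    <- seq(0, 0.5, by = 1 / fs)
tone <- sign(sin(2 * pi * 440 * t))          # square wave via the sign of a sine

oldpar <- par(mfrow = c(1, 2))
plot(t[1:2000], tone[1:2000], type = "l",
     xlab = "Time (s)", ylab = "Amplitude", main = "Waveform (first ~45 ms)")
spectrum(tone, main = "Spectral density")    # periodogram of the tone
par(oldpar)
```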

7. I cannot understand the sentence “or not at all 10% of the time (non-targets; Fig 1B).”??? [Page 14]

Author response:

We edited this sentence to improve its readability and comprehension.

“On a given block, target letters were displayed on the screen at either the local or global attentional level on 80% of trials, referred to as frequent trials. During the same task block, target letters of the opposite level of attention were displayed on 10% of trials, referred to as rare trials. The final 10% of trials did not include a target letter, referred to as non-target trials (Fig 1B).” (pp. 10-11)

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Francesco Di Russo

10 Oct 2022

PONE-D-22-03238R1 The hidden cost of a smartphone: The effects of smartphone notifications on cognitive control from a behavioral and electrophysiological perspective. PLOS ONE

Dear Dr. Upshaw,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses all the points raised by the reviewer including the data access.

Please submit your revised manuscript by Nov 24 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Francesco Di Russo, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have responded to all of my previous comments to my satisfaction. My only concern is that the GitHub repository is still private, so I have not had a chance to fully examine the LMER models and make sure I understand them correctly. Per PLOS ONE policy, the repository MUST be made public before publication (I would also recommend sharing it through OSF).

Besides that, I have only four VERY MINOR comments that would be great to see fixed before publication:

1. “An intraclass correlation of r = 0.14”. I noted this in my previous review; I am not sure if the authors actually mean ICC (such as ICC = 0.14) or a Pearson correlation.

2, The participant info (number, age, proportion female) is repeated twice, in the special Participant subsection and then again below.

3. Page 15 “ This model was found to provide the best fit of the data.” Best compared to… What? Maybe other, simpler LMER models?

4. Page 16. After explaining the logistic regression model for accuracy, the authors note that “ Error rates were assessed using the same predictor variables .”. But… Aren’t error rates just the complement of accuracies? And doesn’t the logistic model already provide a complete account?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Nov 17;17(11):e0277220. doi: 10.1371/journal.pone.0277220.r004

Author response to Decision Letter 1


18 Oct 2022

Editor and Journal Requirements

When submitting your revision, we need you to address these additional requirements.

1. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Author Response:

We have double-checked all articles to ensure that they have not been retracted. We employed the Zotero reference manager and Google Scholar for retraction checks.

Reviewer Comments to the Author

Reviewer #1: The authors have responded to all of my previous comments to my satisfaction. My only concern is that the GitHub repository is still private, so I have not had a chance to fully examine the LMER models and make sure I understand them correctly. Per PLOS ONE policy, the repository MUST be made public before publication (I would also recommend sharing it through OSF).

Author Response:

We have created a publicly available OSF repository that includes the R scripts and data files to improve transparency. This can be accessed here: https://osf.io/bj7zf/

The data availability statement will also be adjusted accordingly.

MINOR comments:

1. “An intraclass correlation of r = 0.14”. I noted this in my previous review, I am not sure if the authors actually mean ICC (such as ICC = 0.14) or Pearson correlation.

Author Response:

Thank you for catching this typo. We have adjusted the sentence appropriately.

“An intraclass correlation (ICC) of 0.14 for RT within participant was found, warranting the use of mixed linear models.” (p. 15)
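
As a brief illustration of how such an ICC can be estimated, the sketch below (simulated data, not the study's) computes the between-participant share of total RT variance from an intercept-only mixed model using lme4's VarCorr.

```r
# Illustrative sketch (simulated data): ICC for RT from an intercept-only mixed
# model, i.e., between-participant variance divided by total variance.
library(lme4)

set.seed(4)
trials <- data.frame(
  participant = factor(rep(1:30, each = 60)),
  rt          = 550 + rep(rnorm(30, 0, 35), each = 60) + rnorm(1800, 0, 80)
)

m0  <- lmer(rt ~ 1 + (1 | participant), data = trials)
vc  <- as.data.frame(VarCorr(m0))
icc <- vc$vcov[vc$grp == "participant"] / sum(vc$vcov)
icc  # a value near 0.14 would similarly warrant mixed-effects modeling
```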

2. The participant info (number, age, proportion female) is repeated twice, in the special Participant subsection and then again below.

Author Response:

We have removed the unnecessary participant info as it did not provide additional useful information.

“Four participants were excluded from behavioral data analyses due to technical issues or poor performance on the oddball task (i.e., errors or RTs exceeding +/- 2.5 SD). The final sample for behavioral analyses included 69 participants. ERP data for 19 participants were excluded because of technical issues, or for having uncorrectable artifacts greater than 25% of total trials [48]. The final sample for ERP analyses included 54 participants.” (p. 12)

3. Page 15: “This model was found to provide the best fit of the data.” Best compared to… what? Maybe other, simpler LMER models?

Author Response:

Thank you for pointing out this area of confusion. We have adjusted the analytical strategy section to improve the clarity on these points.

“For RTs, analyses were conducted using linear mixed effect regression (LMER) models with random slopes for the trial frequency condition (rare and frequent) and random intercepts for each participant to account for within-subject variance in RT across all trials (lmerTest v. 3.1.1) [57]. This model was found to provide the best fit of the data relative to simpler LMER models.” (p. 15)
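To illustrate the comparison the revised text refers to, a minimal R sketch of fitting the random-slope model alongside a simpler random-intercept-only model is given below. The variable names and the fixed-effects terms (frequency, sound) are placeholders, not taken from the authors' script:

library(lmerTest)  # v. 3.1.1, as cited in the manuscript

# Random slopes for trial frequency and random intercepts per participant
fit_full   <- lmer(rt ~ frequency * sound + (1 + frequency | subject), data = dat)

# Simpler comparison model: random intercepts only
fit_simple <- lmer(rt ~ frequency * sound + (1 | subject), data = dat)

# Likelihood-ratio comparison of the two random-effects structures
anova(fit_simple, fit_full)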

4. Page 16. After explaining the logistic regression model for accuracy, the authors note that “Error rates were assessed using the same predictor variables.” But… aren’t error rates just the complement of accuracies? And doesn’t the logistic model already provide a complete account?

Author Response:

Thank you for pointing out this redundant information. We have adjusted the paragraph to improve clarity.

“For the accuracy analyses, we conducted a generalized linear mixed effects regression model with error rate as the dependent variable, random slopes for trial frequency, and random intercepts for participant. We used bound optimization by quadratic approximation with a binomial family distribution, coding correct trials as 0 and incorrect trials as 1.” (p. 16)
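The model described in the revised paragraph corresponds to something like the following R sketch using lme4's glmer. Variable names and the fixed-effects terms are placeholders for illustration, not the authors' actual specification:

library(lme4)

fit_acc <- glmer(
  error ~ frequency * sound + (1 + frequency | subject),  # error coded 0 = correct, 1 = incorrect
  data    = dat,
  family  = binomial,                                      # logistic link for binary outcomes
  control = glmerControl(optimizer = "bobyqa")             # bound optimization by quadratic approximation
)
summary(fit_acc)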

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Francesco Di Russo

24 Oct 2022

The hidden cost of a smartphone: The effects of smartphone notifications on cognitive control from a behavioral and electrophysiological perspective.

PONE-D-22-03238R2

Dear Dr. Upshaw,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Francesco Di Russo, Ph.D.

Academic Editor

PLOS ONE


Acceptance letter

Francesco Di Russo

9 Nov 2022

PONE-D-22-03238R2

The hidden cost of a smartphone: The effects of smartphone notifications on cognitive control from a behavioral and electrophysiological perspective.

Dear Dr. Upshaw:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Prof. Francesco Di Russo

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    We have created a publicly available OSF repository that includes the R scripts and data files to improve transparency. This can be accessed here: https://osf.io/bj7zf/. In addition, we created a publicly available GitHub repository for the project, where the data and other coding script materials will be made available upon acceptance of this manuscript for publication: https://github.com/jupshaw/SMARTPHONES-AND-COGNITIVE-CONTROL.


    Articles from PLOS ONE are provided here courtesy of PLOS
