PLOS ONE. 2020 Jun 1;15(6):e0233544. doi: 10.1371/journal.pone.0233544

Proactively location-based suppression elicited by statistical learning

Siyang Kong 1, Xinyu Li 1, Benchi Wang 2,3,4,5,*, Jan Theeuwes 1,6,7
Editor: Evan James Livesey
PMCID: PMC7263585  PMID: 32479531

Abstract

Recently, Wang and Theeuwes used the additional singleton task and showed that attentional capture was reduced for the location that was likely to contain a distractor [1]. It is argued that, due to statistical learning, the location that was likely to contain a distractor was suppressed relative to all other locations. The current study replicated these findings, and by adding a search-probe condition we were able to determine the initial distribution of attentional resources across the visual field. Consistent with a space-based resource allocation (“biased competition”) model, it was shown that the representation of a probe presented at the location that was likely to contain a distractor was suppressed relative to other locations. Critically, the suppression of this location resulted in more attention being allocated to the target location relative to a condition in which the distractor was not suppressed. This suggests that less capture by the distractor results in more attention being allocated to the target. The results are consistent with the view that the location that is likely to contain a distractor is suppressed before display onset, modulating the first feed-forward sweep of information into the spatial priority map.

Introduction

It is important to be able to attend to events that are relevant to us and ignore information that may distract us. Typically, salient objects in the environment have the ability to automatically grab our attention and disrupt our ongoing tasks [2, 3]. The extent to which we are able to avoid such distraction from salient events has been a central question for decades. Traditionally, it was assumed that the competition between top-down, goal-directed signals and bottom-up, salience-based signals determined the selection priority in the visual field [for reviews see 2, 4]. Recently, it was pointed out that a third category labeled “selection history” plays a larger role than previously assumed [5]. It was argued that the repeated exposure to stimuli creates (often implicitly) learned selection biases, shaped by the repeated associations of value, emotional valence, or other statistical regularities [3, 6, 7]. These effects cannot be explained by either top-down or bottom-up factors. As such, these learning processes provide competitive advantages for certain spatial locations and/or visual features by means of altering their priority for attentional selection [1, 8–15].

Extracting regularities from the environment in service of automatic behavior is one of the most fundamental abilities of any living organism and is often referred to as statistical learning (SL). Statistical learning has been a subject of investigation in many domains, particularly in language acquisition, object recognition, attention, scene perception, visual search, conditioning and motor learning [16–24].

Previous research on statistical learning and attention has shown that observers can learn to prioritize locations that are likely to contain a target. For example, learning contextual regularities biases attentional selection such that searching for a target is facilitated when it appears in a visual layout that was previously searched relative to layouts that were never seen before, a line of research known as "contextual cueing" [25–28]. Typically, in these studies, participants searched for a 'T' target among 'L' distractors in sparsely scattered configurations. Half of the display configurations were repeated across blocks while others were only seen once. The classic result was that participants were faster in finding targets when they appeared in repeated configurations than in configurations that they had not seen before, suggesting that participants had learned the association between the spatial configuration and the target location. These studies show that observers can learn the association between display configurations and the target location, consistent with findings that observers are faster to detect targets appearing in probable locations than in improbable locations [29, 30]. Consistent with this notion are studies showing that observers are faster to respond to targets that appear at more probable locations than at all other locations [31–33].

Recently, however, it has been shown that lingering biases due to statistical learning history play an important role in avoiding and/or reducing distraction. In a series of experiments, Wang and Theeuwes employed a variant of the additional singleton task and showed that through statistical learning, attentional capture by salient distractors could be significantly reduced [1, 14, 15]. Specifically, in these experiments, participants searched for a salient shape singleton (i.e., a diamond between circles or a circle between diamonds) while they ignored a salient colored distractor singleton. Critically, and unknown to the participants, the presentation of the salient distractor was biased such that it was more likely to appear at one specific location (the high-probability location) than at any of the other locations (low-probability locations) in the visual field. The results indicated that there was less capture by the salient distractor when it appeared at this high-probability location than at the low-probability locations, suggesting that capture by the salient distractor was attenuated. Moreover, when the target happened to be presented at the high-probability location, its selection was less efficient (in terms of RT and accuracy). In all studies, there was also a spatial gradient from the high-probability location, as the attentional capture effect scaled with the distance from this location, and observers were basically unaware of the regularities.

These findings have led to the conclusion that exposure to regularities regarding a distractor induces spatially selective suppression [12, 34–36]. Critically, this suppression is not found when participants actively try to suppress such a location in a top-down fashion [14, 37]. Importantly, a recent EEG study employing the same paradigm as in Wang and Theeuwes [1] showed that ~1200 ms before display onset, there was increased alpha power over the hemisphere contralateral to the high-probability location relative to the ipsilateral hemisphere [38]. This type of alpha-band oscillation has been associated with neural inhibition serving as an attentional gating mechanism [39]. These neural signatures suggest that, well before the display is presented, the location that is likely to contain a distractor is suppressed. Because the location is suppressed before display onset, this type of suppression is referred to as “proactive suppression”, suggesting that on the spatial priority map this location competes less for spatial attention than all other locations in the visual field [3]. Proactive suppression can be contrasted with retroactive suppression, which occurs only after attention has been directed to a location and disengaged, with the location subsequently suppressed [13].

In the current study, we used a probe task to investigate the nature of this proactive suppression effect further. Participants performed a variant of the Wang and Theeuwes [1] task in which the distractor singleton was presented more often at one location (high-probability location) than at all other locations (low-probability locations). This task was performed on the majority of trials (66.7%). In the remaining trials, the search display was presented briefly (200 ms) and immediately followed by a probe display, in which six orientation bars were presented [for an illustration see Fig 1; see a similar probe task design in 40, 41]. Following the probe display, a bar appeared at one of the six locations, and participants had to adjust the bar such that its orientation matched the orientation of the element at the corresponding location in the probe display. The distribution of response errors (i.e., the response orientation value minus the correct orientation value) was characterized by fitting a standard mixture model [42], allowing us to quantify guess rate and standard deviation independently. This allowed us to examine whether the observed suppression effect was due to a lower chance of encoding the items (reflected in the guess rate), or whether inhibition occurred after processing the items (reflected in the standard deviation).

Fig 1. Experimental procedure.


In search-only trials, participants were required to search for a shape singleton and to indicate the position (i.e., left or right) of the white dot inside it. In search-probe trials, the search display was presented briefly (200 ms), after which participants were required to memorize six orientations and to recall one of them by rotating the response wheel as accurately as possible. This figure is for illustration purposes only; for details, see the text.

According to the biased competition theory of attention, objects in the visual field compete for cortical representation in a mutually inhibitory network [43]. Directing attention to one object comes at a cost of less attention for other objects. Bahcall and Kowler offered a similar space-based, resource allocation account, arguing that at the attended location, processing strength is increased by borrowing resources from other regions in the visual field [44]. Important for the current study, according to a biased competition resource allocation model of attention, proactive suppression of a particular location should result in more resources being available for target processing [38].

The probe task allows us to examine the distribution of attention across the visual field immediately following display onset, during the first 200 ms [41, 45]. If the location that is likely to contain a distractor is already suppressed before display onset (i.e., proactively), then one expects that a probe presented at that location is suppressed, resulting in a poor probe representation and consequently more probe response errors. Specifically, this poorer representation should result in a lower chance of encoding the items, and thus a higher guess rate; if the suppression is retroactive and occurs later in time, we expect effects on the standard deviation. At the same time, according to a biased competition (resource) account, this proactive suppression should leave more resources available for target processing, resulting in a better probe representation, fewer probe response errors, and a lower guess rate. The reverse conjecture should also hold: if a distractor is presented at a low-probability location, it is basically not inhibited, leading to stronger attentional capture than when a distractor is presented at a high-probability location.

Attentional capture implies that attention is directed to this location, resulting in a better representation of the probe presented at that location (i.e., fewer probe response errors), while at the same time fewer resources should be available for target processing, leading to more probe response errors [46]. Previous studies [1, 8, 14, 15] only showed that capture was reduced when the distractor was presented at the high-probability location relative to the low-probability location. The current study goes beyond these findings and provides insight into how statistical learning affects the distribution of attention across the display. Overall, we claim that the probe task adopted in the present study provides a window onto how, due to statistical learning, the weights within the spatial priority map are changed.

Method

Participants

Sixteen undergraduates (1 man and 15 women; mean age 18.9 ± 1.0 years) were recruited from Zhejiang Normal University in China. All participants provided written informed consent, and reported normal color vision and normal or corrected-to-normal visual acuity. Sample size was predetermined based on the significant difference between the high-probability location and low-probability location in Wang and Theeuwes [1], with an effect size of 1.83. With 16 subjects and alpha = .001, power for the critical effect should be > 0.99. The study was approved by both the Ethical Review Committee of the Vrije Universiteit Amsterdam and the Ethical Review Committee of Zhejiang Normal University.
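As a rough illustration only (the software used for this power calculation is not reported, so the tool below is an assumption), the reported figure can be checked with a standard paired-samples power function:

```python
# Sketch of the reported power check: a paired-samples (one-sample) t-test
# with effect size d = 1.83, n = 16, alpha = .001 (tooling assumed, not the authors').
from statsmodels.stats.power import TTestPower

power = TTestPower().power(effect_size=1.83, nobs=16, alpha=0.001,
                           alternative="two-sided")
print(f"achieved power = {power:.4f}")  # exceeds .99, as stated above
```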

Apparatus and stimuli

Stimulus presentation and response registration were controlled by custom scripts written in Python 2.7. In a dimly lit laboratory, participants rested their chins on a chin rest located 63 cm from a liquid crystal display (LCD) color monitor. The primary search display contained one circle (radius 0.7°) among five diamonds (1.6° × 1.6°), or vice versa, colored red or green (see Fig 1 for an example). Each display element was centered 2.0° from fixation (a white cross, 0.5° × 0.5°) and contained a 0.2° white dot located 0.2° from either the left or the right edge of the element.

In search-probe trials, six white lines (0.1° × 1°) with different orientations (randomly selected from seven orientations: 10°–160°, in 25° steps) were presented at the same locations as the display elements of the search array (see Fig 1, bottom panel). A continuous response wheel (0.5° wide, 4.5° radius) centered on the to-be-recalled item, together with a red pointer, was used to collect participants’ responses.

Procedure and design

On each trial, a fixation cross was presented for 500 ms, followed by a primary search display consisting of five items of the same shape and one shape singleton (i.e., a circle among five diamonds, or vice versa). Participants were asked to maintain fixation on the cross throughout the trial. In search-only trials (66.7% of the trials), the search array was presented for 3000 ms or until participants responded. Participants were required to search for the shape singleton and to indicate, as quickly as possible, whether the white dot was on the left or right side of the target by pressing the left or right key on the keyboard with their left hand. Responses were speeded; feedback (“You did not respond, please focus on the task” or “You responded incorrectly, please focus on the task”) was given when participants failed to respond or responded incorrectly, respectively.

In search-probe trials (33.3% of the trials), the search array appeared for 200 ms, followed by a 100-ms probe display containing six orientation bars. Participants were required not to respond to the search task, but to attend to and memorize these orientations. Then a horizontal line and a response wheel were presented at the center of the to-be-reported item and remained on screen until the response. Participants had to rotate the line and click the left mouse button when they judged its orientation to match the one presented in the probe display. Responses were unspeeded and only accuracy was emphasized. The inter-trial interval (ITI) varied randomly between 500 and 750 ms.

The search target was presented on each trial, and it was equally likely to be a circle or a diamond. A uniquely colored distractor singleton was randomly presented on 66% of the trials in each block; it had the same shape as the other distractors but a different color (red or green, counterbalanced between subjects). One of the distractor locations had a high probability of 62.5% (high-probability location), while the remaining locations shared the other 37.5%, each with a probability of 7.5% (low-probability locations). The high-probability location remained the same for each participant and was counterbalanced across participants. In the distractor-present condition, the target never appeared at the high-probability location but appeared equally often at all other locations. This design was the same as in Wang and Theeuwes [1]. One might question whether the effect reported in the present study was due to the target never being presented at the high-probability distractor location, or whether it has nothing to do with target probability and is instead entirely due to the distractor being presented at that location much more often. A recent study by Failing et al. answered exactly this question and showed that the suppression is driven solely by the probability of the distractor being presented at that location [9]. Participants were first trained for 360 trials to familiarize themselves with the search task. They then completed 40 practice trials and 7 blocks of 360 trials each over two successive days (2520 trials in total), in which search-only and search-probe trials were mixed within blocks.
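For illustration only (this is not the authors' experiment code; all names below are our own), the location contingencies described above can be expressed as a simple sampling routine:

```python
# Illustrative sketch of the location contingencies on distractor-present trials:
# the distractor appears at the high-probability location with p = .625 and at
# each of the five other locations with p = .075; the target then avoids both
# the high-probability location and the distractor location.
import random

LOCATIONS = list(range(6))   # six display positions
HIGH_PROB_LOC = 0            # fixed per participant, counterbalanced across participants

def sample_distractor_location():
    if random.random() < 0.625:
        return HIGH_PROB_LOC
    return random.choice([loc for loc in LOCATIONS if loc != HIGH_PROB_LOC])

def sample_target_location(distractor_loc):
    candidates = [loc for loc in LOCATIONS
                  if loc not in (HIGH_PROB_LOC, distractor_loc)]
    return random.choice(candidates)
```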

Additional analysis

For search-probe trials, a standard mixture model was fitted to characterize the distribution of response errors in terms of response precision and guess rate [42]. The response error was calculated by subtracting the correct value of the probed orientation from the response value. The distribution was assumed to consist of a uniform distribution of response errors for guessing trials and a von Mises (circular normal) distribution of response errors for non-guessing trials. Using maximum likelihood estimation, the response error data from each condition were fitted with the model

P(e) = (1 − g)·φσ(e) + g/2π,

where e (the response error) is the single input, and the two estimated parameters are g (the guess rate, i.e., the proportion of guess trials) and σ (the standard deviation [SD], i.e., the width of the von Mises distribution, reflecting the precision of the internal representation). The MemToolbox was used to fit the current dataset [47].
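The authors fitted the model with the MATLAB MemToolbox [47]; the snippet below is only a minimal Python sketch of the same maximum-likelihood idea (function names, starting values, and the conversion of orientation errors to radians are assumptions, not part of the original analysis):

```python
# Minimal sketch of fitting P(e) = (1 - g) * phi_sigma(e) + g/(2*pi) by maximum
# likelihood. Errors are assumed to be expressed in radians on (-pi, pi]; for a
# 180-degree orientation space, errors in degrees would first be doubled, e.g.
# errors = np.deg2rad(2 * (response_deg - probed_deg)).
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0, i1
from scipy.stats import vonmises

def neg_log_likelihood(params, errors):
    g, kappa = params
    # von Mises component for encoded items, uniform component for guesses
    p = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
    return -np.sum(np.log(p + 1e-12))

def fit_mixture_model(errors):
    """Return the guess rate g and the circular SD (radians) for one condition."""
    best = None
    for g0 in (0.1, 0.3, 0.5):            # several starting points to avoid local minima
        for k0 in (1.0, 5.0, 20.0):
            res = minimize(neg_log_likelihood, x0=[g0, k0], args=(errors,),
                           bounds=[(1e-4, 1 - 1e-4), (1e-2, 200.0)],
                           method="L-BFGS-B")
            if best is None or res.fun < best.fun:
                best = res
    g, kappa = best.x
    sd = np.sqrt(-2 * np.log(i1(kappa) / i0(kappa)))   # circular SD from kappa
    return g, sd
```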

Results

Search-only condition

Trials (2.1%) on which the response times (RTs) were slower than 1500 ms or faster than 200 ms were removed from analysis.

Attentional capture effect

Mean RTs and mean error rates are presented in Fig 2A. With distractor condition (high-probability location, low-probability location, and no-distractor) as a factor, a repeated measures ANOVA on mean RTs showed a main effect, F (2, 30) = 159.22, p < .001, ηp2 = .91. Subsequent planned comparisons showed that, against the no-distractor condition, there were significant attentional capture effects when the distractor singleton was presented at the high-probability location, t (15) = 11.25, p < .001, Cohen’s d = 0.43, and when it was presented at the low-probability location, t (15) = 14.01, p < .001, Cohen’s d = 1.14. Consistent with Wang and Theeuwes [1, 14, 15], the difference between high- and low-probability locations was also reliable, t (15) = 10.71, p < .001, Cohen’s d = 0.72, suggesting that the attentional capture effect was attenuated for trials in which the distractor singleton appeared at the high-probability location (see S1 Appendix for training results). Importantly, when analyzing different blocks separately, we found that the suppression effect already occurred in the first block, t = 8.49, p < .001, and remained present in the following blocks, all ps < .003, suggesting that learning to suppress the high-probability location is very efficient. No such effect was observed on error rates, F (2, 30) = 2.32, p = .116, ηp2 = .13, BF01 = 1.32.
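(An ANOVA of this form can be reproduced from the shared data; the file and column names in the sketch below are assumptions, not those of the released files.)

```python
# Sketch of the one-way repeated-measures ANOVA on mean RTs with distractor
# condition (high-probability, low-probability, no-distractor) as a factor,
# assuming a long-format table with one mean RT per subject and condition.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("mean_rt_per_condition.csv")   # columns: subject, condition, rt (assumed)
res = AnovaRM(data=df, depvar="rt", subject="subject", within=["condition"]).fit()
print(res)
```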

Fig 2. Results of search-only trials.


The mean response times (RTs) and mean error rates in different distractor conditions (A) and in the distractor singleton absent condition (B). The spatial distribution of the attentional capture effect, in terms of response times and error rates, in the distractor singleton present condition (C). Here, Dist-0 represents the high-probability distractor location, Dist-1 represents the low-probability distractor location located 60° of polar angle away from the high-probability distractor location (physical distance), and so on. Error bars denote 95% confidence intervals (CIs).

Target selection

Mean RTs and mean error rates are presented in Fig 2B. To further examine the efficiency of target selection, we calculated the mean RTs in the distractor absent condition. A paired t-test showed that selection was less efficient when the target was presented at the high-probability location compared to when it was presented at the low-probability location, t (15) = 2.54, p = .023, Cohen’s d = 0.26. There was no effect on error rates, t (15) = 1.18, p = .258, Cohen’s d = 0.3, BF01 = 2.17.

The spatial distribution of the suppression effect

To explore the spatial gradient of the suppression effect, we divided the distractor locations into four distances (dist-0, dist-1, dist-2, and dist-3). The mean RTs and mean error rates for these conditions are presented in Fig 2C. A one-way repeated measures ANOVA with distance as a factor showed a significant main effect for RTs, F (4, 60) = 54.62, p < .001, ηp2 = .79, but not for error rates, F (4, 60) = 1.42, p = .237, ηp2 = .09. We fitted the RT data with a linear function (as one way to capture the gradient of the suppression effect [1]) and used its slope to describe how the suppression effect decreased as the distance from the high-probability location increased. The slope (23.01 ms per display element) for mean RTs was significantly larger than zero, t (15) = 6.7, p < .001, Cohen’s d = 2.37, suggesting that the suppression effect was not limited to one location but had an extended spatial gradient.
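A sketch of this slope analysis (our reconstruction under stated assumptions, not the authors' script): fit a linear function to each participant's mean RTs as a function of distance from the high-probability location, then test the slopes against zero.

```python
# Per-subject linear fit of mean RT against distance (dist-0 ... dist-3),
# followed by a one-sample t-test of the resulting slopes against zero.
import numpy as np
from scipy.stats import linregress, ttest_1samp

def slope_test(rt_by_distance):
    """rt_by_distance: (n_subjects, n_distances) array of mean RTs in ms."""
    distances = np.arange(rt_by_distance.shape[1])
    slopes = np.array([linregress(distances, subj).slope for subj in rt_by_distance])
    res = ttest_1samp(slopes, 0.0)
    return slopes.mean(), res.statistic, res.pvalue
```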

Search-probe condition

Response error

The mean response errors are presented in Fig 3A. To examine the impact of statistical learning on the spatial distribution of attention, we sorted the distractor singleton present trials into two conditions: distractor present at the high-probability location and distractor present at the low-probability location. We also defined a factor, probe type (target, neutral-element, distractor-singleton), denoting whether the probed item had been presented at the target location, a neutral-element location, or the distractor-singleton location. A repeated-measures ANOVA on mean response errors with the factors distractor location (high-probability location vs. low-probability location) and probe type (target, neutral-element, distractor-singleton) showed a significant main effect of probe type, F (2, 30) = 26.92, p < .001, ηp2 = .64, but not of distractor location, F (1, 15) = 0.18, p = .674, ηp2 = .01, BF01 = 2.05×1010. Importantly, we observed a significant interaction between distractor location and probe type, F (2, 30) = 5.42, p = .01, ηp2 = .27. To unpack the main effect of probe type, we performed subsequent t-tests. When the probed item was presented at the target location, performance was superior to when the probed item was presented at the neutral-element location, t (15) = 6.94, p < .001, Cohen’s d = 1.31, and at the distractor-singleton location, t (15) = 5.12, p < .001, Cohen’s d = 1.1. When the probe was presented at the distractor-singleton location, performance was better than when the probe was presented at the neutral-element location, t (15) = 2.36, p = .032, Cohen’s d = 0.12.

Fig 3. Results of search-probe trials.


The mean response errors in different distractor conditions (A) and in the distractor singleton absent condition (B). The mean guess rates (C) and mean standard deviation (D) in different distractor conditions. Error bars denote 95% CIs.

Subsequent comparisons showed that when the probed item was presented at the distractor-singleton location, performance was worse for distractor singletons that appeared at the high-probability location than at the low-probability location, t (15) = 2.19, p = .045, Cohen’s d = 0.29, suggesting that the high-probability location was suppressed relative to the low-probability location. We also found that when the probed item was presented at the target location, the suppression pattern was reversed; i.e., performance was now better for distractors presented at the high-probability location than at the low-probability location, t (15) = 3.42, p = .004, Cohen’s d = 1.79, suggesting that processing of the target was facilitated due to stronger suppression when the distractor singleton was presented at the high-probability location relative to the low-probability location. When the probed item was presented at any of the neutral-element locations (i.e., neither the target nor the distractor singleton location), there was no difference in performance between the distractor presented at the high- vs. low-probability location, t (15) = 0.5, p = .626, Cohen’s d = 0.07, BF01 = 3.51. This suggests that the distractor suppression does not affect the processing of neutral elements.

In the no-distractor condition, a two-way ANOVA was conducted on mean response errors as well, with the factors recall location (high-probability location vs. low-probability location) and probe type (target vs. neutral-element). The results showed a significant main effect of probe type, F (1, 15) = 46.0, p < .001, ηp2 = .75, but not of recall location, F (1, 15) = 0.68, p = .422, ηp2 = .04, BF01 = 2.06×1010; and there was no interaction, F (1, 15) = 0.75, p = .401, ηp2 = .05, BF01 = 2.66. Performance was better when the probe was presented at the target location than at the neutral-element location (see Fig 3B).

We also examined the difference in mean response errors for probing the target location between the no-distractor condition and the conditions in which the distractor was presented at the high- or low-probability location. The results showed that, when the probe was presented at the target location, there was no difference between the no-distractor condition and the condition in which the distractor was presented at the high-probability location, t (15) = 0.51, p = .618, Cohen's d = 0.07, BF01 = 3.5. However, when the distractor was presented at the low-probability location, the mean response errors for probing the target location were significantly larger than in the no-distractor condition, t (15) = 3.57, p = .003, Cohen's d = 0.47.

Guess rate

The mean guess rates are presented in Fig 3C. A repeated-measures ANOVA on mean guess rates with the factors distractor location (high-probability location vs. low-probability location) and probe type (target, neutral-element, distractor-singleton) showed a significant main effect of probe type, F (2, 30) = 6.66, p = .004, ηp2 = .31, but not of distractor location, F (1, 15) = 0.17, p = .69, ηp2 = .01, BF01 = 35.8. When the probed item was presented at the target location, the guess rate was lower than when the probed item was presented at the neutral-element location, t (15) = 3.01, p = .009, Cohen’s d = 0.78, and at the distractor-singleton location, t (15) = 2.63, p = .019, Cohen’s d = 0.69. Importantly, we observed a significant interaction between distractor location and probe type, F (2, 30) = 4.6, p = .018, ηp2 = .24.

Subsequent comparisons showed that when the probed item was presented at the distractor-singleton location, the guess rate was higher for distractor singletons that appeared at the high-probability location than at the low-probability location, t (15) = 2.54, p = .023, Cohen’s d = 0.7. We also found that when the probed item was presented at the target location, the pattern was reversed; i.e., the guess rate for the target was lower when the distractor singleton was presented at the high- relative to the low-probability location, t (15) = 2.77, p = .014, Cohen’s d = 0.68. When the probed item was presented at any of the neutral-element locations, there was no difference between distractor singletons presented at the high-probability location and at the low-probability location, t (15) = 0.44, p = .669, Cohen’s d = 0.14, BF01 = 3.6. Clearly, the guess-rate results mirror what we found for response errors.

Standard deviation

The mean SDs are presented in Fig 3D. A repeated-measures ANOVA on mean SDs with the factors distractor location (high-probability location vs. low-probability location) and probe type (target, neutral-element, distractor-singleton) showed a significant main effect of probe type, F (2, 30) = 5.41, p = .01, ηp2 = .27, but not of distractor location, F (1, 15) < 0.01, p = .982, ηp2 < .01, BF01 = 12.15. When the probed item was presented at the target location, the SD was lower than when the probed item was presented at the neutral-element location, t (15) = 2.73, p = .015, Cohen’s d = 0.81, and at the distractor-singleton location, t (15) = 3.38, p = .004, Cohen’s d = 0.69. However, there was no interaction, F (2, 30) = 0.7, p = .507, ηp2 = .04, BF01 = 3.22.

Discussion

The current study precisely replicated all previous findings of Wang and Theeuwes [1]. We showed that for the high-probability location there was (1) less capture by the salient distractor and (2) less efficient selection of the target. There was also a spatial gradient from the high-probability location, as the attentional capture effect scaled with the distance from this location.

In addition to this replication, the search-probe condition elegantly demonstrates the initial distribution of attentional resources across the visual field. Consistent with a space-based resource allocation (“biased competition”) model, we showed that the high-probability location was suppressed relative to the low-probability location, as there were more response errors and a higher guess rate at the high- relative to the low-probability location. At the same time, this suppression of the high-probability location resulted in more attention being allocated to the target location relative to a condition in which the distractor was not proactively suppressed, i.e., when it was presented at a low-probability location.

The current findings are consistent with the notion that this type of learning results in proactive suppression. Indeed, because the high-probability location is proactively suppressed (i.e., before display onset), this location competes less for attention than the other locations, giving rise to more response errors for probes presented at the high-probability location than at the low-probability location. If suppression had been applied only after attention had initially shifted there (so-called retroactive suppression), we would have expected probes at the high-probability location to be picked up just as well as at any other location within this short time window. Some speculations might be derived from the results of the search-probe task. They suggest that the reduction in interference from a salient distractor presented at a high-probability location is the result of a combination of less capture by the distractor and, at the same time, more attentional allocation to the target. Similarly, when a distractor is presented at the low-probability location, the strong attentional capture observed summons so much attention that less is available for target processing.

It should be noted that our analysis can say little, if anything, about whether attention is allocated in parallel or serially. The search display was presented for 200 ms, and previous research has shown that within this time window, attention is first summoned by the salient distractor before it is allocated to the target. For example, Kim and Cave [48] used Theeuwes’ additional singleton paradigm with only 4 items in the display and combined it with a probe detection task. When the probe was shown 60 ms after display onset, observers responded 20 ms faster when a probe was presented at the distractor location relative to the target location. At a 150-ms interval, this pattern was reversed: mean RTs at the target location were about 15 ms faster than at the distractor location. Kim and Cave [48] argued that, at 60 ms after display onset, more attention was allocated to the distractor location than to the target location, signifying attentional capture. Soon thereafter (in the 150-ms condition) attention was disengaged from the distractor location and directed at the target location. Even though we cannot take the exact timings reported by Kim and Cave as absolute (there were many differences between our singleton task and theirs, e.g., the number of items in the display), it is likely that within a short time period of 200 ms, attention may have shifted between distractor and target locations.

Our finding that the mean response errors for probing the target location in the no-distractor condition did not differ from those when the distractor was presented at the high-probability location suggests that a distractor presented at the high-probability location hardly competes for attention. This analysis suggests that, due to proactive suppression, the processing resources available for the target are equivalent to those in a condition in which the distractor is not present (i.e., the no-distractor condition), a finding consistent with the notion that this type of suppression is proactive in nature [37].

Note that we employed here a version of the additional singleton task [49] in which the target and distractor features switched roles across trials. When using this version, participants likely employ the so-called singleton detection mode, which may in turn result in stronger capture effects than when observers consistently search for one specific feature [the so-called “feature search mode”, see 50; but see 51]. We took the former approach to examine the interplay between bottom-up capture and statistical learning while minimizing top-down effects on search. Note, however, that even when one uses displays that induce feature search, the same suppression effect is observed, indicating that this type of suppression does not depend on the search mode employed [15].

Supporting information

S1 Appendix

(DOCX)

Data Availability

Data and procedure can be accessed through https://github.com/wangbenchi/Search_probe.

Funding Statement

XL, No. LY18C090007, the Natural Science Foundation of Zhejiang Province, http://www.zjzwfw.gov.cn; BW, No. 2019A1515110581, the Natural Science Foundation of Guangdong Province. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Wang B, Theeuwes J. Statistical regularities modulate attentional capture. Journal of Experimental Psychology: Human Perception and Performance. 2018;44(1):13–17. 10.1037/xhp0000472 [DOI] [PubMed] [Google Scholar]
  • 2.Theeuwes J. Top–down and bottom–up control of visual selection. Acta Psychologica. 2010;135(2):77–99. 10.1016/j.actpsy.2010.02.006 [DOI] [PubMed] [Google Scholar]
  • 3.Theeuwes J. Visual selection: Usually fast and automatic; seldom slow and volitional. Journal of Cognition. 2018;1(1). 10.5334/joc.13 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Theeuwes J, Failing M. Attentional selection: Top-down, bottom-up, and history-based biases. Cambridge Elements in Attention. Forthcoming 2020. [Google Scholar]
  • 5.Awh E, Belopolsky A V, Theeuwes J. Top-down versus bottom-up attentional control: a failed theoretical dichotomy. Trends in cognitive sciences. 2012;16(8):437–43. 10.1016/j.tics.2012.06.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Theeuwes J. Goal-driven, stimulus-driven, and history-driven selection. Current Opinion in Psychology. 2019;29:97–101. 10.1016/j.copsyc.2018.12.024 [DOI] [PubMed] [Google Scholar]
  • 7.Failing M, Theeuwes J. Selection history: How reward modulates selectivity of visual attention. Psychonomic Bulletin & Review, 2018;25(2):514–538. 10.3758/s13423-017-1380-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Failing M, Feldmann-Wüstefeld T, Wang B, Olivers C, Theeuwes J. Statistical regularities induce spatial as well as feature-specific suppression. Journal of Experimental Psychology. Human Perception and Performance. 2019;45(10). 10.1037/xhp0000660 [DOI] [PubMed] [Google Scholar]
  • 9.Failing M, Wang B, Theeuwes J. Spatial suppression due to statistical regularities is driven by distractor suppression not by target activation. Attention, Perception, & Psychophysics. 2019;81(5):1405–1414. 10.3758/s13414-019-01704-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Ferrante O, Patacca A, Di Caro V, Della Libera C, Santandrea E, Chelazzi L. Altering spatial priority maps via statistical learning of target selection and distractor filtering. Cortex. 2018;102:67–95. 10.1016/j.cortex.2017.09.027 [DOI] [PubMed] [Google Scholar]
  • 11.Stilwell B T, Bahle B, Vecera S P. Feature-based statistical regularities of distractors modulate attentional capture. Journal of Experimental Psychology: Human Perception and Performance. 2019;45(3):419–433. 10.1037/xhp0000613 [DOI] [PubMed] [Google Scholar]
  • 12.Zhang B, Allenmark F, Liesefeld H R, Shi Z, Müller H. Probability cueing of singleton-distractor locations in visual search: Priority-map- or dimension-based inhibition? Journal of Experimental Psychology: Human Perception and Performance. 2019. 10.1101/454140 [DOI] [PubMed] [Google Scholar]
  • 13.Won B Y, Kosoyan M, Geng J J. Evidence for second-order singleton suppression based on probabilistic expectations. Journal of Experimental Psychology: Human Perception and Performance. 2019;45(1):125–138. 10.1037/xhp0000594 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Wang B, Theeuwes J. How to inhibit a distractor location? Statistical learning versus active, top-down suppression. Attention, Perception, & Psychophysics. 2018;80(4):860–870. 10.3758/s13414-018-1493-z [DOI] [PubMed] [Google Scholar]
  • 15.Wang B, Theeuwes J. Statistical regularities modulate attentional capture independent of search strategy. Attention, Perception, & Psychophysics. 2018;80(7):1763–1774. 10.3758/s13414-018-1562-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Frost R, Armstrong B C, Siegelman N, Christiansen M H. Domain generality versus modality specificity: The paradox of statistical learning. Trends in Cognitive Sciences. 2015;19(3):117–125. 10.1016/j.tics.2014.12.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Frost R, Armstrong B C, Christiansen M H. Statistical learning research: A critical review and possible new directions. Psychological Bulletin. 2019;145(12):1128–1153. 10.1037/bul0000210 [DOI] [PubMed] [Google Scholar]
  • 18.Gómez R L, Gerken L. Infant artificial language learning and language acquisition. Trends in Cognitive Sciences 2000;4(5):178–186. 10.1016/S1364-6613(00)01467-4 [DOI] [PubMed] [Google Scholar]
  • 19.Fiser J, Aslin R N. Statistical learning of higher-order temporal structure from visual shape sequences. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2002;28(3):458–467. 10.1037/0278-7393.28.3.458 [DOI] [PubMed] [Google Scholar]
  • 20.Turk-Browne N B, Jungé J A, Scholl B J. The automaticity of visual statistical learning. Journal of Experimental Psychology: General. 2005;134(4):552–564. 10.1037/0096-3445.134.4.552 [DOI] [PubMed] [Google Scholar]
  • 21.Brady T F, Oliva A. Statistical learning using real-world scenes: Extracting categorical regularities without conscious intent. Psychological Science. 2008;19(7):678–685. 10.1111/j.1467-9280.2008.02142.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Baker C I, Olson C R, Behrmann M. Role of attention and perceptual grouping in visual statistical learning. Psychological Science. 2004;15(7):460–466. 10.1111/j.0956-7976.2004.00702.x [DOI] [PubMed] [Google Scholar]
  • 23.Courville A C, Daw N D, Touretzky D S. Bayesian theories of conditioning in a changing world. Trends in Cognitive Sciences. 2006;10(7):294–300. 10.1016/j.tics.2006.05.004 [DOI] [PubMed] [Google Scholar]
  • 24.Hunt R H, Aslin R N. Statistical learning in a serial reaction time task: Access to separable statistical cues by individual learners. Journal of Experimental Psychology: General. 2001;130(4):658. 10.1037/0096-3445.130.4.658 [DOI] [PubMed] [Google Scholar]
  • 25.Chun M M, Jiang Y. Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology. 1998;36(1):28–71. 10.1006/cogp.1998.0681 [DOI] [PubMed] [Google Scholar]
  • 26.Chun M M, Jiang Y. Top-down attentional guidance based on implicit learning of visual covariation. Psychological Science. 1999;10(4):360–365. 10.1111/1467-9280.00168 [DOI] [Google Scholar]
  • 27.Jiang Y, Chun M M. Selective attention modulates implicit learning. The Quarterly Journal of Experimental Psychology Section A. 2001;54(4):1105–1124. 10.1080/713756001 [DOI] [PubMed] [Google Scholar]
  • 28.Goujon A, Didierjean A, Thorpe S. Investigating implicit statistical learning mechanisms through contextual cueing. Trends in Cognitive Sciences. 2015;19(9):524–533. 10.1016/j.tics.2015.07.009 [DOI] [PubMed] [Google Scholar]
  • 29.Posner M I. Orienting of attention. The Quarterly Journal of Experimental Psychology. 1980;32(1):3–25. 10.1080/00335558008248231 [DOI] [PubMed] [Google Scholar]
  • 30.Shaw M L, Shaw P. Optimal allocation of cognitive resources to spatial locations. Journal of Experimental Psychology: Human Perception and Performance. 1977;3(2):201–211. 10.1037/0096-1523.3.2.201 [DOI] [PubMed] [Google Scholar]
  • 31.Geng J J, Behrmann M. Probability cuing of target location facilitates visual search Implicitly in normal participants and patients with hemispatial neglect. Psychological Science. 2002;13(6):520–525. 10.1111/1467-9280.00491 [DOI] [PubMed] [Google Scholar]
  • 32.Geng J J, Behrmann M. Spatial probability as an attentional cue in visual search. Perception & Psychophysics. 2005;67(7):1252–1268. 10.3758/BF03193557 [DOI] [PubMed] [Google Scholar]
  • 33.Jiang Y V, Swallow K M, Rosenbaum G M, Herzig C. Rapid acquisition but slow extinction of an attentional bias in space. Journal of Experimental Psychology Human Perception & Performance. 2013;39(1):87–99. 10.1037/a0027611 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Feldmann-Wüstefeld T, Schubö A. Context homogeneity facilitates both distractor inhibition and target enhancement. Journal of Vision. 2013;13(3):11–11. 10.1167/13.3.11 [DOI] [PubMed] [Google Scholar]
  • 35.Goschy H, Bakos S, Müller H J, Zehetleitner M. Probability cueing of distractor locations: Both intertrial facilitation and statistical learning mediate interference reduction. Frontiers in Psychology. 2014;5 10.3389/fpsyg.2014.01195 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Wang B, Samara I, Theeuwes J. Statistical regularities bias overt attention. Attention, Perception, & Psychophysics. 2019;81(8):1–9. 10.3758/s13414-019-01708-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Heuer A, Schubö A. Cueing distraction: Electrophysiological evidence for anticipatory active suppression of distractor location. Psychological Research. 2019. 10.1007/s00426-019-01211-4 [DOI] [PubMed] [Google Scholar]
  • 38.Wang B, van Driel J, Ort E, Theeuwes J. Anticipatory Distractor Suppression Elicited by Statistical Regularities in Visual Search. Journal of Cognitive Neuroscience. 2019;31(10):1–14. 10.1162/jocn_a_01433 [DOI] [PubMed] [Google Scholar]
  • 39.Jensen O, Mazaheri A. Shaping functional architecture by oscillatory alpha activity: Gating by inhibition. Frontiers in Human Neuroscience. 2010;4 10.3389/fnhum.2010.00186 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Gaspelin N, Leonard C J, Luck S J. Direct evidence for active suppression of salient-but-irrelevant sensory inputs. Psychological Science. 2015;26(11):1740–1750. 10.1177/0956797615597913 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Kim M S, Cave K R. Spatial attention in visual search for features and feature conjunctions. Psychological Science. 1995;6(6):376–380. 10.1111/j.1467-9280.1995.tb00529.x [DOI] [Google Scholar]
  • 42.Zhang W, Luck S J. Discrete fixed-resolution representations in visual working memory. Nature. 2008;453(7192):233–235. 10.1038/nature06860 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Desimone R, Duncan J. Neural mechanisms of selective visual attention. Annual Review of Neuroscience. 1995;18(1):193–222. 10.1146/annurev.ne.18.030195.001205 [DOI] [PubMed] [Google Scholar]
  • 44.Bahcall D O, Kowler E. Attentional interference at small spatial separations. Vision Research. 1999;39(1):71–86. 10.1016/S0042-6989(98)00090-X [DOI] [PubMed] [Google Scholar]
  • 45.Theeuwes J, Kramer A F, Atchley P. On the time course of top-down and bottom-up control of visual attention. In: Monsell S, Driver J, editors. Attention and Performance (illustrated edition, Vol. 18). Cambridge, MA: MIT Press; 2000. pp. 105–125. [Google Scholar]
  • 46.Mounts J R W. Attentional capture by abrupt onsets and feature singletons produces inhibitory surrounds. Perception & Psychophysics. 2000;62(7):1485–1493. 10.3758/BF03212148 [DOI] [PubMed] [Google Scholar]
  • 47.Suchow J W, Brady T F, Fougnie D, Alvarez G A. Modeling visual working memory with the MemToolbox. Journal of Vision. 2013;13(9):1–8. 10.1167/13.10.9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Kim M S, Cave K R. Top-down and bottom-up attentional control: On the nature of interference from a salient distractor. Perception and Psychophysics. 1999;61(6):1009–1023. 10.3758/bf03207609 [DOI] [PubMed] [Google Scholar]
  • 49.Theeuwes J. Perceptual selectivity for color and form. Perception & Psychophysics. 1992;51(6):599–606. 10.3758/BF03211656 [DOI] [PubMed] [Google Scholar]
  • 50.Bacon W F, Egeth H E. Overriding stimulus-driven attentional capture. Perception & Psychophysics. 1994;55(5):485–496. 10.3758/BF03205306 [DOI] [PubMed] [Google Scholar]
  • 51.Theeuwes J. Top-down search strategies cannot override attentional capture. Psychonomic Bulletin & Review. 2004;11(1):65–70. 10.3758/BF03206462 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Evan James Livesey

18 Feb 2020

PONE-D-19-35559

Proactively location-based suppression elicited by statistical learning

PLOS ONE

Dear Mr. Wang,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewers all indicated that your work was technically sound and analyzed appropriately and, on the whole, their assessments are quite positive. However, all three raised important points which require further clarification in the manuscript, some of which may require changes to the data analyses (or at least a stronger defense of the analysis decisions that you have made). I note that some of the same concerns were raised by multiple reviewers, for instance the relevance of probability learning about the target location as a relevant factor that ought to be discussed, and the possibility of mapping out an acquisition function across the multiple blocks of testing. As the reviewers’ points are clear, I won’t reiterate all of them here, but I would expect to see each concern addressed either in revisions to the manuscript or in rebuttal before this work can be deemed acceptable for publication. 

I will note that, consistent with several of the reviewers’ concerns, I too struggled to find some of the relevant methodological details. For instance, it would be desirable to know how the distractor and no-distractor conditions were intermixed within blocks and what proportion of trials were no-distractor trials (perhaps this information is buried in there but it was not obvious to me). Perhaps Figure 1 could also be referred to earlier, for example page 7 when you introduce the present study. In addition, the Abstract currently provides very little context for readers who are not familiar with the additional singleton task. Given the wide readership of the journal, you might consider softening the blow, so to speak.

We would appreciate receiving your revised manuscript by Apr 03 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Evan James Livesey, Ph.D

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

1. We noticed you have some minor occurrence of overlapping text with the following previous publication(s), which needs to be addressed:

https://link.springer.com/article/10.3758%2Fs13414-018-1562-3

https://www.mitpressjournals.org/doi/full/10.1162/jocn_a_01433

https://link.springer.com/article/10.3758%2Fs13423-019-01679-6

https://www.sciencedirect.com/science/article/pii/S2352250X18301970?via%3Dihub

In your revision ensure you cite all your sources (including your own works), and quote or rephrase any duplicated text outside the methods section. Further consideration is dependent on these concerns being addressed.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: PONE-D-19-35559: Proactively location-based suppression elicited by statistical learning

This study replicated the previous findings that attentional capture is reduced by salient distractor when it appears in high probability locations of distractor compared to when it appears in low probability locations of distractor. Also, this study tested how attentional resources are distributed across the visual field by adding a search-probe condition. The authors showed that high probability location is suppressed before display onset and suggested that the suppression of high probability location resulted in more attention being allocated to the target location.

Review: This manuscript is very clearly written and had an excellent summary of statistical learning literature and the authors' previous findings in the introduction. Although I have some comments that might require additional data analysis and questions that need clarification, I believe this manuscript will be an excellent addition to the journal, PLOS ONE.

Comment #1. The authors reported that the target appeared equally often at each location in the no-distractor condition but did not report how the target was distributed in the distractor condition. It has been well known that target probability guides spatial attention - a highly probable target location quickly and implicitly attracts spatial attention (Geng & Behrmann 2002; 2005; Jiang et al., 2013). I wonder, in this study, if the high probability location of the color singleton contained the target less often than other locations, which might result in reducing attentional capture to that location. I hope the authors clarify this point in the revision.

Comment #2: I wonder if the search performance in the high probability location gradually increased across the seven testing blocks or had already reached ceiling when testing began. Considering that the effect came from statistical "learning," many readers will wonder about the learning curve.

Comment #3: In search-probe results, it is interesting that the response error of target location in high probability condition did not differ from that in the no-distractor condition because the search performance was much worse in the former condition than in the latter. I hope the authors provide some explanations about this discrepancy.

1. Typos: page 3, line 9: "categories" to "category"; page 5, line 15: "RT an accuracy" to "RT and accuracy"

2. Redundant sentence: page 15, lines 6-7: "Importantly, there was also a ..." is redundant (it has been very similarly mentioned on page 14, line 19)

3. Inconsistent BF report: It is a little odd that the authors provided only one BF value (page 16, the very bottom line). Please add all BFs if possible and interpret the meaning.

4. I believe the authors meant "former (singleton detection mode)" not "latter (feature search mode)" on page 19, line 14.

Reviewer #2: SUMMARY

The authors examined the influence of statistical learning on the capture of spatial attention by combining a search-probe task with a visual-search paradigm (Wang and Theeuwes (2018a)). For the majority of trials, participants only completed the visual-search: participants searched for a shape singleton, sometimes in the presence of a color singleton distractor. Critically, one of the locations had a higher probability of containing the distractor than the other locations. To engage implicit statistical learning processes, participants were not informed of this relation. On the rest of the trials, the search display was quickly interrupted by a memory probe display where the items were replaced by bars of different orientations. Participants were required to memorize the orientations of the bars before being probed to recall the orientation at one location.

The authors report significantly slower reaction times when the distractor color singleton appeared, with more capture occurring when the distractor was in the low-probability position than in the high-probability position. Error rates were also significantly higher when the distractor singleton was presented at the low-probability location but not when the distractor singleton was presented at the high-probability location. Reaction times were also slower when the target was in the high-probability location but not the low-probability location.

In the search-probe task, memory performance, measured by mean response error, was best when the probe was at the target location of the search display compared to the neutral or distractor locations. Recall when the distractor location was probed was worse at the high-probability location compared to the low-probability location, but when the probed location was the search target location, recall was more accurate when the distractor was at the high-probability location compared to the low-probability location. The authors interpret these results as suppression of the high-probability distractor location and also more attentional allocation to the target.

MAIN ISSUES

1. In the analysis of the search-probe condition, an improvement on the mean response error may be to use mixture modelling like that used in visual working memory recall tasks. Fitting a combination of a von Mises distribution and a uniform distribution to the response errors will give two parameters: the precision of the recall and the amount of guessing in the recall. These parameters might be more sensitive than the mean response error because it is possible that, while more guessing occurs when the distractor is at the low-probability location, the mean response error remains centered at the same value. Given the brief display time of the probe arrays, it is likely that the proportion of memory responses (those in the von Mises distribution) is larger when the distractor is in the high-probability location than in the low-probability location.

2. There are two aspects of statistical learning that appear to be relevant; it is automatic and implicit, such that it lends itself to bottom-up rather than top-down effects. Was awareness of the statistical regularities tested in this experiment? While the procedure may previously have been shown to exist without awareness of the regularities, it might be necessary to examine whether the effects of attentional capture vary between people who are explicitly aware of the regularities or not. Given the size of some of the interaction effects, it might also be necessary to check whether these effects stay the same excluding those with explicit awareness of the statistical regularities.

3. There are some instances where it isn’t clear in the procedure whether only one other location was a low-probability location, or all other locations were low-probability. Below is an example of where it is confusing whether one low-probability location or multiple low-probability locations were used:

P11. “One of these distractor locations had a high proportion of 62.5% (high-probability location), and other locations had a low proportion of 37.5% (low-probability location).”

P12. “When the distractor singleton was presented at the low-probability location, but not when it was presented at the high-probability location.”

4. In examining the spatial distribution of the suppression effect (pp. 13-14), it looks like a linear function doesn't best describe what is happening in Figure 2C. Perhaps a polynomial regression with linear and quadratic components might explain the trend better, suggesting instead that the suppression is at the high-probability location and the two adjacent locations, rather than spreading linearly along the gradient.

5. For the contrast and post-hoc comparisons throughout the results, was any error rate correction applied? If so, that should be clarified in the description of the results. For example, the follow-up t-tests (pp.14-15) may no longer be significant if a correction needs to be applied. If these contrasts were planned, it would need to be mentioned too.

6. Exploring the time-course of attentional suppression could be fruitful, achieved by delaying the onset of the probe array. You might expect that if attention is captured by the salient distractor, you could observe recovery from the attentional capture with longer delays before presenting the memory array. However, a sustained suppression account might suggest that the effect does not change with a delayed onset of the memory array.

MINOR ISSUES

1. It might be helpful to refer to the probe array as a memory array, to differentiate it from the visual search task.

2. I think the discussion of proactive suppression in the Discussion might benefit from a contrast with what is expected under retroactive suppression.

3. It might be helpful to add to the Procedure section of the manuscript a note on where the experimental code, data, and analysis code may be accessed.

TYPOGRAPHICAL ERRORS

pp. 3. “Recently, it was pointed out that a third categories”, should be “a third category”.

pp. 7. The reference is missing a comma, “Wang, van Driel, et al. 2019)”.

pp. 10. “did not respond or respond incorrectly” should be “or responded incorrectly”.

pp. 12. The F-value in the second paragraph has an extra period.

pp 12. “Again, there was significantly different” should be “there was a significant difference between…”

pp. 12. “Paired-wise t-test” should be “Pairwise t-tests”

William Xiang Quan Ngiam

I sign all my reviews, regardless of the recommendation to the editor. By signing this review, I affirm that I have made my best attempt to be polite and respectful while providing criticism and feedback that is hopefully helpful and reasonable.

Reviewer #3: The manuscript details a single experiment examining attentional capture and suppression of locations based on the statistical properties of the task environment. The authors use a classic additional singleton paradigm, in which search for a uniquely shaped object is impaired in the presence of a unique colour singleton distractor, showing the classic effect of slower RTs to targets on these trials compared to trials in which that distractor is absent. Here, though, the effect of the distractor is modulated by the probability with which it appears in certain locations, with the distractor being less effective when it appears in a high- compared to a low-probability location. This suggests that participants are learning to suppress these high-probability locations, and provides a replication of effects previously reported by Wang and Theeuwes (2018). The novel contribution here is the introduction of a new procedural element: on a third of trials, a probe test is given in which each location in the search array is quickly masked by slanted lines, and participants are then asked to report on one of these. The accuracy of this report reveals interesting things about the momentary suppression of the spatial location: when a singleton distractor appears at a high-probability location, participants are worse at reporting the probe at that location, and better at reporting the probe at the target location, relative to when the singleton distractor appears at a low-probability location.

In general I thought this was an interesting set of data that should make a nice contribution to the literature. I had a number of suggestions as to how I thought the authors could improve the manuscript. I see Points 1-3 as fairly critical; the remainder are more minor points.

1. The presentation of the rationale for the paper currently makes it seem like the contribution here is fairly modest. That is, similar effects have already been reported in the Wang and Theeuwes (2018) papers, and it wasn’t immediately clear what is left unanswered by those papers, and what the current paper addresses. I appreciate that the current paper provides a more direct test of suppression at various positions with the probe task, but I think that novel contribution, and its importance, can be made more clear in the introduction for those less familiar with the methods and results of previous papers.

2. It isn’t clear exactly what the competing hypothesis is for this experiment. On page 6 the authors state that “Proactive suppression can be contrasted with retroactive suppression, which is the type of suppression that occurs only after attention has been directed to a location, disengaged and subsequently suppressed.” Could the authors expand upon this sentence and flesh out the range of possible results they considered and the theories they would support? I feel that this very brief treatment of the theoretical possibilities weakened the manuscript and it wasn’t clear how the current data sit with such alternative accounts. For example, consider the data shown in Figure 3B. If I’ve interpreted this correctly, these data suggest that in the absence of a distractor singleton, there is no impairment in probe responding at the high probability location relative to the low probability location. Doesn’t that suggest that it is indeed the presence of the highly salient singleton that attracts attention, and it’s only when that occurs, that active suppression comes into play? That sounds similar to what you describe above for “retroactive suppression”, but I didn’t see that mentioned in the discussion.

3. Given that this is an effect of learning the statistical properties of the spatial locations, and given that each participant contributed 2520 trials, it seemed to me a missed opportunity not to analyse how these patterns develop over time. Could the authors break the data down into blocks of trials so we can see the development of the effect?

4. The description of the contextual cuing literature on page 4 doesn’t seem very accurate. Firstly, the statement “…participants were faster in finding targets when they appeared in repeated configurations than in novel locations” is ambiguous, in that “novel locations” could refer to the target location. Better would be “…participants were faster to find targets when they appeared in repeated configurations than when they appeared in novel configurations”. Secondly, I don’t agree that this is related to Posner cueing tasks, at least not in the way that is stated in the next sentence. Contextual cuing is generally not about learning the probability of where targets appear per se. Rather, it is about learning the association between configuration and target. Typically target location probability is controlled across repeated and random configurations.

5. Having said that, there is a small literature on probability cuing in contextual cuing of visual search, which starts with the paper by Jiang et al. (2013). This will be of interest to the authors and may be relevant for the discussion here.

6. pg. 9 – it wasn’t initially clear to me from Figure 1 how the participants knew which probe was being queried in the task. It is mentioned in the text briefly, but I found the figure a bit confusing, given the final panel presumably shows a magnified/schematic version of the screen. I think this Figure needs some work to clarify how the task operated, perhaps showing both the probe recall prompt in one of the actual positions, and then also magnified to show the detail.

7. I appreciate that only a small percentage of trials were removed as outliers, but I would prefer to see the upper limit on RTs set on a per-participant basis and related to the distribution of RTs for that participant (e.g., 2.5/3 SDs above the mean RT). Otherwise, this is set fairly arbitrarily and it raises the question of why this value was selected. Were the results in any way contingent upon this upper threshold value?

8. I applaud the authors for making their materials and data available. I downloaded these and I was able to read in the data files, but I couldn’t make much sense of many of the variables in the data files. I would suggest including a document that lays out clearly what the variables are, what the levels of those variables are coded as, etc. It would be great if the authors also included their analysis scripts.

Jiang, Y. V., Swallow, K. M., & Rosenbaum, G. M. (2013). Guidance of spatial attention by incidental learning and endogenous cuing. Journal of Experimental Psychology: Human Perception and Performance, 39, 285–297.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: William Xiang Quan Ngiam

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Jun 1;15(6):e0233544. doi: 10.1371/journal.pone.0233544.r002

Author response to Decision Letter 0


22 Mar 2020

Dear Dr. Evan James Livesey,

We would like to thank you and the three reviewers for the excellent comments and suggestions. We took the comments to heart and changed the manuscript accordingly. All changes are marked in red in the manuscript.

We hope that the current version is now acceptable. Below we outline how we addressed each of your and the reviewers' concerns.

Thank you very much!

Sincerely

Benchi Wang & Jan Theeuwes

Editor

I will note that, consistent with several of the reviewers’ concerns, I too struggled to find some of the relevant methodological details. For instance, it would be desirable to know how the distractor and no-distractor conditions were intermixed within blocks and what proportion of trials were no-distractor trials (perhaps this information is buried in there but it was not obvious to me).

Fixed. We have made it clear, see p. 11.

Perhaps Figure 1 could also be referred to earlier, for example page 7 when you introduce the present study. In addition, the Abstract currently provides very little context for readers who are not familiar with the additional singleton task. Given the wide readership of the journal, you might consider softening the blow, so to speak.

Fixed.

Reviewer #1

This manuscript is very clearly written and had an excellent summary of statistical learning literature and the authors' previous findings in the introduction. Although I have some comments that might require additional data analysis and questions that need clarification, I believe this manuscript will be an excellent addition to the journal, PLOS ONE.

Thanks for your comments.

Point 1. The authors stated that the target appeared equally often at each location in the no-distractor condition but did not report how the target was distributed in the distractor condition. It is well known that target probability guides spatial attention: a high-probability target location quickly and implicitly attracts spatial attention (Geng & Behrmann, 2002, 2005; Jiang et al., 2013). I wonder whether, in this study, the high-probability location of the color singleton contained the target less often than the other locations, which might have reduced attentional capture at that location. I hope the authors clarify this point in the revision.

Response: Thanks for pointing this out. Yes, the target never appeared at the high-probability location in the with-distractor condition. However, in a recent study, we have shown that target probability is not responsible for the effect (Failing, Wang, et al., 2019). That is, even when the target is equally likely to be presented at all locations, suppression is still seen for the high-probability distractor location. Thus, we adopted the same design as in Wang & Theeuwes (2018a). We have made this clear in the manuscript; see p. 11.

Point 2: I wonder whether the search performance at the high-probability location gradually increased across the seven testing blocks or had already reached ceiling when testing began. Considering that the effect comes from statistical "learning," many readers will wonder about the learning curve.

Response: Thanks. We now report this in the manuscript; see p. 13. Similar to what we have observed in previous studies, people learned to suppress the high-probability location very quickly. The results showed that the mean RT for the high-probability location was already smaller than that for the low-probability location in the first block, p < .001, and in the following blocks, all ps < .003.
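For illustration only, a minimal sketch of such a block-by-block comparison is given below (Python; the column names subject, block, condition, and rt are illustrative assumptions, not the variable names in the released data files).

import pandas as pd
from scipy.stats import ttest_rel

def blockwise_learning_curve(trials: pd.DataFrame) -> None:
    """Compare mean RTs for the high- vs. low-probability distractor
    location within each block, using paired t-tests across subjects."""
    # Mean RT per subject, block, and distractor-location condition.
    cell_means = (trials
                  .groupby(["block", "condition", "subject"])["rt"]
                  .mean()
                  .unstack("condition"))          # columns: 'high', 'low'
    for block, rts in cell_means.groupby(level="block"):
        t, p = ttest_rel(rts["high"], rts["low"])
        print(f"block {block}: t = {t:.2f}, p = {p:.4f}")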

Point 3: In the search-probe results, it is interesting that the response error for the target location in the high-probability condition did not differ from that in the no-distractor condition, because search performance was much worse in the former condition than in the latter. I hope the authors provide some explanation of this discrepancy.

Response: Thanks. We explained this in the General Discussion on p. 22; see below:

“Our finding that the mean response errors for probing the target location in the no-distractor condition did not differ from that when the distractor was presented at the high-probability location, suggests that the distractor presented at the high-probability location hardly competes for attention. This analysis suggests that due to proactive suppression, there are equivalent processing resources available for target processing as there are in a condition in which the distractor is not present (i.e., the no-distractor condition), a finding consistent with the notion that this type of suppression is proactive in nature (see also Wang, van Driel, et al., 2019).”

Minor

1. Typos: page 3, line 9: "categories" to "category"; page 5, line 15: "RT an accuracy" to "RT and accuracy"

2. Redundant sentence: page 15, lines 6-7: "Importantly, there was also a ..." is redundant (it has been very similarly mentioned on page 14, line 19)

3. Inconsistent BF report: It is a little odd that the authors provided only one BF value (page 16, the very bottom line). Please add all BFs if possible and interpret the meaning.

4. I believe the authors meant "former (singleton detection mode)" not "latter (feature search mode)" on page 19, line 14.

Response: All fixed. We have reported BF values for all null results.

Reviewer #2

Thanks for your comments.

Point 1. In the analysis of the search-probe condition, an improvement on the mean response error may be to use mixture modelling like that used in visual working memory recall tasks. Fitting a combination of a von Mises distribution and a uniform distribution to the response errors will give two parameters: the precision of the recall and the amount of guessing in the recall. These parameters might be more sensitive than the mean response error because it is possible that, while more guessing occurs when the distractor is at the low-probability location, the mean response error remains centered at the same value. Given the brief display time of the probe arrays, it is likely that the proportion of memory responses (those in the von Mises distribution) is larger when the distractor is in the high-probability location than in the low-probability location.

Response: Excellent suggestion. We have added the model analysis to the manuscript.
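For illustration only, a minimal sketch of this type of mixture-model fit is shown below (Python/SciPy, assuming response errors expressed in degrees over a 180-degree orientation space; this is not the authors' actual analysis code, and the function name is hypothetical).

import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def fit_vonmises_uniform(errors_deg):
    """Fit a von Mises + uniform mixture to orientation response errors.

    errors_deg : response errors in degrees, in [-90, 90) (180-degree space).
    Returns (guess_rate, kappa): the proportion of uniform guesses and the
    concentration (precision) of the von Mises memory component.
    """
    # Map the 180-degree orientation space onto the full circle (radians).
    x = np.deg2rad(np.asarray(errors_deg)) * 2.0

    def neg_log_lik(params):
        g, kappa = params
        lik = (1.0 - g) * vonmises.pdf(x, kappa) + g / (2.0 * np.pi)
        return -np.sum(np.log(lik + 1e-12))   # small constant avoids log(0)

    res = minimize(neg_log_lik, x0=[0.1, 5.0],
                   bounds=[(0.0, 1.0), (0.01, 200.0)])
    return tuple(res.x)

# Example: errors from a condition with little guessing (simulated data).
sim_errors = np.rad2deg(vonmises.rvs(8.0, size=300, random_state=0)) / 2.0
print(fit_vonmises_uniform(sim_errors))   # roughly (near 0, near 8)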

Point 2. There are two aspects of statistical learning that appear to be relevant; it is automatic and implicit, such that it lends itself to bottom-up rather than top-down effects. Was awareness of the statistical regularities tested in this experiment? While the procedure may previously have been shown to exist without awareness of the regularities, it might be necessary to examine whether the effects of attentional capture vary between people who are explicitly aware of the regularities or not. Given the size of some of the interaction effects, it might also be necessary to check whether these effects stay the same excluding those with explicit awareness of the statistical regularities.

Response: We did not test awareness in the current study. However, we did test it in several previous studies (e.g., Wang & Theeuwes, 2018a, 2018b) and found that most people were essentially unaware of the regularities, even though they learned to suppress the distractor. This suggests that the learning is largely implicit. We mention these previous results in the manuscript; see p. 5.

We agree that this is an interesting question; however, since we did not test it, we do not address it in the current manuscript.

Point 3. There are some instances where it isn’t clear in the procedure whether only one other location was a low-probability location, or all other locations were low-probability. Below is an example of where it is confusing whether one low-probability location or multiple low-probability locations were used:

P11. “One of these distractor locations had a high proportion of 62.5% (high-probability location), and other locations had a low proportion of 37.5% (low-probability location).”

P12. “When the distractor singleton was presented at the low-probability location, but not when it was presented at the high-probability location.”

Response: We have explained it, see p. 11.

Point 4. In examining the spatial distribution of the suppression effect (pp. 13-14), it looks like a linear function doesn't best describe what is happening in Figure 2C. Perhaps a polynomial regression with linear and quadratic components might explain the trend better, suggesting instead that the suppression is at the high-probability location and the two adjacent locations, rather than spreading linearly along the gradient.

Response: We agree that, as shown in Figure 2, it seems that only the high-probability location and the two adjacent locations were suppressed. However, even when we used other fitting functions, the results still suggest that the suppression effect was not limited to one location but had an extended spatial gradient, in line with our claim. To be consistent with previous studies, we kept using a linear function to describe the trend of the spatial gradient; we do not make strong claims about whether it is truly a linear function. See p. 15.
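For illustration, a sketch of how a purely linear fit could be compared with a linear-plus-quadratic fit is given below (Python/NumPy; the RT values are invented for demonstration and are not the study's data).

import numpy as np

# Distance 0 = high-probability location; 1-4 = increasingly distant locations.
distance = np.array([0, 1, 2, 3, 4])
mean_rt = np.array([880.0, 930.0, 955.0, 960.0, 962.0])  # illustrative only

lin_coef = np.polyfit(distance, mean_rt, deg=1)    # linear fit
quad_coef = np.polyfit(distance, mean_rt, deg=2)   # linear + quadratic fit

lin_rss = np.sum((np.polyval(lin_coef, distance) - mean_rt) ** 2)
quad_rss = np.sum((np.polyval(quad_coef, distance) - mean_rt) ** 2)
print(f"linear RSS: {lin_rss:.1f}, linear+quadratic RSS: {quad_rss:.1f}")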

Point 5. For the contrast and post-hoc comparisons throughout the results, was any error rate correction applied? If so, that should be clarified in the description of the results. For example, the follow-up t-tests (pp.14-15) may no longer be significant if a correction needs to be applied. If these contrasts were planned, it would need to be mentioned too.

Response: Thanks. These comparisons were all planned comparisons (based on the results of our previous studies).

Point 6. Exploring the time-course of attentional suppression could be fruitful, achieved by delaying the onset of the probe array. You might expect that if attention is captured by the salient distractor, you could observe recovery from the attentional capture with longer delays before presenting the memory array. However, a sustained suppression account might suggest that the effect does not change with a delayed onset of the memory array.

Response: We agree that a more fine-grained exploration of the time course would be very interesting, but this would require another study. For the present study, we argue that within a short time period attention shifts between the distractor and target locations.

Minor

1. It might be helpful to refer to the probe array as a memory array, to differentiate it from the visual search task.

We kept the term probe array even though we realize that it is also a memory array.

2. I think the discussion of proactive suppression in the Discussion might benefit from a contrast with what is expected under retroactive suppression.

We discuss the effects of proactive and retroactive suppression on p. 21.

3. It might be helpful to add to the Procedure section of the manuscript a note on where the experimental code, data, and analysis code may be accessed.

We added it in the author notes.

TYPOGRAPHICAL ERRORS

pp. 3. “Recently, it was pointed out that a third categories”, should be “a third category”.

pp. 7. The reference is missing a comma, “Wang, van Driel, et al. 2019)”.

pp. 10. “did not respond or respond incorrectly” should be “or responded incorrectly”.

pp. 12. The F-value in the second paragraph has an extra period.

pp 12. “Again, there was significantly different” should be “there was a significant difference between…”

pp. 12. “Paired-wise t-test” should be “Pairwise t-tests”

All fixed.

Reviewer #3

In general I thought this was an interesting set of data that should make a nice contribution to the literature. I had a number of suggestions as to how I thought the authors could improve the manuscript. I see Points 1-3 as fairly critical; the remainder are more minor points.

Thanks for your comments.

Point 1. The presentation of the rationale for the paper currently makes it seem like the contribution here is fairly modest. That is, similar effects have already been reported in the Wang and Theeuwes (2018) papers, and it wasn’t immediately clear what is left unanswered by those papers, and what the current paper addresses. I appreciate that the current paper provides a more direct test of suppression at various positions with the probe task, but I think that novel contribution, and its importance, can be made clearer in the introduction for those less familiar with the methods and results of previous papers.

Response: We have made this clearer in the introduction (p. 7). Because we now present the full analysis of the probe task using both guess rate and standard deviation, the contribution of this paper to the existing literature has also become clearer. Thanks!

Point 2. It isn’t clear exactly what the competing hypothesis is for this experiment. On page 6 the authors state that “Proactive suppression can be contrasted with retroactive suppression, which is the type of suppression that occurs only after attention has been directed to a location, disengaged and subsequently suppressed.” Could the authors expand upon this sentence and flesh out the range of possible results they considered and the theories they would support? I feel that this very brief treatment of the theoretical possibilities weakened the manuscript and it wasn’t clear how the current data sit with such alternative accounts. For example, consider the data shown in Figure 3B. If I’ve interpreted this correctly, these data suggest that in the absence of a distractor singleton, there is no impairment in probe responding at the high probability location relative to the low probability location. Doesn’t that suggest that it is indeed the presence of the highly salient singleton that attracts attention, and it’s only when that occurs, that active suppression comes into play? That sounds similar to what you describe above for “retroactive suppression”, but I didn’t see that mentioned in the discussion.

Response: This is an interesting and clever observation. Strictly speaking, when considering only the response error score for the distractor-absent condition, I think the reviewer's interpretation is correct. On the other hand, Figure 2b shows that selection of the target is hampered when it appears at the high-probability location, which suggests that this location is suppressed proactively. This result has already been replicated 10 times across the different experiments of these studies (Wang & Theeuwes, 2018a, b, c) and can only be interpreted as evidence for proactive suppression. Why this effect is not found in the response error score of the probe is hard to explain, but it is likely due to a floor effect. On the basis of this overall pattern of results, we stick to the original interpretation and argue for proactive suppression.

Point 3. Given that this is an effect of learning the statistical properties of the spatial locations, and given that each participant contributed 2520 trials, it seemed to me a missed opportunity not to analyse how these patterns develop over time. Could the authors break the data down into blocks of trials so we can see the development of the effect?

Response: Thanks. We have reported this in the manuscript; see p. 13. Similar to what we have observed in our previous studies, people learned to suppress the high-probability location very quickly. The results showed that the mean RT for the high-probability location was already smaller than that for the low-probability location in the first block, p < .001, and in the following blocks, all ps < .003.

Point 4. The description of the contextual cuing literature on page 4 doesn’t seem very accurate. Firstly, the statement “…participants were faster in finding targets when they appeared in repeated configurations than in novel locations” is ambiguous, in that “novel locations” could refer to the target location. Better would be “…participants were faster to find targets when they appeared in repeated configurations than when they appeared in novel configurations”. Secondly, I don’t agree that this is related to Posner cueing tasks, at least not in the way that is stated in the next sentence. Contextual cuing is generally not about learning the probability of where targets appear per se. Rather, it is about learning the association between configuration and target. Typically target location probability is controlled across repeated and random configurations.

Response: We have changed the wording.

Point 5. Having said that, there is a small literature on probability cuing in contextual cuing of visual search, which starts with the paper by Jiang et al. (2013). This will be of interest to the authors and may be relevant for the discussion here.

Jiang, Y. V., Swallow, K. M., & Rosenbaum, G. M. (2013). Guidance of spatial attention by incidental learning and endogenous cuing. Journal of Experimental Psychology: Human Perception and Performance, 39, 285–297.

Response: Added.

Point 6. pg. 9 – it wasn’t initially clear to me from Figure 1 how the participants knew which probe was being queried in the task. It is mentioned in the text briefly, but I found the figure a bit confusing, given the final panel presumably shows a magnified/schematic version of the screen. I think this Figure needs some work to clarify how the task operated, perhaps showing both the probe recall prompt in one of the actual positions, and then also magnified to show the detail.

Response: Thanks. We have changed the figure.

Point 7. I appreciate that only a small percentage of trials were removed as outliers, but I would prefer to see the upper limit on RTs set on a per-participant basis and related to the distribution of RTs for that participant (e.g., 2.5/3 SDs above the mean RT). Otherwise, this is set fairly arbitrarily and it raises the question of why this value was selected. Were the results in any way contingent upon this upper threshold value?

Response: We have redone all the analyses using 2.5 SDs as the criterion for removing outliers and found that the pattern of results did not change. Outliers were removed according to the distribution of the RTs; we removed the data belonging to the two tails of the overall distribution.
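For illustration, a minimal sketch of the per-participant trimming the reviewer describes is given below (Python/pandas; the column names subject and rt are illustrative assumptions, not the names in the released data files).

import pandas as pd

def trim_rts(trials: pd.DataFrame, n_sd: float = 2.5) -> pd.DataFrame:
    """Drop trials whose RT is more than n_sd standard deviations away
    from that participant's own mean RT (both tails of the distribution)."""
    def keep(group: pd.DataFrame) -> pd.DataFrame:
        m, s = group["rt"].mean(), group["rt"].std()
        return group[group["rt"].between(m - n_sd * s, m + n_sd * s)]
    return trials.groupby("subject", group_keys=False).apply(keep)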

Point 8. I applaud the authors for making their materials and data available. I downloaded these and I was able to read in the data files, but I couldn’t make much sense of many of the variables in the data files. I would suggest including a document that lays out clearly what the variables are, what the levels of those variables are coded as, etc. It would be great if the authors also included their analysis scripts.

Response: Thanks for pointing this out. We have uploaded it.

Attachment

Submitted filename: Resp_letter_PLOS.docx

Decision Letter 1

Evan James Livesey

6 May 2020

PONE-D-19-35559R1

Proactively location-based suppression elicited by statistical learning

PLOS ONE

Dear Mr. Wang,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we invite you to submit a revised version of the manuscript that addresses the last few minor points raised during the review process.

I have received additional assessments from two of the original reviewers. The third has indicated that they are happy for me to proceed but won't be able to produce a review in the near future for reasons that are understandable given the current difficult circumstances that we find ourselves in. The two reviewers are largely happy with your revisions; if you can address the few very minor points that Reviewer 2 has identified below, I'll endeavor to make a decision quickly without sending it out for further review.

We would appreciate receiving your revised manuscript by Jun 20 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Evan James Livesey, Ph.D

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors successfully addressed all my comments. Also, the authors included the new model fitting analysis and showed an interesting result. I think this manuscript will be a good addition to PLOS ONE. I have no further comments.

Reviewer #2: The authors have made revisions that have improved the manuscript and addressed the concerns in my initial review. A new mixture model analysis showed precision for probed recall was best at the target location compared to the other locations. More guesses occur when the probe is at the high-probability distractor singleton location than the low-probability distractor location indicating that the high-probability location is being proactively suppressed. I believe the manuscript can be accepted following these very minor revisions:

Typo on p. 17: The performance was worse for distractor singletons that appeared at the high-probability location than at the low-probability location..

Typo on p. 19: The guess rate was higher for distractor singletons that appeared at the high-probability location..

Figure 3 can be improved by jittering the bars horizontally so that the 95% CI are visible in each condition.

William Xiang Quan Ngiam

I sign all my reviews, regardless of the recommendation to the editor. By signing this review, I affirm that I have made my best attempt to be polite and respectful while providing criticism and feedback that is hopefully helpful and reasonable.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: William Xiang Quan Ngiam

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Jun 1;15(6):e0233544. doi: 10.1371/journal.pone.0233544.r004

Author response to Decision Letter 1


7 May 2020

Reviewer #2

The authors have made revisions that have improved the manuscript and addressed the concerns in my initial review. A new mixture model analysis showed precision for probed recall was best at the target location compared to the other locations. More guesses occur when the probe is at the high-probability distractor singleton location than the low-probability distractor location indicating that the high-probability location is being proactively suppressed. I believe the manuscript can be accepted following these very minor revisions:

Typo on p. 17: The performance was worse for distractor singletons that appeared at the high-probability location than at the low-probability location.

Fixed.

Typo on p. 19: The guess rate was higher for distractor singletons that appeared at the high-probability location.

Fixed.

Figure 3 can be improved by jittering the bars horizontally so that the 95% CI are visible in each condition.

It is a little odd to show a guess rate below 0, which is unusual in the memory literature. Moreover, when comparing two conditions, readers typically want to check the overlap between the error bars of the different conditions, which can be seen in the current version of the figure. So, we chose to keep the figure as it is.

Decision Letter 2

Evan James Livesey

8 May 2020

Proactively location-based suppression elicited by statistical learning

PONE-D-19-35559R2

Dear Dr. Wang,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Evan James Livesey, Ph.D

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Evan James Livesey

12 May 2020

PONE-D-19-35559R2

Proactively location-based suppression elicited by statistical learning

Dear Dr. Wang:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Evan James Livesey

Academic Editor

PLOS ONE

