PLOS ONE. 2023 Nov 10;18(11):e0293657. doi: 10.1371/journal.pone.0293657

Can cognitive training capitalise on near transfer effects? Limited evidence of transfer following online inhibition training in a randomised-controlled trial

David J Harris 1,*, Mark R Wilson 1, Kieran Chillingsworth 2, Gabriella Mitchell 2, Sarah Smith 2, Tom Arthur 1, Kirsty Brock 1, Samuel J Vine 1,*
Editor: Celia Andreu-Sánchez
PMCID: PMC10637678  PMID: 37948381

Abstract

Despite early promise, cognitive training research has failed to deliver consistent real-world benefits and questions have been raised about the experimental rigour of many studies. Several meta-analyses have suggested that there is little to no evidence for transfer of training from computerised tasks to real-world skills. More targeted training approaches that aim to optimise performance on specific tasks have, however, shown more promising effects. In particular, the use of inhibition training for improving shoot/don’t-shoot decision-making has returned positive far transfer effects. In the present work, we tested whether an online inhibition training task could generate near and mid-transfer effects in the context of response inhibition tasks. As there has been relatively little testing of retention effects in the literature to date, we also examined whether any benefits would persist over a 1-month interval. In a pre-registered, randomised-controlled trial, participants (n = 73) were allocated to either an inhibition training programme (six training sessions of a visual search task with singleton distractor) or a closely matched active control task (that omitted the distractor element). We assessed near transfer to a Flanker task, and mid-transfer to a computerised shoot/don’t-shoot task. There was evidence for a near transfer effect, but no evidence for mid-transfer. There was also no evidence that the magnitude of training improvement was related to transfer task performance. This finding adds to the growing body of literature questioning the effectiveness of cognitive training. Given previous positive findings, however, there may still be value in continuing to explore the extent to which cognitive training can capitalise on near or mid-transfer effects for performance optimisation.

Introduction

Cognitive training has promised much but delivered relatively little in the way of improving human performance. The core principle–that targeted training of domain-general mental abilities should have benefits for a range of tasks–is appealing for those aiming to optimise human performance [1, 2] or ameliorate deficits arising from clinical disorders, traumatic injury, work-induced fatigue, or age-related decline [3–5]. Yet, findings to date have been mixed, particularly for performance optimisation, which was the focus of the present work. Indeed, early promise (e.g., [6]) has given way to increasing questions about the breadth of real-world benefits and the experimental rigour of many studies [7–10]. A series of meta-analyses have suggested that generic cognitive training tasks have benefits for performance on other cognitive tests but return null effects for far transfer (i.e., to untrained tasks with demands that only partially overlap with training) [8, 11–13]. Given this weight of evidence against far transfer, the present work explores the idea that there may be value in re-focusing the field of cognitive training towards testing the effectiveness of more targeted and specialised interventions that capitalise on near or mid-transfer effects (i.e., to other cognitive tests that more closely resemble the training task).

Despite some unfavourable findings, there remains substantial interest and ongoing work in the field of cognitive training (to the exasperation of some researchers [14]). One reason for this persistence is that the potential for domain-general improvements in cognition remains so alluring. The rigour of much work to date has also been questionable [10, 15]. For example, adequate control groups, pre-registration of analyses, realistic far transfer tests, and assessment of long-term retention have been absent from many studies. This lack of rigour has left open the possibility that better quality work could yet demonstrate the benefits of cognitive training. In a review of methodological standards for cognitive training, Green and colleagues [10] note that the use of a common moniker–‘brain training’–for a range of interventions may also have concealed beneficial sub-types: null effects from meta-analyses could be due to a minority of effective interventions being obscured by an ineffective majority.

While the evidence against true far transfer effects is strong [8, 12, 13, 16, 17], the evidence demonstrating the presence of near transfer effects is also strong [13, 16]. Two key questions in the cognitive training field are, therefore, the degree to which these near transfer effects can be extended (i.e., ‘mid-transfer’) and whether these effects can provide any practical utility for optimising human performance. In essence, is there any value in designing cognitive training tasks that are closely matched to a target skill to generate very specific performance improvements? There is existing work that has tried to capitalise on what could be termed ‘mid-transfer’ effects. Rather than aiming to improve domain general intelligence or activities of daily living, this work has focused on training one specific aspect of cognitive function to improve a specific behavioural outcome. The clearest example of this approach is the use of inhibition training to improve the suppression of unsuitable responses during shoot/don’t-shoot decision making tasks [18, 19]. Inhibition is a sub-function of working memory which denotes the capacity to obstruct automatic or instinctive responses when they are not appropriate for the context at hand, such as ignoring a distracting noise or delaying a response to threat [20, 21]. Studies which have directly trained inhibition function have generated improved performance in both simulated [22] and live fire shooting tasks [23].

Alternatively, these positive training outcomes may be related to the relationship between inhibition and enhanced attentional capabilities. A recent review by Draheim et al. [24] has argued that attention control ability is more predictive of human performance in real-world tasks than working memory capacity. Draheim et al. describe working memory capacity as the number of units of information an individual can hold in primary memory at once while under cognitive load, while attentional control is the maintenance of goal-relevant behaviour or information and the filtering or blocking of irrelevant and inappropriate information or behaviour. Whilst these two concepts are interrelated, as working memory plays an important role in the control of attention [25], it is possible that researchers seeking real-world performance improvements should be focusing on attentional control abilities, rather than working memory capacity per se. Contrary to most existing cognitive training interventions which solely target working memory capacity, inhibition training may develop key components of attentional control. It is these potentially adaptive effects that may explain the greater reported success with far transfer tasks [22, 23, 26]. This explanation offers further theoretical support for inhibition training as a promising future route for cognitive training research and indicates that further examination of the underlying mechanisms of inhibition training is needed.

The purpose of this work was to better understand the potential of inhibition training for human performance optimisation. We built upon sports-related findings from Ducrocq et al. [26], where participants were trained on an inhibition task consisting of visual search for tennis ball stimuli accompanied by a singleton distractor that had to be ignored. Ducrocq et al. found inhibition training transferred to i) improved real-world tennis volleying under conditions of performance pressure and ii) better inhibition of visual fixations towards the target, in favour of watching the ball. We adapted this task to test whether computer-based inhibition training (delivered online) could also generate near and mid-transfer effects for shoot/don’t-shoot decision making. To date, there has been relatively little testing of retention effects in the literature, so we also sought to test whether any benefits would persist over a 1-month period. Given previous work in this area we hypothesised that individuals in the inhibition training group would outperform those in the active control group on both tests and that these benefits would still be present at a 1-month follow up test.

Methods

Pre-registration

The design, hypotheses, and planned analyses for this work were pre-registered prior to any data collection. The pre-registration document is available from the Open Science Framework (https://osf.io/7dv8h). Any analyses that deviate from, or were an addition to, the pre-registered analysis plan are identified as exploratory.

Design

The study adopted a mixed design, with two independent training groups (full training; active control) completing online cognitive performance assessments at three timepoints (baseline; post-test; retention). Participants were assigned to the following training groups (adapted from [26]):

  1. Full inhibition training–computerised visual search task with singleton distractor;

  2. Active control training–identical computerised visual search task but with the distracting element omitted.

Participants

Participants (n = 73, 41 female) were recruited from a student population using opportunity sampling. Favourable opinion was given by the Ministry of Defence Research Ethics Committee (ethical approval was also provided by a University Ethics Committee) before data collection and participants gave written informed consent prior to taking part. The only inclusion criterion was no previous participation in a cognitive training protocol. Based on an a priori statistical power calculation using G*Power [27], a target sample size of 35 participants per group (70 participants in total) was chosen. The study of Ducrocq et al. [26] reported a large improvement (equivalent to η2 = .32) in a mid-level transfer test (anti-saccade task), while other similar work [28] has reported smaller effects (η2 = .12) for mid-transfer. A mean of these two effects was used for the power analysis. As a result, to detect an interaction effect of η2 = .22 in the main training effect analysis (a 2 (group) x 2 (time) ANOVA), a sample size of 36 (i.e., 18 per group) would be required, given α = .05 and power (1-β) of .85. For the retention test it was estimated that any effects were likely to be smaller in magnitude. We therefore used the smaller effect from Harris et al. [28] as a conservative indicator of a likely effect size. For this part of the study, it was determined that 70 participants (35 per group) would be needed to detect an interaction effect of η2 = .12 in a 2 (group) x 2 (time: post v retention) ANOVA, given α = .05 and power (1-β) of .85.
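For illustration, the η²-to-Cohen's f conversion that underlies this kind of power calculation can be reproduced in Python. This is a minimal sketch rather than the authors' procedure: statsmodels' FTestAnovaPower assumes a between-subjects one-way ANOVA, so it only approximates G*Power's repeated-measures interaction test and will not reproduce the reported N exactly.

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

eta_sq = 0.22                          # target interaction effect (mean of the two prior studies)
f = np.sqrt(eta_sq / (1 - eta_sq))     # convert (partial) eta-squared to Cohen's f

# Rough total-N check for alpha = .05 and power = .85 with two groups;
# G*Power's within-between interaction formula differs somewhat.
n_total = FTestAnovaPower().solve_power(effect_size=f, alpha=0.05,
                                        power=0.85, k_groups=2)
print(f"f = {f:.2f}, approximate total N = {np.ceil(n_total):.0f}")
```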

We therefore aimed to recruit 70 participants, and a final sample of 73 participants was achieved (see Table 1). Due to participant attrition, 49 participants out of the initial 73 completed the first phase of the research, but this remained well in excess of our required sample for the main analyses. Only 43 participants completed the retention tests meaning we were only powered to observe larger differences at the retention time-point.

Table 1. Summary of group membership and training adherence.

                                         Active Control Group       Training Group
Participants                             36 (15 male / 21 female)   37 (17 male / 20 female)
Training adherence*
  Incomplete training                    n = 12                     n = 12
  Pre- and post-tests but no retention   n = 1                      n = 4
  Pre, post, and retention               n = 23                     n = 21

* Participants who completed at least five of the six training sessions were included in the final pre-to-post analysis of the near and mid transfer tests; those with fewer than five sessions were excluded.

Tasks and materials

The online cognitive tasks were programmed in PsychoPy [29], an open-source Python-based software platform for experimental psychology studies. The PsychoPy tasks were then uploaded to the online hosting site Pavlovia (https://pavlovia.org/). Participants were sent hyperlinks to the online tasks to access through their web browser. Python code for all the tasks is available from the following GitHub page: https://github.com/Harris-D/Shoot-dont-shoot.

Near transfer test

The flanker test is a widely adopted and well validated measure of inhibition ability [30] in which the participant must respond (with a key press) to the direction of a centrally presented arrow. The arrow is flanked by other arrows that are either pointing the same way (congruent) or the opposite way (incongruent) (see Fig 1A, an incongruent trial). Incongruent flanker items require more cognitive effort than congruent items, as they must be ignored by the participant, drawing on the inhibition function of working memory [20]. As this is a pure test of inhibition, it was used to test near transfer effects and whether there is a change in inhibition ability from pre to post inhibition training. The test consisted of 10 practice trials followed by 80 test trials, which were split equally across congruent and incongruent trials, and left and right facing arrows. The left and right arrow keys on the keyboard had to be pressed to indicate the direction of the central arrow. On each trial the arrow stimuli were presented for up to 1500 ms, or until one of the arrow keys was pressed, which would initiate the next trial. The 80-trial block lasted ~5 minutes. Test performance is measured through the size of the congruency effect, that is, the difference in reaction time between presentations with congruent and incongruent distractors (a smaller difference indicates distractions are being inhibited more effectively).
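To make the trial structure concrete, the following is a minimal PsychoPy-style sketch of a single flanker trial (1500 ms response window, left/right arrow keys). Variable names are illustrative and the sketch simplifies the published task; the actual task code is on the GitHub page linked above.

```python
from psychopy import visual, core, event
import random

win = visual.Window(color='grey', units='height')
clock = core.Clock()

congruent = random.random() < 0.5              # 50% congruent, 50% incongruent
direction = random.choice(['left', 'right'])   # direction of the central arrow
centre = '<' if direction == 'left' else '>'
flank = centre if congruent else ('>' if direction == 'left' else '<')

stim = visual.TextStim(win, text=flank * 2 + centre + flank * 2, height=0.1)
stim.draw()
win.flip()
clock.reset()

# Wait up to 1500 ms for a left/right key press, time-stamped against the clock
keys = event.waitKeys(maxWait=1.5, keyList=['left', 'right'], timeStamped=clock)
if keys:
    key, rt = keys[0]
    correct = (key == direction)   # congruency effect = RT(incongruent) - RT(congruent)

win.close()
```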

Fig 1. Experimental tasks.

Fig 1

Note: A: Flanker task—participant must respond with a key press to indicate the direction of the central arrow. B: Shoot/don’t-shoot task—participant must respond with a key press (left/right) to ’shoot’ if the person in the image is holding a weapon. The image on the right is just illustrative—real pictures were used for the experimental task. C: Training task—participant indicates whether there is a weapon present in any of the images using a key press. The images shown are illustrative, Sykes-McQueen threat assessment targets were used in the real task. The red distractor slows response times and has to be inhibited. D: Active control task—No distractor is present but the rest of the task is identical to the training task.

Mid transfer test

A computerised test to further assess inhibition ability was chosen to determine whether the training transferred to a slightly more realistic test of inhibition ability (i.e., mid-transfer). The test was based on a similar shoot/don’t-shoot test used in a study by Nieuwenhuys et al. [31], which examined whether police officers were more likely to shoot when anxious. In the study of Nieuwenhuys et al., participants had to shoot towards one of two locations on a screen if a person with a weapon appeared in that window. The test used here replicates the timings and design of the stimuli used in Nieuwenhuys et al., but instead of shooting a weapon in the direction of the left or right window, participants simply pressed a key to indicate the location of the person with the weapon (if present). Participants were asked to respond as quickly as possible to each image (person with gun present or absent) (see Fig 1B) but were instructed not to respond when no weapon was present. On each trial, two empty windows were presented for 1000 ms, then the picture appeared in the left or right window for 500 ms. If a weapon was present but the response was too slow (i.e., >500 ms), the participant received a ‘too slow’ message. The test consisted of 24 different picture stimuli, 10 of which included a gun and required a left/right keyboard response using the arrow keys (according to the location on the screen). All stimuli were presented twice each in a randomised order for a total of 48 trials.

In the shoot/don’t-shoot task, RTs on weapon-present trials and the number of failures to withhold a response when no weapon was present were recorded. To provide greater insight into participants’ decision-making, the ratio of correct and incorrect decisions was also used to calculate ‘d-prime’, a measure of detection sensitivity, and ‘beta’, a measure of response bias. These measures are derived from Signal Detection Theory [32], which outlines methods for describing a person’s ability to detect the presence of a signal (here, the presence or absence of the weapon) amidst the noise (all other aspects of the stimuli and environment that need to be processed). D-prime is calculated as the difference between the z-transformed proportions of hits (H) and false alarms (F): d′ = z(H) − z(F), where H = P("yes" | YES) and F = P("yes" | NO). It indexes how easily a signal is detected from all the surrounding distractions or complexity (the noise), with higher values indicating more sensitive responding (see Fig 2). Beta is a measure of response bias calculated as the ratio of the normal density function values at the criterion, using the z-scores from the d-prime computation. It indicates the relative preference for indicating whether a stimulus is, or is not, present. We employed a standardised response bias criterion, where negative values indicated liberal responding (high hit rates, high false alarms, and few misses) and positive values represented conservative responding (lower hit rates, lower false alarms, but more misses).

Fig 2. Illustration of signal detection metrics.

Fig 2

Note: A: The four possible combinations of signals (presence or absence of a threat) and responses (shoot or don’t shoot). B–D: D-prime and beta in relation to various distributions of signal (weapon present or absent) and noise. The greyed portion of the figure represents trials where a response was made. In panel B, the participant has a large d-prime value, indicating they were able to perceive a clear difference between the signal and the noise. The beta value is shifted right, indicating a conservative response strategy (more ‘misses’ but very few ‘false alarms’). In panel C, d-prime is smaller, showing that sensitivity is reduced, but the response strategy is still conservative, so the participant still has few ‘false alarms’ but many more ‘misses’. In panel D, d-prime is again large, but this time the response bias is more liberal, so there are many more ‘false alarms’ but few ‘misses’.
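The signal detection quantities above are straightforward to compute from trial counts. The sketch below is illustrative rather than the authors' analysis code; in particular, it assumes a standard log-linear correction for hit or false-alarm rates of 0 or 1, which the paper does not specify.

```python
import numpy as np
from scipy.stats import norm

def sdt_metrics(hits, misses, fas, crs):
    """d-prime, beta, and log(beta) from raw trial counts."""
    h = (hits + 0.5) / (hits + misses + 1)   # hit rate with log-linear correction
    f = (fas + 0.5) / (fas + crs + 1)        # false-alarm rate with correction
    zh, zf = norm.ppf(h), norm.ppf(f)
    d_prime = zh - zf                        # sensitivity: d' = z(H) - z(F)
    beta = np.exp((zf**2 - zh**2) / 2)       # ratio of normal densities at the criterion
    return d_prime, beta, np.log(beta)       # log(beta) < 0 => liberal responding

# Example: 20 weapon-present and 28 weapon-absent trials, as in the task above
print(sdt_metrics(hits=18, misses=2, fas=3, crs=25))
```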

Full training task and active control task

The training and active control tasks were based on inhibition training tasks designed by Ducrocq et al. [26], but adapted for the purposes of shoot/don’t-shoot training. In the full training task, the participant was required to indicate if a target image (a person holding a weapon) was present among an array of images as fast as possible, while ignoring a salient distractor item (see Fig 1C). On each trial, a central fixation cross was shown and the image array was presented for 5000 ms, or until a response was given. Each array consisted of 5 images of people holding various items. On 50% of trials one of the five images included a weapon. These images were sourced from the Sykes-McQueen threat assessment 800 series targets (with permission), which are commonly used as threat assessment stimuli in defence and security settings (https://www.mcqueentargets.com/products/#threat). The images in Fig 1 are not the real stimuli and are just illustrative (the real images can be seen on the GitHub page: https://github.com/Harris-D/Shoot-dont-shoot). The weapon present/absent trials also included a singleton distractor on 50% of occasions and were presented in a fully randomised order. The spatial location of the different stimuli, as well as the singleton present/absent and weapon present/absent trials were fully counterbalanced across each individual block. A preceding phase of pilot testing (n = 7) was conducted to validate the task by demonstrating that RTs were slowed on singleton present versus singleton absent trials, and that this RT difference reduced over training blocks.
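To make the counterbalancing concrete, the following sketch builds one possible 80-trial block with the weapon-present and singleton-present factors fully crossed. It is an assumption-laden illustration (for example, how singleton and weapon locations were assigned is not detailed in the text; the sketch only ensures they never share a slot), not the published task code.

```python
import random
from itertools import product

N_PER_CELL = 20                      # 2 x 2 conditions x 20 repeats = 80 trials
POSITIONS = list(range(5))           # five image slots in each search array

# Fully cross weapon present/absent with singleton present/absent,
# then randomise the order of trials within the block
conditions = list(product([True, False], repeat=2)) * N_PER_CELL
random.shuffle(conditions)

trials = []
for weapon, singleton in conditions:
    slot_a, slot_b = random.sample(POSITIONS, k=2)   # distinct random locations
    trials.append({
        'weapon_present': weapon,
        'weapon_pos': slot_a if weapon else None,
        'singleton_present': singleton,
        'singleton_pos': slot_b if singleton else None,
    })
```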

In the active control version of the task, the colour singleton distractor was omitted, providing a perfectly matched visual search task without the inhibition demands (see Fig 1D). Consequently, the active control version served to isolate the effect of inhibition practice and accounted for placebo effects arising from participants believing they were assigned to the training group. This made it a very stringent test of the training intervention and answered calls for better matched active control tasks in cognitive training research [10].

Each training session consisted of 80 trials on this task and lasted ~30 minutes. Participants completed six training sessions, on either the full training or active control task, within a 12-day period. The performance metrics for this task were percentage correct responses and RT. The Signal Detection decision making metrics d-prime and beta were calculated for this task in exactly the same way as for the shoot/don’t-shoot task.

Procedure

Potential participants were sent a recruitment email containing a summary of the requirements of the study, a full participant information sheet, and a consent form. Participants were asked to sign and return the consent form via email if they wished to take part in the research. Participants were then randomly assigned to one of the two training groups and sent further instructions on how and when to complete the cognitive tests. Participants were assigned a unique identification number and were instructed to complete the cognitive tasks at consistent times each day, where possible. Participants were asked to complete the six training sessions over a period of 6–12 days, and then repeat the baseline tests (see Fig 3). Next, participants were sent a reminder email to complete the retention test 4 weeks after completing their final training session. Participants were compensated £20.00 for their time.

Fig 3. Trial design.

Fig 3

The figure shows a schematic representation of the flow of participants through the trial.

Data analysis

Data from the cognitive training tasks were processed using bespoke analysis scripts in MATLAB (2019a; Mathworks, US), which can be found online (https://osf.io/mzxtn/). The derived performance variables were then analysed using JASP (v0.15). Data were screened for outliers and extreme deviations from normality. Outlying values were winsorised by replacing them with a value 1% larger (or smaller) than the next most extreme value. Some of the performance data were skewed, but as ANOVA is typically robust to such deviations [33], a parametric approach was still used as a method of comparing differences between groups. A series of 2 (time: pre/post) x 2 (group: training/active control) repeated measures ANOVAs were used to examine training effects, and then separate 2 (time: post/retention) x 2 (group: training/active control) repeated measures ANOVAs were used to test for retention. This analysis was performed separately because: 1) it was regarded as a distinct research question, and 2) it enabled participants who did not return for the retention tests to still be included in the main analysis. The alpha level for determining statistical significance was set at 0.05. The effect size partial eta squared (ηp2) was calculated for main effects and Cohen’s d for t-tests. To support better interpretation of any null effects and supplement the frequentist analysis, we also calculated Bayes Factors, which indicate the relative likelihood of the alternative model compared to the null. We interpret BF10 > 3 as moderate evidence for the alternative model and BF10 > 10 as strong evidence, while BF10 < 0.33 indicates moderate evidence for the null and BF10 < 0.1 strong evidence for the null [34].
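The winsorisation rule described above can be expressed compactly in Python. This is a minimal sketch under stated assumptions: the paper does not specify how outliers were flagged, so a z-score threshold is used here, and the 1% adjustment assumes positive-valued data such as RTs.

```python
import numpy as np

def winsorise(x, z_crit=3.0):
    """Replace flagged outliers with a value 1% beyond the most extreme
    remaining observation; the z-score flagging rule is an assumption."""
    x = np.asarray(x, dtype=float).copy()
    z = (x - x.mean()) / x.std(ddof=1)
    out = np.abs(z) > z_crit
    if out.any() and (~out).any():
        keep = x[~out]
        x[out & (x > keep.max())] = keep.max() * 1.01  # 1% above next-highest value
        x[out & (x < keep.min())] = keep.min() * 0.99  # 1% below next-lowest value
    return x

rng = np.random.default_rng(0)
rts = np.append(rng.normal(450, 30, 50), 1900.0)   # one implausibly slow RT (ms)
print(winsorise(rts).max())                        # capped near the next-highest RT
```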

Results

Pre to post changes

Training task

As a manipulation check, to ensure all participants improved on the training task, regardless of training group, a series of 2 (time: pre/post) x 2 (group: training/active control) ANOVAs were performed to compare performance across the first and last training blocks. Participants significantly improved from pre- to post-training for percentage correct responses, RT, and d-prime. Beta did not significantly change, although it was close to the significance threshold (p = .07). There were no group or interaction effects, confirming that there was a parallel training effect for the full inhibition training and active control training tasks (see summary in Table 2 and Fig 4).

Table 2. ANOVA results for manipulation check.

                     F         p        ηp2    BF10
Percentage correct
  Time               33.10     < .001   .41    7.05 × 10^4
  Group              2.50      .12      .05    0.61
  Interaction        1.15      .23      .02    0.45
Reaction time
  Time               120.49    < .001   .72    1.54 × 10^12
  Group              3.98      .052     .08    1.03
  Interaction        0.48      .49      .01    0.53
D-prime
  Time               32.47     < .001   .40    5.30 × 10^4
  Group              0.02      .90      .00    0.71
  Interaction        2.79      .10      .06    0.27
Beta
  Time               3.71      .07      .07    1.40
  Group              0.04      .84      .00    0.26
  Interaction        0.00      .95      .00    0.28
Fig 4. Training performance.

Fig 4

Plot showing improvement of training performance over time (means and SDs) with corresponding reduction in reaction times (inset).

As an alternative analysis approach, we re-ran the main analyses using ANCOVAs, testing for group differences at post-training using the baseline scores as a covariate. As some tests showed baseline differences it was decided that this alternative approach could help to determine where reliable training effects were present. These are reported in the supplementary files (https://osf.io/mzxtn/) but largely supported the pre-registered analyses in indicating no benefit of the training.
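For readers unfamiliar with this approach, an ANCOVA of this form can be sketched in Python with statsmodels. The data frame and column names below are hypothetical placeholders, not the authors' data or scripts (the reported analyses were run in JASP).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    'group': np.repeat(['training', 'control'], 25),   # placeholder group labels
    'pre': rng.normal(100, 15, 50),                    # placeholder baseline scores
})
df['post'] = 0.8 * df['pre'] + rng.normal(0, 10, 50)   # placeholder post-test scores

# ANCOVA: group difference at post-test, adjusting for the baseline covariate
model = smf.ols('post ~ C(group) + pre', data=df).fit()
print(model.summary())
```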

Flanker (near transfer)

To assess whether there was a near transfer effect, a 2 (time: pre/post) x 2 (group: training/active control) ANOVA was performed on the RT difference score from the flanker test (see Fig 5). There was a large effect of training [F(1,45) = 23.37, p < .001, ηp2 = .34, BF10 = 1388.82], indicating a reduction in the RT difference (i.e., better inhibition performance) over time. There was no overall group effect [F(1,45) = 1.57, p = .22, ηp2 = .03, BF10 = 0.57], but there was a significant group*time interaction [F(1,45) = 5.10, p = .029, ηp2 = .10, BF10 = 2.32]. Follow-up t-tests with the Bonferroni-Holm correction showed that the interaction effect was driven by a significant reduction in the RT difference for the training group [t(24) = 5.76, p = .002, d = 1.16] but not the control group [t(21) = 1.58, p = .13, d = 0.34]. Further Bonferroni-Holm corrected t-tests at each timepoint showed that there was, however, no difference between the two groups at post-training [t(45) = 0.17, p = .13, d = 0.05, BF10 = 0.31], or at baseline [t(45) = 1.89, p = .13, d = 0.49, BF10 = 1.15]. The medium effect for the baseline difference does, however, indicate that the training group started with poorer performance levels (see also Training Gains Analysis below).

Fig 5. Box and whisker plots with overlaid data points for performance on the flanker test.

Fig 5

Note: A smaller RT difference shows that the distractors were having a reduced effect, i.e., better inhibition. **p < .01.

Shoot/don’t-shoot (mid-transfer)

A series of 2 x 2 ANOVAs from the shoot-don’t-shoot task did not suggest any benefit to the training group over the active control group for any of the performance variables (see Fig 6).

Fig 6. Box and whisker plots with overlaid data points for performance on the shoot/don’t-shoot (SDS) test.

Fig 6

Correct hits

A 2 x 2 ANOVA on the percentage of targets correctly hit did not indicate any effect of the training, as there was no main effect of time [F(1,47) = 2.34, p = .13, ηp2 = .05, BF10 = 0.59] or group [F(1,47) = 0.10, p = .76, ηp2 = .002, BF10 = 0.37], and no interaction [F(1,47) = 1.40, p = .24, ηp2 = .03, BF10 = 0.52].

False alarms

A 2 x 2 ANOVA on the number of non-threat targets hit indicated a small yet significant main effect of time [F(1,47) = 4.51, p = .039, ηp2 = .09, BF10 = 1.22], but no effect of group [F(1,47) = 1.26, p = .27, ηp2 = .03, BF10 = 0.56], and no interaction [F(1,47) = 3.28, p = .08, ηp2 = .07, BF10 = 1.04].

Reaction times

A 2 x 2 ANOVA on RTs showed no main effect of time [F(1,46) = 1.74, p = .19, ηp2 = .04, BF10 = 0.45], no effect of group [F(1,46) = 2.97, p = .09, ηp2 = .06, BF10 = 1.05], and no interaction [F(1,46) = 0.05, p = .82, ηp2 = .001, BF10 = 0.29].

D-prime

A 2 x 2 ANOVA on d-prime coefficients showed a significant main effect of time [F(1,47) = 7.30, p = .01, ηp2 = .13, BF10 = 4.64], but no effect of group [F(1,47) = 0.31, p = .58, ηp2 = .007, BF10 = 0.38], and no interaction [F(1,47) = 0.16, p = .69, ηp2 = .003, BF10 = 0.29].

Beta

A 2 x 2 ANOVA on beta coefficients showed no significant main effect of time [F(1,47) = 0.42, p = .52, ηp2 = .009, BF10 = 0.23], no effect of group [F(1,47) = 0.69, p = .41, ηp2 = .01, BF10 = 0.46], and no interaction [F(1,47) = 1.84, p = .18, ηp2 = .04, BF10 = 1.14].

Retention

Retention of training effects was examined using separate 2 (time: pre/retention) x 2 (group: training/active control) ANOVAs. These analyses are a deviation from the pre-registration, which incorrectly outlined comparisons between retention and post-test instead of retention to baseline. The comparisons of retention and post-test are presented in the supplementary files for completeness (https://osf.io/mzxtn/). We also ran analyses using an ANCOVA-based approach to test for group differences at retention, using the baseline scores as a covariate. These are reported in the supplementary files (https://osf.io/mzxtn/) but showed no effects of group.

Flanker task (near transfer)

The 2 x 2 ANOVA showed a main effect of time [F(1,42) = 17.46, p < .001, ηp2 = .29, BF10 = 1700.25] indicating a large improvement in flanker performance from baseline to retention. There was an effect of group [F(1,42) = 5.62, p = .02, ηp2 = .12, BF10 = 1.09] reflecting slightly better performance in the control group, but no interaction effect [F(1,42) = 0.32, p = .57, ηp2 = .01, BF10 = 0.34]. This suggests that large improvements from baseline in both groups were retained over time (see Fig 5), but that there was no benefit of being in the training group.

Shoot/don’t-shoot (mid-transfer)

Analyses on the shoot/don’t-shoot performance variables also indicated general improvements from baseline to retention, but no between-group differences or interactions. This further confirms that there was no benefit of the inhibition training for performance on this task, even after a 1-month interval.

Correctly hit

A 2 x 2 ANOVA indicated an effect of time [F(1,42) = 8.09, p = .007, ηp2 = .16, BF10 = 5.05], but no effect of group [F(1,42) = 0.12, p = .74, ηp2 = .003, BF10 = 0.40], and no interaction [F(1,42) = 1.30, p = .26, ηp2 = .03, BF10 = 0.49].

False alarms

A 2 x 2 ANOVA indicated an effect of time [F(1,42) = 9.45, p = .004, ηp2 = .18, BF10 = 11.21], but no effect of group [F(1,42) = 0.28, p = .60, ηp2 = .007, BF10 = 0.42], and no interaction [F(1,42) = 0.13, p = .73, ηp2 = .003, BF10 = 0.31].

Reaction times

A 2 x 2 ANOVA indicated an effect of time [F(1,41) = 5.26, p = .03, ηp2 = .11, BF10 = 2.19], but no effect of group [F(1,41) = 0.61, p = .44, ηp2 = .02, BF10 = 0.52], and no interaction [F(1,41) = 3.31, p = .08, ηp2 = .08, BF10 = 0.90].

D-prime

A 2 x 2 ANOVA indicated an effect of time [F(1,42) = 17.14, p < .001, ηp2 = .29, BF10 = 147.35], but no effect of group [F(1,42) = 1.09, p = .30, ηp2 = .03, BF10 = 0.48], and no interaction [F(1,42) = 0.51, p = .48, ηp2 = .01, BF10 = 0.35].

Beta

A 2 x 2 ANOVA indicated no effect of time [F(1,42) = 0.05, p = .82, ηp2 = .001, BF10 = 0.23], no effect of group [F(1,42) = 0.00, p = .98, ηp2 = .000, BF10 = 0.44], and no interaction [F(1,42) = 1.16, p = .29, ηp2 = .03, BF10 = 0.49].

Training gains analysis

Finally, we conducted an exploratory analysis to examine the relationship between ‘training gain’ and transfer, as it has been suggested that transfer effects might be related to the size of improvement on the training task [35]. We calculated an improvement score for the performance variables from the training task and the near and mid-transfer tasks, based on changes from block 1 to 6 or pre to post (where a positive score corresponded to a relative improvement). This was not done for beta, which reflects a response tendency that is not easily characterised as ‘better’ or ‘worse’. The correlation coefficients are summarised in Table 3, but there was little evidence that the size of training gain was related to the size of improvement on the flanker or shoot/don’t-shoot tasks. After a Holm-Bonferroni correction for multiple tests, none of the correlations were significant.

Table 3. Correlation coefficients (Spearman’s rho) for the relationships between ‘training gain’ and improvement on the transfer tests. Columns give the training improvement measure; rows give the transfer test outcome (SDS = shoot/don’t-shoot).

Training group
                          % Correct            RT                   d-prime
Flanker: RT difference    r = .05, p = 1.00    r = .00, p = 1.00    r = -.05, p = 1.00
SDS: % correct            r = .25, p = 1.00    r = .15, p = 1.00    r = .43, p = .56
SDS: false alarms         r = .01, p = 1.00    r = .01, p = 1.00    r = .06, p = 1.00
SDS: response time        r = -.02, p = 1.00   r = .17, p = 1.00    r = -.08, p = 1.00
SDS: d-prime              r = -.09, p = 1.00   r = .28, p = 1.00    r = .13, p = 1.00

Control group
                          % Correct            RT                   d-prime
Flanker: RT difference    r = .30, p = 1.00    r = .44, p = .60     r = .31, p = 1.00
SDS: % correct            r = -.45, p = .56    r = -.22, p = 1.00   r = .01, p = 1.00
SDS: false alarms         r = -.10, p = 1.00   r = -.01, p = 1.00   r = -.06, p = 1.00
SDS: response time        r = -.39, p = .88    r = .01, p = 1.00    r = -.18, p = 1.00
SDS: d-prime              r = -.36, p = 1.00   r = -.23, p = 1.00   r = .08, p = 1.00
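As a pointer for replication, this correlation-plus-correction procedure can be sketched as follows. The gain scores here are random placeholders and the variable names are illustrative; the actual analysis was run via the authors' MATLAB/JASP pipeline.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
train_gain = rng.normal(size=25)                 # placeholder training-gain scores
transfer_gains = {
    'flanker_rt_diff': rng.normal(size=25),      # placeholder transfer-gain scores
    'sds_pct_correct': rng.normal(size=25),
}

p_values = []
for name, gain in transfer_gains.items():
    rho, p = spearmanr(train_gain, gain)
    p_values.append(p)
    print(f"{name}: r = {rho:.2f}, raw p = {p:.2f}")

# Holm-Bonferroni correction across the family of correlations
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method='holm')
print(p_adj)
```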

Discussion

In this pre-registered randomised-controlled trial, we examined the effectiveness of an online inhibition training task for improving performance on two tests of inhibition–a flanker task and a shoot/don’t-shoot task. We adapted an inhibition training task previously reported in Ducrocq et al. [26] to a military judgemental training context, given the previous success of studies focusing on this skill [22, 23]. Considering the limited assessment of retention of cognitive training effects in existing literature, we sought to examine whether any improvements to inhibition performance persisted over a 1-month period. In short, our results provided little support for the effectiveness of this cognitive training intervention; while there was some evidence for a training benefit on the flanker test (near transfer), there was no evidence of transfer to the shoot/don’t-shoot task. This finding adds to the growing body of literature suggesting that cognitive training may support near transfer effects, but that far transfer is unlikely, even when the transfer tests are closely aligned to the training task.

In line with our hypothesis, we found evidence for a near transfer effect. A group by time interaction effect was observed for the flanker task, which was driven by a significant improvement in the training group but not the control group. Inspection of the plots (see Fig 5), as well as supplementary ANCOVA analyses, indicated that this effect may have been partly due to the training group starting from a poorer baseline. One explanation for the interaction effect is therefore that the cognitive training may have had benefits for those with poorer initial performance and enabled them to ‘catch up’. The training gains analysis did not, however, provide support for this explanation. Another possibility is that the interaction was due to small baseline differences and a regression to the mean effect. Consequently, while there was evidence for near transfer, it should not be treated as conclusive. For mid-transfer, there was no evidence that participants assigned to the inhibition training group outperformed the control group on any of the shoot/don’t-shoot variables. Indeed, Bayes factors often provided weak to moderate support for the null. Assessments of performance at a 1-month follow-up also found no difference between the training and control groups, suggesting that benefits did not emerge at a later time.

One reason for the lack of clear training effects could be the very stringent control group employed here. It has been observed that the more positive findings in cognitive training research have tended to originate from studies with weaker designs and less well-matched control groups that do not equate time on task, effort, or the expectation of a training benefit [9, 10]. However, while a lack of well-matched active control groups is common in the cognitive training field, previous studies finding benefits of inhibition training have tended to use robust controls. Ducrocq et al. [26], for instance, employed the same procedure as in the present work, simply omitting the singleton distractor, and Hamilton et al. [23] used generic working memory tasks as a control comparison to visual search and inhibition tasks. Therefore, the active control task cannot explain why the present results diverge from these previous studies.

Another reason for the limited transfer effects could be that the test conditions were not sufficiently stressful or challenging to reveal the benefits of the training. There is evidence to suggest the benefits of cognitive training may only be revealed when cognitive capacity is already diminished. For instance, Ducrocq et al. [26] found that inhibition training resulted in improved tennis volleying performance, but only when participants were placed under performance pressure. Similarly, Wood et al. [36] found that in a Stroop handgun shooting task (where the colour word determines which target to aim at), low working memory individuals showed significant reductions in shooting accuracy when anxious, while those with high working memory capacity did not. Finally, the transfer tests reported in Hamilton et al. [23] consisted of live fire shooting tasks, which are likely to have posed a much greater stress than the task used here, which was performed in the comfort of people’s own homes.

A final consideration that could have limited the possibility of transfer is that participants in the present study were drawn from a student (i.e., civilian) population. Positive inhibition training effects have been observed in novice or naïve groups [22], but other studies have used tennis players [26] or police officers [23], who could be considered relatively more ‘expert’. Future work should therefore consider whether transfer effects are more likely to be revealed in groups who are already well trained in the target skill. In summary, despite the null effects reported here, there may still be value in targeting near or mid-transfer effects for performance optimisation, particularly under more demanding or stressful test conditions and with an expert participant group.

It is also important to acknowledge the possibility that even though inhibition training has generated more promising findings than traditional working memory capacity training, there are only a small number of studies in this specific research topic. As such, the positive reported effects could be a product of the same publication bias and file drawer effects that blight much of the field (e.g., see [10, 37]). The results of the current study should also be considered in the context of its relative strengths and weaknesses. The study was well-powered and pre-registered, providing greater rigour than some previous studies. As an online study we could not, however, ensure that participants performed the training and assessment sessions in a quiet distraction-free environment. Variation in their motivation, arousal levels, or environment could also have added noise to the data, reducing any potential training effects. We also experienced a relatively high dropout rate, which can introduce an element of selection bias and reduce the representativeness of the sample. Dropout analyses (reported in the supplementary files: https://osf.io/mzxtn/) suggested that withdrawal from the study was not related to either gender or baseline cognitive ability. To enable more robust conclusions in future work, researchers could implement strategies to improve participant engagement such as providing greater incentives for retention, and carefully considering the study’s design to minimize participant burden, making it more likely that participants will complete the trial as intended. Lastly, it is important to note that our findings are more applicable to the use of cognitive training for performance enhancement in healthy individuals than for addressing cognitive deficits in clinical or elderly populations. A limitation is that we did not explicitly screen for neurologic/psychiatric disorders, or collect detailed demographic information, but as the sample consisted of university students this was a predominantly young and healthy sample.

Conclusions

While research has converged on the idea that there is little evidence for far transfer following practice on computerised ‘brain training’ tasks, there have been more promising results from methods that specifically focus on the inhibition function of working memory [22, 23, 26]. We have suggested that one reason for these more promising effects could be that these studies are capitalising on ‘mid-transfer’: they do not seek domain-general improvements in cognition, but performance improvements in one specific task that is closely aligned to the training. While we observed some near transfer, we found no evidence to support the effectiveness of inhibition training for shoot/don’t-shoot decision-making. Given previous positive findings, there may still, however, be value in continuing to explore the extent to which cognitive training can capitalise on near or mid-transfer effects for performance optimisation.

Data Availability

Data cannot be shared publicly because this is a Ministry of Defence funded project and the data are subject to additional restrictions. All relevant code and the pre-registration document are available online from: https://osf.io/mzxtn/

Funding Statement

This work was funded by the Defence Science and Technology Laboratory via the Human Social Science Research Capability framework (HS1.030). The funders contributed to the study design and preparation of the manuscript but had no role in data collection and analysis or decision to publish.

References

1. Blacker KJ, Hamilton J, Roush G, Pettijohn KA, Biggs AT. Cognitive training for military application: a review of the literature and practical guide. J Cogn Enhanc. 2018. doi: 10.1007/s41465-018-0076-1
2. Harris DJ, Wilson MR, Vine SJ. A systematic review of commercial cognitive training devices: Implications for use in sport. Front Psychol. 2018;9. doi: 10.3389/fpsyg.2018.00709
3. Ball K, Berch DB, Helmers KF, Jobe JB, Leveck MD, Marsiske M, et al. Effects of Cognitive Training Interventions With Older Adults: A Randomized Controlled Trial. JAMA. 2002;288: 2271–2281. doi: 10.1001/jama.288.18.2271
4. Klingberg T, Fernell E, Olesen PJ, Johnson M, Gustafsson P, Dahlström K, et al. Computerized Training of Working Memory in Children With ADHD: A Randomized, Controlled Trial. J Am Acad Child Adolesc Psychiatry. 2005;44: 177–186. doi: 10.1097/00004583-200502000-00010
5. Nouchi R, Taki Y, Takeuchi H, Hashizume H, Akitsuki Y, Shigemune Y, et al. Brain Training Game Improves Executive Functions and Processing Speed in the Elderly: A Randomized Controlled Trial. PLOS ONE. 2012;7: e29676. doi: 10.1371/journal.pone.0029676
6. Jaeggi SM, Buschkuehl M, Jonides J, Perrig WJ. Improving fluid intelligence with training on working memory. Proc Natl Acad Sci. 2008;105: 6829–6833. doi: 10.1073/pnas.0801268105
7. Owen AM, Hampshire A, Grahn JA, Stenton R, Dajani S, Burns AS, et al. Putting brain training to the test. Nature. 2010;465: 775–778. doi: 10.1038/nature09042
8. Sala G, Gobet F. Does Far Transfer Exist? Negative Evidence From Chess, Music, and Working Memory Training. Curr Dir Psychol Sci. 2017;26: 515–520. doi: 10.1177/0963721417712760
9. Sala G, Gobet F. Cognitive Training Does Not Enhance General Cognition. Trends Cogn Sci. 2019;23: 9–20. doi: 10.1016/j.tics.2018.10.004
10. Shawn Green C, Bavelier D, Kramer AF, Vinogradov S, Ansorge U, Ball KK, et al. Improving Methodological Standards in Behavioral Interventions for Cognitive Enhancement. J Cogn Enhanc. 2019;3: 2–29. doi: 10.1007/s41465-018-0115-y
11. Aksayli ND, Sala G, Gobet F. The cognitive and academic benefits of Cogmed: A meta-analysis. Educ Res Rev. 2019;27: 229–243. doi: 10.1016/j.edurev.2019.04.003
12. Dougherty MR, Hamovitz T, Tidwell JW. Reevaluating the effectiveness of n-back training on transfer through the Bayesian lens: Support for the null. Psychon Bull Rev. 2016;23: 306–316. doi: 10.3758/s13423-015-0865-9
13. Sala G, Aksayli ND, Tatlidil KS, Tatsumi T, Gondo Y, Gobet F. Near and Far Transfer in Cognitive Training: A Second-Order Meta-Analysis. Collabra Psychol. 2019;5: 18. doi: 10.1525/collabra.203
14. Gobet F, Sala G. Cognitive training: a field in search of a phenomenon. Perspect Psychol Sci. 2022. doi: 10.1177/17456916221091830
15. Simons DJ, Boot WR, Charness N, Gathercole SE, Chabris CF, Hambrick DZ, et al. Do “Brain-Training” programs work? Psychol Sci Public Interest. 2016;17: 103–186. doi: 10.1177/1529100616661983
16. Melby-Lervåg M, Redick TS, Hulme C. Working Memory Training Does Not Improve Performance on Measures of Intelligence or Other Measures of “Far Transfer”: Evidence From a Meta-Analytic Review. Perspect Psychol Sci. 2016;11: 512–534. doi: 10.1177/1745691616635612
17. Stojanoski B, Lyons KM, Pearce AAA, Owen AM. Targeted training: Converging evidence against the transferable benefits of online brain training on cognitive function. Neuropsychologia. 2018;117: 541–550. doi: 10.1016/j.neuropsychologia.2018.07.013
18. Biggs AT. Applying inhibitory control theories to shoot/don’t-shoot decisions. Appl Cogn Psychol. 2022;36: 154–165. doi: 10.1002/acp.3905
19. Biggs AT, Pettijohn KA. The role of inhibitory control in shoot/don’t-shoot decisions. Q J Exp Psychol. 2021. doi: 10.1177/17470218211041923
20. Miyake A, Friedman NP, Emerson MJ, Witzki AH, Howerter A, Wager TD. The Unity and Diversity of Executive Functions and Their Contributions to Complex “Frontal Lobe” Tasks: A Latent Variable Analysis. Cognit Psychol. 2000;41: 49–100. doi: 10.1006/cogp.1999.0734
21. Verbruggen F, Logan GD. Response inhibition in the stop-signal paradigm. Trends Cogn Sci. 2008;12: 418–424. doi: 10.1016/j.tics.2008.07.005
22. Biggs AT, Cain MS, Mitroff SR. Cognitive training can reduce civilian casualties in a simulated shooting environment. Psychol Sci. 2015;26: 1164–1176. doi: 10.1177/0956797615579274
23. Hamilton JA, Lambert G, Suss J, Biggs AT. Can Cognitive Training Improve Shoot/Don’t-Shoot Performance? Evidence from Live Fire Exercises. Am J Psychol. 2019;132: 179–194.
24. Draheim C, Pak R, Draheim AA, Engle RW. The role of attention control in complex real-world tasks. Psychon Bull Rev. 2022. doi: 10.3758/s13423-021-02052-2
25. Knudsen EI. Fundamental Components of Attention. Annu Rev Neurosci. 2007;30: 57–78. doi: 10.1146/annurev.neuro.30.051606.094256
26. Ducrocq E, Wilson M, Vine S, Derakshan N. Training attentional control improves cognitive and motor task performance. J Sport Exerc Psychol. 2016;38: 521–533. doi: 10.1123/jsep.2016-0052
27. Faul F, Erdfelder E, Lang A-G, Buchner A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39: 175–191. doi: 10.3758/bf03193146
28. Harris DJ, Wilson MR, Smith SJR, Meder N, Vine SJ. Testing the Effects of 3D Multiple Object Tracking Training on Near, Mid and Far Transfer. Front Psychol. 2020;11. doi: 10.3389/fpsyg.2020.00196
29. Peirce JW. PsychoPy—Psychophysics software in Python. J Neurosci Methods. 2007;162: 8–13. doi: 10.1016/j.jneumeth.2006.11.017
30. Eriksen BA, Eriksen CW. Effects of noise letters upon the identification of a target letter in a nonsearch task. Percept Psychophys. 1974;16: 143–149. doi: 10.3758/BF03203267
31. Nieuwenhuys A, Savelsbergh GJP, Oudejans RRD. Shoot or don’t shoot? Why police officers are more inclined to shoot when they are anxious. Emotion. 2012;12: 827–833. doi: 10.1037/a0025699
32. Green DM, Swets JA. Signal detection theory and psychophysics. New York: Wiley; 1966.
33. Norman G. Likert scales, levels of measurement and the “laws” of statistics. Adv Health Sci Educ. 2010;15: 625–632. doi: 10.1007/s10459-010-9222-y
34. van Doorn J, van den Bergh D, Bohm U, Dablander F, Derks K, Draws T, et al. The JASP Guidelines for Conducting and Reporting a Bayesian Analysis. PsyArXiv; 2019. doi: 10.31234/osf.io/yqxfr
35. Jaeggi SM, Buschkuehl M, Jonides J, Shah P. Short- and long-term benefits of cognitive training. Proc Natl Acad Sci. 2011;108: 10081–10086. doi: 10.1073/pnas.1103228108
36. Wood G, Vine SJ, Wilson MR. Working memory capacity, controlled attention and aiming performance under pressure. Psychol Res. 2016;80: 510–517. doi: 10.1007/s00426-015-0673-x
37. Coburn KM, Vevea JL. Publication bias as a function of study characteristics. Psychol Methods. 2015;20: 310–330. doi: 10.1037/met0000046

Decision Letter 0

Celia Andreu-Sánchez

29 Jun 2023

PONE-D-23-00368

Can cognitive training capitalise on near transfer effects? Limited evidence of transfer following online inhibition training in a randomised-controlled trial

PLOS ONE

Dear Dr. Harris,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Aug 13 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Celia Andreu-Sánchez

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following financial disclosure:

“This work was funded by the Defence Science and Technology Laboratory via the Human Social Science Research Capability framework (HS1.030).”

Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed.

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

3. We note that Figure 1 in your submission contain copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

a. You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license.

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

b. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

4. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The present study aimed to investigate whether an online inhibition training task could generate near and mid-transfer effects in the context of response inhibition tasks. Furthermore, the authors examined whether any benefits would persist over a 1-month interval. This study was pre-registered and used a randomized controlled trial design. Overall, n=73 participants were included in this study and allocated to either an inhibition training program (six training sessions of a visual search task with a singleton distractor) or a closely matched active control task (that omitted the distractor element). As a result, the authors report tentative evidence for near transfer and no evidence of mid-transfer. Furthermore, there was no evidence that the magnitude of training improvement was related to transfer task performance.

The study focuses on a scientific topic that interests readers of PLOS ONE. The English is of publishable quality; however, as I am not a native speaker, this should be checked elsewhere. The introduction has a straightforward structure and is well-written. The design of the study and the methods are described and conducted well and allow replication of the study. Also, the results are reported clearly and transparently. Finally, the discussion is also well-conducted. Accordingly, I only have some minor aspects that could be considered for a potential revision:

- There was a relatively high drop-out rate. Did the authors consider a drop-out analysis to investigate whether any characteristics of the participants could result in the incompletion of the training?

- On page 19, some parts of the manuscript are mixed up at the end of the page.

- The authors did not report any limitations. Admittedly, this study is well-conducted; however, were there any aspects that could be seen as limitations and that should be mentioned to help readers plan future studies?

Reviewer #2: This is a well-written, rigorous study (in terms of design, in particular) of the kind that are much needed in the cognitive training field. I have a few comments that I hope will help improve the manuscript:

1. Given the variability in findings in healthy vs. clinical populations (e.g., far transfer effect results have been more promising in aging/mild cognitive impairment (Basak et al., 2020; Hill et al., 2017)), I think it would be useful from the start to clarify what your target population/target outcome is because it provides important context for your framing. Particularly given the fact that your participants are healthy younger adults; it seems to me that your goal is optimizing performance in healthy younger adults, but this only becomes clear in the last paragraph of the introduction. You should also caveat any interpretations of your finding that they may not generalize to clinical/older adult populations.

2. Related to the point above, it is not clear if this is a healthy sample. Given your very open inclusion/exclusion criteria, did you take any measure of potential neurological/psychiatric disorders? I think that including a population sample is fine, but it may be a limitation in terms of being able to understand the generalizability of your findings.

3. What are the ages of the participants? Again, this is important for interpretation and understanding the generalizability of findings, and is generally always reported. If age can’t be reported or wasn’t collected, please state that clearly as a limitation.

4. Given that this was a (likely mostly) healthy younger population with a low-risk/non-invasive task, the retention rate seems very poor. Can you explain the poor retention rate, or is this more common for online trials? I think that performing the trial online has some benefits, but the limitations (e.g., retention rate, potentially lower participant engagement) should be clearly outlined. You should also mention that it is online at all opportunities including in the introduction.

5. Given the poor retention during training, how did you deal with missing data? I think it might be appropriate to perform an intention-to-treat analysis or something similar to see if drop-out characteristics affected results.

6. Have you considered performing an analysis such as GEE, MLM (Ma et al., 2012), or an analysis of change scores controlling for baseline differences (Mattes & Roheger, 2020)? These may be more robust than an ANOVA, and GEE and MLM can handle missing data such as exists in your study. They also account for baseline differences, which would make interpretation easier and may allow you to rule out catch-up effects (a minimal illustration of these models follows the reference list below). Also, please confirm whether the ANOVA you performed was repeated measures; it isn’t clear in the paper. I also think these more complex models may allow you to model the training gains analysis in a more rigorous way compared to just correlating change scores. This analysis only needs to be completed for effects that were significant in the main analysis, which might help power given the correction for multiple comparisons.

7. Given you found a group*time interaction on a transfer variable (flanker task) in a pre-registered study with very minimal differences between the conditions in a relatively small sample, your general interpretation is slightly confusing: it seems like you are trying to downplay the result. A group*time interaction is stronger evidence than a post-test t-test, which I would not include. Ideally, the analyses I have suggested will allow for clearer results, but in general I think this is a very promising finding that doesn’t match the tone in which it is described.

References

Basak, C., Qin, S., & O'Connell, M. A. (2020). Differential effects of cognitive training modules in healthy aging and mild cognitive impairment: A comprehensive meta-analysis of randomized controlled trials. Psychology and Aging, 35(2), 220.

Hill, N. T., Mowszowski, L., Naismith, S. L., Chadwick, V. L., Valenzuela, M., & Lampit, A. (2017). Computerized cognitive training in older adults with mild cognitive impairment or dementia: A systematic review and meta-analysis. American Journal of Psychiatry, 174(4), 329-340.

Ma, Y., Mazumdar, M., & Memtsoudis, S. G. (2012). Beyond repeated-measures analysis of variance: Advanced statistical methods for the analysis of longitudinal data in anesthesia research. Regional Anesthesia & Pain Medicine, 37(1), 99-105.

Mattes, A., & Roheger, M. (2020). Nothing wrong about change: The adequate choice of the dependent variable and design in prediction of cognitive training success. BMC Medical Research Methodology, 20(1), 1-15.
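
To make these suggestions concrete, here is a minimal sketch of the MLM and GEE approaches in Python with statsmodels; the data file and column names are hypothetical and do not come from the study:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long-format data, one row per participant x session.
# Hypothetical columns: subject, group (training/control), time (pre/post), score.
df = pd.read_csv("transfer_long.csv")

# Multilevel model (MLM): group x time fixed effects with a random
# intercept per participant; uses all available sessions, so it
# tolerates participants with missing time points.
mlm = smf.mixedlm("score ~ group * time", data=df, groups=df["subject"]).fit()
print(mlm.summary())

# GEE alternative with an exchangeable working correlation
# across each participant's repeated measures.
gee = smf.gee("score ~ group * time", groups="subject", data=df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()
print(gee.summary())

Both models use every available session from each participant, which is what makes them tolerant of the drop-out discussed elsewhere in this review.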

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: PD Dr. Jan Christopher Cwik

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Nov 10;18(11):e0293657. doi: 10.1371/journal.pone.0293657.r002

Author response to Decision Letter 0


24 Jul 2023

We wanted to thank the two expert reviewers for taking the time to appraise our work and provide helpful comments on the manuscript. We have made changes to the paper and provide point-by-point responses to each comment below. We think the changes have improved the manuscript and hope they address all of the reviewers’ concerns.

Journal requirements:

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

Response: Manuscript reformatted.

2. Thank you for stating the following financial disclosure:

“This work was funded by the Defence Science and Technology Laboratory via the Human Social Science Research Capability framework (HS1.030).”

Please state what role the funders took in the study. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

Response: Amended, thanks.

3. We note that Figure 1 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

Response: We have removed the copyrighted images and replaced them with icons to avoid any reproduction issues.

4. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Response: Checked, thanks.

Reviewers comments

Reviewer #1: The present study aimed to investigate whether an online inhibition training task could generate near and mid-transfer effects in the context of response inhibition tasks. Furthermore, the authors examined whether any benefits would persist over a 1-month interval. This study was pre-registered and used a randomized controlled trial design. Overall, n=73 participants were included in this study and allocated to either an inhibition training program (six training sessions of a visual search task with a singleton distractor) or a closely matched active control task (that omitted the distractor element). As a result, the authors report tentative evidence for near transfer and no evidence of mid-transfer. Furthermore, there was no evidence that the magnitude of training improvement was related to transfer task performance.

The study focuses on a scientific topic that interests readers of PLOS ONE. The English is of publishable quality; however, as I am not a native speaker, this should be checked elsewhere. The introduction has a straightforward structure and is well-written. The design of the study and the methods are described and conducted well and allow replication of the study. Also, the results are reported clearly and transparently. Finally, the discussion is also well-conducted. Accordingly, I only have some minor aspects that could be considered for a potential revision:

Response: Thank you for the positive comments about the work.

- There was a relatively high drop-out rate. Did the authors consider a drop-out analysis to investigate whether any characteristics of the participants could result in the incompletion of the training?

Response: Thanks, we hadn’t considered this and it would be an interesting analysis to add. However, we don’t really have much information on participant characteristics on which to base this analysis. We collected minimal demographic information (only gender), which was probably an error. We will definitely record more data on this in the future to enable this sort of analysis.

- On page 19, some parts of the manuscript are mixed up at the end of the page.

Response: The formatting might have gone awry when it was changed to a PDF, but it seems to just be the continuation of the footnote from page 18. It looks fine on the resubmitted version.

- The authors did not report any limitations. Admittedly, this study is well-conducted; however, were there any aspects that could be seen as limitations and that should be mentioned to help readers plan future studies?

Response: Thanks, yes of course any study has limitations. We have added some to the discussion as suggested (lines 409-418):

“The results of the current study should also be considered in the context of its relative strengths and weaknesses. The study was well-powered and pre-registered, providing greater rigour than some previous studies. As an online study we could not, however, ensure that participants performed the training and assessment sessions in a quiet distraction-free environment. Variation in their motivation, arousal levels, or environment could also have added noise to the data, reducing any potential training effects. Lastly, it is important to note that our findings are more applicable to the use of cognitive training for performance enhancement in healthy individuals than for addressing cognitive deficits in clinical or elderly populations. A limitation is that we did not explicitly screen for neurologic/psychiatric disorders, or collect detailed demographic information, but as the sample consisted of university students this was a predominantly young and healthy sample.”

Reviewer #2: This is a well-written, rigorous study (in terms of design, in particular) of the kind that are much needed in the cognitive training field. I have a few comments that I hope will help improve the manuscript:

1. Given the variability in findings in healthy vs. clinical populations (e.g., far transfer effect results have been more promising in aging/mild cognitive impairment (Basak et al., 2020; Hill et al., 2017)), I think it would be useful from the start to clarify what your target population/target outcome is because it provides important context for your framing. Particularly given the fact that your participants are healthy younger adults; it seems to me that your goal is optimizing performance in healthy younger adults, but this only becomes clear in the last paragraph of the introduction. You should also caveat any interpretations of your finding that they may not generalize to clinical/older adult populations.

Response: Thanks, we have added an earlier mention in the introduction that our focus here is on performance optimisation rather than addressing cognitive impairment (see line 7) as well as noting that the results don’t apply to clinical or older populations (line 414-416).

2. Related to the point above, it is not clear if this is a healthy sample. Given your very open inclusion/exclusion criteria, did you take any measure of potential neurological/psychiatric disorders? I think that including a population sample is fine, but it may be a limitation in terms of being able to understand the generalizability of your findings.

Response: We did not ask for this information, but yes this would have been useful to check. Given that the population was university students we have a pretty good idea that this was a broadly young and healthy population. We have added a note on this (line 416-418).

3. What are the ages of the participants? Again, this is important for interpretation and understanding the generalizability of findings, and is generally always reported. If age can’t be reported or wasn’t collected, please state that clearly as a limitation.

Response: Added, thanks.

4. Given that this was a (likely mostly) healthy younger population with a low-risk/non-invasive task, the retention rate seems very poor. Can you explain the poor retention rate, or is this more common for online trials? I think that performing the trial online has some benefits, but the limitations (e.g., retention rate, potentially lower participant engagement) should be clearly outlined. You should also mention that it is online at all opportunities including in the introduction.

Response: It is hard to pinpoint the reason for the drop-out. Six sessions is quite a lot, and the online nature means it is very easy for people to decide that they don’t want to continue. Most of the drop-out occurred after the first session; if people logged on for the second session, they generally made it to the end. We have added some extra mentions of the online data collection as suggested.

5. Given the poor retention during training, how did you deal with missing data? I think it might be appropriate to perform an intention-to-treat analysis or something similar to see if drop-out characteristics affected results.

Response: Thanks, we thought this was a nice idea and we looked into how to do it. However, an intention-to-treat analysis only works if participants returned for the post-test, and only a single participant failed to complete all the training yet still completed the post-test. Essentially, if people got bored and stopped the training, they did not return, so there is no data to analyse.

6. Have you considered performing an analysis such as GEE, MLM (Ma et al., 2012), or an analysis of change scores controlling for baseline differences (Mattes & Roheger, 2020)? These may be more robust than an ANOVA, and GEE and MLM can handle missing data such as exists in your study. They also account for baseline differences, which would make interpretation easier and may allow you to rule out catch-up effects. Also, please confirm whether the ANOVA you performed was repeated measures; it isn’t clear in the paper. I also think these more complex models may allow you to model the training gains analysis in a more rigorous way compared to just correlating change scores. This analysis only needs to be completed for effects that were significant in the main analysis, which might help power given the correction for multiple comparisons.

Response: We did initially consider running ANCOVAs for the main analyses with baseline scores as a covariate. We opted for a repeated-measures ANOVA in the pre-registration plan, so we believe we should stick to this analysis approach for the main paper. In response to this suggestion (and because there were some baseline differences between the groups), we have also run ANCOVA versions of the main analyses (of the kind sketched below) and placed them in a supplementary file. These analyses reinforce that there was no benefit of being in the training group. In fact, the only group-level effects that were significant in this analysis indicated that the training group had more ‘false alarm’ responses and a more liberal response bias.
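
As an illustration only, an ANCOVA of the kind described in this response might look like the following in Python with statsmodels (the file and column names are hypothetical, not the study’s actual code):

import pandas as pd
import statsmodels.formula.api as smf

# Wide-format data, one row per participant.
# Hypothetical columns: group (training/control), pre, post.
wide = pd.read_csv("transfer_wide.csv")

# ANCOVA: post-test score regressed on group with the baseline (pre)
# score as a covariate, so the group term reflects post-test
# differences adjusted for where each participant started.
ancova = smf.ols("post ~ pre + C(group)", data=wide).fit()
print(ancova.summary())

Adjusting for the baseline in this way addresses the catch-up and regression-to-the-mean concerns raised by the reviewer.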

7. Given you found a group*time interaction on a transfer variable (flanker task) in a pre-registered study with very minimal differences between the conditions in a relatively small sample, your general interpretation is slightly confusing: it seems like you are trying to downplay the result. A group*time interaction is stronger evidence than a post-test t-test, which I would not include. Ideally, the analyses I have suggested will allow for clearer results, but in general I think this is a very promising finding that doesn’t match the tone in which it is described.

Response: We have a slightly different interpretation of this result. Our initial interpretation was that the interaction effect could be due to the baseline differences in the flanker scores and therefore just an artefact. We were therefore wary of talking it up too much. The additional tests that you suggested have been useful in this regard, because when controlling for baseline differences there was no group effect.

While we take the point that the interaction should not be ignored, there was no difference between the groups at the post-training time point. In the context of testing the success of a training intervention this seems to be the stronger indicator: those in the training group were not better off than those in the control group.

We have re-read the conclusion and have reworded it in a couple of places because we realise that we probably dismissed the interaction too easily, but we have stuck with our original interpretation that this interaction was probably a result of the baseline differences (see lines 360-377).

Attachment

Submitted filename: response to reviewers.docx

Decision Letter 1

Celia Andreu-Sánchez

4 Sep 2023

PONE-D-23-00368R1

Can cognitive training capitalise on near transfer effects? Limited evidence of transfer following online inhibition training in a randomised-controlled trial

PLOS ONE

Dear Dr. Harris,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please address the comments made by Reviewer 2.

Please submit your revised manuscript by Oct 19 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Celia Andreu-Sánchez

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors revised the manuscript very carefully and took all of my recommendations into consideration. From my point of view, the paper is of good quality and could be accepted for publication.

Reviewer #2: I have a few responses to the revisions:

Thank you for the changes to the introduction and discussion, I think they improve the paper significantly.

1. I think you should add to the limitations that it had a high drop-out rate and you need to find ways to mitigate that in future work if you want to really understand what is happening.

2. I agree with the first reviewer that at least a drop-out analysis should be performed. Even if you don’t have extensive demographic info, you can still see whether the participants who dropped out differed significantly on the variables you have measured (pre-test scores, gender, etc.).

3. I agree with the authors that basic intent-to-treat analyses would not help in this case, but there are versions that work with participants who drop out and do not complete the post-test. See this paper for example: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6022256/. I think it is fine if you don’t want to do this analysis, just wanted to provide some information.

4. I still disagree with the framing of the near transfer finding. Interpreting post-training differences is unhelpful because what you are really interested in is change from pre- to post-test. If you have differences at pre-test, then post-test differences are even less useful. Sometimes post-test t-tests are used because, if randomization works correctly, you can theoretically say that only post-test differences matter, as pre-test differences are controlled by randomization; but, as in your case, randomization often does not work perfectly (although your baseline differences are non-significant). In your study you have a group*time interaction and significant improvement in the active but not the control group. This is a positive finding. I agree with carefully interpreting it given the other analyses, but it is still evidence for near transfer. This sentence: “Our initial interpretation was that the interaction effect could be due to the baseline differences in the flanker scores and therefore just an artefact” in particular suggests a misunderstanding of the goals of this sort of analysis. If you had baseline differences that were erased at post-test, this could either be regression to the mean (which you mention as an option) or a real intervention effect that is being masked at post-test by real baseline differences. I recommend against post-test t-tests as a meaningful analysis in an intervention design and would remove them. Overall, I don’t think this requires a huge change in the framing (it is a very tentative positive finding given the other results), but I do find it strange that you hypothesized a positive result, got a positive result, and are framing it as a negative result (not a caveated positive result).

5. In the retention analysis, it is common to compare pre-test vs. follow-up, not post-test vs. follow-up. You want to know whether there are differences at follow-up compared to baseline, not whether differences emerged after the post-test (which is unlikely). I would recommend repeating the retention analysis using the pre-test.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Adam Turnbull

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Nov 10;18(11):e0293657. doi: 10.1371/journal.pone.0293657.r004

Author response to Decision Letter 1


12 Oct 2023

Response: We would like to thank the reviewers for taking the time to re-evaluate the manuscript. The additional queries have been really useful and we have made some substantial changes/additions to the paper.

Reviewer 2

Thank you for the changes to the introduction and discussion, I think they improve the paper significantly.

1. I think you should add to the limitations that it had a high drop-out rate and you need to find ways to mitigate that in future work if you want to really understand what is happening.

Response: Yes, thanks for this suggestion. Added to the discussion (see lines 385-391).

2. I agree with the first reviewer that at least a drop-out analysis should be performed. Even if you don’t have extensive demographic info, you can still see whether the participants who dropped out differed significantly on the variables you have measured (pre-test scores, gender, etc.).

Response: Thanks, we have added this analysis in a supplementary file and refer to it in the discussion where we discuss the dropout rate. We compared the flanker performance between those that withdrew and those that stayed in the study. There was no indication that baseline cognitive performance was different between those who dropped out and those who remained in the study. There was also no indication that one gender dropped out disproportionately.
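
A minimal sketch of this kind of drop-out analysis in Python (the file and column names are hypothetical; this is not the authors’ actual code):

import pandas as pd
from scipy import stats

# One row per participant. Hypothetical columns: flanker_score
# (baseline), gender, completed (True if the participant finished).
df = pd.read_csv("baseline.csv")
stayed = df[df["completed"]]
dropped = df[~df["completed"]]

# Welch's t-test on baseline flanker performance, completers vs. withdrawals.
t, p = stats.ttest_ind(stayed["flanker_score"], dropped["flanker_score"],
                       equal_var=False)
print(f"baseline flanker: t = {t:.2f}, p = {p:.3f}")

# Chi-square test for disproportionate drop-out by gender.
chi2, p_gender, dof, expected = stats.chi2_contingency(
    pd.crosstab(df["completed"], df["gender"]))
print(f"gender x drop-out: chi2 = {chi2:.2f}, p = {p_gender:.3f}")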

3. I agree with the authors that basic intent-to-treat analyses would not help in this case, but there are versions that work with participants who drop out and do not complete the post-test. See this paper for example: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6022256/. I think it is fine if you don’t want to do this analysis, just wanted to provide some information.

Response: Thanks, this looks really useful. Hopefully the analysis we have done provides some indication that the withdrawals did not bias the sample, and we will consider this method in future trials.

4. I still disagree with the framing of the near transfer finding. Interpreting post-training differences is unhelpful because what you are really interested in is change from pre- to post-test. If you have differences at pre-test, then post-test differences are even less useful. Sometimes post-test t-tests are used because, if randomization works correctly, you can theoretically say that only post-test differences matter, as pre-test differences are controlled by randomization; but, as in your case, randomization often does not work perfectly (although your baseline differences are non-significant). In your study you have a group*time interaction and significant improvement in the active but not the control group. This is a positive finding. I agree with carefully interpreting it given the other analyses, but it is still evidence for near transfer. This sentence: “Our initial interpretation was that the interaction effect could be due to the baseline differences in the flanker scores and therefore just an artefact” in particular suggests a misunderstanding of the goals of this sort of analysis. If you had baseline differences that were erased at post-test, this could either be regression to the mean (which you mention as an option) or a real intervention effect that is being masked at post-test by real baseline differences. I recommend against post-test t-tests as a meaningful analysis in an intervention design and would remove them. Overall, I don’t think this requires a huge change in the framing (it is a very tentative positive finding given the other results), but I do find it strange that you hypothesized a positive result, got a positive result, and are framing it as a negative result (not a caveated positive result).

Response: Thanks, you make a good argument that we have been overly conservative! We have reworded some sections in the discussion (lines 330-342; 403) to frame this as a positive result with some caveats, rather than dismiss it as a negative result. We have made a small change to the abstract as well to reflect this.

5. In the retention analysis, it is common to compare pre-test vs. follow-up, not post-test vs. follow-up. You want to know whether there are differences at follow-up compared to baseline, not whether differences emerged after the post-test (which is unlikely). I would recommend repeating the retention analysis using the pre-test.

Response: Thank you, yes this approach makes sense. We have repeated the retention analyses comparing against baseline instead (lines 283-308). We think the other analyses still have some value, so we have added them as an online supplementary file. The new analyses don’t alter any of the conclusions.
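
A minimal sketch of this baseline-referenced retention comparison in Python (hypothetical file and column names; not the authors’ actual code):

import pandas as pd
from scipy import stats

# Wide-format data, one row per participant. Hypothetical columns:
# pre (baseline) and followup (1-month) scores; keep only returners.
wide = pd.read_csv("retention.csv").dropna(subset=["pre", "followup"])

# Paired comparison of follow-up against baseline, as recommended,
# rather than against the immediate post-test.
t, p = stats.ttest_rel(wide["pre"], wide["followup"])
print(f"pre vs. follow-up: t = {t:.2f}, p = {p:.3f}")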

Attachment

Submitted filename: Rebuttal.docx

Decision Letter 2

Celia Andreu-Sánchez

18 Oct 2023

Can cognitive training capitalise on near transfer effects? Limited evidence of transfer following online inhibition training in a randomised-controlled trial

PONE-D-23-00368R2

Dear Dr. Harris,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Celia Andreu-Sánchez

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: Thank you for addressing my comments. I think that the manuscript is much improved and should be published.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Adam Turnbull

**********

Acceptance letter

Celia Andreu-Sánchez

23 Oct 2023

PONE-D-23-00368R2

Can cognitive training capitalise on near transfer effects? Limited evidence of transfer following online inhibition training in a randomised-controlled trial

Dear Dr. Harris:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Celia Andreu-Sánchez

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: response to reviewers.docx

    Attachment

    Submitted filename: Rebuttal.docx

    Data Availability Statement

    Data cannot be shared publicly because this is a Ministry of Defence funded project and data are subject to additional restrictions. All relevant code, and the pre-registration document is available online from: https://osf.io/mzxtn/

