PLOS One
. 2023 Aug 2;18(8):e0285925. doi: 10.1371/journal.pone.0285925

A game-factors approach to cognitive benefits from video-game training: A meta-analysis

Evan T Smith 1,2,#, Chandramallika Basak 1,2,*,#
Editor: Alessandra S Souza3
PMCID: PMC10395941  PMID: 37531408

Abstract

The current study is a meta-analysis of 63 studies of video-game-based cognitive interventions (118 investigations, N = 2,079), which demonstrated a moderate and significant training effect on overall gains in cognition, g = 0.25, p < .001. Significant evidence of transfer was found to overall cognition, as well as to the attention/perception and higher-order cognition constructs. Examination of specific gameplay features, however, showed selective and differential transfer to these outcome measures, whereas the genre labels of “action”, “strategy”, “casual”, and “non-casual” were not similarly predictive of outcomes. We therefore recommend that future video-game interventions targeting cognitive enhancement consider a gameplay-feature classification approach over the existing genre classification, as it may provide more fruitful training-related benefits to cognition.

Introduction

Videogames have become increasingly prevalent as a cognitive training medium, with controversial results regarding their effectiveness in facilitating transfer to a broad range of cognitive abilities [1, 2]. According to Simons et al. (2016), videogames implemented in cognitive training are typically divided into three broad categories: 1) “Action” games, which emphasize the rapid identification, prioritization, and handling of threats; 2) “Strategy” games, which emphasize resource monitoring and efficiently conducting multiple simultaneous tasks; and 3) “Casual” games, which are relatively simple and designed for short playtimes compared to the other categories. The distinction between these categories is an important one, as each theoretically encapsulates a separate construct of cognitive faculties [3–6], and therefore may produce different cognitive benefits after video game training (VGT). However, there is mounting evidence that such a coarse distinction between gaming genres is insufficient to describe the profile of cognitive demands of a given game, as modern video games increasingly include features of multiple genres [7]. Therefore, there is a need to examine VGT-induced benefits to various cognitive abilities based on the specific gameplay factors invoked by the games used.

Of these categories, “Action” games have been the most extensively used in VGT [2], with reported cognitive benefits to visual attention [8–10], cognitive control [11, 12], and visual working memory [13]. However, the “action game” moniker is extremely broad regarding the types of games it has been applied to. Most commonly, studies of “action games” refer to games in the “first-person shooter” (FPS) genre [14], with a handful referring to games in the “racing” (e.g. Need for Speed), “fighting” (e.g. Guilty Gear [15]), “platformer” (e.g. New Super Mario Bros. [16]), and “sandbox” genres (e.g. Grand Theft Auto V [17]). These genres differ widely from each other in terms of gameplay and mechanics. For example, both FPS and “racing” games involve first-person egocentric navigation of a three-dimensional environment, but differ in their primary method of interaction with the environment, namely combat (FPS) vs. movement (racing). Conversely, the “platformer” and “fighter” games cited above involve allocentric navigation through a two-dimensional, not three-dimensional, environment. They also differ from each other in terms of primary gameplay mechanic (platformers emphasize movement in avoidance of multiple hazards, whereas fighting games emphasize combat against a single opponent). The exemplar “sandbox” game uses egocentric navigation and combat interaction similar to FPS, but uniquely possesses multiple win-states that players are free to choose from, which is distinct from the single-goal structure of FPS, “racing”, “platforming”, or “fighting” games.

“Strategy” games are relatively understudied compared to “action” games [2]; their reported benefits include reasoning [18–20], visual short-term memory, working memory, and cognitive control [18]. Unlike the games classified under the “action” moniker, the games thus far classified as “strategy” games have almost universally belonged to one subgenre, “real-time strategy” (RTS) games. RTS games are played from an allocentric, third-person perspective similar to “platform” or “fighting” games (subtypes of “action” games), but are distinct from those action games in that the player is not bound to a single game object. Rather, RTS games require the player to simultaneously monitor multiple game objects to succeed. Some distinction can still be made between RTS games that feature an active antagonist (such as a rival civilization, as in Rise of Nations [18]) and those that do not (as with Sushi-Go-Round [3, 5]). Similarly, a distinction can be made between “strategy” games that feature multiple win-states (Rise of Nations) and those featuring a single goal (Sushi-Go-Round or StarCraft).

Research regarding the effectiveness of “casual” games is the sparsest, with some evidence of benefits to attention and cognitive control [21, 22]. Complicating these results is the fact that the “casual game” designation is not an exclusive definition: it includes any relatively simple game designed to be played for short periods at a stretch (<30 minutes), and can therefore encompass many other genres of game. Indeed, Baniqued and colleagues (2013) examined the cognitive correlates of performance in 20 different casual games and found a wide range of correlation profiles. The granularity of cognitive constructs across casual games demonstrated by these studies justifies the experimental distinction between game genres—the correlation profile of “casual strategy” games differed qualitatively from that of “casual action” games—and also demonstrates that casual games should not be treated as a genre in their own right, as the current literature tends to do [2].

If we assume that different gameplay factors reflect different cognitive demands, then these different games should be reclassified in terms of gameplay factors, not existing genres of “action” and “strategy”—yet the existing literature largely ignores this distinction. We hypothesize that the inconsistency regarding cognitive benefits from VGT is to some extent attributable to a lack of proper understanding of the differing cognitive demands evoked by gameplay factors. This oversight is particularly pronounced for “action” games. The aim of this meta-analysis is therefore to disambiguate the findings of the extant VGT literature by examining the specific gameplay features of the games used for training.

Methods

Literature search

A manual literature search was conducted for studies meeting the following inclusion criteria: video-game-based cognitive intervention, pre- and post-intervention cognitive assessment, and randomized controlled trial (RCT) design. Search terms included “video game training”; “[Genre] game training” (action, strategy, casual, real-time strategy, first-person shooter, racing, fighting, platforming); alternate spellings of these terms (e.g. “fighter”, “shooter”, “videogame”, “computer game”); abbreviations (e.g. “FPS”); and substitutions of “training” with equivalent terminology (“learning”, “intervention”, “cognitive intervention”, “brain training”). S1 Method has the details on all search terms used. Studies with clinically impaired participants were excluded from this meta-analysis, given the variability of the different clinically impaired populations and the limited number of video-game interventions therein. Studies of video games specifically designed to facilitate cognitive transfer (e.g. “edutainment” games), as well as computerized interventions designed for cognitive remediation or maintenance, were excluded from this study, as the efficacy of such training methodologies has been examined in past work from the authors of the present meta-analysis [23]. Similarly, games designed as physical fitness activities (e.g. “exergames”) were excluded from the present meta-analysis, due to the impossibility of disentangling the effects of gameplay and of physical activity on the cognitive outcome of training. This literature search was conducted using the PubMed, MEDLINE, PsycARTICLES, PsycEXTRA, Psychology and Behavioral Sciences Collection, PsycINFO, PsycTESTS, and Google Scholar databases, with a cutoff date of May 1st, 2021. Reference lists of past meta-analyses and literature reviews [2, 23–31] were also used as a complement to broaden the literature search.

Data collection

All included studies were read thoroughly and categorized based on the type of video-game intervention used, using the broad genre definitions provided in Simons et al. (2016) of “action”, “strategy”, and “casual” games. These were coded as two factors: Genre (action vs. strategy) and Format (long-form vs. casual), based on past research suggesting that the category of “casual” video games encompasses games of multiple genres [3, 5, 6]. The Physiotherapy Evidence Database (PEDro) scale, an 11-item scale designed to assess the quality and reporting of randomized controlled trials [32, 33], was used to assess the quality of each individual study. The PEDro scale was utilized over other measures of study quality due to its extensive past use in high-quality meta-analyses of RCTs involving cognitive training [e.g. 23, 31, 34–36]. Variables were then coded for each study for subsequent moderator analyses. Four sets of moderators/variables were coded: 1) participant characteristics (% female, average age), 2) training characteristics (see below), 3) publication characteristics (year of publication, PEDro score, number of outcome measures), and 4) control group characteristics (see below).

Training characteristics included overall hours of video-game training, and a series of binary gameplay factors involved in the video-game. We included the following gameplay factors: Movement Style (egocentric vs. allocentric method of spatial navigation), Perspective (1st person vs 3rd person viewing perspective), number of Controllable Objects (single vs multiple), number of Win States (single win state vs multiple win states), Type of Opponent featured by the game (active opponent vs passive threshold), the presence of Time Pressure (present vs absent), and the Primary Interaction method of the game (combat vs noncombat).

The Movement Style, Perspective, and Controllable Objects factors were selected to typify the major gameplay features of the subgenres of “Action” and “Strategy” games most commonly used in VGT interventions, namely “first-person shooter” and “real-time strategy” games [2, 14]. These two genres feature first-person perspective, egocentric movement, and a single controllable object (FPS), versus third-person perspective, allocentric movement, and multiple controllable objects (strategy). Time Pressure was selected as a feature of interest due to the numerous past studies suggesting that video games engender cognitive transfer via the application of high cognitive demands under time pressure [18, 24]. Combat versus an active opponent has been theorized to engender cognitive transfer by enforcing unpredictable task demands [1, 18–20, 24]—this was coded as the two factors of Combat and Active Opponent in order to flexibly capture games that featured combat against an active opponent (e.g. the “FPS” and “RTS” genres), as well as games that featured unpredictable, active opponents but were not combat-focused (e.g. the “racing” genre). Lastly, Win States was selected as a feature of interest to test the assertion from Basak et al., 2008 [18] that the multiple win states possible in the Rise of Nations video game engendered the transfer to task switching and executive function seen in that study, as well as to better categorize games such as Grand Theft Auto V [17] that allow for multiple play-styles and/or include many optional activities.

Movement Style was coded as egocentric if navigation in the game was performed with respect to the player’s perspective (e.g. left/right movement), and allocentric if navigation was performed with respect to an outside reference (e.g. cardinal directions, the fixed edges of the play space). Perspective was coded as 1st person if the information displayed to the player reflected the perspective of the player’s game avatar, and as 3rd person if the player’s avatar was visible from an outside perspective during gameplay. Controllable Objects was coded as single if the player had simultaneous direct control of only a single game object/character, and as multiple if the player had simultaneous direct control of multiple game objects (such as the individual soldiers of an army). Win States were assessed by the number of separate victory conditions that the player was able to pursue in the game. Games with a single victory condition (e.g. defeating an opponent, winning a race) were categorized as single, whereas games with multiple victory conditions (e.g. defeat or escape an opponent) were categorized as multiple. Type of Opponent was coded as active opponent if success in the game depended on overcoming an unpredictable agent in the game environment, such as an AI-controlled enemy or obstacle. Static or purely predictable obstacles (e.g. the varied terrain of a platformer game) were not considered active opponents for this purpose. Conversely, Type of Opponent was coded as passive threshold if success was determined by reaching or exceeding a static goal, such as a score threshold, completion time, or distance traveled, without the presence of an active opponent.
Time Pressure was coded as present if the player was required to make decisions or take actions within a certain timeframe or else face negative consequences; these included games with time-sensitive action outcomes (such as scoring systems that take time into account) and games that proceed in real time as the player acts. Time Pressure was coded as absent if the player had an arbitrary amount of time to make decisions (e.g. turn-based games without turn time limits). Primary Interaction was coded as combat if attack or destruction of game entities or objects was the focus of gameplay, and non-combat if such content was not the focus of gameplay. As games were likely to include both combat and non-combat interactions, this factor was coded with respect to which interaction method occurred more frequently in the gameplay footage coded, as described below.
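As an illustration of the seven binary factors just described, a codebook entry for a typical FPS and a typical RTS game might look like the following. This is our own hypothetical sketch (the key names and value labels are ours, not the authors' actual coding sheet):

```python
# Hypothetical coding of two games under the seven binary gameplay factors
# described above (illustrative only; not the authors' actual codebook).
fps_game = {
    "movement": "egocentric",    # navigation relative to the player's view
    "perspective": "1st",        # displayed through the avatar's eyes
    "objects": "single",         # one directly controlled character
    "win_states": "single",      # one victory condition
    "opponent": "active",        # unpredictable AI enemies
    "time_pressure": "present",  # real-time action
    "interaction": "combat",     # attack/destruction is the core loop
}
rts_game = {
    "movement": "allocentric",   # navigation relative to the map
    "perspective": "3rd",        # avatar/units visible from outside
    "objects": "multiple",       # e.g., the individual units of an army
    "win_states": "multiple",    # e.g., conquest, economy, or score victory
    "opponent": "active",        # a rival faction reacts to the player
    "time_pressure": "present",  # the game proceeds in real time
    "interaction": "combat",
}
```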

These gameplay factors were coded by four independent raters based on available gameplay footage of each game (see S1 Table for references to the analyzed footage). These raters were E.T.S. and C.B. Gameplay footage for two games (MultiTask and Smooth Snake) was not readily available—in these cases, raters coded gameplay factors based on direct experience with these games (see S1 Table). In cases where the categorization of a factor was ambiguous (e.g. games featuring some combat and some non-combat gameplay, or games that were only partially timed), raters were instructed to code that factor based on which feature was most prominent in the gameplay footage/gameplay experience analyzed. Inter-rater agreement on the coded gameplay factors was 89.22%.

For each included study, the presence or absence of an active control group was coded. Since many studies included video-game interventions not only as experimental groups, but also as control groups, we further coded each video-game intervention as either experimental or control.

Data analysis

The Effect Size of the Standard Mean Gain [37, 38] was calculated for every cognitive outcome in each intervention group, using the following formula:

ES_sg = (X̄_T2 − X̄_T1) / S_p = Ḡ / (S_g / √(2(1 − r)))

S_p = √((S_T1² + S_T2²) / 2)

where X̄_T1 is the mean at time 1 (i.e., the pre-intervention baseline cognitive assessment), X̄_T2 is the mean at time 2 (i.e., the post-intervention cognitive assessment), Ḡ is the mean gain score (X̄_T2 minus X̄_T1), S_p is the pooled standard deviation (calculated as above), S_g is the standard deviation of the gain score, and r is the correlation between the time 1 and time 2 scores. This is the method of effect-size calculation recommended by Lipsey & Wilson (2001) [38] for pre–post contrasts without group comparison, originally sourced to Becker (1988) [37]. Because we did not have access to individual assessments for the interventions studied, r was assumed to be 0.5. The directionality of each standardized mean gain score (ES_sg) was adjusted so that a positive gain score indicates greater performance gain in the cognitive outcome from pre-intervention (Time 1) to post-intervention (Time 2).
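The standardized mean gain can be computed from summary statistics alone. The following is a minimal illustrative sketch (the function name is ours; r defaults to the 0.5 assumed in the paper):

```python
from math import sqrt

def standard_mean_gain(mean_t1, mean_t2, sd_t1, sd_t2, r=0.5):
    """Becker (1988) standardized mean gain for a pre-post contrast.

    The pooled SD is computed from the time-1 and time-2 SDs; r is the
    (assumed) pre-post correlation, set to 0.5 when raw data are unavailable.
    Note r does not enter this form of the formula, only the SE/weight below.
    """
    s_p = sqrt((sd_t1**2 + sd_t2**2) / 2)  # pooled standard deviation
    return (mean_t2 - mean_t1) / s_p

# Example: a 2-point gain on a test with SD = 4 at both time points
es = standard_mean_gain(100, 102, 4, 4)  # -> 0.5
```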

Effect sizes (ESsg) from all cognitive outcomes within each intervention were averaged into a single effect-size estimate representing the Overall Cognitive Effect Size for that intervention. This approach ensures maximum independence among the effect sizes [39], and ensures that all studies are equally represented in every analysis despite variation in the number of cognitive outcomes reported by each study.

In addition to the Overall Cognitive Effect Size, effect sizes were also calculated separately for four broad cognitive outcomes, viz. Attention/Perception (AP), Higher-order Cognition (HC), Memory (Mem), and Psychosocial (PS). The AP effect size aggregated measures of attention and/or perception, as well as pure reaction-time measures—such measures have shown the most consistent sensitivity to video game training (particularly “action” game training) in past reviews of VGT interventions [2, 24]. The HC effect size aggregated measures of cognitive control, working memory, reasoning, and general executive function—all of which require effortful invocation of top-down control processes, and which are known to be highly inter-related [3–6]. These measures of higher-order cognition have been a strong focus of recent VGT interventions, but a conclusive answer as to the efficacy of VGT in producing transfer to these measures remains contentious [2, 17, 26]. The Memory effect size aggregated measures of recall and recognition for both short-term memory and long-term memory, and was of particular interest considering the possibility of VGT’s efficacy as a cognitive remediation solution in older adults [23, 30, 31]. The PS effect size aggregated measures of psychosocial wellbeing, personality, and daily functioning. A list of measures included in each construct can be found in S2 Table.

After calculating these five effect sizes (Overall, AP, Mem, HC, PS) for each intervention, the standard error (SEsg) and weight (wsg) for each effect size from each intervention was calculated as follows:

SE_sg = √(2(1 − r)/n + ES_sg²/(2n))

w_sg = 1 / SE_sg² = 2n / (4(1 − r) + ES_sg²)

where n is the common sample size at time 1 and time 2, and all other notations are as specified above. Overall ESsg, SEsg, and wsg for each included investigation are presented in S3 Table.
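These two quantities can be sketched as follows (our own function names, an illustrative sketch only). Note that the weight is algebraically just the inverse squared standard error:

```python
from math import sqrt

def se_sg(es, n, r=0.5):
    """Standard error of the standardized mean gain (per Lipsey & Wilson)."""
    return sqrt(2 * (1 - r) / n + es**2 / (2 * n))

def weight_sg(es, n, r=0.5):
    """Inverse-variance weight; algebraically equal to 1 / se_sg(...)**2."""
    return 2 * n / (4 * (1 - r) + es**2)

# The two expressions agree: w = 1 / SE^2
es, n = 0.25, 30
assert abs(weight_sg(es, n) - 1 / se_sg(es, n) ** 2) < 1e-9
```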

All meta-analyses presented below were conducted using mixed-effects meta-regression models, with nuisance variables and regressors of interest modeled as fixed effects, varying by model as described below. The effect of study was modeled as a random effect in all analyses conducted below—this approach allows random slopes and intercepts to be fit on a per-study basis, without treating investigations originating from the same study as independent. Heterogeneity in all analyses was estimated using a restricted maximum likelihood (REML) estimator [40]. These analyses were facilitated by the metafor package for the R statistical environment [41].
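The paper fits its models with the metafor package in R. As a self-contained illustration of the simplest special case (a random-effects pooled mean with no moderators and no nesting by study), here is a DerSimonian–Laird estimator in Python. This is a stand-in, not the authors' method: the paper uses REML estimation, which generally yields a different between-study variance estimate.

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.

    A simplified stand-in for the REML mixed-effects models the paper fits
    with R's metafor package; no moderators, no per-study random effects.
    Returns (pooled_effect, tau2) where tau2 is between-study variance.
    """
    k = len(effects)
    w = [1 / v for v in variances]                        # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_star = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2
```

With homogeneous inputs (Q ≤ k − 1), tau2 is truncated to zero and the estimate reduces to the fixed-effect inverse-variance mean.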

Results

Characteristics of included studies

A total of 7,025 publications were reviewed during this literature search. Of these, 278 publications were identified as potentially of interest after abstract screening. Detailed screening for inclusion criteria reduced this number to 63 studies, representing 118 total interventions (Fig 1), with a combined N of 2,079. A full list of these interventions can be found in S3 Table and S1 List.

Fig 1. Flow diagram of the systematic literature review conducted for this meta-analysis.


After calculating the overall effect of each intervention, these studies were screened for outliers, which were removed on an individual-intervention basis. Six interventions were removed for demonstrating an effect on overall cognition more than 2 standard deviations from the sample mean, resulting in a final k of 112 (N = 1,917; see Fig 2). Of the remaining interventions, 90 reported Attention/Perception outcomes, 54 reported Higher-order Cognition outcomes, 28 reported Memory outcomes, and 18 reported Psychosocial outcomes.
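The ±2 SD outlier screen described above can be sketched as follows (an illustrative sketch under our own naming, not the authors' code):

```python
def screen_outliers(effects, z=2.0):
    """Drop effect sizes more than z sample SDs from the sample mean.

    Mirrors the screening rule described in the text: compute the mean and
    (sample) standard deviation of all overall effects, then exclude any
    intervention whose effect lies more than z SDs from that mean.
    """
    k = len(effects)
    mean = sum(effects) / k
    sd = (sum((e - mean) ** 2 for e in effects) / (k - 1)) ** 0.5
    return [e for e in effects if abs(e - mean) <= z * sd]

# Example: one extreme effect among ten is screened out
kept = screen_outliers([0.1] + [0.2] * 7 + [0.3, 5.0])
```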

Fig 2. Effect size of the standardized mean gain (ESsg) on overall cognition for each included investigation.


Note. Partially shaded bars indicate studies that were excluded from analysis due to demonstrating an ESsg more than 2 standard deviations from the sample mean.

Relationship of genre to game factors, and correlations between game factors

The proportion of games of each coded Genre and Format (Action/Strategy, Long-Form/Casual) that featured each coded Game Factor is presented in Table 1. Notably, all of the games that our raters coded as Long-Form featured Time Pressure, and all of the games that our raters coded as Casual featured only a single win state.

Table 1. Percentage of games in each coded genre/format which featured each coded Game Factor.

           Movement    Perspective  Interaction  Time    Objects  WinStates  Opponent
           (Ego/Allo)  (1st/3rd)    (NC/Com)     (N/Y)   (1/>1)   (1/>1)     (Pass/Act)
Action     66/34%      40/60%       50/50%       12/88%  88/22%   98/2%      44/56%
Strategy   13/87%      7/93%        60/40%       20/80%  27/73%   67/33%     60/40%
Long-Form  71/29%      41/59%       36/64%       0/100%  79/21%   86/14%     24/76%
Casual     22/78%      17/83%       83/17%       40/60%  65/35%   100/0%     91/9%

Inter-correlations between the coded Game Factors are presented below in Table 2.

Table 2. Inter-correlations between coded gameplay factors, genre, and format.

             Movement  Perspective  Interaction  Time     Objects  Win States  Opponent  Genre    Format
Movement     —         .57***       -.33**       -.16     .64***   .13         -.35**    -.46***  -.48***
Perspective  .57**     —            -.39**       .01      .34**    .22         -.26*     -.3*     .24
Interaction  -.33**    -.39**       —            .29*     -.08     .01         .67***    .08      -.45***
Time         -.16      .01          .29*         —        -.07     .13         .33**     .1       .52***
Objects      .64***    .34**        -.08         -.07     —        .17         -.13      -.59***  .15
Win States   .13       .22          .01          .13      .17      —           -.01      -.46***  -.24
Opponent     -.35**    -.26*        .67***       .33**    -.13     -.01        —         .14      -.65***
Genre        -.46***   -.3*         .08          .1       -.59***  -.46***     .14       —        -.05
Format       -.48***   .24          -.45***      -.52***  .15      -.24        -.65***   -.05     —

Note: single asterisk (*) indicates p < .05, double asterisks (**) indicates p < .01, and triple asterisks (***) indicates p < .001.

Assessment of publication bias

Prior to performing our primary analyses, we first assessed publication bias in the sample of studies included in this meta-analysis. A regression test of funnel plot asymmetry [42], using the reported training gains to overall cognition as the variable of interest, demonstrated significant asymmetry, z = 6.41, p < .001, indicating likely publication bias in this sample of studies. A visual inspection of the funnel plot (see Fig 3) suggests that this bias is primarily in the positive direction, overestimating the effectiveness of VGT in relation to overall cognitive outcomes. To account for this, SEsg2 was included as a fixed nuisance term in all analyses. This method functionally extends Egger’s regression test [42] to the multivariate modeling approach utilized in the present analysis, and thereby allows us to control for the detected asymmetrical influence of effect size on our outcome measure.
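The asymmetry logic behind an Egger-style test can be illustrated as a weighted regression of each effect size on its standard error (weights 1/SE²); a slope reliably different from zero suggests funnel-plot asymmetry. This is our own minimal sketch of the idea, not the multivariate extension the paper actually uses (which is fit via metafor in R):

```python
def egger_slope(effects, std_errors):
    """Slope from a weighted least-squares regression of effect size on
    its standard error, with weights 1/SE^2 -- the asymmetry term in an
    Egger-style regression test. A nonzero slope suggests funnel-plot
    asymmetry, consistent with publication bias. (Illustrative sketch;
    no significance test is computed here.)
    """
    w = [1 / se**2 for se in std_errors]
    sw = sum(w)
    xbar = sum(wi * x for wi, x in zip(w, std_errors)) / sw
    ybar = sum(wi * y for wi, y in zip(w, effects)) / sw
    num = sum(wi * (x - xbar) * (y - ybar)
              for wi, x, y in zip(w, std_errors, effects))
    den = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, std_errors))
    return num / den
```

In a bias-free sample the slope is expected to be near zero; effects that grow with their standard errors (small, noisy studies reporting larger gains) produce a positive slope.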

Fig 3. Funnel plot plotting effect sizes of the standardized gains (Overall cognition), by Standard Errors for each included investigation.


Overall effects of video-game training on overall cognition

The mixed meta-regression model predicting standardized mean gain to overall cognition (controlling for the fixed effect of investigation-wise standard error, as described above) demonstrated a moderate and highly significant effect of VGT on overall cognition, g = .25, 95% CI [.12, .39], p < .001. Significant residual heterogeneity was detected in the sample, Q(110) = 349.36, p < .001.

Effects of participant and study quality factors on cognition

To assess the impact of study quality on cognitive outcomes, we next fitted a mixed meta-regression model including the coded participant and study quality factors as fixed-effect terms (with study modeled as a random effect, as described above). Fixed factors included in this model were the gender ratio and average age of investigation participants (% Female, Ave Age), the PEDro score and total number of measures of each investigation (PEDro Total, Num. Measures), a binary variable coding for the presence of an active control group (Active Control), and a binary variable coding for whether or not the examined video-game intervention served as an active control group for another investigation in its study of origin (Study Group). Subject-wise sample variance (SEsg2) was included as a nuisance variable to control for the effects of publication bias. It was necessary to examine the Study Group factor because we treated active control groups that utilized a video-game intervention as groups of interest for the purpose of assessing the impact of their game factors in the present study. Comparing interventions used as training conditions versus interventions used as control conditions allowed us to test for experimenter bias toward games that were expected to produce a positive training outcome, and against games that were not.

The model examining the effects of participant and study quality factors on overall cognitive outcomes was found to be significantly predictive of overall cognition, Qw(8) = 50.72, p < .001, AIC = 267.71. Study Group was the only variable of interest that significantly contributed to this model predicting overall cognitive outcomes, β = -.32, 95% CI [-.48, -.17], p < .001. As the Study Group variable was coded as experimental group = 0 and control group = 1, the negative valence of this result indicates that experimental group investigations produced significantly greater transfer to general cognition than did control group investigations. The full fixed effects of this model are reported in Table 3.

Table 3. Results of three LME meta-analysis models predicting overall cognitive outcomes of VGT interventions.

                      β       95% ci lower  95% ci upper  z      p
Participant & Study Quality Model
[AIC = 258.7]
SEsg2                 2.13    1.36          2.89          5.44   < .001
Ave. Age              -.01    -.01          .001          -1.79  .074
% Female              -.0004  -.01          .005          -.17   .864
Hours Total           .001    -.01          .01           .29    .768
PEDro Total           .02     -.12          .16           .28    .783
Num Measures          -.001   -.02          .02           .04    .965
Active Control        .08     -.23          .39           .49    .627
Study Group           -.32    -.48          -.16          -3.87  < .001
Genre Model
[AIC = 258.87]
SEsg2                 2.17    1.43          2.92          5.71   < .001
Study Group           -.29    -.49          -.1           -2.89  .004
Genre                 -.04    -.22          .13           -.48   .634
Format                -.06    -.23          .12           -.65   .519
Game Factors Model
[AIC = 252.87]
SEsg2                 2.04    1.26          2.81          5.22   < .001
Study Group           -.24    -.45          -.02          -2.1   .036
Movement Style        .42     .1            .74           2.61   .009
Perspective           -.47    -.77          -.18          -3.12  .002
Primary Interaction   .2      -.05          .44           1.57   .117
Time Pressure         .13     -.14          .4            .94    .349
Controllable Objects  -.06    -.29          .18           -.49   .627
Win States            -.02    -.24          .2            -.19   .849
Type of Opponent      -.09    -.37          .18           -.67   .5

Note: ci = confidence intervals; AIC = Akaike’s Information Criterion

Effects of currently used genre distinctions on overall cognition

In order to assess whether the commonly used genre distinctions have differential impacts on the outcomes of video-game-based cognitive training, we next ran a mixed meta-regression model including those genre distinctions as fixed factors. Specifically, we included fixed factors for Genre [strategy, action] as well as for Format [long-form, casual]. As discussed above, we chose to separately categorize action/strategy and long-form/casual in this way due to past research demonstrating that games labeled as “Casual” in fact encompass a wide variety of gameplay styles and genres [3, 5, 6]. In addition to the fixed effect of SEsg2, we also retained Study Group as a fixed control variable, given that it contributed significantly to the participant and study quality factors model.

The model assessing the effect of Genre and Format on overall cognitive outcomes was found to be significantly predictive of overall cognition, Qw(4) = 48.23, p < .001, AIC = 258.87. However, neither Genre, β = -.04, 95% CI [-.22, .14], p = .634, nor Format, β = -.06, 95% CI [-.24, .12], p = .519, significantly contributed to the model. These results suggest that these commonly used genre distinctions did not have differential effects on the overall cognitive outcomes of VGT interventions. The full fixed effects of this model are reported in Table 3.

Effect of gameplay factors on overall cognition

In order to assess whether our coded gameplay factors produce separable effects on the overall cognitive outcome of VGT interventions, we next ran a mixed meta-regression model that included all of our coded gameplay factors as fixed effects (Movement Style, Perspective, Primary Interaction, Time Pressure, Controllable Objects, Win States, and Type of Opponent—see the Data Collection subsection of the Methods for a full description). As with the above model, fixed effects of SEsg2 and Study Group were included as control variables.

As before, this model examining the effects of the coded gameplay factors on overall cognitive outcomes was found to be significantly predictive of overall cognition, Qw(9) = 60.6, p < .001, AIC = 252.87. Both Movement Style, β = .42, 95% CI [.1, .74], p = .009, and Perspective, β = -.47, 95% CI [-.77, -.18], p = .002, contributed significantly to the model. The directionality of these effects indicates that games featuring allocentric movement were more efficacious in producing transfer to overall cognition than were games featuring egocentric movement, and similarly that first-person games were more efficacious in producing transfer than were third-person games. The full fixed effects of this model are reported in Table 3.

Effects of video-game training on cognitive and psychosocial outcomes

To test the possibility that different cognitive domains may benefit differently from video-game-based cognitive interventions, we next ran a series of meta-regression models examining the impact of VGT on specific cognitive outcomes of interest (Attention/Perception, Higher-order Cognition, and Memory) as well as on Psychosocial outcomes. As was the case with the models examining overall cognition, the models examining specific outcome constructs controlled for the effects of investigation-wise standard error, and modeled random effects on a per-study (rather than per-investigation) basis. These models demonstrated significant training effects for Attention/Perception (AP) outcomes, g = .27, 95% CI [.08, .45], p = .004, and Higher-order Cognition (HC) outcomes, g = .31, 95% CI [.1, .51], p = .003. No significant training effects were observed for Memory outcomes, g = -.14, 95% CI [-.36, .06], p = .17, or Psychosocial outcomes, g = .06, 95% CI [-.41, .53], p = .812. See Fig 4 for a visualization of these results.

Fig 4. Plot of effect sizes (g) & 95% confidence intervals for overall cognition and specific cognitive constructs (AP, HC, Mem, PS).


Note. Values labeled as “Corrected” are corrected for publication bias via an extension of Egger’s regression test [42], as defined in the Methods section above.

Effects of currently used genre distinctions on cognitive and psychosocial outcomes

Considering past research that has linked the “Action”, “Strategy”, and “Casual” game genres to different cognitive processes and profiles [2, 3, 5, 6, 13, 18], we next replicated the analysis of game Genre and Format conducted above with respect to the AP, HC, Mem, and PS constructs. These analyses used a model identical to the Genre model applied to Overall Cognition, with each of the four constructs serving as the dependent variable in its respective analysis. This Genre model significantly predicted AP, Qw(4) = 50.16, p < .001, AIC = 220.4, Mem, Qw(4) = 27.62, p < .001, AIC = 27.3, and PS outcomes, Qw(4) = 18.6, p = .001, AIC = 20.3, but neither the Genre nor the Format term contributed significantly to any of those models (see the Genre Model sections of Tables 4, 6 and 7). The HC construct was not significantly predicted by this model, Qw(4) = 4.01, p = .405, AIC = 128.51.

Table 4. Results of three LME meta-analysis models predicting AP outcomes.

Term β 95% CI lower 95% CI upper z p
Genre Model
[AIC = 220.4]
SEsg2 2.24 1.34 3.14 4.89 < .001
Study Group -.4 -.64 -.16 -3.25 .001
Genre .01 -.21 .23 .13 .9
Format -.04 -.25 .18 -.34 .731
Game Factors Model
[AIC = 200.6]
SEsg2 1.93 1.02 2.86 4.13 < .001
Study Group -.2 -.46 .06 -1.54 .124
Movement Style -.08 -.56 .39 -.34 .732
Perspective -.62 -1.02 -.21 -2.95 .003
Primary Interaction .48 .08 .88 2.35 .019
Time Pressure .24 -.15 .62 1.2 .23
Controllable Objects .4 .03 .76 2.1 .036
Win States -.05 -.35 .23 -.41 .682
Type of Opponent -.49 -.89 -.08 -2.36 .019

Note: ci = confidence intervals; AIC = Akaike’s Information Criterion

Table 5. Results of three LME meta-analysis models predicting HC outcomes.

Term β 95% CI lower 95% CI upper z p
Genre Model
[AIC = 128.51]
SEsg2 1.13 -.57 2.84 1.31 .191
Study Group .05 -.27 .36 .32 .749
Genre -.03 -.28 .2 -.32 .748
Format -.19 -.42 .05 -1.55 .12
Game Factors Model
[AIC = 112.47]
SEsg2 .77 -1.04 2.59 .83 .404
Study Group -.04 -.43 .36 -.17 .682
Movement Style 1.37 .81 1.94 4.79 < .001
Perspective -.3 -.78 .17 -1.25 .21
Primary Interaction .52 .14 .9 2.68 .007
Time Pressure -.03 -.42 .37 -.13 .896
Controllable Objects -1.01 -1.45 -.56 -4.46 < .001
Win States -.1 -.43 .23 -.56 .574
Type of Opponent -.08 -.48 .31 -.4 .69

Note: ci = confidence intervals; AIC = Akaike’s Information Criterion

Effect of gameplay factors on cognitive and psychosocial outcomes

To assess the degree to which our coded gameplay factors are predictive of differential cognitive-training outcomes, and to compare the efficacy of the game-factors approach against the genre approach, we next replicated the Game Factors analyses reported above with respect to the AP, HC, Mem, and PS constructs. As above, the Game Factors model was identical to the model applied previously to overall cognitive outcomes, applied here to the AP, HC, Mem, and PS constructs in four separate analyses.

The Game Factors model significantly predicted the AP outcome construct, Qw(9) = 75.16, p < .001, AIC = 200.6, with the reduced AIC relative to the Genre model (AIC = 220.4) indicating a better fit for the Game Factors model [43, 44]. The Perspective, Primary Interaction, Controllable Objects, and Type of Opponent terms all contributed significantly to the model (see Table 4). The directionality of these effects indicates that the first-person, combat, multiple-controllable-objects, and passive-threshold features were associated with greater transfer to attention and perception measures than the third-person, non-combat, single-controllable-object, and active-opponent features, respectively.

Similar to the above, the Game Factors model also significantly predicted the HC outcome construct, Qw(9) = 30.2, p < .001, AIC = 112.47. Once again, AIC fell from the Genre model (AIC = 128.51) to the Game Factors model, indicating a better fit for the Game Factors model. Significant contributors to this model included the Movement Style, Primary Interaction, and Controllable Objects terms (see Table 5). The directionality of the observed effects indicates that the third-person, combat, and single-controllable-object features were related to greater HC outcomes than the first-person, non-combat, and multiple-controllable-objects features, respectively.
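
The AIC comparisons above can be made more interpretable by converting a pair of AICs into Akaike weights, i.e., the relative support for each candidate model [43, 44]. The short Python sketch below is our illustration (not an analysis performed in the paper), applied to the HC-model AICs reported above.

```python
import math

def akaike_weights(aics):
    """Relative support for each model given its AIC: smaller AIC is
    better, and the weights sum to 1 across the candidate set."""
    best = min(aics)
    # exp(-delta/2) converts each AIC difference to a relative likelihood
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# HC construct: Genre model (AIC = 128.51) vs. Game Factors model (112.47)
weights = akaike_weights([128.51, 112.47])
```

On these numbers, the Game Factors model carries essentially all of the relative support (weight > .999), reinforcing the better-fit conclusion drawn from the raw AIC comparison.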

As with the AP and HC constructs, the Game Factors model also significantly predicted the Mem outcome construct, Qw(9) = 27, p = .001, AIC = 39.45. However, this significant relationship appears to be driven entirely by the control terms included in the model—none of the gameplay factors significantly contributed to this model (see Table 6).

Table 6. Results of three LME meta-analysis models predicting memory outcomes.

Term β 95% CI lower 95% CI upper z p
Genre Model
[AIC = 27.3]
SEsg2 6.37 3.88 8.86 5.02 < .001
Study Group .01 -.34 .34 .03 .977
Genre .04 -.21 .28 .3 .767
Format -.14 -.44 .15 -.97 .334
Game Factors Model
[AIC = 39.45]
SEsg2 6.68 3.99 9.36 4.88 < .001
Study Group -.16 -.58 .26 -.75 .451
Movement Style .18 -.39 .75 .62 .538
Perspective -.18 -.68 .31 -.72 .472
Primary Interaction -.01 -.53 .51 -.05 .963
Time Pressure -.18 -.7 .35 -.66 .511
Controllable Objects -.03 -.52 .46 -.12 .903
Win States .01 -.41 .44 .07 .947
Type of Opponent .09 -.46 .64 .32 .75

Note: ci = confidence intervals; AIC = Akaike’s Information Criterion

Lastly, the Game Factors model did not significantly predict the PS construct, Qw(9) = 14.24, p = .114, AIC = 25.94, and again none of the included game factors contributed significantly to this model (see Table 7).

Table 7. Results of three LME meta-analysis models predicting PS outcomes.

Term β 95% CI lower 95% CI upper z p
Genre Model
[AIC = 20.3]
SEsg2 5.88 2.84 8.92 3.79 < .001
Study Group .26 -.65 1.17 .56 .576
Genre -.6 -1.29 .1 -1.68 .093
Format -.59 -1.59 .41 -1.16 .244
Game Factors Model
[AIC = 25.94]
SEsg2 5.18 1.22 9.36 2.55 .011
Study Group -.77 -2.61 1.08 -.81 .416
Movement Style 1.51 -1.85 4.87 .87 .379
Perspective -.23 -1.64 1.08 -.4 .688
Primary Interaction .46 -1.06 1.96 .6 .551
Time Pressure .11 -1.96 2.18 .1 .918
Controllable Objects -.67 -3.22 1.87 -.52 .604
Win States .2 -1.3 1.69 .26 .797
Type of Opponent -.31 -2.11 1.49 -.34 .735

Note: ci = confidence intervals; AIC = Akaike’s Information Criterion

Discussion

Overall effectiveness of video-game training

The results of this meta-analysis indicate that, broadly, cognitive interventions utilizing video games are effective at producing gains in general cognition, even after taking publication bias into account. We must, however, emphasize that significant evidence of publication bias was found in this body of work, with the published studies reviewed strongly favoring positive training outcomes even after accounting for anomalously large effects (see Fig 3). This is a known problem for interventions, including video-game training [2], and it needs to be addressed in order to soberly assess the efficacy of such interventions as cognitive training. While the results presented here were corrected for publication bias, these findings should be interpreted with the understanding that the evident publication bias may have magnified the observed effects.

There was also evidence of selection bias with regard to the games used as training and/or the cognitive assessments selected as transfer measures. We treated all VG interventions as interventions of interest for our game-factors analysis, even those designated as control conditions in their study of origin. Importantly, “control” condition games demonstrated less transfer to overall cognitive outcomes than did games examined as interventions of interest, despite a diverse range of games being presented as active control conditions (see S2 Table). Considering that the “control” games examined in the current meta-analysis include the vast majority of our coded gameplay factors (see Table 1), the diverse genre labels applied to these games in their studies of origin, and the varied cognitive demands of games even within the same genre [3, 7], a systematic difference in gameplay profile or cognitive demands between experimental and control games is an unlikely explanation of their difference in efficacy. The likely explanation is one of design and task selection: the control games utilized in a given experiment are, by definition, those that are as similar to the experimental game as possible while retaining differences that the designers of the intervention expect will contribute to differential outcomes on their measures of interest. This finding essentially reflects successful selection of video games with differing gameplay profiles/cognitive demands to test specific experimental questions in the published literature. However, this approach does make disentangling the impact of specific gameplay factors, as we attempt in this meta-analysis, problematic, in that it is impossible to isolate single-factor differences between games that feature very distinct gameplay profiles. Hence the present meta-analysis necessarily considered all video-game interventions as interventions of interest, regardless of their possible status as control conditions.

After considering (and, where appropriate, correcting for) the above caveats, the present meta-analysis still shows a significant and moderate effect of video-game training on cognitive improvement (g = .25, p < .001). While we cannot state with certainty that we have fully corrected for the observed publication bias, and we must acknowledge that these results are necessarily uncontrolled due to constraints in previous study designs, we are confident that the observed effects are not spurious and that training with video games reliably produces transfer to cognition.
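
The overall pooled effect discussed here was estimated with mixed-effects meta-regression (via the metafor package [41]); as a simplified illustration of the underlying random-effects logic, the Python sketch below implements the classic DerSimonian-Laird pooled estimate on hypothetical effect sizes and variances. It is a pedagogical sketch, not the model actually fit in this paper.

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled effect: estimate the
    between-study variance tau^2 from Cochran's Q, then re-weight
    each study by 1 / (v_i + tau^2)."""
    w = [1 / v for v in variances]
    # fixed-effect (inverse-variance) estimate, used to compute Q
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # method-of-moments tau^2, floored at 0
    w_re = [1 / (v + tau2) for v in variances]
    return sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
```

When the studies agree exactly, tau^2 collapses to zero and the estimate reduces to the fixed-effect pooled mean; heterogeneity inflates tau^2 and pulls the study weights toward equality.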

Effect of specific gameplay factors on cognitive outcomes

Our primary interest in this meta-analysis was to examine the influence of specific gameplay factors on cognitive outcomes in video game training, since not all games are created equal. With regards to overall cognition, the present meta-analysis found that both perspective and movement style significantly influenced training outcomes. Specifically, training with games featuring a first-person perspective demonstrated greater cognitive transfer than those featuring a third-person perspective, and games featuring allocentric movement demonstrated greater cognitive transfer than those featuring egocentric movement.

While games utilizing these gameplay features are common in the field (“first-person-shooter” games utilize a first-person perspective, and “real-time-strategy” and many “platformer” and “puzzle” games utilize allocentric movement), neither feature has been theorized to be particularly facilitative of cognitive training. Our findings here may reflect other gameplay features which often co-occur with these two. Specifically, the correlational analysis presented in Table 2 shows that the Movement and Perspective factors share a common pattern of correlation with the Interaction, Objects, and Opponents factors. It may well be that the gameplay features coded by these other factors partially drive the finding that Movement and Perspective impact overall cognitive outcomes. A further investigation of the complex relationships among these gameplay factors, including possible mediating and moderating effects, is warranted, and is a necessary step in further refining the game-factors approach that we propose in this manuscript.

Interestingly, no single game examined in the present meta-analysis featured both an allocentric movement style and a first-person perspective, though in theory these results suggest that a game featuring both would be a candidate for an effective cognitive intervention, if we assume the effects of these two features are additive or synergistic. While they have not been studied within the context of cognitive interventions (insofar as the research reviewed within this meta-analysis), games that feature movement between pre-defined static first-person views, such as Five Nights at Freddy’s [45], I’m on Observation Duty [46], or Observation [47], include both gameplay features and would be ideal for testing this hypothesis.

Attention & perception outcomes

Past literature has reported transfer to attention and perception measures from “action” games as one of the field’s more robust findings, typically attributed to the strong emphasis such games place on rapidly identifying and dispatching threats [2, 5, 25, 48]. In terms of the game factors used in the present meta-analysis, the above definition of “action” games translates to games that are combat-focused, feature time pressure, and have an active opponent. The current meta-analysis confirmed that combat-focused games were more effective in facilitating transfer to attention and perception than were non-combat-focused games, partially validating past theorization. Additionally, games featuring a first-person perspective—one of the defining characteristics of the commonly studied “first-person shooter” subgenre of “action” games—demonstrated greater transfer than games featuring a third-person perspective. This finding agrees with the body of work that has found transfer to AP measures specifically resulting from such “first-person shooter” games [e.g. 6, 11, 49, 50]. However, games with passive thresholds or objectives better facilitated transfer to AP outcomes than did games with active opponents, and the AP construct was insensitive to the presence or absence of time pressure. These findings are somewhat surprising considering that “Action” games—defined in part by the need to rapidly identify the actions of unpredictable opponents—have consistently produced transfer to measures of attention and perception in past publications [2, 18, 25]. These findings suggest that neither active opponents nor time pressure are necessary elements of gameplay to drive attention and perception outcomes, at least when considered in isolation.

Unexpectedly, games featuring multiple controllable objects produced greater transfer to attention and perception outcome measures than did games featuring only a single controllable object. Simultaneous control of multiple objects is a hallmark of the “real-time strategy” subgenre of “strategy” games, and that feature has been theorized to drive the transfer to working memory and cognitive control observed in some intervention studies using such games [18]. The facilitation of transfer to attention and perception observed here is therefore unexpected, but may be attributable to the additional attentional demands of simultaneously monitoring several gameplay objects rather than a single one.

Higher-order cognition outcomes

The higher-order cognition construct used in this meta-analysis aggregated across measures of executive control, working memory updating, and reasoning. Transfer to this construct was facilitated by games featuring allocentric movement (compared to egocentric movement), a primary combat interaction (compared to primarily non-combat games), and only a single controllable object (compared to multiple controllable objects). Only two of the games in the interventions examined by the present meta-analysis featured all three of these game factors: the casual action games Centipede [51] and Zaxxon [52].

Importantly, some gameplay features that have commonly been theorized to drive transfer to higher-order cognitive functions such as reasoning and working memory were not indicated to be particularly efficacious by the results of this meta-analysis. As was the case with transfer to the AP construct, transfer to the HC construct was insensitive to the presence or absence of time pressure, which has been invoked as an important factor driving cognitive transfer from both “Action” and “Strategy” game interventions [18, 24]. In the body of work examining the “real-time-strategy” subgenre of “strategy” games, time pressure coupled with simultaneous control of multiple game objects has been theorized to be an effective driver of transfer [18, 19]—though again our results do not support that conclusion, and indeed suggest that games featuring only a single controllable object are more effective in producing cognitive transfer. Conversely, combat games were more effective in producing transfer to the HC construct than were non-combat games, which is a core prediction of the action-game training literature [2, 25]. It should be noted that the presence/absence of combat as a primary interaction and the presence/absence of time pressure were significantly correlated in the sample of studies examined by this meta-analysis (see Table 2), so the significant influence of combat on transfer to HC measures may also capture the influence of the time pressure present in many combat-focused games.

Memory & psychosocial outcomes

Our coded game factors were not observed to impact transfer to memory or psychosocial outcome measures, and indeed video-game training was not found to reliably transfer to either outcome construct after accounting for publication bias. However, it is important to note that only 25% (28) and 16% (18) of the studies examined by this meta-analysis assessed memory and psychosocial outcomes, respectively. While we did not find evidence of transfer to either of these constructs, these null results may reflect the relative paucity of studies reporting memory or psychosocial outcomes, coupled with the highly varied intervention methodologies of the cognitive-intervention literature [2] and/or the relatively coarse outcome constructs examined in the present meta-analysis.

Relevance of existing genre definitions and recommendations for future research

The “action” versus “strategy” categorization utilized in this meta-analysis proved ineffective in distinguishing cognitive outcomes of VGT-based cognitive interventions. As can be seen in Table 1, these two genres featured a heterogeneous mix of gameplay features, reminiscent of the wide array of gameplay styles in the “action” game literature discussed above in our literature review, so it is perhaps unsurprising that the distinction between these two genres was not reflected in differential cognitive outcomes. It should be noted, however, that the binary “action” versus “strategy” categorization utilized in this study is an exaggeration of how those terms have been classically applied. While both the label of “action” (and, to a lesser extent, “strategy”) have been applied to a disparate array of games, past studies have not purported that these two genres could broadly classify all games, with various other labels applied to different sections of the gameplay space (e.g., “puzzle” or “simulation”) [2, 53–56]. Further, those studies that do focus on the effect of “action” or “strategy” games often utilize specific subgenre terms that are more suggestive of the overall cognitive demands of gameplay (e.g., “first-person shooter” or “real-time strategy”, rather than “action” or “strategy”) [2, 14, 19, 57]. It is conceivable that a more specific categorization of game genre would result in more consistent training outcomes and/or training outcomes more reflective of genre-standard gameplay features. However, considering that a) the division between game genres has become increasingly irrelevant as the art form has progressed toward hybrid-genre and other novel formats [7], and b) games within the same genre can have very different cognitive demands [3, 4], even more specific genre classifications have limited utility. We therefore recommend that future research present a thorough description of the gameplay features of a training game, in lieu of classification by genre.

This proposed approach presents several benefits. Firstly, an understanding of specific gameplay features can aid the field in interpreting the mixed findings that VGT studies have produced thus far. Secondly, specification of gameplay factors can aid the design of future studies, both by isolating specific gameplay features that improve intended outcomes and by enabling more rigorous control groups that are matched with the training group on potentially confounding gameplay features. The gameplay classification system presented in this analysis is more rigorous than any previously applied in the field, but it also represents a first attempt at such a system. In particular, the approach taken in this manuscript is limited by its a priori definition of game factors and by the lack of consideration of conjunctive effects of these features. Further development of thoroughly researched classification systems describing the specific gameplay demands of VGT interventions—and particularly how those gameplay demands relate to cognitive demands—would be a boon to future video-game-related research.

The “Long-Form” versus “Casual” game distinction likewise appears to have little relevance to the observed training outcomes. “Casual” games do not appear to be a distinct genre featuring a distinct combination of gameplay features, but rather a format of video game that can apply to games of multiple genres [3]. The current meta-analysis did not find a meaningful difference in cognitive transfer between studies utilizing “Long-Form” versus “Casual” games, suggesting this may not be a useful distinction with regard to the cognitive impact of training paradigms using these formats. On the other hand, this lack of distinct effects between “Long-Form” and “Casual” games may prove a useful tool in the design of future video-game-based interventions. Specifically, assuming a given investigation is not concerned with long-term memory or psychosocial outcomes, shorter-duration Casual games can be utilized to reduce the consecutive time required for training sessions, allowing for more design flexibility in participant and experiment scheduling, as past research from our group has leveraged [5, 6]. Note that the authors make no claim regarding the overall training duration necessary to produce cognitive transfer, as we found no significant relation between training duration and cognitive outcome in the analyses conducted; our statement specifically pertains to the utility of “Casual” games within individual training sessions.

Registration

This meta-analytic review’s protocol and analysis plan was pre-registered at https://osf.io/apmk5. We followed the PRISMA-P checklist when preparing the protocol, and we followed PRISMA reporting guidelines for the final report (see S1 File for PRISMA checklist). The meta-analytic data are shared at https://osf.io/6j792/.

Supporting information

S1 Method. Full search terms for literature search.

(DOCX)

S1 Table. Gameplay factors of games featured in included studies.

(DOCX)

S2 Table. Outcome measures categorized by construct, with representative studies.

(DOCX)

S3 Table. Characteristics of included video-game training studies.

(DOCX)

S1 List. References for included studies.

(DOCX)

S1 File. PRISMA checklist.

(DOCX)

Data Availability

All relevant data are within the paper and its Supporting information file. Additional data are shared at https://osf.io/6j792/.

Funding Statement

This research was supported by a grant from the National Institute on Aging to CB (titled “Strategic Training to Optimize Neurocognitive Functions in Older Adults” under Award # R56AG060052; PI: Basak). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Powers KL, Brooks PJ, Aldrich NJ, Palladino MA, Alfieri L. Effects of video-game play on information processing: A meta-analytic investigation. Psychonomic Bulletin & Review. 2013. Dec; 20(6):1055–79. doi: 10.3758/s13423-013-0418-z [DOI] [PubMed] [Google Scholar]
  • 2.Simons DJ, Boot WR, Charness N, Gathercole SE, Chabris CF, Hambrick DZ, et al. Do “brain-training” programs work? Psychological Science in the Public Interest. 2016. Oct; 17(3):103–86. doi: 10.1177/1529100616661983 [DOI] [PubMed] [Google Scholar]
  • 3.Baniqued PL, Lee H, Voss MW, Basak C, Cosman JD, DeSouza S, et al. Selling points: What cognitive abilities are tapped by casual video games? Acta Psychologica. 2013. Jan; 142(1):74–86. doi: 10.1016/j.actpsy.2012.11.009 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Baniqued PL, Kranz MB, Voss MW, Lee H, Cosman JD, Severson J, et al. Cognitive training with casual video games: Points to consider. Frontiers in Psychology. 2014. Jan 7; 4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Ray NR, O’Connell MA, Nashiro K, Smith ET, Qin S, Basak C. Evaluating the relationship between white matter integrity, cognition, and varieties of video game learning. Restorative neurology and neuroscience. 2017; 35(5):437–56. doi: 10.3233/RNN-160716 [DOI] [PubMed] [Google Scholar]
  • 6.Smith ET, Bhaskar B, Hinerman A, Basak C. Past Gaming Experience and Cognition as Selective Predictors of Novel Game Learning Across Different Gaming Genres. Frontiers in psychology. 2020. May 26; 11:786. doi: 10.3389/fpsyg.2020.00786 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Dale G, Green CS. The Changing Face of Video Games and Video Gamers: Future Directions in the Scientific Study of Video Game Play and Cognitive Performance. Journal of Cognitive Enhancement. 2017; 1:280–294. [Google Scholar]
  • 8.Green CS, Bavelier D. Action video game modifies visual selective attention. Nature. 2003. May 29; 423(6939):534–7. doi: 10.1038/nature01647 [DOI] [PubMed] [Google Scholar]
  • 9.Green CS, Bavelier D. Enumeration versus multiple object tracking: The case of action video game players. Cognition. 2006. Aug; 101(1):217–45. doi: 10.1016/j.cognition.2005.10.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Green CS, Bavelier D. Effect of action video games on the spatial distribution of visuospatial attention. Journal of Experimental Psychology: Human Perception and Performance. 2006. Dec; 32(6):1465–78. doi: 10.1037/0096-1523.32.6.1465 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Green CS, Bavelier D. Learning, attentional control, and action video games. Current biology: CB. 2012. Mar 20; 22(6):R197–206. doi: 10.1016/j.cub.2012.02.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Green CS, Sugarman MA, Medford K, Klobusicky E, Bavelier D. The effect of action video game experience on task-switching. Computers in Human Behavior. 2012. May; 28(3):984–94. doi: 10.1016/j.chb.2011.12.020 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Blacker KJ, Curby KM, Klobusicky E, Chein JM. Effects of action video game training on visual working memory. Journal of Experimental Psychology: Human Perception and Performance. 2014. Oct; 40(5):1992–2004. doi: 10.1037/a0037556 [DOI] [PubMed] [Google Scholar]
  • 14.Achtman RL, Green CS, Bavelier D. Video games as a tool to train visual skills. Restorative Neurology and Neuroscience. 2008; 26(4–5):435–46. [PMC free article] [PubMed] [Google Scholar]
  • 15.Tanaka S, Ikeda H, Kasahara K, Kato R, Tsubomi H, Sugawara SK, et al. Larger right posterior parietal volume in action video game experts: a behavioral and voxel-based morphometry (VBM) study. PloS one. 2013. Jun 11; 8(6):e66998. doi: 10.1371/journal.pone.0066998 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Kühn S, Gleich T, Lorenz RC, Lindenberger U, Gallinat J. Playing Super Mario induces structural brain plasticity: Gray matter changes resulting from training with a commercial video game. Molecular Psychiatry. 2014. Feb; 19(2):265–71. doi: 10.1038/mp.2013.120 [DOI] [PubMed] [Google Scholar]
  • 17.Bisoglio J, Michaels TI, Mervis JE, Ashinoff BK. Cognitive enhancement through action video game training: Great expectations require greater evidence. Frontiers in Psychology. 2014. Feb 19; 5. doi: 10.3389/fpsyg.2014.00136 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Basak C, Boot WR, Voss MW, Kramer AF. Can training in a real-time strategy video game attenuate cognitive decline in older adults? Psychology and Aging. 2008. Dec; 23(4):765–77. doi: 10.1037/a0013494 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Glass BD, Maddox WT, Love BC. Real-time strategy game training: emergence of a cognitive flexibility trait. PloS one. 2013. Aug 7; 8(8):e70350. doi: 10.1371/journal.pone.0070350 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Strenziok M, Parasuraman R, Clarke E, Cisler DS, Thompson JC, Greenwood PM. Neurocognitive enhancement in older adults: Comparison of three cognitive training tasks to test a hypothesis of training transfer in brain connectivity. NeuroImage. 2014. Jan 15; 85(Part 3):1027–39. [DOI] [PubMed] [Google Scholar]
  • 21.Oei AC, Patterson MD. Enhancing cognition with video games: a multiple game training study. PloS one. 2013; 8(3):e58546. doi: 10.1371/journal.pone.0058546 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Oei AC, Patterson MD. Playing a puzzle video game with changing requirements improves executive functions. Computers in Human Behavior. 2014. Aug; 37:216–28. [Google Scholar]
  • 23.Basak C, Qin S, O’Connell MA. Differential effects of cognitive training modules in healthy aging and mild cognitive impairment: A comprehensive meta-analysis of randomized controlled trials. Psychology and Aging. 2020. Mar; 35(2):220–49. doi: 10.1037/pag0000442 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Bavelier D, Green CS, Pouget A, Schrater P. Brain Plasticity Through the Life Span: Learning to Learn and Action Video Games. Annual Review of Neuroscience. 2012. Jul; 35:391–416. doi: 10.1146/annurev-neuro-060909-152832 [DOI] [PubMed] [Google Scholar]
  • 25.Bediou B, Adams DM, Mayer RE, Tipton E, Green CS, Bavelier D. Meta-analysis of action video game impact on perceptual, attentional, and cognitive skills. Psychological Bulletin. 2018. Jan; 144(1):77–110. doi: 10.1037/bul0000130 [DOI] [PubMed] [Google Scholar]
  • 26.Boot WR, Blakely DP, Simons DJ. Do action video games improve perception and cognition? Frontiers in Psychology. 2011. Sep 13; 2. doi: 10.3389/fpsyg.2011.00226 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Bonnechère B, Langley C, Sahakian BJ. The use of commercial computerized cognitive games in older adults: a meta-analysis. Scientific reports. 2020. Sep 17; 10(1):15276. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Lampit A, Hallock H, Valenzuela M. Computerized cognitive training in cognitively healthy older adults: a systematic review and meta-analysis of effect modifiers. PLoS medicine. 2014. Nov 18; 11(11):e1001756. doi: 10.1371/journal.pmed.1001756 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Mayer RE, Parong J, Bainbridge K. Young adults learning executive function skills by playing focused video games. Cognitive Development. 2019. Jan; 49:43–50. [Google Scholar]
  • 30.Wang G, Zhao M, Yang F, Cheng LJ, Lau Y. Game-based brain training for improving cognitive function in community-dwelling older adults: A systematic review and meta-regression. Archives of Gerontology and Geriatrics. 2021. Jan; 92. doi: 10.1016/j.archger.2020.104260 [DOI] [PubMed] [Google Scholar]
  • 31.Toril P., Reales J. M., & Ballesteros S. (2014). Video game training enhances cognition of older adults: A meta-analytic study. Psychology and Aging, 29, 706–716. doi: 10.1037/a0037507 [DOI] [PubMed] [Google Scholar]
  • 32.Maher C. G., Sherrington C., Herbert R. D., Moseley A. M., & Elkins M. (2003). Reliability of the PEDro scale for rating quality of randomized controlled trials. Physical Therapy, 83, 713–721. [PubMed] [Google Scholar]
  • 33.Matos A. P., & Pegorari M. S. (2020). How to Classify Clinical Trials Using the PEDro Scale?. Journal of lasers in medical sciences, 11(1), 1–2. doi: 10.15171/jlms.2020.01 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Li H., Li J., Li N., Li B., Wang P., & Zhou T. (2011). Cognitive intervention for persons with mild cognitive impairment: A meta-analysis. Ageing Research Reviews, 10, 285–296. doi: 10.1016/j.arr.2010.11.003 [DOI] [PubMed] [Google Scholar]
  • 35.Mewborn C. M., Lindbergh C. A., & Stephen Miller L. (2017). Cognitive interventions for cognitively healthy, mildly impaired, and mixed samples of older adults: A systematic review and meta-analysis of randomized-controlled trials. Neuropsychology Review, 27, 403–439. doi: 10.1007/s11065-017-9350-8 [DOI] [PubMed] [Google Scholar]
  • 36.Miyake A., Friedman N. P., Emerson M. J., Witzki A. H., & Howerter A. (2000). The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology, 41(1), 49–100. doi: 10.1006/cogp.1999.0734 [DOI] [PubMed] [Google Scholar]
  • 37.Becker BJ. Synthesizing standardized mean-change measures. British Journal of Mathematical and Statistical Psychology. 1988. Nov; 41(2):257–78. [Google Scholar]
  • 38.Lipsey MW, Wilson DB. Practical meta-analysis. Sage Publications; 2001. [Google Scholar]
  • 39.Bangert-Drowns RL, et al. Effects of Frequent Classroom Testing. 1986 Apr 1.
  • 40.Viechtbauer W, López-López JA, Sánchez-Meca J, Marín-Martínez F. A comparison of procedures to test for moderators in mixed-effects meta-regression models. Psychological Methods. 2015. Sep; 20(3):360–74. doi: 10.1037/met0000023 [DOI] [PubMed] [Google Scholar]
  • 41.Viechtbauer W. Conducting Meta-Analyses in R with the metafor Package. Journal of Statistical Software. 2010; 36(3):1–48. [Google Scholar]
  • 42.Egger M., Davey Smith G., Schneider M., & Minder C. Bias in meta-analysis detected by a simple, graphical test. British Medical Journal. 1997; 315(7109):629–634. doi: 10.1136/bmj.315.7109.629 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Akaike H. Information theory and an extension of the maximum likelihood principle. In Petrov B.N. and Csaki F. (Eds.), Second international symposium on information theory. 1973, 267–281. Budapest: Academia Kiado. [Google Scholar]
  • 44.Akaike H. Factor analysis and AIC. Psychometrika. 1987; 52, 317–332. [Google Scholar]
  • 45.Cawthon S. Five Nights at Freddy’s [Computer Software]. 2014. Clickteam.
  • 46.I’m on Observation Duty [Computer Software]. 2018. Notovia.
  • 47.Observation [Computer Software]. 2019. Devolver Digital.
  • 48.Cohen JE, Green CS, Bavelier D. Training visual attention with video games: Not all games are created equal. In: O’Neil F, Perez RS, editors. Computer Games and Team and Individual Learning. Elsevier. 2008; 205–277. [Google Scholar]
  • 49.Bailey K & West R. The effects of an action video game on visual and affective information processing. Brain Research. 2013; 1504, 35–46. doi: 10.1016/j.brainres.2013.02.019 [DOI] [PubMed] [Google Scholar]
  • 50.Cohen JE, Green CS, Bavelier D. Training visual attention with video games: Not all games are created equal. Computer Games and Team and Individual Learning. 2008.
  • 51.Orosy-Fildes C, Allan RW. Psychology of computer use: XII. Videogame Play: Human Reaction Time to Visual Stimuli. Perceptual and Motor Skills. 1987; 69, 243–247. [Google Scholar]
  • 52.Dorval M, Pépin M. Effect of Playing a Video Game on a Measure of Spatial Visualization. Perceptual and Motor Skills. 1986; 62, 159–162. doi: 10.2466/pms.1986.62.1.159 [DOI] [PubMed] [Google Scholar]
  • 53.Huang K.T. (2020). Exergaming Executive Functions: An Immersive Virtual Reality-Based Cognitive Training for Adults Aged 50 and Older. Cyberpsychology, Behavior and Social Networking, 23(3), 143–149. doi: 10.1089/cyber.2019.0269 [DOI] [PubMed] [Google Scholar]
  • 54.Nelson R. A., & Strachan I. (2009). Action and puzzle video games prime different speed/accuracy tradeoffs. Perception, 38(11), 1678–1687. doi: 10.1068/p6324 [DOI] [PubMed] [Google Scholar]
  • 55.West R., Swing E. L., Anderson C. A., & Prot S. (2020). The Contrasting Effects of an Action Video Game on Visuo-Spatial Processing and Proactive Cognitive Control. International Journal of Environmental Research and Public Health, 17(14). doi: 10.3390/ijerph17145160 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Wu S., & Spence I. (2013). Playing shooter and driving videogames improves top-down guidance in visual search. Attention, Perception, & Psychophysics, 75(4), 673–686. doi: 10.3758/s13414-013-0440-2 [DOI] [PubMed] [Google Scholar]
  • 57.Boot WR, Kramer AF, Simons DJ, Fabiani M, Gratton G. The effects of video game playing on attention, memory, and executive control. Acta Psychologica. 2008. Nov; 129(3):387–98. doi: 10.1016/j.actpsy.2008.09.005 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Alessandra S Souza

31 May 2022

PONE-D-22-06647

A Game-Factors Approach to Cognitive Benefits from Video-Game Training: A Meta-Analysis

PLOS ONE

Dear Dr. Basak,

Thank you for submitting your manuscript to PLOS ONE. I am sorry for the relative delay in the evaluation of your paper. I had initially found two reviewers to assess the paper, but one reviewer fell through. I then had to search for a new reviewer. Gladly, I found one more expert who agreed to assess the paper, and I am glad I did, because this expert provided excellent comments on how to improve your manuscript. I would like to take this opportunity to thank both reviewers for doing a great job in assessing the paper and providing many constructive comments. As you will see from the comments appended below (and included as an attachment to this email), the reviewers have made a number of comments regarding your proposed classification and your analysis method, both in terms of needed clarifications but also with regard to improvements of the analytical methods used. I strongly recommend that you carefully consider each of their comments and helpful suggestions. I will not reiterate all of their points here; I will only make a few remarks that arose from my own reading of the paper (see below). Overall, I believe that there is a clear revision path that may lead to publication, so I am inviting a major revision. Note that this invitation does not guarantee acceptance of the paper, and hence you should make your best effort to address all concerns of the reviewers (as well as the points I will present below).

My own comments:

First, similarly to Reviewer 2, I thought that the evaluation of the paper was compromised by some accessibility issues regarding the preregistration and the data. With regards to the preregistration, the project is private and hence we could not read the preregistration. The text also does not indicate which hypotheses were preregistered nor which analyses are part of the preregistration and which ones were exploratory. The text should be revised to clearly identify these.
With regards to the data, it seemed to be some sort of image of an excel file, which unfortunately is not useful for sharing data since one cannot reuse the data (unless one retypes all of the information visually available). I wanted to explore the data-set, but one cannot download it, view it closely, or click on it. Please be aware that sharing of data should follow the FAIR principles (findable, accessible, interoperable, and reusable). These two points will certainly be critical in the subsequent evaluation of your paper, if you decide to submit a revision.

Second, it struck me that most of the factors included in the analyses of the games that were not significant still generated effects that went in the same direction as the significant ones, often with similar mean size (but sometimes even larger), although lacking significance. Given that these cases were also often the cases with fewer studies (small k), it seems to me that the difference between significant and non-significant is probably not very informative. This should be taken into consideration. Can one be certain that one factor is more conducive than the other in generating a transfer effect? Please consider this issue carefully. One possible approach to mitigate this issue would be to perform a Bayesian meta-analysis and derive uncertainty estimates. Alternatively, you should provide the reader with some sort of evidence of which differences observed with the current analysis are meaningful, and provide the caveats explaining the ones that are not yet credible (for example, due to the lack of sufficient studies included).

Thank you for considering PLOS ONE as the outlet for your work.

Please submit your revised manuscript by Jul 15 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Alessandra S. Souza, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Review of A Game-Factors Approach to Cognitive Benefits from Video-Game Training: A Meta-Analysis by Smith and Basak

Overall evaluation: Meta-analyses are tedious work and are usually criticized for the many decisions the authors take when working on inclusion and exclusion criteria. I appreciate the work invested by the authors and, as usual, have some remarks regarding decisions taken by the authors which are, in my reading, not sufficiently motivated.

1. Why is the Physiotherapy Evidence Database chosen, and were studies excluded based on this assessment?

2. Why are age and percentage female chosen as moderators, as well as publication characteristics? I think that the overall moderator choice deserves better motivation.

3. On the gameplay factors: again, I miss a motivated choice of why those factors (Movement Style (egocentric vs. allocentric method of spatial navigation), Perspective (1st-person vs. 3rd-person viewing perspective), number of Controllable Objects (single vs. multiple), number of Win States (single win state vs. multiple win states), Type of Opponent featured by the game (active opponent vs. passive threshold), the presence of Time Pressure (present vs. absent), and the Primary Interaction method of the game (combat vs. noncombat)) were chosen; ideally those choices could be tied to cognitive processes that are assumed to be affected by engagement in playing those games. Finally, I wondered whether movement style and perspective code different factors. Thus, did the authors correlate their game factors or run a cluster/factor analysis to uncover shared and unique variance contributed by those factors?

4. The authors did not motivate their choice of outcome measures either, nor give any examples (what is an attention/perception outcome and why is it not a higher-order cognition outcome?).

As stated in the beginning, meta-analyses are tedious work and I presume the authors really invested a lot of time and work, but for the time being, too many choices remain unmotivated or opaque. Overall, I see a lot of merit in the consideration of gameplay factors instead of game category.

Reviewer #2: The study provides an interesting meta-analytic approach to study the impact of video game training on cognition, focusing on game features that are thought to characterize distinct video game genres. This is an original approach that will make an interesting contribution to the field and hopefully should stimulate further research. I have some clarification questions and also some suggestions that aim at strengthening some results and conclusions.

The present meta-analysis was pre-registered on OSF and I had access to the PRISMA checklist but not to the full methods. Therefore, the comments below are based on the information provided in the manuscript only, without taking the pre-registered methods, analysis plan, and hypotheses into consideration.

My primary concern relates to the use of within-subject effect sizes, which do not properly control for numerous confounds such as practice or test-retest effects or placebo effects. While I understood that this was necessary to include control games, the use of effect sizes based on pre-post change scores has important limitations that should be acknowledged (Cuijpers et al., 2017). My second and third main concerns are related to the analytic strategy (assuming independent effect sizes, using r = 0.5) and the lack of details regarding the criteria used for coding game features, genres, formats, and cognitive constructs. I felt that the paper would benefit from clarifying both the choice of categories (genre vs format, experimental vs control) and the levels (action vs strategy, serious vs. casual).

However, given that the study was pre-registered, I wasn’t able to determine if there were deviations from the registered methods (coding), analysis plan (effect sizes and models) and hypotheses tested (number of analyses). If an analysis appears sub-optimal but was registered, any change or any additional analysis, even if it represents an improvement, should be strongly justified and yet considered exploratory as it will deviate from the registered analysis plan.

Below I provide more details about each specific point. I am confident that the authors can address most of them in a reply, and that providing some of the additional analyses suggested in this review will strengthen their results and thereby increase the impact of their study. (see attached document)

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: REVIEW OF PONE-D-22-06647.docx

PLoS One. 2023 Aug 2;18(8):e0285925. doi: 10.1371/journal.pone.0285925.r002

Author response to Decision Letter 0


11 Oct 2022

Response to Editor and the Reviewers

We sincerely thank the reviewers and the Editor for their insightful comments, and we have addressed all of these comments in the revised manuscript, which has undergone substantial changes.

Please find our specific responses below, where all comments are in Italics and our responses are in regular font:

Editor Comments

1. First, similarly to Reviewer 2, I thought that the evaluation of the paper was compromised by some accessibility issues regarding the preregistration and the data. With regards to the preregistration, the project is private and hence we could not read the preregistration. The text also does not indicate which hypotheses were preregistered nor which analyses are part of the preregistration and which ones were exploratory. The text should be revised to clearly identify these. With regards to the data, it seemed to be some sort of image of an excel file, which unfortunately is not useful for sharing data since one cannot reuse the data (unless one retypes all of the information visually available). I wanted to explore the data-set, but one cannot download it, view it closely, or click on it. Please be aware that sharing of data should follow the FAIR principles (findable, accessible, interoperable, and reusable). These two points will certainly be critical in the subsequent evaluation of your paper, if you decide to submit a revision.

a. The accessibility issues noted by the Editor and reviewers were not intended by the authors – we are fully committed to providing access to the dataset utilized in this study. The difficulties in accessing the shared dataset appear to stem from access restrictions enforced by the authors’ home university, which we believe we have now addressed. The shared materials should now be fully viewable and editable by any party with an access link – please contact the corresponding author (Chandramallika Basak) if this is not the case, as any remaining access issues are unintentional.

2. Second, it struck me that most of the factors included in the analyses of the games that were not significant still generated effects that went in the same direction as the significant ones, often with similar mean size (but sometimes even larger), although lacking significance. Given that these cases were also often the cases with fewer studies (small k), it seems to me that the difference between significant and non-significant is probably not very informative. This should be taken into consideration. Can one be certain that one factor is more conducive than the other in generating a transfer effect? Please consider this issue carefully. One possible approach to mitigate this issue would be to perform a Bayesian meta-analysis and derive uncertainty estimates. Alternatively, you should provide the reader with some sort of evidence of which differences observed with the current analysis are meaningful, and provide the caveats explaining the ones that are not yet credible (for example, due to the lack of sufficient studies included).

a. After carefully considering this comment (as well as those from Reviewer 2, who expressed similar misgivings regarding our statistical approach), we decided to re-analyze our data using a mixed-effects meta-regression approach – see lines 253-257 of the revised Method section, as well as the revised publication bias analysis (lines 301-310). This approach allows us to a) compare and correct for all of our variables of interest in a single model, b) reduce the total number of comparisons (a consequence of point a), c) directly compare the effect of subgroups on our outcome measures, and d) incorporate a correction for publication bias (via Egger’s regression test) into the models natively. We feel this revised approach addresses all of the major concerns raised by the reviewers and the Editor, and is a much stronger statistical approach overall.
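For readers unfamiliar with this class of model, the core of such a mixed-effects meta-regression can be sketched in a few lines of Python. This is an illustration only, not the analysis code behind the paper (the manuscript cites the metafor package in R [41]); the function name and example data are hypothetical.

```python
import numpy as np

def meta_regression(g, v, X):
    """Mixed-effects meta-regression with a method-of-moments
    (DerSimonian-Laird-style) estimate of between-study variance.

    g : per-study effect sizes; v : their sampling variances;
    X : moderator design matrix (first column of ones for the intercept).
    An Egger-style small-study term can be added by including each
    study's standard error as an extra column of X.
    Returns (coefficients, standard errors, tau-squared).
    """
    g, v, X = np.asarray(g, float), np.asarray(v, float), np.asarray(X, float)
    k, p = X.shape
    # Step 1: fixed-effects (inverse-variance) fit, to measure residual
    # heterogeneity Q_E beyond what the moderators explain.
    W = np.diag(1.0 / v)
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    resid = g - X @ (XtWX_inv @ X.T @ W @ g)
    Q_E = float(resid @ W @ resid)
    # Step 2: method-of-moments estimate of between-study variance tau^2.
    P = W - W @ X @ XtWX_inv @ X.T @ W
    tau2 = max(0.0, (Q_E - (k - p)) / np.trace(P))
    # Step 3: refit with combined weights 1 / (v_i + tau^2).
    Wm = np.diag(1.0 / (v + tau2))
    cov = np.linalg.inv(X.T @ Wm @ X)
    beta = cov @ X.T @ Wm @ g
    return beta, np.sqrt(np.diag(cov)), tau2

# Intercept-only example: with equal variances the estimate reduces to
# the plain mean of the three effect sizes, 0.2.
beta, se, tau2 = meta_regression([0.1, 0.2, 0.3], [0.04] * 3, np.ones((3, 1)))
```

Binary game factors (e.g., time pressure present vs. absent) would enter as 0/1 columns of `X`, so each coefficient estimates how much that feature shifts the training gain while the other moderators are held fixed.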

_____________________________________________________________________________________

Reviewer 1 Comments:

1. why is the physiotherapy evidence database chosen and were studies excluded based on this assessment?

a. More information regarding the PEDro scale, as well as justification for its use, has been added in lines 134-138.

b. This was not an exclusion criterion, but was examined as a factor of study quality.

2. Why are age and percentage female chosen as moderators, as well as publication characteristics? I think that the overall moderator choice deserves better motivation.

a. They are not – average age and percentage female are participant characteristics, distinct from publication characteristics. However, this distinction is no longer relevant in the revised statistical approach taken in the re-submitted manuscript – participant characteristics, publication quality characteristics, and all of the binary factors considered in the initial analysis (genre, format, study group, game factors) have been modeled as fixed effects in a mixed-effects meta-regression model (see lines 253-60 and lines 324-338).

3. On the gameplay factors: again, I miss a motivated choice of why those factors (Movement Style (egocentric vs. allocentric method of spatial navigation), Perspective (1st-person vs. 3rd-person viewing perspective), number of Controllable Objects (single vs. multiple), number of Win States (single win state vs. multiple win states), Type of Opponent featured by the game (active opponent vs. passive threshold), the presence of Time Pressure (present vs. absent), and the Primary Interaction method of the game (combat vs. noncombat)) were chosen; ideally those choices could be tied to cognitive processes that are assumed to be affected by engagement in playing those games. Finally, I wondered whether movement style and perspective code different factors. Thus, did the authors correlate their game factors or run a cluster/factor analysis to uncover shared and unique variance contributed by those factors?

a. Justification for why each gameplay factor was chosen has been added to the Methods section (lines 148-164), and a specific call to refine the game-factors approach with respect to specific cognitive demands has been added to the Discussion (lines 614-617). We agree with Reviewer 1 that establishing how specific gameplay factors influence specific cognitive demands would be a major boon to the cognitive science of video games and VGT, though establishing that experimentally would first require a standardized approach to categorizing gameplay, which does not yet exist within the field – and which this meta-analysis is intended to facilitate.

b. Re. movement style and perspective: Perspective codes whether the player views the game from within an actual or implied player avatar (1st-person) or whether that avatar is visible to the player (3rd-person). These factors are correlated (see the additional correlation analyses added to the Results section, lines 287-290 & Table 2) but are not redundant.

i. For more concrete examples, the present analysis included 1st-person games with egocentric movement (e.g., Call of Duty, Unreal Tournament, Portal), 3rd-person games with egocentric movement (e.g., Grand Theft Auto V, Mario Kart, World of Warcraft), and 3rd-person games with allocentric movement (e.g., Pac-Man, StarCraft, The Sims). It does not appear that 1st-person games with allocentric movement have yet been studied within the field, though such games do exist (see the examples given on lines 512-513, all of which feature first-person perspectives coupled with allocentric map-based navigation, i.e. the player can choose their first-person vantage point by selecting that point from a list or an allocentrically-presented map).

4. The authors did not motivate their choice of outcome measures either, nor give any examples (what is an attention/perception outcome and why is it not a higher-order cognition outcome?).

a. Further justification for the construction of our outcome constructs has been added to lines 229-244.

_____________________________________________________________________________________

Reviewer 2 Comments:

1. Confounds. The first relates to the interpretation and presence of confounds. As is now standard in intervention studies, the use of a control group is essential to rule out potential confounds, such as placebo, test-retest or practice effects. This is also true in meta-analyses that aim to compare different groups or interventions.

a. The reviewer is correct that utilizing a control group is essential for controlling for many sources of noise in a study, including in meta-analyses. Ideally, we would have calculated a relative gain measure for all examined interventions versus an active control condition. However, consider the following:

i. A substantial portion of the examined publications utilized video games as an active control group (e.g., Belchior et al., 2013; Blacker et al., 2014; Clemenson & Stark, 2015; Cohen, Green, & Bavelier, 2008; De Lisi & Wolfard, 2002; Feng, Spence, & Pratt, 2007, etc.), or examined differences in transfer across multiple video games without specifying one specifically as a control condition (e.g., Adams, 2013; Bailey & West, 2013; Boot et al., 2008; Glass, Maddox, & Love, 2007; Gonzales, 2012; Huang, 2020; Oei & Patterson, 2013; 2014a; 2015; etc.)

ii. Video games utilized as a control condition must have a different gameplay profile to serve as a suitable active control to a video game intervention of interest

iii. Creating a differential effect score comparing relative gain on two different games with two different profiles of gameplay features necessarily confounds the distinct gameplay features of the two games compared

b. Considering the above, it is not possible to address the primary experimental question of this review – that is, examining the differential effects of gameplay factors on cognitive training – using differential measures which necessarily confound different gameplay features. This is a limitation, but a necessary one to address the issue at hand.

c. The mixed meta-regression approach presented in this re-submitted manuscript (lines 253-260, Tables 3-7) does not account for common confounds with regard to our estimation of the overall effect of game training, but it does allow us to definitively compare the effectiveness of subcategories defined by the coded game factors (e.g., combat vs. noncombat games), which partially addresses this concern.

2. Between-group comparisons. Second, the use of within-subject effects also makes between-group comparisons less reliable, as the Q statistic controls for only one source of heterogeneity related to within-group variance and not for the variance of between-group differences in pre-test performance, post-test performance, or change scores, which are essential for comparing subgroups.

a. The mixed meta-regression approach presented in this re-submitted manuscript (lines 249-252, Tables 3-7) allows for direct subgroup comparisons. The discussion has been re-written to reflect the findings of these direct comparisons.

3. Pre-post correlation. A third problem arises from the correlation between pre-test and post-test, which is almost never reported in primary studies and thus calls for adjustments of the effect sizes and their variance. While assuming a pre-post correlation of r = 0.5 is a neutral (i.e., unbiased) choice, this choice will impact the estimates, their variance (hence the weights), and thereby the results. It would be reassuring to see sensitivity analyses testing the impact of different values of r, and this would certainly strengthen the results and conclusions. This is particularly important given that the meta-analysis is based on within-subject change scores and the correlation between pretest and posttest can thus be expected to be higher than 0.5. Yet, sensitivity analyses testing several values below and above 0.5 would seem advisable.

a. While a sensitivity analysis would allow for the assessment of how different estimates of r would affect the results, it wouldn’t allow for a more accurate estimation of the “correct” estimate of r for pre-post correlations in this sample. Yes, we could assume that pre-post scores are highly correlated due to the within-subjects nature of the comparison, but that assumption is impossible to confirm with the data available. We thought it prudent to select a single unbiased value for r (=.5, which the reviewer noted as well) and run the analysis only once, rather than select between r values based on patterns of results.
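For reference, the dependence of the change-score effect size on the assumed pre-post correlation can be made concrete with the standard standardized-mean-change formulas (e.g., Borenstein et al., 2009): the point estimate does not depend on r, but its variance, and therefore its inverse-variance weight, does. The sketch below uses hypothetical summary statistics for illustration only, not values from any study in this meta-analysis.

```python
def change_score_g(m_pre, m_post, sd_pre, n, r):
    """Hedges' g for a within-subject (pre-post) design, standardized by
    the pre-test SD. Only the variance depends on the assumed pre-post
    correlation r (standardized-mean-change formulas, e.g. Borenstein
    et al., 2009)."""
    d = (m_post - m_pre) / sd_pre
    var_d = (1 / n + d ** 2 / (2 * n)) * 2 * (1 - r)
    j = 1 - 3 / (4 * (n - 1) - 1)        # small-sample correction factor
    return j * d, j ** 2 * var_d

# Hypothetical summary statistics: the estimate g is identical across
# assumed r values; the variance (hence the study weight 1/var) is not.
for r in (0.3, 0.5, 0.7):
    g, v = change_score_g(m_pre=10.0, m_post=12.0, sd_pre=4.0, n=30, r=r)
    print(f"r = {r}: g = {g:.3f}, var = {v:.4f}, weight = {1 / v:.1f}")
```

Because r shifts only the variances and not the individual estimates, a different assumed correlation re-weights studies rather than moving their effect sizes; this is part of why a single neutral value of r = .5 was preferred over selecting among values after seeing the results.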

4. Random effects model. The authors use random effects models, which assume independent effect sizes. Yet, the shared excel files (which are hardly readable and would be much more useful as downloadable excel files) suggest that the authors extracted more than one effect size from each study and group of participants. Were effect sizes coming from the same study aggregated or randomly selected, and how? If averaged, how was variance handled? If randomly selected, did the authors assess sensitivity via bootstrapping? Because effect sizes from the same study may not be fully independent (even if coming from different groups), an alternative may be to use methods that can handle dependent effect sizes, such as multilevel methods or meta-analytic models using robust variance estimates, which allow one to specify a correlated or hierarchical structure of the weights.

a. The “unreadability” of the shared excel file possibly suggests that the reviewer was viewing a preview of the file, not the file itself. We recommend downloading the file (upper-left corner on most browsers) which will grant the reviewer access to the excel file in full.

i. Difficulty accessing the file (i.e., if there is no “download” button) may be the result of security measures put in place by our parent university. We will happily supply this file directly (via the Editor to maintain anonymity) if the issue persists, but we believe we have set access permissions sufficiently to avoid this problem.

b. Multiple interventions/groups from the same study were treated as separate studies in the initial analysis and a separate effect size was calculated for each, as specified on lines 224-228 of the manuscript.

c. In line with this reviewer’s suggestion, we have taken a mixed effect meta-regression approach in the resubmitted manuscript, with the effect of study of origin modeled as a random effect in all analyses. Modeling the random effect by study rather than by intervention/group ensures that the inherent inter-relation of investigations arising from the same study is accounted for in the revised modeling.

5. Moderators were assessed across multiple subgroup analyses. Were all analyses pre-registered and if yes did the authors consider correcting for multiple testing? One drawback of this approach is that it assesses the impact of each moderator individually, without controlling for other moderators, some of which may covary (e.g., genre, format, features, participants age). Meta-regression models can assess the influence of several moderators simultaneously. Meta-regression was used to assess continuous moderators and a similar approach may be recommended at least for some analyses of categorical (or dummy coded) moderators. Finally, some subgroup analyses may be underpowered as they rely on relatively small numbers of studies or effects.

a. As stated above, we have taken a mixed effect meta-regression approach in the resubmitted manuscript, in line with this reviewer’s suggestion. These mixed meta-regression models include the study and participant quality moderators as well as the categorical moderators, allowing the effects of each to be controlled for the others and substantially reducing the total number of comparisons.

6. Publication bias analyses are limited to trim and fill, which has been extensively criticized. There are a multitude of methods available now, each with its own pros and cons, that are generally recommended over the trim and fill approach. An increasingly common approach is to run publication bias sensitivity analyses using a variety of techniques (Carter et al., 2019; Mathur & VanderWeele, 2020). More generally, I highly recommend these two recently published meta-analyses, which provide state-of-the-art methods in terms of meta-analytic models for dependent effect sizes, moderator analysis, and publication bias detection (Coles et al., 2019; Lehtonen et al., 2018).

a. We removed the trim-and-fill method as this reviewer suggested, and instead corrected for publication bias using a generalization of Egger’s regression test as explained on lines 307-310. This approach was chosen specifically because it could be integrated with the mixed effect meta-regression models used in the re-submitted manuscript.
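To illustrate the underlying logic of an Egger-type test: effect sizes are regressed on their standard errors with inverse-variance weights, and a slope reliably different from zero indicates funnel-plot asymmetry. The sketch below is a simplified fixed-effect version with hypothetical numbers, not the exact mixed-model implementation used in the manuscript.

```python
import numpy as np

def egger_test(g, var):
    """Egger-type regression test for small-study effects:
    inverse-variance-weighted regression of effect size on its standard
    error. A non-zero slope on SE suggests funnel-plot asymmetry."""
    g = np.asarray(g, dtype=float)
    var = np.asarray(var, dtype=float)
    se = np.sqrt(var)
    w = 1.0 / var
    X = np.column_stack([np.ones_like(se), se])  # intercept + SE moderator
    xtwx = X.T @ (w[:, None] * X)
    beta = np.linalg.solve(xtwx, X.T @ (w * g))
    cov = np.linalg.inv(xtwx)        # sampling covariance, known variances
    z = beta[1] / np.sqrt(cov[1, 1])  # Wald statistic for the SE slope
    return beta, z

# Hypothetical effect sizes constructed to rise exactly with SE,
# i.e. deliberately asymmetric "small-study" data.
se = np.array([0.10, 0.20, 0.30, 0.40])
beta, z = egger_test(0.20 + se, se ** 2)
print(beta)  # slope on SE recovers the built-in asymmetry
```

In the manuscript's actual analysis, the analogous term enters as a moderator within the mixed-effect meta-regression model rather than as a stand-alone regression, which is what allows the correction to coexist with the random effect of study.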

b. We agree that correcting for publication bias with regard to p-values rather than study variances, as suggested by Carter et al. (2019) and Mathur & VanderWeele (2020), is likely a more ecologically valid method of correcting for publication bias, and an approach that should be considered seriously by the field. However, as far as we are aware, such methods have not been implemented in statistical packages in a way that also allows for multi-level or mixed-effect meta-regression models (the approach taken in the re-submitted manuscript, also recommended by this reviewer in comments R2.4 and R2.5). Existing statistical packages (e.g., the “PublicationBias” and “puniform” packages for R) only allow this method to be applied to simpler regression models. There is clearly a need for the development of such a statistical package, but we unfortunately lack the coding expertise to develop such an approach in the timeframe of this review process.

7. The introduction and discussion largely argue that the existing literature has emphasized an association between video game features and their cognitive effects, and the present study attempts to address this hypothesis. Yet, I believe that the coding could be improved to better reflect the authors’ hypotheses, as coding was sometimes inconsistent with the authors’ argument. Some games could arguably be coded in a different genre and some tasks in different constructs, which might change the results and conclusions.

a. The coding of games into broad genres is problematic – even in the implementation we used in this meta-analysis – and this is a major impetus for conducting this meta-analysis, as discussed in the other responses to Reviewer 2.

b. The justification for the coding of cognitive constructs is explained in our response to question R2.13.

8. Coding of game features. The part I liked most concerns the coding of game features which I found particularly interesting and could have been pushed even further. For example, a theory-grounded coding mapping specific game mechanics with particular cognitive demands (rather than mechanics associated with genres) would probably allow a different set of testable predictions.

a. We agree with Reviewer 2 that the coding of game features can and should be further developed – one of the impetuses for this meta-analysis is to spur such future work. The authors assert that the field has yet to develop a thorough enough understanding of which gameplay features are associated with which cognitive capacities to allow for such a comprehensive categorization strategy, and that developing such a strategy as game cognition research continues to mature is of great importance.

9. Coding of game genre and format. What criteria were used to define genres and format? Casual and serious games have been considered as video game genres in other studies, suggesting these 2 categories may overlap at least partially. Aren’t action, strategy, casual and serious meant to be mutually exclusive categories? How was the game Tetris (from Belchior et al) classified? It appears in both action and casual — which would seem wrong to me? How about Portal 2? Was it considered an action game and on which criteria? Were serious games only commercial games (how about cognitive training like cogmed, lumosity)? How about brain training games? Are these considered casual or serious? These notions should be clarified. Note that some codes (e.g. format, serious) were missing from the shared word doc and that the excel files were hardly readable or usable (no possibility to search). Also, the word “commercial games” is mentioned for the first time in the discussion line 453. Does that refer to off-the-shelf games? Was that an inclusion criterion? Were all serious games commercial games? There have been debates and inconsistencies in how genre was defined and handled in prior meta-analyses (either as selection/inclusion criteria or as moderator) and I am not clear if and how format differs from genre. This is a central point of the manuscript that deserves some clarification, including whether and why these codes are considered independent.

a. Our response to this question has been organized into multiple parts to address the multiple issues that Reviewer 2 raised in this comment.

i. “What criteria were used to define genres and format?”

1. The distinction between “Genre” (action|strategy) and “Format” (serious|casual) was based largely on work by Baniqued and colleagues (Baniqued et al., 2013; 2014; Smith et al., 2020), which demonstrates that “casual” video games fall into the same genre distinctions (i.e., “action”, “strategy”) as their non-casual counterparts. We therefore categorized both serious and casual games as either action or strategy. This is specified on lines 130-132 of the Methods section.

ii. “casual and serious games have been considered as video game genres in other studies, suggesting these 2 categories may overlap at least partially. Aren’t action, strategy, casual and serious meant to be mutually exclusive categories?”

1. “Action” and “Strategy” are mutually exclusive in our coding strategy, as are “Serious” and “Casual”. However, all games are categorized on both of these variables (so a game may be “Action” and “Serious”, “Action” and “Casual”, and so on).

2. “Serious” in the context of this meta-analysis simply refers to a game that is not a “casual” game (as defined on line 131 of the Methods section), and therefore these are mutually exclusive designations.

iii. “How was the game Tetris (from Belchior et al) classified? It appears in both action and casual — which would seem wrong to me?”

1. Both Tetris and Super Tetris were categorized as “Casual” and “Strategy” games by our raters, according to Supplementary Table 1.

2. As discussed above, all games were categorized as either “Action” or “Strategy” and either “Casual” or “Serious” - this dual categorization is not an error.

iv. “How about Portal 2? Was it considered an action game and on which criteria?”

1. Both Portal and Portal 2 were categorized as “Serious” and “Strategy” by our raters. Again, this dual categorization is intended.

v. “Note that some codes (e.g. format, serious) were missing from the shared word doc and that the excel files were hardly readable or usable (no possibility to search).”

1. The “Serious” vs. “Casual” distinction (i.e., Format) was originally summarized by the “Casual Game” column of Supplementary Table 1. This has been re-labeled as the “Format” column, with each game listed as “C” (Casual) or “S” (Serious), to address this confusion.

2. The inability to zoom or search on the shared excel file was an unintentional effect of the security protocols used in UTD’s file sharing service. We have taken steps to remedy this. If difficulty accessing this file continues, we would happily provide a copy directly (via the Editor, to preserve anonymity)

vi. “Also, the word “commercial games” is mentioned for the first time in the discussion line 453. Does that refer to off-the-shelf games? Was that an inclusion criterion? Were all serious games commercial games?”

1. The “commercial games” wording was an artifact of a previous draft of the discussion section which was erroneously retained in the submitted draft. This has been corrected.

vii. “Were serious games only commercial games (how about cognitive training like cogmed, lumosity)? How about brain training games? Are these considered casual or serious? These notions should be clarified.”

1. We did not examine cognitive training or brain training games in this meta-analysis, as we feel that the cognitive impact of those games is qualitatively different from that of games meant as entertainment, and therefore not directly comparable. This has been specified on lines 117-120.

viii. “There have been debates and inconsistencies in how genre was defined and handled in prior meta-analyses (either as selection/inclusion criteria or as moderator) and I am not clear if and how format differs from genre. This is a central point of the manuscripts that deserved some clarification including whether and why these codes are considered independent.”

1. The authors strongly agree with Reviewer 2’s statement here, and hope that the responses included in this document and changes to the manuscript have clarified this issue.

10. Also, the genres Action and Strategy may be (arguably) too restrictive to capture all games. Forcing all games to fit into these 2 genres results in categories that are too heterogeneous to be meaningful. For example, the Action genre may not be the best fit for games like Centipede, Donkey Kong, Pacman or Wii Fit… Similarly, games like Angry Birds, The Sims or Tetris do not seem to fit well into the Strategy genre. Others have considered categories like simulation, exergame or platform games or even brain training. The literature search mentions brain training games but I didn’t see any. Were brain training games excluded and why? And if not how were these games coded?

a. Indeed, the Action/Strategy distinction is inadequate to categorize all games (the authors’ opinions on the topic broadly align with Dale & Green, 2017, as indicated in lines 48-53 of the introduction).

b. The inclusion of the Action/Strategy coding scheme alongside the more detailed Game Factors coding approach was designed to contrast (one of) the ways Genre has been classified by the field in the past with our newer approach. We do not endorse the Action/Strategy distinction, and as Reviewer 2 notes, we argue against the utility of that classification strategy at several points. Thus, the authors observe no contradiction in its use in this study as a point of comparison to the more detailed classification method we propose here.

c. The distinction between simulation, strategy, platforming, and action games is poorly defined throughout the literature, as discussed extensively in our introduction (lines 54-84). Our argument is that any such genre distinctions are by-and-large arbitrary and unhelpful – therefore the two-genre action/strategy categorization approach is just as valid (or invalid) as a 4-genre approach which also includes puzzle or platformer games.

d. Related to the above, we instructed our raters to categorize each game as either “action” or “strategy” as they deemed fit – this was an enforced binary choice.

e. Exergames and brain training games were excluded from the present analysis. This has been specified on lines 116-122

f. Related to the specific examples cited by Reviewer 2: according to Supplementary Table 2, Angry Birds, Centipede, Donkey Kong, Pac-Man, and Pac-Man: Adventures in Time were coded as “action” games by our raters, while The Sims, The Sims 2, The Sims 3, Tetris, and Super Tetris were all coded as “strategy” games by our raters. Wii Fit, being an exergame, was excluded from the present analysis in line with the above comment.

11. More importantly, the categorization of games into genres seems inconsistent with the authors' argument that games that belong to a given genre share features that are responsible for certain cognitive effects.

a. See response b to comment R2.10

12. Coding of experimental and control games. Here too, the reader would benefit from a more detailed description of how this coding was performed as it wasn’t available in the supplementary excel file. Note that the coding of serious games is also missing… Again, these are important as the criteria for defining serious games or experimental vs control games has varied across studies.

a. Games were not universally coded as “experimental” vs. “control” - this was a binary factor of the intervention examining a game, not of the game itself, as specified on lines 206-208 of the manuscript. If the condition from which an effect size was generated was an experimental condition in its study of origin, that case was labeled as an “experimental” study (see Supplementary Table 2). Similarly, if the condition from which an effect size was generated was a control condition in its study of origin, that case was labeled as a “control” study (again, see Supplementary Table 2).

13. Coding of cognitive constructs. Relatedly, a similar reflection would also be relevant for defining cognitive constructs of interest. What criteria guided the choice of these 3 outcome categories: attention/perception, higher cognition and psychosocial? Previous meta-analyses of video games (and in other fields too) have categorized cognitive measures into separable constructs may provide some guidance or inspiration (e.g. Bediou et al., 2018; Powers et al., 2013; Powers & Brooks, 2014; Sala et al., 2018; Wang et al., 2017). What is the rationale for including some memory (working memory) tasks in higher cognition, whereas others are coded as memory? Psychosocial also includes depression and anxiety scales, affect or emotion (PANAS), risk taking… which may not really correspond to psychosocial skills. Again, the categories appear too heterogeneous to be meaningful. For example, memory may include tasks that include manipulation of verbal or visuospatial material which may be differentially impacted by distinct games. Studies of memory may also include older participants and although the effect is NS, the trend is negative suggesting smaller effects in older participants (consistent with previous meta-analyses that treated age as categorical that report smaller effects in older adults).

a. Further justification for the construction of our outcome constructs has been added to lines 229-244. Working memory was specifically coded as Higher Cognition rather than Memory because of its known inter-relation with reasoning and executive function (see Miyake et al., 2000).

14. The authors conclude that game features predict differential effects on cognition. Yet, their effect sizes do not differentiate between skills as the confidence intervals overlap and the use of within-subject effects does not allow proper between-group comparisons. Therefore, some conclusions may need to be toned down or rephrased and these limitations should be better acknowledged.

a. As stated above, our revised statistical approach does allow for direct subgroup comparisons. The discussion section has been re-written with respect to only those differences found to be significant with the revised approach to analysis.

15. Mapping between features and cognitive constructs. The game features coded do not clearly map onto specific cognitive demands, which appears to contradict what the authors discuss about the choice of experimental and control groups in primary studies. Lines 465-477, “Importantly, “control” condition games failed to produce any significant transfer to any cognitive construct, despite a diverse range of games being presented as active control conditions (see Supplementary Table 1). This is evidence that, in general, VG-based interventions have tailored their cognitive transfer measures specifically to be sensitive to the cognitive demands of the intervention, and/or have intentionally selected control games not expected to produce transfer to the constructs examined. While this is an understandable design strategy, this practice may lead authors to incorrectly conclude that the control games used, or aspects thereof, do not produce cognitive improvements. As a concrete example of why this is problematic, consider the games Tetris and The Sims. Both games have been frequently used as active control conditions in studies of “action” video games, and producing negligible transfer as expected [12, 13, 37]. However, both of those games have demonstrated significant transfer, in studies in which they were an intervention of interest.”

a. This statement makes no claims as to the link between cognitive constructs and specific game features – the lack of transfer to ANY cognitive outcome from control games (which, as established by this meta-analysis, have a multitude of gameplay profiles spanning the coded gameplay features) precludes gameplay features as an explanatory factor for this lack of transfer. We suggest that a form of selection bias is present, in which experimenters are selecting control games which they believe a priori will not cause transfer to their outcome measures.

b. This statement has been re-worded in light of this comment and the findings from our adjusted statistical approach (see lines 473-488)

16. The cases of Tetris and The Sims are particularly illustrative and interesting as the choice of using these as experimental or control games may depend on which aspect of cognition was measured (and considered as demonstrating transfer). This seems to contradict another statement that appears a few lines later. Lines 500-510: “the above definition of “action” games would translate to games that are combat-focused with time pressure, and have an active opponent. However, the current meta-analysis found consistent transfer to measures of attention and perception was facilitated by games that featured a first-person perspective, a single controllable object, a single win state, lacked an active opponent, and were primarily non-combat games. Not only was transfer to the attention and perception construct found to be agnostic to the presence of time pressure, but both active opponents and combat gameplay explicitly failed to produce consistent transfer. This finding provides evidence that the post-hoc justifications given in the past explaining the link between “action” video game play and enhanced attention and perception may be inaccurate.”

a. The referenced statement has been substantially altered in light of the findings from our adjusted statistical approach as well as this comment (see lines 519-534)

17. These are strong statements and conclusions which depend largely on the appropriate coding of game features, video game genres and cognitive construct, as well as their appropriate analysis allowing more direct comparisons which were not possible here.

a. As stated above, our statistical approach has been substantially altered and now does allow for direct subgroup comparison – the resubmitted discussion is written with regard to the findings of this updated statistical approach.

18. Finally, the sentence line 566-567: “The gameplay factors analysis conducted in this meta-analysis rebuffs the “action” vs “strategy” distinction as useful.” appears too strong and should be downplayed given the results and the rest of the discussion arguing that this distinction has some value. This study sheds some light on how some of the game features that are present in action and/or strategy games and may be associated with their (differential?) cognitive effects. However, I don’t think it rebuffs the distinction between action and strategy games… As the authors rightfully discuss on line 573, the “genre distinction does have merit” (as well as some “historical” value) as action and strategy games were initially quite distinct in terms of mechanics, features and cognitive demands, and thus represented meaningful categories less than 10 years ago. However, as the video game ecosystem grows and new genres, hybrid genres and mixed genres arise, the features that were once specific to action or strategy games have become more and more common across genres and thus more difficult to isolate (see (Dale et al., 2020; Dale & Green, 2017). Toning down the conclusion won’t minimize the potential impact of this study which will make an important contribution to the field and should stimulate more work looking at how game characteristics map onto cognitive demands and whether they predict cognitive effects.

a. The conclusion re the utility of genre distinction has been re-worded in light of this comment to be less definitive and more accurately reflect the limitations of this work (see lines 580-585). We should note that the evidence supporting the partial merit of genre distinction was in fact weakened by our adjusted statistical approach, which did impact our interpretation of these results.

19. The authors argue that the lack of effect in the control condition “is evidence that, in general, VG-based interventions have tailored their cognitive transfer measures specifically to be sensitive to the cognitive demands of the intervention.” (line 468). This conclusion would require an analysis based on the cognitive demands rather than the game characteristics –which were not coded here– unless there is a perfect match between game features (e.g. pace or time pressure) and cognitive demands (attentional control)?

a. Our argument re the possible influence of selection bias on the relative efficacy of experimental vs control condition games has been re-worked, see lines 470-484

20. Study quality / risk of bias. Details about the PEDro scale would be appreciated, especially because it is less commonly used than other tools such as the Cochrane risk of bias scale. (https://guides.himmelfarb.gwu.edu/systematic_review/reporting-quality-risk-of-bias).

a. More information regarding the PEDro scale as well as justification for its use have been added in lines 134-138

21. Participant and study characteristics. Meta-regressions show non-significant modulation by either participant or study characteristics. Were these moderating influences also considered in subgroup analyses? Some factors tend to covary (e.g., sample size, age, training duration) and may thus be difficult to disentangle.

a. The participant and study characteristics were not considered in the subgroup analyses presented in the original submission, but are included in the mixed effect meta-regressions in the revised manuscript.

22. Participants characteristics: Are groups matched in gender? Proportion (%) females overall can be misleading if the groups are not balanced. Why was average age treated as continuous… previous work has mostly used age-group categories (e.g., children, young adults, old adults).

a. The majority of studies examined either reported no difference in gender between experimental groups or explicitly gender-matched the groups (see Table 2). Some had disproportionately more males across all groups (e.g., Adams, 2013; De Lisi & Cammarano, 1996), and some explicitly compared the effects of training between genders (e.g., Feng, Spence & Pratt, 2007; Subrahmanyam & Greenfield, 1994). This disparity between studies, and in some cases between groups in the same study, is why we included % female as a moderator of interest.

b. Average age was treated as continuous for several reasons. First, age bin cut points are not consistent across the field (i.e., some studies may categorize participants 60 and older as “older adults”, some 65 and older, etc.). Second, even if age bin cut points were consistent, there are considerable differences in cognitive profile and social cohort between individuals in the same age bin – i.e., an 80-year-old “older adult” would be expected to have a substantially different cognitive profile than a 60-year-old “older adult”, while a 20-year-old “younger adult” would be of a substantially different social cohort compared to a 40-year-old “younger adult”, especially with regard to video games and technology. Using average age as a continuous variable allows us to preserve variance that may correspond to these differences.

23. Study characteristics: Number of participants is already included in the weights (inverse variance) and therefore I don’t think it needs to be added to the models. What was the rationale for coding the number of cognitive outcomes? What result was expected? It is surprising that total hours had no effect but again there may be complex interactions with participant characteristics, cognitive construct, game features or genres.

a. Re number of participants: this point is well taken - n is not included in the revised meta-regression models due to the noted redundancy with the inverse-variance weights used.

b. Re number of cognitive outcomes: Since a single effect size per construct was calculated from each study regardless of the number of outcome measures, controlling for that number accounts for systematic differences in effect size that this variety of measures may induce, such as the relative susceptibility of each effect size to the task impurity problem (see Miyake et al., 2000). Previous work from our group has found such systematic differences (Basak, Qin & O’Connell, 2020).

c. Re total hours of training: The submitted manuscript is not the first meta-analysis to show negligible gains from longer training durations – see Basak, Qin, & O’Connell (2020), Toril, Reales, & Ballesteros (2014), and Li et al. (2011) for other examples. Aggregate evidence seems to point towards diminishing returns after about five hours of training.

Attachment

Submitted filename: Smith & Basak 2022 Response to ReviewersFinal.docx

Decision Letter 1

Alessandra S Souza

18 Nov 2022

PONE-D-22-06647R1
A Game-Factors Approach to Cognitive Benefits from Video-Game Training: A Meta-Analysis
PLOS ONE

Dear Dr. Basak,

Thank you for submitting your revised manuscript to PLOS ONE. I have sent the manuscript back to the reviewers of the first submission. In general, they were thankful for your serious consideration of their comments and the corresponding improvements undertaken. Yet, the reviewers still have a number of concerns that require further revision. While Reviewer 2 has only one point for clarification, Reviewer 1 raised a more extensive list of issues. Addressing these issues seems important to increase the impact of the paper and to guarantee better alignment between the proposals made in the current paper, the past literature, and where the field is heading. The comments made by Reviewer 1 are manifold, but are very constructive. I encourage you to address the points to the best of your abilities.

Please submit your revised manuscript by Jan 02 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Alessandra S. Souza, Ph.D.

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: First I would like to thank the authors for addressing my former remarks. I found the manuscript much improved and could find all the information I looked for. However, I am still a bit confused regarding the Overall Cognition factor: what does it mean, and what does it add given that all other outcomes are then analysed as well? In my view the Overall Cognition effect sizes need better motivation or should be left out.

Reviewer #2: I would like to reiterate my strong interest in the approach and results. I sincerely believe the analysis of video game features is a great and promising improvement over the game genres approach, and a timely one as the early conceptualization of video game genres needs to be reconsidered to reflect the diversification of video game genres (and of the whole video game ecosystem). Concerning the revised manuscript, the authors have properly addressed most of my concerns, especially regarding the meta-regression analysis. However, there are a few conceptual (theoretical) as well as methodological issues that remain unresolved and that, once clarified, could improve the paper’s impact.

Main comments

1. Game genre and format

First and foremost, I think the coding of game genres and format could be improved. The dichotomic conceptualisation of genres and formats is not only inconsistent with a growing literature on the evolution of video games reviewed in the introduction (lines 47-48): “a coarse distinction between gaming genres is insufficient to describe the profile of cognitive demands of a given game, as modern video games increasingly include features of multiple genres [7]”, but also, and more problematically, the chosen categories for genres (Action vs Strategy) and formats (Casual vs Serious) do not align with the existing literature and with the evolution of the field over the past decades (e.g., Dale et al., 2020; Dale & Green, 2017). Forcing all games to fit into these 2x2 categories is likely to result in blurred and overlapping categories with high heterogeneity, which could account for the lack of significant effects of these 2 factors. To illustrate my point, I was surprised to see games like “Tetris” or “Wii Fit Segway Circuit” be classified as Action-Casual. In contrast, games most commonly associated with the action genre, which are mostly first person shooter games, were found in the Action-Serious category, which also included games like FIFA, Super Mario, or Pac-Man. Such coding is only likely to add to the confusion the authors initially denounce in their introduction (e.g., note here that “Tetris” also appears in the strategy-casual category).

Second, the discussion relies heavily on non-significant effects, which should not be interpreted as evidence for the absence of a difference. For example, the non-significant effect of game genre (and format) may not be surprising considering how games genres and formats were coded in this meta-analysis – which deviates from the definitions of genre and format used in primary studies and in other meta-analyses too. More importantly, the authors rely on the lack of significant effects to criticize and thus diminish the importance of earlier work conducted more than 1 or 2 decades ago, when only a small number of genres were sufficient to appropriately capture meaningful differences between video games in terms of mechanics or gameplay features (and their associated cognitive demands).

In sum, the analysis of game genre and format needs to be improved. At the very minimum, the authors need to fully address the limitations of their choice of grouping and categories and how it is inconsistent with previous work. Alternatively, given the confusion it is likely to add, this analysis could be dropped. In all cases, the discussion should be significantly altered to (i) not over-interpret non-significant effects and (ii) acknowledge that the 2x2 coding approach taken in this work does not reflect the complexity of the video game ecosystem to date and, for the most part, does not align with the categorization of games used in primary studies.

2. Game features

I particularly liked the idea of analyzing video game features, which is the main strength of this study, although the pattern of inter-correlations calls for a cautious interpretation of the results. This actually raised the question of how these correlations were computed, since all variables are coded as binary (0 or 1). What tests were conducted and what type of correlation coefficients are reported in Table 2?

While these results are an important step forward, they do not necessarily invalidate past research, unlike what the authors suggest. Indeed, much of that past research was conducted at a time when games could still be unanimously classified into a relatively small number of distinct and homogeneous genres.

In this respect, including the variable “study group” in the analysis of game features, as well as in the analysis of genre and format, seems an unmotivated choice. More generally, it is unclear how genre, format and group are related to game features. Also, what was the rationale for including study group only, and not genre and format, in the analysis of game features? Are these three factors correlated and how do they correlate with game features? Adding all 3 variables to Table 2 and showing how strongly video game genre, video game format and study group correlate with game features would be essential to better understand and interpret the pattern of effects (see point 3).

The discussion of the associations between individual game features and cognitive outcomes suggests that some earlier theories may have been misinterpreted. On multiple occasions, the authors misreport the action games literature as predicting that specific features are responsible for their cognitive effects. My reading of this literature is that it has put the emphasis on identifying the combination of features that distinguishes action games from other games that do not produce similar cognitive benefits. This question could be addressed by computing the similarity between games across a number of critical features such as those coded here. Recent theories actually propose that the effects of action games on cognition arise from the particular combination of features that characterize these games (including those coded here and many others such as the presence of variable rewards, the scaffolding of difficulty and challenge to keep players in their zone of proximal development, etc.). Importantly, the action video game literature does not predict that each and every feature in isolation has an effect, quite on the contrary (Bavelier & Green, 2019; Cardoso-Leite, P. et al., 2020). “Crucially, it has been our experience that each of these three characteristics on its own does not guarantee cognitive impact, at least when it comes to attentional control enhancements. Rather, action video games are unique in that they naturally layer these three game characteristics within the same overarching game play. For example, games that put a premium on just one characteristic such as pacing do not seem to similarly enhance attentional control and other aspects of cognition (e.g., Tetris).” (Green & Bavelier 2019, page 156).
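The similarity computation the reviewer suggests (comparing games across a set of coded binary features) could be operationalized, for instance, as Jaccard similarity over feature vectors. This is only a sketch: the feature columns and the codings below are hypothetical and not taken from the study's dataset.

```python
# Hypothetical sketch: Jaccard similarity between games' binary feature vectors.
# Feature columns (assumed, not from the manuscript):
# combat, time pressure, 1st-person perspective, allocentric movement.

def jaccard(a, b):
    # |features present in both| / |features present in either|
    both = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return both / either if either else 1.0

features = {
    "shooter_A": [1, 1, 1, 0],
    "shooter_B": [1, 1, 1, 0],
    "puzzle_C":  [0, 1, 0, 1],
}

print(jaccard(features["shooter_A"], features["shooter_B"]))  # 1.0
print(jaccard(features["shooter_A"], features["puzzle_C"]))   # 0.25
```

Such a pairwise similarity matrix would let one ask whether games that produce similar cognitive benefits also cluster on feature profiles, as the cited theories predict.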

This misreading of the literature is only briefly mentioned in the discussion, lines 581-584 (typo included): “Importantly, of the central predictions of the originating from the action game training literature – that combat-focused games facilitate cognitive transfer to both attention/perception and higher-order cognitive functions [1, 25] – is in agreement with the findings of the current meta-analysis.” This is not only a misinterpretation of the action game training literature that has not claimed such type of feature-specific effect on cognition, but also a misleading terminology to reduce action games to combat-games – e.g., fighting games are not expected to have the same impact as action-shooter games. Moreover, the next sentence further highlights inconsistencies between the interpretation of game features and the fact that the adopted coding of game genres is highly uncommon (see also point 1): Lines 484-588: “However, 50% of the “action” games sampled in this study did not feature combat as a primary gameplay mechanism, whereas 40% of the examined “strategy” games did feature combat as a primary interaction, meaning that the finding that combat games facilitate transfer to both of these cognitive outcomes is not a genre-dependent one.” Taken together, these two statements illustrate a main problem with this paper which needs to be addressed: that the coding of game genres applied in this meta-analysis does not align with that used so far in the field.

3. Study group

The effect of “study group” is interesting, although not unexpected and consistent with prior meta-analyses focusing on video game interventions with active control groups. However, it remains unclear why this effect is indicative of the presence of a selection bias in past research and may thus contribute to publication bias (lines 466-484). The finding of stronger effect sizes in the experimental group is expected and doesn’t mean that the choice of control groups was intentionally biased (which the authors seem to interpret as a form of questionable research practice contributing to publication bias). There are two main problems with this claim.

First, the present meta-analysis focuses on within-subject pre-post intervention effects, which are subject to important confounds (e.g., expectations, test-retest, practice or placebo effects), and are known to cause biases in meta-analytic investigations (e.g., see Cuijpers et al., 2017 for a detailed discussion of the limitations of this approach). I understand that this was necessary as the coding of game features required computing separate effect sizes for the experimental and control games. However, the limitations of this approach should be better acknowledged, especially given the strong interpretation the authors draw from the result.
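For context, within-subject pre-post effect sizes of the kind discussed here are typically computed as a standardized mean gain. A minimal sketch of one common form (the manuscript's exact formula may differ; all numbers below are hypothetical):

```python
# Hedged sketch: standardized pre-post mean change for a single group,
# d = (M_post - M_pre) / SD_pre, with a Hedges-style small-sample correction.
# This is one common convention, not necessarily the one used in the paper.

def pre_post_g(m_pre, m_post, sd_pre, n):
    d = (m_post - m_pre) / sd_pre           # standardized mean change
    j = 1 - 3 / (4 * (n - 1) - 1)           # small-sample correction, df = n - 1
    return j * d

# e.g., a training group improving from 10 to 12 (SD_pre = 4, n = 20)
print(round(pre_post_g(10, 12, 4, 20), 3))  # 0.48
```

Because both experimental and control groups improve from pre to post (test-retest, expectation effects), such gains overstate the game-specific effect unless the control group's gain is subtracted, which is the confound the reviewer is pointing to.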

Second, the use of an active control group is critical for establishing causality and also necessary to rule out non-specific effects and confounds. Importantly, the choice of control group is guided by the specific research question(s) addressed, which have evolved as the field has matured. To date, most active-control groups involve playing a commercially available video game in order to control for unspecific effects that may be induced by video game play per se (e.g., changes in affect or mood due to playing a video game for example). The choice of control games is thus driven to maximize differences across the dimensions or features (e.g., combinations of gameplay and mechanics) that are hypothesized to play a key role in driving the cognitive effect of the experimental game, while keeping important non-specific features equated (e.g., the very act of playing, positive feedback within the game, engagement with the training, social stimulation, etc). Despite a few exceptions in which the authors explicitly attempted to test whether the presence or absence of specific game features were critical (e.g., Oei & Patterson, 2015), the studies included in this meta-analysis did not seek to isolate a single video game feature, nor did they argue that a specific feature alone was causally responsible for the particular cognitive effects observed (see also Ben-Sadoun & Alvarez, 2022; Choi et al., 2020 for recent developments in this direction).

In all, the discussion about selection bias reads as if the authors were assigning the wrong intention to the primary studies and thus reflects a misunderstanding of past research. To my knowledge, most studies included here sought to either replicate or clarify the extent of effects by delineating the particular types of games and cognitive domains impacted. To demonstrate the effectiveness of an intervention, primary studies assess whether greater benefits in the experimental group are found compared to the control group (the choice of control group is important to test hypotheses about the mechanisms of cognitive improvement). Tellingly, finding a larger effect in the experimental as compared to the control group does not mean that the control game did not produce any effect (as written line 479-480, but contradicted line 633-635) but instead that the effect of the experimental game was stronger than that of the control game. This whole section needs to be clarified to avoid misleading conclusions.

Additional comments

4. The preregistration is lacking critical information about the analysis plan (meta-analytic models), hypotheses tested and expected results. This makes the preregistration less useful as it is not clear what represents a deviation from the planned protocol, and it still leaves researchers enough degrees of freedom and flexibility in the analysis and reporting of their results.

5. Genre (action vs strategy) and Format (serious vs casual) are first introduced as 4 types or categories of video games, and then analyzed as two separate and independent (orthogonal) factors. While I understand the rationale of casual games spanning various genres, I don’t understand the dichotomy between casual and serious and what was predicted. Here, only the serious-FPS category seems to correspond to what has been called action video games in past research and contains mostly first and third person shooters as typically used in action games interventions (which would show here as an interaction effect). A consequence of this dichotomous view of game genres is that some games do not fit in their assigned category. For example, the games Tetris, Wii Fit Segway Circuit, Angry Birds, Balance, Centipede, Pac-Man, Marble Madness, FIFA 2010, Pinball Hall of Fame, MultiTask, or New Super Mario Bros are labeled as action games despite being reported as non-action (mostly control) games in the primary studies. This is at best confusing and at worst likely to set the field on the wrong path.

6. I also wonder whether genre and format are meant to be orthogonal dimensions or simply different attributes. The classification proposed by Simons only considers 3 categories: action, strategy and casual. While I understand the authors’ argument for casual games that span distinct genres, I don’t think this applies to serious games, a term that has been more commonly used to refer to educational or therapeutic games, in contrast to entertainment video games. And indeed calling commercially available FPS or TPS games serious games is likely to be highly confusing to all readers.

7. When the authors state that the genres have become more “blurry” (line 598), it would be helpful to clarify that this is due to both an increase in the number of genres and subgenres with the classification becoming more granular with more genres (rather than less) and narrower subgenres (e.g. FPS is a subgenre of action), together with greater overlap between genres and subgenres. This state of affairs has led to the emergence of hybrid-genres such as action-role-playing games that mix action mechanics with role playing features.

8. The analysis separating by outcome is interesting but may be underpowered and should thus be reported as exploratory or at least interpreted cautiously.

9. I found surprising that Movement and Perspective produce opposite effects in Table 3, given that they are strongly and positively correlated in Table 2. Could the author comment?

10. I could not find the reference to Valdez 2011. Was this study only a 15 minutes intervention as suggested by the study label in the excel file (Experimental Study, 15 minute RDR-NV group)?

11. Line 590: Did you mean coarse instead of course?

Conclusion

In conclusion, the present results do not support the following conclusions that

(i) game genre is a useless construct; it has at least been valid and instrumental in guiding hypothesis generation and testing,

(ii) the analysis of games features invalidates the theories about the action video game effects,

(iii) the choice of control games reflects an experimenter bias (implying a form of questionable research practice).

As a consequence, I would ask to rephrase and downplay several strong judgmental statements in the discussion:

Lines 524-530: “However, games with passive thresholds or objectives better facilitate transfer to AP outcomes than did games with active opponents, and the AP construct was insensitive to the presence or absence of time pressure. This finding provides evidence that some of the post-hoc justifications given in the past explaining the link between “action” video game play and enhanced attention and perception may be inaccurate – specifically, it does not seem that active opponents or time pressure are necessary elements of gameplay to drive attention & perception outcomes.”

Lines 630-640: “This analysis, in line with previous reviews [2] confirmed the presence of publication bias within the video game training field. Our finding with regards to transfer from experimental groups vs. active control groups using VG interventions may shed light on this problem. Based on our findings that videogames utilized in active control group failed to produce cognitive transfer regardless of their gameplay properties, we can reasonably conclude that some form of bias is suppressing significant transfer in those groups. Addressing this bias is imperative in its own right, and may contribute to the reduction of publication bias in future game training interventions. The authors suggest that future studies examine multiple games of various cognitive profiles with the assumption that differential transfer will be observed between groups (i.e. to differing measures of transfer), rather than the a-priori assumption that one condition will be less effective than another [15, 22, 23, 58].”

Lines 499-510: “While games utilizing these gameplay features are common in the field (“first-person-shooter” games utilize a first-person perspective, “real-time-strategy” and many “platformer” and “puzzle” games utilize allocentric movement), neither the first-person perspective nor allocentric movement style have been theorized to have a strong impact on cognitive transfer resulting from VGT interventions. Interestingly, no single game examined in the present meta-analysis featured both an allocentric movement style and a 1st-person perspective, though in theory these results suggest that a game featuring both would be a candidate for effective cognitive intervention.”

Lines 549-552: “As was the case with transfer to the AP construct, transfer to the HC construct was insensitive to the presence or absence of time pressure, which has been invoked as a crucial factor driving cognitive transfer from both “Action” and “Strategy” game interventions [19, 25].“

Lines 576-581: “The findings of the present meta-analysis demonstrate the limited utility of broad genre classifications in understanding the effects of videogame training. Not only did the “action” vs “strategy” distinction prove ineffective in distinguishing cognitive outcomes of VGT-based cognitive interventions, the gameplay features where were found to differentially impact cognitive outcomes to general cognition, attention/perception, and higher-order cognition did not correspond to defining features of either genre.”

Lines 581-582: Importantly, of the central predictions of the originating from the action game training literature – that combat-focused games facilitate cognitive transfer to both attention/perception and higher-order cognitive functions [1, 25]”

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Aug 2;18(8):e0285925. doi: 10.1371/journal.pone.0285925.r004

Author response to Decision Letter 1


4 Apr 2023

The reviewers’ comments are in italics. The responses are below the comments.

Reviewer #1’s Comment: First I would like to thank the authors for addressing my former remarks. I found the manuscript much improved and could find all the information I looked for. However, I am still a bit confused regarding the Overall Cognition factor: what does it mean, and what does it add given that all other outcomes are then analysed as well? In my view the Overall Cognition effect sizes need better motivation or should be left out.

a. The authors have revised the manuscript with respect to overall cognition. We believe that, in line with the pre-registration, standard reporting of cognitive training on cognition, and past mixed results on videogame intervention effects on cognition, the reporting of overall cognition is important and has strong value, especially where the moderator effects are observed.

Reviewer #2 Comments: I would like to reiterate my strong interest in the approach and results. I sincerely believe the analysis of video game features is a great and promising improvement over the game genres approach, and a timely one as the early conceptualization of video game genres needs to be reconsidered to reflect the diversification of video game genres (and of the whole video game ecosystem). Concerning the revised manuscript, the authors have properly addressed most of my concerns, especially regarding the meta-regression analysis. However, there are a few conceptual (theoretical) as well as methodological issues that remain unresolved and that, once clarified, could improve the paper’s impact.

Main Comments:

1. First and foremost, I think the coding of game genres and format could be improved. The dichotomic conceptualisation of genres and formats is not only inconsistent with a growing literature on the evolution of video games reviewed in the introduction (lines 47-48): “a coarse distinction between gaming genres is insufficient to describe the profile of cognitive demands of a given game, as modern video games increasingly include features of multiple genres [7]”, but also, and more problematically, the chosen categories for genres (Action vs Strategy) and formats (Casual vs Serious) do not align with the existing literature and with the evolution of the field over the past decades (e.g., Dale et al., 2020; Dale & Green, 2017). Forcing all games to fit into these 2x2 categories is likely to result in blurred and overlapping categories with high heterogeneity, which could account for the lack of significant effects of these 2 factors. To illustrate my point, I was surprised to see games like “Tetris” or “Wii Fit Segway Circuit” be classified as Action-Casual. In contrast, games most commonly associated with the action genre, which are mostly first person shooter games, were found in the Action-Serious category, which also included games like FIFA, Super Mario, or Pac-Man. Such coding is only likely to add to the confusion the authors initially denounce in their introduction (e.g., note here that “Tetris” also appears in the strategy-casual category).

a. The authors entirely agree with reviewer 2 that these binarized genre classifications are defined on inherently indistinct categories, and have likely resulted in very heterogeneous games being classified within a given category. We disagree however that these distinctions “do not align with the existing literature”. Our selection of “action” vs “strategy” and “casual” vs “serious/non-casual” was primarily based on Simons et al. (2016)’s review of the video-game training literature, which, as reviewer 2 alluded to, identified “Action”, “Strategy”, and “Casual” as the most common labels applied in the literature at the time. While the cited papers by Dale and colleagues rightly assert the limited utility of genre classification, we assert that the vast majority of the literature still utilizes the genre classifications that we have binarized and encoded in this meta-analysis, particularly as the majority of the studies we cite were published before the Dale articles. In short, we agree that these categories do not properly reflect meaningful differences in gameplay profiles, but they do accurately reflect how these terms are (mis)applied in the bulk of the literature we are reviewing, and hence serve as a meaningful “control” to the game factors approach we recommend.

b. Re the examples of Tetris and Segway Circuit

i. “Tetris” and “Super Tetris” were both categorized as strategy-casual by our raters (see supplementary table 1 and our shared dataset). The only reference to “Tetris” in relation to “action” games as far as the authors are aware is the mention that “Tetris” is often used as an active control game in action game interventions (alongside “The Sims”).

ii. In terms of gameplay, Segway Circuit is a time trial racing game, strongly resembling other racing games which have been lumped under the “action” game umbrella such as “Need for Speed”, “Gran Turismo”, and “Mario Kart”, with the major difference that those games may (but don’t necessarily, based on the game mode selected) feature active opponents. In our opinion, it is not surprising that the majority of our raters coded “Segway Circuit” as an “Action” game based on these characteristics.

2. Second, the discussion relies heavily on non-significant effects, which should not be interpreted as evidence for the absence of a difference. For example, the non-significant effect of game genre (and format) may not be surprising considering how games genres and formats were coded in this meta-analysis – which deviates from the definitions of genre and format used in primary studies and in other meta-analyses too. More importantly, the authors rely on the lack of significant effects to criticize and thus diminish the importance of earlier work conducted more than 1 or 2 decades ago, when only a small number of genres were sufficient to appropriately capture meaningful differences between video games in terms of mechanics or gameplay features (and their associated cognitive demands).

a. This is a fair criticism, and our discussion has been revised to more soberly interpret these non-significant findings (as further specified in comments below).

b. As an aside, we disagree with the assertion that “only a small number of genres were sufficient to appropriately capture meaningful differences between video games... one or two decades ago”. While we go into some depth re. this opinion in other specific responses (#13, among others), fully addressing this is outside the scope of these responses. The primary author (Evan T. Smith) would happily continue this discussion with reviewer 2 after the review process is complete, should they wish to do so.

3. In sum, the analysis of game genre and format needs to be improved. At the very minimum, the authors need to fully address the limitations of their choice of grouping and categories and how it is inconsistent with previous work. Alternatively, given the confusion it is likely to add, this analysis could be dropped. In all cases, the discussion should be significantly altered to (i) not over-interpret non-significant effects and (ii) acknowledge that the 2x2 coding approach taken in this work does not reflect the complexity of the video game ecosystem to date and, for the most part, does not align with the categorization of games used in primary studies.

a. The limitations of our approach have been more strongly emphasized in our discussion, as detailed in later comments. Regarding dropping this analysis, we feel, as elaborated in our response to comment #1, that our coding of genre does reflect how genre has been (mis)applied in the bulk of past literature, and we therefore feel it is a valuable inclusion in this manuscript.

4. I particularly liked the idea of analyzing video game features, which is the main strength of this study, although the pattern of inter-correlations calls for a cautious interpretation of the results. This actually raised the question of how were these correlations computed since all variables are coded as binary (0 or 1)? What tests were conducted and what type of correlation coefficients are reported in Table 2?

a. Spearman’s correlation coefficients are reported in Table 2 (though in the case of binarized data these are equivalent to Pearson’s or Kendall’s coefficients). This is admittedly not the most rigorous way of assessing patterns of inter-correlations, but it does convey the fact that these factors are inter-related in a complex way, which, as reviewer 2 mentions, is important in interpreting our results.

b. As with the coded game factors themselves, a more rigorous mapping of how game factors relate to one another is a next logical step in this line of research, which the authors are keen to pursue in future manuscripts.
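The equivalence noted in (a) can be demonstrated with a minimal sketch. The binary codings below are simulated stand-ins, not the actual Table 2 data: for two-valued variables, midranks are an affine transformation of the values themselves, so Spearman's rho coincides with Pearson's r (the phi coefficient).

```python
import numpy as np

def average_ranks(v):
    """Midranks (ties get the average rank), as used by Spearman's rho."""
    v = np.asarray(v)
    ranks = np.empty(len(v), dtype=float)
    ranks[np.argsort(v, kind="stable")] = np.arange(1, len(v) + 1)
    for val in np.unique(v):
        ranks[v == val] = ranks[v == val].mean()
    return ranks

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=40)                      # hypothetical binary feature coding
y = np.clip(x + rng.integers(-1, 2, size=40), 0, 1)  # a related binary feature

r_pearson = np.corrcoef(x, y)[0, 1]
rho_spearman = np.corrcoef(average_ranks(x), average_ranks(y))[0, 1]
print(abs(r_pearson - rho_spearman) < 1e-12)  # True: identical on binary data
```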

5. While these results are an important step forward, they do not necessarily invalidate past research, unlike what the authors suggest. Indeed, much of that past research was conducted at a time when games could still be unanimously classified into a relatively small number of distinct and homogeneous genres.

a. See response to comments 2 and 11.

6. In this respect, including the variable “study group” in the analysis of game features, as well as in the analysis of genre and format, seems an unmotivated choice. More generally, it is unclear how genre, format and group are related to game features. Also, what was the rationale for including study group only, and not genre and format, in the analysis of game features? Are these three factors correlated, and how do they correlate with game features? Adding all 3 variables to Table 2 and showing how strongly video game genre, video game format and study group correlate with game features would be essential to better understand and interpret the pattern of effects (see point 3).

a. Regarding the relationship between genre, format, and game features – indeed we agree that the relationships of specific game features to these commonly-used genre distinctions are indistinct and by-and-large arbitrary, which is why we propose the game-factors approach as a more viable alternative.

b. Regarding the logic of excluding genre and format from the game-factors model – we conceptualize the game-factors approach and the genre approach as two different ways of categorizing the same complex phenomenon, that is, the wide range of possible gameplay styles present in commercial video games. We’re not interested in seeing how gameplay factors relate to cognition above-and-beyond genre – we’re interested in which of these methods more accurately relates to the cognitive outcomes of training. Hence, we ran models featuring each approach separately, and then compared model validity between them (via AIC in this case).

c. Genre/Format have been added to Table 2.
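The model-comparison logic described in response (b) above can be sketched as follows. Everything here is a simulation under stated assumptions: a plain linear model stands in for the actual meta-analytic LME, and the dummy variables, coefficients, and sample size are hypothetical (n = 118 merely echoes the number of investigations).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 118  # number of investigations in the meta-analysis
genre = rng.integers(0, 2, size=(n, 2))     # hypothetical genre dummies (action, casual)
features = rng.integers(0, 2, size=(n, 4))  # hypothetical gameplay-feature dummies

# Assume, purely for illustration, that effect sizes are driven by the features
g = 0.25 + features @ np.array([0.15, 0.0, -0.10, 0.05]) + rng.normal(0, 0.2, size=n)

def aic(y, X):
    """Gaussian AIC for an ordinary least-squares fit: n*log(RSS/n) + 2k."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * X1.shape[1]

# Lower AIC indicates the better-fitting model after penalizing complexity
print(aic(g, features) < aic(g, genre))
```

Under these simulated assumptions the feature-based model attains the lower AIC, mirroring the comparison strategy (separate models per categorization scheme, then AIC) described in the response.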

7. The discussion of the associations between individual game features and cognitive outcomes suggests that some earlier theories may have been misinterpreted. On multiple occasions, the authors misreport the action games literature as predicting that specific features are responsible for their cognitive effects. My reading of this literature is that it has put the emphasis on identifying the combination of features that distinguishes action games from other games that do not produce similar cognitive benefits. This question could be addressed by computing the similarity between games across a number of critical features such as those coded here. Recent theories actually propose that the effects of action games on cognition arise from the particular combination of features that characterize these games (including those coded here and many others such as the presence of variable rewards, the scaffolding of difficulty and challenge to keep players in their zone of proximal development, etc.). Importantly, the action video game literature does not predict that each and every feature in isolation has an effect, quite on the contrary (Bavelier & Green, 2019; Cardoso-Leite, P. et al., 2020). “Crucially, it has been our experience that each of these three characteristics on its own does not guarantee cognitive impact, at least when it comes to attentional control enhancements. Rather, action video games are unique in that they naturally layer these three game characteristics within the same overarching game play. For example, games that put a premium on just one characteristic such as pacing do not seem to similarly enhance attentional control and other aspects of cognition (e.g., Tetris).” (Green & Bavelier 2019, page 156).

a. We thank reviewer 2 for this valuable input and have adjusted our discussion accordingly (see comments below), specifically invoking the idea that our analysis of specific game features does not account for their conjunctive effects (i.e., lines 509-516, 545, 620-621).

8. This misreading of the literature is only briefly mentioned in the discussion, lines 581-584 (typo included): “Importantly, of the central predictions of the originating from the action game training literature – that combat-focused games facilitate cognitive transfer to both attention/perception and higher-order cognitive functions [1, 25] – is in agreement with the findings of the current meta-analysis.” This is not only a misinterpretation of the action game training literature that has not claimed such type of feature-specific effect on cognition, but also a misleading terminology to reduce action games to combat-games – e.g., fighting games are not expected to have the same impact as action-shooter games. Moreover, the next sentence further highlights inconsistencies between the interpretation of game features and the fact that the adopted coding of game genres is highly uncommon (see also point 1): Lines 484-588: “However, 50% of the “action” games sampled in this study did not feature combat as a primary gameplay mechanism, whereas 40% of the examined “strategy” games did feature combat as a primary interaction, meaning that the finding that combat games facilitate transfer to both of these cognitive outcomes is not a genre-dependent one.” Taken together, these two statements illustrate a main problem with this paper which needs to be addressed: that the coding of game genres applied in this meta-analysis does not align with that used so far in the field.

a. As stated in comment 1 and elsewhere in these responses, we disagree that the coding of game genres differs from what is used in the field, though certainly not all publications conform to this action/strategy or action/nonaction dichotomy. That being said, the point regarding conflating action games with combat is well-taken. We have modified several sections throughout the manuscript to correct this (i.e. lines 600-604 & those sections mentioned in reviewer comment #18)

9. The effect of “study group” is interesting, although not unexpected and consistent with prior meta-analyses focusing on video game interventions with active control groups. However, it remains unclear why this effect is indicative of the presence of a selection bias in past research and may thus contribute to publication bias (lines 466-484). The finding of stronger effect sizes in the experimental group is expected and doesn’t mean that the choice of control groups was intentionally biased (which the authors seem to interpret as a form of questionable research practice contributing to publication bias). There are two main problems with this claim. First, the present meta-analysis focuses on within-subject pre-post intervention effects, which are subject to important confounds (e.g., expectations, test-retest, practice or placebo effects), and are known to cause biases in meta-analytic investigations (e.g., see Cuijpers et al., 2017 for a detailed discussion of the limitations of this approach). I understand that this was necessary as the coding of game features required computing separate effect sizes for the experimental and control games. However, the limitations of this approach should be better acknowledged, especially given the strong interpretation the authors draw from the result. Second, the use of an active control group is critical for establishing causality and also necessary to rule-out non-specific effects and confounds. Importantly, the choice of control group is guided by the specific research question(s) addressed, which have evolved as the field has matured. To date, most active-control groups involve playing a commercially available video game in order to control for unspecific effects that may be induced by video game play per se (e.g., changes in affect or mood due to playing a video game for example). 
The choice of control games is thus driven to maximize differences across the dimensions or features (e.g., combinations of gameplay and mechanics) that are hypothesized to play a key role in driving the cognitive effect of the experimental game, while keeping important non-specific features equated (e.g., the very act of playing, positive feedback within the game, engagement with the training, social stimulation, etc). Despite a few exceptions in which the authors explicitly attempted to test whether the presence or absence of specific game features were critical (e.g., Oei & Patterson, 2015), the studies included in this meta-analysis did not seek to isolate a single video game feature, nor did they argue that a specific feature alone was causally responsible for the particular cognitive effects observed (see also Ben-Sadoun & Alvarez, 2022; Choi et al., 2020 for recent developments in this direction). In all, the discussion about selection bias reads as if the authors were assigning the wrong intention to the primary studies and thus reflects a misunderstanding of past research. To my knowledge, most studies included here sought to either replicate or clarify the extent of effects by delineating the particular types of games and cognitive domains impacted. To demonstrate the effectiveness of an intervention, primary studies assess whether greater benefits in the experimental group are found compared to the control group (the choice of control group is important to test hypotheses about the mechanisms of cognitive improvement). Tellingly, finding a larger effect in the experimental as compared to the control group does not mean that the control game did not produce any effect (as written line 479-480, but contradicted line 633-635) but instead that the effect of the experimental game was stronger than that of the control game. This whole section needs to be clarified to avoid misleading conclusions.

a. We thank reviewer 2 for this thorough and fair critique of our conclusions regarding the experimental/control group findings. As with most of our discussion, we have reworked the sections in question to more accurately reflect the ambiguity in these findings that Reviewer 2 has highlighted here (see lines 482-495).

10. The preregistration is lacking critical information about the analysis plan (meta-analytic models), hypotheses tested and expected results. This makes the preregistration less useful as it is not clear what represents a deviation from the planned protocol and still leaves researchers enough degrees of freedom and flexibility in the analysis and reporting of their results.

a. We thank the reviewer for this insight. At the time of pre-registration, the focus was mainly on the main aims regarding the game-factors meta-analysis and the inclusion/exclusion criteria. We therefore believe that the pre-registration remains more valuable than a non-registered meta-analysis with respect to the main goals of the project and the specifics of the inclusion/exclusion criteria in the broad field of video-game training.

11. Genre (action vs strategy) and Format (serious vs casual) are first introduced as 4 types or categories of video games, and then analyzed as two separate and independent (orthogonal) factors. While I understand the rationale of casual games spanning various genres, I don’t understand the dichotomy between casual and serious and what was predicted. Here, only the serious-FPS category seems to correspond to what has been called action video games in past research and contains mostly first and third person shooters as typically used in action games interventions (which would show here as an interaction effect). A consequence of this dichotomous view of game genres is that some games do not fit in their assigned category. For example, the games Tetris, Wii Fit Segway Circuit, Angry Birds, Balance, Centipede, Pacman, marble madness, FIFA 2010, Pinball Hall of Fame, MultiTask, or New Super Mario Bros are labeled as action games despite being reported as non-action (mostly control) games in the primary studies. This is at best confusing and at worse likely to set the field on the wrong path.

a. The format variable is fairly rigidly defined via the definition of “casual” games commonly used in the literature, that being games designed to be played for 30 minutes or less in a single session (with “serious”/“long-form” games therefore being those designed to be played for longer than 30 minutes in a session). We agree, as mentioned in previous comments, that binarizing the action/strategy distinction does not adequately categorize all games, but as discussed in our response to comment #1 we feel this forced categorization accurately reflects the common misapplication of genre terms in the literature. Accurately categorizing the genre of a given game would require a much more granular approach than has been applied in the field to date, and in the end many of those distinctions would be arbitrary (do a first-person shooter game and a first-person action game that does not involve projectile combat meaningfully differ in terms of the cognitive demands evoked?). The authors assert that a more granular categorization of genre (i.e. “action/strategy/puzzle”) does not address the issues that reviewer 2 raises, whereas a game-factors approach does – indeed, that is the thesis of this paper.

12. I also wonder whether genre and format are meant to be orthogonal dimensions or simply different attributes. The classification proposed by Simons only considers 3 categories: action, strategy and casual. While I understand the authors’ argument for casual games that span distinct genres, I don’t think this applies to serious games --a term that has been more commonly used to refer to educational or therapeutic games, in contrast to entertainment video games. And indeed calling commercially available FPS or TPS, serious games is likely to be highly confusing to all readers.

a. We had intended the term “Serious” to refer to the inverse of “Casual”, i.e. games designed to be played for sessions longer than 30 minutes. We have adjusted this term to “long-form” throughout the manuscript, to avoid confusion with the term “serious” game as it has been used to refer to educational/therapeutic games.

13. When the authors state that the genres have become more “blurry” (line 598), it would be helpful to clarify that this is due to both an increase in the number of genres and subgenres with the classification becoming more granular with more genres (rather than less) and narrower subgenres (e.g. FPS is a subgenre of action), together with greater overlap between genres and subgenres. This state of affairs has led to the emergence of hybrid-genres such as action-role-playing games that mix action mechanics with role playing features.

a. We don’t entirely agree that the lack of distinction among game genres results from “both an increase in the number of genres … and narrower subgenres (e.g. FPS is a subgenre of action), together with greater overlap between genres and subgenres.” Genres have always been a post-hoc categorization of a rich and complex gameplay space, from the earliest days of commercial gaming as a hobby. Stating that “genres are becoming more indistinct” miscategorizes the issue – gaming enthusiasts, marketers, etc. are continuously developing increasingly specific genre terminology to describe specific points in “gameplay space”. Admittedly, this is driven by the continual permutation of existing gameplay tropes and the development of new ones in the developer space, but genres have never been accurate summaries of gameplay (in this author’s opinion).

b. A full exploration of this topic is, as mentioned, outside the scope of this paper (and indeed, the field of cognitive science) – considering this, we have elected to keep our summary of this issue brief, but have adjusted the wording to better convey the nuances of our argument beyond “blurry” (see lines 613-616).

14. The analysis separating by outcome is interesting but may be underpowered and should thus be reported as exploratory or at least interpreted cautiously.

a. We acknowledge that our analysis of separate outcomes lacks power, particularly with regard to the memory and psychosocial outcomes on lines 570-580. We contend that the sub-analyses pertaining to Attention/Perception and Higher-order Cognition, with ks of 90 and 54 respectively, are sufficiently powered, particularly relative to other meta-analyses.

15. I found surprising that Movement and Perspective produce opposite effects in Table 3, given that they are strongly and positively correlated in Table 2. Could the author comment?

a. Correlation between two predictor variables does not mean that they must have the same main effect on the dependent variable, especially after properly controlling for other influences. As each fixed effect in the LME approach is adjusted for all other fixed effects, we can state that, for example, the impact of perspective on cognitive transfer is negative (i.e., favoring a 1st-person perspective), all other things being equal, including movement. We would expect, based on these results, that egocentric games in a 1st-person perspective would produce greater cognitive transfer than egocentric games in a 3rd-person perspective. The correlation simply states that, in addition to this, egocentric games are more likely to be 1st-person whereas allocentric games are more likely to be 3rd-person.
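The point in (a) can be illustrated with a small simulation. All variable names, codings, and effect magnitudes below are hypothetical, not taken from the study data: two binary predictors can be positively correlated while their adjusted regression effects point in opposite directions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
movement = rng.integers(0, 2, size=n)  # hypothetical coding: 1 = allocentric
# Perspective co-occurs with movement 80% of the time (positive correlation)
perspective = np.where(rng.random(n) < 0.8, movement, 1 - movement)
# Assumed true effects point in opposite directions despite the correlation
y = 0.3 * movement - 0.3 * perspective + rng.normal(0, 0.1, size=n)

# OLS with an intercept; each coefficient is adjusted for the other predictor
X = np.column_stack([np.ones(n), movement, perspective])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.corrcoef(movement, perspective)[0, 1] > 0)  # True: positively correlated
print(beta[1] > 0 and beta[2] < 0)                   # True: opposite adjusted effects
```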

16. I could not find the reference to Valdez 2011. Was this study only a 15 minutes intervention as suggested by the study label in the excel file (Experimental Study, 15 minute RDR-NV group)?

a. The full citations for studies included in the meta-analysis but not directly referenced in the body of the main manuscript can be found in our supplementary references, which for the Valdez study in question is “Valadez, J. J., & Ferguson, C. J. (2012). Just a game after all: Violent video game exposure and time spent playing effects on hostile feelings, depression, and visuospatial cognition. Computers in Human Behavior, 28(2), 608–616. https://doi.org/10.1016/j.chb.2011.11.006”. We note that the year of this study was incorrectly listed as “2011” in our public access data (this has been corrected), but is correctly cited to the year 2012 in the supplementary materials.

b. To answer reviewer 2’s initial question, Valadez included both a 15-minute and a 45-minute intervention group. Indeed, this is the shortest intervention included in this meta-analysis.

17. Line 590: Did you mean coarse instead of course?

a. Indeed we did – we thank reviewer 2 for the correction.

18. In conclusion, the present results do not support the following conclusions: (i) game genre is a useless construct; it has at least been valid and instrumental in guiding hypothesis generation and testing; (ii) the analysis of game features invalidates the theories about the action video game effects; (iii) the choice of control games reflects an experimenter bias (implying a form of questionable research practice). As a consequence, I would ask to rephrase and downplay several strong judgmental statements in the discussion:

i. We thank reviewer 2 for their thorough critique of our discussion and recommendations for improvement. All of the sections listed below have been adjusted in line with Reviewer 2’s feedback – line numbers after revision (as well as specific comments as necessary) are included below.

b. Lines 524-530: “However, games with passive thresholds or objectives better facilitate transfer to AP outcomes than did games with active opponents, and the AP construct was insensitive to the presence or absence of time pressure. This finding provides evidence that some of the post-hoc justifications given in the past explaining the link between “action” video game play and enhanced attention and perception may be inaccurate – specifically, it does not seem that active opponents or time pressure are necessary elements of gameplay to drive attention & perception outcomes.”

i. This section has been reworked in line with reviewer 2’s suggestions (lines 538-545)

c. Lines 630-640: “This analysis, in line with previous reviews [2] confirmed the presence of publication bias within the video game training field. Our finding with regards to transfer from experimental groups vs. active control groups using VG interventions may shed light on this problem. Based on our findings that videogames utilized in active control group failed to produce cognitive transfer regardless of their gameplay properties, we can reasonably conclude that some form of bias is suppressing significant transfer in those groups. Addressing this bias is imperative in its own right, and may contribute to the reduction of publication bias in future game training interventions. The authors suggest that future studies examine multiple games of various cognitive profiles with the assumption that differential transfer will be observed between groups (i.e. to differing measures of transfer), rather than the a-priori assumption that one condition will be less effective than another [15, 22, 23, 58].”

i. This paragraph has been removed. Discussion of selection bias is now relegated to lines 468-488 and presents, we believe, a more balanced perspective on the findings that intervention games consistently produced greater transfer than control games in this analysis.

d. Lines 499-510: “While games utilizing these gameplay features are common in the field (“first-person-shooter” games utilize a first-person perspective, “real-time-strategy” and many “platformer” and “puzzle” games utilize allocentric movement), neither the first-person perspective nor allocentric movement style have been theorized to have a strong impact on cognitive transfer resulting from VGT interventions. Interestingly, no single game examined in the present meta- analysis featured both an allocentric movement style and a 1st-person perspective, though in theory these results suggest that a game featuring both would be a candidate for effective cognitive intervention.”

i. This section has been reworked in accordance with reviewer 2’s comments (lines 505-524)

e. Lines 549-552: “As was the case with transfer to the AP construct, transfer to the HC construct was insensitive to the presence or absence of time pressure, which has been invoked as a crucial factor driving cognitive transfer from both “Action” and “Strategy” game interventions [19, 25].”

i. We contend that this statement regarding the insensitivity of the HC construct to time pressure, as well as the invocation of time pressure as a driver of cognitive change in past work, is accurate. This has been reworded slightly so as to be less strongly assertive.

f. Lines 576-581: “The findings of the present meta-analysis demonstrate the limited utility of broad genre classifications in understanding the effects of videogame training. Not only did the “action” vs “strategy” distinction prove ineffective in distinguishing cognitive outcomes of VGT-based cognitive interventions, the gameplay features where were found to differentially impact cognitive outcomes to general cognition, attention/perception, and higher-order cognition did not correspond to defining features of either genre.”

i. The “Relevance of Existing Genre Definitions…” section (beginning on line 590), of which this quoted passage was the opening, has been substantially reworked in line with Reviewer 2’s suggestions.

g. Lines 581-582: “Importantly, of the central predictions of the originating from the action game training literature – that combat-focused games facilitate cognitive transfer to both attention/perception and higher-order cognitive functions [1, 25]”

i. We feel this statement is broadly accurate - we do in fact see a positive impact on transfer of combat-based games, which aligns with previous literature. This statement is not based on any null effect (which reviewer 2 rightly criticized with regards to other strong statements in our discussion). While reviewer 2 rightly noted that we conflated combat with the “action” genre in an earlier comment, we feel that this statement – specifically, that the combat-focused nature of many action games may be a facilitator of transfer from training with those games – does reflect a fairly common opinion in the literature.

Attachment

Submitted filename: Smith & Basak 2022 Response to Reviewers 03.29.2023.docx

Decision Letter 2

Alessandra S Souza

5 May 2023

A Game-Factors Approach to Cognitive Benefits from Video-Game training: A Meta-Analysis

PONE-D-22-06647R2

Dear Dr. Basak,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Alessandra S. Souza, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The authors have addressed all my comments and suggestions and have modified their manuscript accordingly. There are still some points of disagreement regarding the conceptualisation and discussion of game genres in the field. However, I believe that disagreements and debates can be beneficial to science, especially when they are respectful and constructive.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

**********

Acceptance letter

Alessandra S Souza

22 Jun 2023

PONE-D-22-06647R2

A game-factors approach to cognitive benefits from video-game training: A meta-analysis

Dear Dr. Basak:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Alessandra S. Souza

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Method. Full search terms for literature search.

    (DOCX)

    S1 Table. Gameplay factors of games featured in included studies.

    (DOCX)

    S2 Table. Outcome measures categorized by construct, with representative studies.

    (DOCX)

    S3 Table. Characteristics of included video-game training studies.

    (DOCX)

    S1 List. References for included studies.

    (DOCX)

    S1 File. PRISMA checklist.

    (DOCX)

    Attachment

    Submitted filename: REVIEW OF PONE-D-22-06647.docx

    Attachment

    Submitted filename: Smith & Basak 2022 Response to ReviewersFinal.docx

    Attachment

    Submitted filename: Smith & Basak 2022 Response to Reviewers 03.29.2023.docx

    Data Availability Statement

    All relevant data are within the paper and its Supporting information file. Additional data are shared at https://osf.io/6j792/.


    Articles from PLOS ONE are provided here courtesy of PLOS
