Table 1.
Critical evaluation of the current studies relative to best-practice guidelines for cognitive training methodology and reporting standards (adapted from Simons et al., 2016, and Redick, 2015).
| | Criterion / Commentary |
|---|---|
| | Best practice recommendations from Simons et al. (2016) |
| ✓ | Assess pre-treatment baseline performance for all groups |
| | Both trials used a pre-post/follow-up test design. Pre-treatment performance was assessed and controlled when probing between-treatment differences at post-treatment/follow-up. |
| ✓ | Include an active, credible control group matched for expectancies |
| | Study 1: Behavioral parent training (BPT) served as the active comparator. BPT is currently the gold-standard psychosocial treatment for pediatric ADHD (for review, see Evans et al., 2018). The CET and BPT treatments did not differ in caregiver-reported feasibility or acceptability (Kofler et al., 2018). Study 2: Working memory and inhibitory control are both putative core mechanisms implicated in ADHD and featured in prominent conceptual models of the disorder’s etiology and psychopathology. The two versions were identical except for the target mechanism and therefore served as active, credible controls for each other. The treatments were identical in terms of expectancies and did not differ in caregiver-reported feasibility or acceptability (Kofler et al., 2020). |
| ✓ | Include at least 20 participants in each treatment arm |
| | Study 1: All analyses included BPT n=27 and CET n=27 participants. Study 2: All analyses included ICT n=29 and CET n=25 participants. |
| ✗ / ✓ | Randomly assign children to condition |
| | Study 1: Children were assigned sequentially based on date of intake, with a priori defined cut-off dates (i.e., BPT recruitment was closed when CET was ready for testing). Random assignment was not feasible for this initial study because of CET’s lengthy development cycle (i.e., children could not be assigned to CET before the software existed). Meta-analytic evidence indicates that randomization does not significantly affect estimates of working memory training’s impact on working memory for children, but non-randomized studies inflate far-transfer estimates by d=0.20 (Sala & Gobet, 2017). Study 2: Children were randomly assigned using unpredictable allocation concealment. |
| ✓ | Pre-register the trial, and explicitly acknowledge departures from the pre-registered plan |
| | Both studies’ outcome measures and detailed data-analytic plans were pre-registered. Preregistration occurred during data collection and prior to accessing the data. Data analyses were conducted masked to treatment allocation. |
| ✓ | Mask raters for all subjective outcome measures |
| | Teachers were masked to treatment status. Masking was also included for objective outcomes: clinicians conducting standardized academic testing were masked to treatment status in Study 2. In Study 1, it is reasonable to conclude that clinicians were not masked to treatment status because only one treatment was offered at a time. |
| ✓ | Label any analyses conducted after inspecting the data as ‘exploratory’ |
| | The analyses reported herein did not depart from the preregistered plan, with the exception of clearly marked analyses that were added during the peer review process. |
| ✓ | Avoid subgroup analyses unless preregistered |
| | No subgroup analyses were preregistered; therefore, none were conducted. Within-treatment analyses were limited to planned contrasts to characterize the pattern of change for each treatment. |
| ✓ | Identify all outcome data collected, including outcomes not reported herein |
| | A complete list of data collected for secondary research questions can be found on the studies’ OSF preregistration websites. |
| | Additional recommendations from Redick (2015) |
| ✓ | Report full pre-test and post-test means and SDs for all groups |
| | Pre-treatment and post-treatment means and SDs are shown in Tables 2 and 3, respectively. |
| ✓ | Provide full, subject-level data as supplementary material |
| | JASP (.jasp) data files with subject-level data and results output are posted for peer review on the study’s OSF website. |
| ✓ | Use likelihood ratios, in particular Bayes Factors |
| | Traditional p-values are supplemented with Bayes Factors to allow stronger conclusions regarding both between-treatment equivalence and emerging between-treatment differences (see the note following this table). |
| ✓ | Examine outcomes graphically to ensure that the pattern of pre- to post-test change is theoretically consistent with the expected pattern of results |
| | Outcomes were examined graphically to confirm that the pattern of change was consistent with the descriptions in the text. |
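Note. As context for the likelihood-ratio criterion above, the sketch below restates the Bayes Factor in its standard form as a ratio of marginal likelihoods; the BF10/BF01 labels follow the convention commonly reported by Bayesian software such as JASP. This note is illustrative only; the specific priors and models used in the trials’ analyses are those described in the preregistered analytic plans.

```latex
% Bayes Factor comparing H1 (e.g., a between-treatment difference) with H0 (equivalence).
% Each model's likelihood for the data D is marginalized over its prior on the model parameters.
\mathrm{BF}_{10}
  = \frac{p(D \mid H_1)}{p(D \mid H_0)}
  = \frac{\int p(D \mid \theta_1, H_1)\, p(\theta_1 \mid H_1)\, d\theta_1}
         {\int p(D \mid \theta_0, H_0)\, p(\theta_0 \mid H_0)\, d\theta_0},
\qquad
\mathrm{BF}_{01} = \frac{1}{\mathrm{BF}_{10}}
```

Values of BF10 greater than 1 indicate that the data are more likely under H1 than under H0, whereas BF01 greater than 1 quantifies evidence in favor of the null. It is this latter property that allows Bayes Factors, unlike conventional p-values, to support conclusions about between-treatment equivalence as well as between-treatment differences.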