Archives of Clinical Neuropsychology. 2024 Jan 25;39(5):626–634. doi: 10.1093/arclin/acae002

Minimal Detectable Change for the ImPACT Subtests at Baseline

Kristen G Quigley 1, Madison Fenner 2, Philip Pavilionis 3, Nora L Constantino 4, Ryan N Moran 5, Nicholas G Murray 6
PMCID: PMC11269890  PMID: 38273670

Abstract

Objective

To establish the minimal detectable change (MDC) of the subtests that comprise the composite scores from remotely administered Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) baselines.

Method

Remote ImPACT baseline data from 172 (male = 45, female = 127) National Collegiate Athletic Association Division I student-athletes from the 2020 and 2021 athletic preseasons were used to calculate the MDC at the 95%, 90%, and 80% confidence intervals (CIs) for all subtest scores used to generate the four core composite scores and the impulse control composite.

Results

The MDCs for the verbal memory subtests at the 95% CI were 10.31 for word memory percent correct, 4.68 for symbol match total correct hidden, and 18.25 for three letters percentage correct. Visual memory subtest MDCs were 19.03 for design memory total percent correct and 4.90 for XO total correct memory. Visual motor speed subtest MDCs were 18.89 for XO total correct interference and 5.40 for three letters average counted correctly. Reaction time (RT) MDCs were 0.12 for XO average correct, 0.95 for symbol match average correct RT, and 0.28 for color match average correct. Impulse control MDCs were 5.97 for XO total incorrect and 1.15 for color match total commissions. One-way repeated measures MANOVA, repeated measures ANOVAs, and Wilcoxon signed-ranks test all suggested no significant difference between any subtests across two remote ImPACT baselines.

Conclusions

The ImPACT subtest scores did not significantly change between athletic seasons. Our study suggests the subtests be evaluated in conjunction with the composite scores to provide additional metrics for clinical interpretation.

Keywords: Head injury, Traumatic brain injury, Mild cognitive impairment

INTRODUCTION

Neuropsychological testing remains an important element of sport-related concussion (SRC) return-to-play and return-to-learn decision making (Mason et al., 2020; Netzel et al., 2022; Patricios et al., 2023; Quigley et al., 2022). Prior literature advises against relying solely on athlete-reported symptom severity scores when making return-to-play decisions and instead recommends that a multifaceted concussion battery be conducted in all post-concussive return-to-play decisions (Broglio et al., 2007a). Neuropsychological testing alone has been shown to be more accurate at monitoring concussion recovery than symptom reporting, and it is a key component of SRC batteries (Broglio et al., 2007a; Makdissi et al., 2010; Meehan III et al., 2012; Schatz et al., 2006). That said, cognitive performance may differ between individuals due to sex, psychiatric conditions, headache disorders, or a previous diagnosis of attention deficit/hyperactivity disorder (ADHD) and/or dyslexia (Cottle et al., 2017). Because many factors can influence neurocognitive performance, it is imperative that each athlete complete a preseason baseline so that clinicians can compare post-concussion testing results against baseline scores.

Composite scores

It is common to create composite scores for cognitive tests in an attempt to normalize test results, combine tests that span similar cognitive domains, and reduce the overall quantity of information presented to the patient (Crockett et al., 1969; Iverson et al., 2019; Karr et al., 2022; Rund et al., 2006; Stenberg et al., 2020). While this may help with clinical management, the subtests used to compute the composite scores contain meaningful cognitive data that can aid in clinical decisions. For example, Tulsky and coworkers (2017) examined the use of the National Institutes of Health (NIH) Toolbox Cognition Battery in individuals with mild and severe traumatic brain injury and found that the subtests used in the creation of the Toolbox's composite scores were equally relevant to the composite scores in determining abnormal cognitive functioning (Heaton et al., 2014). Whereas the composite scores were more commonly used to detect impairment, the subtests contained information that helped clinicians make pointed decisions about cognitive decline (Tulsky et al., 2017), highlighting the important role of the subtests that comprise composite scores in clinical decision making.

The ImPACT test

The Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) (ImPACT Applications Inc., Pittsburgh, PA, USA) test is one of the most widely administered neurocognitive concussion assessments in the National Collegiate Athletic Association (NCAA), as it can be used as a preseason baseline as well as post-injury when a health care provider suspects an athlete may have suffered a concussion (Netzel et al., 2022; Tsushima et al., 2016). Much like the Toolbox, the ImPACT test creates composite scores based on averages and sums of individual task scores. As a comprehensive neurocognitive exam, ImPACT consists of six tasks, or subtests: Word Memory, Design Memory, X's and O's (XO), Symbol Match, Color Match, and Three Letters. Each task generates between four and six scores, some based on reaction time and others on accuracy, collectively termed the "subtest scores."

Historically, most ImPACT research has focused solely on evaluating the composite scores (Ferris et al., 2022; Mason et al., 2020; Netzel et al., 2022; Thoma et al., 2018). Each composite score is computed from two or three subtest scores drawn from different ImPACT tasks to target one neurocognitive domain. For instance, the visual memory composite score is the average of the design memory total percent correct and the X's and O's total correct memory (the latter divided by 12 and multiplied by 100). A full breakdown of the ImPACT composite scores and the subtests that comprise them can be seen in Table 1. An optimal visual memory composite score, for example, is 80 with a minimal detectable change (MDC) of 24.44 at a 95% confidence interval (CI), but no documentation exists that outlines optimal scores or MDCs for the subtests (The derived scores | ImPACT version 4, 2023). Prior literature found that the ImPACT composite scores lack discriminability, suggesting that the ImPACT subtests may hold more valuable and distinguishable insights into cognitive functioning (Maerlender et al., 2013).

Table 1.

Breakdown of the ImPACT composite scores and the subtest scores that comprise them

Verbal memory (average of):
    Word memory: total percent correct
    Symbol match: (total correct hidden ÷ 9) × 100
    Three letters: percent total letters correct
Visual memory (average of):
    Design memory: total percent correct
    X's and O's: (total correct memory ÷ 12) × 100
Visual motor speed (average of):
    X's and O's: total correct interference ÷ 4
    Three letters: average counted correctly × 3
Reaction time (RT)a (average of):
    X's and O's: average correct RT interference
    Symbol match: average correct RT visible ÷ 3
    Color match: average correct RT
Impulse control (sum of):
    X's and O's: total incorrect interference
    Color match: total commissions

aAll reaction times are measured in seconds.
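The arithmetic in Table 1 can be sketched in code. The following is a minimal illustration of the composite formulas; the function and parameter names are hypothetical (not ImPACT's actual export fields), and the formulas are transcribed from the table:

```python
# Illustrative sketch of the Table 1 composite formulas.
# Function and parameter names are hypothetical, not ImPACT export fields.

def verbal_memory(word_mem_pct, sym_match_hidden, three_letters_pct):
    """Average of word memory %, scaled symbol match hidden (out of 9),
    and three letters % correct."""
    return (word_mem_pct + (sym_match_hidden / 9) * 100 + three_letters_pct) / 3

def visual_memory(design_mem_pct, xo_correct_memory):
    """Average of design memory % and scaled XO memory (out of 12)."""
    return (design_mem_pct + (xo_correct_memory / 12) * 100) / 2

def visual_motor_speed(xo_correct_interference, three_letters_avg_counted):
    """Average of XO interference correct / 4 and three letters counted * 3."""
    return ((xo_correct_interference / 4) + (three_letters_avg_counted * 3)) / 2

def impulse_control(xo_total_incorrect, color_match_commissions):
    """Sum of XO total incorrect and color match total commissions."""
    return xo_total_incorrect + color_match_commissions
```

As a sanity check, plugging in the time 1 means from Table 4 (design memory 82.92, XO memory 9.37) yields a visual memory composite of about 80.5, consistent with the reported composite mean of 80.47.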

It may be useful for clinicians to verify whether one specific ImPACT task (i.e., one subtest score within the composite) led to a significant score change, as some tasks are used to calculate multiple composite scores and thus may affect multiple domains. All individual subtest scores are available to clinicians on the automated completion report generated by ImPACT; therefore, the data are accessible for analysis if desired. Previous literature found that only 26% of athletic trainers who reported interpreting ImPACT data on their own had participated in an ImPACT workshop (Covassin et al., 2009). Without ImPACT-specific training, the average sports medicine professional will likely rely on ImPACT's built-in composite score flagging system; however, the subtests provide useful data when making a clinical judgment (Covassin et al., 2009).

The composite scores each aim to target a single cognitive domain, but the individual tasks that contribute to each composite score vary greatly. The visual memory composite score, for example, combines a subtest from the design memory task with a subtest from a memory task that requires test takers to memorize the highlighted X's and O's in an array of X's and O's. Though both are visual memory tasks, they differ greatly in format, and it is possible someone may struggle with only one of them. Therefore, if a specific subtest area within the composite score is abnormal, clinicians can make more direct recommendations on specific cognitive training to improve that distinct subtest area, rather than generalized recommendations for a broad cognitive domain.

Minimal detectable change

The MDC is calculated using the standard error of measurement (SEM) and provides cut-off ranges for sports medicine staff to use when making clinical judgments from repeated tests (Howell et al., 2021; Oldham et al., 2018; Quigley et al., 2022). ImPACT's automated flagging process uses the reliable change index (RCI) to evaluate significant changes between testing results. RCIs are calculated by dividing the difference between the two test scores by the standard error of the difference, and thus require the clinician to perform the calculation for each test-taker (Iverson et al., 2003). Literature has suggested the RCI allows clinicians to estimate the range of measurement error in test–retest difference scores, not to demonstrate the significance of score changes (Iverson et al., 2003). By contrast, MDC ranges can be placed into a table that clinicians can consult at a glance to evaluate whether score changes were significant. MDCs have been used in postural control exams, such as the Balance Error Scoring System (BESS) and the Balance Tracking System (BTrackS), as well as in cognitive exams like the King-Devick test (Carlson et al., 2020; Elbin et al., 2021; Levy et al., 2018). Both the BESS and King-Devick test use MDCs to classify impairment post-SRC, making result interpretation much more streamlined for sports medicine staff. Recent research by Quigley and coworkers (2022) created MDCs for the ImPACT composite scores to define significant score change ranges, but little research exists examining the subtests that comprise the ImPACT composite scores (Henry & Sandel, 2015). This paper serves as an expansion of the findings of Quigley and coworkers (2022); to avoid redundancy, the authors recommend consulting that paper for additional background information and findings from previous literature.
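To make the RCI calculation described above concrete: its denominator is the standard error of the difference, SEdiff = √2 × SEM. A minimal sketch (the function name is ours, not ImPACT's):

```python
import math

def reliable_change_index(score1, score2, sem):
    """RCI: the difference between two test scores divided by the
    standard error of the difference, SEdiff = sqrt(2) * SEM."""
    se_diff = math.sqrt(2) * sem
    return (score2 - score1) / se_diff
```

An |RCI| greater than 1.96 is conventionally treated as a significant change at the 95% level; this per-test-taker calculation is what a pre-computed MDC table spares the clinician from repeating.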

Objective

Building off the composite MDCs calculated in Quigley and coworkers (2022), this paper aims to establish MDCs for all subtest scores that comprise the five ImPACT composite scores using remotely administered ImPACT baselines from two consecutive athletic preseasons. Using the subtests will increase the number of metrics available to clinicians making diagnostic decisions and may help demonstrate whether athletes are struggling with one aspect of a cognitive domain. The subtest MDCs calculated in this paper are intended to serve as an expanded clinician's guide to interpreting ImPACT baseline score changes. Separate from the MDCs, the researchers hypothesize that the ImPACT subtest scores will not significantly differ across remote testing time points, as previous literature demonstrated with the composite scores (Mason et al., 2020; Netzel et al., 2022; Quigley et al., 2022).

METHODS

Participants

Using the same cohort as Quigley and coworkers (2022), 172 NCAA Division I athletes (n = 172; male = 45, female = 127) participated in this study. The average age of the participants was 19.37 ± 1.24 years at the first remote testing time point and 20.25 ± 1.29 years at the second. The frequency of participants in each sport can be found in Table 2. The average time between tests was 302 ± 46.35 days, as the study took place over two consecutive athletic seasons. All participants were recruited from the same athletic department and had completed the NCAA-mandated preseason testing prior to any athletic participation. All ImPACT scores were reviewed by a credentialed ImPACT consultant (CIC) as well as ImPACT's automated RCI flagging system, and participants with invalid ImPACT baseline scores were required to retake the exam at least 48 h after their initial baseline for athletic clearance. ImPACT flags tests as invalid if the test-taker is between 14 and 59 years old and receives an Impulse Control composite score greater than 30, a Word Memory Learning percent correct below 69%, a Design Memory Learning percent correct below 60%, and/or a Three Letters Total Letters Correct score below 8. Athletes were excluded from this study if they self-reported a history of ADHD and/or dyslexia at either time point, based on previous literature, as both ADHD and dyslexia may influence ImPACT performance (Maietta et al., 2023). Of the 195 identified participants, 20 were excluded due to a self-reported history of ADHD and/or dyslexia. Though ImPACT deemed their results valid, an additional three participants were excluded by the CIC because their results from the second test were significantly better than those from the first, and the score differences surpassed what is considered normal from the learning effects of ImPACT. This may have been due to a lack of effort at the first time point that went unnoticed because the scores were still above average.

Table 2.

Frequency (n) and percent (%) of participants of each sex and sport

Frequency (n) Percentage (%)
Sex
Male 45 26.2
Female 127 73.8
Total 172 100.0
Sport
Baseball 14 8.1
Basketball 2 1.2
Cheer, stunt, dance 40 23.3
Cross-country, track and field 30 17.4
Football 4 2.3
Golf 11 6.4
Soccer 18 10.5
Softball 15 8.7
Swim and dive 18 10.5
Volleyball 10 5.8
Tennis 10 5.8
Total 172 100.0

Participants were asked to take ImPACT in their native language; however, some participants did not fully adhere to this instruction and took the test in a different language at each time point. The test languages at the first time point were English (n = 159), French (n = 5), Czech (n = 1), Portuguese (n = 1), Polish (n = 2), Spanish (n = 2), Russian (n = 1), and German (n = 1). The test languages at the second time point were English (n = 162), French (n = 4), Czech (n = 1), Polish (n = 1), Spanish (n = 2), Russian (n = 1), and German (n = 1). This mismatch was considered as a possible exclusionary criterion, but participants were ultimately included if their ImPACT scores were valid.

The ImPACT test

The ImPACT test is a computerized neurocognitive diagnostic tool designed to serve as both a baseline and post-injury concussion assessment. The test takes ~30 min to complete and can be administered either with supervision, or in an uncontrolled remote environment (Netzel et al., 2022). ImPACT collects data from six tasks to create composite scores based on five domains commonly affected by concussion, named: visual memory, verbal memory, visual motor speed, reaction time, and impulse control. Though not considered one of the four core composite scores, the impulse control composite is helpful in determining if baselines are valid by measuring error. For this reason, the present study decided to include impulse control along with the other four core composites (The derived scores | ImPACT version 4, 2023). The composite scores are calculated using a combination of subtest scores from two or three of the six ImPACT tasks (Table 3). In addition to the exam results, ImPACT collects 22 self-reported common concussion symptoms evaluated on a 7-point Likert scale and a brief medical history. ImPACT automatically generates a score report for clinicians with the composite scores, subtest scores (including those not used in composite calculation), medical history and self-reported concussion symptoms.

Table 3.

Minimal detectable change values at the 95%, 90%, and 80% confidence intervals for the subtests that comprise the ImPACT composite scores over both remote assessment periods

Measure  MDC 95 (95% confidence)  MDC 90 (90% confidence)  MDC 80 (80% confidence)
Verbal memory 18.60 15.66 12.17
Word memory percent correct 10.31 8.68 6.74
Symbol match total correct hidden 4.68 3.94 3.06
Three letters percentage letters correct 18.25 15.36 11.94
Visual memory 24.44 20.57 15.99
Design memory total percent correct 19.03 16.02 12.45
XO total correct memory 4.90 4.12 3.20
Visual motor speed 8.76 7.37 5.73
XO total correct interference 18.89 15.90 12.35
Three letters average counted correctly 5.40 4.55 3.53
Reaction timea 0.14 0.12 0.09
XO average correct 0.12 0.10 0.08
Symbol match average correct RT 0.95 0.80 0.62
Color match average correct 0.28 0.24 0.19
Impulse control 6.13 5.16 4.01
XO total incorrect 5.97 5.02 3.91
Color match total commissions 1.15 0.97 0.75

aAll reaction times are measured in seconds.

Note: MDC = minimal detectable change; RT = reaction time.

In practice, ImPACT is often used as a stand-alone concussion diagnostic test, though a full battery containing vestibulo-ocular, cognitive, and balance assessments is recommended by experts (Ferris et al., 2022; Patricios et al., 2023). The test–retest reliability of ImPACT has been heavily researched since its conception; however, no single diagnostic test should be used to make definitive clinical judgments. Intraclass correlation coefficients (ICCs) calculated by multiple studies of both collegiate and high-school athletes ranged from 0.40 to 0.62 for the verbal memory composite, 0.44 to 0.70 for the visual memory composite, 0.66 to 0.84 for the visual motor speed composite, and 0.34 to 0.88 for the reaction time composite (Broglio et al., 2018; Elbin et al., 2011; Houston et al., 2021; Netzel et al., 2022; Resch et al., 2013; Schatz & Maerlender, 2013). The impulse control composite was rarely included in ICC data reporting, but one study calculated it to be 0.52 (Netzel et al., 2022). ICCs < 0.50 indicate poor reliability, whereas ICCs above 0.75 indicate good reliability (Koo & Li, 2016). By these parameters, only the visual motor speed composite has demonstrated good reliability across multiple studies.
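The reliability cut-offs cited above can be written as a simple classifier. This sketch uses the full Koo and Li (2016) interpretation bands; note that the moderate and excellent bands come from that paper, not from the text above:

```python
def icc_reliability(icc):
    """Interpretation bands for ICC values per Koo & Li (2016):
    < 0.50 poor, 0.50-0.75 moderate, 0.75-0.90 good, >= 0.90 excellent."""
    if icc < 0.50:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc < 0.90:
        return "good"
    return "excellent"
```

For instance, the verbal memory composite ICCs of 0.40 to 0.62 reported above span the poor and moderate bands.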

Procedures

This study followed the same procedures as Quigley and coworkers (2022): NCAA Division I athletes completed an ImPACT baseline prior to the start of two consecutive athletic seasons (average of 302 ± 46.35 days between tests). At both testing times, participants were emailed a unique testing link and instructions that included completing the test in a quiet remote environment. Instructions modeled previous literature as well as ImPACT administration guidelines (Netzel et al., 2022; Quigley et al., 2022). Participants were required to complete all background sections except the additional demographics section, as that information was collected by sports medicine staff. All participants gave verbal agreement and signed a written informed consent form. This study was approved by the Institutional Review Board of the respective university (IRB number: 1757959-8) and conducted in accordance with the Declaration of Helsinki.

ImPACT has a built-in process that uses RCIs to flag results it deems invalid upon completion of the test. In accordance with ImPACT recommendations, all ImPACT results were reviewed by a CIC regardless of ImPACT's flags. Invalid results are typically associated with a lack of understanding of instructions or a lack of effort (Bailey et al., 2006). Fifteen participants received an invalid ImPACT result during the study timeframe; however, these athletes were retested at least 48 h later. Only the valid ImPACT results of those who were required to retake the test were considered for this study.

Statistical analysis

All statistical analyses were conducted using SPSS (IBM SPSS Statistics for Windows, Version 29.0, released 2022; IBM Corp., Armonk, NY, USA). The subtest scores that comprise the composite scores were evaluated for both skewness and kurtosis. All subtests scored on a percentage scale (word memory total percent correct, three letters percent correct, and design memory total percent correct) were parametric and analyzed together using a one-way repeated measures MANOVA. Correctness-based subtests (symbol match total correct hidden, XO total correct memory, three letters average counted correctly, and XO total correct interference) were parametric and analyzed using individual one-way repeated measures ANOVAs due to differences in the total number of points possible on each subtest. Reaction time-related scores (the reaction time composite score, symbol match average correct reaction time, and color match average correct reaction time) do not follow the same scoring scheme as the other parametric subtests; therefore, separate one-way repeated measures ANOVAs were performed. The impulse control composite score, as well as the XO total incorrect and the color match total commissions, were nonparametric and were evaluated using Wilcoxon signed-ranks tests. Alpha was set to 0.05.
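With only two time points, a one-way repeated measures ANOVA is mathematically equivalent to a paired t-test (F = t²). The paired comparison underlying the analyses above can be sketched as follows; this is an illustration, not the SPSS procedure the study actually ran:

```python
import math
from statistics import mean, stdev

def paired_t(time1, time2):
    """Paired t-test on two lists of scores; with two time points this is
    equivalent to a one-way repeated measures ANOVA (F = t**2)."""
    diffs = [b - a for a, b in zip(time1, time2)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom
```

The corresponding F statistic with (1, n − 1) degrees of freedom is simply t².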

The MDCs for the subtest scores that comprise the five composite scores were examined at the 95%, 90%, and 80% CIs. MDCs represent statistically significant changes in scores across two testing times, or the minimum score change that is considered clinically relevant rather than a natural fluctuation in performance (Alsalaheen et al., 2016). All three confidence intervals and their associated constants were drawn from prior literature, which used varying combinations of these confidence intervals (Howell et al., 2016; Howell et al., 2021; Mason et al., 2020; Oldham et al., 2018; Quigley et al., 2022; Ries et al., 2009). The present study used all three confidence intervals to mirror previous ImPACT-related MDC literature (Mason et al., 2020; Quigley et al., 2022). The MDC was calculated using the SEM, based on prior literature, as the SEM estimates variability across individuals within a sampling cohort (Howell et al., 2021; Oldham et al., 2018; Quigley et al., 2022).

Equation 1. SEM = s × √(1 − r), where s represents the standard deviation and r represents the reliability coefficient.

Equation 2. MDC at 95% CI: MDC95 = 1.96 × SEM × √2.

Equation 3. MDC at 90% CI: MDC90 = 1.65 × SEM × √2.

Equation 4. MDC at 80% CI: MDC80 = 1.28 × SEM × √2.
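Equations 1–4 can be combined into a short helper. A sketch of the arithmetic (function names are ours):

```python
import math

def sem(s, r):
    """Equation 1: standard error of measurement, s * sqrt(1 - r),
    where s is the standard deviation and r the reliability coefficient."""
    return s * math.sqrt(1 - r)

def mdc(s, r, z):
    """Equations 2-4: MDC = z * SEM * sqrt(2), with z = 1.96 (95% CI),
    1.65 (90% CI), or 1.28 (80% CI)."""
    return z * sem(s, r) * math.sqrt(2)
```

For example, mdc(s, r, 1.96) reproduces the 95% CI cut-off for any subtest given its standard deviation and reliability coefficient.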

RESULTS

Minimal detectable change

The MDC was calculated at the 95%, 90%, and 80% CIs for each of the subtests that comprise the five ImPACT composite scores; the MDC values denote the minimum change in baseline subtest scores needed to be considered statistically meaningful. Smaller MDCs indicate a narrower range within which a score can change between testing times and still be considered an insignificant fluctuation. Across the 95% to 80% CIs, the MDC ranged from 10.31 to 6.74 for word memory percent correct, 4.68 to 3.06 for symbol match total correct hidden, 18.25 to 11.94 for three letters percentage correct, 19.03 to 12.45 for design memory total percent correct, 4.90 to 3.20 for XO total correct memory, 18.89 to 12.35 for XO total correct interference, 5.40 to 3.53 for three letters average counted correctly, 0.12 to 0.08 for XO average correct, 0.95 to 0.62 for symbol match average correct reaction time, 0.28 to 0.19 for color match average correct, 5.97 to 3.91 for XO total incorrect, and 1.15 to 0.75 for color match total commissions. The MDC values are displayed in Table 3. All subtest means, standard deviations, and perfect subtest scores, as defined by ImPACT, can be found in Table 4.

Table 4.

Mean scores (standard deviation) for each of the five composite scores and their corresponding subtests at both testing time points, compared to ImPACT perfect scores (subtests) and optimal scores (composites)

Measure  Mean score time 1 (SD)  Mean score time 2 (SD)  ImPACT perfect or optimal score
Verbal memory 90.69 (8.14) 92.11 (8.11) 90
Word memory percent correct 94.15 (5.89) 94.77 (5.62) 100
Symbol match total correct hidden 7.37 (1.91) 7.71 (1.80) 9
Three letters percentage letters correct 96.11 (6.71) 96.13 (7.96) 100
Visual memory 80.47 (11.89) 81.94 (12.02) 80
Design memory total percent correct 82.92 (11.76) 84.61 (11.87) 100
XO total correct memory 9.37 (2.01) 9.51 (2.02) 12
Visual motor speed 41.18 (5.81) 41.63 (5.86) 40
XO total correct interference 114.00 (6.82) 113.49 (10.6)
Three letters average counted correctly 17.97 (3.61) 18.32 (3.58) 25
Reaction timea 0.60 (0.07) 0.61 (0.08) 0.55
XO average correct 0.52 (0.06) 0.51 (0.07) N/A
Symbol match average correct RT 1.62 (0.38) 1.68 (0.45) N/A
Color match average correct 0.74 (0.14) 0.74 (0.11) N/A
Impulse control 4.22 (2.89) 4.54 (2.90) 8–9
XO total incorrect 4.03 (2.72) 4.30 (2.75) 0
Color match total commissions 0.18 (0.55) 0.21 (0.49) 0

aAll reaction times are measured in seconds.

Note: RT = reaction time; SD = standard deviation.

Repeated measures ANOVA

The repeated measures ANOVAs revealed no significant differences in the mean scores of the symbol match average correct reaction time, F(1, 171) = 2.33, p = 0.13, or the color match average correct reaction time, F(1, 171) = 0.68, p = 0.41, across the two remote time points.

The separate repeated measures ANOVAs for symbol match total correct hidden, F(1, 171) = 3.45, p = 0.065; XO total correct memory, F(1, 171) = 0.58, p = 0.45; XO total correct interference, F(1, 171) = 0.46, p = 0.50; and three letters average counted correctly, F(1, 171) = 2.77, p = 0.098, were not statistically significant across the remote testing time points.

One-way repeated measures MANOVA

The multivariate model for the percentage-based subtest scores (word memory total percent correct, three letters percent letters correct, and design memory total percent correct) did not reveal significant differences, F(3, 169) = 2.41, p = 0.068, across the two remote tests.

Wilcoxon signed-ranks test

The two testing periods were not significantly different for the XO total incorrect (Z = −0.95, p = 0.30) or the color match total commissions (Z = −0.55, p = 0.58) using Wilcoxon signed-ranks tests. The median result for the XO total incorrect was 4.00 at both remote testing time points, whereas the median score for the color match total commissions was 0.20 at both remote testing time points. The mean results for the XO total incorrect were 4.03 at the first remote testing time point and 4.30 at the second, whereas the mean scores for the color match total commissions were 0.18 at the first remote testing time point and 0.21 at the second. A perfect score for both the XO total incorrect and the color match total commissions would be 0.

DISCUSSION

The purpose of this study was to determine the MDC for the subtest scores that constitute the composite scores of remotely administered ImPACT preseason baselines in Division I student-athletes across two consecutive athletic seasons. A key result is that the ImPACT composite and subtest scores did not differ significantly between testing time points, suggesting they are reliable across remote evaluations. This indicates that the subtests should be considered by sports medicine staff in clinical judgments, as they provide more task-specific detail on poor ImPACT performance than the composite scores, which represent only a broad cognitive domain. The ImPACT subtest scores in this study were similar to the average subtest scores in previous literature, and the slight differences may be due to the present study not analyzing female ImPACT results separately from male results (Henry & Sandel, 2015). Previous studies that found poor reliability among the ImPACT composite scores had student volunteers take three ImPACT baselines over 50 days (Broglio et al., 2007b; Resch et al., 2013). This study did not observe significant composite score changes in Division I athletes with approximately a year between baseline assessments; future studies should examine ImPACT reliability over longer time frames to confirm these findings. To the authors' knowledge, no MDC values have been established for the ImPACT subtests, so no literature is available for comparison. This research could be built upon by examining differences in ImPACT scores between high-school and collegiate athletes, or by establishing MDC values that are gender or sport specific.

Clinically, the MDCs in this study are meant to serve as a streamlined guide to interpreting ImPACT score changes that does not rely on completion of CIC coursework or require additional calculations, theoretically making ImPACT results interpretable by all types of sports medicine providers. Using the subtest MDCs in conjunction with the composite MDCs increases the qualitative incremental validity of ImPACT by providing additional metrics to compare. When comparing ImPACT results to a previous baseline, clinicians would keep the MDCs at hand and check whether the score changes fall within the MDC ranges. If score differences fall outside the MDC ranges, clinicians should consider the results invalid and recommend the individual retake the exam at least 48 h later, per ImPACT guidelines (ImPACT version 4 administration manual, 2022). In addition to recommending an ImPACT retake, clinicians can use the subtest MDCs in this study to determine whether certain tasks within ImPACT are demonstrating greater cognitive deficits than others and create individualized neurocognitive rehabilitation programs to address specific deficits.
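The comparison workflow described above reduces to a single threshold check per score. A sketch using the Table 3 word memory cut-off as an example (the function name is ours):

```python
def exceeds_mdc(baseline, retest, mdc_value):
    """True if the absolute change between two scores exceeds the MDC cut-off,
    i.e., the change is beyond normal test-retest fluctuation."""
    return abs(retest - baseline) > mdc_value

# Word memory percent correct, MDC at the 95% CI = 10.31 (Table 3).
WORD_MEMORY_MDC95 = 10.31
```

A drop from 94 to 80 (a change of 14) exceeds this cut-off, whereas a drop from 94 to 90 does not.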

Other studies have performed valuable analyses of the subtests; however, they did not provide average subtest scores for their given cohorts (Goodwin et al., 2023; Maerlender et al., 2010; Thoma et al., 2018). The most comparable study, by Henry and Sandel (2015), focused on establishing gender-specific low, average, and high scores for the ImPACT subtests in athletes aged 13–21. The average scores obtained by the collegiate athletes in the present study were similar to those obtained by Henry and Sandel (2015) in the 19–21-year-old female and male age groups; the present study's average participant age of 19.81 years across both testing times makes it comparable to that age group. The average scores in their study increased with participant age, which is expected due to prefrontal cortex maturation, relevant work experience, possible post-secondary education, or participation in elite-level athletics, all of which may improve executive function and complex behavioral performance (Arain et al., 2013; Henry & Sandel, 2015). Though Henry and Sandel (2015) examined ImPACT results by gender and the present study used a predominantly female coed cohort, the studies had similar scores for collegiate athletes, which aligns with previous research that found no gender differences in ImPACT performance (Covassin et al., 2007).

An additional consideration of this research is that ImPACT has published only perfect scores, not optimal scores, at the subtest level. Perfect subtest scores are achieved by completing the task with no mistakes, whereas optimal scores were created by ImPACT to represent favorable scores that can be achieved with some errors. The subtest values in this study differed substantially from the perfect subtest scores ImPACT has released, as expected with a large cohort. Perfect scores cannot be determined for any reaction time-based subtests, meaning there are currently no perfect reaction time subtest scores for comparison (Table 4). Future research should aim to create optimal scores at the subtest level, as these would provide a reference for subtest score comparison and help clinicians evaluate ImPACT scores for first-time test takers.

For the subtests specifically, there are no studies that calculated MDCs available for comparison or validation. The MDCs calculated for the subtests in this study typically have a smaller range than those calculated for the composite scores. For example, the MDC at the 95% CI for the visual memory composite score is 24.44, but the subtests associated with that composite have MDC values at the 95% CI of 19.03 for design memory total percent correct and 4.90 for XO total correct memory. The larger window of acceptable change could be due to composite scores being calculated as averages or sums of subtest scores, or because the composite scores combine multiple aspects of the six ImPACT tasks that are generalized to a broad cognitive domain. Given the large window of acceptable score changes for the composites, it may be beneficial for clinicians to examine the subtest scores, as they offer a smaller window of acceptable change. More research on the ImPACT subtests is needed to draw conclusions from these results; however, this study provides additional metrics for clinicians to use when evaluating ImPACT baselines, which will aid in ensuring each athlete has a valid baseline prior to the start of their respective athletic season. The next step for ImPACT researchers would be to evaluate the diagnostic utility of the subtests by assessing their sensitivity, specificity, and positive predictive power (PPP). As the sensitivity and specificity of the ImPACT composite scores have been heavily researched, future studies should seek to compare the diagnostic utility of the subtests (Alsalaheen et al., 2016; Maerlender et al., 2010; Resch et al., 2013).
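For readers unfamiliar with how an MDC is derived, the standard SEM-based formula can be sketched as follows. This is a generic illustration only — the input values below are hypothetical, not taken from the present study, and the study's exact procedure is described in its Methods.

```python
import math

def mdc(baseline_sd: float, icc: float, z: float = 1.96) -> float:
    """Minimal detectable change from test-retest reliability.

    MDC = z * SEM * sqrt(2), where SEM = SD * sqrt(1 - ICC).
    z = 1.96 for a 95% CI, 1.645 for 90%, and 1.28 for 80%.
    """
    sem = baseline_sd * math.sqrt(1.0 - icc)  # standard error of measurement
    return z * sem * math.sqrt(2.0)           # sqrt(2) reflects two measurement occasions

# Hypothetical example (not study data): a subtest with a baseline SD of 12.0
# and an ICC of 0.60 yields an MDC at the 95% CI of roughly 21 points.
print(round(mdc(12.0, 0.60), 2))
```

Because the z-multiplier shrinks as the CI narrows, the same subtest produces progressively smaller MDC windows at the 90% and 80% CIs, mirroring the pattern reported in the Results.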

This study indirectly demonstrated that certain ImPACT tasks/subtests have a heavier weight on the composite scores than others, as seen by the breakdown of the composite scores in Table 1. For example, the X's and O's task is included in the calculation of the visual memory, visual motor speed, reaction time, and impulse control composite scores. Subtest scores from the Design Memory task are only included in the visual memory composite score calculation, meaning poor performance on the Design Memory task only affects a single composite score, whereas poor performance on the X's and O's task could reduce almost all composite scores. The dependency of the composite scores on certain tasks may be an issue, as insufficient effort on, or a lack of understanding of, a heavily weighted task could be detrimental to an athlete's overall ImPACT performance. If concussed test-takers know the breakdown of the composite score calculations, they may put extra effort into a task like X's and O's to ensure they receive a strong enough score to return to sport, as studies have already shown some athletes sandbag neuropsychological testing at baseline to return to sport more quickly after an SRC (Erdal, 2012; Higgins et al., 2017). The uneven influence of certain ImPACT tasks should be investigated further, as it is possible that the current method of calculating the composite scores relies too heavily on certain tasks. This uneven task weighting within the composite scores justifies examining the subtest scores, as they may point to difficulty with a particular task, or a single aspect of that task, and can help clinicians create more targeted cognitive rehabilitation programs based on the tasks individuals are struggling with.
Though there does not appear to be a single widely used model for cognitive rehabilitation post-concussion, multiple studies have suggested that patients benefit from cognitive rehabilitation targeting the domains assessed by the ImPACT composite scores, particularly patients suffering from persistent concussion symptoms months after their initial injury (Leddy et al., 2012; Sayegh et al., 2010).
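The task-to-composite dependency described above can be made concrete with a small sketch. Only the two tasks discussed in the text are mapped here; the full breakdown appears in Table 1 of the paper, and the remaining tasks are omitted.

```python
# Partial task -> composite mapping, per the discussion above (full
# breakdown is in Table 1; other tasks are intentionally omitted).
TASK_COMPOSITES = {
    "X's and O's": ["visual memory", "visual motor speed",
                    "reaction time", "impulse control"],
    "Design Memory": ["visual memory"],
}

# Count how many composite scores a poor performance on each task can lower.
influence = {task: len(comps) for task, comps in TASK_COMPOSITES.items()}
print(influence)  # X's and O's feeds four composites; Design Memory only one
```

This asymmetry is the crux of the weighting concern: a single misunderstood task can depress up to four composites, while another depresses only one.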

Limitations

First, the present study was limited in terms of participant demographics. Previous literature identified American football and soccer as high SRC-risk sports; however, they were not well represented in this study, accounting for 2.3% and 10.5% of participants, respectively (Pierpoint & Collins, 2021). Low American football participation was due to sport-specific decisions by sports medicine staff that necessitated in-person batch ImPACT exams. This study also had an imbalance in the cohort's biological sex, with females accounting for 73.8% of participants. As these scores are similar to those achieved in previous studies with more balanced cohorts, it is possible sex may not affect ImPACT performance. In addition, this study relied on self-reported history of ADHD and dyslexia to exclude participants; therefore, it is possible individuals with these diagnoses were included if they chose not to disclose them at the time of either remote ImPACT baseline. As with all medical modifiers of performance, it is possible that athletes display signs and symptoms of a recognized modifier (e.g., ADHD, migraines) but have not been formally diagnosed by a healthcare professional. Concussion history was not considered, as it has not been shown to alter ImPACT baseline performance. Pointing devices were also not considered, though participants may have used either a mouse or a trackpad to complete the exam. ImPACT tracked the pointing device used; therefore, future studies could examine whether there are score differences between pointing devices, particularly in the reaction time subtests. Mismatched testing language between the two time points was also not considered, as the scores did not appear significantly different, nor were the MDCs significantly altered, despite the language discrepancy. That said, future studies should examine whether individuals can reach a level of second-language proficiency that would allow them to take the test in multiple languages without significant score changes.
A final limitation is that it is unknown if participants took the exam alone in a distraction-free environment as instructed. Unless ImPACT adds proctoring features, this will remain a limitation of remote ImPACT research. As remote ImPACT administration has become a standard in the NCAA, this study is representative of current trends in collegiate athletics.

CONCLUSION

The ImPACT subtest scores that comprise the composite scores were not significantly different across two remote testing time points. As the MDC identifies the parameters of meaningful change in repeated tests, this research recommends the MDCs be used as a framework for evaluating changes in ImPACT baseline scores. This study also supports the use of ImPACT subtest and composite score MDCs in clinical evaluations, as using the subtests and composite scores in conjunction can provide a thorough understanding of ImPACT results.

ACKNOWLEDGMENTS

This research was partially funded by P30GM145645 and 1U54GM104944.

Contributor Information

Kristen G Quigley, Department of Kinesiology, School of Public Health, University of Nevada, Reno, NV, USA.

Madison Fenner, Department of Kinesiology, School of Public Health, University of Nevada, Reno, NV, USA.

Philip Pavilionis, Department of Kinesiology, School of Public Health, University of Nevada, Reno, NV, USA.

Nora L Constantino, Department of Kinesiology, School of Public Health, University of Nevada, Reno, NV, USA.

Ryan N Moran, Athletic Training Research Laboratory, Department of Health Science, The University of Alabama, Tuscaloosa, AL, USA.

Nicholas G Murray, Department of Kinesiology, School of Public Health, University of Nevada, Reno, NV, USA.

FUNDING

This work was supported by the National Institutes of Health [P30GM145645 to N.G.M, 1U54GM104944 to N.G.M.].

CONFLICT OF INTEREST

None declared.

References

  1. Alsalaheen, B., Stockdale, K., Pechumer, D., & Broglio, S. P. (2016). Validity of the immediate post concussion assessment and cognitive testing (ImPACT). Sports Medicine, 46(10), 1487–1501. 10.1007/s40279-016-0532-y. [DOI] [PubMed] [Google Scholar]
  2. Arain, M., Haque, M., Johal, L., Mathur, P., Nel, W., Rais, A., et al. (2013). Maturation of the adolescent brain. Neuropsychiatric Disease and Treatment, 9, 449–461. 10.2147/NDT.S39776. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Bailey, C. M., Echemendia, R. J., & Arnett, P. A. (2006). The impact of motivation on neuropsychological performance in sports-related mild traumatic brain injury. Journal of the International Neuropsychological Society, 12(04), 475–484. 10.1017/S1355617706060619. [DOI] [PubMed] [Google Scholar]
  4. Broglio, S. P., Macciocchi, S. N., & Ferrara, M. S. (2007a). Neurocognitive performance of concussed Athletes when symptom free. Journal of Athletic Training, 42(4), 504–508. [PMC free article] [PubMed] [Google Scholar]
  5. Broglio, S. P., Ferrara, M. S., Macciocchi, S. N., Baumgartner, T. A., & Elliott, R. (2007b). Test-retest reliability of computerized concussion assessment programs. Journal of Athletic Training, 42(4), 509–514. [PMC free article] [PubMed] [Google Scholar]
  6. Broglio, S. P., Katz, B. P., Zhao, S., McCrea, M., McAllister, T., & CARE Consortium Investigators (2018). Test-retest reliability and interpretation of common concussion assessment tools: Findings from the NCAA-DoD CARE consortium. Sports Medicine, 48(5), 1255–1268. 10.1007/s40279-017-0813-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Carlson, C. D., Langdon, J. L., Munkasy, B. A., Evans, K. M., & Buckley, T. A. (2020). Minimal detectable change scores and reliability of the balance error scoring system in student-Athletes with acute concussion. Athletic Training & Sports Health Care., 12(2), 67–73. 10.3928/19425864-20190401-02. [DOI] [Google Scholar]
  8. Cottle, J. E., Hall, E. E., Patel, K., Barnes, K. P., & Ketcham, C. J. (2017). Concussion baseline testing: Preexisting factors, symptoms, and neurocognitive performance. Journal of Athletic Training, 52(2), 77–81. 10.4085/1062-6050-51.12.21. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Covassin, T., Schatz, P., & Swanik, C. B. (2007). Sex differences in neuropsychological function and post-concussion symptoms of concussed collegiate athletes. Neurosurgery, 61(2), 345–351. 10.1227/01.NEU.0000279972.95060.CB. [DOI] [PubMed] [Google Scholar]
  10. Covassin, T., Elbin, R. J., Stiller-Ostrowski, J. L., & Kontos, A. P. (2009). Immediate post-concussion assessment and cognitive testing (ImPACT) practices of sports medicine professionals. Journal of Athletic Training, 44(6), 639–644. 10.4085/1062-6050-44.6.639. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Crockett, D., Klonoff, H., & Bjerring, J. (1969). Factor analysis of neuropsychological tests. Perceptual and Motor Skills, 29(3), 791–802. 10.2466/pms.1969.29.3.791. [DOI] [PubMed] [Google Scholar]
  12. Elbin, R. J., Schatz, P., & Covassin, T. (2011). One-year test-retest reliability of the online version of ImPACT in high school Athletes. The American Journal of Sports Medicine, 39(11), 2319–2324. 10.1177/0363546511417173. [DOI] [PubMed] [Google Scholar]
  13. Elbin, R. J., Schatz, P., Mohler, S., Covassin, T., Herrington, J., & Kontos, A. P. (2021). Establishing test–retest reliability and reliable change for the King–Devick test in high school athletes. Clinical Journal of Sport Medicine, 31(5), 235–239. 10.1097/JSM.0000000000000772. [DOI] [PubMed] [Google Scholar]
  14. Erdal, K. (2012). Neuropsychological testing for sports-related concussion: How Athletes can sandbag their baseline testing without detection. Archives of Clinical Neuropsychology, 27(5), 473–479. 10.1093/arclin/acs050. [DOI] [PubMed] [Google Scholar]
  15. Ferris, L. M., Kontos, A. P., Eagle, S. R., Elbin, R. J., Collins, M. W., Mucha, A., et al. (2022). Utility of VOMS, SCAT3, and ImPACT baseline evaluations for acute concussion identification in collegiate Athletes: Findings from the NCAA-DoD concussion assessment, research and education (CARE) consortium. The American Journal of Sports Medicine, 50(4), 1106–1119. 10.1177/03635465211072261. [DOI] [PubMed] [Google Scholar]
  16. Goodwin, G. J., John, S. E., Donohue, B., Keene, J., Kuwabara, H. C., Maietta, J. E., et al. (2023). Changes in ImPACT cognitive subtest networks following sport-related concussion. Brain Sciences, 13(2), 177. 10.3390/brainsci13020177. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Heaton, R. K., Akshoomoff, N., Tulsky, D., Mungas, D., Weintraub, S., Dikmen, S., et al. (2014). Reliability and validity of composite scores from the NIH toolbox cognition battery in adults. Journal of the International Neuropsychological Society, 20(6), 588–598. 10.1017/S1355617714000241. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Henry, L. C., & Sandel, N. (2015). Adolescent subtest norms for the ImPACT neurocognitive battery. Applied Neuropsychology: Child, 4(4), 266–276. 10.1080/21622965.2014.911094. [DOI] [PubMed] [Google Scholar]
  19. Higgins, K. L., Denney, R. L., & Maerlender, A. (2017). Sandbagging on the immediate post-concussion assessment and cognitive testing (ImPACT) in a high school athlete population. Archives of Clinical Neuropsychology, 32(3), 259–266. 10.1093/arclin/acw108. [DOI] [PubMed] [Google Scholar]
  20. Houston, M. N., Pelt, K. L. V., D’Lauro, C., et al. (2021). Test–retest reliability of concussion baseline assessments in United States service academy cadets: A report from the National Collegiate Athletic Association (NCAA)–Department of Defense (DoD) CARE consortium. Journal of the International Neuropsychological Society, 27(1), 23–34. 10.1017/S1355617720000594. [DOI] [PubMed] [Google Scholar]
  21. Howell, D. R., Osternig, L. R., & Chou, L. S. (2016). Consistency and cost of dual-task gait balance measure in healthy adolescents and young adults. Gait & Posture, 49, 176–180. 10.1016/j.gaitpost.2016.07.008. [DOI] [PubMed] [Google Scholar]
  22. Howell, D. R., Seehusen, C. N., Wingerson, M. J., Wilson, J. C., Lynall, R. C., & Lugade, V. (2021). Reliability and minimal detectable change for a smartphone-based motor-cognitive assessment: Implications for concussion management. Journal of Applied Biomechanics, 37(4), 380–387. 10.1123/jab.2020-0391. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. ImPACT version 4 administration manual (2022). ImPACT Applications Inc. Accessed August 1, 2022. https://impacttest.com/wp-content/uploads/LBL-01_v14_ImPACT-Version-4_Administration_Manual.pdf
  24. Iverson, G. L., Lovell, M. R., & Collins, M. W. (2003). Interpreting change on ImPACT following sport concussion. The Clinical Neuropsychologist, 17(4), 460–467. 10.1076/clin.17.4.460.27934. [DOI] [PubMed] [Google Scholar]
  25. Iverson, G. L., Ivins, B. J., Karr, J. E., Crane, P. K., Lange, R. T., Cole, W. R., et al. (2019). Comparing composite scores for the ANAM4 TBI-MIL for research in mild traumatic brain injury. Archives of Clinical Neuropsychology, 35(1), 56–69. 10.1093/arclin/acz021. [DOI] [PubMed] [Google Scholar]
  26. Karr, J. E., Mindt, M. R., & Iverson, G. L. (2022). Assessing cognitive decline in high-functioning Spanish-speaking patients: High score base rates on the Spanish-language NIH toolbox cognition battery. Archives of Clinical Neuropsychology, 37(5), 939–951. 10.1093/arclin/acab097. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting Intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163. 10.1016/j.jcm.2016.02.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Leddy, J. J., Sandhu, H., Sodhi, V., Baker, J. G., & Willer, B. (2012). Rehabilitation of concussion and post-concussion syndrome. Sports Health, 4(2), 147–154. 10.1177/1941738111433673. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Levy, S. S., Thralls, K. J., & Kviatkovsky, S. A. (2018). Validity and reliability of a portable balance tracking system, BTrackS, in older adults. Journal of Geriatric Physical Therapy, 41(2), 102–107. 10.1519/JPT.0000000000000111. [DOI] [PubMed] [Google Scholar]
  30. Maerlender, A., Flashman, L., Kessler, A., Kumbhani, S., Greenwald, R., Tosteson, T., et al. (2010). Examination of the construct validity of ImPACT™ computerized test, traditional, and experimental neuropsychological measures. The Clinical Neuropsychologist, 24(8), 1309–1325. 10.1080/13854046.2010.516072. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Maerlender, A., Flashman, L., Kessler, A., Kumbhani, S., Greenwald, R., Tosteson, T., et al. (2013). Discriminant construct validity of ImPACT™: A companion study. The Clinical Neuropsychologist, 27(2), 290–299. 10.1080/13854046.2012.744098. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Maietta, J. E., Renn, B. N., Goodwin, G. J., Maietta, L. N., Moore, S. A., Hopkins, N. A., et al. (2023). A systematic review and meta-analysis of factors influencing ImPACT concussion testing in high school and collegiate athletes with self-reported ADHD and/or LD. Neuropsychology, 37(2), 113–132. 10.1037/neu0000870. [DOI] [PubMed] [Google Scholar]
  33. Makdissi, M., Darby, D., Maruff, P., Ugoni, A., Brukner, P., & McCrory, P. R. (2010). Natural history of concussion in sport: Markers of severity and implications for management. The American Journal of Sports Medicine, 38(3), 464–471. 10.1177/0363546509349491. [DOI] [PubMed] [Google Scholar]
  34. Mason, S. J., Davidson, B. S., Lehto, M., Ledreux, A., Granholm, A. C., & Gorgens, K. A. (2020). A cohort study of the temporal stability of ImPACT scores among NCAA division I collegiate Athletes: Clinical implications of test–retest reliability for enhancing student athlete safety. Archives of Clinical Neuropsychology., 35(7), 1131–1144. 10.1093/arclin/acaa047. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Meehan, W. P., III, d’Hemecourt, P., Collins, C. L., Taylor, A. M., & Comstock, R. D. (2012). Computerized neurocognitive testing for the Management of Sport-Related Concussions. Pediatrics, 129(1), 38–44. 10.1542/peds.2011-1972. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Netzel, L., Moran, R., Hopfe, D., Salvatore, A. P., Brown, W., & Murray, N. G. (2022). Test–retest reliability of remote ImPACT administration. Archives of Clinical Neuropsychology., 37(2), 449–456. 10.1093/arclin/acab055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Oldham, J. R., Difabio, M. S., Kaminski, T. W., Dewolf, R. M., Howell, D. R., & Buckley, T. A. (2018). Efficacy of tandem gait to identify impaired postural control after concussion. Medicine & Science in Sports & Exercise, 50(6), 1162–1168. 10.1249/MSS.0000000000001540. [DOI] [PubMed] [Google Scholar]
  38. Patricios, J. S., Schneider, K. J., Dvorak, J., Ahmed, O. H., Blauwet, C., Cantu, R. C., et al. (2023). Consensus statement on concussion in sport: The 6th international conference on concussion in sport–Amsterdam, October 2022. British Journal of Sports Medicine, 57(11), 695–711. 10.1136/bjsports-2023-106898. [DOI] [PubMed] [Google Scholar]
  39. Pierpoint, L. A., & Collins, C. (2021). Epidemiology of sport-related concussion. Clinics in Sports Medicine, 40(1), 1–18. 10.1016/j.csm.2020.08.013. [DOI] [PubMed] [Google Scholar]
  40. Quigley, K. G., Taylor, M. R., Hopfe, D., Pavilionis, P., & Murray, N. G. (2022). Minimal detectable change for the ImPACT test administered remotely. Journal of Athletic Training, 58(11–12), 981–986. 10.4085/1062-6050-0381.22. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Resch, J., Driscoll, A., McCaffrey, N., Brown, C., Ferrara, M. S., Macciocchi, S., et al. (2013). ImPACT test-retest reliability: Reliably unreliable? Journal of Athletic Training, 48(4), 506–511. 10.4085/1062-6050-48.3.09. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Ries, J. D., Echternach, J. L., Nof, L., & Gagnon, B. M. (2009). Test-retest reliability and minimal detectable change scores for the timed “up & go” test, the six-minute walk test, and gait speed in people with Alzheimer disease. Physical Therapy, 89(6), 569–579. 10.2522/ptj.20080258. [DOI] [PubMed] [Google Scholar]
  43. Rund, B. R., Sundet, K., Asbjørnsen, A., Egeland, J., Landrø, N. I., Lund, A., et al. (2006). Neuropsychological test profiles in schizophrenia and non-psychotic depression. Acta Psychiatrica Scandinavica, 113(4), 350–359. 10.1111/j.1600-0447.2005.00626.x. [DOI] [PubMed] [Google Scholar]
  44. Sayegh, A. A., Sandford, D., & Carson, A. J. (2010). Psychological approaches to treatment of postconcussion syndrome: A systematic review. Journal of Neurology, Neurosurgery & Psychiatry., 81(10), 1128–1134. 10.1136/jnnp.2008.170092. [DOI] [PubMed] [Google Scholar]
  45. Schatz, P., & Maerlender, A. (2013). A two-factor theory for concussion assessment using ImPACT: Memory and speed. Archives of Clinical Neuropsychology, 28(8), 791–797. 10.1093/arclin/act077. [DOI] [PubMed] [Google Scholar]
  46. Schatz, P., Pardini, J. E., Lovell, M. R., Collins, M. W., & Podell, K. (2006). Sensitivity and specificity of the ImPACT test battery for concussion in athletes. Archives of Clinical Neuropsychology, 21(1), 91–99. 10.1016/j.acn.2005.08.001. [DOI] [PubMed] [Google Scholar]
  47. Stenberg, J., Karr, J. E., Karlsen, R. H., Skandsen, T., Silverberg, N. D., & Iverson, G. L. (2020). Examining test-retest reliability and reliable change for cognition endpoints for the CENTER-TBI neuropsychological test battery. Frontiers in Neurology, 11, 541533. 10.3389/fneur.2020.541533. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. The derived scores | ImPACT version 4. ImPACT Applications Inc. Accessed August 14, 2023. https://impacttest.com/manual/administration-and-scoring-of-impact-version-4/the-derived-scores-for-impact-version-4/
  49. Thoma, R. J., Cook, J. A., McGrew, C., King, J. H., Pulsipher, D. T., Yeo, R. A., et al. (2018). Convergent and discriminant validity of the ImPACT with traditional neuropsychological measures. Cogent Psychology, 5(1), 1430199. 10.1080/23311908.2018.1430199. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Tsushima, W. T., Siu, A. M., Pearce, A. M., Zhang, G., & Oshiro, R. S. (2016). Two-year test–retest reliability of ImPACT in high school Athletes. Archives of Clinical Neuropsychology, 31(1), 105–111. 10.1093/arclin/acv066. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Tulsky, D. S., Carlozzi, N. E., Holdnack, J., Heaton, R. K., Wong, A., Goldsmith, A., et al. (2017). Using the NIH toolbox cognition battery (NIHTB-CB) in individuals with traumatic brain injury. Rehabilitation Psychology, 62(4), 413–424. 10.1037/rep0000174. [DOI] [PMC free article] [PubMed] [Google Scholar]