Author manuscript; available in PMC: 2026 Jan 30.
Published in final edited form as: Brain Inj. 2021 Feb 2;35(4):426–435. doi: 10.1080/02699052.2021.1878554

How sandbag-able are concussion sideline assessments? A close look at eye movements to uncover strategies

John-Ross Rizzo a,b,c,d, Todd E Hudson a,b, John Martone b, Weiwei Dai e, Oluchi Ihionu b, Yash Chaudhry b, Ivan Selesnick e, Laura J Balcer b,f,g, Steven L Galetta b,g, Janet C Rucker b,g
PMCID: PMC12854065  NIHMSID: NIHMS2137493  PMID: 33529094

Abstract

Background:

Sideline diagnostic tests for concussion are vulnerable to volitional poor performance (“sandbagging”) on baseline assessments, motivated by desire to subvert concussion detection and potential removal from play. We investigated eye movements during sandbagging versus best effort on the King-Devick (KD) test, a rapid automatized naming (RAN) task.

Methods:

Participants performed KD testing during oculography following instructions to sandbag or give best effort.

Results:

Twenty healthy participants without concussion history were included (mean age 27 ± 8 years). Sandbagging resulted in longer test times (89.6 ± 39.2 s vs 48.2 ± 8.5 s, p < .001), longer inter-saccadic intervals (459.5 ± 125.4 ms vs 311.2 ± 79.1 ms, p < .001) and greater numbers of saccades (171.4 ± 47 vs 138 ± 24.2, p < .001) and reverse saccades (wrong direction for reading) (21.2% vs 11.3%, p < .001). Sandbagging was detectable using a logistic model with KD times as the only predictor, though more robustly detectable using eye movement metrics.

Conclusions:

KD sandbagging results in eye movement differences that are detectable by eye movement recordings and suggest an invalid test score. Objective eye movement recording during the KD test shows promise for distinguishing between best effort and post-injury performance, as well as for identifying sandbagging red flags.

Keywords: Concussion, King-Devick, rapid automatized naming tasks, saccades, inter-saccadic interval, sandbagging

Introduction

Concussion is a change in brain function following a force to the head or body that results in new and otherwise unexplained neurological symptoms (1). Dysfunction may occur in one or more domains, resulting in physical symptoms, behavioral changes, emotional disturbances, and sleep impairment. An estimated 1.6–3.8 million sports-related traumatic brain injuries occur annually in the United States and account for 6% of athletic injuries at the collegiate level (2,3). Due to the danger of undetected concussive events, a growing number of tests have been proposed as sensitive and useful measures for sideline concussion detection and prevention of premature return to play (4–9).

Tests frequently used on the sidelines to detect concussion, such as the Sport Concussion Assessment Tool (SCAT) Symptom Checklist (8,10), the Standardized Assessment of Concussion (SAC) (11), the Balance Error Scoring System (BESS) (12), and the Vestibular/Ocular Motor Screening (VOMS) test (6), depend on the use of symptom checklists and/or brief physical and cognitive function assessments. Many of these tests require voluntary reporting of symptoms by the athlete or subjective assessment of balance difficulties by the examiner. Methods dependent on subjective report may be unreliable: 43% of 262 athletes in a survey study reported that they knowingly hid concussion symptoms to stay in a game (13). In addition, physical assessment tests face potential problems with “volitional overlay”, defined as the active and purposeful adjustment of test performance. Athletes can, either on their own initiative or through coaching, intentionally “sandbag” (i.e., fake poor performance on) baseline scores to avoid being removed from play following an impact during competition. With some athletes willing to compromise their baseline performance, questions arise regarding the validity of available concussion diagnostic tests. In fact, detection of sandbagging has been a focus of attention in baseline computerized neurocognitive testing (CNT) [e.g. Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) (14) and CNS Vital Signs (15)]. While built-in validity indicators reliably detect most sandbagging attempts (16–19), up to 30% of athletes voluntarily underperform on CNT (20,21).

Rapid automatized naming (RAN) tasks are tests of rapid number, picture, or symbol naming that have been utilized in neuropsychological studies since the early 20th century (22). One such test, the King-Devick (KD) test, has been validated as a sensitive and specific sideline performance measure for acute concussion detection (9). Others under investigation that show potential for identifying concussion in athletes include a rapid picture naming test, the Mobile Universal Lexicon Evaluation System (MULES) (4,23), and another number naming test, the Staggered Uneven Number (SUN) test (24). As sideline concussion tests, these RAN tasks expand the potential to identify injury by incorporating vision and eye movement pathways into screening; however, RAN tasks are among the objective assessment tools that are susceptible to volitional overlay. The KD test involves reading a series of numbers on three test cards aloud as rapidly as possible. KD test performance outcomes include completion time and error rate. There are a number of ways to potentially sandbag performance on the KD test, including voluntarily prolonging fixations to delay progression between numbers, looking off-target at random locations, introducing more eye movements than necessary to complete the task, or subversively altering the number-naming pattern to degrade performance.

Sandbagging of baseline RAN tasks by athletes may reduce the potential to detect concussion and allow an injured player to remain in the game, leaving the player vulnerable to further injury. Thus, detection of sandbagging efforts during baseline testing is of critical importance. While there are a multitude of potential strategies to sandbag RAN tasks, countermeasures can be put in place during clinical testing or used during research investigations to keep participants honest during test administration. One such approach is the use of eye tracking during digitized testing protocols. Many, if not all, of the potential strategies could not only be detected but also classified and graded as aberrant patterns that could be monitored for consistency, variability, and accuracy. The KD is one test that lends itself to this methodology, as virtual protocols have been developed that monitor saccade count, inter-saccadic intervals (ISI), test time, and other metrics, all of which are highly informative (25). The objective of this study was to examine the capacity of healthy participants to sandbag the KD test following coaching, with simultaneous quantitative eye movement recording. We hypothesized that objective ocular motor metrics would differentiate sandbagging from full effort on the KD test.

Materials and methods

Participants

Healthy adult non-athlete volunteers without histories of traumatic brain injury, neurological impairment or visual dysfunction were invited to participate. All participants reported English as their native language. All research protocols were approved by the NYU Institutional Review Board. Written informed consent was obtained from each participant. Twenty adult volunteers (age 27 ± 8 years, range 19–53) were included. There were 12 (60%) male and 8 (40%) female participants.

Materials and procedures

A digitized version of the KD test was utilized for this study. It consisted of three computer-generated KD test cards that maintained consistency (e.g. numbers presented, spacing between numbers) with the original spiral-bound version of the KD test (26,27). Following presentation of an initial demonstration card, screens depicting the three test cards that comprise the KD test were presented serially on the computer monitor. Full details regarding the methodology of digitized KD testing with eye tracking and data analysis have been previously published (25).

Each participant completed a computerized version of the KD test twice under objective video-oculography (see below) in a crossover design. The initial condition was assigned randomly, with the examiner instructing the participant either to 1) name the numbers in the test “as fast as they can without making an error” (best effort condition) or 2) read a standardized instruction prompt selected from one of four cards picked out of an envelope on the desk in front of them (sandbag condition). Each card was stated by the examiner to have different instructions, though each card contained the same coaching on how to sandbag the test; the instructional prompt read as follows: “Please attempt to intentionally delay your time on the test, as if your goal is to complete this test with a slightly worse test time than you are actually capable of. You are encouraged to pause briefly at each number, just slightly longer than what you think you would need to read it. The experimenter does not know you have been selected for this special condition, but will know if you make this too obvious, so please try and make your delayed number naming as subtle as possible.” Participants were instructed to follow the instructions to the best of their ability without making it obvious to the examiner which set of instructions had been given. The examiner was blinded to what was written on the instruction cards. Following completion of the first condition, each participant took a self-initiated break of 1–5 minutes before completing the opposite condition. The KD test paradigm was controlled with custom MATLAB functions.

Eye movement recordings

Binocular eye movement recordings were obtained with the EyeLink 1000 Plus infrared-based video-oculographic camera system (SR Research, Mississauga, Ontario, Canada) in remote mode during performance of the KD test. A forehead rest was utilized for maximum head stability while simultaneously allowing for mouth movements required for number naming. The EyeLink sampled eye position at 500 Hz with a precision of about 0.1 degree. Participants completed a 13-point spatial calibration procedure prior to each testing session. Eye position was recorded concurrently with KD testing.

Data analyses

Eye movement data were analyzed off-line using custom MATLAB software. Eye movement data were found to be of poor quality and unusable at the point of data analysis for one participant, so only age and KD times were utilized for that participant. Saccades were identified via an adaptive threshold mechanism, and velocities and accelerations were computed from position traces using a low-pass differentiator (28). ISI and spatial (e.g. movement patterns, numbers of saccades) and temporal (e.g. peak velocities, accelerations, decelerations, and durations) properties of saccades were extracted for statistical analysis. Wilcoxon signed-rank tests were used to assess differences in timing (ISI, KD times) and rate data, which are known to have non-Gaussian distributions (29).
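The pipeline above can be illustrated with a minimal sketch. This is not the authors' code: the fixed 30 deg/s velocity cutoff stands in for the paper's adaptive threshold, np.gradient stands in for the published low-pass differentiator, and all data values are invented for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

FS = 500  # Hz, matching the EyeLink sampling rate

def detect_saccades(position_deg, vel_thresh=30.0):
    """Flag saccades where eye velocity exceeds a fixed threshold.

    A fixed 30 deg/s cutoff stands in for the paper's adaptive
    threshold. Returns (onsets, offsets) as sample indices.
    """
    velocity = np.gradient(position_deg) * FS  # deg/s
    fast = np.abs(velocity) > vel_thresh
    edges = np.diff(fast.astype(int))
    onsets = np.where(edges == 1)[0] + 1
    offsets = np.where(edges == -1)[0] + 1
    return onsets, offsets

# Synthetic trace: fixation, one 10-degree rightward step, fixation.
t = np.arange(0, 1.0, 1 / FS)
pos = np.where(t < 0.5, 0.0, 10.0)
onsets, offsets = detect_saccades(pos)
print(len(onsets))  # 1

# Paired per-subject timing metrics (hypothetical median ISIs, ms)
# compared across conditions with the Wilcoxon signed-rank test.
best_effort = np.array([310.0, 295, 330, 305, 320, 300, 315, 325])
sandbag = np.array([450.0, 431, 482, 443, 474, 435, 466, 457])
stat, p = wilcoxon(best_effort, sandbag)
print(p)  # exact two-sided p = 0.0078125
```

Because the paired differences here all have the same sign, the signed-rank test returns its smallest attainable exact p-value for n = 8.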

Modelling

Detectability of sandbagging behaviour is based on the d′ metric, which is the basic measure of sensitivity in signal detection theory and the foundation of modern psychophysics (30–32), and encodes the signal-to-noise ratio of the difference between sampling distributions derived from completing the KD test with best effort versus attempting to sandbag the test. Sampling distributions may be based on KD times alone, or on eye movement metrics, and will give rise to hits and false alarms at rates that depend on how informative KD times and eye movement metrics are for differentiating sandbagging behaviour. To compute hits and false alarms based on KD times alone versus from eye movement metrics, we fit two logistic models:

Model 1: p = [1 + b^−(β0 + β1·KD)]^−1
Model 2: p = [1 + b^−(β0 + β1·EyeMetric1 + β2·EyeMetric2 + … + βn·EyeMetricn)]^−1

whose β predictors are linearly related to the log-odds, log_b[p/(1 − p)], with base b. Each of these logistic models is then used to detect sandbagging in our 40 datasets (20 best effort, 20 sandbag), and the resulting hits and false alarms are z-transformed using the cumulative left-hand tail of the unit normal distribution to compute d′. The shortest 95% Bayesian confidence bounds of the probability distribution over d′ computed in these two cases (32,33) are then compared, with non-overlapping bounds taken to indicate a statistically significant improvement of one over the other.
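The z-transform step can be sketched as follows; the hit and false-alarm rates here are invented for illustration and are not the study's values.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: difference of the z-transformed
    (probit) hit and false-alarm rates, as described in the text."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Illustrative rates (not the study's data): a detector that flags 85%
# of sandbagged tests and falsely flags 10% of best-effort tests.
d = d_prime(0.85, 0.10)
print(round(d, 2))  # 2.32
```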

Results

KD times

KD test times, defined as the median time required to complete the three test cards, were substantially longer for trials performed under the sandbag condition than for best effort trials (89.6 ± 39.2 s vs 48.2 ± 8.5 s, p < .001, z = 3.9) (Figure 1). No number naming errors occurred in either testing condition. The mean increase in KD test time between best effort trials and sandbag trials was 41.5 ± 23.9 s (range 2.5–174.5 s). All participants’ best effort KD times were shorter than their corresponding sandbag times, and only four participants generated a sandbag KD time less than 10 seconds longer than their best effort.

Figure 1.


Boxplot demonstrating average King-Devick time scores in the normal best effort and special sandbag conditions. The lines in the boxes represent the medians, and the boxes delineate the interquartile range (25th to 75th percentiles). Whiskers represent the range of observations minus outliers.

Ocular motor analysis

Saccadic spatial patterns

Eye movement recordings revealed that participants performed eye movement sequences (ocular motor patterns) similar to what would be expected when reading lines of text (i.e., numbers were named from left to right and from top to bottom) for both sandbag and best effort conditions (Figure 2). However, participants with sandbag test times that were close to their best effort test times showed sandbag ocular motor patterns that were similar to best effort ocular motor patterns (Figure 2A and B), whereas participants with substantial prolongation of sandbag test times compared to their best effort test times showed different sandbag and best effort ocular motor patterns (Figure 2C and D).

Figure 2.


Horizontal eye movement position traces show that number naming strategy appears consistent with that expected for reading (i.e., left to right and top to bottom) in both normal best effort (A and C) and special sandbag conditions (B and D) on the King-Devick (KD) test. An example of a “good” sandbagged test (B) that would be difficult to detect as a sandbagged test by test time (sandbag test time 46 s versus best effort test time 39.3 s) has a position trace that appears similar to the best effort KD (A) in this individual. In contrast, an example of a “poor” sandbagged test (D) that would be easy to detect as a sandbagged test by test time [sandbag test time 119.1 s (which is reflected in the longer time scale of this trace) versus best effort test time 36.3 s] has a position trace that differs substantially from the best effort KD (C) in this individual.

Inter-saccadic intervals

Inter-saccadic intervals (ISI), the duration of time between sequential saccades, were measured for each subject as the median interval between all task-specific saccades across the three test cards, due to substantial positive skew in the distribution of these values for all subjects. Median ISI was substantially longer among participants completing the trials under the sandbag condition when compared to trials under best effort (459.5 ± 125.4 ms vs 311.2 ± 79.1 ms, p < .001, z = 3.6) (Figure 3).

Figure 3.


Histograms of inter-saccadic interval (ISI) durations while performing the King-Devick (KD) test under normal best effort (BE) and special sandbag conditions (SB).
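The choice of the median over the mean for positively skewed ISI distributions can be illustrated with a toy example; the values below are invented, not study data.

```python
import numpy as np

# Hypothetical ISIs (ms) for one subject: mostly ~300 ms gaps plus a
# few long deliberate pauses, giving the positive skew described above.
isi_ms = np.array([290, 300, 310, 305, 295, 315, 900, 1200], dtype=float)

print(np.mean(isi_ms))    # 489.375 — pulled upward by the two long pauses
print(np.median(isi_ms))  # 307.5 — robust per-subject summary, as in the paper
```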

Saccadic frequency

Saccadic frequency was higher under the sandbag condition than in best effort trials (171.4 ± 47 vs 138 ± 24.2, p < .001, z = 3.5). Not only were more saccades made per test in sandbag trials, but a greater proportion were made in the wrong direction for reading (right-to-left) (21.2% vs 11.3%, p < .01, z = 3.2).

Saccadic kinematics

Of note, there were no differences in saccadic peak velocities, acceleration, deceleration or duration between trials and, therefore, main sequence relationships were preserved; these measures appear to be resilient to “sandbagging”.

Modelling

Sandbagging was detectable using both the logistic model with KD times as its only predictor and the logistic model using eye movement metrics. The model employing KD times as its only predictor had a best-fit d′ = 2.07 (CI = [2.00, 2.13]). Eye movement metrics were chosen in two stages. First, the model was fit using all eye movement metrics that showed a significant difference between sandbagging and best effort KD trials (ISI, SD of ISI, number of saccades, total path length, saccade amplitude, SD of saccade amplitude, endpoint error, SD of endpoint error, SD of microsaccade ISI, number of microsaccades, path length of microsaccades). This model resulted in a best-fit d′ = 3.62 (CI = [3.47, 3.78]). Next, metrics that were highly correlated (r > 0.7) were removed, and identical performance was achieved with a reduced model employing the ISI, SD of ISI, endpoint error, SD of endpoint error, number of microsaccades, and SD of microsaccade ISI metrics. Both of these models significantly outperformed the logistic model employing KD times as its only predictor (Figures 4 and 5).
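The correlation-based pruning stage can be sketched as follows. This is a hypothetical greedy implementation with synthetic data, not the authors' procedure; only the r > 0.7 cutoff and the metric names come from the text.

```python
import numpy as np

def prune_correlated(X, names, r_max=0.7):
    """Greedily drop predictors whose |r| with an already-kept
    predictor exceeds r_max, mirroring the paper's second stage."""
    keep = []
    for j in range(X.shape[1]):
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > r_max for k in keep
        )
        if not redundant:
            keep.append(j)
    return [names[j] for j in keep]

# Synthetic predictors (hypothetical, not study data): "SD of ISI" is
# built to be perfectly collinear with "ISI", so it is pruned.
isi = np.linspace(300.0, 500.0, 40)
isi_sd = 0.5 * isi + 3.0                 # r = 1 with isi
endpoint_err = np.sin(np.arange(40.0))   # essentially uncorrelated with isi
X = np.column_stack([isi, isi_sd, endpoint_err])
kept_names = prune_correlated(X, ["ISI", "SD of ISI", "endpoint error"])
print(kept_names)
```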

Figure 4.


Z-scores underlying the computation of the d-prime metric using a logistic model employing either King-Devick times as its only predictor (upper) or employing eye movement metrics (lower). Eye movement metrics are significantly more informative for identifying sandbagging behaviour.

Figure 5.


Example fixations on the King-Devick (KD) test cards while performing the KD test under best effort (BE) and sandbag (SB) conditions for a participant who was accurately categorized by the logistic model using KD time as the only predictor (upper row) as sandbagging (right) or not (left), and for participants who were inaccurately categorized (lower row).

Discussion

In this study, eye movement behaviour during administration of the KD test was assessed in a cross-over design with two trial types. The first trial type asked participants to complete the test while giving best effort and the second utilized a special instruction set prompting the participant to intentionally sandbag their results, volitionally overriding their inherent physiologic capability in order to underperform. We discuss KD test times and each of the ocular motor findings in turn, with particular attention to the possibility that these results may have implications regarding strategies to “game” test outcome.

KD test times

As expected, total KD test times were prolonged in the sandbag condition when compared to trials under the best effort condition. The mean best effort KD time of 48.2 s with a range between 33.9 and 60.1 s is consistent with baseline KD times in healthy athletes in the literature (27,34,35). Athletes with concussion show a mean worsening from baseline of 4.8 s (9). In marked contrast to this modest mean increase in KD test times with concussion, the mean increase with sandbagging attempts was 41.5 s with a range between 2.5 and 174.5 s, suggesting that intentional sandbagging that mimics the physiologic effects of concussion on the test is difficult to perform. Only four of the 20 participants were able to sandbag the KD with a less than 10 s increase in test time from best effort. Indeed, while sandbagging was just detectable with the logistic model based on KD time alone (sandbag and best effort sampling distributions separated by 2 standard deviations), sandbagging was more robustly detected with analysis of quantified eye movement metrics (sampling distributions separated by over 3.5 standard deviations).

Saccadic spatial analysis & saccadic frequency

The majority of RAN tasks leverage a quasi-reading structure during test administration, as participants are asked to identify numbers and/or pictures arranged in rows on test cards. Prior investigations suggest that healthy individuals perform these tasks both efficiently and accurately. Results from studies investigating eye movement characteristics during the KD test reveal that participants demonstrate minimal spatial error, landing close to the target of interest (centroid), and a well-organized saccadic planning strategy, using approximately 1.15 saccades per object identified (25). Our results suggest that, regardless of trial type, participants followed a fairly typical ocular motor sequence from left to right and from top to bottom (Figure 2). The gaze patterns, when superimposed on the test cards, showed minimal deviation from this typical reading progression. While it would be simple to alter the reading pattern (i.e., to read a row twice or skip vertically between lines) to inflate test times, this would be conspicuous to the examiner. However, differences in horizontal eye position traces could be seen between “good” and “poor” sandbaggers. For participants with less prolongation of sandbag test times compared to best effort times (“good” sandbaggers), horizontal eye position traces were similar between test conditions (Figure 2A and B), whereas for participants with substantial prolongation of sandbag test times compared to best effort times (“poor” sandbaggers), horizontal eye position traces showed differences (Figure 2C and D).

With regard to saccadic frequency, our results reveal that more saccades overall were deployed on sandbag trials than on best effort trials. A logistic model incorporating these parameters with other eye movement metrics can detect sandbagged trials. The numbers of saccades for sandbag versus best effort trials yield saccades-per-object-identified ratios of 1.43 and 1.15, respectively. While the latter is consistent with prior reports in healthy individuals (25), the former is grossly elevated. In fact, the numbers of saccades were not only greater in the sandbagged trials, but there was also a larger proportion of saccades made in the direction opposite reading flow. In previous work exploring concussion and the KD test with objective eye movement characterization, participants post-concussion used on average 157 saccades to complete the KD test, with a saccade-to-object-identified ratio of 1.31 (36). In concussion, the need for more saccades to complete a given task may stem from saccadic dysmetria and requisite corrective saccades to take gaze close to the targets of visual interest. Hypometria has been described in lesions involving the cortex, thalamus, pretectum, superior colliculus, and cerebellum, while hypermetria typically implicates the cerebellum (37–39). While further studies would allow a better understanding of how frequently these areas are involved in individuals with concussion, either looking off-target intentionally with over-corrections or completing staircase saccades in small increments are both potential avenues for sandbaggers to game their results.
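The ratio arithmetic above can be checked directly; the one assumption here (not stated in this passage) is that the three KD test cards present 120 numbers in total.

```python
# Assumption: the three KD test cards present 120 numbers in total;
# the paper reports only the ratios themselves.
N_NUMBERS = 120

ratios = {
    "sandbag": 171.4 / N_NUMBERS,      # mean saccade count, sandbag trials
    "best effort": 138.0 / N_NUMBERS,  # mean saccade count, best effort
}
for label, r in ratios.items():
    print(label, round(r, 2))  # sandbag 1.43, best effort 1.15
```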

Inter-saccadic interval analysis

Classically, the reaction time of an eye movement (its latency) is defined as the epoch between visual stimulus presentation (time zero) and saccadic initiation; this temporal window is the time taken to identify a target and to plan and initiate a saccade to the target of interest (40). When studying ocular motor behaviour, saccades are used in an exploratory manner. The pause between saccades (the inter-saccadic interval) is simply a measure of the time between saccades. During natural scene viewing or visual search, the ISI represents the latency, in that it is the time between the end of one saccade and the initiation of another. While it may be arbitrary to divide the ISI into components of saccadic latency versus fixational behaviour (which in RAN tasks includes language retrieval and attentional factors), mechanistically it is a useful tool for understanding the impact of sandbagging attempts or physiological changes from concussion on the ISI (36,41). ISI are critical to understanding RAN tasks, as the numbers or pictures are all simultaneously displayed on a per-card basis; a participant is asked to name or identify objects while shifting gaze from the upper left to bottom right of the test card, interleaving saccades and fixations.

Our results revealed that median ISIs were substantially longer for sandbag trials than for best effort trials (459.5 ms vs 311.2 ms). A logistic model incorporating this parameter with other eye movement metrics is able to detect sandbagged trials. ISI are particularly relevant for rapid number naming tasks and the broader concept of sandbagging baseline testing to elude later concussion detection. In previous studies, ISI durations have been shown to be significantly prolonged in individuals with concussion (36). As RAN assessments depend on a quasi-reading structure, one must be mindful that efficient reading demands not only accurate and rapid eye movements, but also continuous processing of information obtained from sequential fixations. The participant must identify the number of interest, verbalize that number, direct attention appropriately to the next number, and plan the subsequent ocular motor movement (42). Prior studies have been unable to determine whether ISI prolongation during KD test completion post-concussion is due to prolonged fixation from attentional or language factors, to excessive saccadic latency (i.e., delayed target detection, prolonged saccade planning, impaired release from gaze holding, or faulty motor recruitment), or to both. However, in the case of sandbagging, it is improbable that the motor initiation of a triggered saccade would be pathologically altered; rather, the fixational period is likely artificially elevated by intentionally pausing at each number, thus voluntarily inhibiting the next saccade.

Saccadic kinematics

Saccade velocity and acceleration are largely controlled in the brainstem. Excitatory burst neurons in the horizontal and vertical saccade centres control saccadic initiation at the level of the pons and midbrain (43,44). While neural lesions in these burst neuron centres lead to saccade slowing, it would be improbable to volitionally or consciously control these gaze centres. The results from our study reveal that classical measures of saccadic function, inclusive of velocity, duration, acceleration, and deceleration, showed no differences between sandbag and best effort trials. Therefore, main sequence relationships between saccade amplitude and duration and between saccade amplitude and peak velocity were similar between conditions. Hence, saccade kinematics as controlled through the paramedian pontine reticular formation and the midbrain rostral interstitial nucleus of the medial longitudinal fasciculus do not appear sensitive to volitional overlay. In fact, saccades are also not sensitive to kinematic impairment post-concussion, as has been demonstrated in reflexive saccade paradigms in both sub-acute (45) and chronic concussion (46–48).

Limitations

The small participant sample in this pilot investigation is a limitation of the study. In particular, it leaves a vulnerability with regard to overfitting of the logistic model that included ocular motor metrics. Future work would benefit from a larger sample that is more diverse with regard to age, especially younger ages. This study included healthy non-athlete adults; sampling of athletes from various sporting disciplines would be useful, from the youth setting up through the professional ranks, where competition is certainly fiercer and livelihood depends more critically on game-time performance. Another limitation is the lack of information regarding potential confounders of the participants’ ability to “sandbag”, including sleep, medications, and/or natural stimulants such as caffeine or guarana.

Conclusions

Despite a wealth of literature regarding RAN tasks and performance in concussion-based sideline assessment, little is known about how susceptible these tests are to volitional sandbagging, the willful degradation of baseline performance to buffer later post-injury results. While sandbagging strategies may keep a player in the game, they leave the player vulnerable to neurological sequelae that should be avoided. Investigation of a specific RAN task, the KD test, in which healthy participants without a history of concussion were asked to surreptitiously increase their test times under quantitative eye movement recording, allowed characterization of the eye movement behaviours that may be used in sandbag attempts. We found that sandbagged KD tests are easily detectable by objective eye movement characterization and differentiable both from best effort trials and from prior reported findings in concussion. Specifically, ISI prolongation, a greater number of saccades, and a greater number of wrong-direction saccades suggest “sandbagging” rather than best effort on KD testing. Such values detected on baseline assessment may suggest an invalid test score. Objective eye movement recordings during RAN task performance show promise for confirming best effort on baseline assessment, as well as for identifying red flags consistent with intentional sandbagging.

Funding

NICHD and NCMRR, National Institutes of Health Rehabilitation Medicine Scientist Training Program [5K12HD001097] (JRR); Empire Clinical Research Investigator Program (ECRIP).

Footnotes

Disclosure of interest

No author has received any financial compensation or consultant fees from King-Devick Test, Inc. No author has other disclosures pertinent to this study.

Consent to participate

Written informed consent for participation was obtained from each participant.

Consent for publication

Written informed consent for publication was obtained from each participant. No private health information is included for publication.

Ethics approval

All research protocols were approved by the NYU Institutional Review Board.

Data availability

Original data will be made available if requested.

References

1. Davis GA, Ellenbogen RG, Bailes J, Cantu RC, Johnston KM, Manley GT, Nagahiro S, Sills A, Tator CH, McCrory P. The Berlin international consensus meeting on concussion in sport. Neurosurgery. 2018;82(2):232–36. doi: 10.1093/neuros/nyx344.
2. Langlois JA, Rutland-Brown W, Wald MM. The epidemiology and impact of traumatic brain injury: a brief overview. J Head Trauma Rehabil. 2006;21(5):375–78. doi: 10.1097/00001199-200609000-00001.
3. Zuckerman SL, Kerr ZY, Yengo-Kahn A, Wasserman E, Covassin T, Solomon GS. Epidemiology of sports-related concussion in NCAA athletes from 2009–2010 to 2013–2014: incidence, recurrence, and mechanisms. Am J Sports Med. 2015;43(11):2654–62. doi: 10.1177/0363546515599634.
4. Fallon S, Akhand O, Hernandez C, Galetta MS, Hasanaj L, Martone J, Webb N, Drattell J, Amorapanth P, Rizzo JR, et al. MULES on the sidelines: a vision-based assessment tool for sports-related concussion. J Neurol Sci. 2019;402:52–56.
5. Cobbs L, Hasanaj L, Amorapanth P, Rizzo JR, Nolan R, Serrano L, Raynowska J, Rucker JC, Jordan BD, Galetta SL, et al. Mobile Universal Lexicon Evaluation System (MULES) test: a new measure of rapid picture naming for concussion. J Neurol Sci. 2017;372:393–98.
6. Mucha A, Collins MW, Elbin RJ, Furman JM, Troutman-Enseki C, DeWolf RM, Marchetti G, Kontos AP. A brief Vestibular/Ocular Motor Screening (VOMS) assessment to evaluate concussions: preliminary findings. Am J Sports Med. 2014;42(10):2479–86. doi: 10.1177/0363546514543775.
7. Moran RN, Covassin T, Elbin RJ, Gould D, Nogle S. Reliability and normative reference values for the Vestibular/Ocular Motor Screening (VOMS) tool in youth athletes. Am J Sports Med. 2018;46(6):1475–80. doi: 10.1177/0363546518756979.
8. Echemendia RJ, Meeuwisse W, McCrory P, Davis GA, Putukian M, Leddy J, Makdissi M, Sullivan SJ, Broglio SP, Raftery M, et al. The Sport Concussion Assessment Tool 5th Edition (SCAT5): background and rationale. Br J Sports Med. 2017;51(11):848–50. doi: 10.1136/bjsports-2017-097506.
9. Galetta KM, Liu M, Leong DF, Ventura RE, Galetta SL, Balcer LJ. The King-Devick test of rapid number naming for concussion detection: meta-analysis and systematic review of the literature. Concussion. 2016;1(2):CNC8. doi: 10.2217/cnc.15.8.
10. McCrory P, Johnston K, Meeuwisse W, Aubry M, Cantu R, Dvorak J, Graf-Baumann T, Kelly J, Lovell M, Schamasch P. Summary and agreement statement of the 2nd international conference on concussion in sport, Prague 2004. Br J Sports Med. 2005;39(4):196–204. doi: 10.1136/bjsm.2005.018614.
11. McCrea M, Kelly JP, Kluge J, Ackley B, Randolph C. Standardized assessment of concussion in football players. Neurology. 1997;48(3):586–88. doi: 10.1212/WNL.48.3.586.
12. McCrory P, Meeuwisse W, Johnston K, Dvorak J, Aubry M, Molloy M, Cantu R. Consensus statement on concussion in sport: the 3rd international conference on concussion in sport held in Zurich, November 2008. Br J Sports Med. 2009;43(Suppl 1):i76–90. doi: 10.1136/bjsm.2009.058248.
  • 12.McCrory P, Meeuwisse W, Johnston K, Dvorak J, Aubry M, Molloy M, Cantu R. Consensus statement on concussion in sport: the 3rd international conference on concussion in sport held in Zurich. Br J Sports Med. 2009;43(Suppl 1):i76–90. November 2008. doi: 10.1136/bjsm.2009.058248. [DOI] [PubMed] [Google Scholar]
  • 13.DM T, Galetta KM, Phillips HW, Dziemianowicz EM, Wilson JA, Dorman ES, Laudano E, Galetta SL, Balcer LJ. Sports-related concussion: anonymous survey of a collegiate cohort. Neurol Clin Pract. 2013;3(4):279–87.doi: 10.1212/CPJ.0b013e3182a1ba22. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.JD S, Register-Mihalik JK, Mihalik JP, Kerr ZY, Guskiewicz KM. Identifying impairments after concussion: normative data versus individualized baselines. Med Sci Sports Exerc. 2012;44(9):1621–28.doi: 10.1249/MSS.0b013e318258a9fb. [DOI] [PubMed] [Google Scholar]
  • 15.CT G, Johnson LG. Reliability and validity of a computerized neurocognitive test battery, CNS vital signs. Arch Clin Neuropsychol. 2006;21(7):623–43.doi: 10.1016/j.acn.2006.05.007. [DOI] [PubMed] [Google Scholar]
  • 16.Schatz P, Glatts C. “Sandbagging” baseline test performance on imPACT, without detection, is more difficult than it appears. Arch Clin Neuropsychol. 2013;28(3):236–44.doi: 10.1093/arclin/act009. [DOI] [PubMed] [Google Scholar]
  • 17.Erdal K. Neuropsychological testing for sports-related concussion: how athletes can sandbag their baseline testing without detection. Arch Clin Neuropsychol. 2012;27(5):473–79.doi: 10.1093/arclin/acs050. [DOI] [PubMed] [Google Scholar]
  • 18.Gaudet CE, Weyandt LL. Immediate Post-Concussion and Cognitive Testing (ImPACT): a systematic review of the prevalence and assessment of invalid performance. Clin Neuropsychol. 2017;31(1):43–58.doi: 10.1080/13854046.2016.1220622. [DOI] [PubMed] [Google Scholar]
  • 19.MN A, Lempke LB, Bell DH, Lynall RC, Schmidt JD. The ability of CNS vital signs to detect coached sandbagging performance during concussion baseline testing: a randomized control trial. Brain Inj. 2020;34(3):369–74.doi: 10.1080/02699052.2020.1724332. [DOI] [PubMed] [Google Scholar]
  • 20.Schatz P, Elbin RJ, Anderson MN, Savage J, Covassin T. Exploring sandbagging behaviors, effort, and perceived utility of the imPACT baseline assessment in college athletes.. Sport Exerc Perform Psychol. 2017;6(3):243–51.doi: 10.1037/spy0000100. [DOI] [Google Scholar]
  • 21.AJ S, Alosco ML, Fedor A, Gunstad J. Invalid performance and the imPACT in national collegiate athletic association division I football players. J Athl Train. 2013;48(6):851–55.doi: 10.4085/1062-6050-48.6.20. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Akhand O, Rizzo JR, Rucker JC, Hasanaj L, Galetta SL, Balcer LJ. History and future directions of vision testing in head trauma. J Neuroophthalmol. 2019;39(1):68–81.doi: 10.1097/WNO.0000000000000726. [DOI] [PubMed] [Google Scholar]
  • 23.Akhand O, Galetta MS, Cobbs L, Hasanaj L, Webb N, Drattell J, Amorapanth P, Rizzo JR, Nolan R, Serrano L, et al. The new Mobile Universal Lexicon Evaluation System (MULES): A test of rapid picture naming for concussion sized for the sidelines. J Neurol Sci. 2018;387:199–204. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Dahan N, Moehringer N, Hasanaj L, Serrano L, Joseph B, Wu S, Nolan-Kenney R, Rizzo JR, Rucker JC, Galetta SL, et al. The SUN test of vision: investigation in healthy volunteers and comparison to the mobile universal lexicon evaluation system (MULES). J Neurol Sci. 2020;415(116953):116953.doi: 10.1016/j.jns.2020.116953. [DOI] [PubMed] [Google Scholar]
  • 25.JR R, Hudson TE, Dai W, Desai N, Yousefi A, Palsana D, Seelesnick I, Balcer LJ, Galetta SL, Rucker JC. Objectifying eye movements during rapid number naming: methodology for assessment of normative data for the king–devick test. J Neurol Sci. 2016;362:232–39.doi: 10.1016/j.jns.2016.01.045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.MK O, Marutani JK, Rouse MW, DeLand PN. Reliability study of the pierce and king-devick saccade tests. Am J Optom Physiol Opt. 1986;63(6):419–24.doi: 10.1097/00006324-198606000-00005. [DOI] [PubMed] [Google Scholar]
  • 27.KM G, Barrett J, Allen M, Madda F, Delicata D, Tennant AT, Branas CC, Maguire MG, Messner LV, Devick S, et al. The king-devick test as a determinant of head trauma and concussion in boxers and MMA fighters. Neurology. 2011;76(17):1456–62. doi: 10.1212/WNL.0b013e31821184c9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Dai W, Selesnick I, Rizzo JR, Rucker J, Hudson T. A nonlinear generalization of the savitzky-golay filter and the quantitative analysis of saccades. J Vis. 2017;17(9):10.doi: 10.1167/17.9.10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.RH C, Williams ML. Neural computation of log likelihood in control of saccadic eye movements. Nature. 1995;377(6544):59–62. doi: 10.1038/377059a0. [DOI] [PubMed] [Google Scholar]
  • 30.DM G, Swets JA. Signal detection theory and psychophysics. New York: Wiley; 1954. [Google Scholar]
  • 31.Jamali M, Carriot J, Cullen KE, Cullen KE. Coding strategies in the otolith system differ for translational head motion vs. static orientation relative to gravity. eLife. 2019;8:e45573.doi: 10.7554/eLife.45573. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.TE H. Bayesian data analysis for the behavioral and neural sciences. Cambridge: Cambridge University Press; 2021. [Google Scholar]
  • 33.Gelman A, Carlin JB, Stern HS, Dunson DB, Vehtari A, Rubin DB. Bayesian data analysis. Boca Raton, Florida: Chapman & Hall; 2013. [Google Scholar]
  • 34.KM G, Morganroth J, Moehringer N, Mueller B, Hasanaj L, Webb N, Civitano C, Cardone DA, Silverio A, Galetta SL, et al. Adding vision to concussion testing: a prospective study of sideline testing in youth and collegiate athletes. J Neuroophthalmol. 2015;35(3):235–41.doi: 10.1097/WNO.0000000000000226. [DOI] [PubMed] [Google Scholar]
  • 35.KM G, Brandes LE, Maki K, Dziemianowicz MS, Laudano E, Allen M, Lawler K, Sennett B, Wiebe D, Devick S, et al. The king-devick test and sports-related concussion: study of a rapid visual screening tool in a collegiate cohort. J Neurol Sci. 2011;309(1–2):34–39.doi: 10.1016/j.jns.2011.07.039. [DOI] [PubMed] [Google Scholar]
  • 36.JR R, Hudson TE, Dai W, Birkemeier J, Pasculli RM, Selesnick I, Balcer LJ, Galetta SL, Rucker JC. Rapid number naming in chronic concussion: eye movements in the king–devick test. Ann Clin Transl Neurol. 2016;3(10):801–11. doi: 10.1002/acn3.345. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Ohtsuka K, Noda H. The effect of microstimulation of the oculomotor vermis on discharges of fastigial neurons and visually-directed saccades in macaques. Neurosci Res. 1991;10(4):290–95.doi: 10.1016/0168-0102(91)90086-E. [DOI] [PubMed] [Google Scholar]
  • 38.EF K, Kenney DV, Gooley SG, Pratt SE, McGillis SL. Targeting errors and reduced oculomotor range following ablations of the superior colliculus or pretectum/thalamus. Behav Brain Res. 1986;22(3):191–210.doi: 10.1016/0166-4328(86)90064-1. [DOI] [PubMed] [Google Scholar]
  • 39.Pierrot-Deseilligny C, Rosa A, Masmoudi K, Rivaud S, Gaymard B. Saccade deficits after a unilateral lesion affecting the superior colliculus. J Neurol Neurosurg Psychiatry. 1991;54:1106–09. doi: 10.1136/jnnp.54.12.1106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.RJ L, Kennard C. Using saccades as a research tool in the clinical neurosciences. Brain. 2004;127(Pt 3):460–77.doi: 10.1093/brain/awh035. [DOI] [PubMed] [Google Scholar]
  • 41.JC R, Calandrini DM, Carpenter RH. A single mechanism for the timing of spontaneous and evoked saccades. Exp Brain Res. 2008;187(2):283–93.doi: 10.1007/s00221-008-1304-1. [DOI] [PubMed] [Google Scholar]
  • 42.MT K, Schmidt PP. Effect of oculomotor and other visual skills on reading performance: a literature review. Optom Vis Sci. 1996;73(4):283–92.doi: 10.1097/00006324-199604000-00011. [DOI] [PubMed] [Google Scholar]
  • 43.AK H, Buttner-Ennever JA, Suzuki Y, Henn V. Histological identification of premotor neurons for horizontal saccades in monkey and man by parvalbumin immunostaining. J Comp Neurol. 1995;359(2):350–63.doi: 10.1002/cne.903590212. [DOI] [PubMed] [Google Scholar]
  • 44.AK H, Helmchen C, Wahle P. GABAergic neurons in the rostral mesencephalon of the macaque monkey that control vertical eye movements. Ann N Y Acad Sci. 2003;1004(1):19–28.doi: 10.1196/annals.1303.003. [DOI] [PubMed] [Google Scholar]
  • 45.MH H, Anderson TJ, Jones RD, Dalrymple-Alford JC, Frampton CM, Ardagh MW. Eye movement and visuomotor arm movement deficits following mild closed head injury. Brain: A Journal of Neurology. 2004;127(Pt 3):575–90.doi: 10.1093/brain/awh066. [DOI] [PubMed] [Google Scholar]
  • 46.MH H, Jones RD, Dalrymple-Alford JC, Frampton CM, Ardagh MW, Anderson TJ. Motor deficits and recovery during the first year following mild closed head injury. Brain Injury. 2006;20(8):807–24.doi: 10.1080/02699050600676354. [DOI] [PubMed] [Google Scholar]
  • 47.MF K, Little DM, Donnell AJ, Reilly JL, Simonian N, Sweeney JA. Oculomotor function in chronic traumatic brain injury. Cognitive and Behavioral Neurology: Official Journal of the Society for Behavioral and Cognitive Neurology. 2007;20(3):170–78. doi: 10.1097/WNN.0b013e318142badb. [DOI] [PubMed] [Google Scholar]
  • 48.MH H, Jones RD, Macleod AD, Snell DL, Frampton CM, Anderson TJ. Impaired eye movements in post-concussion syndrome indicate suboptimal brain function beyond the influence of depression, malingering or intellectual ability. Brain: A Journal of Neurology. 2009;132(Pt 10):2850–70.doi: 10.1093/brain/awp181. [DOI] [PubMed] [Google Scholar]
