Archives of Clinical Neuropsychology. 2017 Aug 1;33(2):247–253. doi: 10.1093/arclin/acx067

Validity of a Computerized Cognitive Battery in Children and Adolescents with Neurological Diagnoses

Vickie Plourde 1,2, Marianne Hrabok 3,4, Elisabeth M S Sherman 5, Brian L Brooks 1,2,6,7,8

Abstract

Objective

Little is known about the validity of computerized cognitive batteries, such as CNS Vital Signs (CNSVS), in pediatric patients. The purpose of this study was to examine convergent and divergent validity of the CNSVS in a clinical pediatric sample with neurological diagnoses.

Method

Participants included 123 pediatric patients assessed in a tertiary care setting as part of clinical care. CNSVS (Memory, Psychomotor Speed, Reaction Time, Complex Attention, and Cognitive Flexibility domains, and a Neurocognition Index) and paper-and-pencil neuropsychological measures assessing learning, memory, processing speed, reaction time, attention, and executive functioning were administered.

Results

Most correlations between CNSVS domain scores and neuropsychological measures assessing similar constructs were medium in strength. With the exception of stronger correlations between psychomotor speed tests, correlations between tests of similar constructs were not significantly higher than those between dissimilar constructs.

Conclusions

These results provide support for validity of the CNSVS battery, but also caution that many abilities are inter-correlated.

Keywords: Childhood neurologic disorders, Assessment, Convergent and divergent validity

Introduction

Many medical and neurological conditions are associated with changes in cognitive functioning (Strauss, Sherman, & Spreen, 2006), and timely identification of neuropsychological problems is associated with benefits in medical care. Neuropsychological assessment via standard paper-and-pencil measures continues to be the gold standard for assessment of cognitive and behavioral functioning, but may not always be practical in certain situations. Computer-administered cognitive tests are useful in clinical care settings as screening assessments or supplements to a traditional battery, as well as to rapidly track cognitive change over time (e.g., Bauer et al., 2012; Brooks & Barlow, 2011; Brooks & Sherman, 2012). Neuropsychologists are increasingly using computerized cognitive testing in their practice. This may be explained in part by advantages of computerized batteries, including: brevity; portability; automated scoring; enhanced accuracy of measurement and scoring; repeatability using alternate forms; and fewer administration materials (Bauer et al., 2012; Bilder, 2011; Crook, Kay, & Larrabee, 2009; De Marco & Broshek, 2016; Parsey & Schmitter-Edgecombe, 2013).

Given the potential practical value of computerized screening batteries in some circumstances, special considerations have been proposed to guide accurate and appropriate use of computerized batteries to maximize clinical utility and minimize misuse (Bauer et al., 2012). One consideration is the importance of subjecting computer-administered measures to rigorous psychometric evaluation, as has been done with paper-and-pencil based neuropsychological measures conventionally used by neuropsychologists in clinical care settings (Bauer et al., 2012; Strauss et al., 2006). Specifically, investigating psychometric properties such as reliability and validity of these tools is imperative (Bauer et al., 2012).

Many computerized cognitive batteries have been developed in the last few years, and one of the most commonly used (Dede, Zalonis, Gatzonis, & Sakas, 2015) is CNS Vital Signs (CNSVS; www.cnsvs.com). This battery can be administered to persons 7–90 years of age to rapidly screen cognitive functioning (e.g., memory, psychomotor speed, reaction time, attention, and executive functions). Previous research in adults has suggested adequate test–retest reliability, discriminability of domain scores between various adult clinical groups, and correlations of medium and large strength with some traditional paper-and-pencil neuropsychological measures assessing learning and memory, psychomotor speed, and attention/executive functioning (e.g., Gualtieri & Johnson, 2006).

Despite increasing interest in using the CNSVS as a screening tool in pediatric populations, only a small number of studies have been conducted to support the reliability and validity of this battery in youth. Test–retest reliabilities of the CNSVS in youth range from marginal (r = .63, Complex Attention) to high (r = .82, Psychomotor Speed; as reported in Brooks and Barlow, 2011). In a case study using reliable change methodology, the CNSVS was useful in tracking cognitive response to steroid treatment in an adolescent patient with Hashimoto’s encephalopathy (Brooks & Barlow, 2011). Other studies have shown that CNSVS domain scores effectively differentiate controls from pediatric samples with neurological disorders (Brooks & Sherman, 2012), obesity (Bozkurt et al., 2016), familial Mediterranean fever (Özer, Bozkurt, Yilmaz, Sonmezgoz, & Butun, 2015), depression (Brooks, Iverson, Sherman, & Roberge, 2010), and mild traumatic brain injury (mTBI) assessed in the emergency department (Brooks, Khan, Daya, Mikrogianakis, & Barlow, 2014).

The existing literature suggests the CNSVS can be used as a brief screen of cognitive functioning in research and in clinical settings by healthcare professionals with training in test administration and interpretation. However, psychometric evaluation of this battery in youth remains insufficient according to the guidelines set forth by the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology (Bauer et al., 2012). Test validity of this battery, which pertains to evidence that a test score measures what it purports to measure with a given population and in a given situation (Sherman, Brooks, Iverson, Slick, & Strauss, 2011), has not been adequately studied in pediatric patients.

Therefore, the purpose of this study was to examine construct validity, more specifically convergent and divergent validity, of the CNSVS in pediatric patients with neurological diagnoses. Computerized cognitive screening tests from the CNSVS were expected to be associated with conventional neuropsychological tests that are routinely used in a paper-and-pencil screening assessment and that measure the same theoretical constructs. Based on the construct validity results presented in adults by Gualtieri and Johnson (2006), we hypothesized that CNSVS domain scores would yield significant correlations of medium strength with neuropsychological measures.

Methods

Participants

Participants included children and adolescents (7–16 years old) who were evaluated through the neuropsychology service at a tertiary hospital as part of routine clinical care. Patients were referred for assessment by neurologists, neurosurgeons, physiatrists, and pediatricians if significant concerns related to their cognitive or overall functioning were raised by the family or the medical team. Data were included if patients met the following criteria: (a) neurological diagnosis (e.g., epilepsy, traumatic brain injury, stroke, hydrocephalus); (b) administration of the CNSVS battery as part of their neuropsychological assessment; (c) no prior exposure to the CNSVS; and (d) scores available for the other neuropsychological measures presented below. There were 177 children who met inclusion criteria a–c and completed at least one subtest of the CNSVS; the final sample included N = 123 participants who completed the computerized battery and at least one paper-and-pencil measure. The number of children who completed each neuropsychological measure varied from n = 91 to n = 123. Data were collected through clinical care, which explains the partially missing data.

Measures

CNSVS is a 30-min computerized battery composed of seven subtests that yield an overall summary score, the Neurocognition Index (NCI), five primary domain scores, and four secondary domain scores. These scores are standardized to a mean of 100 and a standard deviation of 15. Briefly, the NCI is the average of the five primary domain scores (Memory, Psychomotor Speed, Reaction Time, Complex Attention, and Cognitive Flexibility). The Memory domain score represents overall memory abilities and is a composite reflecting the Verbal and Visual Memory scores. Verbal Memory corresponds to memory for words and includes correct responses (hits and passes for immediate and delayed trials) from the Verbal Memory test. Visual Memory corresponds to memory for geometric designs and includes correct responses (hits and passes for immediate and delayed trials) from the Visual Memory test. The Psychomotor Speed domain score represents visual-motor processing speed and includes the total of right and left taps from the Finger Tapping Test and the total correct responses on the Symbol Digit Coding Test. The Reaction Time domain score represents decision-making speed on a task of inhibitory control and is the average of the two complex reaction time scores from the Stroop Test. The Complex Attention domain score represents sustained and divided attention and is the sum of errors on the Continuous Performance Test, the Shifting Attention Test, and the Stroop Test. Finally, the Cognitive Flexibility domain represents inhibition and mental flexibility and is computed from the number of correct responses on the Shifting Attention Test, minus the number of errors on the Shifting Attention Test and the commission errors from the Stroop Test. Additional information is available from the test publisher (see www.cnsvs.com). The CNSVS testing software was installed on two identical IBM Lenovo ThinkPad laptop computers and ran without an internet connection.
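
The composite logic just described can be sketched compactly. The following is a minimal illustration only, assuming domain scores that are already standardized (M = 100, SD = 15); the publisher's actual scoring algorithm is proprietary, and the function and key names here are hypothetical.

```python
from statistics import mean

# Hypothetical labels for the five primary CNSVS domains; the publisher's
# actual scoring code is proprietary, so this sketch is illustrative only.
PRIMARY_DOMAINS = ["memory", "psychomotor_speed", "reaction_time",
                   "complex_attention", "cognitive_flexibility"]

def neurocognition_index(domain_scores):
    """NCI: the average of the five primary domain scores, each already
    standardized to a mean of 100 and a standard deviation of 15."""
    return mean(domain_scores[d] for d in PRIMARY_DOMAINS)

# Example: a patient at the normative mean in every primary domain
scores = {d: 100.0 for d in PRIMARY_DOMAINS}
print(neurocognition_index(scores))  # 100.0
```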

The paper-and-pencil neuropsychological measures used as comparisons for the CNSVS are widely used, standardized, and norm-referenced, with published psychometric and normative information (see Strauss et al., 2006). Together, these measures can constitute a rapid screening battery that is roughly equivalent to the CNSVS in administration time and covers similar cognitive domains. First, learning and memory were assessed with the California Verbal Learning Test, Children’s Version (CVLT-C; Delis, Kramer, Kaplan, & Ober, 1994). Second, psychomotor processing speed was assessed using the Wechsler Intelligence Scale for Children, Fourth Edition Processing Speed Index (WISC-IV PSI; Wechsler, 2003; comprising the Digit Symbol Coding and Symbol Search subtest scores) and the Naming condition combined score (time and errors) from the NEPSY-II Inhibition subtest (Korkman, Kirk, & Kemp, 2007). Third, executive functioning (e.g., complex attention/cognitive flexibility) was measured using two Inhibition subtest conditions (Inhibition and Inhibition-Switching combined scores for time and errors) from the NEPSY-II.

All neuropsychological variables, as well as the neurological and sociodemographic information presented in the study, were extracted from medical chart review. Collection and use of these data were approved by the Conjoint Health Research Ethics Board (CHREB) of the University of Calgary.

Data Analyses

All analyses were completed using SPSS v.19. Assumptions for parametric analyses were met. Variables were normally distributed, with the exception of the CVLT-C Recognition Hits score, for which a square-root transformation was performed and used for all analyses. First, likelihood-ratio tests were performed on sex and parent education, and one-way analyses of variance (ANOVAs) were performed on age and the CNSVS domain scores, to investigate potential differences between the sample used for the study (n = 123) and the other patients who had scores available only for the CNSVS (n = 54). These analyses were performed to rule out selection bias, that is, to verify that the participants included in the study sample were similar to those who completed only the computerized testing. Second, one-way ANOVAs were performed as preliminary analyses to investigate potential sex and age differences (comparing 7–11 year olds with 12–16 year olds), as well as differences between diagnostic groups, on all scores. To control for multiple comparisons, the significance level was set a priori at p < .01.
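
As a minimal sketch of these preliminary comparisons (the original analyses were run in SPSS v.19), the following assumes the chart-review data sit in a pandas DataFrame; the file name and column names (included, sex, nci) are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical chart-review data; file and column names are assumptions.
df = pd.read_csv("cnsvs_chart_review.csv")

# One-way ANOVA on a CNSVS score (here the NCI), comparing included patients
# (completed both batteries) with those who completed only the CNSVS.
groups = [g["nci"].dropna() for _, g in df.groupby("included")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"NCI: F = {f_stat:.2f}, p = {p_val:.3f}")

# Likelihood-ratio test on sex; lambda_="log-likelihood" requests the
# G (log-likelihood ratio) statistic instead of the default Pearson chi-square.
table = pd.crosstab(df["included"], df["sex"])
g2, p, dof, _ = stats.chi2_contingency(table, lambda_="log-likelihood",
                                       correction=False)
print(f"Sex: G2({dof}) = {g2:.2f}, p = {p:.2f}")
```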

To test the main objective of the study, Pearson correlations were computed, and the strength of correlations was interpreted using Cohen’s conventions (Cohen, 1992): small (r ≥ .10), medium (r ≥ .30), and large (r ≥ .50). Partial correlations controlling for age group were computed for cognitive scores that differed between age groups. Only correlations of medium or large strength were interpreted as meaningful. Finally, we conducted two-tailed Fisher r-to-z comparisons of the correlations between CNSVS scores and similar constructs with those between CNSVS scores and less similar constructs, using http://vassarstats.net/rdiff.html.
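
For reference, the two-tailed comparison of two correlation coefficients via Fisher's r-to-z transformation (the computation performed by the VassarStats page cited above, which treats the two coefficients as coming from independent samples) can be sketched as follows; the function name is an assumption.

```python
import numpy as np
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed test of the difference between two correlation coefficients
    using Fisher's r-to-z transformation, treating the samples as independent
    (as in the VassarStats rdiff calculator)."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)    # Fisher r-to-z transform
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # SE of the difference
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))              # two-tailed p value
    return z, p

# Example with Table 2 values: Psychomotor Speed vs. WISC-IV PSI (r = .48)
# compared with Psychomotor Speed vs. CVLT-C Trials 1-5 (r = .13), both n = 100.
z, p = compare_correlations(0.48, 100, 0.13, 100)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 2.73, in line with footnote a of Table 2
```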

Results

Descriptive Statistics: Demographic, Clinical, and Cognitive Measures

There were no significant differences between the 123 included participants and the 54 patients excluded for not completing both the computerized and paper-and-pencil screening batteries on any of the CNSVS scores (NCI: F(1, 127) = 6.72; Memory: F(1, 136) = 1.79; Psychomotor Speed: F(1, 137) = .40; Reaction Time: F(1, 175) = 5.95; Complex Attention: F(1, 133) = 2.77; Cognitive Flexibility: F(1, 166) = .14; all ps ≥ .01), age (F(1, 175) = 1.86, p = .18), sex (χ2(1) = .09, p = .76), or parent education (mother: χ2(13) = 8.72, p = .79; father: χ2(13) = 10.78, p = .63).

Descriptive statistics for the final sample are presented in Table 1. Most children for whom ethnicity data were available were Caucasian, and parents were well educated (58%–66% with college or university training). The most common neurological condition was epilepsy, followed by traumatic brain injury, stroke, and hydrocephalus. A mixed-diagnoses group included encephalitis, genetic, cerebrovascular, or autoimmune diseases, and other medical conditions with neurological involvement. There were no significant differences on CNSVS scores across sex (NCI: F(1, 91) = .14; Memory: F(1, 97) = .01; Psychomotor Speed: F(1, 98) = 1.48; Reaction Time: F(1, 121) = .00; Complex Attention: F(1, 94) = .03; Cognitive Flexibility: F(1, 114) = .27; all ps ≥ .01) or diagnostic groups (NCI: F(4, 88) = 1.86; Memory: F(4, 94) = .87; Psychomotor Speed: F(4, 95) = 1.50; Reaction Time: F(4, 118) = .91; Complex Attention: F(4, 91) = 1.59; Cognitive Flexibility: F(4, 111) = 1.85; all ps ≥ .01). There were differences between age groups (7–11 years old and 12–16 years old) on the NCI (F(1, 91) = 7.85, p = .006), the CNSVS Memory domain score (F(1, 97) = 7.55, p = .007), and the CNSVS Visual Memory score (F(1, 97) = 10.14, p = .002), with older participants obtaining higher scores; there were no significant differences on other cognitive scores (ps ≥ .01).

Table 1.

Demographic information

Descriptive variables N Percentage M SD Range
Age (years) 123 -- 12.7 2.7 7.0–16.8
 7–11 years 49 39.8
 12–16 years 74 60.2
Gender
 Male 63 51.2
 Female 60 48.8
Ethnicity
 Caucasian 78 63.4
 Hispanic 4 3.3
 Asian 3 2.4
 Middle-Eastern American 2 1.6
 African American 1 .8
 Other 5 4.0
 Unknown 30 24.4
Mother’s education 117 -- 14.0 2.5 4–20
 Less than high school 9 7.2
 High school 27 22.0
 Some post-secondary 45 36.6
 University degree 36 29.2
 Not documented 6 4.9
Father’s education 110 -- 14.0 2.7 4–20
 Less than high school 15 12.2
 High school 24 19.5
 Some post-secondary 27 21.9
 University degree 44 35.8
 Not documented 13 10.6
Diagnoses
 Epilepsy 49 39.8
 Traumatic brain injury 27 22.0
 Stroke 18 14.6
 Hydrocephalus 8 6.5
 Other 21 17.1

Note: M = mean; SD = standard deviation.

Descriptive data for the cognitive measures used in the current study are provided in Table 2. Mean performances for the children ranged from low average to average on all CNSVS and conventional neuropsychological measures.

Table 2.

Descriptive statistics and correlations between CNSVS and neuropsychological measures

CNSVS domain scores CVLT-C T1–5 CVLT-C LDFR CVLT-C REC 1 Wechsler PSI NEPSY-II IN-NA NEPSY-II IN-IN NEPSY-II IN-SW
M (SD) n 48.7 (10.2) 123 −0.09 (1.06) 123 −0.05 (0.91) 123 87.0 (14.5) 123 8.1 (3.1) 123 8.0 (3.8) 123 8.7 (3.6) 119
Neurocognition Index2 89.2 (13.4) 93 r .34** .31** .35** .55** .46** .44** .60**
p <.001 .001 <.001 <.001 <.001 <.001 <.001
n 93 93 93 93 93 93 91
Memory2 92.2 (19.4) 99 r .32** .32** .30** .36** .29** .21* .38**
p .001 .001 .002 <.001 .003 .033 <.001
n 99 99 99 99 99 99 96
 Verbal Memory 93.8 (21.1) 100 r .32** .33** .23* .28** .25* .14 .26**
p .001 .001 .023 .005 .012 .180 .011
n 100 100 100 100 100 100 97
 Visual Memory2 93.7 (15.1) 99 r .19* .25* .24* .33** .21* .20* .37**
p .046 .011 .013 <.001 .030 .044 <.001
n 99 99 99 99 99 99 96
Psychomotor Speed 84.8 (12.5) 100 r .13ad .17be .12cf .48**abc .45**def .40** .54**
p .189 .085 .219 <.001 <.001 <.001 <.001
n 100 100 100 100 100 123 97
Reaction Time 94.3 (19.7) 123 r .27** .17 .30** .29** .33** .35** .39**
p .002 .055 .001 .001 <.001 <.001 <.001
n 123 123 123 123 123 123 119
Complex Attention 86.7 (19.6) 96 r .17 .14 .16 .32** .32** .28** .34**
p .089 .190 .124 .002 .002 .005 .001
n 96 96 96 96 96 96 93
Cognitive Flexibility 89.2 (17.9) 116 r .21* .21* .18 .38** .29** .22* .40**
p .022 .027 .058 <.001 .002 .017 <.001
n 116 116 116 116 116 116 113

Note: M = mean; SD = standard deviation; r = correlation; n = number; p = p values. CNSVS = CNS Vital Signs; CVLT-C = California Verbal Learning Test Children’s version; T1–5 = Trials 1–5; LDFR = Long-Delay Free Recall; REC = Recognition; PSI = Processing Speed Index; IN-NA = Inhibition-Naming condition; IN-IN = Inhibition-Inhibition condition; IN-SW = Inhibition-Switching condition. *p < .05. **p < .01. Correlations were performed using pairwise deletion of missing data (n = 91–123). Correlations exceeding r ≥ .30 (medium) are in bold and highlighted in gray. Scores presented are adjusted for age. 1Square root transformed score was used for the correlations. az = 2.74**, bz = 2.45*, cz = 2.80**, dz = 2.48*, ez = 2.18*, fz = 2.53*. 2Partial correlations controlling for age group differences were performed.

Convergent and Divergent Validity: Correlations Between CNSVS Domain Scores and Traditional Neuropsychological Measures

Correlations between CNSVS domain scores and paper-and-pencil neuropsychological measures are also presented in Table 2. A number of correlations were medium in strength. First, there were significant correlations of medium strength between the CNSVS Verbal Memory score and CVLT-C variables (Total Trials 1–5 and Long-Delay Free Recall). CNSVS Visual Memory scores had medium correlations with two non-memory scores (WISC-IV PSI and the NEPSY-II Inhibition-Switching subtest). Second, correlations between the CNSVS Psychomotor Speed domain and measures of processing speed (WISC-IV PSI; NEPSY-II Inhibition-Naming subtest) were significant and medium in strength. Third, CNSVS Reaction Time was moderately associated with one memory score (CVLT-C Recognition) as well as the three NEPSY-II subtest scores. Fourth, the associations between the CNSVS Complex Attention and Cognitive Flexibility domains and measures of processing speed (WISC-IV PSI; NEPSY-II Inhibition-Naming subtest) and cognitive flexibility (NEPSY-II Inhibition-Switching subtest) were also significant and medium in strength. Finally, correlations between CNSVS scores and similar constructs were not significantly different from correlations between CNSVS scores and dissimilar constructs, with one exception: the correlations between CNSVS Psychomotor Speed and the processing speed measures (WISC-IV PSI; NEPSY-II Inhibition-Naming subtest) were significantly higher (p < .01) than those between CNSVS Psychomotor Speed and the CVLT-C variables (Total Trials 1–5 and Recognition).

Discussion

The availability of computerized cognitive batteries, the advantages of using these measures for screening purposes, and their increasing use underscore the importance of research evaluating the psychometric properties of these measures and their validity as clinical tools. CNSVS is one of the most frequently studied brief computerized cognitive batteries (Dede, Zalonis, Gatzonis, & Sakas, 2015), and previous studies have supported the discriminability of CNSVS domain scores in pediatric samples (e.g., Bozkurt et al., 2016; Brooks & Sherman, 2012; Brooks et al., 2010, 2014; Özer et al., 2015). However, no known studies have investigated the construct validity of the CNSVS in a pediatric sample with neurological conditions. The main objective of this study was thus to examine CNSVS convergent and divergent validity in pediatric patients with neurological diagnoses.

Most correlations between CNSVS domain scores and paper-and-pencil neuropsychological measures assessing similar constructs were medium in strength, providing support for convergent validity. These results are consistent with those of Gualtieri and Johnson (2006), who compared CNSVS test scores with various neuropsychological measures in a heterogeneous sample of adults who were healthy or had a psychiatric diagnosis. However, support for divergent validity is lacking. Except for the CNSVS Psychomotor Speed domain, correlations between CNSVS domain scores and paper-and-pencil tests measuring similar constructs were not large in strength and were not significantly higher than cross-correlations with dissimilar constructs. One potential explanation is that these tests measure overlapping neuropsychological constructs (see Maruff et al., 2009 for an example of a study showing similar results). Accordingly, other work comparing paper-and-pencil neuropsychological measures has also shown correlations across nominally distinct constructs. For instance, Korkman, Kirk, and Kemp (2007) reported moderate correlations between some attention/executive functioning subtests of the NEPSY-II and measures of processing speed (WISC-IV) and memory (subtests of the Children’s Memory Scale).

Interestingly, the correlations between the CNSVS Psychomotor Speed domain score and paper-and-pencil processing speed measures were medium in strength and significantly larger than those between the CNSVS Psychomotor Speed domain score and learning/memory measures. These results provide support for both the convergent and divergent validity of this domain score and are consistent with previous studies comparing conventional processing speed measures with other computerized cognitive batteries, such as the Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) battery in adults with prior concussion (Iverson, Lovell, & Collins, 2005) and the Pediatric ImPACT battery in children (5–12 years old) with mild traumatic brain injury (Newman, Reesman, Vaughan, & Gioia, 2013).

This study has some limitations. First, the sample was heterogeneous, which limits conclusions specific to the sub-groups composing it. For instance, although we found no differences on the tests between sub-groups, we did not have sufficient statistical power to compare the pattern of correlations between groups. Second, this was a clinical convenience sample: participants were selected retrospectively, and only if they had completed the CNSVS for the first time along with other neuropsychological measures, so there is potential for recruitment bias. Despite these limitations, this study has strong ecological validity, as the data come from patients with neurological conditions evaluated at a pediatric tertiary hospital, a population highly likely to be assessed by neuropsychologists who use cognitive screening tools. Future studies are warranted to replicate and extend these results with more homogeneous pediatric samples, as well as to investigate the factor structure and criterion validity of the CNSVS battery.

Funding

Dr. Brian Brooks acknowledges salary funding from the Canadian Institutes of Health Research (CIHR). Dr. Vickie Plourde was financially supported for her fellowship by the Alberta Children's Hospital Research Institute, the Integrated Concussion Research Program at the University of Calgary, Cumming School of Medicine at the University of Calgary, and Alberta Innovates Health Solutions.

Conflict of Interest

Collection of computer-assisted cognitive data was partially supported by in-kind funding (i.e., test credits) to Dr. Brooks from the publisher of the CNS Vital Signs test. Drs. Brooks and Sherman receive royalties from Psychological Assessment Resources Inc. for the sales of pediatric neuropsychological tests [Child and Adolescent Memory Profile (ChAMP), Sherman & Brooks, 2015; Memory Validity Profile (MVP), Sherman & Brooks, 2015; Multidimensional Everyday Memory Ratings for Youth (MEMRY), Sherman & Brooks, 2017]. Drs. Hrabok, Sherman, and Brooks receive book royalties from Oxford University Press. None of the authors nor their families have a financial interest in the CNS Vital Signs.

Acknowledgments

The authors thank Dr. Helen Carlson for her assistance with data management, as well as (alphabetically) Kalina Askin, Hussain Daya, Christianne Laliberté, Courtney Habina, Andrea Jubinville, Lonna Mitchell, Emily Tam, and Julie Wershler for data entry.

References

1. Bauer R. M., Iverson G. L., Cernich A. N., Binder L. M., Ruff R. M., & Naugle R. I. (2012). Computerized neuropsychological assessment devices: Joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. The Clinical Neuropsychologist, 26, 177–196. doi: 10.1080/13854046.2012.663001
2. Bilder R. M. (2011). Neuropsychology 3.0: Evidence-based science and practice. Journal of the International Neuropsychological Society, 17, 7–13. doi: 10.1017/S1355617710001396
3. Bozkurt H., Özer S., Yılmaz R., Sönmezgöz E., Kazancı Ö., Erbaş O., et al. (2016). Assessment of neurocognitive functions in children and adolescents with obesity. Applied Neuropsychology: Child. Advance online publication. doi: 10.1080/21622965.2016.1150184
4. Brooks B. L., & Barlow K. M. (2011). A methodology for assessing treatment response in Hashimoto’s encephalopathy: A case study demonstrating repeated computerized neuropsychological testing. Journal of Child Neurology, 26, 786–791. doi: 10.1177/0883073810391532
5. Brooks B. L., Iverson G. L., Sherman E. M. S., & Roberge M.-C. (2010). Identifying cognitive problems in children and adolescents with depression using computerized neuropsychological testing. Applied Neuropsychology, 17, 37–43. doi: 10.1080/09084280903526083
6. Brooks B. L., Khan S., Daya H., Mikrogianakis A., & Barlow K. M. (2014). Neurocognition in the emergency department after a mild traumatic brain injury in youth. Journal of Neurotrauma, 31, 1744–1749. doi: 10.1089/neu.2014.3356
7. Brooks B. L., & Sherman E. M. S. (2012). Computerized neuropsychological testing to rapidly evaluate cognition in pediatric patients with neurologic disorders. Journal of Child Neurology, 27, 982–991. doi: 10.1177/0883073811430863
8. Cohen J. (1992). A power primer. Psychological Bulletin, 112, 155–159. doi: 10.1037/0033-2909.112.1.155
9. Crook T. H., Kay G. G., & Larrabee G. J. (2009). Computer-based cognitive testing. In Grant I., & Adams K. M. (Eds.), Neuropsychological assessment of neuropsychiatric and neuromedical disorders (pp. 84–100). New York, NY: Oxford University Press.
10. Dede E., Zalonis I., Gatzonis S., & Sakas D. (2015). Integration of computers in cognitive assessment and level of comprehensiveness of frequently used computerized batteries. Neurology Psychiatry and Brain Research, 21, 128–135. doi: 10.1016/j.npbr.2015.07.003
11. Delis D. C., Kramer J. H., Kaplan E., & Ober B. A. (1994). Manual for the California Verbal Learning Test for Children. New York: Psychological Corporation.
12. De Marco A. P., & Broshek D. K. (2016). Computerized cognitive testing in the management of youth sports-related concussion. Journal of Child Neurology, 31, 68–75. doi: 10.1177/0883073814559645
13. Gualtieri C. T., & Johnson L. G. (2006). Reliability and validity of a computerized neurocognitive test battery, CNS Vital Signs. Archives of Clinical Neuropsychology, 21, 623–643. doi: 10.1016/j.acn.2006.05.007
14. Iverson G. L., Lovell M. R., & Collins M. W. (2005). Validity of ImPACT for measuring processing speed following sports-related concussion. Journal of Clinical and Experimental Neuropsychology, 27, 683–689. doi: 10.1080/13803390490918435
15. Korkman M., Kirk U., & Kemp S. L. (2007). NEPSY II: Clinical and interpretative manual. San Antonio, TX: Psychological Corporation.
16. Maruff P., Thomas E., Cysique L., Brew B., Collie A., Snyder P., et al. (2009). Validity of the CogState brief battery: Relationship to standardized tests and sensitivity to cognitive impairment in mild traumatic brain injury, schizophrenia, and AIDS dementia complex. Archives of Clinical Neuropsychology, 24, 165–178. doi: 10.1093/arclin/acp010
17. Newman J. B., Reesman J. H., Vaughan C. G., & Gioia G. A. (2013). Assessment of processing speed in children with mild TBI: A “first look” at the validity of pediatric ImPACT. The Clinical Neuropsychologist, 27, 779–793. doi: 10.1080/13854046.2013.789552
18. Özer S., Bozkurt H., Yilmaz R., Sonmezgoz E., & Butun I. (2015). Evaluation of executive functions in children and adolescents with familial Mediterranean fever. Child Neuropsychology: A Journal on Normal and Abnormal Development in Childhood and Adolescence. Advance online publication. doi: 10.1080/09297049.2015.1108397
19. Parsey C. M., & Schmitter-Edgecombe M. (2013). Applications of technology in neuropsychological assessment. The Clinical Neuropsychologist, 27, 1328–1361.
20. Sherman E. M. S., Brooks B. L., Iverson G. L., Slick D. J., & Strauss E. (2011). Reliability and validity in neuropsychology. In Schoenberg M. R., & Scott J. G. (Eds.), The little black book of neuropsychology (pp. 873–892). New York, NY: Springer Science+Business Media LLC.
21. Strauss E., Sherman E. M. S., & Spreen O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary. New York: Oxford University Press.
22. Wechsler D. (2003). Wechsler Intelligence Scale for Children–Fourth Edition (WISC-IV). San Antonio, TX: The Psychological Corporation.
