Author manuscript; available in PMC 2014 Nov 1.
Published in final edited form as: Clin Neuropsychol. 2013 Sep 16;27(8). doi: 10.1080/13854046.2013.834971

Applications of Technology in Neuropsychological Assessment

Carolyn M Parsey 1, Maureen Schmitter-Edgecombe 1
PMCID: PMC3869883  NIHMSID: NIHMS521590  PMID: 24041037

Abstract

Most neuropsychological assessments include at least one measure that is administered, scored, or interpreted by computers or other technologies. Despite supportive findings for these technology-based assessments, there is resistance in the field of neuropsychology to adopting additional measures that incorporate technology components. This literature review addresses the research findings on technology-based neuropsychological assessments, including computer- and virtual reality-based measures of cognitive and functional abilities. We evaluate the strengths and limitations of each approach and examine the utility of technology-based assessments for obtaining supplemental cognitive and behavioral information that may otherwise go undetected by traditional paper and pencil measures. We argue that the potential of technology use in neuropsychological assessment has not yet been realized, and that continued adoption of new technologies could result in more comprehensive assessment of cognitive dysfunction and, in turn, better-informed diagnoses and treatments. Recommendations for future research are also provided.

Keywords: Neuropsychological assessment, Technology, Computerized Testing, Virtual Reality


Historically, neuropsychological assessment has relied on paper and pencil based tests to assess cognitive abilities, and studies conducted with these tests have generated thousands of scholarly articles promoting their strengths and debating their weaknesses. However, in recent years, increasing numbers of researchers and clinicians have started to use various technologies to improve the efficiency, reliability, and cost-effectiveness of neuropsychological assessment. Rapid advances in technology, including improved computer programming, have allowed many assessment measures to be administered, scored, or interpreted without the direct interaction of a clinician.

The recent surge in technological developments has also affected the way that individuals carry out their everyday activities. In contrast to current older adult cohorts, children and teenagers born in the new millennium have never known a day without technology, such as the internet, cellular phones, and laptop computers. This increase in technology use in recent generations has produced an apparent cohort effect in the comfort and ease with which individuals use electronics. In fact, recent studies have found that cohorts with greater exposure to computers perform better on computer-based assessments than those with less computer experience (Tun & Lachman, 2010), suggesting positive effects of familiarity with technology on computer-based testing. In addition, major college and graduate school entrance exams (e.g., SAT or GRE) are computer-administered, and neuropsychologists are occasionally asked to recommend academic accommodations for these students. Thus, having neuropsychological examinations that are computer-based would provide the clinician with added information about computerized testing abilities. Given that this pattern of technology use is likely to continue increasing in coming decades (Riva, 2003), it is important to consider whether neuropsychological assessment should adapt to technological developments so as to keep pace with the experiences of clients (e.g., exposure to and comfort with computers).

Questions and Aims of this Review

While other reviews have assessed the psychometric properties of computer-based assessments (e.g., Wild, Howieson, Webbe, Seelye, & Kaye, 2008), this paper focuses on the utility of technology use in the field of neuropsychology and future directions for research. The following questions are addressed:

  1. What are the strengths and weaknesses of computer- and virtual reality-based neuropsychological assessment of cognition and everyday functioning?

  2. How can technology-based measures add value to neuropsychological assessment data?

  3. What are the major factors to be considered when developing or implementing technology-based assessments?

  4. What are future clinical and research directions within the field of neuropsychological assessment related to technology use?

Literature Search Procedure

PsycINFO, PubMed, and IEEE Xplore databases were searched between July 1, 2012 and October 10, 2012 for this literature review. Searches combined terms for cognition and/or a specific neurological disorder with terms for the assessment modality (i.e., computer-based, virtual reality), and the terms varied by modality. Search terms for each modality are provided in Table 1.

Table 1.

Search Terms for Literature Review: PsycInfo, PubMed, IEEE Xplore

Modality / Sub-Category | Search Terms
Computer-Based Assessment | (cognit* OR memory OR executive OR attention OR processing speed OR iadl OR adl OR daily living OR daily skills OR everyday function*) AND (comput* OR technolog*)
Virtual Reality | (cognit* OR memory OR executive OR attention OR processing speed OR iadl OR adl OR daily living OR daily skills OR everyday function* OR gait* OR fall* risk OR fall detect*) AND (virtual* OR three-dimension* OR 3D OR immers*)

Inclusion Criteria

Studies were included based on the following criteria: participants of all ages, with or without cognitive impairment or neurological disease (e.g., dementia, Parkinson’s disease, brain injury). Studies were compared based on sample demographics; for example, studies with adult participants were compared only to other studies using adults (not children). Studies were excluded if normative participants demonstrated severe psychopathology (e.g., schizophrenia, bipolar disorder) and/or severe intellectual, physical, or neurodevelopmental disability (e.g., mental retardation, cerebral palsy). Studies that used interventions, such as cognitive rehabilitation or pharmacological regimens, were included only when baseline measures were provided. Technologies such as robots/androids or other physical assistive devices (e.g., intelligent wheelchairs or robotic prostheses) were excluded. Book chapters and dissertations were included when appropriate, such as state-of-the-field overviews or reviews of current research. Publication date was not a consideration for exclusion. Industry-sponsored studies of technology were excluded due to potential conflicts of interest. Of the 167 articles initially found in the literature search, 108 were included in this review: 11 studies of individual computer-based neuropsychological tests (see Table 2), 80 studies using 6 different computerized neuropsychological test batteries (see Table 3), and 17 studies evaluating 8 different virtual reality-based assessments (see Table 4).

Table 2.

Studies of Individual Computer-Based Neuropsychological Tests

Authors | Participants | Assessments | Administration Modalities | Additional Variables Assessed | Findings & Advantages | Disadvantages & Author Critiques
Guevara et al. (2009) | None; overview of test protocol for computer administration | HANOIPC3 (computer version of Tower of Hanoi) | Computer | — | Customized modification of rules, on-line feedback, time-stamped latencies for moves | Only one level of difficulty (but can be manipulated by administrator)
Mataix-Cols & Bartres-Faz (2002) | 43 healthy young adults (mean age 22.5; no range reported) | Tower of Hanoi | Manual and Computer | Gender | No differences between administration modalities; computer version may result in fewer scoring errors on a complex test | Participants were university students with computer experience; age and computer use effects should be addressed
Noyes & Garland (2003) | 3 studies: 39, 76, and 56 healthy young adults (age 18-23.58) | Tower of Hanoi | Manual, Computer, and Mental Representation | — | Computer version revealed better overall scores, more moves needed, and longer time for completion | Computer may reduce participant effort and increase trial-and-error
Williams & Noyes (2007) | 60 healthy younger adults (age 20.08-35.58) | Tower of Hanoi | Manual, Computer, and Mental Representation | — | No significant differences between administration modalities; computer version may reduce load on working memory | —
Salnaitis et al. (2011) | 83 adults with low, moderate, and high levels of psychopathic tendencies (age 18-54) | Tower of Hanoi | Manual and Computer | Psychopathy, impulse response styles, gender | Impulsive response style was associated with poorer performance on computer but not manual version | Need for additional norms for computerized version, rather than comparing results to manual administration norms
Tien et al. (1996) | 33 adults with psychiatric disorders (age 17-66) | WCST | Manual and Computer | — | More total errors and non-perseverative errors on computer version; no differences on perseverative errors/responses or categories; computer version allows reliable scoring and administration, and concentration on behavioral observations during the task | —
Feldstein et al. (1999) | 22 healthy young adults in each of 5 conditions (condition mean ages 24.88-29.40; overall mean age 28.64) | WCST | Manual and 4 Computer versions | — | Normative differences between manual and computer versions | Norms specific to computer version are needed
Elkind et al. (2001) | 63 healthy adults (age 18-74) | WCST; Look for a Match (LFAM; virtual reality WCST) | Computer, Virtual Reality | — | VR LFAM was more difficult but reportedly more enjoyable than computer WCST; VR may increase interest and effort, suggesting more accurate findings | —
Steinmetz et al. (2010) | 100 healthy adults (age 18-58) | WCST | Manual and Computer | — | Computer version produced larger variances for total errors, perseverative errors, and failure to maintain set | Computer and manual versions cannot be used interchangeably with the same norms
Edwards et al. (1996) | 3 studies: 20, 27, and 60 healthy adults (age 18-41) | Stroop | Manual and Computer | Caffeine | Faster responses over trials in computer version | Computer and manual versions may not be analogous and require separate norms
Parsons et al. (2011) | 20 healthy young adults (age not reported) | Stroop | Manual, Computer (ANAM), and Virtual Reality | Low- and high-threat stimuli in VR environment | VR task more sensitive to reaction time and inhibition measures; may be useful to test executive abilities and interference from external stimuli | —

Note. ANAM = Automated Neuropsychological Assessment Metrics; WCST = Wisconsin Card Sorting Test; VR = Virtual Reality.

Table 3.

Studies of Computer-Based Neuropsychological Test Batteries

Battery Name | Normal Aging | Mild Cognitive Impairment | Alzheimer’s Disease & Dementia | Parkinson’s Disease | Head Injury | Other
ANAM | Repeated assessments in military populations are necessary to establish stable performance (Eonta et al., 2011). Normative data for assessment of TBI in military population (Vincent et al., 2012). | — | 100% classification compared to healthy older adults (Levinson et al., 2005). | Variable subtest reliability but high global reliability for cognitive changes in PD (Hawkins et al., 2012). Differentiation between impaired PD and controls (Kane et al., 2007). | Strong construct validity demonstrated in TBI population (Bleiberg et al., 2000). Erratic and inconsistent stability in performance within- and across-days compared to controls (Bleiberg et al., 1997). Sensitive to improvements in cognition post-concussion (Daniel et al., 1999). 91.6% classification rate for TBI versus controls (Levinson & Reeves, 1997). Poorer performance in TBI compared to controls (Bryan & Hernandez, 2012). No relationship between number of lifetime TBIs and ANAM performance (Ivins et al., 2009). | —
CANTAB | Overall performance declined with age and IQ (Robbins et al., 1994). Practice effects present in healthy elderly (Lowe & Rabbitt, 1998). Paired associates learning and spatial recognition subtests distinguished pre-clinical memory deficits from healthy controls (de Jager & Budge, 2005). Sensitive to executive function deficits in normal aging (Robbins et al., 1998). | Paired associate learning subtest was sensitive to amnestic MCI (Junkkila et al., 2012; Egerhazi et al., 2007). | Longitudinal assessment with CANTAB differentiated early stages of dementia from controls (Fowler et al., 2002). Paired associate learning subtest was sensitive to Alzheimer’s disease (Egerhazi et al., 2007; O’Connell et al., 2004; Sahgal et al., 1991). Visual-spatial and visual memory impairments could distinguish stages of AD (Sahakian et al., 1988; Sahgal et al., 1991). Memory impairment without attentional deficits in AD (Sahakian et al., 1990). | Tower subtest was sensitive to cognitive deficits in PD (McKinlay et al., 2009), including planning time and spatial working memory (Morris et al., 1988). Visual search and set-shifting deficits in PD patients (Downes et al., 1989). | — | ADHD: Significantly more impairment in ADHD than controls (Fried et al., 2012; Gau & Shang, 2010). Used widely to assess executive functioning skills in ADHD and effects of medication on symptoms (see review in Chamberlain et al., 2011).
CAMCOG | Assessments with the CAMCOG were not as sensitive to mild memory problems compared to CANTAB (de Jager & Budge, 2005). Useful screening tool for populations with low education (Aprahamian et al., 2011a). | Language, Memory, Praxis, and Calculation subscales were sensitive to MCI (Aprahamian et al., 2011b). Reduced version of CAMCOG was sensitive to MCI (Aprahamian et al., 2011b). | Sensitive as an initial screening of cognitive deficits in Alzheimer’s disease (Heinik et al., 2004). Orientation and Memory subscales were sensitive to AD deficits and conversion from MCI to AD (Conde-Sala et al., 2012). Total scores sensitive to conversion from MCI to dementia (Gallagher et al., 2010). Distinguished MCI from dementia (Nunes et al., 2008). Decline in performance over 13-month period (Athey & Walker, 2006). | Sensitive to previously undetected cognitive deficits in PD (Athey et al., 2005; Hobson & Meara, 1999). | — | Stroke: CAMCOG-R is comparable to CAMCOG in detection of cognitive deficits in stroke patients (te Winkel-Witlox et al., 2008; Leeds et al., 2001). CAMCOG-R demonstrated 83% sensitivity to post-stroke dementia (de Koning et al., 2005).
CNS Vital Signs | Test-retest reliability and normative data for decades of normal aging (Gualtieri & Johnson, 2005). | Distinguished MCI from normal aging (Gualtieri & Johnson, 2005). | Distinguished dementia from mild cognitive impairment on subscales of memory, processing speed, and cognitive flexibility (Gualtieri & Johnson, 2005). | — | Subscales of psychomotor speed and cognitive flexibility were most sensitive to severity level of TBI (Gualtieri & Johnson, 2008). Good discriminant validity for post-concussion and severe TBI (Gualtieri & Johnson, 2006). | ADHD: Good discriminant validity for both treated and untreated ADHD symptoms (Gualtieri & Johnson, 2006).
CogState | Effective test-retest reliability for monitoring mild decline in normal aging over 12-month period (Darby et al., 2012). Stability in performance over 12 months (Fredrickson, 2010). Practice effects were most significant between 1st and 2nd administrations in one day (Collie et al., 2003). Practice effects were present at one-week follow-up but were significantly reduced after one month (Falleti et al., 2006). Factor analysis for performance stability on GMLT subscale (Pietrzak et al., 2008). Decline in performance associated with amount of cerebral amyloid in non-demented older adults (Darby et al., 2011). | Continuous Paired Associate Learning was sensitive to amnestic MCI (Harel et al., 2011). Working and delayed memory were most impaired for MCI (Lim et al., 2012). Continuous learning task performance significantly declined for amnestic MCI participants over 12 months (Maruff et al., 2004). Multiple administrations in one day reliably distinguished MCI from healthy older adults (Darby et al., 2002). Distinguished from healthy older adults by 3rd and 4th administration in 3-hour span (Darby et al., 2011). | Minimal practice effects (Hammers et al., 2011). Inability to distinguish different dementia types (Hammers et al., 2012). | — | Development of criterion validity measures for mild head injury (Maruff et al., 2009). | ADHD: Differentiated from healthy controls (Yamashita et al., 2011). Stroke: Detection and Identification task performance 2 weeks after stroke was related to 3-month follow-up testing (Cumming et al., 2012).
Mindstreams | Participant and administrator feedback for older adult samples (Fillit et al., 2008). | MCI performed more poorly than older adult controls (Doniger et al., 2005, 2006, 2009; Dwolatzky et al., 2003, 2004; Osher et al., 2011). | Distinguished from healthy older adults (Dwolatzky et al., 2010) and MCI (Doniger, 2006; Dwolatzky et al., 2004). Mild dementia and MCI discrimination (Doniger et al., 2005). Distinguished by severity on CDR assessment (Dwolatzky et al., 2010). | — | — | ADHD: Increased problems with sustained attention in ADHD relative to controls (Schweiger et al., 2007).

Note. ANAM = Automated Neuropsychological Assessment Metrics; CANTAB = Cambridge Neuropsychological Test Automated Battery; CAMCOG = Cambridge Cognitive Examination; CAMCOG-R = Cambridge Cognitive Examination – Revised; CDR = Clinical Dementia Rating; MCI = Mild Cognitive Impairment; ADHD = Attention-Deficit/Hyperactivity Disorder

Table 4.

Virtual Reality Based Neuropsychological Assessments

Assessment | VR Task Environment & Level of Immersion | Cognitive Variables Assessed | Everyday Functioning Variables Assessed | Clinical Populations Assessed
“Look for a Match,” based on WCST (Elkind et al., 2001) | Beach scene; 3D computer simulation with stereographic eyewear | Executive functioning | Interference from external stimuli | Healthy young adults (Elkind et al., 2001)
Virtual Reality Stroop Task (Parsons et al., 2011) | Military Humvee; 3D computer simulation with head-mounted display and steering wheel | Executive functioning, reaction/response time | Cognitive interference from driving and physiological response to high/low-threat conditions | TBI and healthy controls (Parsons et al., 2011)
Virtual Reality Paced Serial Assessment Test (Parsons et al., 2012) | Middle Eastern city and military “check point”; 3D computer simulation with head-mounted display | Processing speed, attention, mathematical processing, reaction time | Cognitive interference in high- and low-stress scenarios, and environmental distractions | TBI and healthy controls (Parsons et al., 2012)
Virtual Reality Cognitive Performance Assessment Test (Parsons et al., 2008) | City scene; 3D computer simulation with head-mounted display | Verbal and visual learning and memory | Cognitive interference by environmental distractions | Healthy younger adults, age 21-36 (Parsons & Rizzo, 2008)
Virtual Classroom (Rizzo et al., 2006) | Elementary classroom; 3D computer simulation with head-mounted display | Attention, language (Boston Naming Test), executive functioning (Stroop) | Classroom demands of children; cognitive interference and distractibility (head tracking) from environmental stimuli | ADHD and healthy controls (Adams et al., 2009; Parsons et al., 2007)
Virtual Errands Test (McGeorge et al., 2001) | Supermarket; 3D computer simulation | Memory, attention, executive functioning, planning, organization, multi-tasking | Shopping and planning; cognitive interference and distractibility from environmental stimuli | Autism spectrum (Rajendran et al., 2011); healthy young adults (Law et al., 2006)
Virtual Multiple Errands Test (Rand et al., 2009) | Food market and supermarket; 3D computer simulation | Memory, attention, visuospatial skills, executive functioning, planning, organization, multi-tasking | Shopping, planning, driving; cognitive interference and distractibility from environmental stimuli | Stroke (Rand et al., 2009; Raspelli et al., 2011); Parkinson’s disease (Albani et al., 2010, 2011)
Virtual kitchen (Zhang et al., 2001) | Kitchen; 3D computer simulation | IQ (WAIS-R), attention, memory, executive functioning | Meal preparation: accuracy, sequencing, total time, assistance needed | TBI (Zhang et al., 2001, 2003)
Note. ADHD = Attention-Deficit/Hyperactivity Disorder; TBI = Traumatic Brain Injury; WAIS-R = Wechsler Adult Intelligence Scale – Revised.

Cognitive Assessment using Computers

Computer-based assessment consists of a wide range of approaches that use a computer interface to administer, score, or interpret cognitive tests. Bauer, Iverson, Cernich, Binder, Ruff, and Naugle (2012) define computer-based assessments as “any instrument that utilizes a computer, digital tablet, handheld device, or other digital interface instead of a human examiner to administer, score, or interpret tests of brain function and related factors relevant to questions of neurologic health and illness” (p. 2). This section briefly summarizes the development of computerized assessment and the adaptation of paper and pencil measures into computer-administered tests (see Noyes & Garland, 2008 for a full review of such comparisons). We also discuss the development of neuropsychological test batteries that were specifically designed for computer administration, highlighting the use of these tests with clinical populations (e.g., dementia, Parkinson’s disease, TBI).

Computer-Based Assessment

Although computer-based assessment is often discussed as a newer approach to cognitive testing, the military and sport psychology fields have used computerized assessments since the 1980s to quickly and efficiently evaluate cognitive functioning. Within the United States military, computerized batteries such as the Automated Neuropsychological Assessment Metrics (ANAM) have been used for both pre- and post-deployment assessment. This data collection has produced an excellent database for the development of specialized measures, such as the ANAM TBI battery (Vincent, Roebuck-Spencer, Gilliland, & Schlegel, 2012). Reliance on computer-based assessment has streamlined administration and data collection with thousands of military personnel within clinics and abroad (for a comprehensive review of ANAM development, see Reeves, Winter, Bleiberg, & Kane, 2007). Similarly, clinical sports psychologists have used computerized batteries, such as ImPACT or CogSport (e.g., Allen & Gfeller, 2011; Broglio, Ferrara, Macciocchi, Baumgartner, & Elliott, 2007; Maerlender et al., 2010), to assess for mild traumatic brain injury (e.g., Lovell et al., 2003; Macciocchi, Barth, Alves, Rimel, & Jane, 1996) and to inform return-to-play decisions (e.g., Lovell, 2002; Lovell, Collins, & Bradley, 2004; Schatz, Pardini, Lovell, Collins, & Podell, 2006). Within the fields of military and sport psychology assessment, there is general acceptance of technology-driven assessment as an efficient, well-standardized, and readily accessible approach to testing (Lovell, 2002), and such assessments have become the rule rather than the exception.

Computer-based assessment has also infiltrated the field of neuropsychology beyond the military and sports concussion settings. Neuropsychologists who work with cognitively impaired populations likely use at least one assessment measure that relies on a computer for administration, scoring, and/or interpretation. As seen in Table 2, several studies have investigated the psychometric properties of computerized versions of paper and pencil neuropsychological tasks. These studies have concentrated on measures of executive skills and higher-order functioning, including the Tower of Hanoi (5 studies), WCST (4 studies), and Stroop (2 studies). In general, these studies found similar, if not improved, reliability (e.g., fewer scoring errors) for computer-based administration compared to the original tests (e.g., Mataix-Cols & Bartres-Faz, 2002). The studies also showed that normative data for traditional versions of tests cannot simply be applied to computerized versions, as performances on traditional and computer-based administrations are not directly comparable (Steinmetz, Brunner, Loarer, & Houssemand, 2010). For example, computerized versions of the Tower of Hanoi test were found to increase trial-and-error approaches (Noyes & Garland, 2003) and reduce load on working memory (Williams & Noyes, 2007). Embedded measures of response time have also been used to assess constructs of processing speed (e.g., Edwards et al., 1996). However, aside from micro-level measures of processing speed and/or response latency (e.g., time-stamped latencies for moves on the Tower of Hanoi; Guevara et al., 2009), computer-based administrations of paper and pencil neuropsychological tests have not capitalized on the opportunity to obtain additional diagnostically useful cognitive and behavioral information. These findings represent a significant gap in the application of technology to improve neuropsychological assessment.
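
To make the idea of time-stamped move latencies concrete, the following is a minimal sketch of how a computerized tower task might log the deliberation time before each move. The MoveLogger class and its event format are hypothetical illustrations, not the HANOIPC3’s actual implementation.

```python
import time

class MoveLogger:
    """Hypothetical per-move latency logger for a computerized tower task."""

    def __init__(self):
        self._last = time.monotonic()   # monotonic clock: unaffected by system clock changes
        self.latencies = []             # one entry per move: the move and seconds of deliberation

    def record_move(self, move):
        now = time.monotonic()
        self.latencies.append({"move": move, "latency_s": now - self._last})
        self._last = now

# Simulated session: three moves of a disk between pegs A, B, C.
# In a real task these calls would fire on user input events.
logger = MoveLogger()
for move in [("A", "C"), ("A", "B"), ("C", "B")]:
    logger.record_move(move)
print(logger.latencies)
```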

Attempts to address this gap have led researchers to design computer-administered tests to improve measurement constructs. This process often requires internal measures designed to capture underlying cognitive performances. For example, time sensitive parameters, such as reaction time (e.g., Robbins et al., 1994) or inspection time (e.g., Kush, Spring, & Barkand, 2012), can be measured more accurately using computers. In addition, many computerized tests use algorithms for administration purposes. For instance, “adaptive” testing approaches consist of algorithms that select future test items based on prior performance. This approach allows for more precise assessment of cognitive limits (e.g., floor or ceiling) while also reducing test administration time by discontinuing the test after the minimum criteria have been reached. Further, these algorithms can be used to improve sensitivity to a particular disorder by targeting hallmark symptoms (e.g., executive functioning deficits in Parkinson’s disease; Hanna-Pladdy et al., 2010), which may assist neuropsychologists in differential diagnoses.
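
As a concrete illustration of such an adaptive rule, the sketch below implements a simple staircase administration with a discontinue criterion. The span-length task, starting values, and discontinue rule are illustrative assumptions, not the algorithm of any battery reviewed here.

```python
def adaptive_span_test(respond, start_len=3, max_len=9, max_failures=2):
    """Minimal sketch of an adaptive (staircase) administration rule:
    item difficulty (here, span length) increases after a pass, and the
    test discontinues after max_failures consecutive failures.
    respond(length) -> bool stands in for the examinee's response."""
    length, failures, best = start_len, 0, 0
    while length <= max_len and failures < max_failures:
        if respond(length):          # correct: raise difficulty, reset failure count
            best = max(best, length)
            length += 1
            failures = 0
        else:                        # incorrect: repeat the difficulty, count the failure
            failures += 1
    return best                      # estimated capacity (e.g., span ceiling)

# Example with a simulated examinee whose true span is 6:
print(adaptive_span_test(lambda n: n <= 6))  # -> 6
```

The discontinue rule is what shortens administration time: testing stops as soon as the examinee’s ceiling is established rather than running a fixed item set.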

To target specific disorders, and thus improve sensitivity to detect characteristic deficits, batteries of computerized tests have also been developed for both clinical and research purposes. Table 3 shows data from studies that examined the sensitivity and specificity of computer-administered test batteries for the detection of cognitive disorders, including mild cognitive impairment, dementia, Parkinson’s disease, and traumatic brain injury (TBI). For example, the CogState and CNS Vital Signs have differentiated healthy controls from individuals with TBI (e.g., Gualtieri & Johnson, 2008) and dementia (e.g., Darby et al., 2011; Gualtieri & Johnson, 2005; Levinson, Reeves, Watson, & Harrison, 2005; Lim et al., 2012). In addition, the CAMCOG and CANTAB batteries have successfully distinguished controls from clinical populations, including MCI (Junkkila, Oja, Laine, & Karrasch, 2012), Alzheimer’s disease (Egerhazi, Berecz, Bartok, & Degrell, 2007), and Parkinson’s disease (McKinlay et al., 2009). Fewer studies have attempted to differentiate between neurological disorders, for example, Alzheimer’s disease versus Huntington’s disease (Rosser & Hodges, 1994) or versus Parkinson’s disease (Burton, McKeith, Burn, Williams, & O’Brien, 2004).

Cognitive Assessment Using Virtual Reality

Virtual reality encompasses a wide variety of technologies and devices to assess manipulation of objects (including the self) in a virtual space and time. For applications in psychology, these environments are usually three-dimensional, allowing for a 360-degree view of the virtual landscape, and allow for interaction with objects in the environment (Kalawsky, 1996). Virtual environments can range from basic rooms for navigation tasks to detailed spaces (e.g., office, classroom) that assess more complex activities. Furthermore, virtual reality technologies differ in the type of immersion used, including (a) non-immersive three-dimensional computer screens with a mouse, joystick, or sensor-based gloves, (b) semi-immersive large screen displays using shutter glasses, and (c) fully immersive environments with a “green screen” and head-mounted display (Costello, 1997; Kalawsky, 1996). For the purpose of this review, we included all types of virtual reality technologies that have been used to assess cognitive and everyday performances. This section provides a discussion of the initial developments of virtual reality for cognitive assessment and the more recent applications of these technologies to assess everyday functioning (e.g., grocery shopping). We also examine the effects of environmental distractors on cognitive and IADL performance. In particular, we highlight the use of virtual reality as a tool to obtain behavioral and cognitive information beyond what is collected in traditional cognitive assessments.

Development of Virtual Reality Assessments

Similar to computer-based assessments, virtual reality-based cognitive assessment began by integrating computerized versions of traditional paper and pencil tests into a virtual environment. Theoretically, this integration allowed clinicians to observe the participant’s approach to task completion in a simulated environment that may better represent everyday life. Table 4 contains findings from virtual reality-based neuropsychological assessments, including studies evaluating their sensitivity to various cognitive disorders (e.g., mild cognitive impairment, dementia, TBI) and comparisons with traditional paper and pencil tasks. For example, one research group developed a three-dimensional test for a virtual reality platform that mimics the Wisconsin Card Sorting Test (“Look for a Match”; Elkind, Rubin, Rosenthal, Skoff, & Prather, 2001). When compared to the manual administration of the WCST, participants performed more poorly on the virtual reality-based version; however, they reported enjoying the virtual reality test more than the traditional WCST (Elkind et al., 2001). The authors concluded that the distractions present during the virtual reality administration may have negatively affected performance; however, other explanations, such as task difficulty or novelty, should be investigated. Thus, virtual reality allows external stimuli to be part of the assessment battery while still measuring targeted cognitive abilities. Researchers have suggested that these distractions may improve the ecological validity of tasks because the test better represents cognitive performance with real-world interruptions (Rizzo et al., 2006; Parsons et al., 2011). Consistent with computerized versions of cognitive assessments, virtual reality measures require demonstration of their validity and reliability, as well as unique normative data specific to the individual test (Schatz & Browndyke, 2002).

As seen in Table 4, virtual reality tasks have been used to assess the cognitive variables of attention, memory, and executive functioning. However, because the virtual reality approach could change the fundamental nature of a task, one should not assume that the same cognitive constructs are being measured by similar traditional and virtual reality-based measures. Studies in Table 4 highlight the utility of virtual reality scenarios for expanding on traditional tests, including multi-tasking components and/or simulation of higher-order tasks that demand multiple cognitive domains (e.g., working memory, attention, and visual spatial skills while learning in a classroom; Rizzo et al., 2006). Table 4 also shows that virtual reality tests have been developed to assess multi-tasking in a simulated virtual environment, including running errands and completing kitchen tasks. These studies have targeted clinical populations for stroke (Rand, Basha-Abu, Weiss, & Katz, 2009), Parkinson’s disease (Albani et al., 2010), and TBI (Zhang et al., 2003) to identify cognitive deficits associated with completing everyday tasks, such as purchasing groceries or making meals.

Manipulation of External Stimuli in Virtual Reality Assessments

One of the important questions often asked in neuropsychology is whether cognitive performances are affected by external stimuli (e.g., everyday stressors or distractions; Pugnetti et al., 1995; Rizzo & Buckwalter, 1997). More specifically, this question considers whether neuropsychological testing conducted in a plain room with limited distractions is an accurate portrayal of “real world” abilities. Adding controlled distractions to testing could improve ecological validity. Emulating real-world combat scenarios, Parsons and colleagues (2011) recently developed a virtual reality Stroop task (VRST) for assessment of TBIs without external symptomatology. The VRST consists of a three-dimensional administration of the Stroop task while “immersed” in a virtual Humvee in either low-threat or high-threat scenarios (defined by intensity of environmental threat). The initial study revealed that when individuals with TBIs completed the VRST in the military simulation, significant group differences were present on the Stroop interference measure in both the high- and low-threat scenarios; these differences did not reach significance for the paper and pencil or ANAM versions of the Stroop task.

More recently, Parsons and colleagues (2012) conducted a similar study using a virtual reality version of the Paced Auditory/Visual Serial Addition Tests (VR PA/VSAT), staged in a virtual military environment, comparing paper and pencil tests with the virtual reality version of the PASAT over four sessions. In a military cohort, they found that the VRPASAT and VRPVSAT did not correlate with the paper-based PASAT and, relative to the paper version, revealed slower mathematical processing and reaction times for the first two sessions, but not for sessions 3 and 4, which are considered more cognitively challenging (Parsons et al., 2012). Taken together, these findings from virtual reality-based cognitive assessments suggest that including simulated distractions within the testing environment may lead to a better understanding of the influence of external stimuli on cognition in real-world scenarios.

Additional virtual reality assessments have been developed that are not based on traditional paper and pencil tests. Parsons and colleagues (2008a) developed the Virtual Reality Cognitive Performance Assessment Test (VRCPAT) as an alternative approach to a traditional cognitive screening. In a 15-minute battery of virtual reality-based tests (e.g., navigation of a virtual environment and delayed recall for items viewed in the scenario), the VRCPAT yields two total scores – one for learning and one for memory, which have been found to correlate significantly with traditional assessments of memory, attention, processing speed, and executive functioning (Parsons, Silva, Pair, & Rizzo, 2008a). However, poorer performances are evident when external distractions are present in the virtual environment, relative to assessments without any additional distracting stimuli (Parsons et al., 2008a; Parsons et al., 2011). Here we see that measures of learning and memory, which generally rely on components of attention, are affected by environmental distractions. This finding suggests that external stimuli in a virtual environment could be used to gather behavioral data of distractibility and attentional lapses that may otherwise go unmeasured in a traditional neuropsychological evaluation.

Attention deficit research has also explored the utility of controlled distractions in virtual reality-based assessments. Rizzo and colleagues (2006) developed the Virtual Classroom, in which standardized neuropsychological tests are converted into computer-administered versions and then embedded into an interactive, three-dimensional classroom. An assessment in the Virtual Classroom not only provides the total score for the embedded test (e.g., Boston Naming Test, Stroop Color-Word Test), but also gathers information about distractibility in real time (obtained through an algorithmically defined count of head movements away from the stimuli). The administrator can control the amount and types of distractors present in the classroom (e.g., other students talking or moving in their seats, activity outside a classroom window). Preliminary comparisons of the Virtual Classroom with continuous performance tests for ADHD revealed that the Virtual Classroom better classified children with ADHD (Adams, Finn, Moes, Flannery, & Rizzo, 2009). Additional measures of distractibility from the Virtual Classroom (e.g., head movement tracking) also revealed greater inattentive symptoms in children with ADHD relative to controls (Parsons, Bowerly, Buckwalter, & Rizzo, 2007). These real-time distractibility data may provide a more thorough and/or more ecologically valid assessment of cognitive performance for children with ADHD (Parsons et al., 2007). Furthermore, such additional measures could better inform diagnoses as well as treatments for behaviorally-focused disorders, such as ADHD or Conduct Disorder. While supplemental measures of distractibility and inattention are a highlight of virtual reality technology, simulated environments have also been used to assess the relationship between cognitive domains and everyday functioning.
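
A minimal sketch of how such a head-movement count might be computed from tracker data follows. The yaw threshold, sampling rate, and event rule are illustrative assumptions, not the Virtual Classroom’s published algorithm.

```python
def count_off_target_events(yaw_samples_deg, threshold_deg=30.0, min_samples=15):
    """Counts sustained 'head turned away' events from head-tracker yaw samples.
    An event is logged when yaw deviates from the stimulus direction (0 degrees)
    by more than threshold_deg for at least min_samples consecutive frames
    (e.g., 0.5 s at 30 Hz). Illustrative only."""
    events, run = 0, 0
    for yaw in yaw_samples_deg:
        if abs(yaw) > threshold_deg:
            run += 1
            if run == min_samples:    # count each sustained excursion exactly once
                events += 1
        else:
            run = 0                   # back on target: reset the excursion counter
    return events

# Two seconds on target, one second (30 frames at 30 Hz) looking 45 degrees away, two seconds back:
samples = [0.0] * 60 + [45.0] * 30 + [0.0] * 60
print(count_off_target_events(samples))  # -> 1
```

Requiring a sustained excursion (rather than any single off-target frame) is one way to avoid counting ordinary postural jitter as distraction.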

Assessment of Everyday Functioning Using Virtual Reality

The ability to incorporate complex environments into testing scenarios has allowed for measurement of real-world cognitive abilities that are otherwise constrained to abstract tasks in the laboratory. Virtual reality assessments provide a variety of environments, including driving simulations and grocery stores, for assessing cognitive and functional status. Virtual simulations of everyday tasks, such as driving (e.g., Lengenfelder, Schultheis, Al-Shihabi, Mourant, & DeLuca, 2002; Schultheis, Rebimbas, Mourant, & Millis, 2007), shopping (e.g., Kang et al., 2008), and meal preparation (e.g., Zhang et al., 2001; Zhang et al., 2003) provide a safe and controlled environment for assessing functional capacities of clinical populations. Furthermore, because these environments are controlled by the experimenter, distractions or interruptions can be implemented to assess higher functions (e.g., multi-tasking and executive functioning) and target cognitive deficits that characterize specific disorders. Here we discuss the applications of virtual reality to assessing instrumental activities of daily living (IADL), including driving simulation and functional assessment of everyday tasks.

Driving simulation

Numerous studies have used driving simulation to determine the specific cognitive demands of driving navigation. Although this large body of research is beyond the scope of this review, the efforts of researchers in this area bear mention. Virtual reality driving simulators have been used to assess various aspects of driving (e.g., distractibility, reaction time) in many clinical populations, including dementia (e.g., Rizzo, McGehee, Dawson, & Anderson, 2001), multiple sclerosis (e.g., Schultheis, Weisser, Manning, Blasco, & Ang, 2009), brain injury (e.g., Cox et al., 2010; Schultheis et al., 2006; Wald, Liu, Hirsekorn, & Taylar, 2000), and spinal cord injury (Carlozzi, Gade, Rizzo, & Tulsky, 2012). Driving simulators are becoming increasingly realistic, including the use of full-immersion screens and multi-sensory feedback, which has further improved their similarity to a real-world driving experience (e.g., Mayhew et al., 2011; Wang et al., 2010). Although some research supports virtual reality simulation as a substitute for self-report data and as a means of testing underlying cognitive domains necessary for driving, such as sustained attention, problem-solving, and multi-tasking (e.g., Gaspar, Neider, & Kramer, 2013), other findings suggest that driving simulators may not be adequate for determining driving abilities and risk without supplemental information (e.g., Asimakopulos et al., 2012; Lundqvist, Gerdle, & Ronnberg, 2000). Thus, driving simulators provide an opportunity to obtain basic information about driving abilities in a safe environment; however, they may be best used in combination with other assessments of cognitive and physical functioning for a more comprehensive evaluation of the skills needed for driving (Lew et al., 2005; Patomella, Tham, & Kottorp, 2006). For additional articles about driving simulation, see Calhoun and Pearlson (2012) or Schultheis et al. (2007).

Everyday functioning

In addition to general driving abilities, virtual reality has been used increasingly in the assessment of everyday tasks. For example, the Multiple Errands Test (MET; Alderman, Burgess, Knight, & Henman, 2003; Shallice & Burgess, 1991), a complex test of executive abilities that was developed for the assessment of frontal lobe lesions and carried out in a shopping mall, has been adapted for administration in virtual environments. These adaptations include the Virtual Errands Test (VET; McGeorge et al., 2001), which assesses planning abilities associated with multi-tasking, and the Virtual Multiple Errands Test (VMET; Rand et al., 2009), which measures strategy and interweaving of task information in addition to general memory and executive skills. In the VMET, participants navigate a virtual supermarket to obtain items from a predetermined shopping list. In addition, they must obtain information (e.g., the time the store closes) and obey rules designed to promote planning and efficiency (e.g., they cannot go to the same aisle twice). Studies have found that the VMET correlates strongly with other measures of executive function and attention (e.g., Dysexecutive Questionnaire, Test of Everyday Attention). The VMET has also distinguished among different age groups of cognitively healthy adults and has differentiated participants with stroke (Rand et al., 2009; Raspelli et al., 2010) or Parkinson’s disease (Albani et al., 2010) from healthy controls. These findings suggest the utility of evaluating performance of a complex task like shopping in a virtual environment, which is both safe and controlled.
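
To illustrate how performance on such a task might be scored against its rules (collect listed items; never revisit an aisle), here is a minimal sketch. The (aisle, item) event format and the scoring names are hypothetical, not the VMET’s actual data structure or scoring system.

```python
def score_errands(visit_log, shopping_list):
    """Scores a hypothetical errands run against two rules like those
    described above: collect the listed items and never re-enter an aisle.
    visit_log is an ordered list of (aisle, item_or_None) events."""
    seen_aisles, collected = set(), set()
    reentries, prev = 0, None
    for aisle, item in visit_log:
        if aisle != prev and aisle in seen_aisles:
            reentries += 1            # returned to an aisle: a planning/efficiency error
        seen_aisles.add(aisle)
        prev = aisle
        if item in shopping_list:
            collected.add(item)
    return {"items_found": len(collected),
            "items_missed": len(shopping_list - collected),
            "aisle_reentries": reentries}

print(score_errands([("produce", "apples"), ("dairy", "milk"), ("produce", None)],
                    {"apples", "milk", "bread"}))
# -> {'items_found': 2, 'items_missed': 1, 'aisle_reentries': 1}
```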

Research has also evaluated the utility of a virtual kitchen to assess meal preparation skills (e.g., preparing soup and a sandwich). For example, Zhang and colleagues (2001) developed a computer-simulated kitchen to measure deficits in problem solving, sequencing, and speeded processing. Their virtual reality meal preparation task has distinguished individuals with TBI from healthy controls (Zhang et al., 2001), with satisfactory construct validity and test-retest reliability (Zhang et al., 2003). They also found that performance on the task correlated strongly with independent evaluation of IADLs by occupational therapists (Zhang et al., 2003). These findings demonstrate some of the possibilities for virtual reality technology in clinical settings and the utility of virtual reality for gathering information that would otherwise go unnoticed in a traditional neuropsychological assessment. Although the previous studies indicate significant progress, challenges remain.

Computer-Based and Virtual Reality Assessments: Conclusions and Future Directions

The use of technology in neuropsychological assessment is continuing to expand and improve upon traditional approaches of the past. However, along with advantages of using technology-based assessment come challenges. Although some challenges are common across all types of neuropsychological assessment (e.g., establishing psychometric properties and adequate normative data), other limitations are unique to assessments driven by technology. Below we highlight strengths and weaknesses of technology in neuropsychological assessment and consider directions for future research.

What are the strengths and weaknesses of computer- and virtual reality-based neuropsychological assessment of cognition and everyday functioning?

Strengths

Research suggests that current computerized assessments are comparable in reliability and validity to other neuropsychological measures when used with appropriate normative data (e.g., Mataix-Cols & Bartres-Faz, 2002; Schatz & Browndyke, 2002). Computerized tests have the advantage of algorithmic design, which can be used to tailor testing for specific populations (e.g., Hanna-Pladdy et al., 2010; Wouters et al., 2011). Computerized tests have also been shown to provide increased ease and standardization of administration (Fillit, Simon, Doniger, & Cummings, 2008), reduction of errors during scoring and interpretation (Koski et al., 2011), and readily accessible assessment without significant time devoted to preparation of materials (Koski et al., 2011). Notably, several computerized batteries are more comprehensive and sensitive to mild cognitive deficits (e.g., CANTAB, Mindstreams), whereas other batteries appear better suited for brief cognitive screenings to determine whether a full evaluation is necessary (e.g., ANAM, CogState, CNS Vital Signs). Findings regarding test-retest reliability, sensitivity, and specificity vary widely among the batteries, suggesting that standardization of approaches and improvements should be considered for future versions (see Wild et al., 2008).

Clinical application of virtual reality has provided new opportunities for assessment, including customization for target populations, specific cognitive domains, and unique settings. Virtual reality scenarios allow for measurement of simulated everyday tasks in a safe and controlled environment (Rizzo et al., 2006). Furthermore, the opportunity for researchers and clinicians to assess the influence of environmental stimuli (e.g., distractions, interruptions) on cognitive performance may provide a more ecologically valid assessment of everyday skills (Parsons et al., 2007). Thus, virtual reality platforms have promoted the collection of additional cognitive and behavioral information about neuropsychological testing performance beyond the traditional data obtained through paper and pencil assessments.

Weaknesses and Limitations

Disadvantages of technology-based testing are not to be dismissed. These include general technological issues, such as variations in computer hardware, and the currently limited information on psychometric and normative properties for different clinical populations (Bauer et al., 2012). Addressing these issues will require full psychometric evaluations and adjustments to future versions of virtual reality assessments in order to improve internal and ecological validity, test-retest reliability, and utility for various populations (Bauer et al., 2012; Retzlaff & Gibertini, 2000). These limitations are consistent with many of the initial challenges posed when developing any new assessment, and thus are not beyond the capacity of researchers to overcome. In addition, performance on technology-based assessments may be influenced by knowledge of computers or other technology (Fazeli, Ross, Vance, & Ball, 2012), suggesting that cohort effects based on comfort with using new technology may need to be investigated.

Limitations unique to virtual reality are dominated by physiological concerns (e.g., motion sickness). There is currently no research consensus on the influence of virtual motion sickness on cognitive performance in virtual environments. Evidence suggests that older adults can tolerate motion sickness in virtual environments (Flynn et al., 2003; Mullen, Weaver, Reindeau, Morrison, & Bedard, 2010), and most virtual reality studies administer motion sickness measures after the assessment is completed. However, the impact of sub-threshold motion sickness on cognitive performance has not been investigated and warrants further study (e.g., Nichols & Patel, 2002).

The extent to which virtual reality simulations reflect the real experiences of individuals varies widely among users. For some individuals, the novelty of wearing a head-mounted viewer may be enough to alter their behaviors, whereas others are more motivated because they find the technology interesting or even enjoyable (Gaggioli, 2012). Evidence for the ecological validity of virtual reality has generally been limited to comparisons between traditional and virtual reality assessments, which have demonstrated adequate construct validity (e.g., Matheis et al., 2007). Similarly, evidence for effects of computer experience has produced mixed results (e.g., Fazeli et al., 2012) and requires further investigation. Consistent with the disadvantages of computer-based systems, virtual reality assessments are subject to greater individual variability, which has been associated with differences in computer experience, as well as learning and adaptation to the virtual environment (see Chen, 2000, for a thorough discussion of individual differences in virtual reality). In addition, virtual reality assessments are relatively high-cost technologies that require regular maintenance; cost-efficiency should be considered relative to the costs of paper-and-pencil measures. As with any computer-based technology, data gathered from virtual reality tasks need to be securely stored to preserve patient privacy. While the issue of privacy and data security is important, it is regularly encountered in clinical settings, and with proper security measures in place it should not be considered a weakness of virtual reality technology.

How can technology-based measures add value to neuropsychological assessment data?

Advances in technology offer the opportunity to gain valuable information from assessment that is not obtained through paper and pencil measures, such as algorithmically defined approaches to a particular test (e.g., organization, planning), evaluation of pauses, perseverations, and domain-specific errors, and/or highly precise response-time measurements (e.g., Sahakian & Owen, 1992). By embracing the utility of technology to provide additional measures, neuropsychological assessment could expand the ways in which cognitive deficits are evaluated and ultimately treated. In addition, using technology could improve the psychometric properties of neuropsychological assessments, including better inter-rater reliability in scoring procedures, more consistent administration procedures across clinics, and ultimately more confidence in the final reporting of findings. Further, the infiltration of technology into everyday life has become so significant that differences in familiarity with computers can affect performance on standardized cognitive tasks (e.g., Tun & Lachman, 2010). If technology use continues to increase, it will be imperative that neuropsychological assessments incorporate more tasks that reflect this adaptation to technology in everyday life (Bauer et al., 2012).

Virtual reality technologies have provided opportunities for neuropsychologists to evaluate clients’ cognitive and everyday abilities within simulated environments, which have supplied valuable information about the effect of environment on cognitive performance. For example, the integration of computer-based tests of memory and attention in a virtual classroom setting revealed that increased numbers of environmental distractors affected individuals with ADHD significantly more than controls (e.g., Adams et al., 2009). In addition, findings from virtual reality research have provided evidence for the ability of technology to acquire additional information about participant performance, such as head tracking as a measure of attention and distractibility (Parsons et al., 2007; Parsons et al., 2011). These added measures of attention complement the traditional neuropsychological tests of short-term and sustained attention, providing a more thorough evaluation of real-world indicators of attentional problems. Ideally, these supplementary measures will provide additional information to more accurately diagnose attentional disorders (e.g., ADHD) and, in turn, inform effective treatments for deficits across the severity spectrum.

In addition to cognitive processing domains, neuropsychologists have begun to seek ecologically valid assessments of everyday functioning. Assessment of IADLs has varied from virtual reality environments, such as shopping malls and grocery stores, to staged apartments and homes equipped with sensor technologies. Findings from virtual reality studies have suggested that individuals with cognitive impairments exhibit greater difficulties with driving and navigation than older adult controls (e.g., Hagler et al., 2010; Rizzo et al., 2001). Recent research movements in the fields of occupational therapy, physical therapy, nursing, and psychology have focused on everyday functioning and longitudinal monitoring of daily activity performance (Kaye, 2008; Marcotte, Scott, Kamat, & Heaton, 2010). This transition from general cognitive abilities to real-world assessment has sparked the development of tests specific to everyday functional skills and improvements in the ecological validity of cognitive testing. For example, sensor-based assessments of gait and physical strength have been associated with various cognitive disorders (e.g., Buchman, Boyle, Leurgans, Barnes, & Bennett, 2011; de Melo Coelho et al., 2012; Hagler et al., 2010; McGough et al., 2011), and assessment with non-invasive sensors in smart home testbeds or residents’ homes has revealed behavioral patterns of activities that relate to cognitive disorders (e.g., Dawadi et al., 2011; Hayes et al., 2008). The continuous measurement data afforded by these technologies are beyond the capabilities of traditional assessment procedures, including self-report (Stanley & Osgood, 2011), and can provide a more comprehensive assessment of patient functioning. Relative to traditional paper and pencil tests of cognitive domains, these findings suggest that technology-based assessment could contribute additional valuable information to aid with diagnosis and treatment.

What are the major factors to be considered when developing or implementing technology-based assessments?

Future developments of technology-based assessments should consider several issues, including the influence of cohort effects (e.g., knowledge base of technology), cost-effectiveness, construct validity, and standardization procedures. Research has suggested that individuals with greater computer experience perform better on computer-based assessments than those with less computer experience (Iverson et al., 2009; Tun & Lachman, 2010). Thus, interfaces should be user-friendly to accommodate individuals with less experience. In addition, the cost of equipment, such as replacing paper-and-pencil tests with computers or purchasing virtual reality systems, is a consideration for increased use in common assessment procedures; currently these technologies may exceed the financial limits of a clinic. As with other technologies, costs will likely decrease as virtual reality becomes more commonplace; however, cost-benefit analyses should be conducted to ensure accuracy of assessment while maintaining a reasonable cost (Crandall & Cook, 2012). Technology-based assessments may promote shorter evaluation times while gathering more cognitive and behavioral data than a traditional neuropsychological assessment. Research should also investigate the construct validity of new technology-based assessments to address whether the cognitive constructs tapped by traditional paper-and-pencil tests are comparable to those assessed by computerized or virtual reality versions. As with any new measure, psychometric and normative properties of technology-based assessments should be researched and provided to users for accurate evaluation and interpretation of clinical data.

Using technology for assessment also poses an added challenge in that equipment can vary by laboratory or user. For example, the speed of a computer processor can affect the accuracy with which reaction time is measured. Thus, equipment standardization may be necessary to develop technology-based assessments with consistent results across administration sites. In addition, research has suggested that computer-based assessments, as well as virtual reality tasks, are subject to individual variability. Studies have found that familiarity with computers (Tun & Lachman, 2010), motivation and interest in the technology (Gaggioli, 2012), and ease of learning and adaptation to virtual environments (Chen, 2000) can influence performance on computerized assessments. Therefore, normative data for new assessments should account for base rates of such individual variability.
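
The timing concern can be made concrete with a small sketch: reaction-time code should read a high-resolution monotonic clock immediately around the stimulus and response events. The helper below is an illustrative pattern, not any battery’s implementation, and real systems must also account for display refresh and input-device latency, which software alone cannot see.

```python
import time

def reaction_time_ms(wait_for_response):
    """Measures one response latency with a high-resolution monotonic clock
    (time.perf_counter) rather than wall-clock time, so the measurement is
    not distorted by system clock adjustments. Illustrative pattern only."""
    t0 = time.perf_counter()      # stimulus onset (ideally synchronized to the screen refresh)
    wait_for_response()           # placeholder that blocks until the participant responds
    return (time.perf_counter() - t0) * 1000.0

# Simulated participant who responds after roughly 250 ms:
print(round(reaction_time_ms(lambda: time.sleep(0.25))))  # ~250
```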

What are the directions for future clinical work and research related to technology use in neuropsychological assessment?

Computerized and Virtual Reality Assessments

Future research on technology-based assessment should focus on improving the normative properties and standardization of current measures (e.g., sensitivity to target populations), as well as on developing assessments that are sensitive to processes that cannot be assessed easily using paper and pencil measures (e.g., reaction time). Studies should also assess how technology knowledge (e.g., being “tech-savvy” versus a first-time computer user) affects performance on assessments administered via technological modalities (e.g., computer, virtual reality). Functional brain imaging could also be used to evaluate whether the neuroanatomical correlates of virtual reality tasks and traditional tests of everyday functioning are similar, which may help guide the development of new assessments. Lastly, comparisons should be made between real-world performance of activities of daily living and virtual reality assessments designed to resemble everyday tasks.

Smart Environments

Recent strides in computer science have led to activity recognition through passive monitoring of IADLs (e.g., Dawadi et al., 2011; Rashidi et al., 2011; Singla et al., 2010). Smart environment studies have revealed relationships between direct observations of real-world tasks, such as cooking and cleaning, and sensor-based data (e.g., motion sensor activity) from a smart environment (Dawadi, Cook, & Schmitter-Edgecombe, in press). Collaborative work with computer scientists has revealed that smart environment data, such as activity patterns derived from motion sensors, can differentiate cognitively healthy older adults from individuals with cognitive impairment (Dawadi et al., 2011). Additional efforts are concentrating on identifying behavioral changes associated with normal aging and cognitive decline (e.g., Hayes et al., 2008; Kaye et al., 2011), as well as on developing communicative systems (e.g., intercoms) and prompting technologies to promote independent living and delay the need for long-term assistive care (Schmitter-Edgecombe et al., 2013; Seelye et al., 2012). Longitudinal monitoring of IADLs in real-world settings (e.g., patients living in smart homes) is ongoing, and identifying correlations between cognitive and everyday functioning over time could better inform preventative and diagnostic efforts, as well as treatments for cognitive decline (Schmitter-Edgecombe et al., 2013). Data obtained through passive sensor monitoring could provide supplemental information about everyday functioning that is not subject to self-report bias, and could capture subtle yet informative changes that might otherwise go unnoticed (Stanley & Osgood, 2011). Given that the response to smart home technology is generally positive (e.g., Courtney, 2008; Demiris, Hensel, Skubic, & Rantz, 2008; Wild et al., 2012), research to improve the clinical utility of smart homes is necessary if widespread use of sensor systems is desired; however, privacy concerns must continue to be addressed in future research.
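
As a toy illustration of how sensor-derived features might separate groups, the sketch below applies a nearest-centroid rule to hypothetical per-task features (task duration in minutes and the number of sensor events triggered during the task). The features, values, and classifier are invented for illustration; Dawadi et al. (2011) applied machine learning to richer smart-home data, not this exact scheme.

```python
# Nearest-centroid classification over two invented smart-home features:
# [task duration in minutes, sensor-event count during the task].
def centroid(rows):
    """Average each feature column across a group's training rows."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(sample, centroids):
    """Assign the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical training data: impaired participants take longer and trigger
# fewer task-relevant sensor events. All numbers are illustrative.
training = {
    "healthy":  [[12.0, 85], [10.5, 90], [11.2, 88]],
    "impaired": [[22.4, 61], [19.8, 55], [24.1, 50]],
}
centroids = {label: centroid(rows) for label, rows in training.items()}
print(classify([20.0, 58], centroids))  # -> "impaired"
```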

Real-Time Data Collection

Another area of interest for neuropsychologists may be patient data collection in real-time outside of the clinic. Ecological momentary assessment (EMA) is a data collection approach using telephone-, PDA-, or tablet-administered self-report questionnaires. EMA is used to obtain subjective measures multiple times per day (Smyth & Stone, 2003; Stone & Shiffman, 1994). Because traditional neuropsychological assessments occur, most often, during a single appointment, the data may not reflect daily changes or fluctuations in cognition over the day or the week. One of the greatest advantages of EMA is that researchers can gather data in ‘real time’, which may result in less bias than in-clinic data resulting from recollective self-report (Jones & Johnston, 2011; Shiffman, Stone, & Hufford, 2008). Greater accuracy in EMA approaches could be helpful in clinical populations who demonstrate limited insight into cognitive deficits and/or experience memory difficulties (e.g., mild cognitive impairment, dementia; Cain, Depp, & Jeste, 2009). In addition, the data can be captured in the participant’s natural environment without drastically changing or influencing his or her daily routine (Jones & Johnston, 2011). Thus, adding an EMA to assessment protocols may assist neuropsychologists in identifying fluctuations in cognitive abilities over time, as well as the relationship of subjective experiences (e.g., mood, pain) on cognitive functioning.
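
A common EMA design is signal-contingent sampling, in which one prompt is delivered at a random time within each of several daytime windows so that participants cannot anticipate assessments. The sketch below generates such a schedule; the window boundaries and prompt density are assumptions chosen for illustration, not a published protocol.

```python
import random
from datetime import datetime, timedelta

def daily_schedule(day, windows=((9, 12), (12, 15), (15, 18), (18, 21))):
    """Return one randomly timed prompt per daytime window (signal-contingent).

    `windows` holds (start_hour, end_hour) pairs; boundaries are illustrative.
    """
    prompts = []
    for start_hr, end_hr in windows:
        offset = random.uniform(0, (end_hr - start_hr) * 3600)  # seconds
        prompts.append(day.replace(hour=start_hr, minute=0, second=0)
                       + timedelta(seconds=offset))
    return prompts

for t in daily_schedule(datetime(2013, 5, 1)):
    print(t.strftime("%H:%M"))
```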

Final Remarks and Recommendations

Having reviewed the latest progress in technology use in neuropsychological assessment, we can draw several general conclusions. First, studies generally support the psychometric properties of computerized testing and suggest that these administrations are not inferior to traditional paper and pencil tests. In addition, using computers to obtain supplemental cognitive and behavioral information about performance could result in more comprehensive evaluations of neurological functioning. Thus, it is recommended that new assessments incorporate embedded measures of cognitive abilities that are difficult to capture with traditional paper and pencil administrations but may be informative for diagnosis and treatment, such as approach to testing, response latencies, and/or task planning.

Virtual reality technology offers a unique opportunity to gain perspective on IADL performance in real-world scenarios within a safe and controlled environment. The types of scenarios are nearly endless, as developers can create tasks that mimic those of everyday life, including grocery shopping, driving, or cooking. In addition, experimenter-controlled distractions and interruptions can provide valuable information about how performance is affected by everyday disruptions. These adaptations are more representative of real-world tasks, in which cognitive demands are rarely met in a vacuum. Thus, the utility of virtual reality, even with its limitations, has yet to be fully realized in the field of neuropsychology. It is recommended that continued research investigate the relationships between virtual reality and real-world performance so that improvements to assessments can be made.

More recent focus has been placed on the evaluation of everyday functioning, above and beyond the theoretical relationships between neuropsychological tests and IADL performance. This has strained the field to develop more accurate and ecologically valid measures of everyday abilities. The current reliance on self-report measures of IADL performance poses limitations, as it is well established that self-report of IADL performance is subject to personal biases and the influence of cognitive awareness (Dassel & Schmitt, 2008; Suchy et al., 2010; Suchy, Kraybill, & Franchow, 2011). Using smart environment “laboratories” in which participants complete scripted everyday tasks, neuropsychologists may gain a better understanding of the relationships between performance on traditional tests of cognition and IADLs, which could better inform treatment recommendations and test development. Furthermore, collecting real-time data through EMA approaches could shed light on relationships between subjective experiences and cognition, as well as the influence of cognitive abilities on daily tasks. Complementing EMA data collection, continuous activity monitoring through smart environments may also be useful for understanding relationships between self-reported measures of functioning (e.g., EMA) and non-invasive sensor data that represent activity in the home. Thus, having individuals live in a smart environment while simultaneously participating in EMA data collection could provide a valuable dataset for investigating the relationship between self-reported and actual performance of IADLs. Connections between everyday functioning and cognitive abilities could further the development of effective monitoring and treatment options for individuals at risk for, or currently diagnosed with, neurological disorders (Kaye, 2008).

Technology continues to advance, and with every new electronic gadget our society experiences a shift in knowledge, novel applications of information, and even the development of new associated skills. With these transformations come challenges for the field of neuropsychology. Neuropsychological assessment is strained to keep pace with the latest technology and to determine how these advances influence human cognition. Thus, neuropsychologists should continue to make strides in researching new ways to assess cognitive and functional abilities in order to provide quality assessment and care for future generations of clients.

References

  1. Adams R, Finn P, Moes E, Flannery K, Rizzo A. Distractibility in Attention Deficit/Hyperactivity Disorder (ADHD): The Virtual Reality Classroom. Child Neuropsychology. 2009;15(2):120–135. doi: 10.1080/09297040802169077. [DOI] [PubMed] [Google Scholar]
  2. Albani G, Raspelli S, Carelli L, Morganti F, Weiss PL, Kizony R, Katz N, et al. Executive functions in a virtual world: a study in Parkinson’s disease. Studies in health technology and informatics. 2010;154(1):92–96. [PubMed] [Google Scholar]
  3. Albani G, Raspelli S, Carelli L, Priano L, Pignatti R, Morganti F, Gaggioli A, et al. Sleep dysfunctions influence decision making in undemented Parkinson’s disease patients: a study in a virtual supermarket. Studies in Health Technology and Informatics. 2011;163:8–10. [PubMed] [Google Scholar]
  4. Alderman N, Burgess PW, Knight C, Henman C. Ecological validity of a simplified version of the multiple errands shopping test. Journal of the International Neuropsychological Society. 2003;9(1):31–44. doi: 10.1017/s1355617703910046. [DOI] [PubMed] [Google Scholar]
  5. Allen BJ, Gfeller JD. The Immediate Post-Concussion Assessment and Cognitive Testing battery and traditional neuropsychological measures: a construct and concurrent validity study. Brain Injury. 2011;25(2):179–191. doi: 10.3109/02699052.2010.541897. [DOI] [PubMed] [Google Scholar]
  6. Aprahamian I, Diniz BS, Izbicki R, Radanovic M, Nunes PV, Forlenza OV. Optimizing the CAMCOG test in the screening for mild cognitive impairment and incipient dementia: saving time with relevant domains. International Journal of Geriatric Psychiatry. 2011;26(4):403–408. doi: 10.1002/gps.2540. [DOI] [PubMed] [Google Scholar]
  7. Aprahamian I, Martinelli JE, Cecato J, Izbicki R, Yassuda MS. Can the CAMCOG be a good cognitive test for patients with Alzheimer’s disease with low levels of education? International Psychogeriatrics. 2011;23(1):96–101. doi: 10.1017/S104161021000116X. [DOI] [PubMed] [Google Scholar]
  8. Asimakopulos J, Boychuck Z, Sondergaard D, Poulin V, Ménard I, Korner-Bitensky N. Assessing executive function in relation to fitness to drive: A review of tools and their ability to predict safe driving. Australian Occupational Therapy Journal. 2012;59(6):402–427. doi: 10.1111/j.1440-1630.2011.00963.x. [DOI] [PubMed] [Google Scholar]
  9. Athey RJ, Porter RW, Walker RW. Cognitive Assessment of a Representative Community Population with Parkinson’s Disease Using the Cambridge Cognitive Assessment–Revised (CAMCOG-R) Age and Ageing. 2005;34(3):268–273. doi: 10.1093/ageing/afi098. [DOI] [PubMed] [Google Scholar]
  10. Athey RJ, Walker RW. Demonstration of cognitive decline in Parkinson’s disease using the Cambridge Cognitive Assessment Revised (CAMCOG-R) International Journal of Geriatric Psychiatry. 2006;21(10):977–982. doi: 10.1002/gps.1595. [DOI] [PubMed] [Google Scholar]
  11. Bauer RM, Iverson GL, Cernich AN, Binder LM, Ruff RM, Naugle RI. Computerized Neuropsychological Assessment Devices: Joint Position Paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. The Clinical Neuropsychologist. 2012;26(2):177–196. doi: 10.1080/13854046.2012.663001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Bleiberg J, Garmoe WS, Halpern EL, Reeves DL, Nadler JD. Consistency of within-day and across-day performance after mild brain injury. Neuropsychiatry, neuropsychology, and behavioral neurology. 1997;10(4):247–253. [PubMed] [Google Scholar]
  13. Bleiberg J, Kane RL, Reeves DL, Garmoe WS, Halpern E. Factor analysis of computerized and traditional tests used in mild brain injury research. The Clinical Neuropsychologist. 2000;14(3):287–294. doi: 10.1076/1385-4046(200008)14:3;1-P;FT287. [DOI] [PubMed] [Google Scholar]
  14. Broglio SP, Ferrara MS, Macciocchi SN, Baumgartner TA, Elliott R. Test-retest reliability of computerized concussion assessment programs. Journal of Athletic Training. 2007;42(4):509–514. [PMC free article] [PubMed] [Google Scholar]
  15. Bryan C, Hernandez AM. Magnitudes of Decline on Automated Neuropsychological Assessment Metrics Subtest Scores Relative to Predeployment Baseline Performance Among Service Members Evaluated for Traumatic Brain Injury in Iraq. Journal of Head Trauma Rehabilitation. 2012;27(1):45–54. doi: 10.1097/HTR.0b013e318238f146. [DOI] [PubMed] [Google Scholar]
  16. Buchman AS, Boyle PA, Leurgans SE, Barnes LL, Bennett DA. Cognitive Function Is Associated With the Development of Mobility Impairments in Community-Dwelling Elders. American Journal of Geriatric Psychiatry. 2011;19(6):571–580. doi: 10.1097/JGP.0b013e3181ef7a2e. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Burgess PW, Alderman N, Forbes C, Costello A, Coates LM-A, Dawson DR, Anderson ND, et al. The case for the development and use of “ecologically valid” measures of executive function in experimental and clinical neuropsychology. Journal of the International Neuropsychological Society. 2006;12(2):194–209. doi: 10.1017/S1355617706060310. [DOI] [PubMed] [Google Scholar]
  18. Burton EJ, McKeith IG, Burn DJ, Williams ED, O’Brien JT. Cerebral atrophy in Parkinson’s disease with and without dementia: a comparison with Alzheimer’s disease, dementia with Lewy bodies and controls. Brain. 2004;127(4):791–800. doi: 10.1093/brain/awh088. [DOI] [PubMed] [Google Scholar]
  19. Cain AE, Depp CA, Jeste DV. Ecological momentary assessment in aging research: a critical review. Journal of psychiatric research. 2009;43(11):987–996. doi: 10.1016/j.jpsychires.2009.01.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Calhoun VD, Pearlson GD. A selective review of simulated driving studies: Combining naturalistic and hybrid paradigms, analysis approaches, and future directions. NeuroImage. 2012;59(1):25–35. doi: 10.1016/j.neuroimage.2011.06.037. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Carlozzi NE, Gade V, Rizzo AS, Tulsky DS. Using virtual reality driving simulators in persons with spinal cord injury: three screen display versus head mounted display. Disability and Rehabilitation: Assistive Technology. 2012 doi: 10.3109/17483107.2012.699990. [DOI] [PubMed] [Google Scholar]
  22. Chamberlain SR, Robbins TW, Winder-Rhodes S, Müller U, Sahakian BJ, Blackwell AD, Barnett JH. Translational approaches to frontostriatal dysfunction in attention-deficit/hyperactivity disorder using a computerized neuropsychological battery. Biological Psychiatry. 2011;69(12):1192–1203. doi: 10.1016/j.biopsych.2010.08.019. [DOI] [PubMed] [Google Scholar]
  23. Chen C. Individual differences in a spatial-semantic virtual environment. Journal of the American Society for Information Science. 2000;51(6):529–542. [Google Scholar]
  24. Coelho FGdeM, Stella F, de Andrade LP, Barbieri FA, Santos-Galduróz RF, Gobbi S, Costa JLR, et al. Gait and risk of falls associated with frontal cognitive functions at different stages of Alzheimer’s disease. Neuropsychology, Development, and Cognition: Section B. 2012;19(5):644–656. doi: 10.1080/13825585.2012.661398. [DOI] [PubMed] [Google Scholar]
  25. Collie A, Maruff P, Darby DG, McStephen M. The effects of practice on the cognitive test performance of neurologically normal individuals assessed at brief test-retest intervals. Journal of the International Neuropsychological Society. 2003;9(3):419–428. doi: 10.1017/S1355617703930074. [DOI] [PubMed] [Google Scholar]
  26. Conde-Sala JL, Garre-Olmo J, Vilalta-Franch J, Llinàs-Reglà J, Turró-Garriga O, Lozano-Gallego M, Hernández-Ferrándiz M, et al. Predictors of cognitive decline in Alzheimer’s disease and mild cognitive impairment using the CAMCOG: a five-year follow-up. International Psychogeriatrics. 2012;24(6):948–958. doi: 10.1017/S1041610211002158. [DOI] [PubMed] [Google Scholar]
  27. Costello PJ. Health and Safety Issues associated with Virtual Reality – A Review of the Current Literature. Exploiting Virtual Reality Techniques in Education and Training: Technological Issues. 1997 SIMA Report Series http://www.agocg.ac.uk/reports/virtual/37/37.pdf.
  28. Courtney KL. Privacy and Senior Willingness to Adopt Smart Home Information Technology in Residential Care Facilities. Methods of Information in Medicine. 2008;47(1):76–81. doi: 10.3414/me9104. [DOI] [PubMed] [Google Scholar]
  29. Cox DJ, Davis M, Singh H, Barbour B, Nidiffer FD, Trudel T, Mourant R, et al. Driving rehabilitation for military personnel recovering from traumatic brain injury using virtual reality driving simulation: a feasibility study. Military Medicine. 2010;175(6):411–416. doi: 10.7205/milmed-d-09-00081. [DOI] [PubMed] [Google Scholar]
  30. Crandall A, Cook DJ. Smart home in a box: A large scale smart home deployment. Proceedings of the Workshop on Large Scale Intelligent Environments. 2012;13(1):169–178. [Google Scholar]
  31. Cumming TB, Brodtmann A, Darby D, Bernhardt J. Cutting a long story short: Reaction times in acute stroke are associated with longer term cognitive outcomes. Journal of the Neurological Sciences. 2012;322(1-2):102–106. doi: 10.1016/j.jns.2012.07.004. [DOI] [PubMed] [Google Scholar]
  32. Daniel JC, Olesniewicz MH, Reeves DL, Tam D, Bleiberg J, Thatcher R, Salazar A. Repeated measures of cognitive processing efficiency in adolescent athletes: implications for monitoring recovery from concussion. Neuropsychiatry, neuropsychology, and behavioral neurology. 1999;12(3):167–169. [PubMed] [Google Scholar]
  33. Darby DG, Brodtmann A, Pietrzak RH, Fredrickson J, Woodward M, Villemagne VL, Fredrickson A, et al. Episodic memory decline predicts cortical amyloid status in community-dwelling older adults. Journal of Alzheimer’s Disease. 2011;27(3):627–637. doi: 10.3233/JAD-2011-110818. [DOI] [PubMed] [Google Scholar]
  34. Darby DG, Pietrzak RH, Fredrickson J, Woodward M, Moore L, Fredrickson A, Sach J, et al. Intraindividual cognitive decline using a brief computerized cognitive screening test. Alzheimer’s & dementia: the journal of the Alzheimer’s Association. 2012;8(2):95–104. doi: 10.1016/j.jalz.2010.12.009. [DOI] [PubMed] [Google Scholar]
  35. Darby D, Maruff P, Collie A, McStephen M. Mild cognitive impairment can be detected by multiple assessments in a single day. Neurology. 2002;59(7):1042–1046. doi: 10.1212/wnl.59.7.1042. [DOI] [PubMed] [Google Scholar]
  36. Dassel KB, Schmitt FA. The impact of caregiver executive skills on reports of patient functioning. The Gerontologist. 2008;48(6):781–792. doi: 10.1093/geront/48.6.781. [DOI] [PubMed] [Google Scholar]
  37. Dawadi P, Cook D, Schmitter-Edgecombe M. Automated cognitive health assessment using smart home monitoring of complex tasks. IEEE Transactions on Human-Machine Systems. doi: 10.1109/TSMC.2013.2252338. in press. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Dawadi P, Parsey C, Schneider M, Schmitter-Edgecombe M, Cook D. An approach to cognitive assessment in smart homes. Proceedings of the KDD Workshop on Medicine and Healthcare. 2011:56–59. [Google Scholar]
  39. de Jager CA, Budge MM. Stability and predictability of the classification of mild cognitive impairment as assessed by episodic memory test performance over time. Neurocase. 2005;11(1):72–79. doi: 10.1080/13554790490896820. [DOI] [PubMed] [Google Scholar]
  40. Demiris G, Hensel BK, Skubic M, Rantz M. Senior residents’ perceived need of and preferences for “smart home” sensor technologies. International Journal of Technology Assessment in Health Care. 2008;24(1):120–124. doi: 10.1017/S0266462307080154. [DOI] [PubMed] [Google Scholar]
  41. Dickerson A, Reistetter T, Trujillo L. Using an IADL Assessment to Identify Older Adults Who Need a Behind-the-Wheel Driving Evaluation. Journal of Applied Gerontology. 2009;29(4):494–506. [Google Scholar]
  42. Doniger GM. Computerized cognitive testing battery identifies mild cognitive impairment and mild dementia even in the presence of depressive symptoms. American Journal of Alzheimer’s Disease and Other Dementias. 2006;21(1):28–36. doi: 10.1177/153331750602100105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Doniger GM, Simon ES. Computerized cognitive testing in aging. Alzheimer’s & Dementia. 2009;5(5):439–440. doi: 10.1016/j.jalz.2009.03.003. [DOI] [PubMed] [Google Scholar]
  44. Doniger GM, Zucker DM, Schweiger A, Dwolatzky T, Chertkow H, Crystal H, et al. Towards practical cognitive assessment for detection of early dementia: a 30-minute computerized battery discriminates as well as longer testing. Current Alzheimer Research. 2005;2(2):117–124. doi: 10.2174/1567205053585792. [DOI] [PubMed] [Google Scholar]
  45. Downes JJ, Roberts AC, Sahakian BJ, Evenden JL, Morris RG, Robbins TW. Impaired extra-dimensional shift performance in medicated and unmedicated Parkinson’s disease: evidence for a specific attentional dysfunction. Neuropsychologia. 1989;27(11-12):1329–1343. doi: 10.1016/0028-3932(89)90128-0. [DOI] [PubMed] [Google Scholar]
  46. Dwolatzky T, Dimant L, Simon ES, Doniger GM. Validity of a short computerized assessment battery for moderate cognitive impairment and dementia. International Psychogeriatrics. 2010;22(5):795–803. doi: 10.1017/S1041610210000621. [DOI] [PubMed] [Google Scholar]
  47. Dwolatzky T, Whitehead V, Doniger GM, Simon ES, Schweiger A, Jaffe D, Chertkow H. Validity of a novel computerized cognitive battery for mild cognitive impairment. BioMed Central (BMC) Geriatrics. 2003;3(4) doi: 10.1186/1471-2318-3-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Dwolatzky T, Whitehead V, Doniger GM, Simon ES, Schweiger A, Jaffe D, Chertkow H. Validity of the Mindstreams computerized cognitive battery for mild cognitive impairment. Journal of Molecular Neuroscience. 2004;24(1):33–44. doi: 10.1385/jmn:24:1:033. [DOI] [PubMed] [Google Scholar]
  49. Edwards S, Brice C, Craig C, Penri-Jones R. Effects of caffeine, practice, and mode of presentation on stroop task performance. Pharmacology Biochemistry and Behavior. 1996;54(2):309–315. doi: 10.1016/0091-3057(95)02116-7. [DOI] [PubMed] [Google Scholar]
  50. Égerházi A, Berecz R, Bartók E, Degrell I. Automated Neuropsychological Test Battery (CANTAB) in mild cognitive impairment and in Alzheimer’s disease. Progress in Neuro-Psychopharmacology and Biological Psychiatry. 2007;31(3):746–751. doi: 10.1016/j.pnpbp.2007.01.011. [DOI] [PubMed] [Google Scholar]
  51. Elkind JS, Rubin E, Rosenthal S, Skoff B, Prather P. A simulated reality scenario compared with the computerized Wisconsin card sorting test: an analysis of preliminary results. Cyberpsychology & Behavior: The Impact of the Internet, Multimedia and Virtual Reality on Behavior and Society. 2001;4(4):489–496. doi: 10.1089/109493101750527042. [DOI] [PubMed] [Google Scholar]
  52. Eonta SE, Carr W, McArdle JJ, Kain JM, Tate C, Wesensten NJ, Norris JN, et al. Automated Neuropsychological Assessment Metrics: repeated assessment with two military samples. Aviation, space, and environmental medicine. 2011;82(1):34–39. doi: 10.3357/asem.2799.2011. [DOI] [PubMed] [Google Scholar]
  53. Falleti MG, Maruff P, Collie A, Darby DG. Practice effects associated with the repeated assessment of cognitive function using the CogState battery at 10-minute, one week and one month test-retest intervals. Journal of Clinical and Experimental Neuropsychology. 2006;28(7):1095–1112. doi: 10.1080/13803390500205718. [DOI] [PubMed] [Google Scholar]
  54. Fazeli PL, Ross LA, Vance DE, Ball K. The Relationship Between Computer Experience and Computerized Cognitive Test Performance Among Older Adults. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences. 2012 doi: 10.1093/geronb/gbs071. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Feldstein SN, Keller FR, Portman RE, Durham RL, Klebe KJ, Davis HP. A Comparison of Computerized and Standard Versions of the Wisconsin Card Sorting Test. The Clinical Neuropsychologist. 1999;13(3):303–313. doi: 10.1076/clin.13.3.303.1744. [DOI] [PubMed] [Google Scholar]
  56. Flynn D, van Schaik P, Blackman T, Femcott C, Hobbs B, Calderon C. Developing a virtual reality-based methodology for people with dementia: a feasibility study. Cyberpsychology & Behavior. 2003;6(6):591–611. doi: 10.1089/109493103322725379. [DOI] [PubMed] [Google Scholar]
  57. Fillit HM, Simon ES, Doniger GM, Cummings JL. Practicality of a computerized system for cognitive assessment in the elderly. Alzheimer’s and Dementia. 2008;4(1):14–21. doi: 10.1016/j.jalz.2007.09.008. [DOI] [PubMed] [Google Scholar]
  58. Fowler KS, Saling MM, Conway EL, Semple JM, Louis WJ. Paired associate performance in the early detection of DAT. Journal of the International Neuropsychological Society. 2002;8(1):58–71. [PubMed] [Google Scholar]
  59. Fredrickson J, Maruff P, Woodward M, Moore L, Fredrickson A, Sach J, Darby D. Evaluation of the usability of a brief computerized cognitive screening test in older people for epidemiological studies. Neuroepidemiology. 2010;34(2):65–75. doi: 10.1159/000264823. [DOI] [PubMed] [Google Scholar]
  60. Fried R, Hirshfeld-Becker D, Petty C, Batchelder H, Biederman J. How Informative Is the CANTAB to Assess Executive Functioning in Children With ADHD? A Controlled Study. Journal of Attention Disorders. 2012 doi: 10.1177/1087054712457038. in press. [DOI] [PubMed] [Google Scholar]
  61. Gaggioli A. Quality of experience in real and virtual environments: some suggestions for the development of positive technologies. Studies in health technology and informatics. 2012;181(1):177–181. [PubMed] [Google Scholar]
  62. Gallagher D, Mhaolain AN, Coen R, Walsh C, Kilroy D, Belinski K, Bruce I, et al. Detecting prodromal Alzheimer’s disease in mild cognitive impairment: utility of the CAMCOG and other neuropsychological predictors. International Journal of Geriatric Psychiatry. 2010;25(12):1280–1287. doi: 10.1002/gps.2480. [DOI] [PubMed] [Google Scholar]
  63. Gaspar JG, Neider MB, Kramer AF. Falls risk and simulated driving performance in older adults. Journal of aging research. 2013;2013:356948. doi: 10.1155/2013/356948. [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Gau SS-F, Shang C-Y. Executive functions as endophenotypes in ADHD: evidence from the Cambridge Neuropsychological Test Battery (CANTAB) Journal of Child Psychology and Psychiatry. 2010;51(7):838–849. doi: 10.1111/j.1469-7610.2010.02215.x. [DOI] [PubMed] [Google Scholar]
  65. Gualtieri CT, Johnson LG. Neurocognitive testing supports a broader concept of mild cognitive impairment. American Journal of Alzheimer’s Disease and Other Dementias. 2005;20(6):359–366. doi: 10.1177/153331750502000607. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Gualtieri CT, Johnson LG. Reliability and validity of a computerized neurocognitive test battery, CNS Vital Signs. Archives of Clinical Neuropsychology. 2006;21(7):623–643. doi: 10.1016/j.acn.2006.05.007. [DOI] [PubMed] [Google Scholar]
  67. Gualtieri CT, Johnson LG. A computerized test battery sensitive to mild and severe brain injury. Medscape Journal of Medicine. 2008;10(4):90. [PMC free article] [PubMed] [Google Scholar]
  68. Guevara MA, Rizo L, Ruiz-Díaz M, Hernández-González M. HANOIPC3: A computer program to evaluate executive functions. Computer Methods and Programs in Biomedicine. 2009;95(2):158–165. doi: 10.1016/j.cmpb.2009.02.007. [DOI] [PubMed] [Google Scholar]
  69. Hagler S, Austin D, Hayes TL, Kaye J, Pavel M. Unobtrusive and Ubiquitous In-Home Monitoring: A Methodology for Continuous Assessment of Gait Velocity in Elders. IEEE Transactions on Biomedical Engineering. 2010;57(4):813–820. doi: 10.1109/TBME.2009.2036732. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Hammers D, Spurgeon E, Ryan K, Persad C, Barbas N, Heidebrink J, Darby D, et al. Validity of a brief computerized cognitive screening test in dementia. Journal of Geriatric Psychiatry and Neurology. 2012;25(2):89–99. doi: 10.1177/0891988712447894. [DOI] [PubMed] [Google Scholar]
  71. Hammers D, Spurgeon E, Ryan K, Persad C, Heidebrink J, Barbas N, Albin R, et al. Reliability of repeated cognitive assessment of dementia using a brief computerized battery. American Journal of Alzheimer’s Disease and Other Dementias. 2011;26(4):326–333. doi: 10.1177/1533317511411907. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Hanna-Pladdy B, Enslein A, Fray M, Gajewski BJ, Pahwa R, Lyons KE. Utility of the NeuroTrax computerized battery for cognitive screening in Parkinson’s disease: comparison with the MMSE and the MoCA. The International Journal of Neuroscience. 2010;120(8):538–543. doi: 10.3109/00207454.2010.496539. [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Harel BT, Darby D, Pietrzak RH, Ellis KA, Snyder PJ, Maruff P. Examining the nature of impairment in visual paired associate learning in amnestic mild cognitive impairment. Neuropsychology. 2011;25(6):752–762. doi: 10.1037/a0024237. [DOI] [PubMed] [Google Scholar]
  74. Hawkins KA, Jennings D, Vincent AS, Gilliland K, West A, Marek K. Traditional neuropsychological correlates and reliability of the automated neuropsychological assessment metrics-4 battery for Parkinson’s disease. Parkinsonism & related disorders. 2012;18(7):864–870. doi: 10.1016/j.parkreldis.2012.04.021. [DOI] [PubMed] [Google Scholar]
  75. Hayes TL, Abendroth F, Adami A, Pavel M, Zitzelberger TA, Kaye JA. Unobtrusive assessment of activity patterns associated with mild cognitive impairment. Alzheimer’s and Dementia. 2008;4(6):395–405. doi: 10.1016/j.jalz.2008.07.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Heinik J, Solomesh I, Berkman P. Correlation between the CAMCOG, the MMSE, and three clock drawing tests in a specialized outpatient psychogeriatric service. Archives of Gerontology and Geriatrics. 2004;38(1):77–84. doi: 10.1016/j.archger.2003.08.004. [DOI] [PubMed] [Google Scholar]
  77. Hobson P, Meara J. The detection of dementia and cognitive impairment in a community population of elderly people with Parkinson’s disease by use of the CAMCOG neuropsychological test. Age and ageing. 1999;28(1):39–43. doi: 10.1093/ageing/28.1.39. [DOI] [PubMed] [Google Scholar]
  78. Iverson GL, Brooks BL, Ashton VL, Johnson LG, Gualtieri CT. Does familiarity with computers affect computerized neuropsychological test performance? Journal of Clinical and Experimental Neuropsychology. 2009;31(5):594–604. doi: 10.1080/13803390802372125. [DOI] [PubMed] [Google Scholar]
  79. Jones M, Johnston D. Understanding phenomena in the real world: the case for real time data collection in health services research. Journal of Health Services Research & Policy. 2011;16(3):172–176. doi: 10.1258/jhsrp.2010.010016. [DOI] [PubMed] [Google Scholar]
  80. Junkkila J, Oja S, Laine M, Karrasch M. Applicability of the CANTAB-PAL Computerized Memory Test in Identifying Amnestic Mild Cognitive Impairment and Alzheimer’s Disease. Dementia and Geriatric Cognitive Disorders. 2012;34(2):83–89. doi: 10.1159/000342116. [DOI] [PubMed] [Google Scholar]
  81. Kalawsky RS. Exploiting Virtual Reality Techniques in Education and Training: Technological Issues. SIMA Report Series. 1996 [Google Scholar]
  82. Kang YJ, Ku J, Han K, Kim SI, Yu TW, Lee JH, Park CI. Development and Clinical Trial of Virtual Reality-Based Cognitive Assessment in People with Stroke: Preliminary Study. CyberPsychology & Behavior. 2008;11(3):329–339. doi: 10.1089/cpb.2007.0116. [DOI] [PubMed] [Google Scholar]
  83. Kaye J. Home-based technologies: a new paradigm for conducting dementia prevention trials. Alzheimer’s & Dementia. 2008;4(1 Supplement 1):S60–66. doi: 10.1016/j.jalz.2007.10.003. [DOI] [PubMed] [Google Scholar]
  84. Kaye JA, Maxwell SA, Mattek N, Hayes TL, Dodge H, Pavel M, Jimison HB, et al. Intelligent Systems for Assessing Aging Changes: Home-Based, Unobtrusive, and Continuous Assessment of Aging. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences. 2011;66B(Supplement 1):i180–i190. doi: 10.1093/geronb/gbq095. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. de Koning I, van Kooten F, Koudstaal PJ, Dippel DWJ. Diagnostic value of the Rotterdam-CAMCOG in post-stroke dementia. Journal of Neurology, Neurosurgery & Psychiatry. 2005;76(2):263–265. doi: 10.1136/jnnp.2004.039511. [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Koski L, Brouillette M-J, Lalonde R, Hello B, Wong E, Tsuchida A, Fellows L. Computerized testing augments pencil-and-paper tasks in measuring HIV-associated mild cognitive impairment. HIV Medicine. 2011;12(8):472–480. doi: 10.1111/j.1468-1293.2010.00910.x. [DOI] [PubMed] [Google Scholar]
  87. Kush JC, Spring MB, Barkand J. Advances in the assessment of cognitive skills using computer-based measurement. Behavior Research Methods. 2012;44(1):125–134. doi: 10.3758/s13428-011-0136-2. [DOI] [PubMed] [Google Scholar]
  88. Law AS, Logie RH, Pearson DG. The impact of secondary tasks on multi-tasking in a virtual environment. Acta Psychologica. 2006;122(1):27–44. doi: 10.1016/j.actpsy.2005.09.002. [DOI] [PubMed] [Google Scholar]
  89. Leeds L, Meara RJ, Woods R, Hobson JP. A comparison of the new executive functioning domains of the CAMCOG-R with existing tests of executive function in elderly stroke survivors. Age and Ageing. 2001;30(3):251–254. doi: 10.1093/ageing/30.3.251. [DOI] [PubMed] [Google Scholar]
  90. Levinson DM, Reeves DL. Monitoring recovery from traumatic brain injury using automated neuropsychological assessment metrics (ANAM V1.0) Archives of Clinical Neuropsychology. 1997;12(2):155–166. doi: 10.1093/arclin/12.2.155. [DOI] [PubMed] [Google Scholar]
  91. Levinson D, Reeves D, Watson J, Harrison M. Automated neuropsychological assessment metrics (ANAM) measures of cognitive effects of Alzheimer’s disease. Archives of Clinical Neuropsychology. 2005;20(3):403–408. doi: 10.1016/j.acn.2004.09.001. [DOI] [PubMed] [Google Scholar]
  92. Lew HL, Poole JH, Lee EH, Jaffe DL, Huang H-C, Brodd E. Predictive validity of driving-simulator assessments following traumatic brain injury: a preliminary study. Brain Injury. 2005;19(3):177–188. doi: 10.1080/02699050400017171. [DOI] [PubMed] [Google Scholar]
  93. Lim YY, Ellis KA, Harrington K, Ames D, Martins RN, Masters CL, Rowe C, et al. Use of the CogState Brief Battery in the assessment of Alzheimer’s disease related cognitive impairment in the Australian Imaging, Biomarkers and Lifestyle (AIBL) study. Journal of Clinical and Experimental Neuropsychology. 2012;34(4):345–358. doi: 10.1080/13803395.2011.643227. [DOI] [PubMed] [Google Scholar]
  94. Lovell M, Collins M, Bradley J. Return to play following sports-related concussion. Clinics in Sports Medicine. 2004;23(3):421–441. doi: 10.1016/j.csm.2004.04.001. [DOI] [PubMed] [Google Scholar]
  95. Lovell MR. The relevance of neuropsychologic testing for sports-related head injuries. Current Sports Medicine Reports. 2002;1(1):7–11. doi: 10.1249/00149619-200202000-00003. [DOI] [PubMed] [Google Scholar]
  96. Lovell MR, Collins MW, Iverson GL, Field M, Maroon JC, Cantu R, Podell K, et al. Recovery from mild concussion in high school athletes. Journal of neurosurgery. 2003;98(2):296–301. doi: 10.3171/jns.2003.98.2.0296. [DOI] [PubMed] [Google Scholar]
  97. Lowe C, Rabbitt P. Test/re-test reliability of the CANTAB and ISPOCD neuropsychological batteries: theoretical and practical issues. Cambridge Neuropsychological Test Automated Battery. International Study of Post-Operative Cognitive Dysfunction. Neuropsychologia. 1998;36(9):915–923. doi: 10.1016/s0028-3932(98)00036-0. [DOI] [PubMed] [Google Scholar]
  98. Lundqvist A, Gerdle B, Rönnberg J. Neuropsychological aspects of driving after a stroke - in the simulator and on the road. Applied Cognitive Psychology. 2000;14(2):135–150. [Google Scholar]
  99. Macciocchi SN, Barth JT, Alves W, Rimel RW, Jane JA. Neuropsychological functioning and recovery after mild head injury in collegiate athletes. Neurosurgery. 1996;39(3):510–514. [PubMed] [Google Scholar]
  100. Marcotte TD, Scott JC, Kamat R, Heaton RK. Neuropsychology and the prediction of everyday functioning. In: Marcotte TD, Grant I, editors. Neuropsychology of Everyday Functioning. New York: The Guilford Press; 2010. pp. 5–38. [Google Scholar]
  101. Maruff P, Collie A, Darby D, Weaver-Cargin J, Masters C, Currie J. Subtle Memory Decline over 12 Months in Mild Cognitive Impairment. Dementia and Geriatric Cognitive Disorders. 2004;18(3-4):342–348. doi: 10.1159/000080229. [DOI] [PubMed] [Google Scholar]
  102. Maruff P, Thomas E, Cysique L, Brew B, Collie A, Snyder P, Pietrzak RH. Validity of the CogState brief battery: relationship to standardized tests and sensitivity to cognitive impairment in mild traumatic brain injury, schizophrenia, and AIDS dementia complex. Archives of Clinical Neuropsychology. 2009;24(2):165–178. doi: 10.1093/arclin/acp010. [DOI] [PubMed] [Google Scholar]
  103. Mataix-Cols D, Bartrés-Faz D. Is the Use of the Wooden and Computerized Versions of the Tower of Hanoi Puzzle Equivalent? Applied Neuropsychology. 2002;9(2):117–120. doi: 10.1207/S15324826AN0902_8. [DOI] [PubMed] [Google Scholar]
  104. Matheis RJ, Schultheis MT, Tiersky LA, DeLuca J, Millis SR, Rizzo A. Is learning and memory different in a virtual environment? The Clinical Neuropsychologist. 2007;21(1):146–161. doi: 10.1080/13854040601100668. [DOI] [PubMed] [Google Scholar]
  105. Mayhew DR, Simpson HM, Wood KM, Lonero L, Clinton KM, Johnson AG. On-road and simulated driving: Concurrent and discriminant validation. Journal of Safety Research. 2011;42(4):267–275. doi: 10.1016/j.jsr.2011.06.004. [DOI] [PubMed] [Google Scholar]
  106. McGeorge P, Phillips LH, Crawford JR, Garden SE, Sala SD, Milne AB, Hamilton S, et al. Using Virtual Environments in the Assessment of Executive Dysfunction. Presence: Teleoperators and Virtual Environments. 2001;10(4):375–383. [Google Scholar]
  107. McGough EL, Kelly VE, Logsdon RG, McCurry SM, Cochrane BB, Engel JM, Teri L. Associations between physical performance and executive function in older adults with mild cognitive impairment: gait speed and the timed “up & go” test. Physical Therapy. 2011;91(8):1198–1207. doi: 10.2522/ptj.20100372. [DOI] [PMC free article] [PubMed] [Google Scholar]
  108. McKinlay A, Grace RC, Kaller CP, Dalrymple-Alford JC, Anderson TJ, Fink J, Roger D. Assessing cognitive impairment in Parkinson’s disease: a comparison of two tower tasks. Applied Neuropsychology. 2009;16(3):177–185. doi: 10.1080/09084280903098661. [DOI] [PubMed] [Google Scholar]
  109. Morris RG, Downes JJ, Sahakian BJ, Evenden JL, Heald A, Robbins TW. Planning and spatial working memory in Parkinson’s disease. Journal of Neurology, Neurosurgery, and Psychiatry. 1988;51(6):757–766. doi: 10.1136/jnnp.51.6.757. [DOI] [PMC free article] [PubMed] [Google Scholar]
  110. Mullen NW, Weaver B, Riendeau JA, Morrison LE, Bédard M. Driving performance and susceptibility to simulator sickness: are they related? The American Journal of Occupational Therapy. 2010;64(2):288–295. doi: 10.5014/ajot.64.2.288. [DOI] [PubMed] [Google Scholar]
  111. Nichols S, Patel H. Health and safety implications of virtual reality: a review of empirical evidence. Applied Ergonomics. 2002;33(3):251–271. doi: 10.1016/s0003-6870(02)00020-0. [DOI] [PubMed] [Google Scholar]
  112. Noyes JM, Garland KJ. Solving the Tower of Hanoi: does mode of presentation matter? Computers in Human Behavior. 2003;19(5):579–592. [Google Scholar]
  113. Noyes JM, Garland KJ. Computer- vs. paper-based tasks: are they equivalent? Ergonomics. 2008;51(9):1352–1375. doi: 10.1080/00140130802170387. [DOI] [PubMed] [Google Scholar]
  114. Nunes PV, Diniz BS, Radanovic M, Abreu ID, Borelli DT, Yassuda MS, Forlenza OV. CAMCOG as a screening tool for diagnosis of mild cognitive impairment and dementia in a Brazilian clinical sample of moderate to high education. International Journal of Geriatric Psychiatry. 2008;23(11):1127–1133. doi: 10.1002/gps.2038. [DOI] [PubMed] [Google Scholar]
  115. O’Connell H, Coen R, Kidd N, Warsi M, Chin A-V, Lawlor BA. Early detection of Alzheimer’s disease (AD) using the CANTAB paired Associates Learning Test. International Journal of Geriatric Psychiatry. 2004;19(12):1207–1208. doi: 10.1002/gps.1180. [DOI] [PubMed] [Google Scholar]
  116. Osher Y, Dobron A, Belmaker RH, Bersudsky Y, Dwolatzky T. Computerized testing of neurocognitive function in euthymic bipolar patients compared to those with mild cognitive impairment and cognitively healthy controls. Psychotherapy and Psychosomatics. 2011;80(5):298–303. doi: 10.1159/000324508. [DOI] [PubMed] [Google Scholar]
  117. Parsons TD, Bowerly T, Buckwalter JG, Rizzo AA. A controlled clinical comparison of attention performance in children with ADHD in a virtual reality classroom compared to standard neuropsychological methods. Child Neuropsychology. 2007;13(4):363–381. doi: 10.1080/13825580600943473. [DOI] [PubMed] [Google Scholar]
  118. Parsons TD, Courtney CG, Arizmendi B, Dawson M. Virtual Reality Stroop Task for neurocognitive assessment. Studies in Health Technology and Informatics. 2011;163(1):433–439. [PubMed] [Google Scholar]
  119. Parsons TD, Courtney C, Rizzo AA, Armstrong C, Edwards J, Reger G. Virtual reality paced serial assessment test for neuropsychological assessment of a military cohort. Studies in Health Technology and Informatics. 2012;173(1):331–337. [PubMed] [Google Scholar]
  120. Parsons TD, Rizzo AA. Initial validation of a virtual environment for assessment of memory functioning: virtual reality cognitive performance assessment test. Cyberpsychology & Behavior. 2008;11(1):17–25. doi: 10.1089/cpb.2007.9934. [DOI] [PubMed] [Google Scholar]
  121. Parsons TD, Silva TM, Pair J, Rizzo AA. Virtual environment for assessment of neurocognitive functioning: virtual reality cognitive performance assessment test. Studies in Health Technology and Informatics. 2008;132(1):351–356. [PubMed] [Google Scholar]
  122. Patomella A-H, Tham K, Kottorp A. P-drive: assessment of driving performance after stroke. Journal of rehabilitation medicine. 2006;38(5):273–279. doi: 10.1080/16501970600632594. [DOI] [PubMed] [Google Scholar]
  123. Pietrzak RH, Maruff P, Mayes LC, Roman SA, Sosa JA, Snyder PJ. An examination of the construct validity and factor structure of the Groton Maze Learning Test, a new measure of spatial working memory, learning efficiency, and error monitoring. Archives of Clinical Neuropsychology. 2008;23(4):433–445. doi: 10.1016/j.acn.2008.03.002. [DOI] [PubMed] [Google Scholar]
  124. Pugnetti L, Mendozzi L, Motta A, Cattaneo A, Barbieri E, Brancotti A. Evaluation and retraining of adults’ cognitive impairment: which role for virtual reality technology? Computers in Biology and Medicine. 1995;25(2):213–227. doi: 10.1016/0010-4825(94)00040-w. [DOI] [PubMed] [Google Scholar]
  125. Rajendran G, Law AS, Logie RH, van der Meulen M, Fraser D, Corley M. Investigating multitasking in high-functioning adolescents with autism spectrum disorders using the Virtual Errands Task. Journal of Autism and Developmental Disorders. 2011;41(11):1445–1454. doi: 10.1007/s10803-010-1151-3. [DOI] [PubMed] [Google Scholar]
  126. Rand D, Basha-Abu Rukan S, Weiss PLT, Katz N. Validation of the Virtual MET as an assessment tool for executive functions. Neuropsychological Rehabilitation. 2009;19(4):583–602. doi: 10.1080/09602010802469074. [DOI] [PubMed] [Google Scholar]
  127. Rashidi P, Cook DJ, Holder LB, Schmitter-Edgecombe M. Discovering Activities to Recognize and Track in a Smart Environment. IEEE Transactions on Knowledge and Data Engineering. 2011;23(4):527–539. doi: 10.1109/TKDE.2010.148. [DOI] [PMC free article] [PubMed] [Google Scholar]
  128. Raspelli S, Carelli L, Morganti F, Poletti B, Corra B, Silani V, Riva G. Implementation of the multiple errands test in a NeuroVR-supermarket: a possible approach. Studies in Health Technology and Informatics. 2010;154(1):115–119. [PubMed] [Google Scholar]
  129. Raspelli S, Pallavicini F, Carelli L, Morganti F, Poletti B, Corra B, Silani V, et al. Validation of a Neuro Virtual Reality-based version of the Multiple Errands Test for the assessment of executive functions. Studies in Health Technology and Informatics. 2011;167(1):92–97. [PubMed] [Google Scholar]
  130. Reeves DL, Winter KP, Bleiberg J, Kane RL. ANAM genogram: historical perspectives, description, and current endeavors. Archives of clinical neuropsychology: the official journal of the National Academy of Neuropsychologists. 2007;22(Suppl 1):S15–37. doi: 10.1016/j.acn.2006.10.013. [DOI] [PubMed] [Google Scholar]
  131. Retzlaff PD, Gibertini M. Neuropsychometric issues and problems. In: Vanderploeg RD, editor. Clinician’s guide to neuropsychological assessment. 2. Mahwah, NJ: Lawrence Erlbaum; 2000. pp. 277–299. [Google Scholar]
  132. Riva G. Applications of virtual environments in medicine. Methods of Information in Medicine. 2003;42(5):524–534. [PubMed] [Google Scholar]
  133. Rizzo AA, Buckwalter JG. Virtual reality and cognitive assessment and rehabilitation: the state of the art. Studies in Health Technology and Informatics. 1997;44(1):123–145. [PubMed] [Google Scholar]
  134. Rizzo AA, Bowerly T, Buckwalter JG, Klimchuk D, Mitura R, Parsons TD. A virtual reality scenario for all seasons: the virtual classroom. CNS Spectrums. 2006;11(1):35–44. doi: 10.1017/s1092852900024196. [DOI] [PubMed] [Google Scholar]
  135. Rizzo M, McGehee DV, Dawson JD, Anderson SN. Simulated car crashes at intersections in drivers with Alzheimer disease. Alzheimer Disease and Associated Disorders. 2001;15(1):10–20. doi: 10.1097/00002093-200101000-00002. [DOI] [PubMed] [Google Scholar]
  136. Robbins TW, James M, Owen AM, Sahakian BJ, Lawrence AD, McInnes L, Rabbitt PM. A study of performance on tests from the CANTAB battery sensitive to frontal lobe dysfunction in a large sample of normal volunteers: implications for theories of executive functioning and cognitive aging. Cambridge Neuropsychological Test Automated Battery. Journal of the International Neuropsychological Society. 1998;4(5):474–490. doi: 10.1017/s1355617798455073. [DOI] [PubMed] [Google Scholar]
  137. Robbins TW, James M, Owen AM, Sahakian BJ, McInnes L, Rabbitt P. Cambridge Neuropsychological Test Automated Battery (CANTAB): a factor analytic study of a large sample of normal elderly volunteers. Dementia. 1994;5(5):266–281. doi: 10.1159/000106735. [DOI] [PubMed] [Google Scholar]
  138. Rosser A, Hodges JR. Initial letter and semantic category fluency in Alzheimer’s disease, Huntington’s disease, and progressive supranuclear palsy. Journal of Neurology, Neurosurgery & Psychiatry. 1994;57(11):1389–1394. doi: 10.1136/jnnp.57.11.1389. [DOI] [PMC free article] [PubMed] [Google Scholar]
  139. Sahakian BJ, Downes JJ, Eagger S, Everden JL, Levy R, Philpot MP, Roberts AC, et al. Sparing of attentional relative to mnemonic function in a subgroup of patients with dementia of the Alzheimer type. Neuropsychologia. 1990;28(11):1197–1213. doi: 10.1016/0028-3932(90)90055-s. [DOI] [PubMed] [Google Scholar]
  140. Sahakian BJ, Owen AM. Computerized assessment in neuropsychiatry using CANTAB: discussion paper. Journal of the Royal Society of Medicine. 1992;85(1):399–402. [PMC free article] [PubMed] [Google Scholar]
  141. Sahgal A, Sahakian BJ, Robbins TW, Wray CJ. Detection of visual memory and learning deficits in Alzheimer’s disease using the Cambridge Neuropsychological Test Automated Battery. Dementia. 1991;2(3):150–158. [Google Scholar]
  142. Salnaitis CL, Baker CA, Holland J, Welsh M. Differentiating Tower of Hanoi Performance: Interactive Effects of Psychopathic Tendencies, Impulsive Response Styles, and Modality. Applied Neuropsychology. 2011;18(1):37–46. doi: 10.1080/09084282.2010.523381. [DOI] [PubMed] [Google Scholar]
  143. Schatz P, Browndyke J. Applications of computer-based neuropsychological assessment. Journal of Head Trauma Rehabilitation. 2002;17:395–410. doi: 10.1097/00001199-200210000-00003. [DOI] [PubMed] [Google Scholar]
  144. Schatz P, Pardini JE, Lovell MR, Collins MW, Podell K. Sensitivity and specificity of the ImPACT Test Battery for concussion in athletes. Archives of Clinical Neuropsychology. 2006;21(1):91–99. doi: 10.1016/j.acn.2005.08.001. [DOI] [PubMed] [Google Scholar]
  145. Schmitter-Edgecombe M, Seelye A, Cook DJ. Technologies for health assessment, promotion and assistance: Focus on gerontechnology. In: Randolph JJ, editor. Positive Neuropsychology: An Evidence-Based Perspective on Promoting Cognitive Health. New York, NY: Springer Science and Business Media, LLC; 2013. [Google Scholar]
  146. Schultheis MT, Rebimbas J, Mourant R, Millis SR. Examining the usability of a virtual reality driving simulator. Assistive Technology. 2007;19(1):1–8. doi: 10.1080/10400435.2007.10131860. [DOI] [PubMed] [Google Scholar]
  147. Schultheis MT, Simone LK, Roseman E, Nead R, Rebimbas J, Mourant R. Stopping behavior in a VR driving simulator: a new clinical measure for the assessment of driving. Conference proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2006;1(1):4921–4924. doi: 10.1109/IEMBS.2006.260389. [DOI] [PubMed] [Google Scholar]
  148. Schultheis MT, Weisser V, Manning K, Blasco A, Ang J. Driving behaviors among community-dwelling persons with multiple sclerosis. Archives of Physical Medicine and Rehabilitation. 2009;90(6):975–981. doi: 10.1016/j.apmr.2008.12.017. [DOI] [PubMed] [Google Scholar]
  149. Schweiger A, Abramovitch A, Doniger GM, Simon ES. A clinical construct validity study of a novel computerized battery for the diagnosis of ADHD in young adults. Journal of Clinical and Experimental Neuropsychology. 2007;29(1):100–111. doi: 10.1080/13803390500519738. [DOI] [PubMed] [Google Scholar]
  150. Seelye AM, Smith A, Schmitter-Edgecombe M, Cook DJ. Cueing technologies for assisting persons with mild cognitive impairment in IADL completion in an experimenter-assisted smart environment. Presented at the 30th annual meeting of the National Academy of Neuropsychology; Vancouver, BC. 2010. [Google Scholar]
  151. Seelye AM, Schmitter-Edgecombe M, Das B, Cook DJ. Application of cognitive rehabilitative theory to the development of smart prompting technologies. IEEE Reviews on Biomedical Engineering. 2012;5:29–44. doi: 10.1109/RBME.2012.2196691. [DOI] [PMC free article] [PubMed] [Google Scholar]
  152. Shallice T, Burgess PW. Deficits in strategy application following frontal lobe damage in man. Brain. 1991;114(2):727–741. doi: 10.1093/brain/114.2.727. [DOI] [PubMed] [Google Scholar]
  153. Shiffman S, Stone AA, Hufford MR. Ecological Momentary Assessment. Annual Review of Clinical Psychology. 2008;4(1):1–32. doi: 10.1146/annurev.clinpsy.3.022806.091415. [DOI] [PubMed] [Google Scholar]
  154. Singla G, Cook DJ, Schmitter-Edgecombe M. Recognizing independent and joint activities among multiple residents in smart environments. Journal of Ambient Intelligence and Humanized Computing. 2010;1(1):57–63. doi: 10.1007/s12652-009-0007-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  155. Smyth JM, Stone AA. Ecological momentary assessment research in behavioral medicine. Journal of Happiness Studies. 2003;4:35–52. [Google Scholar]
  156. Stanley KG, Osgood ND. The potential of sensor-based monitoring as a tool for health care, health promotion, a research. Annals of Family Medicine. 2011;9(4):296–298. doi: 10.1370/afm.1292. [DOI] [PMC free article] [PubMed] [Google Scholar]
  157. Steinmetz J-P, Brunner M, Loarer E, Houssemand C. Incomplete psychometric equivalence of scores obtained on the manual and the computer version of the Wisconsin Card Sorting Test? Psychological Assessment. 2010;22(1):199–202. doi: 10.1037/a0017661. [DOI] [PubMed] [Google Scholar]
  158. Stone A, Shiffman S. Ecological Momentary Assessment (EMA) in behavioral medicine. Annals of Behavioral Medicine. 1994;16:199–202. [Google Scholar]
  159. Suchy Y, Kraybill ML, Franchow E. Instrumental activities of daily living among community-dwelling older adults: discrepancies between self-report and performance are mediated by cognitive reserve. Journal of Clinical and Experimental Neuropsychology. 2011;33(1):92–100. doi: 10.1080/13803395.2010.493148. [DOI] [PubMed] [Google Scholar]
  160. Suchy Y, Williams PG, Kraybill ML, Franchow E, Butner J. Instrumental activities of daily living among community-dwelling older adults: personality associations with self-report, performance, and awareness of functional difficulties. The Journals of Gerontology. Series B, Psychological Sciences and Social Sciences. 2010;65(5):542–550. doi: 10.1093/geronb/gbq037. [DOI] [PubMed] [Google Scholar]
  161. te Winkel-Witlox ACM, Post MWM, Visser-Meily JMA, Lindeman E. Efficient screening of cognitive dysfunction in stroke patients: comparison between the CAMCOG and the R-CAMCOG, Mini Mental State Examination and Functional Independence Measure-cognition score. Disability and Rehabilitation. 2008;30(18):1386–1391. doi: 10.1080/09638280701623000. [DOI] [PubMed] [Google Scholar]
  162. Tien AY, Spevack TV, Jones DW, Pearlson GD, Schlaepfer TE, Strauss ME. Computerized Wisconsin Card Sorting Test: comparison with manual administration. The Kaohsiung Journal of Medical Sciences. 1996;12(8):479–485. [PubMed] [Google Scholar]
  163. Tun PA, Lachman ME. The association between computer use and cognition across adulthood: Use it so you won’t lose it? Psychology and Aging. 2010;25(3):560–568. doi: 10.1037/a0019543. [DOI] [PMC free article] [PubMed] [Google Scholar]
  164. Vincent AS, Roebuck-Spencer T, Gilliland K, Schlegel R. Automated Neuropsychological Assessment Metrics (v4) Traumatic Brain Injury Battery: military normative data. Military medicine. 2012;177(3):256–269. doi: 10.7205/milmed-d-11-00289. [DOI] [PubMed] [Google Scholar]
  165. Wang Y, Mehler B, Reimer B, Lammers V, D’Ambrosio LA, Coughlin JF. The validity of driving simulation for assessing differences between in-vehicle informational interfaces: A comparison with field testing. Ergonomics. 2010;53(3):404–420. doi: 10.1080/00140130903464358. [DOI] [PubMed] [Google Scholar]
  166. Wald J, Liu L, Hirsekorn L, Taylar S. The use of virtual reality in the assessment of driving performance in persons with brain injury. Studies in Health Technology and Informatics. 2000;70(1):365–367. [PubMed] [Google Scholar]
  167. Wild K, Howieson D, Webbe F, Seelye A, Kaye J. The status of computerized cognitive testing in aging: A systematic review. Alzheimer’s & Dementia. 2008;4(6):428–437. doi: 10.1016/j.jalz.2008.07.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  168. Wild KV, Mattek NC, Maxwell SA, Dodge HH, Jimison HB, Kaye JA. Computer-related self-efficacy and anxiety in older adults with and without mild cognitive impairment. Alzheimer’s & dementia: the journal of the Alzheimer’s Association. 2012;8(6):544–552. doi: 10.1016/j.jalz.2011.12.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  169. Williams DJ, Noyes JM. Effect of experience and mode of presentation on problem solving. Computers in Human Behavior. 2007;23(1):258–274. [Google Scholar]
  170. Wouters H, van Campen J, Appels B, Lindeboom R, Buiter M, de Haan RJ, et al. Does adaptive cognitive testing combine efficiency with precision? Prospective findings. Journal of Alzheimer’s Disease. 2011;25(4):595–603. doi: 10.3233/JAD-2011-101743. [DOI] [PubMed] [Google Scholar]
  171. Yamashita Y, Mukasa A, Anai C, Honda Y, Kunisaki C, Koutaki J, Tada Y, et al. Summer treatment program for children with attention deficit hyperactivity disorder: Japanese experience in 5 years. Brain & Development. 2011;33(3):260–267. doi: 10.1016/j.braindev.2010.09.005. [DOI] [PubMed] [Google Scholar]
  172. Zhang L, Abreu BC, Masel B, Scheibel RS, Christiansen CH, Huddleston N, Ottenbacher KJ. Virtual reality in the assessment of selected cognitive function after brain injury. American Journal of Physical Medicine & Rehabilitation. 2001;80(8):597–604. doi: 10.1097/00002060-200108000-00010. [DOI] [PubMed] [Google Scholar]
  173. Zhang L, Abreu BC, Seale GS, Masel B, Christiansen CH, Ottenbacher KJ. A virtual reality environment for evaluation of a daily living skill in brain injury rehabilitation: reliability and validity. Archives of Physical Medicine and Rehabilitation. 2003;84(8):1118–1124. doi: 10.1016/s0003-9993(03)00203-x. [DOI] [PubMed] [Google Scholar]
