Archives of Clinical Neuropsychology
. 2018 Jan 10;33(8):1040–1045. doi: 10.1093/arclin/acx140

Validity of Teleneuropsychological Assessment in Older Patients with Cognitive Disorders

Hannah E Wadsworth 1, Kaltra Dhima 1, Kyle B Womack 1,2, John Hart Jr 1,2, Myron F Weiner 1, Linda S Hynan 1,3, C Munro Cullum 1,2,4
PMCID: PMC6887729  PMID: 29329363

Abstract

Objective

The feasibility and reliability of neuropsychological assessment at a distance have been demonstrated, but the validity of this testing medium has not been adequately established. The purpose of this study was to determine whether video teleconference administration of neuropsychological measures (teleneuropsychology) can discriminate cognitively impaired from non-impaired groups of older adults. It was predicted that measures administered via video teleconference would distinguish groups and that the magnitude of differences between impaired and non-impaired groups would be similar to that achieved with traditional administration.

Methods

The sample consisted of 197 older subjects, separated into two groups, with and without cognitive impairment. The cognitive impairment group included 78 individuals with clinical diagnoses of mild cognitive impairment or Alzheimer’s disease. All participants completed counterbalanced neuropsychological testing using alternate test forms in both a teleneuropsychology and a traditional face-to-face (FTF) administration condition. Tests were selected based upon their common use in dementia evaluations, brevity, and assessment of multiple cognitive domains. Results from FTF and teleneuropsychology test conditions were compared using individual repeated measures ANCOVA, controlling for age, education, gender, and depression scores.

Results

All ANCOVA models revealed a significant main effect of group and a non-significant interaction between group and administration condition. The main effect of administration condition was non-significant in all models except category fluency.

Conclusions

Results derived from teleneuropsychologically administered tests can distinguish between cognitively impaired and non-impaired individuals in a manner comparable to traditional FTF assessment. This adds to the growing teleneuropsychology literature by supporting the validity of remote assessment in aging populations.

Keywords: Telemedicine, Telehealth, Neuropsychology, Dementia

Introduction

With the aging population growing rapidly and the baby boomer generation reaching age 65, it is estimated that by the year 2050, between 13.8 and 16 million people will be living with Alzheimer’s disease in the United States alone (Alzheimer’s Association, 2016). Therefore, expanding access to high-quality dementia evaluations and care has become a priority for this population. This is particularly important in rural areas, as many individuals in these regions have limited access to specialized care. Without the support of specialized experts in dementia, it has been found that patients with dementia are underdiagnosed in approximately 40% of primary care settings (Chodosh et al., 2004). Along these lines, a survey of rural general practitioners identified limited access to resources such as consultants, community support, and education as potential barriers to diagnosing and treating dementia (Teel, 2004). To help address some of these barriers, the field of telehealth has quickly grown over the past decade. With the addition of video teleconference-based neuropsychological testing (i.e. teleneuropsychology), neuropsychologists are now able to remotely administer select neuropsychological tests (Grosch, Weiner, Hynan, Shore & Cullum, 2015) and assist in remote neuropsychiatric diagnosis (Harrell, Wilkins, Connor & Chodosh, 2014).

The feasibility and reliability of video teleconferencing (VTC) techniques for the remote administration of neuropsychological tests have been established in adults with and without cognitive impairment (Cullum, Hynan, Grosch, Parikh & Weiner, 2014; Cullum, Weiner, Gehrmann & Hynan, 2006; Hildebrand, Chow, Williams, Nelson & Wass, 2004; Jacobsen, Sprenger, Andersson & Drogstan, 2003; Loh et al., 2004; Poon, Hui, Dai, Kwok & Woo, 2005; Vestal, Smith-Olinde, Hicks, Hutton & Hart, 2006). Additionally, Wadsworth et al. (2016) demonstrated the feasibility and reliability of VTC in a large sample of rural American Indians, consistent with previous studies. Despite this growing body of literature, the ability of tests administered via teleneuropsychology to distinguish between healthy and cognitively compromised groups (i.e., construct validity) has not yet been examined. It was hypothesized that neuropsychological test results obtained via VTC and traditional face-to-face (FTF) administration would be similar in distinguishing between individuals with and without cognitive impairment.

Methods

Participants

As part of a larger teleneuropsychology study (Cullum et al., 2014; also see Wadsworth et al., 2016), 197 participants with and without cognitive impairment were recruited through the NIA-funded Alzheimer’s Disease Center (ADC) at the University of Texas Southwestern Medical Center in Dallas as well as its satellite clinic in Talihina, Oklahoma, which provides services for the Choctaw Nation Health Services Authority. Subjects underwent neurodiagnostic evaluations by the same multidisciplinary ADC consensus group according to standard diagnostic criteria. For the purpose of this general diagnostic validity study, participants were categorized into two groups: a cognitively impaired group (n = 78), which included those with mild cognitive impairment (MCI; Petersen, 2004) or Alzheimer’s disease (AD; McKhann et al., 2011), and a healthy control group (n = 119). MCI and AD groups were combined to maximize sample size and to focus upon the aim of the study: to demonstrate the ability of neuropsychological tests administered via VTC to distinguish cognitively impaired vs. non-impaired groups. The cognitively impaired group was 46.2% female, 61.5% Caucasian, 29.5% American Indian, with a mean MMSE of 25.57 (SD = 3.98), and the control group was 74.8% female, 49.6% Caucasian, 46.2% American Indian, with a mean MMSE of 28.92 (SD = 1.20). Additional demographic characteristics of each group are listed in Table 1.

Table 1.

Demographic characteristics

| Measure | Controls | Cognitively impaired |
|---|---|---|
| Age in years, mean (SD)* | 66.10 (9.21) | 72.71 (8.43) |
| Education in years, mean (SD) | 13.87 (2.46) | 14.56 (3.10) |
| Sex, female (%)* | 74.8% | 46.2% |
| Race, Caucasian, n (%) | 59 (49.6%) | 48 (61.5%) |
| Race, American Indian, n (%) | 55 (46.2%) | 23 (29.5%) |
| MMSE, mean (SD)* | 28.92 (1.20) | 25.57 (3.98) |
| GDS-15, mean (SD)* | 1.09 (1.69) | 1.66 (1.64) |

Note: MMSE = Mini-Mental State Exam; GDS-15 = Geriatric Depression Scale-15 item.

*p < .05.

Materials

The neuropsychological measures selected were chosen because of their common use in dementia evaluations, assessment of multiple cognitive domains, and amenability to VTC administration. Tests included the MMSE (Folstein, Folstein & McHugh, 1975) to assess global cognitive functioning, the Hopkins Verbal Learning Test-Revised (HVLT-R) as a measure of verbal learning and memory (Benedict, Schretlen, Groninger & Brandt, 1998), letter fluency (FAS, Gladsjo et al., 1999), category fluency (Animals, Gladsjo et al., 1999), Boston Naming Test-15 item to assess language (Mack, Freed, Williams & Henderson, 1992), Digit Span forward and backward to capture auditory attention and working memory, Clock Drawing Test to assess visuospatial organization and construction, and the Geriatric Depression Scale-15 item (GDS-15) to assess depressive symptoms (Lesher, 1986). Many of these measures have been found to discriminate between healthy controls and cognitively impaired individuals (Katsumata et al., 2015; Leung, Lee, Lam, Chan, & Wu, 2011; Lortie, Remington, Hoffmann, & Shea, 2012; Nishiwaki et al., 2004; Shapiro, Benedict, Schretlen, & Brandt, 1999; Teng et al., 2013; Weakley, Schmitter-Edgecombe, & Anderson, 2013), and the GDS-15 has been shown to detect depression in older individuals (Lesher, 1986).

Procedure

This research was approved by the Institutional Review Boards of the University of Texas Southwestern Medical Center and the Choctaw Nation Health Services Authority. All subjects provided written informed consent prior to participation. Subjects’ vision and hearing were determined to be adequate for testing and the monitor volume was adjusted to conversation-level prior to VTC testing. All subjects were fluent in English and tested by experienced examiners.

For the VTC administration condition, a Polycom iPower 680 series videoconferencing system was utilized, connected via the internet, as previously described (Cullum et al., 2014). In this condition, participants were seated 30 inches away from a 26-inch LCD monitor with the assistance of a local staff member and were subsequently introduced to the remote examiner on screen. Examiners viewed subjects on a 26-inch LCD monitor with picture-in-picture display and mobile camera, allowing the examiner to view the participant and test materials simultaneously. Staff were available on-site to assist with VTC equipment but were not present in the room during testing, as previous experience demonstrated that subjects did not require special assistance.

The order of test administration condition (i.e., in-person vs. video teleconference) was determined a priori and counterbalanced across consecutive subjects. Test order was fixed and tests were administered according to standard procedures, with alternate test forms used to the extent possible (i.e. alternate words on MMSE and HVLT, different letters and categories on verbal fluency tasks, different number strings on digit span, and different time settings on Clock Drawing). The only modification to procedures necessary for the VTC condition involved the scoring of the participants’ drawings from the MMSE and Clock Drawing Test. In these cases, the subjects were asked to hold up their drawings to the camera in order for the examiner to score them. Scores obtained in the VTC condition were double-checked when test forms were subsequently sent to the local examiner.
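The a priori counterbalancing described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual procedure; the function name and condition labels are assumptions:

```python
# Illustrative sketch of counterbalancing across consecutive subjects:
# even enrollment positions start face-to-face (FTF), odd positions start
# with video teleconference (VTC). Labels are assumed for illustration.

def assign_order(subject_index: int) -> tuple:
    """Return the (first, second) administration condition for a subject."""
    if subject_index % 2 == 0:
        return ("FTF", "VTC")
    return ("VTC", "FTF")

# Consecutive subjects alternate which condition comes first.
orders = [assign_order(i) for i in range(4)]
# orders == [("FTF", "VTC"), ("VTC", "FTF"), ("FTF", "VTC"), ("VTC", "FTF")]
```

Any deterministic alternation of this kind ensures that roughly half of each group completes VTC testing first, balancing practice effects across conditions.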

All but two subjects were retested within approximately 2.5 h of the first administration condition. The remaining two participants, both controls, were retested at 7 and 14 days due to scheduling challenges. Because the test results from these two subjects did not differ significantly from the rest of the control group, their scores were included in analyses.

Statistical Analyses

Statistical analyses were conducted using IBM SPSS Statistics V22 (IBM Corp., Armonk, NY, 2013), with p < .05 as the significance cutoff for the primary measures and p < .15 for covariates. Independent-samples t-tests were performed to compare the control (n = 119) and cognitively impaired (n = 78) groups on the continuous variables (age, education, GDS-15). Chi-square analyses were used to compare sex and race between groups. Repeated-measures ANCOVAs were run for each neuropsychological measure to compare test performance between groups, with administration condition (FTF vs. VTC) as the repetition variable. For each model, all covariates (age, education, gender, GDS-15) were included in the full model and remained in the final model if p < .15, as per convention (Table 2). The statistical assumptions for all ANCOVA models were reviewed, and it was concluded that any minor violations did not influence the final model results. To extend group comparisons, Cohen’s d effect sizes for the condition effect were calculated manually from the adjusted means in both groups (Zakzanis, 2001).
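The manual effect-size computation described above can be illustrated with a short sketch (not the authors' code). Assuming Cohen's d is the absolute difference between the two adjusted condition means divided by their pooled standard deviation, the administration-condition d values reported in Table 2 are reproduced:

```python
import math

def cohens_d(mean_a: float, sd_a: float, mean_b: float, sd_b: float) -> float:
    """Cohen's d for a condition effect: absolute difference between
    adjusted means over the pooled standard deviation of the two conditions."""
    pooled_sd = math.sqrt((sd_a ** 2 + sd_b ** 2) / 2)
    return abs(mean_a - mean_b) / pooled_sd

# Clock Total, impaired group: FTF 5.40 (0.82) vs. VTC 5.33 (0.86)
print(round(cohens_d(5.40, 0.82, 5.33, 0.86), 3))    # 0.083, as in Table 2
# Animals, impaired group: FTF 14.38 (4.89) vs. VTC 13.45 (5.22)
print(round(cohens_d(14.38, 4.89, 13.45, 5.22), 3))  # 0.184, as in Table 2
```

That this pooled-SD form recovers the tabled values suggests it matches the formula the authors applied, though the paper does not state the denominator explicitly.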

Table 2.

ANCOVA: healthy controls vs. cognitively impaired participants in FTF and VTC test conditions after controlling for significant covariates

| Test | FTF unimpaired | FTF impaired | VTC unimpaired | VTC impaired | Condition p | Cohen's d, unimpaired | Cohen's d, impaired |
|---|---|---|---|---|---|---|---|
| Clock Total (a) | 5.79 (.81) | 5.40 (.82) | 5.81 (.85) | 5.33 (.86) | .520 | .024 | .083 |
| Digit Span Forward (b) | 6.46 (1.39) | 5.87 (1.40) | 6.20 (1.33) | 5.88 (1.33) | .276 | .191 | .007 |
| Digit Span Backward (c) | 4.91 (1.20) | 4.45 (1.22) | 4.76 (1.26) | 4.51 (1.27) | .635 | .123 | .048 |
| BNT-15 (d) | 14.01 (1.85) | 12.22 (1.86) | 13.83 (2.18) | 12.00 (2.19) | .806 | .089 | .109 |
| HVLT-R Total (e) | 25.43 (5.48) | 18.35 (5.60) | 26.02 (5.66) | 19.50 (5.78) | .457 | .106 | .202 |
| HVLT-R Delayed Recall (e) | 8.96 (3.28) | 4.99 (3.35) | 9.44 (2.98) | 4.90 (3.04) | .735 | .153 | .028 |
| FAS (f) | 41.69 (12.07) | 34.19 (12.20) | 40.68 (12.31) | 34.40 (12.44) | .814 | .083 | .017 |
| Animals (e) | 18.46 (4.76) | 14.38 (4.89) | 18.76 (5.07) | 13.45 (5.22) | <.001* | .063 | .184 |

FTF and VTC cells are adjusted means (SD); "Condition p" is the p value for the administration condition main effect.

Note: FTF = Face-to-Face; VTC = Video teleconferencing; BNT-15 = Boston Naming Test-15 item; HVLT-R = Hopkins Verbal Learning Test-Revised. Significant covariates included: (a) Education and Age; (b) Education and GDS-15; (c) Education, Gender, and GDS-15; (d) Education; (e) Education, Gender, and Age; (f) Education and Gender.

*p < .001.

Results

Independent t-tests and chi-square analyses revealed significant differences between groups (control n = 119; cognitively impaired n = 78) for age, sex, and GDS-15 (Table 1). Within each ANCOVA model, participants with missing data were excluded, with final sample sizes ranging from 107 to 119 in the control group and 69 to 77 in the cognitively impaired group. As predicted, within each model, after controlling for significant covariates, the main effect of group was significant and the interaction between group and administration condition was non-significant. In addition, the main effect of administration condition was non-significant, except for a slight but statistically significant difference in category fluency (Table 2). Likewise, effect sizes for administration condition were small in both groups for all measures, including category fluency (d range: .007–.202; Cohen, 1988).

Discussion

A growing body of literature has demonstrated that VTC-based administration of select neuropsychological tests is feasible and reliable, producing results very similar to traditional in-person evaluations on a variety of standard measures (Cullum et al., 2014; Hildebrand et al., 2004; Jacobsen et al., 2003; Loh et al., 2004; Poon et al., 2005; Vestal et al., 2006) and in diverse populations (Wadsworth et al., 2016). The current findings expand upon existing feasibility and reliability studies and support the construct validity of teleneuropsychological assessment by demonstrating that neuropsychological tests administered via VTC are able to distinguish cognitively impaired from non-impaired individuals, similar to results from standard in-person assessment. Specifically, it was found that regardless of administration condition, tests accurately distinguished between those with and without cognitive impairment, thereby establishing the validity of the procedure in these populations. These findings support the notion that teleneuropsychology-based assessment can detect cognitive impairment and is comparable to traditional FTF test administration.

Overall, teleneuropsychological test scores did not differ significantly from FTF scores on the tests examined, with the exception of a small but statistically significant difference on category fluency. Furthermore, all of the resulting effect sizes for both groups were small, ranging from .007 to .202, suggesting that only a very small amount of the variance within the ANCOVA models could be explained by the difference between testing conditions (Cohen, 1988). In regard to category fluency, although there was a statistically significant main effect of test condition, the effect size was small (d = .184 for the impaired group), indicating that approximately 90% of participants performed the same in both test conditions and thus virtually no actual effect of this factor (Zakzanis, 2001). Furthermore, the difference in category fluency between testing conditions was less than one point. A difference of this magnitude carries little clinical meaning, and performances in the two testing conditions should therefore be considered equivalent across all measures examined. Similarly, all of the differences between testing conditions fell within expected test-retest variability (i.e., well within one standard deviation) for each individual measure, including category fluency (Barr, 2003; Bird, Papadopoulou, Ricciardelli, Rossor, & Cipolotti, 2004; Flanagan & Jackson, 1997; Levine, Miller, Becker, Selnes & Cohen, 2004; Strauss, Sherman & Spreen, 2006).

Limitations

Given that the sample consisted of older subjects with and without cognitive impairment (specifically, MCI and AD), these results may not generalize to younger individuals or those with other clinical diagnoses. Similarly, while the current findings support the use of teleneuropsychology in identifying cognitive impairment, comparable to traditional FTF assessment, the ability of VTC-based testing to differentiate between dementia types remains to be shown. Because our cognitively impaired group was generally mildly impaired and contained few subjects with severe deficits (only 7 scored below 20 on the MMSE, with the lowest score being 10), teleneuropsychological techniques should be examined in those with more severe cognitive impairment to confirm that the procedures are similarly effective. Additionally, the demographic homogeneity of this study sample must be noted: the majority of participants were well-educated volunteers recruited through an ADC, which may limit generalizability. Findings may therefore require replication in more heterogeneous community populations. Finally, future research should focus on validating the use of teleneuropsychology to differentiate between dementia types.

Conclusion

A growing literature supports teleneuropsychology as a feasible and reliable way of administering neuropsychological tests, and this study is one of the first to demonstrate the construct validity of teleneuropsychological assessment outcomes in relatively large samples of older individuals with and without cognitive impairment. Specifically, the findings suggest that teleneuropsychological administration of the selected measures in these populations was as effective at distinguishing cognitively impaired from non-impaired groups as traditional in-person administration. Thus, teleneuropsychology shows promise as a means to improve access to neuropsychological services, providing telehealth clinics the opportunity to improve neurodiagnostic services for rural aging and underserved populations.

Acknowledgements

Mr. Carey Fuller assisted with data collection. Special thanks to the participants and to the Choctaw Nation Health Services Authority, which provided assistance and consultation for this project.

Funding

This work was funded in part by National Institute on Aging Grant R01 AG027776 and the Alzheimer’s Disease Center Grant 3P30AG012300-21S1.

Conflict of Interest

None declared.

References

  1. Alzheimer’s Association. (2016). Alzheimer’s disease facts and figures. http://www.alz.org/facts/overview.asp.
  2. Barr W. B. (2003). Neuropsychological testing of high school athletes: Preliminary norms and test-retest indices. Archives of Clinical Neuropsychology, 18, 91–101. 10.1016/S0887-6177(01)00185-8.
  3. Benedict R. H. B., Schretlen D., Groninger L., & Brandt J. (1998). Hopkins Verbal Learning Test-Revised: Normative data and analysis of inter-form and test-retest reliability. The Clinical Neuropsychologist, 12, 43–55.
  4. Bird C. M., Papadopoulou K., Ricciardelli P., Rossor M. N., & Cipolotti L. (2004). Monitoring cognitive changes: Psychometric properties of six cognitive tests. The British Journal of Clinical Psychology, 43, 197–210. 10.1348/014466504323088051.
  5. Chodosh J., Petitti D., Elliott M., Hays R., Crooks V., Ruben D., et al. (2004). Physician recognition of cognitive impairment: Evaluating the need for improvement. Journal of the American Geriatrics Society, 52, 1051–1059.
  6. Cohen J. (1988). The effect size index: d. In Statistical power analysis for the behavioral sciences (2nd ed., pp. 20–26). New Jersey: Lawrence Erlbaum Associates.
  7. Cullum C. M., Hynan L. S., Grosch M., Parikh M., & Weiner M. F. (2014). Teleneuropsychology: Evidence for video teleconference-based neuropsychological assessment. Journal of the International Neuropsychological Society, 20, 1–6. 10.1017/S1355617714000872.
  8. Cullum C. M., Weiner M. F., Gehrmann H. R., & Hynan L. S. (2006). Feasibility of telecognitive assessment in dementia. Assessment, 13, 385–390. 10.1177/1073191106289065.
  9. Flanagan J. L., & Jackson S. T. (1997). Test-retest reliability of three aphasia tests: Performance of non-brain-damaged older adults. Journal of Communication Disorders, 30, 33–43. 10.1016/S0021-9924(96)00039-1.
  10. Folstein M. F., Folstein S. E., & McHugh P. R. (1975). “Mini-Mental State”: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198. 10.1016/0022-3956(75)90026-6.
  11. Gladsjo J. A., Schuman C. C., Evans J. D., Peavy G. M., Miller S. W., & Heaton R. K. (1999). Norms for letter and category fluency: Demographic corrections for age, education, and ethnicity. Assessment, 6, 147–178. 10.1177/107319119900600204.
  12. Grosch M. C., Weiner M. F., Hynan L. S., Shore J., & Cullum C. M. (2015). Video teleconference-based neurocognitive screening in geropsychiatry. Psychiatry Research, 225, 734–735. 10.1016/j.psychres.2014.12.040.
  13. Harrell K., Wilkins S., Connor M., & Chodosh J. (2014). Telemedicine and the evaluation of cognitive impairment: The additive value of neuropsychological assessment. Journal of the American Medical Directors Association, 15, 600–606. 10.1016/j.jamda.2014.04.015.
  14. Hildebrand R., Chow H., Williams C., Nelson M., & Wass P. (2004). Feasibility of neuropsychological testing of older adults via videoconferencing: Implications for assessing the capacity for independent living. Journal of Telemedicine and Telecare, 10, 130–134. 10.1258/135763304323070751.
  15. Jacobsen S. E., Sprenger T., Andersson S., & Drogstan J. (2003). Neuropsychological assessment and telemedicine: A preliminary study examining the reliability of neuropsychology services performed via telecommunication. Journal of the International Neuropsychological Society, 9, 472–478. 10.1017/S1355617703930128.
  16. Katsumata Y., Mathews M., Abner E. L., Jicha G. A., Caban-Holt A., Smith C. D., et al. (2015). Assessing the discriminant ability, reliability, and comparability of multiple short forms of the Boston Naming Test in an Alzheimer’s disease center cohort. Dementia and Geriatric Cognitive Disorders, 39, 215–227. 10.1159/000370108.
  17. Lesher E. L. (1986). Validation of the Geriatric Depression Scale among nursing home residents. Clinical Gerontologist, 4, 21–28. 10.1300/J018v04n04_04.
  18. Leung J. L. M., Lee G. T. H., Lam Y. H., Chan R. C. C., & Wu J. Y. M. (2011). The use of the Digit Span Test in screening for cognitive impairment in acute medical inpatients. International Psychogeriatrics, 23, 1569–1574. 10.1017/S1041610211000792.
  19. Levine A. J., Miller E. N., Becker J. T., Selnes O. A., & Cohen B. A. (2004). Normative data for determining significance of test-retest differences on eight common neuropsychological instruments. The Clinical Neuropsychologist, 18, 373–384. 10.1080/1385404049052420.
  20. Loh P. K., Ramesh P., Maher S., Saligari J., Flicker L., & Goldswain P. (2004). Can patients with dementia be assessed at a distance? The use of telehealth and standardized assessments. Internal Medicine Journal, 34, 239–242. 10.1111/j.1444-0903.2004.00531.x.
  21. Lortie J. J., Remington R., Hoffmann H., & Shea T. B. (2012). Lack of correlation of WAIS Digit Span with Clox 1 and the Dementia Rating Scale in MCI. International Journal of Alzheimer’s Disease, 2012, 1–4. 10.1155/2012/829743.
  22. Mack W. J., Freed D. M., Williams B. W., & Henderson V. W. (1992). Boston Naming Test: Shortened versions for use in Alzheimer’s disease. Journal of Gerontology, 47, 154–158. 10.1093/geronj/47.3.P154.
  23. McKhann G. M., Knopman D. S., Chertkow H., Hyman B. T., Jack C. R. Jr., Kawas C. H., et al. (2011). The diagnosis of dementia due to Alzheimer’s disease: Recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimer’s & Dementia, 7, 263–269. 10.1016/j.jalz.2011.02.005.
  24. Nishiwaki Y., Breeze E., Smeeth L., Bulpitt C. J., Peters R., & Fletcher A. E. (2004). Validity of the Clock-Drawing Test as a screening tool for cognitive impairment in the elderly. American Journal of Epidemiology, 160, 797–807.
  25. Petersen R. C. (2004). Mild cognitive impairment as a diagnostic entity. Journal of Internal Medicine, 256, 183–194. 10.1111/j.1365-2796.2004.01388.x.
  26. Poon P., Hui E., Dai D., Kwok T., & Woo J. (2005). Cognitive intervention for community-dwelling older persons with memory problems: Telemedicine versus face-to-face treatment. International Journal of Geriatric Psychiatry, 20, 285–286. 10.1002/gps.1282.
  27. Shapiro A. M., Benedict R. H. B., Schretlen D., & Brandt J. (1999). Construct and concurrent validity of the Hopkins Verbal Learning Test-Revised. The Clinical Neuropsychologist, 13, 348–358.
  28. Strauss E., Sherman E. M. S., & Spreen O. (2006). A compendium of neuropsychological tests. New York: Oxford University Press.
  29. Teel C. (2004). Rural practitioners’ experiences in dementia diagnosis and treatment. Aging and Mental Health, 8, 422–429. 10.1080/13607860410001725018.
  30. Teng E., Leone-Friedman J., Lee G. J., Woo S., Apostolova L. G., Harrell S., et al. (2013). Similar verbal fluency patterns in amnestic Mild Cognitive Impairment and Alzheimer’s disease. Archives of Clinical Neuropsychology, 28, 400–410. 10.1093/arclin/act039.
  31. Vestal L., Smith-Olinde L., Hicks G., Hutton T., & Hart J. J. (2006). Efficacy of language assessment in Alzheimer’s disease: Comparing in-person examination and telemedicine. Clinical Interventions in Aging, 1, 467–471. 10.2147/ciia.2006.1.4.467.
  32. Wadsworth H. E., Galusha-Glasscock J. M., Womack K. B., Quiceno M., Weiner M. F., Hynan L. S., et al. (2016). Remote neuropsychological assessment in rural American Indians with and without cognitive impairment. Archives of Clinical Neuropsychology, 31, 420–425. 10.1093/arclin/acw030.
  33. Weakley A., Schmitter-Edgecombe M., & Anderson J. (2013). Analysis of verbal fluency ability in amnestic and non-amnestic Mild Cognitive Impairment. Archives of Clinical Neuropsychology, 28, 721–731. 10.1093/arclin/act058.
  34. Zakzanis K. K. (2001). Statistics to tell the truth, the whole truth, and nothing but the truth: Formulae, illustrative numerical examples, and heuristic interpretation of effect size analyses for neuropsychological researchers. Archives of Clinical Neuropsychology, 16, 653–667. 10.1016/S0887-6177(00)00076-7.

Articles from Archives of Clinical Neuropsychology are provided here courtesy of Oxford University Press
