Archives of Clinical Neuropsychology. 2015 Feb 27;30(3):280–291. doi: 10.1093/arclin/acv005

Regression-Based Norms for a Bi-factor Model for Scoring the Brief Test of Adult Cognition by Telephone (BTACT)

Ashita S Gurnani 1, Samantha E John 1, Brandon E Gavett 1,*
PMCID: PMC4635635  PMID: 25724515

Abstract

The current study developed regression-based normative adjustments for a bi-factor model of the Brief Test of Adult Cognition by Telephone (BTACT). Archival data from the Midlife Development in the United States-II Cognitive Project were used to develop eight separate linear regression models that predicted bi-factor BTACT scores, accounting for age, education, gender, and occupation, alone and in various combinations. All regression models provided statistically significant fit to the data. A three-predictor regression model fit best and accounted for 32.8% of the variance in the global bi-factor BTACT score. The fit of the regression models was not improved by gender. Eight different regression models are presented to allow the user flexibility in applying demographic corrections to the bi-factor BTACT scores. Occupation corrections, while not widely used, may provide useful demographic adjustments for adult populations or for individuals whose occupational attainment is not commensurate with their expected educational attainment.

Keywords: Occupational status, Cognitive assessment, Statistical norms, Neuropsychology, Test norms, Statistical regression

Introduction

Screening measures are a relatively inexpensive way of obtaining a quick estimate of one's cognitive functioning in comparison to in-person neuropsychological assessments (Knopman et al., 2010). Telephone cognitive instruments can help mitigate some of the practical challenges encountered with in-person evaluations, thereby increasing the feasibility of performing initial and follow-up evaluations (Knopman et al., 2010). Telephone cognitive instruments have been used to monitor recovery post-discharge from in-patient rehabilitation units (Guerini et al., 2008; Jones, Miller, & Petrella, 2002). Health care reimbursement often dictates early discharge from hospitals (Gillen, Tennen, & McKee, 2007); in such instances, telephone instruments are a cost-effective and efficient way of monitoring recovery. In addition, telephone cognitive measures have been effective in the identification of cognitive impairment when used as screening measures and have utility in discerning the need for more comprehensive in-person neuropsychological testing (e.g., Hill et al., 2005; Lipton et al., 2003).

One commonly used telephone-administered cognitive assessment is the Brief Test of Adult Cognition by Telephone (BTACT; Tun & Lachman, 2006). The BTACT consists of six individual tests that provide a brief (15–20 min) measure of several aspects of cognitive functioning, including verbal episodic memory, working memory span, verbal fluency, inductive reasoning, speed of processing, and task-switching ability (Tun & Lachman, 2006). Many telephone cognitive assessments are designed to detect specific clinical deficits and are useful for distinguishing between clinical syndromes in relatively older populations (Duff, Beglinger, & Adams, 2009). They are less able, however, to identify those at risk of developing specific cognitive disorders in younger cohorts or to detect subtle differences in cognitive functioning within the general population (Lopez & Kuller, 2010). The BTACT differs from other telephone cognitive instruments in that it is appropriate for use in both well-functioning and cognitively impaired individuals (Ryff & Lachman, 2007; Tun & Lachman, 2006). Typically, telephone cognitive instruments are normed only in adults with relatively small sample sizes and limited age ranges; in contrast, the BTACT data were collected from a large and diverse sample, including individuals ranging in age from 25 to 84 (Brim, Ryff, & Kessler, 2004). It also provides a global composite score that is representative of an individual's overall performance, increasing its utility as a quick screening measure for use by clinicians and researchers (Tun & Lachman, 2006).

The BTACT has demonstrated good test–retest and alternate-forms reliability. It has been validated against an in-person cognitive battery that differs from the BTACT with respect to the mode of stimulus presentation and response, length of test, and specific subtests administered (Lachman, Agrigoroaei, Tun, & Weaver, 2013). Recent studies examining the psychometric properties of the BTACT yielded a good model fit with a two-factor solution consisting of episodic memory and executive functioning (Lachman, Agrigoroaei, Murphy, & Tun, 2010; Lachman et al., 2013). Psychosocial and behavioral variables have been found to affect performance on the episodic memory and executive functioning factors of the BTACT in middle-aged and older adults (Agrigoroaei & Lachman, 2011). Engaging in frequent cognitive activity has been found to offset the negative effects of low education on performance on the episodic memory factor of the BTACT (Lachman et al., 2010). Although a two-factor model has produced a suitable fit to the BTACT data, other research has suggested that the individual test scores produced by the BTACT are also modeled well by a bi-factor model (Gavett, Crane, & Dams-O'Connor, 2013). One relative weakness of the two-factor model compared with the bi-factor model is that the standard approach to obtaining the two specific domain scores (i.e., episodic memory and executive functioning) for the BTACT is based in classical test theory (CTT), whereas the approach to obtaining a global cognitive ability estimate with the bi-factor model is based in more modern approaches to test scaling.

CTT can limit the ability of a test such as the BTACT to track changes over time and accurately measure individual differences in cognitive ability. It assumes dependency between item and person statistics, resulting in a composite score that reflects neither item difficulty nor individual ability (Crane et al., 2008; MacDonald & Paunonen, 2002). To facilitate interpretation of the BTACT, Gavett et al. (2013) used confirmatory factor analysis (CFA) with a bi-factor structure to validate a model for the BTACT to measure global cognitive ability. In contrast to CTT, factor scores estimated from this model have interval measurement properties and a linear relationship with respect to the ability being assessed (e.g., Crane et al., 2008; Mungas & Reed, 2000). Unlike the global composite and domain scores produced by conventional BTACT scoring, the bi-factor score includes performance on a task-switching test (the Red/Green test), thereby allowing for the inclusion of a broader range of abilities in the estimate of global cognition. Like the conventional BTACT scores, the bi-factor scores have a mean of 0 and a standard deviation of 1. (For an in-depth summary and depiction of the bi-factor model of the BTACT, see Gavett et al., 2013.)
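
To make the structure of such a model concrete, the following sketch specifies a bi-factor confirmatory factor model in R using the lavaan package. The data frame and indicator names are placeholders, and the assignment of subtests to specific factors is schematic; the model actually used to generate the scores analyzed here is the one specified by Gavett et al. (2013).

# Illustrative bi-factor CFA sketch (lavaan); the data frame btact_data and the
# indicator names are placeholders, and the domain groupings are schematic.
library(lavaan)

bifactor_model <- '
  # General cognitive ability loads on every BTACT indicator
  g   =~ ravlt_imm + ravlt_del + digits_bwd + fluency + number_series +
         backward_count + redgreen_acc
  # Specific (domain) factors, orthogonal to g and to each other
  mem =~ a*ravlt_imm + a*ravlt_del   # equality constraint helps identify a two-indicator factor
  ef  =~ digits_bwd + fluency + number_series + backward_count + redgreen_acc
'

fit <- cfa(bifactor_model, data = btact_data, std.lv = TRUE, orthogonal = TRUE)

# Factor score estimates for the general factor (approximately z-scaled)
g_scores <- lavPredict(fit)[, "g"]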

The bi-factor model for the BTACT estimates global cognitive ability without adjusting for demographic variables. However, demographic variables, alone and in combination, can have a robust effect on test score variance in neuropsychological tests, suggesting that norms accounting for these variables should be considered when interpreting results (Marcopulos, McLain, & Giuliano, 1997). Age, education, gender, ethnicity, and occupation have been found to significantly affect performance on a variety of cognitive and neuropsychological measures in healthy adults (Reynolds, Chastain, Kaufman, & McLean, 1988). Normative data can be useful for accurate identification of the cognitive changes that occur due to normal aging as well as abnormal cognitive changes caused by neurodegenerative disease. Because normative corrections for various demographic variables increase the specificity of classification (O'Connell & Tuokko, 2010), it is best practice to include them when validating new models of existing cognitive measures (Busch, Chelune, & Suchy, 2006).

Though the direct effects of demographic variables such as age, education, race/ethnicity, and gender on neuropsychological test performance have been well studied, a limited number of studies have examined the relationship between occupational status and cognitive test performance. Existing evidence supports a connection between the two, such that occupational attainment may serve as either a proxy variable for another demographic variable (e.g., education level) or reflect the influence of other factors, such as socioeconomic status and diet (Low et al., 2004). Occupational attainment is an easily measured proxy variable for lifetime exposure to cognitive activity that may potentially lead to increased efficiency of brain networks and cognitive strategies that result in a higher probability of effective performance even in the presence of disease pathology (Barulli & Stern, 2013; Stern, 2003; 2012). Accounting for occupational experience when interpreting cognitive test scores may have value, especially because high levels of cognitive reserve can make it difficult to diagnose dementia during the early stages of the disease process when reserve may be masking the effects of pathology (Stern, 2002; 2012).

In deriving normative data for the bi-factor BTACT, we sought to include all relevant variables for group comparison while also allowing for variability in the types of demographic corrections that users of the test can choose to apply. The continuous norming approach used in the current study can overcome challenges that occur with unequal group sizes, such as unstable group means and non-normal distributions within age bands (Busch et al., 2006). Continuous norming uses polynomial regression models to predict standard scores or percentile ranks (Busch et al., 2006). This method utilizes descriptive statistics from the entire normative group to create normalized distributions within each age band. Continuous norms provide better estimates for each age level, increase the stability of group means, and correct inequalities between age bands, while also taking into account practice effects, measurement error, and regression to the mean (Busch et al., 2006).
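
A minimal sketch of the continuous norming idea is shown below in R, using simulated data and illustrative variable names: a polynomial regression fit to the whole normative sample yields an age-conditional expected score against which an observed score can be standardized.

# Minimal continuous-norming sketch; the normative data are simulated purely
# for illustration.
set.seed(1)
norm_sample <- data.frame(age = runif(1000, 30, 85))
norm_sample$score <- 1.8 - 0.03 * norm_sample$age + rnorm(1000, sd = 0.8)

# A polynomial regression over the whole normative sample estimates the
# expected score at any age, rather than relying on discrete age-band means
norm_fit <- lm(score ~ poly(age, 2), data = norm_sample)

# Standardize a hypothetical observed score of -0.25 for a 60-year-old
expected <- predict(norm_fit, newdata = data.frame(age = 60))
z <- (-0.25 - expected) / summary(norm_fit)$sigma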

The purpose of this study was to develop regression-based norms for the bi-factor model of scoring the BTACT. Normative adjustments for age, gender, education, and occupation were explored. The data for the BTACT were obtained from participants in the National Survey of Midlife Development in the United States (MIDUS II): Cognitive Project (Ryff & Lachman, 2007). The MIDUS II data include various occupation classes with relatively large sample sizes in each class. As a result, normative adjustments for occupation level or attainment were also explored. Based on prior literature, we expected demographic variables to be associated with score differences on the BTACT. Thus, we hypothesized that correcting for age, gender, education, and occupation would provide an important context for interpreting the BTACT factor scores for estimating global cognitive ability.

Method

Participants

We obtained archival data from the MIDUS-II Cognitive Project (Ryff & Lachman, 2007). Participants were volunteers selected by random digit dialing as part of a follow-up to a national survey of non-institutionalized adults and were administered the BTACT as part of the MIDUS II study between the years of 2004 and 2006 (Brim et al., 2004). A total of 4,963 participants were part of the original data set. We excluded 1,867 participants who endorsed medical conditions that may have affected performance on the measure (e.g., neurological disorders, head injury, and stroke). Global factor scores on the BTACT were generated for 3,096 participants, which included 1,378 men and 1,718 women. Participants' highest level of education and description of their “main job” were used to code their education and occupation, respectively. Occupation was classified according to the 1990 Alphabetic Index of Industries and Occupations and the Production Coder Manual published by the US Census Bureau.

Materials

The following subtests are components of the BTACT:

  • Rey Auditory Verbal Learning Test (RAVLT; Rey, 1964): The RAVLT is a 15-item word list that uses free recall to measure immediate and delayed episodic memory. The BTACT protocol involves one immediate and one delayed recall trial with a delay interval of 15 min. Participants are given 90 s to recall as many words as they can after the administration of each trial.

  • Digits Backward: The backward digit span test used in the BTACT paradigm is adapted from the WAIS-III (Wechsler, 1997). In this test, participants are asked to orally reproduce a series of digits read to them in the reverse order. Two trials of the same span length are administered with the span length increasing to a maximum of 8 digits.

  • Category Fluency (Animals): This task is used to assess executive functioning by asking participants to verbally generate as many animals as possible in 1 min.

  • Red/Green Test: This is a two-choice response task consisting of three trials that is used to measure reaction time in task switching. In the normal trial, participants are asked to respond with "stop" when they hear the word "red" and with "go" when they hear the word "green." The reverse trial requires that participants respond with "stop" when they hear the word "green" and with "go" when they hear the word "red." The mixed trial alternates unexpectedly between the normal and the reverse phases. Only accuracy data were used for the analysis of performance on the Red/Green test due to the potential for confounding factors when measuring reaction time over the telephone.

  • Number Series: This task measures inductive reasoning by asking the participant to provide, within 15 s, the sixth number in a series of five numbers read aloud. A total of five series representing three levels of difficulty are presented.

  • Backward Counting: This task assesses processing speed by asking participants to count backwards from 100 by ones, as fast as possible, for a total of 30 s.

For more information on administration and scoring the BTACT, refer to Lachman and Tun (2012). The BTACT materials are available at http://www.brandeis.edu/departments/psych/lachman/instruments/index.html.

Procedure and Data Analysis

We obtained archival BTACT data that were gathered through an individually administered telephone evaluation. Data were analyzed using R version 3.0.2 (R Core Team, 2013). Scores generated from each of the BTACT subtests were converted into a global composite score using the bi-factor model (for a more comprehensive discussion, see Gavett et al., 2013). Eight separate linear regression models accounting for age, gender, education, and occupation, alone and in various combinations, were used to predict bi-factor BTACT scores. The residuals from these models were used to develop normative data for the bi-factor BTACT.
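
The following sketch outlines this analysis in R. The data frame and variable names are placeholders; in practice, users should apply the fitted equations and coefficients reported in the Results and Appendix (or the web-based calculator) rather than refitting the models.

# Sketch of the normative modeling described above; the data frame midus and
# its column names are placeholders. Categorical predictors are factors, so
# lm() dummy-codes them; relevel() sets the reference groups used here.
midus$education  <- relevel(factor(midus$education),  ref = "Graduated from high school")
midus$gender     <- relevel(factor(midus$gender),     ref = "Male")
midus$occupation <- relevel(factor(midus$occupation), ref = "Operator, laborer, and military")

models <- list(
  A     = lm(btact_g ~ age,                          data = midus),
  E     = lm(btact_g ~ education,                    data = midus),
  G     = lm(btact_g ~ gender,                       data = midus),
  O     = lm(btact_g ~ occupation,                   data = midus),
  A_E   = lm(btact_g ~ age + education,              data = midus),
  A_O   = lm(btact_g ~ age + occupation,             data = midus),
  E_O   = lm(btact_g ~ education + occupation,       data = midus),
  A_E_O = lm(btact_g ~ age + education + occupation, data = midus)
)

# A demographically corrected z-score is the residual divided by the model's
# standard error of estimate (SEE)
m <- models$A_E_O
z_corrected <- residuals(m) / summary(m)$sigma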

Results

The total sample used in the current analyses consisted of 3,096 participants, including 1,378 men and 1,718 women. The 1,867 excluded participants who endorsed medical conditions were significantly younger (t = 3.28, df = 3,771.46, p < .05), less educated (t = 3.37, df = 3,719.03, p < .05), and scored significantly lower on the BTACT (t = 7.26, df = 1,650.68, p < .05) than individuals who did not endorse medical conditions capable of interfering with cognitive functioning. The sample ranged in age from 32 to 84 years (M = 55.90, SD = 12.20) and was made up of 2,802 white participants (90.5%), 78 black and/or African-American participants (2.52%), 17 Native American participants (0.55%), 13 Asian participants (0.42%), 2 Native Hawaiian participants (0.06%), 104 participants of mixed racial origin (3.36%), 61 participants of other racial origins not described above (1.97%), and 19 participants who reported not knowing their race. Of the 3,096 participants, 80 (2.58%) reported a Hispanic ethnic background. Participants were grouped into 11 different education categories and 9 different major occupation groups, as shown in Table 1. Table 2 shows the frequencies of each education-occupation pairing observed in the sample.

Table 1.

Participant characteristics

Variable Group n MAge SDAge MBTACT SDBTACT
Education
No school—junior high school (0–8 years) 36 63.89 12.08 −1.22 0.76
Some high school (9–12 years, no diploma/no GED) 114 62.10 12.31 −0.85 0.83
GED 38 54.71 12.32 −0.32 0.89
Graduated from high school 790 58.27 12.20 −0.30 0.81
1–2 years of college, no degree yet 524 56.09 12.02 −0.05 0.84
3 or more years of college, no degree yet 117 53.14 12.73 0.20 0.83
Graduated from 2-year college, vocational school, or Associate's degree 236 54.22 12.01 0.06 0.85
Graduated from 4- or 5-year college, or Bachelor's degree 626 53.26 12.09 0.35 0.75
Some graduate school 102 56.20 10.83 0.33 0.84
Master's degree 352 54.61 11.67 0.45 0.75
Ph.D., Ed.D., MD, DDS, LLB, LLD, JD, or other professional degree 157 55.03 10.55 0.46 0.79
Occupation
Operator, laborer, and military 236 57.63 12.03 −0.43 0.89
Executive, administrative, and managerial 665 54.38 11.57 0.24 0.81
Professional specialty 639 54.92 11.81 0.38 0.78
Technician and related support 115 53.97 12.14 0.15 0.91
Sales occupation 294 57.68 12.73 −0.13 0.80
Administrative support, including clerical 494 56.62 12.39 0.02 0.84
Service occupation 291 56.33 12.81 −0.31 0.93
Farming, forestry, and fishing 58 60.21 13.28 −0.20 0.77
Precision production, crafts, and repair 246 55.99 12.06 −0.27 0.84

Notes: n = sample size; MAge = mean age (years); SDAge = standard deviation of age (years); MBTACT = mean bi-factor BTACT factor score; SDBTACT = standard deviation of bi-factor BTACT factor score.

Table 2.

Cross-tabulated frequencies of education and occupation status in the current sample

Education Occupation, n (%)
Labor Exec Prof Tech Sales Admin Service FFF Repair Total
Junior 7 (0.23%) 5 (0.16%) 2 (0.06%) 1 (0.03%) 7 (0.23%) 6 (0.19%) 5 (0.16%) 1 (0.03%) 4 (0.13%) 38 (1.23%)
SomeHigh 9 (0.29%) 1 (0.03%) 1 (0.03%) 1 (0.03%) 7 (0.23%) 3 (0.10%) 6 (0.19%) 2 (0.06%) 3 (0.10%) 33 (1.07%)
GED 30 (0.97%) 14 (0.45%) 1 (0.03%) 2 (0.06%) 9 (0.29%) 8 (0.26%) 25 (0.81%) 1 (0.03%) 21 (0.68%) 111 (3.59%)
High 117 (3.78%) 106 (3.42%) 23 (0.74%) 17 (0.55%) 99 (3.20%) 179 (5.78%) 105 (3.39%) 15 (0.48%) 106 (3.42%) 767 (24.77%)
SomeCol1–2 33 (1.07%) 104 (3.36%) 39 (1.26%) 28 (0.90%) 52 (1.68%) 133 (4.30%) 66 (2.13%) 13 (0.42%) 49 (1.58%) 517 (16.70%)
SomeCol3+ 4 (0.13%) 23 (0.74%) 25 (0.81%) 3 (0.10%) 17 (0.55%) 21 (0.68%) 16 (0.52%) 1 (0.03%) 6 (0.19%) 116 (3.75%)
Assoc 14 (0.45%) 44 (1.42%) 33 (1.07%) 21 (0.68%) 20 (0.65%) 38 (1.23%) 27 (0.87%) 8 (0.26%) 28 (0.90%) 233 (7.53%)
BA 11 (0.36%) 192 (6.20%) 202 (6.52%) 24 (0.78%) 56 (1.81%) 77 (2.49%) 19 (0.61%) 12 (0.39%) 21 (0.68%) 614 (19.83%)
SomeGrad 3 (0.10%) 42 (1.36%) 32 (1.03%) 4 (0.13%) 2 (0.06%) 8 (0.26%) 6 (0.19%) 0 (0.00%) 5 (0.16%) 102 (3.29%)
MA 6 (0.19%) 101 (3.26%) 184 (5.94%) 11 (0.36%) 14 (0.45%) 15 (0.48%) 14 (0.45%) 4 (0.13%) 1 (0.03%) 350 (11.30%)
PhD 2 (0.06%) 33 (1.07%) 96 (3.10%) 3 (0.10%) 10 (0.32%) 4 (0.13%) 2 (0.06%) 1 (0.03%) 2 (0.06%) 153 (4.94%)
Total 236 (7.62%) 665 (21.48%) 638 (20.61%) 115 (3.71%) 293 (9.46%) 492 (15.89%) 291 (9.40%) 58 (1.87%) 246 (7.95%) 3034 (98.00%)

Notes: Occupation: Labor = Operator, Laborer, and Military; Exec = Executive, Administrative, and Managerial; Prof = Professional Specialty; Tech = Technician and Related Support; Sales = Sales Occupation; Admin = Administrative Support, Including Clerical; Service = Service Occupation; FFF = Farming, Forestry, and Fishing; Repair = Precision Production, Crafts, and Repair. Education: Junior = No school—junior high school (0–8 years); SomeHigh = Some high school (9–12 years, no diploma/no GED); High = Graduated from high school; SomeCol1–2 = 1–2 years of college, no degree yet; SomeCol3+ = 3 or more years of college, no degree yet; Assoc = Graduated from 2-year college, vocational school, or Associate's Degree; BA = Graduated from 4- or 5-year college, or Bachelor's Degree; MA = Master's degree; PhD = Ph.D., Ed.D., MD, DDS, LLB, LLD, JD, or other professional degree.

Age, education, gender, and occupation served as the predictor variables for the bi-factor BTACT global composite score, the dependent measure of interest. Age and the bi-factor global cognition score were continuous variables whereas education, gender, and occupation were categorical variables. Dummy coding was applied to the categorical variables, such that high school education; male gender; and the “Operator, Laborer, and Military” occupation were the reference groups for their respective demographic categories. The regression-based algorithms to apply demographic corrections to the bi-factor BTACT scores are listed below. Values for each predictor in these models are listed in the Appendix. Using the regression-based equations converts bi-factor BTACT global composite scores into standardized z-scores. For the convenience of users, a web-based scoring program has been created to calculate the bi-factor BTACT global composite score along with the various combinations of demographic corrections listed below. The calculator can be accessed at https://begavett.shinyapps.io/BTACT.

Age corrections:

p(BTACT_bifactor) = 1.83 − 0.03 × Age
Z_Age = [BTACT_bifactor − p(BTACT_bifactor)] / 0.79 (1)

Education corrections:

p(BTACT_bifactor) = −0.30 + Appendix A
Z_Edu = [BTACT_bifactor − p(BTACT_bifactor)] / 0.80 (2)

Gender corrections

p(BTACT_bifactor) = −0.02 + Appendix B
Z_Gender = [BTACT_bifactor − p(BTACT_bifactor)] / 0.88 (3)

Occupation corrections

p(BTACT_bifactor) = −0.43 + Appendix C
Z_Occ = [BTACT_bifactor − p(BTACT_bifactor)] / 0.83 (4)

Age and education corrections

p(BTACT_bifactor) = 1.32 − 0.03 × Age + Appendix D
Z_A+E = [BTACT_bifactor − p(BTACT_bifactor)] / 0.73 (5)

Age and occupation corrections

p(BTACT_bifactor) = 1.32 − 0.03 × Age + Appendix E
Z_A+O = [BTACT_bifactor − p(BTACT_bifactor)] / 0.75 (6)

Education and occupation corrections

p(BTACT_bifactor) = −0.45 + Appendix F (Education) + Appendix F (Occupation)
Z_E+O = [BTACT_bifactor − p(BTACT_bifactor)] / 0.79 (7)

Age, education, and occupation corrections

p(BTACT_bifactor) = 1.18 − 0.03 × Age + Appendix G (Education) + Appendix G (Occupation)
Z_A+E+O = [BTACT_bifactor − p(BTACT_bifactor)] / 0.72 (8)
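
To illustrate how these equations and the appendix coefficients are applied, the sketch below implements the occupation-only correction of Equation (4) in R, with coefficients transcribed from Appendix C and the standard error of estimate from Table 3; the web-based calculator noted above remains the authoritative scoring tool.

# Applying Equation (4): occupation-corrected z-score for a bi-factor BTACT score.
# Coefficients are transcribed from Appendix C; 0.83 is the SEE from Table 3.
occ_coef <- c("Operator, laborer, and military"            = 0.00,
              "Service occupation"                         = 0.12,
              "Precision production, crafts, and repair"   = 0.16,
              "Farming, forestry, and fishing"             = 0.23,
              "Sales occupation"                           = 0.31,
              "Administrative support, including clerical" = 0.45,
              "Technician and related support"             = 0.58,
              "Executive, administrative, and managerial"  = 0.67,
              "Professional specialty"                     = 0.82)

btact_z_occupation <- function(btact_g, occupation) {
  predicted <- -0.43 + occ_coef[[occupation]]  # p(BTACT_bifactor) from Equation (4)
  (btact_g - predicted) / 0.83                 # residual divided by the SEE
}

# Example: a bi-factor score of 0 for an examinee in a professional specialty occupation
btact_z_occupation(0, "Professional specialty")  # (0 - 0.39) / 0.83 = approximately -0.47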

Overall Results

The regression model with age, education, and occupation accounted for the greatest proportion of variance when compared with other combinations of predictor variables. The model explained 32.8% of the variance in the bi-factor BTACT global cognition score, F(19, 3014) = 77.49, p < .001. Education and occupation were largely redundant, as the ΔR2 value for occupation added to education was small (i.e., 0.01). When occupation was not considered, 31.5% of the variance in bi-factor BTACT scores was accounted for by a combination of age and education. When education was not considered, a model including age and occupation explained 27.4% of the variance in bi-factor global cognition scores. Gender provided virtually no incremental improvement in model fit (see Table 3), which suggests that bi-factor BTACT scores are not influenced by gender; therefore, normative corrections for gender are likely to be unnecessary.

Table 3.

Eight linear regression models predicting bi-factor BTACT scores

Model Multiple R2 Adjusted R2 SEE
A 0.20 0.20 0.79
E 0.17 0.17 0.80
G 0.00 0.00 0.88
O 0.10 0.10 0.83
A + E 0.32 0.31 0.73
A + O 0.27 0.27 0.75
E + O 0.18 0.18 0.79
A + E + O 0.33 0.32 0.72

Notes: A = age, E = education, G = gender, O = occupation. SEE = standard error of estimate.
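
Assuming the model list from the sketch in the Procedure and Data Analysis section, the incremental comparisons summarized in Table 3 and in the preceding paragraph can be reproduced in a few lines of R.

# Incremental fit of occupation over age + education (objects from the earlier sketch)
summary(models$A_E)$r.squared    # approximately .315
summary(models$A_E_O)$r.squared  # approximately .328
anova(models$A_E, models$A_E_O)  # F-test for the increment attributable to occupation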

Discussion

The aim of the current study was to develop regression-based norms accounting for various combinations of age, education, gender, and occupation for scoring the bi-factor BTACT. All eight regression models were statistically significant, but the practical significance of gender corrections was negligible. The model accounting for age, education, and occupation explained 32.8% of the variance in the overall bi-factor BTACT score, whereas gender did not have a meaningful effect on BTACT scores. Thus, adjusting bi-factor BTACT scores to account for age, education, or occupation may facilitate test score interpretation.

The BTACT is a telephone-administered measure that offers a global composite score derived from measures of immediate and delayed episodic memory, working memory, semantic fluency, inductive reasoning, processing speed, and mental flexibility (Tun & Lachman, 2006). Telephone-administered cognitive instruments can minimize examinee burden and provide a less costly alternative to in-person neuropsychological testing. In fact, the auditory tests of the BTACT were recently validated against an in-person cognitive battery, with results showing the two batteries to be highly correlated, suggesting that the results are not affected by mode of administration or length of test (Lachman et al., 2013). However, many telephone batteries rely predominantly on memory assessment for detection of dementia, limiting their usefulness for characterizing impairment and making differential diagnoses (Crooks, Clark, Petitti, Chui, & Chiu, 2005). In contrast, the bi-factor BTACT includes measures of set shifting, response inhibition, and attention, in addition to memory, adding to the comprehensiveness of the global cognitive score (Gavett et al., 2013). The bi-factor BTACT scores possess linear measurement properties, which is an advantage of that approach over the traditional method for scaling the BTACT. The scores produced by the bi-factor BTACT model represent a person's cognitive status, scaled as a z-score (M = 0, SD = 1). In many situations (e.g., identifying unsafe drivers in an older adult sample), it may be preferable to interpret this score without consideration of the examinee's standing on demographic variables known to affect cognition. In other circumstances, it may be desirable to contextualize an examinee's global cognitive ability estimate based on aspects of his or her demographics (e.g., identifying subtle processing speed changes in a highly educated individual). The present study adds to the utility of the bi-factor BTACT by providing normative adjustments, which may be desirable in clinical or research use.

Gender, age, education, and ethnicity have been shown to significantly impact performance on word-list learning tests as well as frequently used neuropsychological test batteries such as the Wechsler Adult Intelligence Scale (WAIS-III) and Wechsler Memory Scale (WMS-III; Lange, Chelune, Taylor, Woodward, & Heaton, 2006; Norman, Evans, Miller, & Heaton, 2000). The WAIS-III, WAIS-IV, WMS-III, and WMS-IV index scores are differentially affected by various demographic variables, which impacts the sensitivity of diagnostic classification (Drozdick, Holdnack, & Hilsabeck, 2011; Lange et al., 2006; Lichtenberger & Kaufman, 2013; Weiss, Saklofske, Coalson, & Raiford, 2010). Indices of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) are also affected by age and education, and regression-based normative corrections are needed for appropriate adjustment of these scores (Gontkovsky, Mold, & Beatty, 2002). Even though demographic variables affect neuropsychological test performance, it is important to note that demographic corrections should only be used when an individual's demographic characteristics are represented in the normative sample. With respect to diagnostic accuracy, it is important to bear in mind that demographically corrected scores tend to be more specific than raw scores, whereas raw scores tend to maximize sensitivity when compared with demographically corrected scores (O'Connell & Tuokko, 2010).

A substantial body of research has examined the level of agreement between in-person cognitive testing and telephone instruments (Rapp et al., 2012; Wilson et al., 2010). Although mode of test administration has shown little effect on telephone cognitive test performance, education, premorbid IQ, and age all influence cognitive test scores (Crooks et al., 2005; Rapp et al., 2012; Wilson et al., 2010). The present study provides normative adjustments for age, gender, education, and occupation for the bi-factor BTACT global composite score. The model comprising three demographic variables (age, education, and occupation) accounted for the largest amount of variance (33%) in BTACT scores. The amount of variance accounted for by this model is comparable with other regression-based normative systems (e.g., the Boston Naming Test; Fastenau, Denburg, & Mauer, 1998). Gender did not provide any meaningful improvement in the fit of the linear models to the data; therefore, it is not necessary for users of the bi-factor BTACT to apply demographic corrections for gender. Although education and occupation are closely linked, occupation-corrected normative data may be advantageous in situations where educational attainment is unclear or when occupational and educational attainment are markedly different. In addition, because education and occupational attainment have been shown to contribute collectively and independently to greater cognitive reserve (Evans et al., 1993; Stern et al., 1994, 1995; Tucker & Stern, 2011), adjusting for occupational attainment in addition to education allows for adjustments that comprehensively cover both areas of attained reserve: early-life learning through formal education and later-life learning through occupational gains (Stern, 2002). It remains to be seen whether correcting bi-factor BTACT scores for education and occupation together allows for better characterization of cognitive changes in older adults than separate education or occupation corrections.

Consistent with earlier studies, individuals with at least a few years of college education tended to obtain higher scores on the BTACT than those with only a year or less of college education (Agrigoroaei & Lachman, 2011). In addition, our results indicate that service and skilled-trade occupations are associated with lower cognitive ability than professional and executive positions (Jorm et al., 1998). One hypothesized reason for this finding is that service and skilled-trade occupations primarily involve manual labor, with a lower psychological and cognitive load. On the other hand, professional and executive roles tend to have greater cognitive demands because they draw more heavily on literacy and intellectual skills (Jorm et al., 1998). Though occupational attainment may partially reflect one's age and education, studies have found that, independent of age and education, occupation is a significant predictor of performance on cognitive screening measures such as the MMSE (Mini-Mental State Examination; Alvarado, Zunzunegui, Del Ser, & Beland, 2002; Frisoni, Rozzini, Bianchetti, & Trabucchi, 1993). In addition, education corrections can be imprecise or misleading, since variability exists in intelligence, quality of education, and literacy within a given level of educational attainment (Fine, Delis, & Holdnack, 2011; Manly, Touradji, Tang, & Stern, 2003; Sisco et al., 2014). As such, in some situations, it may be advantageous for neuropsychologists to have a choice between applying education corrections, occupation corrections, or both. For example, one person in the current sample was a 45-year-old individual with an educational classification of Ph.D. or related degree whose employment classification was Operator, Laborer, and Military. This person's uncorrected global cognitive ability estimated by the bi-factor BTACT was z = −0.009. Correcting this score based on age and educational attainment resulted in a z-score of −0.889. On the other hand, correcting this score based on age and occupation resulted in a normed z-score of 0.028. These two demographic corrections produce discrepant values, which highlights the potential benefits of both education-based and occupation-based norms in some situations. Of course, it is up to the individual clinician using this test to make the appropriate clinical judgment about which demographic corrections are most suitable for a given examinee.
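
The two corrected values in this example follow directly from Equations (5) and (6) and Appendices D and E; the arithmetic is sketched below using those coefficients.

# Reproducing the example: age 45, Ph.D.-level education, occupation coded as
# Operator, Laborer, and Military; uncorrected bi-factor score z = -0.009
btact <- -0.009
age   <- 45

# Equation (5), age + education; Appendix D coefficient for a doctoral or professional degree = 0.67
p_age_edu <- 1.32 - 0.03 * age + 0.67
z_age_edu <- (btact - p_age_edu) / 0.73  # = -0.889

# Equation (6), age + occupation; Appendix E coefficient for the reference occupation = 0
p_age_occ <- 1.32 - 0.03 * age + 0
z_age_occ <- (btact - p_age_occ) / 0.75  # = 0.028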

This study has several limitations. The demographic corrections provided here are limited in that the data lack racial heterogeneity, thereby precluding the use of race-based normative corrections for the bi-factor BTACT. Similarly, participants were assigned to traditional gender categories (i.e., women and men) based on self-report, which did not capture other gender identities, if present in the current sample. The study sample may have included participants with undiagnosed mild cognitive impairment or early dementia, since exclusion of individuals with medical or neurological conditions was based solely on self-report. Though occupation corrections are a novel feature of this study, this variable does not explain much variance beyond what is explained by education. Occupation corrections, therefore, may be most useful in situations where education and occupation levels are discrepant or when education is difficult to determine. Further, the clinical utility of occupation corrections may be limited by the potential for ambiguity associated with coding an individual's occupation. It is also important to note that the BTACT is limited in its ability to measure global cognition due to the lack of a visual component in testing. Telephone administration may also compromise standardization, as there is no way to ensure that participants are in a distraction-free area and working independently of external tools during the administration of the BTACT. Finally, the BTACT does not assess mood, which may impact performance.

Our normative adjustments provide a useful resource for assessment of global cognitive functioning of adults via telephone. The use of a large sample that includes participants of varying ages from diverse geographical locations and occupational backgrounds increases the generalizability of these normative adjustments. The ability to apply occupation-based norms makes the BTACT unique as a screening measure of global cognition. The demographic corrections provided in this study offer clinicians the opportunity to select the combination of normative adjustments they deem appropriate for each examinee. Though efforts are underway to validate the BTACT in different samples, one of the next steps should be to establish the clinical validity of the norms presented in this study for the bi-factor BTACT.

Funding

This work received no direct source of financial support.

Conflict of interest

None declared.

Acknowledgements

The data from this manuscript were presented at The National Academy of Neuropsychology conference in 2013.

Appendix

Table A1.

Regression Coefficients for Equation (2) (Education)

Education b
No School—Junior High School (0–8 years) −0.92
Some High School (9–12 years, no diploma/no GED) −0.55
GED −0.02
Graduated from high school 0
1–2 years of college, no degree yet 0.25
3 or more years of college, no degree yet 0.50
Graduated from 2-year college, vocational school, or Associate's Degree 0.36
Graduated from 4- or 5-year college, or Bachelor's Degree 0.65
Some graduate school 0.63
Master's degree 0.75
Ph.D., Ed.D., MD, DDS, LLB, LLD, JD, or other professional degree 0.76

Table A2.

Regression Coefficients for Equation (3) (Gender)

Gender b
Male 0
Female 0.10

Table A3.

Regression Coefficients for Equation (4) (Occupation)

Occupation b
Operator, laborer, and military 0
Service occupation 0.12
Precision production, crafts, and repair 0.16
Farming, forestry, and fishing 0.23
Sales occupation 0.31
Administrative support, including clerical 0.45
Technician and related support 0.58
Executive, administrative, and managerial 0.67
Professional specialty 0.82

Table A4.

Regression Coefficients for Equation (5) (Age + Education)

Education b
No school—junior high school (0–8 years) −0.76
Some high school (9–12 years, no diploma/no GED) −0.44
GED −0.12
Graduated from high school 0
1–2 years of college, no degree yet 0.19
3 or more years of college, no degree yet 0.36
Graduated from 2-year college, vocational school, or Associate's Degree 0.24
Graduated from 4- or 5-year college, or Bachelor's degree 0.52
Some graduate school 0.57
Master's Degree 0.65
Ph.D., Ed.D., MD, DDS, LLB, LLD, JD, or other professional degree 0.67

Table A5.

Regression Coefficients for Equation (6) (Age + Occupation)

Occupation b
Operator, laborer, and military 0
Service occupation 0.08
Precision production, crafts, and repair 0.11
Sales occupation 0.31
Farming, forestry, and fishing 0.31
Administrative support, including clerical 0.42
Technician and related support 0.47
Executive, administrative, and managerial 0.58
Professional specialty 0.74

Table A6.

Regression Coefficients for Equation (7) (Education + Occupation)

Education b
No School—junior high school (0–8 years) −0.83
Some high school (9–12 years, no diploma/no GED) −0.48
GED −0.02
Graduated from high school 0
1–2 years of college, no degree yet 0.20
3 or more years of college, no degree yet 0.43
Graduated from 2-year college, vocational school, or Associate's Degree 0.29
Graduated from 4- or 5-year college, or Bachelor's degree 0.53
Some graduate school 0.50
Master's degree 0.61
Ph.D., Ed.D., MD, DDS, LLB, LLD, JD, or other professional degree 0.62
Occupation b
Operator, laborer, and military 0
Service occupation 0.02
Farming, forestry, and fishing 0.03
Precision production, crafts, and repair 0.08
Sales occupation 0.12
Administrative support, including clerical 0.27
Technician and related support 0.29
Executive, administrative, and managerial 0.32
Professional specialty 0.32

Table A7.

Regression Coefficients for Equation (8) (Age + Education + Occupation)

Education b
No school—junior high school (0–8 years) −0.69
Some high school (9–12 years, no diploma/no GED) −0.38
GED −0.12
Graduated from high school 0
1–2 years of college, no degree yet 0.14
3 or more years of college, no degree yet 0.28
Graduated from 2-year college, vocational school, or Associate's degree 0.18
Graduated from 4- or 5-year college, or Bachelor's degree 0.39
Some graduate school 0.45
Master's degree 0.51
Ph.D., Ed.D., MD, DDS, LLB, LLD, JD, or other professional degree 0.53
Occupation b
Operator, laborer, and military 0
Service occupation 0.01
Precision production, crafts, and repair 0.05
Farming, forestry, and fishing 0.15
Sales occupation 0.17
Technician and related support 0.26
Administrative support, including clerical 0.28
Executive, administrative, and managerial 0.30
Professional specialty 0.34

References

  1. Agrigoroaei S., Lachman M. E. (2011). Cognitive functioning in midlife and old age: Combined effects of psychosocial and behavioral factors . Journal of Gerontology: Psychological Sciences, 66, 130–140. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Alvarado B. E., Zunzunegui M. V., Del Ser T., Beland F. (2002). Cognitive decline is related to education and occupation in a Spanish elderly cohort. Aging Clinical and Experimental Research, 14, 132–142. [DOI] [PubMed] [Google Scholar]
  3. Barona A., Reynolds C. R., Chastain R. (1984). A demographically based index of pre-morbid intelligence for the WAIS-R. Journal of Consulting and Clinical Psychology, 52, 885–887. [Google Scholar]
  4. Barulli D., Stern Y. (2013). Efficiency, capacity, compensation, maintenance, plasticity: Emerging concepts in cognitive reserve. Trends in Cognitive Sciences, 17, 502–509. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Brim O., Ryff C., Kessler R. (2004). How healthy are we?: A national study of well-being at midlife. Chicago: University of Chicago Press. [Google Scholar]
  6. Busch R. M., Chelune G. J., Suchy Y. A. N. A. (2006). Using norms in neuropsychological assessment of the elderly. In Attix D. K., Welsh-Bohmer K. A. (Eds.), Geriatric neuropsychology: Assessment and intervention (pp. 133–157). New York, NY: The Guildford Press. [Google Scholar]
  7. Crane P. K., Narasimhalu K., Gibbons L. E., Mungas D. M., Haneuse S., Larson E. B., et al. (2008). Item response theory facilitated cocalibrating cognitive tests and reduced bias in estimated rates of decline. Journal of Clinical Epidemiology, 61, 1018–1027. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Crooks V. C., Clark L., Petitti D. B., Chui H., Chiu V. (2005). Validation of multi-stage telephone-based identification of cognitive impairment and dementia. BMC Neurology, 5, 8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Drozdick L. W., Holdnack J. A., Hilsabeck R. C. (2011). Essentials of WMS-IV assessment. Hoboken, NJ: John Wiley & Sons. [Google Scholar]
  10. Duff K., Beglinger L. J., Adams W. H. (2009). Validation of the modified telephone interview for cognitive status in amnestic mild cognitive impairment and intact elders. Alzheimer Disease and Associated Disorders, 23, 38–43. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Evans D. A., Beckett L. A., Albert M. S., Hebert L. E., Scherr P. A., Funkenstein H. H., et al. (1993). Level of education and change in cognitive function in a community population of older persons. Annals of Epidemiology, 3, 71–77. [DOI] [PubMed] [Google Scholar]
  12. Fastenau P. S., Denburg N. L., Mauer B. A. (1998). Parallel short forms for the Boston Naming Test: Psychometric properties and norms for older adults. Journal of Clinical and Experimental Neuropsychology, 20, 828–834. [DOI] [PubMed] [Google Scholar]
  13. Fine E. M., Delis D. C., Holdnack J. (2011). Normative adjustments to the D-KEFS Trail Making Test: Corrections for education and vocabulary level. The Clinical Neuropsychologist, 25, 1331–1344. [DOI] [PubMed] [Google Scholar]
  14. Frisoni G. B., Rozzini R., Bianchetti A., Trabucchi M. (1993). Principal lifetime occupation and MMSE score in elderly persons. Journal of Gerontology, 48, S310–S314. [DOI] [PubMed] [Google Scholar]
  15. Gavett B. E., Crane P. K., Dams-O'Connor K. (2013). Bi-factor analyses of the Brief Test of Adult Cognition by Telephone. NeuroRehabilitation, 32, 253–265. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Gillen R., Tennen H., McKee T. (2007). The impact of the inpatient rehabilitation facility prospective payment system on stroke program outcomes. American Journal of Physical Medicine and Rehabilitation, 86, 356–363. [DOI] [PubMed] [Google Scholar]
  17. Gontkovsky S. T., Mold J. W., Beatty W. W. (2002). Age and educational influences on RBANS index scores in a nondemented geriatric sample. The Clinical Neuropsychologist, 16, 258–263. [DOI] [PubMed] [Google Scholar]
  18. Guerini F., Frisoni G. B., Marrè A., Turco R., Bellelli G., Trabucchi M. (2008). Subcortical vascular lesions predict falls at 12 months in elderly patients discharged from a rehabilitation ward. Archives of Physical Medicine and Rehabilitation, 89, 1522–1527. [DOI] [PubMed] [Google Scholar]
  19. Hambleton R. K., Swaminathan H., Rogers H. J. (1991). Fundamentals of item response theory. Newbury Park, CA: Sage Publications, Inc. [Google Scholar]
  20. Harwell M. R., Gatti G. G. (2001). Rescaling ordinal data to interval data in educational research. Review of Educational Research, 71, 105–131. [Google Scholar]
  21. Hill J., McVay J. M., Walter-Ginzburg A., Mills C. S., Lewis J., Lewis B. E., et al. (2005). Validation of a brief screen for cognitive impairment (BSCI) administered by telephone for use in the medicare population. Disease Management, 8, 223–234. [DOI] [PubMed] [Google Scholar]
  22. Jones G. R., Miller T. A., Petrella R. J. (2002). Evaluation of rehabilitation outcomes in older patients with hip fractures. American Journal of Physical Medicine and Rehabilitation, 81, 489–497. [DOI] [PubMed] [Google Scholar]
  23. Jorm A. F., Rodgers B., Henderson A. S., Korten A. E., Jacomb P. A., Christensen H., et al. (1998). Occupation type as a predictor of cognitive decline and dementia in old age. Age and Ageing, 27, 477–483. [DOI] [PubMed] [Google Scholar]
  24. Knopman D. S., Roberts R. O., Geda Y. E., Pankratz V. S., Christianson T. J., Petersen R. C., et al. (2010). Validation of the telephone interview for cognitive status- modified in subjects with normal cognition, mild cognitive impairment, or dementia. Neuroepidemiology, 34, 34–42. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Lachman M. E., Agrigoroaei S., Murphy C., Tun P. A. (2010). Frequent cognitive activity compensates for education differences in episodic memory. The American Journal of Geriatric Psychiatry, 18, 4–10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Lachman M. E., Agrigoroaei S., Tun P. A., Weaver S. L. (2013). Monitoring cognitive functioning: Psychometric properties of the Brief Test of Adult Cognition by Telephone. Assessment, 21, 404–417. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Lachman M. E., Tun P. A. (2012, March). Brief Test of Adult Cognition by Telephone (BTACT) with Stop & Go Switch Task (SGST). Retrieved January 24, 2013, from http://www.brandeis.edu/departments/psych/lachman/pdfs/btact%20forms%20and%20information%204.9.12.pdf.
  28. Lange R. T., Chelune G. J., Taylor M. J., Woodward T. S., Heaton R. K. (2006). Development of demographic norms for four new WAIS-III/WMS-III indexes. Psychological Assessment, 18, 174–181. [DOI] [PubMed] [Google Scholar]
  29. Lichtenberger E. O., Kaufman A. S. (2013). Essentials of WAIS-IV assessment (2nd ed.). Hoboken, NJ: John Wiley & Sons. [Google Scholar]
  30. Lipton R. B., Katz M. J., Kuslansky G., Sliwinski M. J., Stewart W. F., Verghese J., et al. (2003). Screening for dementia by telephone using the memory impairment screen. Journal of the American Geriatrics Society, 51, 1382–1390. [DOI] [PubMed] [Google Scholar]
  31. Lopez O. L., Kuller L. H. (2010). Telephone interview for cognitive status. Neuroepidemiology, 34, 63–64. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Low L. F., Brodaty H., Edwards R., Kochan N., Draper B., Trollor J., et al. (2004). The prevalence of ‘cognitive impairment no dementia’ in community-dwelling elderly: A pilot study. Australian and New Zealand Journal of Psychiatry, 38, 725–731. [DOI] [PubMed] [Google Scholar]
  33. Macdonald P., Paunonen S. V. (2002). A Monte Carlo comparison of item and person statistics based on item response theory versus classical test theory. Educational and Psychological Measurement, 62, 921–943. [Google Scholar]
  34. Manly J. J., Touradji P., Tang M. X., Stern Y. (2003). Literacy and memory decline among ethnically diverse elders. Journal of Clinical and Experimental Neuropsychology, 25, 680–690. [DOI] [PubMed] [Google Scholar]
  35. Marcopulos B. A., McLain C. A., Giuliano A. J. (1997). Cognitive impairment or inadequate norms? A study of healthy, rural, older adults with limited education. The Clinical Neuropsychologist, 11, 111–131. [Google Scholar]
  36. Mungas D., Reed B. R. (2000). Application of item response theory for development of a global functioning measure of dementia with linear measurement properties. Statistics in Medicine, 19, 1631–1644. [DOI] [PubMed] [Google Scholar]
  37. Norman M. A., Evans J. D., Miller W. S., Heaton R. K. (2000). Demographically corrected norms for the California verbal learning test. Journal of Clinical and Experimental Neuropsychology, 22, 80–94. [DOI] [PubMed] [Google Scholar]
  38. O'Connell M. E., Tuokko H. (2010). Age corrections and dementia classification accuracy. Archives of Clinical Neuropsychology, 25, 126–138. [DOI] [PubMed] [Google Scholar]
  39. R Core Team (2013). R: A language and environment for statistical computing (Version 3.0.2) [Software]. Vienna, Austria: R Foundation for Statistical Computing; Retrieved January 24, 2013, from http://www.R-project.org/. [Google Scholar]
  40. Rapp S. R., Legault C., Espeland M. A., Resnick S. M., Hogan P. E., Coker L. H., et al. (2012). Validation of a cognitive assessment battery administered over the telephone. Journal of the American Geriatrics Society, 60, 1616–1623. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Reed B. R., Dowling M., Tomaszewski Farias S., Sonnen J., Strauss M., Schneider J. A., et al. (2011). Cognitive activities during adulthood are more important than education in building reserve. Journal of the International Neuropsychological Society, 17, 615–624. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Rey A. (1964). L'examen clinique en psychologie. Paris: Presses Universitaires de France. [Google Scholar]
  43. Reynolds C. R., Chastain R. L., Kaufman A. S., McLean J. E. (1988). Demographic characteristics and IQ among adults: Analysis of the WAIS-R standardization sample as a function of the stratification variables. Journal of School Psychology, 25, 323–342. [Google Scholar]
  44. Richards M., Sacker A. (2003). Lifetime antecedents of cognitive reserve. Journal of Clinical and Experimental Neuropsychology, 25, 614–624. [DOI] [PubMed] [Google Scholar]
  45. Ryff C. D., Lachman M. E. (2007). National survey of midlife development in the united states (MIDUS II): Cognitive project, 2004–2006 [Data file]. ICPSR25281-v1 Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2010-07-13. doi:10.3886/ICPSR25281. Retrieved January 24, 2013, from http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/04652. [Google Scholar]
  46. Sisco S., Gross A. L., Shih R. A., Sachs B. C., Glymour M. M., Bangen K. J., et al. (2014). The role of early-life educational quality and literacy in explaining racial disparities in cognition in late life. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences. doi:10.1093/geronb/gbt133. Retrieved November 21, 2014, from http://psychsocgerontology.oxfordjournals.org/content/early/2014/02/27/geronb.gbt133. [DOI] [PMC free article] [PubMed]
  47. Stern Y. (2002). What is cognitive reserve? Theory and research application of the reserve concept. Journal of the International Neuropsychological Society, 8, 448–460. [PubMed] [Google Scholar]
  48. Stern Y. (2003). The concept of cognitive reserve: A catalyst for research. Journal of Clinical and Experimental Neuropsychology, 25, 589–593. [DOI] [PubMed] [Google Scholar]
  49. Stern Y. (2012). Cognitive reserve in ageing and Alzheimer's disease. The Lancet Neurology, 11, 1006–1012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Stern Y., Albert S., Tang M. X., Tsai W. Y. (1999). Rate of memory decline in AD is related to education and occupation: Cognitive reserve? Neurology, 53, 1942–1947. [DOI] [PubMed] [Google Scholar]
  51. Stern Y., Alexander G. E., Prohovnik I., Stricks L., Link B., Lennon M. C., et al. (1995). Relationship between lifetime occupation and parietal flow Implications for a reserve against Alzheimer's disease pathology. Neurology, 45, 55–60. [DOI] [PubMed] [Google Scholar]
  52. Stern Y., Gurland B., Tatemichi T. K., Tang M. X., Wilder D., Mayeux R. (1994). Influence of education and occupation on the incidence of Alzheimer's disease. JAMA: The Journal of the American Medical Association, 271, 1004–1010. [PubMed] [Google Scholar]
  53. Tucker A. M., Stern Y. (2011). Cognitive reserve in aging. Current Alzheimer Research, 8, 354–360. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Tun P. A., Lachman M. E. (2006). Telephone assessment of cognitive function in adulthood: The Brief Test of Adult Cognition by Telephone. Age and Ageing, 35, 629–632. [DOI] [PubMed] [Google Scholar]
  55. Tun P. A., Lachman M. E. (2008). Age differences in reaction time and attention in a national telephone sample of adults: Education, sex, and task complexity matter. Developmental Psychology, 44, 1421–1429. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Wechsler D. (1997). Wechsler adult intelligence scale—Third edition. New York: Psychological Corporation. [Google Scholar]
  57. Weiss L. G., Saklofske D. H., Coalson D., Raiford S. E. (2010). Theoretical, empirical, and clinical foundations of the WAIS-IV index scores. In Weiss L. G., Saklofske D. H., Coalson D., Raiford S. E. (Eds.), WAIS-IV clinical use and interpretation: Scientist-practitioner perspectives (pp. 61–94). San Diego, CA: Academic Press. [Google Scholar]
  58. Wilson R. S., Leurgans S. E., Foroud T. M., Sweet R. A., Graff-Radford N., Mayeux R., et al. (2010). Telephone assessment of cognitive function in the late-onset Alzheimer's disease family study. Archives of Neurology, 67, 855–861. [DOI] [PMC free article] [PubMed] [Google Scholar]
