Abstract
Background
The MATRICS Consensus Cognitive Battery (MCCB) and proposed co-primary measures are gaining momentum as outcome measures in clinical trials, highlighting the need to evaluate their psychometric properties. The MCCB composite score has been proposed as the optimal primary outcome measure, though its validity is unknown. This study aimed to evaluate the factor structure of the MCCB in a schizophrenia sample and to determine whether its cognitive domains are separable.
Methods
183 outpatients with schizophrenia or schizoaffective disorder completed a comprehensive test battery. Confirmatory factor analysis was used to test the factor structure of the MCCB; hierarchical regression then examined the relative contribution of individual cognitive variables to predict the MCCB factor scores. Finally, the relationships between the resulting factors and two performance-based measures of functional capacity were explored.
Results
A three-factor MCCB model representing processing speed, attention/working memory, and learning fit the data well and was an improvement over a unifactorial model. Symbol coding, spatial span, and visual learning were the most robust predictors of their respective factors; symbol coding proved to be the best single predictor of overall cognitive performance. The three factors were also significantly related to a performance-based measure of everyday functioning but not to a performance-based measure of social skills.
Conclusions
These analyses suggest that the six MCCB “domains” as constructed can be collapsed into fewer domains composed of multiple test scores; they also support the notion that impaired processing speed is a fundamental cognitive deficit in schizophrenia and that MCCB performance is related to functional capacity. More research is needed to determine the extent to which measures of cognition and functional capacity differ.
Keywords: Cognition, Everyday functioning, Schizophrenia, MCCB, UPSA-B
1. Introduction
The cognitive deficits associated with schizophrenia are stable and enduring; they adversely affect everyday functioning and predict functional outcome (Heinrichs and Zakzanis, 1998; Elvevag and Goldberg, 2000; Green et al., 2000; Heaton et al., 2001; Reichenberg and Harvey, 2007). As awareness of cognitive impairment in schizophrenia and its effect on real-world functioning has grown, so has appreciation of the need for standardized assessment of cognition and its impact on functioning. The MATRICS (Measurement and Treatment Research to Improve Cognition in Schizophrenia) Consensus Cognitive Battery (MCCB; Nuechterlein et al., 2008) was developed expressly to evaluate cognitive functioning and potential treatment response in schizophrenia, with more recent efforts aiming to identify co-primary measures that are functionally meaningful (Green et al., 2008; Green et al., 2011).
The MCCB was designed to efficiently evaluate cognition in clinical trials of potential cognition-enhancing pharmacologic agents (Nuechterlein et al., 2008). Tests for the final consensus battery were selected based on five criteria: test–retest reliability, utility as a repeated measure, relationship to functional outcome, potential changeability in response to pharmacological agents, and practicality and tolerability (Nuechterlein et al., 2008). As the MCCB and proposed co-primary measures gain momentum in clinical trials for schizophrenia, it is increasingly important to consider the psychometric properties of the battery. Theoretically, the ten tests selected to represent seven conceptual cognitive domains should “hang together,” because all cognitive scores tend to have at least moderate correlations with one another and the MCCB in many cases includes only one test per domain. Thus, the MCCB is a “sampling” of tests across multiple domains aimed at detecting a global ability index. The use of a composite score greatly simplifies the battery approach, reduces burden for researchers in search of a quick and efficient means of measuring cognition, and enhances the broad applicability of the battery.
Although previous research has investigated the reliability and validity of the tests comprising the MCCB as well as the test–retest reliability and practice effects of the MCCB composite score (Nuechterlein et al., 2008; Silverstein et al., 2010; Keefe et al., 2011), to date no study has explored the construct validity of the MCCB cognitive variables. The aim of this study was to examine the construct validity of the nine MCCB cognitive variables (excluding the social cognition measure due to concern that it would substantially differ from neurocognitive measures in terms of its relationship to everyday functioning) by evaluating the proposed unifactorial structure and comparing this structure to other hypothesized models. We hypothesized that a unifactorial model would best fit the data. We also investigated which MCCB measures best predicted the factor score(s), and explored the degree of relationship between the MCCB factor(s) and two measures of functional capacity.
2. Method
2.1. Participants
Participants included 183 community-dwelling outpatients diagnosed with schizophrenia or schizoaffective disorder who provided complete assessment data after undergoing testing at one of three sites: UCSD Outpatient Psychiatric Services (n = 94), Atlanta VA Medical Center (n = 38), or Atlanta Skyland Trail (n = 51), as part of a study to validate assessments of everyday real-world outcomes (Harvey et al., 2011). The study was IRB approved at all participating institutions, and all participants provided written informed consent. Sample characteristics are presented in Table 1; participants had a mean age of 44 years and a mean of 13 years of education. The majority of the sample was male, Caucasian, and living independently. On average, participants endorsed positive and negative symptom severity in the 35th and 20th percentiles, respectively, in comparison to other patients with schizophrenia (Kay et al., 1987), and mild current depressive symptoms (Beck et al., 1996). The sample also performed slightly better on the MCCB and functional capacity measures than the schizophrenia groups described in normative samples and recent publications (Patterson et al., 2001; Mausbach et al., 2007; Keefe et al., 2011; Kern et al., 2011). These differences may be attributable to the relatively higher levels of education and independent living in our sample. There were some demographic differences between the Atlanta and San Diego sites, outlined in a previous publication (Harvey et al., 2011), but there were no significant differences between the sites in performance on the MCCB or the functional capacity measures.
Table 1.
  | n | Mean (SD) or % |
---|---|---|
Age (years) | 183 | 44.45 (11.47) |
Education (years) | 170 | 13.00 (2.57) |
Male | 127 | 69.4 |
Caucasian, including Hispanic | 101 | 55.2 |
Hispanic | 23 | 12.6 |
Living independently | 112 | 61.2 |
Employed | 23 | 12.6 |
Prescribed antipsychotic medication | 165 | 90.2 |
Estimated premorbid IQ (WRAT-3 Reading) | 173 | 97.23 (9.88) |
PANSS positive total | 183 | 15.92 (5.37) |
PANSS negative total | 183 | 15.95 (5.42) |
PANSS total score | 183 | 62.69 (14.02) |
BDI-II total | 181 | 15.81 (12.13) |
MCCB modified cognitive composite (T-score average) | 183 | 38.00 (6.92) |
UPSA-B total score | 183 | 76.62 (13.00) |
UPSA-B finances score | 183 | 8.81 (1.63) |
UPSA-B communication score | 183 | 6.55 (1.54) |
SSPA mean score | 176 | 3.98 (0.60) |
Note. BDI-II = Beck Depression Inventory – II; MCCB = MATRICS Consensus Cognitive Battery; PANSS = Positive and Negative Syndrome Scale; SSPA = Social Skills Performance Assessment; UPSA-B = UCSD Performance-Based Skills Assessment, Brief version; WRAT-3 = Wide Range Achievement Test, Third Edition.
2.2. Procedure
Potential participants were referred to the study by treating clinicians or self-referred in response to recruitment flyers posted in psychiatric care centers. Participants’ diagnoses were confirmed via structured diagnostic interviews; the Atlanta sites used the Structured Clinical Interview for DSM-IV Axis I Disorders (First et al., 1995) and the San Diego site used the Mini International Neuropsychiatric Interview (Sheehan et al., 1998). Inclusion criteria were: a) age between 18 and 65, b) DSM-IV diagnosis of schizophrenia or schizoaffective disorder, c) English-speaking, and d) minimum 8th grade reading level. Patients were excluded if they had a) a history of loss of consciousness greater than 10 min, b) a seizure disorder or other neurodegenerative condition, c) current substance abuse or dependence, or d) for patients aged 55 or older, a score less than 27 on the Mini Mental State Exam (Folstein et al., 2001). A single trained rater administered a comprehensive neuropsychological, clinical, and functional battery to each participant; raters were trained to high levels of inter-rater reliability (intraclass correlations > .80) on interview measures. Only participants with complete data were included in this study.
2.3. Measures
Psychiatric symptom severity was measured with the Positive and Negative Syndrome Scale (Kay et al., 1987) and the Beck Depression Inventory — II (Beck et al., 1996). Premorbid intellectual functioning was measured with the reading subtest of the Wide Range Achievement Test — Third Edition (Wilkinson, 1993), and current cognitive functioning was measured with a modified version of the MCCB: the social cognition measure was excluded due to concern that social cognition measures would differ from neurocognitive measures in terms of their relationship to everyday functioning. The other nine subtests of the MCCB measured six domains, as follows (Nuechterlein et al., 2008):
Speed of processing (Trail Making Test, Part A; the Brief Assessment of Cognition in Schizophrenia [BACS] symbol coding subtest; the category fluency test, animal naming)
Attention/vigilance (Continuous Performance Test-Identical Pairs Version [CPT-IP])
Working memory (Wechsler Memory Scale, 3rd edition [WMS-III], spatial span subtest; Letter–Number Span test)
Verbal learning (Hopkins Verbal Learning Test-Revised [HVLT-R], immediate recall)
Visual learning (Brief Visuospatial Memory Test-Revised [BVMT-R], immediate recall)
Reasoning and problem solving (Neuropsychological Assessment Battery [NAB], mazes subtest)
A modified MCCB cognitive composite score was calculated using the mean of the nine non-demographically corrected t-scores.
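As a rough illustration (not the official MCCB scoring program), the modified composite described above amounts to a simple row-wise mean over the nine subtest T-scores. A minimal Python sketch follows; the column names are hypothetical labels introduced only for this example.

```python
import pandas as pd

# Hypothetical column names for the nine MCCB subtest T-scores;
# illustrative only, not the MCCB scoring program's output labels.
MCCB_TESTS = [
    "tmt_a", "bacs_symbol_coding", "category_fluency", "cpt_ip",
    "wms3_spatial_span", "letter_number_span", "hvlt_r", "bvmt_r", "nab_mazes",
]

def modified_mccb_composite(tscores: pd.DataFrame) -> pd.Series:
    """Modified composite: mean of the nine non-demographically corrected T-scores."""
    return tscores[MCCB_TESTS].mean(axis=1)
```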
Everyday functional skills were evaluated with two performance-based functional capacity assessments: the UCSD Performance-based Skills Assessment, Brief version (UPSA-B; Mausbach et al., 2007), and the Social Skills Performance Assessment (SSPA; Patterson et al., 2001). For the UPSA-B, participants are asked to perform everyday tasks related to finances (e.g., write a check to pay a utility bill) and communication (e.g., call a doctor to reschedule an appointment). The UPSA-B takes 10–15 min to administer and yields raw subscale scores as well as a total score, converted from the raw scores, that ranges from 0 to 100, with higher scores indicating better functional capacity. The SSPA is a role-play measure in which participants must navigate two three-minute social interactions (meeting a new neighbor and convincing a landlord to fix a leak); participants are rated on several dimensions (e.g., fluency, focus, social appropriateness), yielding a mean score ranging from 1 to 5 for each scene, with higher scores indicating better social skills.
2.4. Data analyses
All continuous variables (raw scores) were found to be normally distributed. To explore the factor structure of the MCCB, three models were proposed: a six-factor model with each cognitive domain representing one factor, a one-factor model including all nine raw MCCB scores, and a conceptually derived three-factor model comprising processing speed (timed tests: Trail Making Test Part A, BACS symbol coding, animal naming, and NAB mazes), attention and working memory (CPT-IP, spatial span, and letter-number span), and learning (HVLT-R and BVMT-R). Models were first evaluated for statistical fit with a chi-square test comparing each proposed model to a saturated (perfectly fitting) model; a significant p-value indicates inadequate fit, as it means the model-implied covariance structure differs significantly from the observed data. The chi-square test has documented limitations (i.e., it is sensitive to sample size and distributional characteristics), and therefore several descriptive fit indices developed to avoid these pitfalls were used to compare models (Hu and Bentler, 1999). Although there are no universally accepted cutoffs for descriptive model fit, for the purpose of these analyses we followed Hu and Bentler's (1999) recommendations, under which model fit is considered good if the Standardized Root Mean Square Residual (SRMR) is less than 0.08, the Root Mean Square Error of Approximation (RMSEA) is less than 0.06, and the Comparative Fit Index (CFI) is greater than 0.95. The fit of competing models was also compared with a chi-square difference test for nested models when possible (a significant p-value indicates that the less restrictive model fits significantly better) or with the Akaike Information Criterion (AIC) for non-nested models (the model with the smaller value is the better-fitting model). All confirmatory analyses were conducted in EQS (version 6.1). Cronbach's coefficient alpha was also calculated to examine the internal consistency of the computed factors.
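The models themselves were fit in EQS 6.1. Purely as an illustrative sketch (not the authors' EQS setup), a comparable specification of the one- and three-factor models could be written in lavaan-style syntax and fit with the third-party Python package semopy, assuming its Model/fit/calc_stats interface and the hypothetical column names introduced above.

```python
import semopy  # third-party SEM package; interface assumed as in its documentation

# Three-factor specification: processing speed, attention/working memory, learning.
THREE_FACTOR = """
speed    =~ tmt_a + bacs_symbol_coding + category_fluency + nab_mazes
attn_wm  =~ cpt_ip + wms3_spatial_span + letter_number_span
learning =~ hvlt_r + bvmt_r
"""

# One-factor (generalized ability) specification over the same nine scores.
ONE_FACTOR = """
g =~ tmt_a + bacs_symbol_coding + category_fluency + nab_mazes + cpt_ip + wms3_spatial_span + letter_number_span + hvlt_r + bvmt_r
"""

def fit_cfa(description, data):
    """Fit a CFA model to a DataFrame of observed scores and return the model
    together with a table of fit statistics (chi-square, CFI, RMSEA, AIC, etc.)."""
    model = semopy.Model(description)
    model.fit(data)
    return model, semopy.calc_stats(model)
```

Model comparison would then proceed as described above: descriptive indices evaluated against the Hu and Bentler cutoffs, a chi-square difference test for the nested one- versus three-factor models, and AIC otherwise.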
Once the best-fitting cognitive model was determined, hierarchical regression examined the relative contribution of the cognitive variables to prediction of the MCCB factor score(s). Finally, Pearson correlations examined the relationships between the factor score(s) and the two performance-based measures, the UPSA-B and the SSPA. Regression and correlation analyses were conducted in IBM SPSS (version 20).
3. Results
Using confirmatory factor analysis, the three proposed models were compared for goodness of fit (see Table 2 for model results). The six-factor model could not be optimized in the statistical software, presumably due to the presence of multiple single-item factors. For the other models, as expected given the large sample size, the chi-square statistical test of model fit was significant (Bentler and Bonett, 1980). The three-factor model representing three conceptual neuropsychological domains included in the MCCB was compared to a one-factor model of the MCCB (see Table 2). For the three-factor model, two of the three descriptive fit indices (SRMR and CFI) suggested good fit and the third (RMSEA) was near the cutoff. The one-factor model did not fit well statistically and had mixed descriptive fit results, with the SRMR indicating good fit and the RMSEA and CFI indicating inadequate fit. The chi-square difference test for nested models indicated that the three-factor model was statistically a better fit than the one-factor model (Δχ² = 20.6; Δdf = 3; p < .001). Cronbach's alpha for the nine MCCB raw test scores included in the one-factor model was .76, which falls in the “acceptable” range for internal consistency proposed by Nunnally (1978). The standardized factor loadings for the three-factor model are provided in Table 3. The resulting factors were named processing speed, attention and working memory, and learning. Cronbach's alpha was also computed for the MCCB raw test scores included in each of the three factors from the three-factor model: processing speed, .70 (4 variables); attention and working memory, .52 (3 variables); and learning, .53 (2 variables). Pearson correlations among the raw MCCB variables, the three factor scores, and the functional capacity scores are presented in Table 4. All of the raw cognitive variables were highly correlated with all three factor scores (all ps < .001); most cognitive variables were correlated with the UPSA-B and few were correlated with the SSPA.
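The two statistics reported above that follow simple closed-form calculations, the nested-model chi-square difference and Cronbach's alpha, can be reproduced directly. A minimal sketch is shown below, using the Δχ² = 20.6, Δdf = 3 values from the text and a generic alpha function for an arbitrary subjects-by-items score matrix.

```python
import numpy as np
from scipy import stats

# p-value for the nested-model comparison reported above (Δχ² = 20.6, Δdf = 3)
delta_chi2, delta_df = 20.6, 3
p_diff = stats.chi2.sf(delta_chi2, delta_df)  # ≈ .0001, i.e. p < .001

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) matrix of raw scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```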
Table 2.
Measures | χ² | df | p | AIC | SRMR | RMSEA | CFI |
---|---|---|---|---|---|---|---|
Desired range for “good fit” | | | | | <.08 | <.06 | >.95 |
MCCB 1-factor | 68.3 | 27 | <.001 | 14.29 | .06 | .09 | .92 |
MCCB 3-factor | 47.7 | 24 | .003 | −0.28 | .05 | .07 | .95 |
MCCB 6-factor | Could not be optimized statistically |
Note. AIC = Akaike Information Criterion; CFI = Comparative Fit Index; MCCB = MATRICS Consensus Cognitive Battery; RMSEA = Root Mean Square Error of Approximation; SRMR = Standardized Root Mean Square Residual.
Table 3.
Processing speed factor | Attention and working memory factor | Learning factor | |
---|---|---|---|
Trail Making, Part A | −0.730 | – | – |
BACS symbol coding | 0.814 | – | – |
Category fluency | 0.414 | – | – |
NAB mazes | 0.700 | – | – |
CPT-IP | – | 0.620 | – |
WMS-III spatial span | – | 0.694 | – |
Letter–Number Span | – | 0.584 | – |
HVLT-R | – | – | 0.504 |
BVMT-R | – | – | 0.794 |
Note. BACS = Brief Assessment of Cognition in Schizophrenia; BVMT-R = Brief Visuospatial Memory Test-Revised; CPT-IP = Continuous Performance Test-Identical Pairs; HVLT-R = Hopkins Verbal Learning Test-Revised; MCCB = MATRICS Consensus Cognitive Battery; NAB = Neuropsychological Assessment Battery; SSPA = Social Skills Performance Assessment; UPSA-B = UCSD Performance-Based Skills Assessment, Brief Version; WMS-III = Wechsler Memory Scale, Third Edition.
Table 4.
Processing speed factor | Attention & working memory factor | Learning factor | UPSA-B total score | SSPA mean score | |
---|---|---|---|---|---|
Trail Making, Part A | −.716 | −.703 | −.703 | −.276 | −.118 |
BACS symbol coding | .822 | .810 | .811 | .349 | .136 |
Category fluency | .409 | .401 | .403 | .139 | .056 |
NAB mazes | .716 | .707 | .707 | .195 | .036 |
CPT-IP | .604 | .618 | .606 | .282 | .181a |
WMS-III spatial span | .686 | .702 | .690 | .344 | .067 |
Letter–Number Span | .580 | .592 | .582 | .516 | .161a |
HVLT-R | .499 | .502 | .509 | .310 | .090 |
BVMT-R | .787 | .789 | .802 | .359 | .028 |
MCCB modified cognitive composite (T-score average) | .882 | .836 | .780 | .459 | .146a |
Note. Bold font indicates correlations significant at the 0.01 level. BACS = Brief Assessment of Cognition in Schizophrenia; BVMT-R = Brief Visuospatial Memory Test-Revised; CPT-IP = Continuous Performance Test-Identical Pairs; HVLT-R = Hopkins Verbal Learning Test-Revised; MCCB = MATRICS Consensus Cognitive Battery; NAB = Neuropsychological Assessment Battery; SSPA = Social Skills Performance Assessment; UPSA-B = UCSD Performance-Based Skills Assessment, Brief Version; WMS-III = Wechsler Memory Scale, Third Edition.
a Indicates correlations significant at the 0.05 level.
Next, using the nine variables included in the MCCB three-factor model, hierarchical regression indicated that BACS symbol coding was the strongest predictor of the processing speed factor score (R² change = .676), WMS-III spatial span was the strongest predictor of the attention and working memory factor score (R² change = .492), and BVMT-R was the strongest predictor of the learning factor score (R² change = .643). R² change for each of the remaining six variables was less than or equal to .119. When these three variables were entered into a simultaneous regression to predict the average cognitive composite t-score from the MCCB, they accounted for 83% of the variance. In this regression, the semipartial correlations for BACS symbol coding, WMS-III spatial span, and BVMT-R were .385, .278, and .257, respectively.
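As a minimal sketch of how these quantities could be computed outside of SPSS (the authors ran their regressions in IBM SPSS 20), the R² change at a hierarchical step and the semipartial correlation of each predictor in a simultaneous model can be obtained with statsmodels; the DataFrame layout and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def r_squared(y: pd.Series, X: pd.DataFrame) -> float:
    """R^2 from an ordinary least squares fit with an intercept."""
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

def r2_change(y, data, base_predictors, added_predictor):
    """Increment in R^2 when one predictor is added to a base model."""
    r2_base = r_squared(y, data[base_predictors]) if base_predictors else 0.0
    r2_full = r_squared(y, data[base_predictors + [added_predictor]])
    return r2_full - r2_base

def semipartial_correlations(y, X):
    """Semipartial r for each predictor: sign of its coefficient times
    sqrt(R^2 of the full model minus R^2 of the model without that predictor)."""
    full = sm.OLS(y, sm.add_constant(X)).fit()
    out = {}
    for col in X.columns:
        delta = full.rsquared - r_squared(y, X.drop(columns=col))
        out[col] = np.sign(full.params[col]) * np.sqrt(max(delta, 0.0))
    return out
```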
Finally, all three MCCB factors were significantly associated with the UPSA-B total score (processing speed r = .461, p < .001; attention and working memory r = .468, p < .001; learning r = .465, p < .001), but none of the factors were significantly related to the SSPA mean score (all ps > .07).
4. Discussion
These analyses support neither a unifactorial structure of the MCCB (and corresponding construct validity) nor a truly separable “domain” model. Rather, it may be useful to consider not only the MCCB composite score but a small subset of domains, as these results suggest that the six domains as constructed could be collapsed into a smaller set. Although the confirmatory factor analytic procedures indicated that a three-factor model was a superior fit to a unifactorial model, the large and significant correlations among the cognitive variables and all three factors suggest that the variables do hang together to some degree. These results are consistent with recent findings that the MCCB subtests are highly correlated in schizophrenia patients (August et al., 2011) and the notion of generalized cognitive impairment in schizophrenia (Dickinson et al., 2008; Dickinson and Harvey, 2009). Also, the internal consistencies of the attention/working memory and learning factors were inadequate, though this result was affected by the small number of variables used in the domains.
As researchers and clinicians continue to search for brief, efficient assessments that remain reliable and valid, there is value in creating abbreviated batteries. These results indicate that three MCCB subtests account for a large proportion of the variance in the MCCB factor scores. At least in schizophrenia samples, it may therefore be possible to administer just three MCCB subtests (symbol coding, spatial span, and BVMT-R) when necessary, without sacrificing the reliability of the full battery. In fact, when we used these three scores to predict the MCCB cognitive composite t-score, they accounted for 83% of the variance. Regression analysis indicated that symbol coding explained the most variance in the MCCB composite score, supporting the idea that symbol coding is a particularly sensitive index of overall cognitive performance. Other groups have found that symbol coding tests are the most impaired in schizophrenia (Keefe et al., 2006a; Dickinson et al., 2007), further supporting the sensitivity of this measure.
Finally, these results suggest that the UPSA-B is more strongly related to neuropsychological performance than the SSPA. Although these analyses yielded somewhat lower correlations than previous reports, the pattern is consistent with several published studies that have reported strong positive correlations between neuropsychological measures and the UPSA or UPSA-B (.42–.79; Twamley et al., 2002; Bowie et al., 2006; Keefe et al., 2006b; Mausbach et al., 2007; Green et al., 2008; Harvey et al., 2009; Leifker et al., 2009; Pietrzak et al., 2009; Bowie et al., 2010; Silverstein et al., 2010; Green et al., 2011; Keefe et al., 2011) and relatively modest relationships between neuropsychological measures and the SSPA (.2–.3 range; McClure et al., 2007; Leifker et al., 2009; Bowie et al., 2010). This is especially relevant as functionally meaningful co-primary measures to be administered alongside the MCCB are identified and utilized.
There are limitations of the study worth noting. First, we chose to exclude the MSCEIT social cognition measure from the MCCB; it is therefore unknown whether and how including the MSCEIT would affect the factor structure of the full MCCB battery. Previous studies have suggested that adding the MSCEIT reduces the fit of overall factor structures in schizophrenia samples (Eack et al., 2009). The low correlations between the MCCB cognitive scores and the SSPA were not unexpected given that we did not include the MSCEIT; social cognition is more strongly related to social outcomes than is neurocognition (Fett et al., 2011), and the MSCEIT might well add to the prediction of scores on the SSPA. In addition, it is possible that method variance may explain the associations between the cognitive measures and UPSA-B performance (e.g., the finance subtest may reflect working memory skills as well as the ability to make change). Furthermore, because the study was cross-sectional, the temporal stability of these factors and their relationships with other variables should also be examined. The study was also conducted with clinically stable outpatients with chronic schizophrenia; the results may therefore not generalize to first-episode or chronically hospitalized patients. Finally, although a six-factor model was attempted, CFA factors are defined by shared variance among multiple observed variables; this model was therefore not ideal because the limited number of tests representing each domain in the battery forced several single-indicator factors.
Despite these limitations, this study adds valuable information to the assessment of cognition and functional capacity in individuals with schizophrenia. Knowledge regarding the factor structure of the MCCB cognitive variables as well as their relationship to functional capacity is vital, particularly as use of the MCCB continues to grow and as functionally meaningful co-primary measures are identified. Consistent with the guidance of the MATRICS initiative, these results indicate that future research should include measures of cognition as well as performance-based functional capacity when examining functional outcomes in people with schizophrenia.
Acknowledgment
The authors would like to thank Laura Vergel de Dios for her assistance with the preparation of this manuscript.
Role of funding source
This work was supported by the National Institute of Mental Health (R01 MH078737 to T.L.P. and R01 MH078775 to P.D.H.; C.Z.B. was supported by T32 MH019934).
Footnotes
Contributors
Authors Harvey, Patterson, Heaton, and Twamley designed the study and wrote the protocol. Author Burton managed the literature search. Authors Burton and Vella undertook the statistical analysis, and author Burton wrote the first draft of the manuscript. All authors contributed to and have approved the final manuscript.
Conflict of interest
Dr. Harvey has received consulting fees from Abbott Labs, Amgen, Boehringer Ingelheim, Genentech, Johnson and Johnson, Otsuka America, PharmaNeuroBoost, Roche Pharma, Sunovion Pharma, and Takeda Pharma during the past year. Dr. Patterson has served as a consultant for Abbott Labs and Amgen. No other authors have competing interests to report.
References
- August SM, Kiwanuka JN, McMahon RP, Gold JM. The MATRICS Consensus Cognitive Battery (MCCB): clinical and cognitive correlates. Schizophr. Res. 2011;134:76–82. doi: 10.1016/j.schres.2011.10.015.
- Beck AT, Steer RA, Brown GK. Manual for the Beck Depression Inventory-II. Psychological Corporation; Texas: 1996.
- Bentler PM, Bonett DG. Significance tests and goodness of fit in the analysis of covariance structures. Psychol. Bull. 1980;88:588–606.
- Bowie CR, Reichenberg A, Patterson TL, Heaton RK, Harvey PD. Determinants of real-world functional performance in schizophrenia subjects: correlations with cognition, functional capacity, and symptoms. Am. J. Psychiatry. 2006;163:418–425. doi: 10.1176/appi.ajp.163.3.418.
- Bowie CR, Depp C, McGrath JA, Wolyniec P, Mausbach BT, Thornquist MH, Luke J, et al. Prediction of real-world functional disability in chronic mental disorders: a comparison of schizophrenia and bipolar disorder. Am. J. Psychiatry. 2010;167:1116–1124. doi: 10.1176/appi.ajp.2010.09101406.
- Dickinson D, Harvey PD. Systemic hypotheses for generalized cognitive deficits in schizophrenia: a new take on an old problem. Schizophr. Bull. 2009;35:403–414. doi: 10.1093/schbul/sbn097.
- Dickinson D, Ramsey ME, Gold JM. Overlooking the obvious: a meta-analytic comparison of digit symbol coding tasks and other cognitive measures in schizophrenia. Arch. Gen. Psychiatry. 2007;64:532–542. doi: 10.1001/archpsyc.64.5.532.
- Dickinson D, Ragland JD, Gold JM, Gur RC. General and specific cognitive deficits in schizophrenia: Goliath defeats David? Biol. Psychiatry. 2008;64:823–827. doi: 10.1016/j.biopsych.2008.04.005.
- Eack SM, Pogue-Geile MF, Greeno CG, Keshavan MS. Evidence of factorial variance of the Mayer–Salovey–Caruso Emotional Intelligence Test across schizophrenia and normative samples. Schizophr. Res. 2009;114:105–109. doi: 10.1016/j.schres.2009.05.011.
- Elvevag B, Goldberg TE. Cognitive impairment in schizophrenia is the core of the disorder. Crit. Rev. Neurobiol. 2000;14:1–21.
- Fett AJ, Viechtbauer W, Dominguez M, Penn DL, van Os J, Krabbendam L. The relationship between neurocognition and social cognition with functional outcomes in schizophrenia: a meta-analysis. Neurosci. Biobehav. Rev. 2011;35:573–588. doi: 10.1016/j.neubiorev.2010.07.001.
- First MB, Spitzer RL, Gibbon M, Williams JBW. Structured Clinical Interview for DSM-IV Axis I Disorders (SCID I). Biometrics Research; New York: 1995.
- Folstein MF, Folstein SE, McHugh PR, Fanjiang G. Mini-Mental State Examination User's Guide. Psychological Assessment Resources; Florida: 2001.
- Green MF, Kern RS, Braff DL, Mintz J. Neurocognitive deficits and functional outcome in schizophrenia: are we measuring the “right stuff”? Schizophr. Bull. 2000;26:119–136. doi: 10.1093/oxfordjournals.schbul.a033430.
- Green MF, Nuechterlein KH, Kern RS, Baade LE, Fenton WS, Gold JM, Keefe RS, et al. Functional co-primary measures for clinical trials in schizophrenia: results from the MATRICS psychometric and standardization study. Am. J. Psychiatry. 2008;165:221–228. doi: 10.1176/appi.ajp.2007.07010089.
- Green MF, Schooler NR, Kern RS, Frese FJ, Granberry W, Harvey PD, Karson CN, et al. Evaluation of functionally meaningful measures for clinical trials of cognition enhancement in schizophrenia. Am. J. Psychiatry. 2011;168:400–407. doi: 10.1176/appi.ajp.2010.10030414.
- Harvey PD, Helldin L, Bowie CR, Heaton RK, Olsson A, Hjärthag F, Norlander T, et al. Performance-based measurement of functional disability in schizophrenia: a cross-national study in the United States and Sweden. Am. J. Psychiatry. 2009;166:821–827. doi: 10.1176/appi.ajp.2009.09010106.
- Harvey PD, Raykov T, Twamley EW, Vella L, Heaton RK, Patterson TL. Validating the measurement of real-world functional outcomes: phase 1 results of the VALERO study. Am. J. Psychiatry. 2011;168:1195–1201. doi: 10.1176/appi.ajp.2011.10121723.
- Heaton RK, Gladsjo JA, Palmer BW, Kuck J, Marcotte TD, Jeste DV. Stability and course of neuropsychological deficits in schizophrenia. Arch. Gen. Psychiatry. 2001;58:24–32. doi: 10.1001/archpsyc.58.1.24.
- Heinrichs RW, Zakzanis KK. Neurocognitive deficit in schizophrenia: a quantitative review of the evidence. Neuropsychology. 1998;3:426–445. doi: 10.1037//0894-4105.12.3.426.
- Hu L, Bentler PM. Cutoff criteria for fit index in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equ. Model. 1999;6:1–55.
- Kay SR, Fiszbein A, Opler LA. The positive and negative syndrome scale (PANSS) for schizophrenia. Schizophr. Bull. 1987;13:261–276. doi: 10.1093/schbul/13.2.261.
- Keefe RS, Bilder RM, Harvey PD, Davis SM, Palmer BW, Gold JM, Meltzer HY, et al. Baseline neurocognitive deficits in the CATIE schizophrenia trial. Neuropsychopharmacology. 2006a;31:2033–2046. doi: 10.1038/sj.npp.1301072.
- Keefe RS, Poe M, Walker TM, Harvey PD. The relationship of the brief assessment of cognition in schizophrenia (BACS) to functional capacity and real-world functional outcome. J. Clin. Exp. Neuropsychol. 2006b;28:260–269. doi: 10.1080/13803390500360539.
- Keefe RS, Fox KH, Harvey PD, Cucchiaro J, Siu C, Loebel A. Characteristics of the MATRICS consensus cognitive battery in a 29-site antipsychotic schizophrenia clinical trial. Schizophr. Res. 2011;125:161–168. doi: 10.1016/j.schres.2010.09.015.
- Kern RS, Gold JM, Dickinson D, Green MF, Nuechterlein KH, Baade LE, Keefe RS, et al. The MCCB impairment profile for schizophrenia outpatients: results from the MATRICS psychometric and standardization study. Schizophr. Res. 2011;126:124–131. doi: 10.1016/j.schres.2010.11.008.
- Leifker FR, Bowie CR, Harvey PD. Determinants of everyday outcomes in schizophrenia: the influences of cognitive impairment, functional capacity, and symptoms. Schizophr. Res. 2009;115:82–87. doi: 10.1016/j.schres.2009.09.004.
- Mausbach BT, Harvey PD, Goldman SR, Jeste DV, Patterson TL. Development of a brief scale of everyday functioning in persons with serious mental illness. Schizophr. Bull. 2007;33:1364–1372. doi: 10.1093/schbul/sbm014.
- McClure MM, Bowie CR, Patterson TL, Heaton RK, Weaver C, Anderson H, Harvey PD. Correlations of functional capacity and neuropsychological performance in older patients with schizophrenia: evidence for specificity of relationships? Schizophr. Res. 2007;89:330–338. doi: 10.1016/j.schres.2006.07.024.
- Nuechterlein KH, Green MF, Kern RS, et al. The MATRICS Consensus Cognitive Battery, part 1: test selection, reliability, and validity. Am. J. Psychiatry. 2008;165:203–213. doi: 10.1176/appi.ajp.2007.07010042.
- Nunnally J. Psychometric Theory. McGraw-Hill; New York: 1978.
- Patterson TL, Moscona S, McKibbin CL, Davidson K, Dilip JV. Social skills performance assessment among older patients with schizophrenia. Schizophr. Res. 2001;48:351–360. doi: 10.1016/s0920-9964(00)00109-2.
- Pietrzak RH, Olver J, Norman T, Piskulic D, Maruff P, Snyder P. A comparison of the CogState Schizophrenia Battery and the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) battery in assessing cognitive impairment in chronic schizophrenia. J. Clin. Exp. Neuropsychol. 2009;31:848–859. doi: 10.1080/13803390802592458.
- Reichenberg A, Harvey PD. Neuropsychological impairments in schizophrenia: integration of performance-based and brain imaging findings. Psychol. Bull. 2007;133:833–858. doi: 10.1037/0033-2909.133.5.833.
- Sheehan DV, Lecrubier Y, Sheehan KH, et al. The Mini-International Neuropsychiatric Interview (M.I.N.I.): the development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. J. Clin. Psychiatry. 1998;59:22–33.
- Silverstein SM, Jaeger J, Donovan-Lepore AM, et al. A comparative study of the MATRICS and IntegNeuro cognitive assessment batteries. J. Clin. Exp. Neuropsychol. 2010;32:937–952. doi: 10.1080/13803391003596496.
- Twamley EW, Doshi RR, Nayak GV, Palmer BW, Golshan S, Heaton RK, Patterson TL, et al. Generalized cognitive impairments, ability to perform everyday tasks, and level of independence in community living situations of older patients with psychosis. Am. J. Psychiatry. 2002;159:2013–2020. doi: 10.1176/appi.ajp.159.12.2013.
- Wilkinson GS. Wide-Range Achievement Test 3: Administration Manual. Wide Range Inc.; Delaware: 1993.