Abstract
Attention deficits, among other cognitive deficits, are frequently observed in schizophrenia. Although valid and reliable neurocognitive tasks have been established to assess attention deficits in schizophrenia, the hierarchical value of those tests as diagnostic discriminants on a single-subject level remains unclear. Thus, much research is devoted to attention deficits that are unlikely to be translated into clinical practice. On the other hand, a clear hierarchy of attention deficits in schizophrenia could considerably aid diagnostic decisions and may prove beneficial for longitudinal monitoring of therapeutic advances. To propose a diagnostic hierarchy of attention deficits in schizophrenia, we investigated several facets of attention in 86 schizophrenia patients and 86 healthy controls using a set of established attention tests. We applied state-of-the-art machine learning algorithms to determine the attention test variables that enable an automated differentiation between schizophrenia patients and healthy controls. After feature preranking, hypothesis building, and hypothesis validation, the radial basis function support vector machine classifier achieved a classification accuracy of 90.70% ± 2.9% using psychomotor speed and 3 different attention parameters derived from sustained and divided attention tasks. Our study proposes, to the best of our knowledge, the first hierarchy of attention deficits in schizophrenia by identifying the most discriminative attention parameters among the variety of attention deficits found in schizophrenia patients. Our results offer a starting point for hierarchy building of schizophrenia-associated attention deficits and contribute to translating these concepts into diagnostic and therapeutic practice on a single-subject level.
Key words: focused attention, sustained attention, selective attention, alternating attention, divided attention, machine learning
Introduction
The identification of biological markers of schizophrenia is a central desideratum of biological psychiatry. However, reliable and objective biomarkers are still missing, and clinicians depend on formal diagnostic criteria and their clinical impression, consequently running the risk of biasing objective criteria toward subjective experience.
Cognitive deficits are among the most consistent findings in schizophrenia research and represent an important predictor of functional outcome.1,2 Meta-analyses have suggested generalized deficits of various cognitive functions in schizophrenia, including verbal and nonverbal memory, motor performance, attention, general intelligence, and executive function.3,4 Notably, high heterogeneity and discrepancies among studies have been reported.5,6 Several neurocognitive test batteries, such as the Measurement and Treatment Research to Improve Cognition in Schizophrenia, the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia, or the Cambridge Neuropsychological Test Automated Battery, have been established, containing valid and reliable neurocognitive tasks to assess deficits of different cognitive domains in schizophrenia and cognitive effects of compounds in clinical trials.7–9 However, given the nonspecific nature of cognitive deficits in schizophrenia and the heterogeneity of findings across studies, the value of those tests as diagnostic differentiators between schizophrenia patients and healthy subjects remains unclear.
Among the variety of cognitive deficits in schizophrenia, deficits of attention are among the most robust findings.10,11 Attention is usually fractionated into functional subcomponents;12–14 accordingly, neurocognitive schizophrenia research usually focuses on specific attention subcomponents that are assessed by numerous different tasks. For example, deficits of sustained attention have been frequently observed in different versions of the Continuous Performance Test (CPT);15–17 selective attention deficits involving alerting, orienting, and conflict were found using the Attention Network Test (ANT);18,19 reduced performance was further observed in variants of the Wisconsin Card Sorting Test (WCST), the Stroop task, the Trail Making Test, or masking tasks.20–25 Complicating reconciliation of these studies, performance deficits are often assigned to different cognitive functions across studies and/or to a diffuse nosological terminology that does not clearly distinguish between, eg, “executive,” “prefrontal,” or “top-down” functions. In this context, agreeing upon a commonly accepted model of attention could help draw together these dispersed findings. Moreover, given a coherent model of attention deficits, a clinically relevant hierarchy of attention deficits could considerably aid diagnostic decisions and may prove beneficial for longitudinal monitoring of therapeutic advances.
The present study focuses on the clinical model of attention proposed by Sohlberg and Mateer13 that distinguishes focused, sustained, divided, alternating, and selective attention on the basis of case histories of patients who successively recovered these attention domains after organic brain damage. The advantage of using this model for machine learning classification is that it integrates only those attention domains that are clinically clearly separable and thus contains no overlapping domains, which would inevitably bias any classificatory model toward these overlaps. We employed an evenly distributed set of attention domain tests in a large cohort of schizophrenia patients and healthy controls and applied state-of-the-art machine learning algorithms to determine those attention domain test variables that differentiate between patients and controls on a single-subject level.
Methods
Subjects
Eighty-six schizophrenia patients (38 females, 48 males) participated in this study. They met DSM-IV criteria for schizophrenia and had no psychiatric disorder other than schizophrenia and nicotine abuse/dependence. Current drug abuse as determined by urine toxicology led to exclusion from the study. None of the included patients had a history of severe medical disorder, severe neurological disorder, or electroconvulsive therapy. All patients were recruited from the inpatient unit and the outpatient facility of the Department of Psychiatry, Charité University Medicine, Berlin, Germany.
Eighty-six healthy subjects (34 females, 52 males) recruited via newspaper advertisements served as controls. Control participants were screened for mental and physical health and were excluded if they fulfilled criteria for a psychiatric disorder according to DSM-IV, as determined by structured clinical interviews.26 Further reasons for exclusion were a family history of psychiatric illness, medical or neurological disorders, or intake of psychotropic drugs, as confirmed by urine toxicology before participation in the study.
All participants were right handed and reported normal or corrected-to-normal vision and audition. All subjects gave written informed consent before participating in this study. This study was approved by the ethics committee of the University Hospital Benjamin Franklin, Charité University Medicine, Berlin, Germany, and was conducted in accordance with the Declaration of Helsinki and its subsequent amendments.
Cognitive Tasks
All participants completed a series of hierarchically organized attention domain tests including the Continuous Performance Test-Identical Pairs (CPT-IP), ANT, WCST, and the alertness (focused attention) and divided attention subtests of a German test of attention performance (Testbatterie zur Aufmerksamkeitsprüfung; TAP; version 1.7). The TAP contains 13 subtests evaluating different components of attention.27 In our study, participants completed the subtests for focused and divided attention. Every attention test was performed on a personal computer with a 17-inch cathode ray tube monitor. For every test, a set of 6 variables was extracted for further analyses.
Focused Attention.
In this TAP subtest, participants were instructed to press the response key whenever a cross appeared on the screen. The appearance of a cross was preceded by a warning tone. Following a training session with 5 trials, a total of 40 test trials were completed. Outcome measures consisted of mean reaction time (RT), number of valid reactions, anticipation errors, errors of omission, lapses of attention, and a specific parameter indicative of phasic alertness.
Sustained Attention.
The CPT-IP was developed as a test of sustained visual attention in both schizophrenia patients and healthy controls.28 Using their dominant hand, participants pressed the mouse button as fast as possible when 2 identical pairs of symbols were presented in sequence. Following a training session with 10 trials, 150 stimuli were presented with an invariant stimulus-onset asynchrony of 1000ms and a stimulus duration of 50ms. Thirty out of 150 stimuli served as target stimuli. Outcome measures were the d′ and log β, standard measures in signal detection theory representing the signal-to-noise ratio (d′) and the ratio of signal likelihood to noise likelihood (log β); hit rate (percentage correct hits) and mean RT for correct hits; and percentage of and RT for false alarms.
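The two signal-detection indices above follow standard definitions; the sketch below illustrates them (function and variable names are ours, not the study's; the natural-log form of β is shown, and the small clipping constant is an assumption to keep the z-transform finite at rates of exactly 0 or 1):

```python
# d' = z(hit rate) - z(false-alarm rate)
# log beta = (z_fa^2 - z_hit^2) / 2  (natural log of the likelihood ratio)
from statistics import NormalDist

def dprime_logbeta(hit_rate, fa_rate, eps=1e-4):
    z = NormalDist().inv_cdf          # inverse standard-normal CDF
    zh = z(min(max(hit_rate, eps), 1 - eps))
    zf = z(min(max(fa_rate, eps), 1 - eps))
    return zh - zf, (zf * zf - zh * zh) / 2.0

# Example: 80% hits on the 30 targets, 10% false alarms elsewhere
d_prime, log_beta = dprime_logbeta(0.80, 0.10)  # d' ≈ 2.12, log beta ≈ 0.47
```

A d′ near 0 indicates no discrimination between targets and nontargets, while log β > 0 indicates a conservative response criterion.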
Selective Attention.
The ANT is a test of selective visual attention that combines a cued detection paradigm with a flanker task.29 In the task, informative cues are presented for 100ms, followed by targets with a stimulus-onset asynchrony of 500ms. The cues appeared in the center, superimposed on the fixation cross (center cue; only temporally informative); simultaneously above and below the center (double cue; only temporally informative); either above or below the fixation cross (spatial cue; temporally and spatially informative); or were not presented (no cue). Targets consisted of 5 horizontally arranged arrows or lines presented above or below the fixation cross. Participants were instructed to indicate the direction of a central arrow by making a left or right button press on a keyboard, irrespective of the (compatible or incompatible) flanking arrows. Following a training session of 24 pseudo-randomized trials with feedback, participants completed a total of 288 pseudo-randomized trials, divided into 3 experimental blocks, without feedback. Outcome measures were attention network effects of alerting, orienting, and conflict, calculated as the difference in RT between different task conditions. Other dependent variables extracted for machine learning analysis were mean RT across trials, errors of commission, and errors of omission.
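The three network effects reduce to RT differences between task conditions, following the standard ANT definitions (Fan et al.29); the condition means below are invented for illustration:

```python
# Alerting  = RT(no cue)     - RT(double cue)
# Orienting = RT(center cue) - RT(spatial cue)
# Conflict  = RT(incongruent flankers) - RT(congruent flankers)
def ant_effects(rt):
    return {
        "alerting":  rt["no_cue"] - rt["double_cue"],
        "orienting": rt["center_cue"] - rt["spatial_cue"],
        "conflict":  rt["incongruent"] - rt["congruent"],
    }

# Invented per-condition mean RTs in milliseconds
rt_means = {"no_cue": 560.0, "double_cue": 520.0,
            "center_cue": 540.0, "spatial_cue": 495.0,
            "incongruent": 610.0, "congruent": 515.0}
effects = ant_effects(rt_means)
# → {"alerting": 40.0, "orienting": 45.0, "conflict": 95.0}
```

Larger alerting and orienting effects indicate greater benefit from temporal and spatial cues, respectively; a larger conflict effect indicates greater interference from incompatible flankers.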
Alternating Attention.
The WCST measures alternating attention by assessing estimates of establishing and shifting cognitive sets.30 Participants were instructed to sort stimulus cards on the basis of color, form, or number of symbols. The only feedback provided was whether the current response was correct or incorrect. The sorting rule changed after 10 consecutive correct responses. The test was discontinued when the participant had learned 2 iterations of each sorting rule or had completed 128 trials. The primary outcome measures used for this study were failures to maintain set, number of conceptual level responses, perseverative responses, perseverative errors, nonperseverative errors, and number of categories completed.
Divided Attention.
The TAP subtest for divided attention combines a visual and an auditory task, both of which have to be processed in parallel. During the test, varying numbers of crosses appeared on the screen simultaneously, while a high and a low tone were presented in sequence at the same time. Participants were instructed to press a response key when 4 crosses formed a square or when the same tone was presented twice in a row. Following a training session of 10 trials, a total of 50 test trials was presented. Outcome measures were mean RT, number of valid reactions, errors of commission, errors of omission, lapses of attention, and late reactions.
Additionally, all participants were tested with a basic neuropsychological test battery including verbal31 and nonverbal intelligence tests32 along with the Digit Symbol Test and the Trail Making Test (see table 1).
Table 1.
Summary of Demographic and Clinical Data
| | Schizophrenia | Controls | P |
|---|---|---|---|
| N (females/males) | 86 (38/48) | 86 (34/52) | .033 |
| Age (y) | 36.33±10.6 | 32.58±9.3 | .015 |
| Age (range) | 18–67 | 18–65 | — |
| Education (y) | 13.33±2.6 | 15.29±2.4 | <.001 |
| Verbal IQ | 106.18±14.8 | 114.93±14.1 | <.001 |
| Nonverbal IQ | 103.75±11.6 | 114.29±8.6 | <.001 |
| Digit Symbol Test | 43.66±11.8 | 60.07±12.9 | <.001 |
| Trail Making Test—A | 36.48±15.2 | 26.14±6.8 | <.001 |
| Trail Making Test—B | 93.84±51.7 | 56.62±18.1 | <.001 |
| DOI (mo) | 97.21±101.8 | — | — |
| N episodes | 3.54±2.8 | — | — |
| PANSS positive scale | 13.18±4.3 | — | — |
| PANSS negative scale | 17.46±6.3 | — | — |
| PANSS general scale | 31.89±9.1 | — | — |
| CPZ equivalents (mg/d) | 510.25±374.2 | — | — |
Note: DOI, duration of illness; PANSS, Positive and Negative Syndrome Scale; CPZ, chlorpromazine.
Between-group differences were assessed with t tests for independent samples or chi-square tests, as appropriate.
Machine Learning Pattern Classification
The machine learning discovery methods involve 3 separate stages of analysis: feature selection, hypothesis ranking, and validation. The bulk of this analysis was done in Matlab (Mathworks) with help from the open-source packages Weka (data mining software in Java, available online at http://www.cs.waikato.ac.nz/ml/weka/), LibSVM (library for support vector machines, available online at http://www.csie.ntu.edu.tw/~cjlin/libsvm/), and bolasso.m (bootstrap-enhanced least absolute shrinkage operator, available online at http://code.google.com).
Machine learning analysis was performed on n = 30 outcome measures of the attention tasks. Digit Symbol Test (N correct) and IQ (mean IQ, averaged across the verbal and nonverbal domains) were entered as additional features because of substantial evidence for a large effect of schizophrenia diagnosis on both Digit Symbol Test (DST)6 and IQ performance,3,4 so that these measures might add to classification accuracy on a single-subject level.
The first 2 stages involve a randomly chosen subset of the entire data set comprising 75% of the data, ie, the training subset. In the first stage (feature selection), features (preselected for relevance on physiological grounds) are ranked according to each feature’s strength of difference between control and patient groups as measured by the Kolmogorov-Smirnov test. In order to offer an intuitive measure of strength of difference (other than the P value of the Kolmogorov-Smirnov test), we also provide the classification accuracy of a linear discriminant analysis classifier, ie, the division of the real line into 2 segments (the simplest means of classifying a 1-dimensional value). The justification for providing such an error value is to give an approximate lower bound for the final classification accuracy of more complex, multidimensional classifiers.
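A minimal sketch of this preranking step, using synthetic normally distributed scores as stand-ins for the real data (group means and SDs loosely modeled on the DST row of table 1):

```python
import random
from bisect import bisect_right

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max |ECDF_a(x) - ECDF_b(x)|."""
    sa, sb = sorted(a), sorted(b)
    return max(abs(bisect_right(sa, x) / len(sa)
                   - bisect_right(sb, x) / len(sb)) for x in sa + sb)

def best_threshold_accuracy(a, b):
    """Accuracy of the best single cut point on the real line, ie, the
    simplest 1-dimensional discriminant described in the text."""
    n = len(a) + len(b)
    best = 0.5
    for t in sorted(set(a + b)):
        correct = sum(x <= t for x in a) + sum(x > t for x in b)
        best = max(best, correct / n, (n - correct) / n)
    return best

# Synthetic stand-in scores (86 per group, as in the study)
random.seed(0)
patients = [random.gauss(43.7, 11.8) for _ in range(86)]
controls = [random.gauss(60.1, 12.9) for _ in range(86)]
ks = ks_statistic(patients, controls)          # strength of difference
acc = best_threshold_accuracy(patients, controls)  # 1-D lower bound
```

Features would then be ranked by `ks`, with `acc` reported as the approximate lower bound on multidimensional classification accuracy.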
In the next step of analysis (hypothesis ranking), a large number of hypotheses are evaluated over the training subset in terms of their classification error, measured according to the 10-fold cross-validated balanced error rate, which, in the case of equal numbers of samples from each class, is the same as the error rate proper. The hypotheses are formed as follows: a candidate feature combination set is constructed, consisting of exhaustive combinatorial subsets of up to 5 features, to which an element consisting of all strong features is added. The set product of the candidate feature combination set and the classifier set forms the hypothesis set. The classifier set is a list of common classification algorithms used in machine learning practice: linear and quadratic discriminants (LDA/QDA and their diagonal variants), support vector machines (SVM with radial basis, linear, polynomial, and multilayer perceptron kernels), naive Bayes, k-nearest neighbors with Euclidean and cosine distance measures, and Mahalanobis classifiers with different types of covariance estimators (no regularization, robust regression, and regularized/sparsified estimates: model-consistent Lasso estimation through the bootstrap [BOLASSO] and minimum description length [MDL]). A total of 15 classifier types is thus estimated, and a total of 405 hypotheses per data set are tested. Testing a set of hypotheses far greater than the number of samples is liable to over-fit even at this stage. The 10-fold cross-validated classification error (optimizing on 9/10 of the training data, evaluating on the remaining 1/10, repeated 10 times and averaged), called the predicted error, is used for the final ranking of hypotheses in the second stage of analysis. Hypotheses whose predicted accuracy was worse than that of other considered hypotheses with the same set of features were discarded.
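The combinatorial construction of the hypothesis set can be sketched as follows; the feature labels, the classifier list, and hence the resulting counts are illustrative stand-ins, smaller than the study's actual 405 hypotheses:

```python
from itertools import combinations

# Illustrative feature and classifier lists (placeholder names,
# not the study's actual variable labels or full 15-classifier set)
features = ["CPT_hits", "TAPD_Eom", "TAPD_RT", "DST", "TAPD_late", "IQ"]
classifiers = ["LDA", "QDA", "SVM_rbf", "SVM_linear", "SVM_poly", "kNN"]

# Candidate feature sets: exhaustive subsets of size 1..5
candidate_sets = [s for k in range(1, 6)
                  for s in combinations(features, k)]

# Hypothesis set = set product of candidate feature sets and classifiers
hypotheses = [(clf, s) for clf in classifiers for s in candidate_sets]

def kfold_indices(n, k=10, seed=0):
    """Shuffle sample indices and deal them into k near-equal folds,
    as used for the 10-fold cross-validated error estimate."""
    import random
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]
```

Each hypothesis would then be scored by training on 9 folds and evaluating on the held-out fold, averaged over the 10 rotations; with 6 features there are 62 candidate sets and 372 hypotheses here, versus 405 in the study.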
In the third and final step (validation), hypotheses are evaluated (best-to-worst) on the validation subset, ie, the 25% remaining from the original 75%/25% split, which until this step had not played a part in any fitting or optimization. Hypotheses that showed significant over-fitting according to a 2-sided binomial test were discarded.
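One plausible reading of this over-fit check is an exact 2-sided binomial test of the observed validation error count against the predicted (cross-validated) error rate; the sketch below uses illustrative numbers:

```python
from math import comb

def binom_two_sided(k, n, p):
    """Exact 2-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed error count k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return min(1.0, sum(q for q in pmf if q <= pmf[k] + 1e-12))

# Example: 4 errors among 43 validation subjects (25% of 172),
# tested against a predicted error rate of 12.38%
p_value = binom_two_sided(4, 43, 0.1238)
```

A large p-value means the validation error is consistent with the predicted error, so the hypothesis is retained; a small p-value flags a validation error significantly different from the prediction.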
Results
Table 2 summarizes the results from feature preranking according to the accuracy of individual discrimination between controls and patients, based on separately calculated between-group significances with Kolmogorov-Smirnov test. This first step in our machine learning algorithm allowed for exclusion of noninformative cognitive test variables and established a preliminary set of 18 cognitive measures that principally discriminate between groups.
Table 2.
Preranking of Statistically Significant Test Variables
| Test variable | P a | Cohen’s d b |
|---|---|---|
| DST | <.001 | 1.33 |
| CPT-IP: hits | <.001 | 1.17 |
| IQ | <.001 | 1.00 |
| TAP (divided attention): valid reactions | <.001 | 0.96 |
| ANT: mean RT | <.001 | 0.83 |
| TAP (divided attention): errors of omission | <.001 | 0.83 |
| TAP (divided attention): mean RT | <.001 | 0.78 |
| CPT-IP: d′ | <.001 | 0.77 |
| TAP (focused attention): mean RT | <.001 | 0.72 |
| WCST: nonperseverative errors | .001 | 0.71 |
| WCST: categories completed | .002 | 0.70 |
| TAP (divided attention): delayed reactions | .012 | 0.70 |
| WCST: perseverative errors | <.001 | 0.62 |
| WCST: perseverative responses | <.001 | 0.59 |
| TAP (focused attention): phasic alertness | .002 | 0.37 |
| ANT: errors of omission | .004 | 0.31 |
| WCST: conceptual level responses | .03 | 0.31 |
| CPT-IP: log β | .004 | 0.28 |
Note: DST, Digit Symbol Test; CPT-IP, Continuous Performance Test-Identical Pairs; TAP, Testbatterie zur Aufmerksamkeitsprüfung; IQ, intelligence quotient; ANT, Attention Network Test; WCST, Wisconsin Card Sorting Test.
aBetween-group significances as obtained in Kolmogorov-Smirnov test (uncorrected).
bCohen’s d was computed as d = (M1 − M2) / SDpooled, with SDpooled = sqrt((SD1² + SD2²)/2) for the 2 equally sized groups.
Based on feature preranking, we subsequently identified strong features that differentiate between control and patient groups. These identified strong features are illustrated in figure 1. This procedure allowed for excluding the majority of variables, thus leaving a set of 5 cognitive measures that are further grouped into hypothetical clusters based on their between-group differences of cumulative probability density function (see figure 1).
Fig. 1.
Strong features found in the training set. The x axis of each panel is a nonlinear transformation of the raw value, 2/π atan(x/2), chosen to provide a common visual axis for features that occur on different scales and ranges. The curves and intensity levels show a smoothed cumulative probability density function for each class, along with the absolute difference between the two (red line), whose maximum value is the Kolmogorov-Smirnov test statistic. The figure shows that while similar feature types have similar distribution shapes (eg, omissions and late reactions in the divided attention test), the strongest differences are distributed among various feature types, suggesting that each type provides complementary information about the diagnostic state of a subject. cpt05: CPT-IP hits; d3aug: TAP (divided attention) errors of omission; d3mwg: TAP (divided attention) mean reaction time; speed1: DST; d3slow: TAP (divided attention) late reactions.
Following hypothesis ranking, the top 5 identified hypotheses are shown in table 3 in terms of predicted and validation errors, along with the expected SD of the binomial distribution based on the validation error and the number of samples in the validation set. Note that the predicted and validated errors are within the margin of error. The results revealed validation errors of around 9%–12% along with a mean SD of around 3% among the top 5 hypotheses. Here, the radial basis function SVM achieved a classification accuracy of 90.70% ± 2.9% using psychomotor speed and 3 different attention parameters belonging to the sustained and divided attention domains.
Table 3.
Classification Accuracy Measured as Prediction and Validation Error Rates
| Machine | SVM (Radial Basis Function) | SVM (Linear) | SVM (Radial Basis Function) | LDA | LDA |
|---|---|---|---|---|---|
| PE | 12.38% | 18.13% | 13.15% | 17.23% | 18.60% |
| VE ± SD | 9.30±2.9% | 9.30±3.4% | 11.63±3.0% | 11.63±3.3% | 11.63±3.4% |
| Feature 1 | CPT-IP: hits | TAP-D: Eom | CPT-IP: hits | CPT-IP: hits | CPT-IP: hits |
| Feature 2 | TAP-D: Eom | TAP-D: RT | DST | TAP-D: RT | TAP-D: RT |
| Feature 3 | DST | DST | TAP-D: late | DST | DST |
| Feature 4 | TAP-D: late | — | — | TAP-D: late | — |
Note: PE, prediction error; VE, validation error; SVM, support vector machine (with different kernels); LDA, linear discriminant analysis; CPT-IP, Continuous Performance Test-Identical Pairs; TAP-D: subtest for divided attention; Eom, errors of omission; late, late reactions; RT, reaction time; DST, Digit Symbol Test.
Tested methods ranked according to VE. The ± error corresponds to the SD of the binomial distribution at the VE and validation set size (approximately corresponding to a 75% CI). For each machine learning classifier, prediction and VE rates along with selected feature sets are given.
Of note, the chosen features were identified by virtually all classifiers; they are not simply the top N-ranked features, but selected subsets of features.
Discussion
We investigated specific attention domains according to the clinical model of attention proposed by Sohlberg and Mateer13 in schizophrenia patients and healthy controls. Using advanced machine learning methods, we identified a set of 4 test features that sufficed to obtain a cumulative classification accuracy of 90.70%. The closeness of the prediction error (12.38%) to the validation error (9.30%) is particularly convincing and argues against overfitting the mathematical model to the data. The identified feature set consisted of variables extracted from psychomotor speed as well as sustained and divided attention tasks that distinguish between patients and controls over and above statistical significance. This classification establishes, to the best of our knowledge, the first hierarchy of attention deficits in schizophrenia, suggesting superiority of our combination of attention deficits for diagnostic single-subject classification of schizophrenia.
Most classification studies of schizophrenia used elements of structural or functional magnetic resonance imaging as classifiers, most of them also achieving 80–90% classification accuracy.33–36 Other classification studies focused on electrophysiological methods and obtained classification accuracies of around 70–80%.37–40 Neurocognitive measurements, though most easily accessible and frequently used in both research and clinical routine, have not yet been employed in cross-sectional classification studies. A recent study applied neurocognitive pattern classification to predict conversion to psychosis in at-risk mental states and achieved around 90% classification accuracy in a small and thus preliminary sample.41 By using carefully assembled attention tests that each tap into a different domain, we arrived at a classification accuracy comparable to that reported in the aforementioned studies.
Beyond the classification accuracy, the incremental value of the current study is best considered in the light of a recent criticism expressed by Kapur et al.42 They delineated reasons for the inability of biological psychiatry to establish biomarkers and emphasized that ‘significance chasing’ is frequently performed at the expense of meaningful effect sizes. The implementation of study designs is usually geared toward existing, significant data that either replicate previous findings or that are thought to be replicable. The significance of a biological finding, however, does not necessarily mean that the respective effect is clinically important. Our study addresses this criticism by quantifying the diagnostic accuracy of attention deficits in schizophrenia and thus estimating the hierarchical utility of associated measures. When performing conventional Kolmogorov-Smirnov tests, 16 variables extracted from all performed attention domain tests were statistically significant between the 2 diagnostic groups. However, most of those variables, though statistically significant, remain irrelevant for diagnostic purposes, thus nicely illustrating the difference between statistical significance and clinical usefulness of research data.
The identified discriminators of the present study are largely consistent with previous findings. Poorer performance in the CPT is reported for both schizophrenia patients and their first-degree relatives.43–45 Measures of hit rate have been widely used as a measure of CPT performance and show high test-retest reliability in schizophrenia patients.43,46,47 Concerning the DST, Dickinson and colleagues (2007) found a very large effect size of 1.57 (Hedges g), which is essentially replicated by our study (Cohen’s d = 1.33).6 Divided attention tasks are rarely applied in schizophrenia research; where they are, the number of omission errors is frequently used to characterize patients with schizophrenia or subjects at high risk of developing psychosis.48,49 However, data on the reliability of this outcome measure are scarce. Here, our study may serve as a starting point in establishing a classificatory hierarchy of attention deficits in schizophrenia that includes errors of omission as an appropriate measure of divided attention.
There are several limitations to the present study. First, the number of samples can be considered relatively small for machine learning algorithms and may have resulted in imprecise classification accuracy. On the other hand, and from a clinical point of view, the total of 172 subjects represents a large study population compared with the majority of cognitive studies. Another limitation concerns the significant between-group differences in education and IQ, which may influence the attention functions of participants. This serious potential confound is, however, inherent to virtually all cognitive studies of schizophrenia, given the debilitating nature of the disorder. To minimize this confound, mean IQ was entered as a feature in our machine learning analysis but did not contribute to the final single-subject classification. Finally, the study focused on attention deficits, while other relevant cognitive deficits reported in schizophrenia, such as verbal memory or working memory, were not primarily considered. Studies including additional neurocognitive tasks will likely result in further improved classification accuracy.
In conclusion, the present study has established a hierarchy of attention deficits in schizophrenia and determined neurocognitive test variables that allow an automated discrimination between healthy and schizophrenic subjects. The study further highlights the potential of machine learning algorithms in the identification of biomarkers that might be employed in addition to clinical assessments.
Acknowledgments
We thank all participants who contributed their time and effort to this study. There is no conflict of interest for any of the authors.
References
- 1. Green MF, Kern RS, Braff DL, Mintz J. Neurocognitive deficits and functional outcome in schizophrenia: are we measuring the “right stuff”? Schizophr Bull. 2000;26:119–136 [DOI] [PubMed] [Google Scholar]
- 2. Green MF, Kern RS, Heaton RK. Longitudinal studies of cognition and functional outcome in schizophrenia: implications for MATRICS. Schizophr Res. 2004;72:41–51 [DOI] [PubMed] [Google Scholar]
- 3. Heinrichs RW, Zakzanis KK. Neurocognitive deficit in schizophrenia: a quantitative review of the evidence. Neuropsychology. 1998;12:426–445 [DOI] [PubMed] [Google Scholar]
- 4. Mesholam-Gately RI, Giuliano AJ, Goff KP, Faraone SV, Seidman LJ. Neurocognition in first-episode schizophrenia: a meta-analytic review. Neuropsychology. 2009;23:315–336 [DOI] [PubMed] [Google Scholar]
- 5. Fioravanti M, Carlone O, Vitale B, Cinti ME, Clare L. A meta-analysis of cognitive deficits in adults with a diagnosis of schizophrenia. Neuropsychol Rev. 2005;15:73–95 [DOI] [PubMed] [Google Scholar]
- 6. Dickinson D, Ramsey ME, Gold JM. Overlooking the obvious: a meta-analytic comparison of digit symbol coding tasks and other cognitive measures in schizophrenia. Arch Gen Psychiatry. 2007;64:532–542 [DOI] [PubMed] [Google Scholar]
- 7. Barnett JH, Robbins TW, Leeson VC, Sahakian BJ, Joyce EM, Blackwell AD. Assessing cognitive function in clinical trials of schizophrenia. Neurosci Biobehav Rev. 2010;34:1161–1177 [DOI] [PubMed] [Google Scholar]
- 8. Nuechterlein KH, Green MF, Kern RS, et al. The MATRICS Consensus Cognitive Battery, part 1: test selection, reliability, and validity. Am J Psychiatry. 2008;165:203–213 [DOI] [PubMed] [Google Scholar]
- 9. Carter CS, Minzenberg M, West R, Macdonald A., 3rd CNTRICS imaging biomarker selections: Executive control paradigms. Schizophr Bull. 2012;38:34–42 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. Elvevåg B, Goldberg TE. Cognitive impairment in schizophrenia is the core of the disorder. Crit Rev Neurobiol. 2000;14:1–21.
- 11. Nuechterlein KH, Barch DM, Gold JM, Goldberg TE, Green MF, Heaton RK. Identification of separable cognitive factors in schizophrenia. Schizophr Res. 2004;72:29–39.
- 12. Posner MI, Petersen SE. The attention system of the human brain. Annu Rev Neurosci. 1990;13:25–42.
- 13. Sohlberg MM, Mateer CA. Improving attention and managing attentional problems. Adapting rehabilitation techniques to adults with ADD. Ann N Y Acad Sci. 2001;931:359–375.
- 14. Luck SJ, Gold JM. The construct of attention in schizophrenia. Biol Psychiatry. 2008;64:34–39.
- 15. Cornblatt BA, Keilp JG. Impaired attention, genetics, and the pathophysiology of schizophrenia. Schizophr Bull. 1994;20:31–46.
- 16. Thaden E, Rhinewine JP, Lencz T, et al. Early-onset schizophrenia is associated with impaired adolescent development of attentional capacity using the identical pairs continuous performance test. Schizophr Res. 2006;81:157–166.
- 17. Birkett P, Brindley A, Norman P, Harrison G, Baddeley A. Control of attention in schizophrenia. J Psychiatr Res. 2006;40:579–588.
- 18. Wang K, Fan J, Dong Y, Wang CQ, Lee TM, Posner MI. Selective impairment of attentional networks of orienting and executive control in schizophrenia. Schizophr Res. 2005;78:235–241.
- 19. Neuhaus AH, Trempler NR, Hahn E, et al. Evidence of specificity of a visual P3 amplitude modulation deficit in schizophrenia. Schizophr Res. 2010;124:119–126.
- 20. Hartman M, Steketee MC, Silva S, Lanning K, Andersson C. Wisconsin Card Sorting Test performance in schizophrenia: the role of working memory. Schizophr Res. 2003;63:201–217.
- 21. Koren D, Seidman LJ, Harrison RH, et al. Factor structure of the Wisconsin Card Sorting Test: dimensions of deficit in schizophrenia. Neuropsychology. 1998;12:289–302.
- 22. Moritz S, Andresen B, Jacobsen D, et al. Neuropsychological correlates of schizophrenic syndromes in patients treated with atypical neuroleptics. Eur Psychiatry. 2001;16:354–361.
- 23. Holthausen EA, Wiersma D, Sitskoorn MM, et al. Schizophrenic patients without neuropsychological deficits: subgroup, disease severity or cognitive compensation? Psychiatry Res. 2002;112:1–11.
- 24. Brazo P, Delamillieure P, Morello R, Halbecq I, Marié RM, Dollfus S. Impairments of executive/attentional functions in schizophrenia with primary and secondary negative symptoms. Psychiatry Res. 2005;133:45–55.
- 25. Lalanne L, Dufour A, Després O, Giersch A. Attention and masking in schizophrenia. Biol Psychiatry. 2012;71:162–168.
- 26. First M, Spitzer R, Gibbon M, Williams J. Structured Clinical Interview for DSM-IV-TR Axis I Disorders—Non-patient Edition. New York, NY: New York State Psychiatric Institute; 2001.
- 27. Zimmermann P, Fimm B. Testbatterie zur Aufmerksamkeitsprüfung (TAP), Version 1.7. Herzogenrath, Germany: PSYTEST Psychologische Testsysteme; 2002.
- 28. Cornblatt BA, Risch NJ, Faris G, Friedman D, Erlenmeyer-Kimling L. The Continuous Performance Test, identical pairs version (CPT-IP): I. New findings about sustained attention in normal families. Psychiatry Res. 1988;26:223–238.
- 29. Fan J, McCandliss BD, Sommer T, Raz A, Posner MI. Testing the efficiency and independence of attentional networks. J Cogn Neurosci. 2002;14:340–347.
- 30. Heaton R. Wisconsin Card Sorting Test manual. Odessa, FL: Psychological Assessment Resources; 1981.
- 31. Lehrl S, Triebig G, Fischer B. Multiple choice vocabulary test MWT as a valid and short test to estimate premorbid intelligence. Acta Neurol Scand. 1995;91:335–345.
- 32. Horn W. Leistungsprüfsystem L-P-S – Handanweisung. 2nd ed. Göttingen: Hogrefe; 1983.
- 33. Davatzikos C, Shen D, Gur RC, et al. Whole-brain morphometric study of schizophrenia revealing a spatially complex set of focal abnormalities. Arch Gen Psychiatry. 2005;62:1218–1227.
- 34. Pardo PJ, Georgopoulos AP, Kenny JT, Stuve TA, Findling RL, Schulz SC. Classification of adolescent psychotic disorders using linear discriminant analysis. Schizophr Res. 2006;87:297–306.
- 35. Kawasaki Y, Suzuki M, Kherif F, et al. Multivariate voxel-based morphometry successfully differentiates schizophrenia patients from healthy controls. Neuroimage. 2007;34:235–242.
- 36. Yoon U, Lee JM, Im K, et al. Pattern classification using principal components of cortical thickness and its discriminative pattern in schizophrenia. Neuroimage. 2007;34:1405–1415.
- 37. Iyer D, Zouridakis G. Single-trial analysis of the auditory N100 improves separation of normal and schizophrenia subjects. Conf Proc IEEE Eng Med Biol Soc. 2008;2008:3840–3843.
- 38. Winterer G, Ziller M, Dorn H, et al. Frontal dysfunction in schizophrenia–a new electrophysiological classifier for research and clinical applications. Eur Arch Psychiatry Clin Neurosci. 2000;250:207–214.
- 39. Neuhaus AH, Popescu FC, Bates JA, Goldberg TE, Malhotra AK. Single-subject classification of schizophrenia using event-related potentials obtained during auditory and visual oddball paradigms. Eur Arch Psychiatry Clin Neurosci. 2013;263:241–247.
- 40. Neuhaus AH, Popescu FC, Grozea C, et al. Single-subject classification of schizophrenia by event-related potentials during selective attention. Neuroimage. 2011;55:514–521.
- 41. Koutsouleris N, Davatzikos C, Bottlender R, et al. Early recognition and disease prediction in the at-risk mental states for psychosis using neurocognitive pattern classification. Schizophr Bull. 2012;38:1200–1215.
- 42. Kapur S, Phillips AG, Insel TR. Why has it taken so long for biological psychiatry to develop clinical tests and what to do about it? Mol Psychiatry. 2012;17:1174–1179.
- 43. Kahn PV, Walker TM, Williams TS, Cornblatt BA, Mohs RC, Keefe RS. Standardizing the use of the Continuous Performance Test in schizophrenia research: a validation study. Schizophr Res. 2012;142:153–158.
- 44. Barrantes-Vidal N, Aguilera M, Campanera S, et al. Working memory in siblings of schizophrenia patients. Schizophr Res. 2007;95:70–75.
- 45. Laurent A, Saoud M, Bougerol T, et al. Attentional deficits in patients with schizophrenia and in their non-psychotic first-degree relatives. Psychiatry Res. 1999;89:147–159.
- 46. Birkett P, Sigmundsson T, Sharma T, et al. Reaction time and sustained attention in schizophrenia and its genetic predisposition. Schizophr Res. 2007;95:76–85.
- 47. Chen WJ, Hsiao CK, Hsiao LL, Hwu HG. Performance of the Continuous Performance Test among community samples. Schizophr Bull. 1998;24:163–174.
- 48. Moritz S, Ferahli S, Naber D. Memory and attention performance in psychiatric patients: lack of correspondence between clinician-rated and patient-rated functioning with neuropsychological test results. J Int Neuropsychol Soc. 2004;10:623–633.
- 49. Kim SJ, Lee YJ, Jang JH, Lim W, Cho IH, Cho SJ. The relationship between psychotic-like experiences and attention deficits in adolescents. J Psychiatr Res. 2012;46:1354–1358.