Abstract
Background
The Social Security Administration is considering whether schizophrenia may warrant inclusion in its new “Compassionate Allowance” process, which aims to identify diseases and other medical conditions that almost always qualify for Social Security disability benefits simply on the basis of their confirmed presence. This paper examines the reliability and validity of schizophrenia diagnosis, how a valid diagnosis is established, and the stability of the diagnosis over time. A companion paper summarizes evidence on the empirical association between schizophrenia and disability; the present paper evaluates the validity of clinical diagnoses of schizophrenia.
Methods
Literature review and synthesis, based on a workplan developed in an expert meeting convened by the National Institute of Mental Health and the Social Security Administration.
Findings
At least since the introduction of the 3rd edition of the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM-III) in 1980, diagnoses of schizophrenia made by mental health specialists have been valid, reliable, and stable over time, across community as well as academic practice settings, and across different assessment methods. These findings apply particularly to the time-frame relevant to Social Security awards: at least two years after the initial stages of illness. We could not find studies that have evaluated the validity or reliability of schizophrenia diagnoses made exclusively by primary care providers (vs. mental health professionals).
Discussion
In the post-DSM-III era, schizophrenia diagnosis – using modern diagnostic criteria – is valid and reliable when performed by doctoral-level mental health specialists (i.e., psychiatrists and psychologists), in community as well as academic settings.
I. BACKGROUND
As we note in the accompanying white paper, it is our position that the presence of schizophrenia is consistently associated with impairments in the ability to function adequately in everyday life. In this paper, we present information regarding the accuracy with which schizophrenia can be identified by mental health clinicians in regular community settings, in established cases with an extended duration of illness. We also consider the reliability of the identification of schizophrenia across different sources of diagnostic information, across different methods, and across the individuals who may be generating these diagnoses (typically, doctoral-level mental health professionals). Finally, we consider whether any specific procedures may be required to generate a valid and reliable diagnosis of schizophrenia.
Of particular interest is the duration of illness after which the diagnosis of schizophrenia and its associated disability can be considered stable. The SSA requires a continuous duration of illness and disability of 2 years before considering a person with the illness for disability compensation. Thus, the critical focus is not the period prior to the illness (the prodrome), or the first episode, or the period after accrual of a minimal treatment history, but rather the period after there is an established illness. Much of the data that we review are older, but still supportive of the conclusion that after a certain initial period of evaluation and treatment, a clinically derived diagnosis is likely to be temporally stable and obtainable from a variety of medical records generated by mental health professionals.
Schizophrenia as a Nosological Entity
The concept of psychotic disorders has a long history, but the systematic descriptions of German and Swiss phenomenologists at the end of the 19th century provide the current basis for our thinking about the clinical symptoms of schizophrenia. Kraepelin (1919) and Bleuler (1911) defined psychotic conditions consistent with the modern view of schizophrenia, which could be differentiated from other serious mental illnesses such as manic-depression (i.e., bipolar disorder, major depressive disorder). These descriptions consistently noted that disability in the performance of everyday functions was a component of the illness, although the two classic descriptions by Kraepelin and Bleuler differed somewhat in the extent to which they believed that disability was ubiquitously present and in the extent to which recovery was possible.
These definitions were clear, specific, and definitive, and led to a revolution in the assessment of severe mental illnesses. However, there was a period of time, particularly in the United States, when the nosological entity of schizophrenia became more vaguely defined and the reliability of the diagnosis was uncertain. The first two editions of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM), in fact, had diagnostic definitions of schizophrenia that were problematic because they lacked clear behavioral referents (Davison and Neale, 1975). Diagnoses of psychotic disorders made with these criteria were unreliable, with percentage agreement between two raters for the diagnosis of schizophrenia as low as 53% (Beck et al., 1962). In contrast, the subsequent editions of the DSM, beginning with DSM-III (American Psychiatric Association, 1980) in 1980, contained diagnostic definitions of schizophrenia that were characterized by a focus on specific and enduring behavioral manifestations of the illness and were accompanied by high levels of diagnostic reliability when structured assessment procedures, as described below, were used. The reliability of the diagnosis of schizophrenia at the time of the introduction of DSM-III was very high, with a kappa coefficient of .81. As kappa coefficients are corrected for chance agreement and baseline frequencies of occurrence, a kappa coefficient this high is typically associated with at least 90% agreement between two mental health clinicians on the diagnosis of schizophrenia.
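For illustration only (the chance-agreement rate used here is an assumed value, not a figure reported in the field trials), the relationship between percent agreement and kappa can be written as

$$\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad \text{equivalently} \qquad p_o = p_e + \kappa\,(1 - p_e),$$

where $p_o$ is the observed agreement and $p_e$ is the agreement expected by chance. With an assumed chance agreement of $p_e \approx 0.50$ (plausible when the diagnostic alternatives are roughly balanced in a sample), a kappa of .81 implies observed agreement of roughly $0.50 + 0.81 \times 0.50 \approx 0.91$, consistent with the statement above that a kappa of this magnitude typically reflects at least 90% agreement.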
The American Psychiatric Association is currently developing the 5th edition of the DSM (DSM-V); one of the authors of the current article (WTC) chairs the Psychotic Disorders Workgroup for the APA’s DSM-V Task Force. With the caveat that the DSM-V has not been finalized at the time of this writing, this paper incorporates potentially relevant revisions to the schizophrenia diagnostic criteria. The most likely alterations in the DSM-V are aimed at increasing the overall confidence in the diagnosis, by requiring the presence of more psychotic symptoms and eliminating the diagnosis of schizophrenia on the basis of a single symptom. The requirements for the presence of disability and a certain duration of illness are not expected to change.
II. METHODS
The companion paper describes the methods for these two papers.
III. VALIDITY AND RELIABILITY OF SCHIZOPHRENIA DIAGNOSIS
Structured Diagnostic Assessment
One of the responses to the problem of the low reliability of psychiatric diagnoses in the pre-DSM-III era was the development of structured diagnostic interviews. These interviews, which began with the Present State Examination (PSE) (Wing et al., 1967) in the UK and the Current and Past Psychopathology Scales (CAPPS) (Spitzer and Endicott, 1968) in the USA, responded to low inter-rater reliability of diagnosis by standardizing the collection of diagnostic information. The PSE was based on a thorough assessment of symptoms relevant to determining the presence of psychosis and to the differential diagnosis of psychotic disorders. The CAPPS included semi-structured interview prompts and clearly defined branching procedures to ensure thorough collection of information about symptoms that are present, without burdening the respondent with multiple questions about symptoms they do not have. These interviews collect data both about the presence of schizophrenia and about the presence of other conditions that would raise doubt about the presence of schizophrenia. Further, these interview procedures collect data about the course of illness, which is needed to help differentiate between schizophrenia and other, more transient illnesses with some related symptomatic features (as described further below). Finally, the use of these structured, in-person assessments is generally seen as part of an overall diagnostic strategy that culminates in a “best-estimate” research diagnosis. Such a “best estimate” research diagnosis incorporates all available information, including patient responses to questions over the course of the structured interview, interviewer observation of the patient’s behavior, and input from available medical records and informant reports. This diagnosis is typically generated by an individual clinician, who then presents the information at a consensus meeting. The generation of a best estimate research diagnosis is a comprehensive procedure that is not commonly employed, nor feasible, in general clinical settings. Thus, the generation of such a diagnosis is based upon, but not exclusively reliant upon, a structured in-person diagnostic interview as its centerpiece.
Reliance on this strategy in research settings has led to repeated findings of substantial kappa coefficients for the diagnosis of schizophrenia, starting with the field trials of DSM-III (American Psychiatric Association, 1980). A question that arises is whether the increased reliability of the diagnosis with the “DSM-III or later” criteria is a result of the clarity and specificity of the criteria or of the use of structured procedures to collect diagnostic information. This question is partially confounded in that, prior to the wide acceptance of DSM-III criteria and of structured diagnostic procedures as the centerpiece of research diagnosis, the diagnostic criteria themselves were vague and challenging to implement. Improvements in the diagnostic criteria occurred in concert with standardization of the methods for collecting the necessary information.
Reliability of Community Diagnostic Methods
There has been a “bootstrapping” process over the last 30 years, wherein the increase in the clarity of the diagnostic criteria for schizophrenia occurred concurrently with the acceptance of a systematic, objective approach to observation and the collection of diagnostic information by researchers as the “gold standard” for research diagnosis. There has been no similar movement on the part of psychiatric clinicians to adopt similarly structured diagnostic procedures, and their diagnoses are based on observations that are markedly less structured.
In the context of using a schizophrenia diagnosis as a sole basis for determining SSDI/SSI eligibility, it is thus important to evaluate whether diagnoses generated by mental health clinicians are valid. This can be evaluated, to an extent, by examining whether diagnoses that do not rely on structured diagnostic interviews but are based on contemporary (“DSM-III or later”) diagnostic criteria converge with those that are based on a structured research diagnostic procedure in addition to the modern criteria themselves. If they do, then a structured research diagnostic process may not be necessary to generate a valid diagnosis. Herein lies the critical question for our effort: is a diagnosis contained in a clinical chart likely to be congruent with a diagnosis generated with the “best estimate” research diagnostic procedure?
Older data can guide our search for the answer to this question. Studies of the reliability of psychiatric diagnoses based on older (pre-DSM-III) criteria implicated deficiencies in the diagnostic criteria as a major source of unreliability. In a series of studies (Beck et al., 1962; Ward et al., 1962) identified and explicated in a classic Abnormal Psychology textbook (Davison and Neale, 1975), a sophisticated research team examined the inter-rater agreement for the diagnosis of schizophrenia and other conditions, and also examined the reasons for inconsistency when diagnosticians did not arrive at the same diagnosis. The overall reliability of the diagnosis of schizophrenia in those studies was 53% agreement. When converted into a kappa coefficient (the contemporary agreement coefficient, which corrects for chance agreement), the result would be a kappa of k = .38, which compares very poorly with the results of the DSM-III field trials described above. When cases in which there was a diagnostic disagreement were examined, the reasons for such disagreement were quite interesting. Inconsistent reports by the patients across the two assessments accounted for only 5% of the disagreements, while inconsistent application of the diagnostic criteria by the clinicians was responsible for 32.5%.
The largest source of disagreement was inadequacy of the rather vague diagnostic criteria themselves, which accounted for 62.5% of the diagnostic disagreements. Thus, the majority of the variance in diagnostic unreliability in studies from the pre-DSM-III era may be attributed to deficiencies in the criteria, not to failures on the part of the interviewer to ask questions that adequately covered the diagnostic domains or to inconsistency in report on the part of patients. These findings raise the question of whether structured interviews and the associated steps in the “best estimate” procedure are really required to collect accurate diagnostic information, or whether conscientious collection of diagnostic information by an experienced clinician prior to charting a diagnosis, combined with accurate entry of clinical information into the chart, is equivalent to the best estimate research diagnosis.
Convergence of Chart-based Diagnoses with Structured Assessment
The current generation of mental health professionals was trained and has practiced after the introduction of the more reliable DSM-III in 1980; these clinicians have never attempted to diagnose patients in a criterion-free context. A large number of studies have compared diagnoses derived from clinical chart information to diagnoses that were generated on the basis of “all sources best estimate” research procedures. These studies, reviewed in this section, strongly support the idea that current (DSM) diagnostic criteria for schizophrenia and closely related conditions lead to clinical chart diagnoses that are quite convergent with the diagnoses obtained through “all sources best estimate” research procedures.
There are several approaches that can be employed to compare diagnoses from the clinical chart with the results of more structured diagnostic procedures. For instance, one method is to compare the overlap between diagnoses generated by clinicians and entered into clinical databases with diagnoses for the same patients generated by “all sources best estimate” procedures. Another method is to compile clinical information entered by clinicians into charts, generate diagnoses based on this information, and then compare those diagnoses to the diagnoses in the charts. This procedure determines whether chart-based information, the source of data used to award SSA disability benefits, substantiates the diagnoses entered by clinicians into the clinical chart. These diagnoses can then be compared to the results of diagnoses generated with more structured procedures.
Across these strategies, the results appear similar, supporting the idea that clinicians are making valid diagnoses with the current criteria, regardless of how they generate the required information for the diagnoses. Further, the data suggest that clinicians are entering information into the chart that is adequate to generate diagnoses, and that these diagnoses converge both with those from structured interview procedures and with the diagnoses that the clinicians themselves enter. In research comparing clinical database information to the results of “all sources best estimate” research procedures, the results for psychotic disorders have been encouraging. In a study that employed a national psychiatric register and compared registry diagnoses entered by the clinicians treating the patients to those generated with an “all sources best estimate” research procedure on the same patients, results for both broad (psychotic spectrum) and narrow (schizophrenia based on the Research Diagnostic Criteria: RDC; Spitzer et al., 1977) criteria were similar. This study was conducted using the Israeli Psychiatric Registry and examined the convergence between registry diagnoses of psychotic disorders in general and more narrowly defined RDC schizophrenia (Weiser et al., 2005). Of 169 patients meeting RDC for any psychotic disorder based on a structured psychiatric interview, 150 also had a diagnosis of a psychotic disorder in the Registry, yielding a specificity of 0.87 and a sensitivity of 0.89. Re-running this analysis for the narrow definition of schizophrenia identified 94 patients who were diagnosed with schizophrenia using RDC based on the “all sources best estimate”; 82 of those patients also had a diagnosis of schizophrenia in the Registry, yielding a specificity of 0.85 and a sensitivity of 0.87. These figures are encouraging, as specificity values over 0.80 are considered highly suitable for research purposes and very likely to identify the important cases of interest.
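As a worked illustration of the arithmetic, the sensitivity values reported above follow directly from the counts given; the corresponding specificity values depend on the registry-negative cases among persons without the research diagnosis, which are not reproduced in this summary:

$$\text{sensitivity (any psychotic disorder)} = \frac{150}{169} \approx 0.89, \qquad \text{sensitivity (RDC schizophrenia)} = \frac{82}{94} \approx 0.87.$$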
Studies that identified the era during which the initial schizophrenia diagnosis was generated and examined the subsequent stability of the diagnosis have yielded results that confirm the greater validity of diagnoses generated using DSM-III or later criteria. In a study using the national psychiatric register in Finland (Pihlajamaa et al., 2008) and a systematic re-diagnosis of the patients, 78-80% of patients who received a schizophrenia diagnosis in 1982 or later met each of three different sets of criteria for schizophrenia (DSM-III-R, ICD-10, and DSM-IV), while only 56% of individuals who received a schizophrenia diagnosis prior to 1982 met all of the criteria. Thus, clinical diagnoses generated recently, which are most relevant for new applications for Social Security compensation, tend to be congruent with the results of a structured assessment.
One of the most widely used computer-based systems for the conversion of clinical chart data to criterion-referenced diagnoses is the Operational Criteria for Psychiatric Illness (OPCRIT) system (McGuffin et al., 1991). This system, originally devised to examine clinical chart data for genetic studies, takes chart information, compares it to predefined algorithms, and produces “chart-based” clinical diagnoses. OPCRIT can be used to generate a wide array of International Classification of Diseases (ICD) based diagnoses, including the full array of psychotic spectrum diagnoses. Of interest in this research is that the chart data come from all clinicians who have contact with the patient.
Several different studies have suggested that clinical diagnoses based on these OPCRIT algorithms are highly convergent with direct examinations of the patient. For instance, a large-scale study that examined the validity of OPCRIT diagnoses based on chart data entered by clinicians, compared to those clinicians’ own chart diagnoses, using both American (DSM-III-R, DSM-IV) and international (ICD-10) criteria, suggested several encouraging conclusions. First, OPCRIT schizophrenia diagnoses based on data present in the clinical chart converged with the chart diagnoses entered by these clinicians. Second, the broader the criteria for the diagnosis (i.e., including several psychotic spectrum conditions), the larger the overlap between OPCRIT diagnoses derived from chart data and diagnoses entered into the clinical chart (Williams et al., 1996). Finally, convergence between chart diagnoses and computer-derived diagnoses was much better for cases diagnosed after 1982 (i.e., during the “post DSM-III” era), which suggests that charting and chart diagnoses based on the current diagnostic criteria are internally consistent with each other. This leads to the conclusion that reliance on clinical chart diagnoses generated by primary clinicians would yield much the same result as obtaining records and performing a structured psychiatric interview; that is, the results of an examination conducted to confirm a diagnosis would likely be highly convergent with the schizophrenia chart diagnosis.
Comparing Different Sources of Diagnostic Information
In a directly relevant study, Vares et al. (2006) examined the convergence of diagnoses of schizophrenia and related psychotic conditions generated using several different methods, including structured interviews, chart diagnoses, and the OPCRIT system applied to clinical chart information. Multiple comparisons between diagnoses derived with the different procedures were conducted. The authors examined the convergence between chart diagnoses and diagnoses based on a structured interview alone (without reliance on any medical record information), OPCRIT applied to medical records only, OPCRIT applied to medical records plus the results of the structured interview, and an “all sources best estimate” research diagnostic procedure conducted without the computer program.
Diagnoses based on interviews alone, without any reliance on medical chart information, had poor convergence with the “all sources best estimate” research diagnosis across severe mental illness in general, but were still highly convergent for people with a diagnosis of schizophrenia. All other diagnostic methods were highly convergent with each other, and 94% of people with a clinical chart diagnosis of schizophrenia received a schizophrenia diagnosis based on the “all sources best estimate” research procedures. OPCRIT diagnoses based on chart records (not including the diagnosis itself) were also convergent with the “all sources best estimate” research diagnosis. Thus, diagnoses that originated from record-based information, when evaluated systematically, were highly convergent with diagnoses that used a structured psychiatric interview combined with systematic chart review. These data suggest that diagnoses based on current chart information are quite similar to those generated with much more time-consuming procedures, particularly for individuals with schizophrenia. Further, clinical chart diagnoses are strikingly convergent with more systematic research procedures when contemporary diagnostic standards are used.
Conclusions Regarding the Validity of Clinical Chart Information and Clinical Chart Diagnoses
In the pre-DSM-III era, the diagnosis of schizophrenia lacked reliability. Data collected since then, using the newer diagnostic criteria, suggest that clinician diagnoses are more reliable than similar diagnoses generated with the previous criteria. These findings suggest that highly detailed criteria alone increase the reliability of the diagnosis of schizophrenia. Clinical chart diagnoses, generated by clinicians without the benefit of structured interview procedures, converge very well with the results of a wide array of structured diagnostic procedures. Clinician diagnoses are also quite congruent with the information that clinicians place into the clinical chart, as evidenced by the high correlation between clinical chart diagnoses and OPCRIT diagnoses based on clinician chart notes. These studies did not select specific clinicians, nor did they ensure that the clinicians were trained to any specific competence criterion. Thus, these results suggest that everyday clinicians are collecting and recording clinical information and generating clinical diagnoses that are confirmed by both systematic reviews of their clinical charting and direct interviews with their patients. These studies do not provide data that would allow determination of whether diagnoses generated by clinicians with certain levels of experience or educational credentials differ in their convergence with the results of structured assessments. A reasonable conclusion from these data is that clinical diagnoses based on suitably lengthy contact with people with schizophrenia are likely to be convergent with diagnoses obtained by much more detailed and structured procedures, as long as the diagnoses are based on DSM-III (1980) or later criteria.
There are several findings of relevance to compassionate allowance in this review. First, chart diagnoses appear to be congruent with the results of specialized assessments. Second, clinicians who are familiar with their patients generate valid diagnoses. The Social Security disability criteria reviewed in the companion paper require a duration of illness of at least 2 years. Thus, the studies above are relevant to patients with an established illness, which is required by the regulations. We review below what constitutes “established” in terms of the research literature.
IV. STABILITY OF DIAGNOSIS
The next major issue is the stability of the diagnosis of schizophrenia: whether individuals who receive the diagnosis at some point in time are likely to still meet the criteria later. There is extensive information available on this topic, including studies of individuals who are receiving clinical treatment for the first time and who are then followed for various periods. Specifically, the critical question to be addressed for SSA compassionate allowance is at what point after the onset of the illness there would be minimal chance of a change in diagnosis, such that an award of Social Security disability compensation would be unlikely to be made in error.
Research efforts have examined patients early in the course of their illness in order to determine their eventual diagnosis and the stability of schizophrenia diagnoses generated early on. These studies are relevant to the SSA disability process because in some studies the research participants were seen so soon after the initial development of psychotic symptoms that they could not meet the full criteria for schizophrenia because of inadequate duration of illness. Cases whose illness fully resolved within a 6-month period with no persistent symptoms or impaired functioning also would not fulfill SSA disability program duration requirements and hence are not relevant to the compassionate allowance discussion.
Perhaps the most systematic and relevant of these studies for the current era was the Suffolk County Mental Health Project (Bromet et al., 1992). In this epidemiological study, all consenting first admissions to inpatient care for a psychotic condition in a large (population >1,500,000) suburban New York county were recruited for clinical assessment and longitudinal follow-up. Diagnoses of the patients studied included schizophrenia, schizoaffective disorder, bipolar disorder, and major depression, as well as unspecified psychotic disorders (e.g., schizophreniform disorder, psychosis not otherwise specified). The subjects in this cohort were seen at baseline and then at multiple reassessments up to 10 years after diagnosis (Schwartz et al., 2000). Of 263 cases at baseline, 122 received a diagnosis of schizophrenia at entry, of whom 85% received a schizophrenia diagnosis at their next assessment at either 6 or 24 months. Thus, an initial diagnosis of schizophrenia, even one made at the first mental health treatment, was unlikely to change. As the SSA criteria require a duration of illness of 2 years prior to consideration for an award, a diagnosis stable for that period of time seems unlikely to be altered.
Of the cases who did not receive a schizophrenia diagnosis at baseline, 20% received a schizophrenia diagnosis at their next assessment (Bromet et al., 2005), still less than the 2-year duration required for SSA disability. Of the cases diagnosed as meeting criteria for schizophrenia at the first reassessment (at most 24 months after study entry), 92% met schizophrenia criteria when they were followed up 10 years later. Among patients whose diagnosis was changed from another diagnosis to schizophrenia at the time of their first reassessment, 97% met criteria at the 10-year follow-up. Thus, even though 6-24 month stability of schizophrenia diagnoses was itself high, individuals who are diagnosed with schizophrenia after two assessments over 24 months are extremely likely to meet those same criteria 10 years later. Diagnoses other than schizophrenia were more likely to change.
In a review of similar studies employing “DSM-III or later” criteria at the follow-up assessments, Bromet et al. (2005) reported results consistent with those of the Suffolk County Mental Health Project discussed above. For instance, one study (Mason et al., 1996) found that 86 of 86 cases diagnosed with schizophrenia received a schizophrenia diagnosis at a 13-year follow-up, and another reported that 90% of patients clinically diagnosed with schizophrenia at the time of their first episode manifested diagnostic stability over 25 years (Marneros et al., 1991). These studies suggest that diagnoses of schizophrenia show substantial temporal stability and that this stability begins early, before the point at which the duration criteria for SSA disability would be met.
Evidence Gaps
The research we have reviewed here indicates that schizophrenia diagnosis – using modern diagnostic criteria – is valid and reliable when performed by doctoral-level mental health specialists, in community as well as academic settings. While the vast majority of people with schizophrenia receive mental health specialty care, it is possible that some SSDI/SSI applicants might present with a schizophrenia diagnosis made by a primary care provider (PCP), for instance due to shortages of mental health specialists in many areas of the US. We therefore searched for research evidence on the validity of schizophrenia diagnosis made by PCPs, but were unable to find such research.
In this context, one option might be to stipulate that a mental health specialist should confirm any diagnosis of schizophrenia, if schizophrenia were to be considered for SSA’s Compassionate Allowance program. Such a requirement would likely be uncontroversial in other areas of medicine, such as oncology.
We do note that the exact level of mental health specialty expertise required could be a matter of discussion. For instance, various states permit licensed professional social workers and Masters-level psychiatric nurse practitioners, in addition to psychiatrists and doctoral-level psychologists, to execute an order for involuntary mental health care, an authority that is certainly consistent with a professional responsibility for accurate diagnosis of schizophrenia. Further, as previously noted, there are shortages of psychiatrists and psychologists in many parts of the US, particularly in rural areas, making such other mental health professionals the clinicians of record for many cases.
An important point is to avoid the common logical fallacy of treating the absence of evidence as evidence of absence. It is clear that there are few research data that can address the question of whether diagnoses confirmed at 24 months are stable in the period immediately thereafter. This is a critical question from a scientific perspective, but likely irrelevant from a compassionate allowance perspective, as very few of the cases who apply for disability have exactly 24 months of illness and impairment. As noted above, the 10-year stability of diagnoses obtained at 24 months past illness onset is substantial. Quibbling about whether this 10-year stability would also be found 2 to 6 months after the 24-month point seems trivial, counterproductive, and not a likely candidate for research funding in the current environment.
V. DISCUSSION
In the post-DSM-III era, diagnoses of schizophrenia generated by clinicians are consistent with the results of schizophrenia diagnoses generated by other systematic, quantitative approaches. Further, the information entered into clinical charts by clinicians leads to diagnoses generated by structured chart review that are consistent with the chart-based diagnoses entered by clinicians and with “all sources best estimate” diagnoses on the same cases, suggesting that clinical chart data are valid in support of these clinical diagnoses. One conclusion suggested by these studies is that clinical record information entered by clinicians who are familiar with the patient and with the current diagnostic criteria has suitable validity to substantiate a “true” diagnosis of schizophrenia and hence to substantiate a compassionate allowance award. Based on the reported convergence between clinical diagnoses and these more detailed and stringent diagnostic procedures, there appears to be no need to require the collection of additional diagnostic data using more structured or detailed methods.
A second major point made by the results of the studies reviewed above is that diagnoses of schizophrenia manifest considerable temporal stability. A diagnosis of schizophrenia that is present 6-24 months after the first clinical contact is almost always present over follow-up periods that range from 10 to 25 years, and most first episode diagnoses are confirmed in a similar time frame. Thus, any individual whose diagnosis of schizophrenia is confirmed from 6 to 24 months after their first diagnosis can be assumed to meet criteria for the illness. This suggests that the information presented in the companion white paper regarding disability in accurately diagnosed people with schizophrenia can be presumed to apply to cases whose diagnosis has been confirmed within 24 months of their first clinical presentation.
As described in the companion paper, deficits in everyday functioning are a normative occurrence in people who receive a diagnosis of schizophrenia, and these impairments are accompanied by a host of other correlated features, such as cognitive impairments and various negative symptoms. Together, the data from these two papers suggest that individuals who receive a diagnosis of schizophrenia are likely to be validly diagnosed and also to have evidence of considerable concurrent functional disability, leaving little doubt about whether an individual “deserves” disability compensation.
ACKNOWLEDGMENTS
The views expressed in this article do not necessarily represent the views of the National Institute of Mental Health, the National Institutes of Health, the Department of Health and Human Services, or the United States government. This paper was the result of a committee effort that included other individuals not directly involved as authors, including Robert Drake, MD, Susan McGurk, PhD, and Howard Goldman, MD.
A meeting convened by the NIMH served as the basis for this paper. All individuals who attended that meeting contributed to the discussions, but the writing of the paper was completed by the current authors. See special acknowledgement to Ms. Leifker.
FUNDING SOURCES
This work was supported via contract by the National Institute of Mental Health.
SPECIAL ACKNOWLEDGEMENT: The authors would like to thank Feea Leifker for her literature searching for this project.
Role of Funding Source.
This research was funded by the National Institute of Mental Health, which provided no input into the reviews conducted or the presentation of these data.
Conflict Of Interest Statement.
DISCLOSURES
In the past 12 months, the authors have the following activities to disclose:
Dr. Harvey has received consulting fees from Abbott Labs, Boehringer Ingelheim, Genentech, Johnson and Johnson, Pharma Neuroboost, Roche Pharma, Shire Pharma, Sunovion Pharma, and Takeda Pharma.
Dr. Carpenter has received consulting fees from Bristol Myers Squibb, Eli Lilly and Company, Lundbeck Pharma, and Merck and Company.
Dr. Green has been a consultant to Abbott Laboratories, Cypress Bioscience, Lundbeck, Otsuka, Sunovion, Sanofi-aventis, Takeda, and Teva, and has been a speaker for Janssen-Cilag, Otsuka, and Sunovion.
Dr. Gold has served as a consultant to AstraZeneca, Eli Lilly and Company, Merck and Company, and Pfizer Pharma.
Drs. Heaton and Schoenbaum have no activities to disclose.
Contributions of the Authors.
All authors contributed equally to this review paper, through a series of conference calls and multiple revisions of this paper.
Contributor Information
Philip D. Harvey, University of Miami Miller School of Medicine.
Robert K. Heaton, UCSD Medical Center.
William T. Carpenter, Maryland Psychiatric Research Institute.
Michael F. Green, David Geffen School of Medicine at UCLA.
James M. Gold, Maryland Psychiatric Research Institute.
Michael Schoenbaum, National Institute of Mental Health.
REFERENCES
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 3rd edition. Washington, DC: Author; 1980.
- Beck AT, Ward CH, Mendelson M, Mock JE, Erbaugh JK. Reliability of psychiatric diagnosis 2: A study of consistency of clinical judgments and ratings. Am J Psychiatry. 1962;119:351–357. doi: 10.1176/ajp.119.4.351.
- Bleuler E. Dementia praecox; or the group of schizophrenias. New York: International Universities Press; 1911.
- Bromet EJ, Naz B, Fochtmann LJ, Carlson GJ, Tanenberg-Karant M. Long-term diagnostic stability and outcome in recent first-episode cohort studies of schizophrenia. Schizophr Bull. 2005;31:639–64. doi: 10.1093/schbul/sbi030.
- Bromet EJ, Schwartz JE, Fennig S, Geller L, Jandorf L, Kovasznay B, et al. The epidemiology of psychosis: the Suffolk County Mental Health Project. Schizophr Bull. 1992;18:243–255. doi: 10.1093/schbul/18.2.243.
- Davison GC, Neale JM. Abnormal Psychology. New York: John Wiley; 1975.
- Kraepelin E. Dementia praecox and paraphrenia. Edinburgh: E. & S. Livingstone; 1919.
- Marneros A, Deister A, Rohde A. Stability of diagnoses in affective, schizoaffective and schizophrenic disorders: cross-sectional versus longitudinal diagnosis. Eur Arch Psychiatry Clin Neurosci. 1991;241:187–192. doi: 10.1007/BF02219720.
- Mason P, Harrison G, Glazebrook C, Medley I, Croudace T. The course of schizophrenia over 13 years: a report from the International Study on Schizophrenia (ISoS) coordinated by the World Health Organization. Br J Psychiatry. 1996;169:580–586. doi: 10.1192/bjp.169.5.580.
- McGuffin P, Farmer A, Harvey I. A polydiagnostic application of operational criteria in studies of psychotic illness: development and reliability of the OPCRIT system. Arch Gen Psychiatry. 1991;48(8):764–770. doi: 10.1001/archpsyc.1991.01810320088015.
- Pihlajamaa J, Suvisaari J, Henriksson M, Heila H, Karjalainen E, Koskela J, et al. The validity of schizophrenia diagnosis in the Finnish Hospital Discharge Register: findings from a 10-year birth cohort sample. Nord J Psychiatry. 2008;62:198–203. doi: 10.1080/08039480801983596.
- Schwartz JE, Fennig S, Tanenberg-Karant M, Carlson G, Craig T, Galambos N, et al. Congruence of diagnoses 2 years after a first-admission diagnosis of psychosis. Arch Gen Psychiatry. 2000;57(6):593–600. doi: 10.1001/archpsyc.57.6.593.
- Spitzer RL, Endicott J. Current and Past Psychopathology Scales. New York: New York State Psychiatric Institute, Biometrics Research Division; 1968.
- Spitzer RL, Endicott J, Robins L. Research Diagnostic Criteria. 3rd edition. New York: New York State Psychiatric Institute, Biometrics Research Division; 1977.
- Stephen Lawrie, MD, Professor of Psychiatry, University of Edinburgh. Personal communication; September 14, 2010.
- Vares M, Ekholm A, Sedvall GC, Hall H, Jönsson EG. Characterization of patients with schizophrenia and related psychoses: evaluation of different diagnostic procedures. Psychopathology. 2006;39:286–295. doi: 10.1159/000095733.
- Ward CH, Beck AT, Mendelson M, Mock JE, Erbaugh JK. The psychiatric nomenclature: reasons for diagnostic disagreement. Arch Gen Psychiatry. 1962;7:198–205. doi: 10.1001/archpsyc.1962.01720030044006.
- Weiser M, Kanyas K, Malaspina D, Harvey PD, Glick I, Goetz D, et al. Concordance between ICD-10 diagnoses of non-affective psychotic disorders in the Israeli National Hospitalization Registry and RDC diagnoses. Compr Psychiatry. 2005;46:38–42. doi: 10.1016/j.comppsych.2004.07.016.
- Williams J, Farmer AE, Ackenheil M, Kaufmann CA, McGuffin P. A multicentre inter-rater reliability study using the OPCRIT computerized diagnostic system. Psychol Med. 1996;26(4):775–783. doi: 10.1017/s003329170003779x.
- Wing JK, Birley JL, Cooper JE, Graham P, Isaacs AD. Reliability of a procedure for measuring and classifying “present psychiatric state”. Br J Psychiatry. 1967;113:499–515. doi: 10.1192/bjp.113.498.499.