Abstract
The DSM-5 Level 1 Cross-Cutting Symptom Measure–Adult (DSM XC) was developed by the American Psychiatric Association (APA) as a transdiagnostic measure of current mental health symptomatology. This paper describes the use of the DSM XC to screen volunteers for participation in mental health research studies as healthy controls. Research volunteers completed an online, modified version of the DSM XC, which, along with other clinical information, was used to determine eligibility for participation as a healthy control. The sensitivity and specificity of screening positive on the DSM XC for this eligibility decision were calculated. Of 506 volunteers who completed the screening process, 159 (31%) were ineligible due to mental health reasons. The DSM XC sensitivity in predicting this determination was 64.2% [95% CI: 56.5 – 71.3] and its specificity was 83.9% [95% CI: 79.7 – 87.5]. When DSM XC responses were combined with information about current psychotropic medication use, an important determinant of study eligibility, the sensitivity improved to 81.8% [95% CI: 75.3 – 87.2]. These findings provide preliminary support for the use of the DSM XC as an initial screening tool for mental health studies that enroll healthy research volunteers, particularly when supplemented by additional clinical history such as psychotropic medication use.
Keywords: Screening tools, mental health, clinical research, transdiagnostic, study recruitment, research volunteers
1. Introduction
Healthy volunteers are commonly used in mental health research, as either the primary study population or as a comparison group. Two recent examples of NIH-funded initiatives which collect data from large groups of healthy volunteers are the Human Connectome Project (Human Connectome Project, 2019) and the Adolescent Brain Cognitive Development Study (Adolescent Brain Cognitive Development, 2019). While such normative datasets are valuable to the research community, the definition of health, specifically mental health, and how it is ascertained may differ from study to study (Grinker et al., 1962; Shtasel et al., 1991). Although the use of standardized measures partially controls variability in this respect, the screening and assessment of research participants still involves some degree of clinical judgment (Schechter et al., 1994; Adami et al., 2002).
The need for a transdiagnostic instrument to screen for mental health is supported by the high prevalence of any mental illness, defined as a mental, behavioral, or emotional disorder, in the general adult population. In 2016, the prevalence of any mental illness in the U.S. was 18.3% based on epidemiologic estimates (National Institute of Mental Health, 2019). Thus, in the context of mental health research, broader screening is necessary and widely utilized (Shtasel et al., 1991; Adami et al., 2002). While several screening tools are available (Mental Health America, 2018), with some used in clinical settings (Olfson et al., 2014), there is currently no transdiagnostic screening instrument accepted as the gold standard for routine clinical practice or clinical research. For example, a recent review of screening tools used in primary care settings found the strongest evidence for the Patient Health Questionnaire 9 (PHQ-9), a self-report depression survey (Mulvaney-Day et al., 2017; Kroenke et al., 2001). However, the PHQ-9, like many screening tools, is limited to a single diagnostic area, which overlooks the fact that mental disorders often co-occur (Kessler et al., 2005).
Experts involved in the development of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) recommended that a dimensional and cross-cutting measure, intended to supplement a categorical diagnosis if present, be provided to clinicians (Jones, 2012; Clarke and Kuhl, 2014). This was meant, in part, to acknowledge that the overreliance on strict categorical diagnosis in previous versions of the DSM could limit progress in “finding the underlying causes of mental disorders and developing effective treatments” (Narrow et al., 2013). The products of this recommendation were the DSM-5 Level 1 Cross-Cutting Symptom Measures (DSM XC). Three versions of the DSM XC exist: adult self-report (the focus of the current report), child self-report for ages 11–17 years, and parent/guardian report for children ages 6–17 years.
The DSM XC is a brief mental health assessment comprising 23 questions across 13 transdiagnostic domains of psychopathology: depression, anger, mania, anxiety, somatic symptoms, suicidal ideation, psychosis, sleep problems, memory, repetitive thoughts and behaviors, dissociation, personality functioning, and substance use (American Psychiatric Association, 2013). Each item is rated on a 5-point scale (0=none, 1=slight, 2=mild, 3=moderate, 4=severe), and a score of 2 or above on any item is considered a flag for additional inquiry. The exceptions to this guidance are the substance use, suicidal ideation and psychosis items, for which ratings of 1 or above are flagged. Of note, the DSM XC ratings compress features of duration, frequency, and severity into a single rating. Furthermore, while some DSM XC items are grouped under a psychiatric domain that corresponds roughly to a DSM-5 diagnostic category (e.g., depression), other items are not specific to diagnosis (e.g., anger). In general, the DSM XC resembles a psychiatric review of systems, as it is broad but shallow (Clarke and Kuhl, 2014). However, it is not a comprehensive review of systems since there are mental health symptoms that are not included, such as disordered eating or attention/concentration problems.
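The flag rule described above can be sketched in code. The following is a minimal illustration, assuming responses arrive as (domain, rating) pairs; the domain labels are illustrative shorthand, not the instrument's official field names.

```python
# Minimal sketch of the APA flag rule for the DSM XC described above.
# Domain labels are illustrative shorthand, not official instrument fields.

LOW_THRESHOLD_DOMAINS = {"substance_use", "suicidal_ideation", "psychosis"}

def is_flag_positive(responses):
    """responses: iterable of (domain, rating) pairs, ratings on the 0-4 scale.

    An item flags at a rating of 2 (mild) or above, except the substance use,
    suicidal ideation, and psychosis items, which flag at 1 (slight) or above.
    """
    for domain, rating in responses:
        threshold = 1 if domain in LOW_THRESHOLD_DOMAINS else 2
        if rating >= threshold:
            return True
    return False
```

For example, a single rating of 1 on a psychosis item flags the respondent for additional inquiry, while a 1 on an anxiety item does not.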
The DSM XC was evaluated in concert with the 2011 DSM-5 field trials (Jones, 2012; Narrow et al., 2013; Clarke and Kuhl, 2014). Across 11 large academic settings, the test-retest reliability of the DSM XC was found to be adequate (r = 0.64 to 0.97) for all items except the two mania questions (Narrow et al., 2013). The clinical utility and feasibility of the DSM XC were also evaluated in routine clinical practice mental health settings; the measure was found to be clinically useful and was favorably viewed by clinicians and patients (Moscicki et al., 2013). While the DSM XC was designed to inform a more comprehensive mental health evaluation, and was not explicitly developed as a screening tool, it may be a good candidate as a transdiagnostic instrument for research and clinical use. The APA placed the DSM XC on an open access website to encourage researchers and clinicians to provide further data on its usefulness, which could support more widespread adoption of this tool into clinical and research practice (American Psychiatric Association, 2019).
Relevant to its use as a screening tool are the psychometric characteristics of the DSM XC, including its sensitivity (how well the test detects people with mental health issues) and specificity (how well the test detects people without mental health issues). Outside of the DSM-5 trials, two studies provide psychometric data on the DSM XC. One study performed at a correctional community center documented generally good sensitivity and poor specificity of the DSM XC among 150 highly morbid inpatients and outpatients with substance use disorders (Bastiaens and Galus, 2018). Sensitivities for depression (77%), anxiety (85%), and psychosis (100%) were in the acceptable range, while sensitivity for mania (56%) was poor. Except for psychosis (85%), the specificity was poor (range 40 – 44%). This finding was explained by a high rate of false positives, consistent with other studies of patients in correctional facilities.
A second report utilized a convenience sample of student volunteers from university psychology department participant pools who completed the DSM XC and other measures online. Among 7,217 non-treatment-seeking college students, support was found for the convergent validity (e.g., positive correlations with validated measures of depression and stress) and divergent validity (e.g., negative correlations with self-esteem) of the measure (Bravo et al., 2018). Internal consistency was acceptable for all multi-item DSM XC domains.
In this report, we evaluated a modified version of the DSM XC as a transdiagnostic tool to screen healthy volunteers for mental health research studies. The data are from a protocol conducted at the National Institute of Mental Health Intramural Research Program (NIMH IRP) (ClinicalTrials.gov identifier: NCT03304665). The primary study aim is to centrally recruit and screen self-referred adult volunteers in good health for participation in other research studies at NIMH. While the parent study was not specifically intended to validate the DSM XC, this secondary analysis utilized a convenience sample of volunteers who completed online study measures, which included the modified DSM XC. We evaluated the sensitivity and the specificity of the DSM XC against the eligibility decision of the clinical research team to proceed to an in-person assessment.
2. Methods
The study protocol to recruit healthy research volunteers (NCT03304665) was carried out in accordance with the latest version of the Declaration of Helsinki and was approved by the Institutional Review Board of the National Institutes of Health. Potential research volunteers were recruited through a variety of means including postcards, flyers, listservs and websites, and were then directed to the study website.
2.1. Measures:
After signing an electronic consent form, volunteers used a secure website to complete screening measures including: a modified version of the DSM XC, the Alcohol Use Disorders Identification Test (AUDIT) (Babor et al., 1989), the DSM-5 Level 2 Substance Use Measure-Adult (American Psychiatric Association, 2019), and surveys of demographics, medical and mental health history.
The transdiagnostic DSM XC was chosen for our study as a cross-diagnostic survey measure (not domain specific) to serve as an initial screening instrument to identify mental health symptoms in research volunteers. In addition, the DSM XC was promoted by NIMH as a common measure in funded research projects (National Institute of Mental Health, 2015). When designing our study, we chose to remove five items from the DSM XC. Because the measure was completed online and volunteers who might require immediate intervention could not be reliably contacted, the item on thoughts of self-harm was removed. The two substance use items were excluded because current drug and alcohol use are important exclusion criteria for NIMH studies and thus more detailed questionnaires were used (AUDIT and Level 2 Substance Use-Adult). Lastly, the two items on personality functioning were omitted because study eligibility does not include personality factors (see Appendix A).
2.2. Procedures:
Through the centralized protocol, potential healthy volunteers for NIMH research studies proceed through multiple possible screening steps: online surveys, medical record review, phone calls and clinical review (Figure 1). Those who are deemed likely eligible are invited for an in-person assessment of final eligibility using the following inclusion criteria: 18 years of age or older, fluent in English, able to provide informed consent, and in good general health. Volunteers with a history of a significant or unstable medical or mental health condition, current suicidal thoughts or behavior, illicit drug use by history or drug screen, abnormal findings on physical exam or laboratory tests, an IQ below 70, or NIMH employment are excluded. Importantly, an individual who currently takes psychotropic medication would be ineligible for the study, as this is considered evidence of a significant mental health condition, even if the person does not currently experience mental health symptoms. These inclusion and exclusion criteria were based on a general set of criteria used by most NIMH IRP clinical studies. Because the in-person assessment is resource-intensive (i.e., a structured clinical interview, history and physical exam, routine laboratory testing, cognitive testing), we relied on screening to prospectively identify and eliminate those who were likely to fail at least one of the inclusion/exclusion criteria.
Figure 1:
Study Flow
Volunteer online responses to screening measures were reviewed regularly by research staff; the study consent allowed review of NIH medical records for volunteers who had previously participated in research at NIH. From their responses, volunteers were identified for further review based on predetermined thresholds on online measures, e.g., AUDIT scores above 7. Volunteers who had DSM XC responses above the APA recommended thresholds were considered “flag positive” and were phone screened. Medical record review could also identify potential reasons for ineligibility, e.g., a previous abnormal brain MRI or an undisclosed mental health condition. Phone screens for these volunteers were conducted by a member of the study clinical team, which comprised licensed mental health professionals (social work, psychology, psychiatry), who spoke directly to the person to verify positive responses, confirm medical record information, and gather additional history.
The final stage of the screening process was the clinical review, which occurred on a weekly basis, wherein the clinical team met to make consensus decisions of “likely eligible” or “likely ineligible” for participation as a healthy volunteer. This clinical review considered information from all online responses, including the DSM XC, in addition to medical record and phone screen information. Volunteers were categorized as “likely ineligible” for at least one of the following reasons: mental health, substance use, medical health, or other (e.g., language or geography). Ineligibility codes were not mutually exclusive. The eligibility determination made by the clinical team (“likely eligible” versus “likely ineligible” for mental health reasons) was the outcome of interest for this study. Ineligibility for substance use alone was not included in the analysis because the two DSM XC substance use items were removed from our modified instrument.
2.3. Statistical Analysis:
The goal of this analysis was to calculate the sensitivity and specificity of the DSM XC for mental health eligibility decisions among volunteers for NIH research. We evaluated the cutoff recommended by the APA for the DSM XC, which is a score of 2 or higher on at least one item (Narrow et al., 2013), against the determination from study screening of “likely eligible” versus “likely ineligible”. The sensitivity (the ratio of true positives to the sum of true positives and false negatives) and specificity (the ratio of true negatives to the sum of true negatives and false positives) with 95% confidence intervals were calculated using IBM SPSS Statistics (IBM, 2011). Because the DSM XC queries only current mental health symptoms, we anticipated that individuals with a mental disorder that is well-controlled with psychotropic medication would be missed (i.e., false negatives). Thus, we conducted an exploratory analysis to quantify the impact of a single additional piece of clinical information, whether or not the individual reported psychotropic medication use, on the sensitivity of the DSM XC. Because the addition of this information could only move individuals from the false negative to the true positive category, by construction this adjustment does not affect specificity.
3. Results
During a 16-month period of study recruitment from October 2017 to January 2019, 592 volunteers signed electronic consent and completed the online surveys. Those who could not be subsequently contacted (n = 86) were excluded from analysis. The final sample of 506 volunteers who could be further assessed ranged in age from 18 to 89 years (mean 35.2 ± 13.9), and the majority were female (65.6%) and white (62.5%) (Table 1).
Table 1:
Demographics
| Demographic Characteristic | Total Sample |
|---|---|
| Age, years (mean ± SD) | 35.2 ± 13.9 |
| Sex | |
| Male, n (%) | 171 (33.8%) |
| Female, n (%) | 332 (65.6%) |
| No response, n (%) | 3 (0.6%) |
| Race | |
| White, n (%) | 316 (62.5%) |
| Black, n (%) | 83 (16.4%) |
| Asian, n (%) | 64 (12.6%) |
| Other/multiple, n (%) | 43 (8.5%) |
| Education | |
| < college degree, n (%) | 76 (15.0%) |
| Associates, n (%) | 16 (3.2%) |
| Bachelors, n (%) | 226 (44.7%) |
| Advanced, n (%) | 188 (37.2%) |
Of 506 volunteers, 19% had previously participated in NIH research, and study clinicians therefore conducted 94 medical record reviews. Based on the review of online responses and medical records, the clinical team performed phone screens on 161/506 (32%) volunteers to verify and supplement existing information. All available information was used by the clinical team to reach consensus on a “likely eligible” or “likely ineligible” determination. Using this process, 251/506 (50%) of volunteers were “likely eligible” and 255/506 (50%) were “likely ineligible” (Figure 1). Mental health was the most common reason for ineligibility (n = 159/255, 62%), followed by medical health, substance use, and other reasons (Figure 2). While substance use was categorized independently, many volunteers who were ineligible because of substance use were also ineligible for mental health reasons.
Figure 2:
All Reasons for Ineligibility
Although volunteers were presumably interested in participation based on a self-perception of good health, about one-third (n = 159/506, 31%) were flag positive on the DSM XC, generally defined as at least one rating of 2 (mild) or above. The most frequently endorsed DSM XC items rated at 2 or above were “Feeling nervous, anxious, frightened, worried, or on edge” (13%), “Problems with sleep that affected your sleep quality over all” (12%), and “Sleeping less than usual, but still have a lot of energy” (8%) (Figure 3).
Figure 3:
Frequency of Flagged DSM XC Items*
Flag positive on the DSM XC predicted a clinical decision of study ineligibility due to mental health concerns with a sensitivity of 64.2% (95% CI: 56.5 – 71.3) and a specificity of 83.9% (95% CI: 79.7 – 87.5) (Table 2). We performed a second analysis in which DSM XC responses were combined with information about current psychotropic medication use, which improved sensitivity to 81.8% (95% CI: 75.3 – 87.2) (Table 2).
Table 2:
Sensitivity-Specificity Analyses
| | Eligible | Ineligible | Sensitivity | Specificity |
|---|---|---|---|---|
| DSM XC Only | | | 64.2% [56.5 – 71.3] | 83.9% [79.7 – 87.5] |
| Flag + | 56 | 102 | | |
| Flag − | 291 | 57 | | |
| DSM XC and Meds | | | 81.8% [75.3 – 87.2] | 83.9% [79.7 – 87.5] |
| Flag + | 56 | 130 | | |
| Flag − | 291 | 29 | | |

The “Eligible” and “Ineligible” columns reflect the clinical team's eligibility decision.
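As a worked check of Table 2, the sensitivity and specificity can be recomputed directly from the 2×2 counts. This sketch is not part of the original analysis (the paper used SPSS, and the CI method is not stated); a Wilson score interval is assumed here because it reproduces the reported intervals to within rounding.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts from Table 2, "DSM XC Only": ineligible flag+ are true positives,
# ineligible flag- are false negatives, eligible flag- are true negatives,
# eligible flag+ are false positives.
sens, spec = sens_spec(tp=102, fn=57, tn=291, fp=56)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # 64.2%, 83.9%

# Adding psychotropic-medication history moves 28 false negatives to
# true positives ("DSM XC and Meds" rows); specificity is unchanged.
sens2, _ = sens_spec(tp=130, fn=29, tn=291, fp=56)
print(f"adjusted sensitivity {sens2:.1%}")  # 81.8%
```

Note that because the medication adjustment only reclassifies false negatives, the eligible column (56 and 291) is identical in both halves of the table, which is why specificity is reported twice with the same value.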
In-person assessment, which included the Structured Clinical Interview for DSM-5 RV (SCID), was performed only for participants deemed “likely eligible” who elected to attend a clinic visit, so it was not possible to evaluate the correspondence of the DSM XC with psychiatric diagnosis in the full sample. However, it is worth noting that among the 148 volunteers who received a SCID interview (out of 251 “likely eligible” participants, 59%), only 10 (7%) were found to have a current Axis I diagnosis that made them ineligible to be referred to other NIMH studies. This rate is much lower than that reported by Shtasel et al. (1991), demonstrating the effectiveness of the overall screening process.
4. Discussion
In this secondary analysis, we demonstrated that it is feasible to administer a modified online version of the DSM XC, and we found preliminary psychometric support for its use in screening healthy research volunteers for study eligibility. The APA-recommended thresholds for DSM XC items helped to identify specific areas of concern or distress, and study clinicians anecdotally reported that the measure was very useful in guiding their phone screen questions, engaging the volunteer, and facilitating the gathering of additional clinical history.
The specificity of the DSM XC for predicting whether the clinical team would eventually decide to invite the volunteer for an in-person screen was good, indicating that when the DSM XC suggested that the volunteer did not have a mental health issue, the clinical team, using all available information, usually agreed. However, the sensitivity was poor, reflecting a relatively high rate of false negatives. As sensitivity is often a priority when developing screening tools (Maxim et al., 2014), we completed a second analysis in which the DSM XC data were supplemented with a single additional piece of clinical history, current psychotropic medication use, which improved sensitivity to a more acceptable level. We anticipated this improvement, since the DSM XC queries current symptoms rather than current disorders, so an individual with a well-controlled mental disorder would not flag positive. The addition of clinical history is consistent with APA guidance that the DSM XC should be followed by a clinical assessment (Narrow et al., 2013).
A valid transdiagnostic screener is helpful for mental health research with healthy volunteers. As demonstrated here, most volunteers for our study reported low levels of mental health symptoms, but there were some outliers. In this study, the most frequently flagged item on the DSM XC measure was “Feeling nervous, anxious, frightened, worried, or on edge,” followed by the two questions about sleep problems. Indeed, the study team’s decision of likely ineligibility was most commonly based on mental health reasons; it is possible that open recruitment of healthy volunteers for mental health studies may be biased toward those with mental health problems (Shtasel et al., 1991; Adami et al., 2002).
Qualitative review of the response profiles may be instructive when considering use of the DSM XC for screening of healthy volunteers. A review of item content suggests that the meaning of some items may lack validity, though an item analysis was outside the scope of this study. For instance, the item “sleeping less than usual but still have a lot of energy” was endorsed much more frequently than the other mania domain item, “starting lots more projects than usual or doing more risky things than usual”. This is consistent with the previous finding of unreliability of the mania items reported by Narrow and colleagues (2013). A review of the false positives indicated that many volunteers who flagged on the DSM XC reported on the clinical phone screen that the symptoms were due to situational factors. For example, one participant endorsed the item “problems with sleep that affected your sleep quality” due to a recent move across the country, and another endorsed the anxiety items because of upcoming college final exams. Among the false negatives, we found that some volunteers had accurately reported their current mental health symptoms but had significant histories of mental health problems that rendered them ineligible for research.
This study has several limitations stemming from the fact that it was a secondary analysis of a study which was not designed to evaluate the DSM XC. We removed five items from the DSM XC; thus, the sensitivity and specificity obtained in our analysis do not reflect those of the full instrument. Of note, others may want to omit the self-harm item if the measure is administered online; responses may not be monitored in real time and thus appropriate follow-up for positive responses may be limited. For this study, suicidal ideation was evaluated during subsequent in-person meetings. Further, we evaluated the use of the DSM XC as a screener for predicting the mental health eligibility decision of the clinical team, rather than for predicting an eventual DSM-5 diagnosis. Thus, the results of this study directly inform the use of the DSM XC for initial screening of healthy volunteers for participation in research, rather than for identifying mental disorders in volunteers. For this reason, these results are not likely generalizable to clinical use. However, a very small proportion of those volunteers who were deemed eligible for participation and received a SCID received a mental health diagnosis, suggesting at least that the specificity of the DSM XC for formal diagnosis is good. The final major limitation related to the design of the study is that the clinical team was not blind to the DSM XC responses when making study eligibility decisions. This feature of the study dictates that the estimates of sensitivity and specificity are an upper limit, as knowledge of the DSM XC result during the eligibility decision could have enhanced correspondence between the two. A more minor limitation, true of all self-report screeners, is that research participants may not always be honest or forthcoming when hoping to participate in studies (Pavletic and Pao, 2017).
Despite these limitations, this study addresses a gap in the literature concerning the psychometric profile of the DSM XC, which has been recommended for clinical research by the NIMH Extramural Research Program (National Institute of Mental Health, 2015) and recommended for further evaluation by the APA. This is the first study to describe the use of the DSM XC as an initial screening tool for healthy research volunteers, and our results support its use, particularly when supplemented by clinical history. Because these findings may not generalize to other research or clinical settings, we recommend that researchers and clinicians further evaluate the DSM XC in different populations and settings.
Highlights.
A transdiagnostic measure of current mental health symptomatology is needed
The DSM XC was used to screen healthy volunteers for mental health research
Screen positives on DSM XC were compared to ineligibility decisions
Supplemented with clinical history, the DSM XC had good specificity and sensitivity
Our findings provide preliminary support for the inclusion of the DSM XC when screening volunteers for mental health research
Funding:
The research was funded by the Intramural Research Program of the National Institute of Mental Health (ZIAMH002922). The sponsor had no role in study design; in the collection, analysis and interpretation of data; in the writing of the report; and in the decision to submit the article for publication.
Appendix
Appendix A:
DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure—Adult
Appendix B:
LEVEL 2—Substance Use—Adult
During the past TWO (2) WEEKS, about how often did you use any of the following medicines ON YOUR OWN, that is, without a doctor’s prescription, in greater amounts or longer than prescribed?

| | Not at all | One or two days | Several days | More than half the days | Nearly every day | Item Score (Clinician Use) |
|---|---|---|---|---|---|---|
| a. Painkillers (like Vicodin) | 0 | 1 | 2 | 3 | 4 | |
| b. Stimulants (like Ritalin, Adderall) | 0 | 1 | 2 | 3 | 4 | |
| c. Sedatives or tranquilizers (like sleeping pills or Valium) | 0 | 1 | 2 | 3 | 4 | |
| Or drugs like: | | | | | | |
| d. Marijuana | 0 | 1 | 2 | 3 | 4 | |
| e. Cocaine or crack | 0 | 1 | 2 | 3 | 4 | |
| f. Club drugs (like ecstasy) | 0 | 1 | 2 | 3 | 4 | |
| g. Hallucinogens (like LSD) | 0 | 1 | 2 | 3 | 4 | |
| h. Heroin | 0 | 1 | 2 | 3 | 4 | |
| i. Inhalants or solvents (like glue) | 0 | 1 | 2 | 3 | 4 | |
| j. Methamphetamine (like speed) | 0 | 1 | 2 | 3 | 4 | |
| Total Score: | | | | | | |
Footnotes
Research Data: Data repository https://nda.nih.gov/ and https://clinicaltrials.gov/ (NCT03304665)
Declarations of interest: none
References
- Adami H, Elliott A, Zetlmeisl M, McMahon R, Thaker G, 2002. Use of telephone screens improves efficiency of healthy subject recruitment. Psychiatry Research 113, 295–301. 10.1016/S0165-1781(02)00265-2
- American Psychiatric Association, 2013. Online Assessment Measures [WWW Document]. URL http://www.psychiatry.org/psychiatrists/practice/dsm/dsm-5/online-assessment-measures (accessed 8.2.19)
- American Psychiatric Association, 2019. Online Assessment Measures [WWW Document]. URL https://www.psychiatry.org/psychiatrists/practice/dsm/educational-resources/assessment-measures (accessed 8.2.19)
- Bastiaens L, Galus J, 2018. The DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure as a Screening Tool. Psychiatr Q 89, 111–115. 10.1007/s11126-017-9518-7
- Bravo AJ, Villarosa-Hurlocker MC, Pearson MR, Protective Strategies Study Team, 2018. College student mental health: An evaluation of the DSM-5 self-rated Level 1 cross-cutting symptom measure. Psychol Assess 30, 1382–1389. 10.1037/pas0000628
- Clarke DE, Kuhl EA, 2014. DSM-5 cross-cutting symptom measures: a step towards the future of psychiatric care? World Psychiatry 13, 314–316. 10.1002/wps.20154
- First MB, Williams JBW, Karg RS, Spitzer RL, 2015. Structured Clinical Interview for DSM-5—Research Version (SCID-5 for DSM-5, Research Version; SCID-5-RV). American Psychiatric Association, Arlington, VA.
- Grinker RR, 1962. “Mentally Healthy” Young Males (Homoclites): A Study. Arch Gen Psychiatry 6, 405. 10.1001/archpsyc.1962.01710240001001
- Human Connectome Project | Mapping the human brain connectivity [WWW Document], n.d. URL http://www.humanconnectomeproject.org/ (accessed 8.2.19)
- IBM, 2011. Can SPSS Statistics produce epidemiological statistics from 2×2 tables such as positive and negative predictive values, sensitivity, specificity and likelihood ratios? [WWW Document]. URL https://www.ibm.com/support/pages/can-spss-statistics-produce-epidemiological-statistics-2×2-tables-such-positive-and-negative-predictive-values-sensitivity-specificity-and-likelihood-ratios (accessed 8.2.19)
- Jones KD, 2012. Dimensional and Cross-Cutting Assessment in the DSM-5. Journal of Counseling & Development 90, 481–487. 10.1002/j.1556-6676.2012.00059.x
- Kessler RC, Demler O, Frank RG, Olfson M, Pincus HA, Walters EE, Wang P, Wells KB, Zaslavsky AM, 2005. Prevalence and treatment of mental disorders, 1990 to 2003. N Engl J Med 352, 2515–2523. 10.1056/NEJMsa043266
- Kroenke K, Spitzer RL, Williams JBW, 2001. The PHQ-9: Validity of a brief depression severity measure. J Gen Intern Med 16, 606–613. 10.1046/j.1525-1497.2001.016009606.x
- Maxim LD, Niebo R, Utell MJ, 2014. Screening tests: a review with examples. Inhalation Toxicology 26, 811–828. 10.3109/08958378.2014.955932
- Mental Health America, n.d. Mental Health Screening Tools | Screening 2 Supports [WWW Document]. URL https://screening.mentalhealthamerica.net/screening-tools (accessed 8.2.19)
- Mościcki EK, Clarke DE, Kuramoto SJ, Kraemer HC, Narrow WE, Kupfer DJ, Regier DA, 2013. Testing DSM-5 in routine clinical practice settings: feasibility and clinical utility. Psychiatr Serv 64, 952–960. 10.1176/appi.ps.201300098
- Mulvaney-Day N, Marshall T, Downey Piscopo K, Korsen N, Lynch S, Karnell LH, Moran GE, Daniels AS, Ghose SS, 2018. Screening for Behavioral Health Conditions in Primary Care Settings: A Systematic Review of the Literature. J Gen Intern Med 33, 335–346. 10.1007/s11606-017-4181-0
- Narrow WE, Clarke DE, Kuramoto SJ, Kraemer HC, Kupfer DJ, Greiner L, Regier DA, 2013. DSM-5 field trials in the United States and Canada, Part III: development and reliability testing of a cross-cutting symptom assessment for DSM-5. Am J Psychiatry 170, 71–82. 10.1176/appi.ajp.2012.12071000
- National Institute of Mental Health, n.d. Mental Illness [WWW Document]. URL https://www.nimh.nih.gov/health/statistics/mental-illness.shtml (accessed 8.2.19)
- NITRC: Adolescent Brain Cognitive Development (ABCD) Study: tool/resource info [WWW Document], n.d. URL https://www.nitrc.org/projects/abcd_study/ (accessed 8.2.19)
- NOT-MH-15-009: Notice Announcing Data Harmonization for NIMH Human Subjects Research via the PhenX Toolkit [WWW Document], n.d. URL https://grants.nih.gov/grants/guide/notice-files/NOT-MH-15-009.html (accessed 8.2.19)
- Olfson M, Wang S, Wall M, Marcus SC, Blanco C, 2019. Trends in Serious Psychological Distress and Outpatient Mental Health Care of US Adults. JAMA Psychiatry 76, 152. 10.1001/jamapsychiatry.2018.3550
- World Health Organization, 2001. AUDIT: the Alcohol Use Disorders Identification Test: guidelines for use in primary health care. Screening and brief intervention for alcohol problems in primary care.
- Pavletic A, Pao M, 2017. Safety, Science, or Both? Deceptive Healthy Volunteers: Psychiatric Conditions Uncovered by Objective Methods of Screening. Psychosomatics 58, 657–663. 10.1016/j.psym.2017.05.001
- Schechter D, Strasser TJ, Santangelo C, Kim E, Endicott J, 1994. “Normal” control subjects are hard to find: A model for centralized recruitment. Psychiatry Research 53, 301–311. 10.1016/0165-1781(94)90057-4
- Sheehan DV, Lecrubier Y, Sheehan KH, Amorim P, Janavs J, Weiller E, Hergueta T, Baker R, Dunbar GC, 1998. The Mini-International Neuropsychiatric Interview (M.I.N.I.): the development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. J Clin Psychiatry 59 Suppl 20, 22–33.
- Shtasel DL, Gur RE, Mozley PD, Richards J, Taleff MM, Heimberg C, Gallacher F, Gur RC, 1991. Volunteers for biomedical research. Recruitment and screening of normal controls. Arch Gen Psychiatry 48, 1022–1025. 10.1001/archpsyc.1991.01810350062010