Published in final edited form as: J Clin Child Adolesc Psychol. 2010 Nov;39(6):885–896. doi: 10.1080/15374416.2010.517169

Understanding barriers to evidence-based assessment: Clinician attitudes toward standardized assessment tools

Amanda Jensen-Doss 1, Kristin M Hawley 2
PMCID: PMC3058768  NIHMSID: NIHMS236457  PMID: 21058134

Abstract

In an era of evidence-based practice, why are clinicians not typically engaged in evidence-based assessment? To begin to understand this issue, a national multidisciplinary survey was conducted to examine clinician attitudes toward standardized assessment tools. A total of 1442 child clinicians provided opinions about the psychometric qualities of these tools, their benefit over clinical judgment alone, and their practicality. Doctoral-level clinicians and psychologists expressed more positive ratings in all three domains than master’s-level clinicians and non-psychologists, respectively, although only the disciplinary differences remained significant when predictors were examined simultaneously. All three attitude scales were predictive of standardized assessment tool use, although practical concerns were the strongest and only independent predictor of use.


It is clear that we are in an era of increased emphasis on evidence-based practice (EBP) in the treatment of mental health concerns. Over the past decade, professional organizations (e.g., APA Presidential Task Force on Evidence-Based Practice, 2006), state mental health agencies (e.g., Chorpita & Donkervoet, 2005; Jensen-Doss, Hawley, Lopez, & Osterberg, 2009), and federal funders of mental health services and research (e.g., National Institute of Mental Health, 2010; Substance Abuse & Mental Health Services Administration, 2010) have all endorsed increased use of EBP in clinical practice settings in order to improve the quality of mental health services. Although EBP is typically defined as encompassing both effective treatment and assessment practices (APA Presidential Task Force on Evidence-Based Practice), the focus of most EBP efforts has been on evidence-based treatments (EBTs), with less concomitant attention to evidence-based assessment (EBA; Hunsley & Mash, 2007).

There are several reasons why a failure to attend to EBA can undermine these efforts to improve mental health services. First, the results of assessments, such as diagnoses, are often used to communicate about both research findings and clinical clients. To the extent that clinicians are not engaged in EBA, there is a risk that clinicians and researchers will not be able to communicate effectively with one another, making it difficult for clinicians to understand and make appropriate use of research findings in their practice (Jensen & Weisz, 2002). In addition, as EBTs are often designed for use with specific disorders, providing therapists with training on EBTs without associated training in evidence-based assessment of psychopathology and other constructs important to treatment (e.g., impairment, child maltreatment history) may lead clinicians to use EBTs with inappropriate clients or without all of the relevant clinical information needed for their effective use (Jensen-Doss & Weisz, 2008; Weisz & Addis, 2006). Finally, independent of whether clinicians use EBTs, there is evidence that accurate diagnosis (e.g., Jensen-Doss & Weisz, 2008; Pogge, et al., 2001) and monitoring of treatment progress (Lambert, et al., 2003) may be associated with better treatment outcomes.

Despite the importance of EBA, much available evidence suggests that clinicians are not engaged in assessment practices consistent with EBA, including what is arguably the core component of EBA: use of standardized assessment tools with research support for their reliability and validity. Surveys of practicing psychologists suggest that the unstructured clinical interview is the most common, and often the only, assessment method used (Anderson & Paulosky, 2004; Cashel, 2002), despite evidence that this approach is prone to a number of biases undermining its accuracy (Angold & Fisher, 1999; Garb, 1998, 2005). The results of unstructured clinical interviews rarely agree with those generated by structured diagnostic interviews (SDIs; Rettew, Lynch, Achenbach, Dumenci, & Ivanova, 2009) and evidence suggests that they are less valid than the results of SDIs (Basco, et al., 2000; Jewell, Handwerk, Almquist, & Lucas, 2004; Tenney, Schotte, Denys, van Megen, & Westenberg, 2003). Psychiatrists have been found to have similar practices, with 55.3% to 83.3% (depending on the client’s problem area) saying they never use standardized assessment tools for case identification and severity measurement (Gilbody, House, & Sheldon, 2002).

Surveys comparing therapists’ practices to “best practice” assessment guidelines have also found very low percentages of clinicians adhering to those guidelines. For example, fewer than 1/3 of psychologists follow ADHD assessment recommendations (Handler & DuPaul, 2005) and only 3.5% of couples therapists follow guidelines for domestic violence assessment (Schacht, Dimidjian, George, & Berns, 2009). Finally, surveys suggest that formal treatment outcome monitoring is also infrequently used by psychologists (Hatfield & Ogles, 2004) and psychiatrists (Gilbody, et al., 2002), despite evidence that clinicians are poor judges of client progress when they evaluate it informally (Hannan, et al., 2005; Love, Koob, & Hill, 2007).

Although they represent the majority of child clinicians in the workforce (Aarons, Woodbridge, & Carmazzi, 2003), relatively little is known about the assessment practices of master’s level social workers, marriage and family therapists (MFTs), and counselors. Anderson and Paulosky’s (2004) survey included some master’s level clinicians, but they did not specify clinician discipline or examine assessment practices separately by training level. Palmiter (2004) surveyed 309 participants in an assessment workshop, including psychologists, social workers, counselors, and psychiatric nurses, about their assessment practices, and found that psychologists were significantly more likely than professionals from other disciplines to utilize several assessment methods, including rating scales and personality testing. Similarly, Frauenhoffer, Ross, Gfeller, Searight and Piotrowski (1998) found that psychologists spend more time on assessment and use a wider range of assessment tools than social workers and counselors. Thus, although limited, the available evidence suggests that non-psychologists or master’s level clinicians may be less likely to engage in EBA than psychologists or doctoral level clinicians.

Although several studies indicate that clinicians are unlikely to use EBA, very little is known about why they make the assessment choices they do. Garland, Kruse and Aarons (2003) interviewed 50 child clinicians (counselors, social workers and psychologists) about their experiences utilizing standardized outcome measures. Clinicians expressed practical concerns associated with the measures (e.g., paperwork burden), reservations about the relevance of the measures to their clients (e.g., ethnic minorities), and skepticism that the measures actually tell clinicians anything they could not learn from their other interactions with the family. Similarly, utilizing a survey of 996 psychologists, Hatfield and Ogles (2007) found that clinicians who do not use treatment outcome measures endorse concerns about their practicality and utility, while those who do use them say they do so because they believe there is a benefit to the treatment process. However, Hatfield and Ogles only asked non-users about barriers and users about benefits, making it difficult to determine the degree to which these factors differentiate between these two groups of providers. The participants in Cashel’s (2002) survey of psychologists indicated that managed care had led to decreased use of many standardized assessment instruments, although the potential reasons for this impact were not specified. The participants in Palmiter’s (2004) workshop listed “organizational pressures,” “ethical concerns” and “theoretical orientation” as the top three influences on their assessment practices, but did not specify how these factors influence their practices. Finally, in their survey of psychiatrists, Gilbody and colleagues (2002) provided an open-ended section for participants to describe their views toward standardized assessment. Of the participants providing responses, about 1/3 said they felt the measures do not adequately capture clients’ problems, about 1/4 questioned the psychometric properties of the measures, and about 1/6 expressed practical concerns.

A better understanding of clinicians’ attitudes toward standardized assessment tools would be very useful to inform efforts to train clinicians and encourage their use in clinical practice. According to the Theory of Planned Behavior (TPB; Ajzen, 1991), the performance of a given behavior is a function of individuals’ intentions to do so and their perceived behavioral control, or their beliefs about their abilities to engage in the behavior. The TPB posits that intentions are a function of attitudes toward the behavior, perceptions of social pressure to perform the behavior, and perceived behavioral control. Meta-analyses of research related to the TPB suggest that attitudes toward a given behavior are good predictors of an individual’s intention to perform the behavior and that intentions are subsequently related to behavior (Ajzen; Armitage & Conner, 2001; Godin & Kok, 1996), suggesting that improving clinicians’ attitudes toward standardized assessment tools could lead to increased use through the pathway of increased intention. In addition, knowing clinicians’ existing attitudes would help EBA trainings to directly address clinician concerns about using standardized assessment tools.

Understanding clinician characteristics predictive of positive and negative attitudes toward standardized assessment tools would also help EBA training efforts be more targeted to specific segments of the clinician population. It would be particularly useful to understand differences in attitudes between clinicians with different levels of training (i.e., doctoral- versus master’s-level clinicians) and from different disciplinary backgrounds (i.e., psychologists, social workers, MFTs, counselors, and psychiatrists). Given that these characteristics are often associated with different roles in organizations, any variability in attitudes associated with therapist degree or discipline might impact efforts to implement EBA in clinical practice settings. For example, if doctoral-level clinicians, who increasingly play roles as administrators and supervisors, have more positive attitudes than master’s-level clinicians, who are becoming the most prevalent front-line service providers, this might result in EBA implementation efforts with administrative buy-in, but little cooperation from direct service providers. In terms of practice characteristics, given the resource challenges faced by many private practitioners, it is possible that these clinicians might face more challenges using standardized assessment tools than other practitioners. Also, given the need for additional attention to be paid to issues of diversity in measure development (Hunsley & Mash, 2007) and concerns raised by clinicians about the relevance of standardized assessment tools to diverse clients (Garland, Kruse, et al., 2003), clinicians who see more clients with characteristics such as low socioeconomic or minority status might have less positive views of standardized assessment. Finally, exploring therapist demographic characteristics as predictors of attitudes would help identify clinicians who might be challenging to train in EBA.

The present study was designed to document the attitudes of child-serving clinicians toward evidence-based assessment. Utilizing a large, national sample of psychologists, social workers, MFTs, counselors, and psychiatrists, we examined clinician views about the psychometric properties of these tools, the practicality of their use, and whether they provide additional information beyond using clinical judgment alone. We also examined whether demographic (age, gender, ethnicity), professional (degree, discipline), and practice characteristics (private practice setting, proportion low income and ethnic minority clients) serve as predictors of these attitudes. Finally, to assess the predictive validity of these attitudes, we examined their association with self-reported standardized assessment tool use.

Method

Participants

Participants were 1442 mental health professionals who were participating in a national survey intended to characterize treatment as usual for children and adolescents with anxiety, depression, and disruptive behavior disorders. The sample was primarily female (61.8%) and Caucasian (90.5%) and was nearly evenly divided between master’s- (43.7%) and doctoral-level (56.3%) clinicians. Table 1 details the demographic and professional characteristics of participants, as well as the characteristics of their practice settings.

Table 1.

Provider Characteristics

Demographic Characteristics
% (N) Female a 61.8% (891)
M (SD) Age b 52.60 (10.00)
Ethnicity c % (N)
 White/Caucasian (non-Hispanic) 90.5% (1301)
 Hispanic/Latino(a) 2.4% (35)
 Black/African American 2.6% (37)
 Asian/Pacific Islander 2.6% (37)
 Mixed/Other 1.9% (28)

Professional Characteristics
Highest Degree Completed d % (N)
 Masters 43.7% (630)
 Doctoral 56.3% (812)
Professional Discipline d % (N)
 Counselor 17.3% (250)
 Marriage and Family Therapist (MFT) 16.4% (236)
 Social Worker 17.9% (258)
 Psychologist 28.6% (412)
 Psychiatrist 19.8% (286)

Practice Characteristics
Practice Setting d, e % (N)
 Elementary, Middle or High School 10.0% (144)
 Higher Education Setting 10.5% (152)
 Outpatient Clinic 18.6% (268)
 Private Practice 61.1% (881)
 Day Treatment Facility 1.7% (25)
 Residential Facility or Group Home 4.0% (58)
 Inpatient Hospital or Medical Clinic 5.8% (84)
 Managed Care Organization 1.6% (23)
 Other 14.4% (208)
M (SD) Percent of caseload are ethnic minorities f 32.15 (27.64)
M (SD) Percent of caseload are low income g 35.00 (32.38)
a n = 1441;
b n = 1417;
c n = 1438;
d n = 1442;
e Percentages for practice setting do not sum to 100 because providers could choose more than one;
f n = 1421;
g n = 1529

Procedures

The Tailored Design Method (Dillman, 2000) was used to develop a survey covering provider demographics, work setting and caseload characteristics, and assessment and treatment strategies used (Hawley, Cook, & Jensen-Doss, 2009). The survey was then piloted with a sample of 14 mental health providers. Providers were asked to complete the survey, including an open-ended section for comments, and then participate in focus groups to further solicit feedback regarding the measure itself (e.g., Do the items include jargon not commonly used by all disciplines?) and the methods planned for soliciting a representative sample of respondents (e.g., What modifications would increase the likelihood that they would complete it?). The survey was then revised based on this feedback. A second mail-out pilot study of 500 providers was then conducted to determine the optimal level of incentives to utilize in the final survey; this pilot indicated that a noncontingent $2 bill would be the most cost-effective incentive (Hawley, et al., 2009).

The final survey was mailed to 5000 mental health providers, 1000 selected from each of five professional organizations (American Counseling Association, ACA; American Association for Marriage and Family Therapy, AAMFT; American Academy of Child and Adolescent Psychiatry, AACAP; American Psychological Association, APA; National Association of Social Workers, NASW). Each of these organizations has the largest membership of any guild within its respective discipline and has procedures in place to obtain mailing lists of random, representative samples of its membership. From each guild, members within the 50 United States who indicated clinical interest or practice with children, adolescents, or families were randomly selected for participation in the survey. Participants who received the survey, but who did not provide services to youths, were asked to complete one item regarding their practice and return the rest of the survey uncompleted.

Clinicians received up to five separate mailings. The first mailing was sent to all 5000 providers and consisted of a personally addressed, hand-signed pre-notice letter that briefly informed the clinicians of the upcoming survey. The second mailing, also sent to all 5000 participants, included a personalized, hand-signed cover letter, $2 bill, survey, and pre-addressed, hand-stamped return envelope. The third mailing consisted of a personalized, signed postcard that included a thank you for those who had returned the survey and a reminder for nonrespondents. The fourth mailing was sent to nonrespondents only and included a second personalized cover letter, another copy of the survey and a stamped return envelope. The fifth mailing was sent to any final nonrespondents and consisted of a third personalized cover letter, a third copy of the survey and a final business reply return envelope. Unsigned consent was used in order to maintain participant anonymity; consent information was conveyed in the survey cover letter and participants consented by participating in the survey. All study procedures were approved by the Institutional Review Board at the University of Missouri.

Of the 5000 individuals selected for participation, 347 (6.9%) had undeliverable addresses. Of the 4653 individuals contacted, 2863 (61.6%) responded to the survey [1639 (35.2%) did not respond and 151 (3.2%) declined participation]. Of the responders, 1143 (37.9%) indicated they did not provide services to youths. Of the 1720 participants who indicated they do work with children, 1519 indicated they conduct assessment with youths presenting for treatment and were instructed to complete the items used in this study. Finally, 5 participants were excluded from the present sample because their highest degree obtained was a bachelor’s degree, which we felt introduced unnecessary variability into the sample. Excluding these individuals, along with 2 others who did not indicate their highest degree, yielded a final possible sample size of 1512.

Of these possible participants, 1442 provided sufficient data on the standardized assessment attitude scales (see measures, below) to generate a score for them on at least one scale, leading to their inclusion in the final sample for this paper. The 4.6% of individuals excluded from the sample for missing data were significantly more likely to be women (87.1% vs. 61.8%; χ2 = 16.32, p < .001) and hold master’s degrees (77.1% vs. 43.7%; χ2 = 30.16, p < .001) than the final sample. Based on reviews of previous studies of clinicians (Garland, Aarons, Hawley, & Hough, 2003; Jensen-Doss, et al., 2009; Weisz, 2010) and U.S. Census Bureau data for the category of “Educational, Health, and Social Service Providers” (U.S. Census Bureau, 2003), it was anticipated the sample would be approximately 75% female, with 82% Caucasian, 13% African American, 7.5% Hispanic, 4.7% Asian American/Pacific Islander, and 0.7% Other. Examination of Table 1 indicates that the sample obtained may have somewhat under-represented females, African Americans, Hispanics, and Asian/Pacific Islanders.

Measures

All participants provided self-reports of their demographic, professional, and practice characteristics, rated their attitudes toward standardized assessment tools, and completed items regarding their use of standardized assessment measures.

Attitudes toward Standardized Assessment Scales (ASA)

The ASA, a measure developed for the current study, was used to assess participants’ views toward standardized assessment measures in three areas (see Table 2). Items were written after a review of previous studies of clinician attitudes toward EBA (Garland, Kruse, et al., 2003; Gilbody, et al., 2002) and theories about why clinicians may or may not engage in EBA (Jensen Doss, 2005; Mash & Hunsley, 2005). They were then reviewed by the second author who provided input on additional domains to be measured and revised the items for clarity and consistency with the items for the larger survey. They were then subjected to the piloting procedures described above to obtain feedback from clinicians about their clarity and face validity. The pilot study led to the deletion of four items due to clinicians finding them redundant or difficult to understand and the modification of four others for clarity or increased relevance to practice. The entire survey was also revised to use the term “children and families” to refer to clients, as providers from the various disciplines could not agree on another term such as “clients,” “patients,” or “consumers.”

Table 2.

Descriptive Statistics for the Attitudes Toward Standardized Assessment Scales (ASA) Items and Scales

Item N M (SD) d1
Benefit over Clinical Judgment 1439 2.95 (.68) −0.077
 Using clinical judgment to diagnose children is superior to using standardized assessment measures.* 3.16 (.96) 0.17
 Standardized measures don’t capture what’s really going on with children and their families.* 3.11 (.95) 0.12
 Clinical problems are too complex to be captured by a standardized measure.* 3.02 (.98) 0.015
 Standardized measures provide more useful information than other assessments like informal interviews or observations. 2.50 (.82) −0.62
 Standardized measures don’t tell me anything I can’t learn from just talking to children and their families.* 2.47 (1.06) −0.50
Psychometric Quality 1428 3.78 (.50) 1.57
 Clinicians should use assessments with demonstrated reliability and validity. 4.20 (.83) 1.44
 Standardized measures help with accurate diagnosis. 3.91 (.77) 1.18
 Standardized measures help detect diagnostic comorbidity (presence of multiple diagnoses). 3.67 (.72) 0.94
 Standardized measures help with differential diagnosis (deciding between 2 diagnoses). 3.64 (.78) 0.83
 Standardized measures overdiagnose psychopathology.* 2.84 (.89) −0.18
 Most standardized measures aren’t helpful because they don’t map on to DSM diagnostic criteria.* 2.45 (.84) −0.65
 It is not necessary for assessment measures to be standardized in research studies.* 1.68 (.84) −1.57
Practicality 1404 3.19 (.56) 0.34
 Standardized measures can efficiently gather information from multiple individuals (e.g., children, parents, teachers). 3.91 (.79) 1.15
 Standardized assessments are readily available in the language my children and their families speak. 3.34 (1.12) 0.30
 There are few standardized measures valid for ethnic minority children and their families.* 3.32 (.82) 0.39
 I have adequate training in the use of standardized measures. 3.25 (1.24) 0.21
 Standardized diagnostic interviews interfere with establishing rapport during an intake.* 3.04 (1.09) 0.035
 Standardized measures take too long to administer and score.* 2.99 (1.07) −0.012
 Standardized symptom checklists are too difficult for many children and their families to read or understand.* 2.72 (.92) −0.30
 Copyrighted standardized measures are affordable for use in practice. 2.71 (.99) −0.29
 Completing a standardized measure is too much of a burden for children and their families.* 2.69 (.93) −0.33
 The information I receive from standardized measures isn’t worth the time I spend administering, scoring, and interpreting the results.* 2.58 (1.08) −0.39
* Item was reverse-scored before it was included in the scale score.
1 Cohen’s d effect size, comparing each mean to the neutral value of 3.

All ASA items were rated on a 5-point Likert scale from 1 (Strongly Disagree) to 5 (Strongly Agree), and any negative items were reverse coded such that higher scale scores indicated more positive attitudes. Scales were scored by averaging items within a scale. The first scale, Benefit over Clinical Judgment (α = .75), consisted of 5 items assessing the extent to which standardized measures can improve upon the information obtained if clinicians relied on their clinical judgment alone. The second, Psychometric Quality (α = .72), consisted of 7 items assessing the extent to which clinicians believe standardized measures are reliable and valid and how much they value these psychometric properties. The third, Practicality (α = .75), consisted of 10 items assessing clinicians’ opinions about the feasibility of using standardized measures in practice.
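These scoring rules can be expressed in a few lines of code. The sketch below is not the authors’ scoring syntax; the item column names (e.g., judg_1) are hypothetical, and Cronbach’s alpha is computed with the standard formula.

```python
# Minimal sketch of the ASA scoring rules described above: reverse-code negative
# items, average items within each scale, and estimate internal consistency.
# Column names are hypothetical, not the survey's actual item labels.
import pandas as pd

def reverse_code(series: pd.Series, scale_max: int = 5, scale_min: int = 1) -> pd.Series:
    """Reverse a 1-5 Likert item so that higher values indicate more positive attitudes."""
    return (scale_max + scale_min) - series

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def score_scale(df: pd.DataFrame, items: list[str], reversed_items: list[str]) -> pd.Series:
    """Average the items of one ASA scale after reverse-coding the negatively worded items."""
    coded = df[items].copy()
    for col in reversed_items:
        coded[col] = reverse_code(coded[col])
    return coded.mean(axis=1)

# Hypothetical usage for the Benefit over Clinical Judgment scale:
# df["benefit"] = score_scale(df, ["judg_1", "judg_2", "judg_3", "judg_4", "judg_5"],
#                             reversed_items=["judg_1", "judg_2", "judg_3", "judg_5"])
```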

In addition to being supported by the internal consistency values, this scale structure was supported by a confirmatory factor analysis (CFA) conducted using the AMOS software (Arbuckle, 2008). Because the ASA scales were anticipated to correlate with one another, the factors were allowed to covary. Where the model modification indices suggested it would be appropriate, error terms were also allowed to covary within, but not between, factors. With these specifications, the CFA indicated adequate model fit, with a root-mean-square error of approximation (RMSEA) of .045 and a comparative fit index (CFI) of .935. These values fall within published guidelines for good fit: an RMSEA below .060 (Hu & Bentler, 1999) and a CFI above .90 (Raykov & Marcoulides, 2000).
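The CFA itself was run in AMOS; the sketch below only illustrates how the two fit indices reported here are conventionally computed from the model and baseline (independence-model) chi-square statistics. The input values in the usage comment are hypothetical, not the fitted model’s actual chi-squares.

```python
# Conventional formulas for the RMSEA and CFI fit indices, given model and
# baseline chi-square statistics; a worked sketch, not output from AMOS.
import math

def rmsea(chi2_model: float, df_model: int, n: int) -> float:
    """Root-mean-square error of approximation."""
    return math.sqrt(max(chi2_model - df_model, 0.0) / (df_model * (n - 1)))

def cfi(chi2_model: float, df_model: int, chi2_baseline: float, df_baseline: int) -> float:
    """Comparative fit index, bounded between 0 and 1."""
    d_model = max(chi2_model - df_model, 0.0)
    d_baseline = max(chi2_baseline - df_baseline, 0.0)
    if d_baseline == 0.0:
        return 1.0
    return 1.0 - d_model / max(d_model, d_baseline)

# Hypothetical check: rmsea(800.0, 200, 1442) is roughly .046 for an N of 1442.
```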

Use of standardized assessment instruments

Providers were asked to rate the frequency with which they use 5 types of standardized assessment tools: structured or standardized diagnostic interviews, formal mental status exams, standardized observational coding systems, standardized checklists, and formal clinician ratings of child/family symptoms or functioning. They were provided with a definition and examples of each type of measure, and asked to rate their use on a scale from 1 (almost never) to 5 (almost always). For the purposes of this paper, clinicians were considered to be users of standardized assessment tools if they used at least one type of instrument at a frequency at or above 4 (often). This approach, rather than a continuous approach such as averaging the ratings across types of measures, was selected to recognize that some standardized assessment tools may not be appropriate or applicable in some clinical settings. Dichotomizing this variable also had the advantage of differentiating committed users of these tools from those who use them infrequently or inconsistently. Of the 1411 providers with valid data on this section, 61.5% (n = 868) were classified as users of standardized assessment tools.
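The classification rule can be stated compactly in code. This is a minimal sketch under assumed column names (the survey’s actual variable names are not given here).

```python
# Classify a clinician as a standardized assessment tool "user" if any of the
# five instrument types is rated 4 (often) or higher; hypothetical column names.
import pandas as pd

USE_ITEMS = ["use_sdi", "use_mse", "use_obs", "use_checklist", "use_clin_rating"]

def classify_user(df: pd.DataFrame, threshold: int = 4) -> pd.Series:
    """Return 1 if at least one instrument type is used 'often' or more, else 0."""
    return (df[USE_ITEMS].max(axis=1) >= threshold).astype(int)
```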

Data Analyses

The study questions were addressed utilizing t tests and multiple regression for continuous dependent variables and logistic regression for categorical outcome variables. Prior to analysis, the continuous variables were examined for normality. Two independent variables, % minority caseload and % low income caseload, were positively skewed (zminority = 12.25, p <.001; zincome = 9.66, p <.001) and platykurtic (zminority = −2.93, p <.01; zincome = −7.57, p <.001); therefore, they were dichotomized at the median (25% for both variables). Therapist ethnicity was also dichotomized into Minority versus Caucasian, as the sample was more than 90% Caucasian.
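A sketch of these preliminary checks is below. It uses SciPy’s skew and kurtosis tests (which return z statistics) followed by a median split; the variable names are hypothetical, and missing values would need explicit handling in a full analysis.

```python
# Normality check and median split for a skewed caseload variable; a sketch,
# not the authors' analysis code.
import pandas as pd
from scipy import stats

def check_and_split(series: pd.Series) -> pd.Series:
    """Report skew/kurtosis z tests, then dichotomize at the median (> median = 1)."""
    clean = series.dropna()
    z_skew, p_skew = stats.skewtest(clean)
    z_kurt, p_kurt = stats.kurtosistest(clean)
    print(f"skew z = {z_skew:.2f} (p = {p_skew:.3f}); kurtosis z = {z_kurt:.2f} (p = {p_kurt:.3f})")
    return (series > clean.median()).astype(int)

# e.g., df["minority_caseload_high"] = check_and_split(df["pct_minority_caseload"])
```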

Given our large sample size, the decision was made to utilize an alpha of 0.001. For all analytic methods utilized, this p value corresponded to an effect size that fell below conventions for a “small” effect size, so this cutoff was deemed reasonable to control for Type I error without missing meaningful patterns in the data. Cohen’s (1988) d was used as the effect size indicator for t-tests, with d = .20 considered a small effect, d = .50 a medium effect, and d = .80 a large effect. For the multiple regressions, R2, or the proportion of variance explained, was used as the effect size indicator. Based on Cohen’s conventions, R2 values of 0.02, 0.13, and 0.30 can be considered small, medium, and large, respectively. Finally, odds ratios and prediction success rates were used to determine the strength of association in the logistic regression analyses.
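For reference, the two-group Cohen’s d used for the t tests can be written out directly; this sketch uses the pooled standard deviation, the standard formulation behind the benchmarks cited above.

```python
# Cohen's d for a two-group comparison (pooled SD); a worked sketch of the
# effect size conventions cited above.
import numpy as np

def cohens_d(group1: np.ndarray, group2: np.ndarray) -> float:
    """Mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
    return (group1.mean() - group2.mean()) / np.sqrt(pooled_var)

# Benchmarks (Cohen, 1988): d of .20/.50/.80 and R-squared of .02/.13/.30
# correspond to small, medium, and large effects, respectively.
```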

Patterns of missing data were examined utilizing the three attitude scale scores, the use of standardized assessment variable, and the independent variables listed in Table 3. The data were determined to be missing completely at random utilizing Little’s (1988) MCAR test (χ2 = 358.03, df = 338, p = .22). Given that the overall rate of missing data was low (5% or fewer missing for all variables), the decision was made to conduct the analyses using pairwise deletion.

Table 3.

Demographic, Professional, and Practice Characteristics as Predictors of Attitudes

Benefit Over Clinical Judgment Psychometric Quality Practicality

Univariate Analysis Multivariate Analysis Univariate Analysis Multivariate Analysis Univariate Analysis Multivariate Analysis
Predictor variable B R2 B B R2 B B R2 B
Demographic Characteristics
 Age (in units of 10) .01 .00 −.01 −.00 .00 −.00 .04 .00 .01
 Gender (female = 1) −.09 .00 −.08 −.05 .00 −.00 −.12* .01 −.06
 Ethnicity (minority = 1) −.07 .00 −.06 −.05 .00 −.08 −.17* .01 −.14

Professional Characteristics
 Highest Degree (doctoral = 1) .17* .02 .15 .25* .06 .12 .31* .08 .22*
 Professional Discipline .05 .09 .14
  Psychologist (1) vs Psychiatrist (0) .36* .47* .19* .24* .42* .45*
  Psychologist (1) vs MFT (0) .35* .27* .39* .31* .44* .30*
  Psychologist (1) vs Social Worker (0) .32* .26* .35* .29* .53* .37*
  Psychologist (1) vs Counselor (0) .26* .16 .30* .22* .43* .25*
  Psychiatrist (1) vs MFT (0) −.02 −.21 .20* .08 .02 −.15
  Psychiatrist (1) vs Social Worker (0) −.04 −.21 .16* .05 .10 −.081
  Psychiatrist (1) vs Counselor (0) −.10 −.31* .11 .02 .01 .20
  MFT (1) vs Social Worker (0) −.03 −.01 −.04 −.03 .09 .07
  MFT (1) vs Counselor (0) −.09 −.11 −.09 −.10 −.01 −.05
  Counselor (1) vs Social Worker (0) .06 .10 .05 .07 .10 .12

Practice Characteristics
 Private practice (1) vs other settings (0) −.12 .01 −.17* −.09 .01 −.13* .01 .0 −.01
 Low income caseload (> 25% = 1) .08 .00 .10 −.00 .00 .00 −.07 .00 .04
 Ethnic minority caseload (> 25% = 1) .03 .00 −.01 .04 .00 .03 −.03 .00 .01
* p < .001

Results

Clinician Attitudes Toward Standardized Assessment

Provider mean item and scale scores are presented in Table 2. Items within each scale are presented in order from highest to lowest mean level of endorsement (1 = Strongly Disagree to 5 = Strongly Agree). To provide an estimate of the strength of providers’ opinions, the table also contains Cohen’s d effect sizes indicating the magnitude of difference between each rating and the neutral rating of 3. Clinicians expressed the strongest opinions about the items in the Psychometric Quality scale, with nearly all items having medium or large effect sizes.

Comparing across scales, providers expressed significantly more positive views on the Psychometric Quality scale (M = 3.78, SD = .50) than on the Practicality [M = 3.19, SD = .56; t(1396) = 44.13, p < .001, d = 1.11] and Benefit Over Clinical Judgment scales [M = 2.95, SD = .68; t(1424) = 54.38, p < .001, d = 1.39]. The difference between the Practicality and Clinical Judgment scales was also significant [t(1403) = 15.19, p < .001, d = 0.39], although the effect size for this difference was small.
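The scale contrasts above can be sketched in code. The exact d formula the authors used for the paired contrasts is not specified here, so the sketch below uses the standard deviation of the paired differences, one common convention, along with the one-sample d against the neutral value of 3 used in Table 2; column names are hypothetical.

```python
# Paired t tests between ASA scale scores and Cohen's d effect sizes; a sketch
# under assumed column names, not the authors' analysis code.
import pandas as pd
from scipy import stats

def paired_comparison(a: pd.Series, b: pd.Series):
    """Paired t test plus Cohen's d computed as mean difference / SD of the differences."""
    pair = pd.concat([a, b], axis=1).dropna()
    t, p = stats.ttest_rel(pair.iloc[:, 0], pair.iloc[:, 1])
    diff = pair.iloc[:, 0] - pair.iloc[:, 1]
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d

def d_vs_neutral(item: pd.Series, neutral: float = 3.0) -> float:
    """One-sample Cohen's d comparing an item or scale mean to the neutral rating of 3."""
    clean = item.dropna()
    return (clean.mean() - neutral) / clean.std(ddof=1)

# e.g., paired_comparison(df["psychometric"], df["practicality"])
```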

Predictors of the Benefit over Clinical Judgment Scale

Two professional characteristics were predictive of scores on the Clinical Judgment scale (see Table 3). Relative to master’s-level clinicians, doctoral-level clinicians perceived more benefit of standardized assessment tools, although this difference was small [F(1, 1437) = 21.70, p < .001, R2 = .015]. In addition, professional discipline explained almost 5% of the variance in ratings on the Clinical Judgment scale [F(4, 1434) = 18.52, p < .001], a small effect size, with psychologists providing higher ratings on this scale than providers from all other disciplines (all p’s < .001). No demographic or practice characteristics were significant predictors of this scale.

When all predictors were entered into a single multiple regression analysis, the only predictor that remained significant from the univariate analyses was professional discipline (Table 3). Psychologists had significantly higher ratings than psychiatrists, MFTs, and social workers, and the comparison between psychiatrists and counselors reached significance, with psychiatrists seeing significantly less benefit over clinical judgment than counselors. In addition, private practice setting, although not significant in the univariate analysis, became a significant predictor in the multivariate analysis; clinicians working in private practice settings saw less benefit to standardized assessment over clinical judgment than clinicians working in other settings. The collective set of predictors explained 8.4% of the variability in the Benefit over Clinical Judgment scale, a small effect [F(11, 1373) = 11.51, p < .001].

Predictors of the Psychometric Quality Scale

Provider degree and professional discipline were also significant univariate predictors of the Psychometric Quality scale (see Table 3). Relative to master’s-level clinicians, doctoral-level clinicians again provided higher ratings, and this difference did exceed the cutoff for a small effect size [F(1, 1426) = 93.33, p < .001, R2 = .061]. Professional discipline explained 9% of the variance, a small effect, in ratings on the Psychometric Quality scale [F(4, 1423) = 35.57, p < .001], with psychologists again providing higher ratings on this scale than all other disciplines (all p’s < .001). In addition, psychiatrists provided higher ratings on this scale than MFTs and social workers (p’s < .001). No demographic or practice characteristics were significant predictors of this scale.

When all predictors were entered into a single multiple regression analysis, professional discipline continued to be a significant predictor of the Psychometric Quality scale, although only the pairwise comparisons between psychologists and the other disciplines remained significant (Table 3). In addition, private practice setting became a significant predictor in this analysis, with private practitioners providing significantly lower ratings than other practitioners. The collective set of predictors explained 11.2% of the variability in the Psychometric Quality scale, a small effect [F(11, 1365) = 16.82, p < .001].

Predictors of the Practicality Scale

Therapist gender, ethnicity, degree and professional discipline were significant univariate predictors of the Practicality scale (see Table 3). Female and ethnic minority therapists provided significantly lower ratings of the practicality of assessment tools, although these effects fell below the cutoff for a small effect [gender: F(1, 1401) = 14.61, p < .001, R2 = .010; ethnicity: F(1, 1398) = 11.15, p < .001, R2 = .008]. Provider degree had a small effect on Practicality ratings, with doctoral-level clinicians again providing higher ratings [F(1, 1402) = 116.59, p < .001, R2 = .077]. Finally, professional discipline explained 14% of the variance, a medium effect, in ratings on the Practicality scale [F(4, 1399) = 56.38, p < .001], with psychologists again providing higher ratings on this scale than all other disciplines (all p’s < .001). No practice characteristics were significant predictors of this scale.

When all predictors were entered into a single multiple regression analysis, degree and professional discipline both continued to be significant predictors of the Practicality scale (Table 3). The collective set of predictors explained 17.3% of the variability in the Practicality scale, a medium effect size [F(11, 1341) = 25.51, p < .001].

Relation of Clinician Attitudes to Use of Standardized Assessment Measures

To examine the predictive validity of the clinician attitude measures, we examined their individual and collective relation to clinician standardized assessment tool use. The results of these logistic regression analyses are presented in Table 4. Individually, more positive ratings on all three dimensions predicted higher likelihoods of standardized assessment use, with the odds ratios for these analyses, which can be interpreted as the increase in one’s odds of being a standardized assessment tool user associated with a 1 point increase on the 5-point attitude scales, ranging from 1.87 for Benefit over Clinical Judgment to 3.22 for Practicality.
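A minimal sketch of these univariate logistic regressions is shown below, with odds ratios obtained by exponentiating the coefficients; the outcome and scale column names are assumptions, not the study’s actual variable names.

```python
# Univariate logistic regression of standardized assessment tool use on each
# attitude scale, as summarized in Table 4; a sketch with hypothetical names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def univariate_logit(df: pd.DataFrame, predictor: str, outcome: str = "sa_user"):
    """Fit outcome ~ predictor and report B, the Wald chi-square, and the odds ratio."""
    data = df[[outcome, predictor]].dropna()
    X = sm.add_constant(data[[predictor]])
    result = sm.Logit(data[outcome], X).fit(disp=0)
    b = result.params[predictor]
    wald = (b / result.bse[predictor]) ** 2
    return b, wald, np.exp(b)

# e.g., for scale in ["benefit", "psychometric", "practicality"]:
#           print(scale, univariate_logit(df, scale))
```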

Table 4.

Attitudes as Predictors of Standardized Assessment Use

Variable Standardized Assessment Tool Use
Univariate Analyses Multivariate Analysis1
B Wald OR B Wald OR
Benefit over Clinical Judgment .63 53.50 1.87* .10 .66 1.10
Psychometric Quality .75 41.56 2.12* −.01 .01 .99
Practicality 1.17 105.12 3.22* 1.20 60.14 3.33*
Note. OR = Odds Ratio.
1 Values presented for the multivariate analysis were obtained from models controlling for therapist personal and professional characteristics.
* p < .001

Next, attitudes were examined simultaneously in a hierarchical logistic regression, controlling for provider demographic, professional, and practice characteristics (i.e., the variables used to predict attitudes; see Table 3) in step 1 and entering the three attitude variables in step 2. Of the attitude scales, only the Practicality scale (OR = 3.33) was a significant, independent predictor of use. The predictive power of the model that included attitudes was significantly better than the step 1 model [χ2(3, N = 1323) = 108.59, p <.001], in which none of the control variables were significant independent predictors of use. However, the classification rates for the model including attitudes (i.e., the ability of the model to correctly classify participants into the use groups) were not high or substantially better than those of the step 1 model. While the success rate for predicting membership in the group that used standardized measures was 82.2% (versus 88.3% for the step 1 model), the success rate for the group that did not was only 40.9% (versus 35.7% for the step 1 model), for an overall success rate of 66.6% (versus 64.7% for the step 1 model).
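The two-step structure of this analysis can be sketched as follows. Column names are hypothetical, predictors are assumed to be numerically coded (e.g., dummy variables), and classification uses a 0.5 cutoff, a common default rather than a stated choice of the authors.

```python
# Hierarchical logistic regression: step 1 enters the control variables, step 2
# adds the three attitude scales; improvement is tested with a likelihood-ratio
# chi-square on 3 df, and classification rates are computed by observed group.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def fit_logit(df: pd.DataFrame, outcome: str, predictors: list[str]):
    X = sm.add_constant(df[predictors])
    return sm.Logit(df[outcome], X).fit(disp=0)

def hierarchical_test(df, outcome, controls, attitudes):
    data = df[[outcome] + controls + attitudes].dropna()
    step1 = fit_logit(data, outcome, controls)
    step2 = fit_logit(data, outcome, controls + attitudes)
    lr = 2 * (step2.llf - step1.llf)                       # likelihood-ratio chi-square
    p = stats.chi2.sf(lr, df=len(attitudes))
    predicted = (step2.predict() >= 0.5).astype(int)       # classify at a 0.5 cutoff
    observed = data[outcome].to_numpy()
    hit_users = (predicted[observed == 1] == 1).mean()     # success rate among users
    hit_nonusers = (predicted[observed == 0] == 0).mean()  # success rate among non-users
    return lr, p, hit_users, hit_nonusers
```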

Discussion

The present study examined attitudes toward standardized assessment tool use in a large, multidisciplinary sample of child clinicians. Given the low rate of standardized assessment tool use among clinicians, and the potential benefit of increased use for improving the quality of clinical practice in general and the use of EBTs specifically, these data provide useful information that can inform future efforts to train clinicians in EBA. On average, clinician attitudes toward these instruments were generally neutral, with clinicians expressing the most positive views about their psychometric properties and less positive views about their practicality or their benefit over using clinical judgment alone.

The most consistent predictors of attitudes were clinician degree and professional discipline. For all three attitude scales, doctoral-level clinicians expressed more positive opinions than master’s-level clinicians and psychologists expressed more positive opinions than psychiatrists, counselors, MFTs and social workers. However, when all predictors were examined simultaneously, degree was no longer significant in most analyses, suggesting that the relation between degree and attitudes may primarily be a function of professional discipline differences.

While therapist demographic, professional, and practice characteristics collectively predicted all three types of attitudes, they were best able to predict which clinicians would hold practical concerns. Some predictors were also found to be specific to the type of attitudes examined. For example, compared to clinicians working in other settings, private practitioners saw less benefit to standardized assessment tools over clinical judgment and provided lower ratings of their psychometric quality. It may be that clinicians who are drawn to private practice associate standardized assessment tools with research and place less value on research than other providers, or that those in agency-based practice receive greater exposure to psychometric theory through colleagues or agency workshops, leading to more positive attitudes. Interestingly, despite potentially working with fewer resources than practitioners working within organizations, private practitioners did not report more practical concerns about standardized assessment.

On the Psychometric Quality Scale, psychiatrists provided significantly higher ratings than MFTs and social workers, suggesting that psychiatrists may have a greater appreciation for the research support for these instruments, despite not valuing them more in other ways. Finally, female and ethnic minority therapists rated the practicality of standardized measures significantly lower than males and Caucasians. Further research, perhaps using focus group or interview methodologies, is needed to understand why these groups of clinicians may perceive more practical challenges to utilizing these measures.

All three scales were separately predictive of clinicians’ self-reported standardized assessment tool use, such that users of standardized assessment tools held more positive attitudes than non-users. The strength of this relationship was strongest for the Practicality Scale, with the odds of someone being a user of standardized assessment tools increasing more than threefold for every one point increase on this scale. The Practicality Scale was also the only independent predictor of use when tested simultaneously with the other attitude scales. These results suggest that addressing clinicians’ practical concerns, either through education about erroneous assumptions or through modifications to the instruments themselves, might be especially important to efforts to increase their use of these measures. However, the relatively low predictive power of these models suggests that future research is needed to identify additional attitudes or other barriers that might further explain clinicians’ standardized assessment tool use.

The present study did have some limitations that warrant consideration in the interpretation of its findings. First, compared to estimates of the characteristics of our population of interest, the sample appears to under-represent women and ethnic minorities. In addition, individuals who were excluded from the sample because they did not provide sufficient data for analysis were significantly more likely to be women and master’s-level clinicians than participants with some complete data. Future research should consider efforts to better represent these groups.

Second, while several predictors of attitudes were found among those examined in this study, significant variability in attitudes remained unexplained, suggesting additional predictors exist. Research on attitudes toward EBTs suggests other factors that warrant consideration in future studies. For example, perceived agency, supervisor, and colleague support of EBTs, clinician quality of training in EBTs, and institutional barriers (e.g., caseload, session length) have been found to be related to clinician attitudes toward EBTs (Jensen-Doss, et al., 2009). Future studies on this topic should include a broader array of predictors to further refine our understanding of clinician attitudes toward standardized assessment tools.

Finally, the conceptual basis of the ASA warrants further investigation. The items are phrased with the assumption that the term “standardized assessment tools” represents a unitary construct toward which clinicians have a single set of attitudes. However, it is also possible that clinicians have different attitudes toward different measures, and their ASA ratings might differ if they were asked to provide opinions about specific tools. While the clinicians in our pilot sample did not raise this as a source of confusion, additional research could examine this issue further. For example, clinicians could be asked to focus on specific measures as they complete the ASA to determine whether this yields variability in attitudes.

However, despite these limitations, this study had several strengths that make it a useful contribution to the literature. It is by far the largest survey to date to assess clinician attitudes toward standardized assessment and the first to systematically examine predictors of these attitudes across a range of disciplines. The Attitudes toward Standardized Assessment Scales, developed for this study, demonstrated good reliability and predictive validity, supporting their use in future studies on this topic. The study utilized the Tailored Design Method (Dillman, 2000) to develop the survey and recruit participants, resulting in a response rate that exceeded the majority of previous surveys on clinician assessment practices. In addition, the rate of missing data was very low, not exceeding 5% for the variables used in this study, and the data that were missing were determined to be missing at random.

Implications for Research, Policy, and Practice

These findings have several implications for EBA training and implementation efforts. They suggest that many clinicians, particularly non-psychologists, remain skeptical about the benefits of using standardized assessment tools over clinical judgment, do not seem to value their psychometric properties, and find them impractical. The fact that non-psychologists, who are most likely to be serving as the front-line clinicians in many mental health agencies, are less likely to have positive attitudes suggests that, even when agencies have administrative and supervisory support for implementing EBA, these efforts may not be welcomed by the clinicians themselves. Training should be designed to directly address these concerns, as all three types of attitudes appear to be predictive of standardized assessment tool use.

The strong relationship between use and practical concerns seems to warrant particular attention. While some of these concerns, such as concerns about interference with rapport or beliefs about not having enough training, can be addressed through training, others, such as the financial and time costs of measures, need to be addressed through EBA implementation design, funding policies, or additional research. In the implementation design phase, an emphasis could be placed on selecting brief measures, or measures that are available at little or no cost. Fortunately, several such measures exist with strong psychometric properties, including the Strengths and Difficulties Questionnaire (Goodman, Ford, Simmons, Gatward, & Meltzer, 2000), the Children’s Revised Impact of Events Scale (Perrin, Meiser-Stedman, & Smith, 2005), and the Peabody Treatment Progress Battery (Bickman, et al., 2007). Where funding policies place pressures on clinicians to engage in overly brief assessments or do not reimburse for time spent on activities like progress monitoring, changes to these policies might also help address some of these practical concerns.

In future research to address clinicians’ practical concerns, additional emphasis is needed on the development of efficient measures. For example, while the structured diagnostic interview is generally considered the gold standard for diagnosis, these instruments can be quite burdensome in terms of time and training (Jensen Doss, 2005). Current research on efficient diagnostic screening (e.g., Kahana, Youngstrom, Findling, & Calabrese, 2003; Warnick, Weersing, Scahill, & Woolston, 2009) has the potential to benefit clinicians by identifying less time-consuming ways to help them engage in more accurate assessment. Finally, some practical concerns about standardized assessment tools can be addressed through additional measure development work. For example, additional work is needed to develop culturally sensitive assessment tools (Hunsley & Mash, 2007).

Efforts to improve the quality of mental health services through the use of EBP are in need of additional focus on EBA. The present findings suggest that, similar to concerns about treatment manuals (e.g., Addis & Krasnow, 2000), a core component of EBTs, clinicians have concerns about standardized assessment tools, a core component of EBA. However, through well-designed training efforts and additional work developing and refining measures, many of these concerns seem addressable. The extent to which these concerns are addressed may determine whether EBA will become more widely integrated into clinical practice.

Acknowledgments

This project was supported in part through R03 MH077752 from the National Institute of Mental Health to Kristin Hawley. The authors would like to thank Jonathan Cook, Brian Doss, Marcia Kearns, and Leticia Osterberg for their assistance with this project and helpful feedback on previous drafts of this manuscript.

Contributor Information

Amanda Jensen-Doss, Department of Psychology, University of Miami.

Kristin M. Hawley, Department of Psychological Sciences, University of Missouri, Columbia

References

  1. Aarons GA, Woodbridge M, Carmazzi A. Examining leadership, organizational climate and service quality in a children’s system of care. A System of Care for Children’s Mental Health: Expanding the Research Base; Paper presented at the 15th Annual Research Conference; Tampa, FL. 2003. [Google Scholar]
  2. Addis ME, Krasnow AD. A national survey of practicing psychologists’ attitudes toward psychotherapy treatment manuals. Journal of Consulting and Clinical Psychology. 2000;68(2):331–339. doi: 10.1037//0022-006x.68.2.331. [DOI] [PubMed] [Google Scholar]
  3. Ajzen I. The Theory of Planned Behavior. Organizational Behavior & Human Decision Processes. 1991;50:179–211. [Google Scholar]
  4. Anderson DA, Paulosky CA. A survey of the use of assessment instruments by eating disorder professionals in clinical practice. Eating and Weight Disorders. 2004;9(3):238–241. doi: 10.1007/BF03325075. [DOI] [PubMed] [Google Scholar]
  5. Angold A, Fisher PW. Interviewer-based interviews. In: Shaffer D, Lucas CP, Richters JE, editors. Diagnostic assessment in child and adolescent psychopathology. New York, NY: Guilford Press; 1999. pp. 34–64. [Google Scholar]
  6. APA Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. American Psychologist. 2006;61(4):271–285. doi: 10.1037/0003-066X.61.4.271. [DOI] [PubMed] [Google Scholar]
  7. Arbuckle JL. Amos 17.0 User’s Guide. Spring House, PA: Amos Development Corporation; 2008. [Google Scholar]
  8. Armitage CJ, Conner M. Efficacy of the theory of planned behaviour: A meta-analytic review. British Journal of Social Psychology. 2001;40(4):471–499. doi: 10.1348/014466601164939. [DOI] [PubMed] [Google Scholar]
  9. Basco MR, Bostic JQ, Davies D, Rush AJ, Witte B, Hendrickse W, et al. Methods to improve diagnostic accuracy in a community mental health setting. American Journal of Psychiatry. 2000;157(10):1599–1605. doi: 10.1176/appi.ajp.157.10.1599. [DOI] [PubMed] [Google Scholar]
  10. Bickman L, Riemer M, Lambert EW, Kelley SD, Breda C, Dew SE, et al. Nashville, TN: Vanderbilt University; 2007. Manual of the Peabody Treatment Progress Battery [Electronic version] http://peabody.vanderbilt.edu/ptpb. [Google Scholar]
  11. Cashel ML. Child and adolescent psychological assessment: Current clinical practices and the impact of managed care. Professional Psychology: Research and Practice. 2002;33(5):446–453. [Google Scholar]
  12. Chorpita BF, Donkervoet C. Implementation of the Felix Consent Decree in Hawaii: The Impact of Policy and Practice Development Efforts on Service Delivery. In: Steele RG, Roberts MC, editors. Handbook of mental health services for children, adolescents, and families. New York, NY US: Kluwer Academic/Plenum Publishers; 2005. pp. 317–332. [Google Scholar]
  13. Dillman DA. Mail and Internet Surveys: The Tailored Design Method. 2. New York: John Wiley & Sons; 2000. [Google Scholar]
  14. Frauenhoffer D, Ross MJ, Gfeller J, Searight HR, Piotrowski C. Psychological test usage among licensed mental health practitioners: A multidisciplinary survey. Journal of Psychological Practice. 1998;4(1):28–33. [Google Scholar]
  15. Garb HN. Studying the clinician: Judgment research and psychological assessment. Washington, DC: American Psychological Association; 1998. [Google Scholar]
  16. Garb HN. Clinical judgment and decision making. Annual Review of Clinical Psychology. 2005;1(1):67–89. doi: 10.1146/annurev.clinpsy.1.102803.143810. [DOI] [PubMed] [Google Scholar]
  17. Garland AF, Aarons GA, Hawley KM, Hough RL. Relationship of Youth Satisfaction With Mental Health Services and Changes in Symptoms and Functioning. Psychiatric Services. 2003;54(11):1544–1546. doi: 10.1176/appi.ps.54.11.1544. [DOI] [PubMed] [Google Scholar]
  18. Garland AF, Kruse M, Aarons GA. Clinicians and outcome measurement: What’s the use? The Journal of Behavioral Health Services & Research. 2003;30(4):393–405. doi: 10.1007/BF02287427. [DOI] [PubMed] [Google Scholar]
  19. Gilbody SM, House AO, Sheldon TA. Psychiatrists in the UK do not use outcomes measures: National survey. British Journal of Psychiatry. 2002;180(2):101–103. doi: 10.1192/bjp.180.2.101. [DOI] [PubMed] [Google Scholar]
  20. Godin G, Kok G. The Theory of Planned Behavior: A Review of Its Applications to Health-Related Behaviors. American Journal of Health Promotion. 1996;11:87–98. doi: 10.4278/0890-1171-11.2.87. [DOI] [PubMed] [Google Scholar]
  21. Goodman R, Ford T, Simmons H, Gatward R, Meltzer H. Using the Strengths and Difficulties Questionnaire (SDQ) to screen for child psychiatric disorders in a community sample. British Journal of Psychiatry. 2000;177:534–539. doi: 10.1192/bjp.177.6.534. [DOI] [PubMed] [Google Scholar]
  22. Handler MW, DuPaul GJ. Assessment of ADHD: Differences across psychology specialty areas. Journal of Attention Disorders. 2005;9(2):402–412. doi: 10.1177/1087054705278762. [DOI] [PubMed] [Google Scholar]
  23. Hannan C, Lambert MJ, Harmon C, Nielsen SL, Smart DW, Shimokawa K, et al. A lab test and algorithms for identifying cases at risk for treatment failure. Journal of Clinical Psychology. 2005;61:155–163. doi: 10.1002/jclp.20108. [DOI] [PubMed] [Google Scholar]
  24. Hatfield DR, Ogles BM. The Use of Outcome Measures by Psychologists in Clinical Practice. Professional Psychology: Research and Practice. 2004;35(5):485–491. [Google Scholar]
  25. Hatfield DR, Ogles BM. Why some clinicians use outcome measures and others do not. Administration and Policy in Mental Health and Mental Health Services Research. 2007;34(3):283–291. doi: 10.1007/s10488-006-0110-y. [DOI] [PubMed] [Google Scholar]
  26. Hawley KM, Cook JR, Jensen-Doss A. Do noncontingent incentives increase survey response rates among mental health providers? A randomized trial comparison. Administration and Policy in Mental Health. 2009;36(5):343–348. doi: 10.1007/s10488-009-0225-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Hunsley J, Mash EJ. Evidence-based assessment. Annual Review of Clinical Psychology. 2007;3:29–51. doi: 10.1146/annurev.clinpsy.3.022806.091419. [DOI] [PubMed] [Google Scholar]
  28. Jensen-Doss A, Hawley KM, Lopez M, Osterberg LD. Using evidence-based treatments: The experiences of youth providers working under a mandate. Professional Psychology: Research and Practice. 2009;40(4):417–424. [Google Scholar]
  29. Jensen-Doss A, Weisz JR. Diagnostic agreement predicts treatment process and outcomes in youth mental health clinics. Journal of Consulting and Clinical Psychology. 2008;76(5):711–722. doi: 10.1037/0022-006X.76.5.711. [DOI] [PubMed] [Google Scholar]
  30. Jensen AL, Weisz JR. Assessing match and mismatch between practitioner-generated and standardized interview-generated diagnoses for clinic-referred children and adolescents. Journal of Consulting and Clinical Psychology. 2002;70(1):158–168. [PubMed] [Google Scholar]
  31. Jensen Doss A. Evidence-based diagnosis: Incorporating diagnostic instruments into clinical practice. Journal of the American Academy of Child & Adolescent Psychiatry. 2005;44(9):947–952. doi: 10.1097/01.chi.0000171903.16323.92. [DOI] [PubMed] [Google Scholar]
  32. Jewell J, Handwerk M, Almquist J, Lucas C. Comparing the validity of clinician-generated diagnosis of conduct disorder to the Diagnostic Interview Schedule for Children. Journal of Clinical Child and Adolescent Psychology. 2004;33(3):536–546. doi: 10.1207/s15374424jccp3303_11. [DOI] [PubMed] [Google Scholar]
  33. Kahana SY, Youngstrom EA, Findling RL, Calabrese JR. Employing Parent, Teacher, and Youth Self-Report Checklists in Identifying Pediatric Bipolar Spectrum Disorders: An Examination of Diagnostic Accuracy and Clinical Utility. Journal Of Child And Adolescent Psychopharmacology. 2003;13(4):471–488. doi: 10.1089/104454603322724869. [DOI] [PubMed] [Google Scholar]
  34. Lambert MJ, Whipple JL, Hawkins EJ, Vermeersch DA, Nielsen SL, Smart DW. Is It Time for Clinicians to Routinely Track Patient Outcome? A Meta-Analysis. Clinical Psychology: Science and Practice. 2003;10(3):288–301. [Google Scholar]
  35. Little RJA. A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association. 1988;83:1198–1202. [Google Scholar]
  36. Love SM, Koob JJ, Hill LE. Meeting the challenges of evidence-based practice: Can mental health therapists evaluate their practice? Brief Treatment and Crisis Intervention. 2007;7(3):184–193. [Google Scholar]
  37. Mash EJ, Hunsley J. Evidence-Based Assessment of Child and Adolescent Disorders: Issues and Challenges. Journal of Clinical Child and Adolescent Psychology. 2005;34(3):362–379. doi: 10.1207/s15374424jccp3403_1. [DOI] [PubMed] [Google Scholar]
  38. National Institute of Mental Health. The National Institute of Mental Health Strategic Plan. 2010. Retrieved February 17, 2010, from http://www.nimh.nih.gov/about/strategic-planning-reports/index.shtml.
  39. Palmiter DJ., Jr A Survey of the Assessment Practices of Child and Adolescent Clinicians. American Journal of Orthopsychiatry. 2004;74(2):122–128. doi: 10.1037/0002-9432.74.2.122. [DOI] [PubMed] [Google Scholar]
  40. Perrin S, Meiser-Stedman R, Smith P. The children’s revised impact of event scale (CRIES): Validity as a screening instrument for PTSD. Behavioural and Cognitive Psychotherapy. 2005;33(4):487–498. [Google Scholar]
  41. Pogge DL, Wayland-Smith D, Zaccario M, Borgaro S, Stokes J, Harvey PD. Diagnosis of manic episodes in adolescent inpatients: Structured diagnostic procedures compared to clinical chart diagnoses. Psychiatry Research. 2001;101(1):47–54. doi: 10.1016/s0165-1781(00)00248-1. [DOI] [PubMed] [Google Scholar]
  42. Rettew DC, Lynch AD, Achenbach TM, Dumenci L, Ivanova MY. Meta-analyses of agreement between diagnoses made from clinical evaluations and standardized diagnostic interviews. International Journal of Methods in Psychiatric Research. 2009;18(3):169–184. doi: 10.1002/mpr.289. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Schacht RL, Dimidjian S, George WH, Berns SB. Domestic violence assessment procedures among couple therapists. Journal of Marital & Family Therapy. 2009;35(1):47–59. doi: 10.1111/j.1752-0606.2008.00095.x. [DOI] [PubMed] [Google Scholar]
  44. Substance Abuse & Mental Health Services Administration. Mental Health System Transformation. 2010. Retrieved February 17, 2010, from http://www.samhsa.gov/Matrix/matrix_mh.aspx.
  45. Tenney NH, Schotte CKW, Denys DAJP, van Megen HJGM, Westenberg HGM. Assessment of DSM-IV personality disorders in obsessive-compulsive disorder: Comparison of clinical diagnosis, self-report questionnaire, and semi-structured interview. Journal of Personality Disorders. 2003;17(6):550–561. doi: 10.1521/pedi.17.6.550.25352. [DOI] [PubMed] [Google Scholar]
  46. U.S. Census Bureau. Census 2000 Summary File 4 United States. 2003. [Google Scholar]
  47. Warnick EM, Weersing VR, Scahill L, Woolston JL. Selecting measures for use in child mental health services: A scorecard approach. Administration and Policy in Mental Health and Mental Health Services Research. 2009;36(2):112–122. doi: 10.1007/s10488-008-0203-x. [DOI] [PubMed] [Google Scholar]
  48. Weisz JR. Studying clinic-based child mental health care: Unpublished dataset. 2010. [Google Scholar]
  49. Weisz JR, Addis ME. The Research-Practice Tango and Other Choreographic Challenges: Using and Testing Evidence-Based Psychotherapies in Clinical Care Settings. In: Goodheart CD, Kazdin AE, Sternberg RJ, editors. Evidence-based psychotherapy: Where practice and research meet. Washington, DC US: American Psychological Association; 2006. pp. 179–206. [Google Scholar]
