Author manuscript; available in PMC: 2018 Sep 7.
Published in final edited form as: Assessment. 2016 Jul 28;24(2):210–221. doi: 10.1177/1073191115604353

Assessment Practices of Child Clinicians: Results From a National Survey

Jonathan R Cook 1, Estee M Hausman 2, Amanda Jensen-Doss 3, Kristin M Hawley 2
PMCID: PMC6127860  NIHMSID: NIHMS983634  PMID: 26341574

Abstract

Assessment is an integral component of treatment. However, prior surveys indicate clinicians may not use standardized assessment strategies. We surveyed 1,510 clinicians and used multivariate analysis of variance to explore group differences in specific measure use. Clinicians used unstandardized measures more frequently than standardized measures, although psychologists used standardized measures more frequently than nonpsychologists. We also used latent profile analysis to classify clinicians based on their overall approach to assessment and examined associations between clinician-level variables and assessment class or profile membership. A four-profile model best fit the data. The largest profile consisted of clinicians who primarily used unstandardized assessments (76.7%), followed by broad-spectrum assessors who regularly use both standardized and unstandardized assessment (11.9%), and two smaller profiles of minimal (6.0%) and selective assessors (5.5%). Compared with broad-spectrum assessors, unstandardized and minimal assessors were less likely to report having adequate standardized measures training. Implications for clinical practice and training are discussed.

Keywords: assessment, standardized measures, clinician survey


An estimated one in eight U.S. youths suffers from significant mental illness, with perhaps as few as 10% of those in need receiving quality mental health care (Costello, Egger, & Angold, 2005). Such discouraging findings have led to massive efforts to improve the quality of mental health services in recent decades (e.g., APA Presidential Task Force on Evidence-Based Practice, 2006; National Institute of Mental Health, 2008). A primary emphasis of these efforts has been on the implementation of evidence-based treatments in everyday clinical practice (e.g., Chambers, Ringeisen, & Hickman, 2005). Despite this attention to evidence-based treatments, there has been relatively little focus on the importance of evidence-based assessment in the treatment process (Hunsley & Mash, 2007; Mash & Hunsley, 2005).

Failure to attend to evidence-based assessment can undermine the quality of mental health care for a multitude of reasons (Jensen & Weisz, 2002; Jensen-Doss & Hawley, 2010). When clinical decisions are guided by unreliable or invalid assessment, a number of clinical missteps may occur: misidentifying the primary disorder, missing the presence of significant co-occurring disorders, selecting a treatment approach or intervention inappropriate for the child’s symptom profile, and misinterpreting treatment progress. Indeed, several studies have found clinician diagnoses based on unstructured interviews to be incomplete, inaccurate, and unrelated to other (generally standardized) measures of impairment (e.g., Rettew, Doyle, Achenbach, Dumenci, & Ivanova, 2009). This mismatch can complicate the task of selecting and implementing an appropriate evidence-based treatment. After all, evidence-based treatments are designed and tested for a specific population and problem (often determined based on standardized assessment); they may not work as well, or at all, with another population or problem.

Furthermore, even when clinicians are not implementing a specific evidence-based treatment, there is reason to invest in reliable and valid assessment. Some prior research on usual care has found that when initial clinician diagnoses match those generated by standardized interview, client engagement and outcomes are improved (e.g., Jensen-Doss & Weisz, 2008). Incorporating ongoing standardized assessment into everyday practice has also been associated with improved outcomes for both children and adults (e.g., Bickman, Kelley, Breda, Vides de Andrade, & Riemer, 2011; Lambert et al., 2003). Some evidence-based treatment manuals for specific problems explicitly call for this type of ongoing standardized assessment to monitor client progress (e.g., Parent–Child Interaction Therapy; Eyberg & Funderburk, 2011). Furthermore, transdiagnostic approaches (e.g., modular treatments, such as Behavioral and Affective Skills in Coping; Weisz & Bearman, 2012; and Modular Approach to Therapy for Children with Anxiety, Depression, Trauma, or Conduct Problems; Chorpita & Weisz, 2009) often rely on high quality assessment data to guide decision making through the use of flowcharts and decision trees that help clinicians decide which treatment modules to administer, and in what sequence, for an individual client.

How often are standardized assessments integrated into usual practice? Unfortunately, prior surveys have consistently found that standardized measures (e.g., standardized diagnostic interviews, standardized symptom checklists) are rarely used in routine practice (e.g., Cashel, 2002; Gilbody, House, & Sheldon, 2002; Hatfield & Ogles, 2004; Ionita & Fitzpatrick, 2014). Given the flexibility and affordability of informal assessment approaches like unstructured clinical interviews and informal observations of client behavior, it is not surprising that these same surveys have found unstandardized assessments are used quite routinely.

What is less clear from existing research is whether meaningful variation exists among child clinicians in their assessment practices. Do all child clinicians rely on unstandardized approaches to assessment? Are there some child clinicians who regularly use standardized measures? Here, the research is more limited, but available data do suggest that some clinicians may be more or less likely to take an evidence-based approach to assessment. For example, Hatfield and Ogles (2004) found that clinicians who do not use outcome assessments tend to report more negative attitudes toward such measures. Among clinicians mandated to use standardized progress measures, Garland, Kruse, and Aarons (2003) found that most clinicians reported negative attitudes toward these measures. Furthermore, although the vast majority of these clinicians administered the measures as required, only 8% actually used the scores to inform their clinical work. Some of our recent work (Jensen-Doss & Hawley, 2010) also indicates a correlation between clinician attitudes about standardized measures and use of those measures.

In addition to attitudes, prior training in the administration, scoring, and interpretation of standardized measures may also influence use. In their survey, Hatfield and Ogles (2004) found that clinicians with more training in standardized outcome measures were significantly more likely to report using them than clinicians with less training. Similarly, Lyon, Charlesworth-Attie, Vander Stoep, and McCauley (2011) found an increase in use following training in standardized progress monitoring. Training may be a necessary first step to implementing standardized measures in practice, and graduate school is typically the most concentrated period of instruction for developing this skillset during a formative stage of the clinician’s growth. Given the focus on assessment training in psychology graduate programs and internships (e.g., Kazdin, 2008, but, see also Mours, Campbell, Gathercoal, & Peterson, 2009), psychologists may be more likely to use standardized measures than other mental health disciplines, and at least one survey supports this assumption (Palmiter Jr., 2004). However, despite this reputation, available data indicate that psychology graduate-level training in standardized assessment may be incomplete and insufficient (Krishnamurthy et al., 2004; Mours et al., 2009; Pidano & Whitcomb, 2012). It is also possible that younger clinicians with more recent training may have had more exposure to, and additional training in, standardized measures and may be more likely to use them, regardless of discipline (Overington, Fitzpatrick, Hunsley, & Drapeau, 2015). One reason may be an increased emphasis on assessment due to recent trends in training across mental health disciplines toward competency-based education, including competency in assessment (e.g., American Association for Marriage and Family Therapy, 2004; Bieschke et al., 2009; Council on Social Work Education, 2012).

In the current study, we sought to replicate and extend previous findings on the assessment practices of child clinicians using a large, nationally representative sample drawn from an existing survey of clinicians.1 Our goals in this study were (a) to determine the overall frequency with which unstandardized and standardized measures are used, (b) to identify differences in assessment measure use across professional disciplines, and (c) to classify clinicians using latent profile analysis (LPA) based on self-reported unstandardized and standardized measure use. LPA is a model-based analytic approach that provides a way of grouping individuals into categories on the basis of shared characteristics that distinguish members of one group from members of another group. Class assignment is determined through fit statistics and tests of significance (Herman, Ostrander, Walkup, Silva, & March, 2007). Additionally, LPA permits the inclusion of covariates in models to help predict group membership (Walrath et al., 2004). LPA is well suited to addressing the third goal of this study because it can model distinct profiles based on overall assessment use (i.e., both standardized and unstandardized measures) rather than just attending to group differences for a single, specific measure. Although examination of group mean data may highlight how frequently specific measures are used, an LPA approach may better illuminate how individual clinicians fit distinct profiles of overall assessment use. For example, some clinicians may primarily use unstandardized measures, whereas others may use a combination of both standardized and unstandardized assessments.

In the present study, we first examined the frequency of unstandardized and standardized measure use for the entire sample. Based on previous research, we expected that symptom checklists would be the most frequently used type of standardized measure, although we hypothesized that all standardized measures, including symptom checklists, would be used less frequently than unstandardized measures (Bickman, Lambert, Andrade, & Penaloza, 2000; Cashel, 2002; Palmiter Jr., 2004). We also hypothesized that psychologists would use standardized measures more frequently than nonpsychologists (Palmiter Jr., 2004). Finally, we hypothesized that an LPA approach would identify a large class of clinicians who primarily use unstandardized measures (i.e., unstandardized assessors) and a much smaller class of clinicians who use standardized measures frequently (Bickman et al., 2000; Cashel, 2002; Hatfield & Ogles, 2004; Rosenberg & Beck, 1986). We further hypothesized that, compared with those who frequently used standardized measures, unstandardized assessors would be more likely to be older, to be nonpsychologists, and to hold negative attitudes toward standardized assessment. Last, we hypothesized that unstandardized assessors would be less likely to have training in standardized measures compared with those who frequently used standardized measures.

Method

Participants

Participants were 1,510 child-serving counselors (258; 17.1%), marriage and family therapists (MFTs; 217; 14.4%), social workers (286; 18.9%), psychologists (451; 29.9%), and psychiatrists (298; 19.7%) who reported assessing and treating youths in a mailed survey about youth mental health practices (see Jensen-Doss & Hawley, 2010, 2011). The sample was primarily female (62.9%) and Caucasian (90.8%) and was nearly evenly divided between master's-level (681; 45.2%) and doctoral-level clinicians (827; 54.8%). See Table 1 for participant characteristics.

Table 1.

Provider Characteristics.

Female^a, % (N) 62.9 (948)
Age^b, M (SD) 52.7 (10.0)
Ethnicity^c, % (N)
 White/Caucasian (non-Hispanic) 90.8 (1,371)
 Hispanic/Latino(a) 2.7 (41)
 Black/African American 2.5 (38)
 Asian/Pacific Islander 2.6 (39)
 Mixed/other 1.4 (21)
Professional discipline, % (N)
 Counselor 17.1 (258)
 Marriage and family therapist 14.4 (217)
 Social worker 18.9 (286)
 Psychologist 29.9 (451)
 Psychiatrist 19.7 (298)
Highest degree^d, % (N)
 Master's 45.2 (681)
 Doctoral 54.8 (827)
Frequency of use of assessment tools^e, M (SD)
 Unstructured clinical interviews^f 4.6 (0.9)
 Informal behavioral observations^g 4.6 (0.7)
 Informal mental status exams^h 4.5 (0.9)
 Standardized checklists^i 3.2 (1.4)
 Formal clinician ratings^j 2.3 (1.4)
 Standardized diagnostic interviews^k 2.0 (1.2)
 Formal mental status exams^l 1.9 (1.2)
 Formal observational coding systems^m 1.5 (0.9)
Attitudes Toward Standardized Assessment Scales^n, M (SD)
 Benefit Over Clinical Judgment subscale^o 2.9 (0.7)
   Standardized measures don't tell me anything I can't learn from just talking to children and their families^p 3.5 (1.1)
   Using clinical judgment to diagnose children is superior to using standardized assessment measures^p 2.8 (1.0)
   Standardized measures provide more useful information than other informal assessments 2.5 (0.8)
   Standardized measures don't capture what's really going on with children and their families^p 2.9 (0.9)
   Clinical problems are too complex to be captured by a standardized measure^p 3.0 (1.0)
 Psychometric Quality subscale^q 3.8 (0.5)
   Standardized measures help with accurate diagnosis 3.9 (0.8)
   Most standardized measures aren't helpful because they don't map on to DSM diagnostic criteria^p 3.5 (0.8)
   Standardized measures overdiagnose psychopathology^p 3.2 (0.9)
   Standardized measures help detect diagnostic comorbidity (presence of multiple diagnoses) 3.7 (0.7)
   Standardized measures help with differential diagnosis (deciding between two diagnoses) 3.6 (0.8)
   It is not necessary for assessment measures to be standardized in research studies^p 4.3 (0.8)
   Clinicians should use assessments with demonstrated reliability and validity 4.2 (0.8)
 Practicality subscale^r 3.2 (0.6)
   Standardized measures can efficiently gather information from children, parents, teachers 3.9 (0.8)
   Standardized measures take too long to administer and score^p 3.0 (1.1)
   Standardized measures aren't worth the time spent administering, scoring, and interpreting the results^p 3.4 (1.1)
   Standardized diagnostic interviews interfere with establishing rapport during an intake^p 3.0 (1.1)
   Standardized assessments are readily available in the language my children and their families speak 3.3 (1.1)
   There are few standardized measures valid for ethnic minority children and their families^p 2.7 (0.8)
   Completing a standardized measure is too much of a burden for children and their families^p 3.3 (0.9)
   Copyrighted standardized measures are affordable for use in practice 2.7 (1.0)
   Standardized symptom checklists are too difficult for many children and their families to read or understand^p 3.3 (0.9)
   I have adequate training in the use of standardized measures 3.2 (1.2)

Note. DSM = Diagnostic and Statistical Manual of Mental Disorders.

^a N = 1,507. ^b N = 1,484. ^c N = 1,504. ^d N = 1,508.

^e Items were measured using a 5-point Likert-type scale: 1 = never/almost never; 2 = rarely; 3 = sometimes; 4 = often; 5 = all/most of the time.

^f N = 1,466. ^g N = 1,456. ^h N = 1,498. ^i N = 1,499. ^j N = 1,486. ^k N = 1,458. ^l N = 1,473. ^m N = 1,461.

^n Items were measured using a 5-point Likert-type scale: 1 = strongly disagree; 2 = disagree; 3 = neutral; 4 = agree; 5 = strongly agree.

^o N = 1,442. ^p Item was reverse scored before it was included in the scale score. ^q N = 1,456. ^r N = 1,452.

Procedure

We constructed a survey instrument using the Tailored Design Method (Dillman, 1999). Under this approach, the survey was designed to minimize participant costs (e.g., time and effort spent responding) and to maximize the likelihood of response through a variety of methods, including a noncontingent incentive for all participants (a $2 bill), personalized follow-up mailings to nonrespondents, and the use of a pilot sample and structured focus groups to refine the survey and mailing procedures (see Hawley, Cook, & Jensen-Doss, 2009).

The final survey was sent to 5,000 clinicians from youth-serving mental health disciplines. We randomly selected 1,000 members from the membership rosters of each of the five largest U.S. practice guilds for child mental health clinicians: the American Counseling Association, the American Association for Marriage and Family Therapy, the National Association of Social Workers, the American Psychological Association, and the American Academy of Child and Adolescent Psychiatry. Clinicians received up to five separate mailings. A total of 2,863 returned surveys and 347 undeliverable surveys yielded unadjusted and adjusted response rates of 57.3% and 61.5%, respectively. Response rates varied by discipline: 36.6% of psychiatrists (N = 366), 56.7% of psychologists (N = 567), 60.9% of MFTs (N = 609), 62.5% of counselors (N = 625), and 69.6% of social workers (N = 696) returned completed surveys. Nearly 53% of respondents reported that they conduct assessment with children (N = 1,510), whereas 47% reported that they do not (N = 1,353). The broader survey covered a range of treatment and assessment issues; for the purposes of this article, analyses focused on the 1,510 participants who indicated they conduct assessment with children. Provider characteristics for child-serving clinicians who conduct assessment are presented in Table 1. Because participants were selected based on guild membership, information on nonrespondents is limited to the clinician's guild affiliation, name, and mailing address.
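The two response rates reported above follow directly from the mailing counts; a minimal sketch (all figures taken from the text):

```python
# Response-rate arithmetic from the figures reported above (a sketch).
mailed = 5000          # surveys sent
returned = 2863        # completed surveys returned
undeliverable = 347    # surveys that never reached a clinician

# The unadjusted rate counts every mailed survey in the denominator;
# the adjusted rate excludes undeliverable surveys.
unadjusted = returned / mailed
adjusted = returned / (mailed - undeliverable)

print(f"{unadjusted:.1%}")  # 57.3%
print(f"{adjusted:.1%}")    # 61.5%
```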

Measures

Standardized and Unstandardized Assessment Use.

Clinicians reported how often they use various types of assessment tools, using a 5-point Likert-type scale from 1 (never or almost never) to 5 (all or most of the time). Standardized assessment tools included (a) standardized checklists for child/family symptoms or functioning (paper-and-pencil measure completed by child, parent, teacher, or others with standardized scoring; e.g., Child Behavior Checklist), (b) formal clinician ratings of child/family symptoms or functioning (standardized rating completed by clinicians after an interview; e.g., Child and Adolescent Functional Assessment Scale or Children’s Global Assessment Scale), (c) standardized diagnostic interviews for child diagnosis (e.g., Schedule for Affective Disorders and Schizophrenia for School-Age Children), (d) formal mental status exam of child (e.g., Folstein Mini-Mental Status Examination), and (e) formal, standardized observational coding system of child/family functioning (e.g., Dyadic Parent–Child Interaction Coding System II), whereas unstandardized assessment included (f) informal behavioral observation of child/family functioning, (g) unstructured clinical or intake interviews, and (h) unstructured or informal mental status exam of child (e.g., assessment of appearance, mood, speech, thought patterns). A definition and examples of each type of assessment tool accompanied each item.

Professional Characteristics.

Clinicians reported their sex, age, ethnicity, highest degree (master’s or doctoral), and professional discipline (counseling, marriage and family therapy, psychiatry, psychology, and social work).

Attitudes Toward Standardized Assessment Scales (ASA; Jensen-Doss & Hawley, 2010).

The ASA is a 22-item scale measuring clinician attitudes about using standardized assessment measures. The ASA consists of three subscales: Benefit Over Clinical Judgment, Psychometric Quality, and Practicality. Items are rated on a 5-point Likert-type scale, ranging from 1 (strongly disagree) to 5 (strongly agree). The psychometric properties of the ASA have been well supported (Jensen-Doss & Hawley, 2010; Lyon, Dorsey, Pullmann, Silbaugh-Cowdin, & Berliner, 2015). Internal consistency of the ASA subscales was good in the current sample (α = .72-.75).

Results

Group Means for Unstandardized and Standardized Assessment Measures

Overall group means were calculated for the 1,510 clinicians who reported on the frequency with which they used eight types of unstandardized and standardized assessment measures. All forms of unstandardized measures were used “often” to “almost always” (informal behavioral observations, M = 4.62, SD = 0.71; unstructured clinical interviews, M = 4.60, SD = 0.84; informal mental status exams, M = 4.57, SD = 0.82). Standardized checklists were used “sometimes” (M = 3.22, SD = 1.41), whereas all other forms of standardized measures were used “almost never” to “rarely” (formal clinician ratings, M = 2.30, SD = 1.41; standardized diagnostic interviews, M = 1.99, SD = 1.21; formal mental status exams, M = 1.81, SD = 1.14; formal observational coding systems, M = 1.48, SD = 0.86). Mean frequency of assessment use by clinician discipline is presented in Table 2.

Table 2.

Mean Frequency (SD) of Unstandardized and Standardized Assessment Use by Clinician Discipline.

The first three measure columns are unstandardized assessment tools; the remaining five are standardized assessment tools.

Discipline | Unstructured clinical interview | Informal mental status exam | Informal behavioral observation | Standardized diagnostic interview | Formal mental status exam | Formal observational coding system | Standardized checklist | Formal clinician rating
Counselors (n = 221) | 4.40 (0.96) | 4.41 (0.98) | 4.51 (0.82) | 1.91 (1.18) | 1.62 (1.10) | 1.37 (0.72) | 2.83 (1.40) | 2.11 (1.38)
Marriage and family therapists (n = 186) | 4.61 (0.74) | 4.54 (0.81) | 4.67 (0.54) | 1.98 (1.23) | 1.62 (1.00) | 1.52 (0.85) | 2.92 (1.48) | 2.30 (1.43)
Social workers (n = 251) | 4.46 (0.96) | 4.41 (0.99) | 4.57 (0.77) | 2.03 (1.25) | 1.81 (1.21) | 1.55 (0.91) | 2.91 (1.47) | 2.42 (1.48)
Psychologists (n = 414) | 4.68 (0.75) | 4.57 (0.75) | 4.65 (0.68) | 2.15 (1.27) | 1.81 (1.08) | 1.56 (0.97) | 3.68 (1.28) | 2.28 (1.48)
Psychiatrists (n = 267) | 4.76 (0.84) | 4.86 (0.44) | 4.68 (0.71) | 1.79 (1.03) | 2.11 (1.22) | 1.36 (0.73) | 3.30 (1.28) | 2.37 (1.40)

Note. N = 1,339. Items were measured using a 5-point Likert-type scale: 1 = never/almost never; 2 = rarely; 3 = sometimes; 4 = often; 5 = all/most of the time.

Professional Discipline Differences in Unstandardized and Standardized Assessment Use

A preliminary multivariate analysis of variance was conducted to explore the effects of professional discipline on the frequency of unstandardized and standardized measure use. The results showed significant effects for professional discipline, Wilks’s Λ = 0.83, F(32, 4,895) = 8.13, p < .001. Analysis indicated that professional disciplines differed significantly in the frequency with which they used unstructured clinical interviews, F(4, 1,334) = 8.39, p < .001; informal mental status exams, F(4, 1,334) = 13.22, p < .001; informal behavioral observations, F(4, 1,334) = 2.46, p < .05; standardized diagnostic interviews, F(4, 1,334) = 3.84, p < .01; formal mental status exams, F(4, 1,334) = 7.78, p < .001; formal observational coding, F(4, 1,334) = 3.38, p < .01; and standardized checklists, F(4, 1,334) = 21.94, p < .001. Professional disciplines did not differ significantly in use of formal clinician ratings.

To further explore differences in how frequently professional disciplines use specific assessment measures, we conducted a series of post hoc comparisons. All of the post hoc comparisons reported here were significant after Bonferroni corrections. The primary differences were between psychologists and nonpsychologists. Consistent with our hypothesis, psychologists used several standardized measures more frequently than nonpsychologists. Psychologists used standardized diagnostic interviews more frequently than psychiatrists (p < .01); formal observational coding systems more frequently than psychiatrists (p < .05); and standardized checklists more frequently than counselors (p < .001), MFTs (p < .001), social workers (p < .001), and psychiatrists (p < .01). Psychologists also used unstructured clinical interviews more frequently than counselors (p < .001) and social workers (p < .01). Psychiatrists also used several unstandardized and standardized measures more frequently than other clinicians. Psychiatrists used unstructured clinical interviews more frequently than counselors (p < .001) and social workers (p < .01); informal mental status exams more frequently than counselors (p < .001), MFTs (p < .001), social workers (p < .001), and psychologists (p < .001); formal mental status exams more frequently than counselors (p < .001), MFTs (p < .001), social workers (p < .05), and psychologists (p < .01); and standardized checklists more frequently than counselors (p < .01), MFTs (p < .05), and social workers (p < .05).

Latent Profiles of Assessment Use

We used Mplus 7.3 (Muthén & Muthén, 2012) to conduct LPA to identify the best-fitting model of clinician assessment practices (N = 1,510) and evaluated one- to five-class models with correlated manifest indicator error variances (see Table 3). Using multiple measures of model fit, including the Bayesian information criteria (BIC), sample-size-adjusted BIC, entropy, Vuong–Lo–Mendell–Rubin likelihood ratio test (VLMR), and the Lo–Mendell–Rubin adjusted likelihood ratio test (adjusted VLMR), the four-class latent profile model provided the best fit (BIC = 30,508.63; adjusted BIC = 30,283.09; entropy = .97) and was superior to the more parsimonious three-class model (VLMR, p < .001; adjusted VLMR, p < .001). Tofighi and Enders (2007) identified both the adjusted BIC and the VLMR as reliable indicators of model fit, and these were the primary statistical criteria we used to select our model. There was little overlap between profiles in the four-class model, as indicated by the average latent profile probabilities for clinicians’ most likely latent profile class, which ranged from 0.96 to 0.99. Although the lower BIC and adjusted BIC for the five-class model indicate better fit, the lower entropy value and nonsignificant VLMR and adjusted VLMR suggest that the increase in model complexity is not justified relative to the more parsimonious four-class model. Additionally, the five-profile model yields three classes that are relatively small, each comprising 5% or less of the sample.
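The selection logic described above can be sketched in a few lines, using the fit indices reported in Table 3; treating p < .05 as the VLMR significance threshold is our assumption for illustration:

```python
# Sketch of the model-selection logic: among candidate models whose VLMR
# test indicates a significant improvement over the (k-1)-class model,
# prefer the lowest sample-size-adjusted BIC. Indices taken from Table 3.
candidates = [
    # (number of classes, adjusted BIC, VLMR p-value vs. k-1 classes)
    (2, 31320.44, 0.00),
    (3, 30772.73, 0.06),
    (4, 30283.09, 0.00),
    (5, 29919.34, 0.06),
]

# The five-class model has the lowest adjusted BIC, but its nonsignificant
# VLMR excludes it, so the four-class model wins.
significant = [(k, adj_bic) for k, adj_bic, p in candidates if p < 0.05]
best_k = min(significant, key=lambda kv: kv[1])[0]

print(best_k)  # 4 -- the four-class model
```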

Table 3.

Summary of Latent Profile Analyses of Child Clinician Assessment Practices.

Model | BIC | Adjusted BIC | Entropy | VLMR p | Adjusted VLMR p | Profile 1 | Profile 2 | Profile 3 | Profile 4 | Profile 5
One-class | 32,489.74 | 32,349.96 | | | | 1,510 (100%) | | | |
Two-class | 31,488.80 | 31,320.44 | .98 | .00 | .00 | 1,356 (89.8%) | 154 (10.2%) | | |
Three-class | 30,969.69 | 30,772.73 | .96 | .06 | .06 | 1,219 (80.7%) | 181 (12.0%) | 110 (7.3%) | |
Four-class | 30,508.63 | 30,283.09 | .97 | .00 | .00 | 1,158 (76.7%) | 179 (11.9%) | 90 (6.0%) | 83 (5.5%) |
Five-class | 30,173.48 | 29,919.34 | .96 | .06 | .06 | 1,104 (73.1%) | 173 (11.5%) | 84 (5.6%) | 86 (5.7%) | 63 (4.2%)

Note. N = 1,510. BIC = Bayesian information criteria; adjusted BIC = sample-size-adjusted Bayesian information criteria; VLMR = Vuong–Lo–Mendell–Rubin likelihood ratio test; adjusted VLMR = Lo–Mendell–Rubin adjusted likelihood ratio test.

A graphical representation of each of the four assessment profiles is displayed in Figure 1. The four-class latent profile model included classes we termed unstandardized, broad-spectrum, minimal, and selective assessors. Unstandardized assessors represented 76.7% of the overall sample (N = 1,158); these clinicians use all unstandardized tools (informal behavioral observations, interviews, and mental status exams) “all or most of the time,” and all standardized tools “never or almost never” or “rarely,” with the exception of standardized checklists, which they use “sometimes.” The remaining profiles represent much smaller groups of clinicians. Broad-spectrum assessors, who regularly use both standardized and unstandardized assessment tools, represented 11.9% of the overall sample (N = 179). They reported using all standardized measures “sometimes,” except standardized checklists, which they use “often,” and all unstandardized assessments “all or most of the time.” Minimal assessors represented just 6.0% of the overall sample (N = 90); they use unstructured clinical interviews “often” and informal mental status exams “sometimes,” but all other assessment tools “never or almost never” or “rarely.” Finally, selective assessors represented 5.5% of the overall sample (N = 83); they use informal behavioral observations and mental status exams “often,” unstructured clinical interviews “rarely,” standardized checklists and standardized diagnostic interviews “sometimes,” and other standardized measures “never or almost never” or “rarely.”

Figure 1.

Frequency of assessment use based on the most likely latent profile membership in the four-profile solution.

Note. IBO = informal behavioral observation; UCI = unstructured clinical interview; IME = informal mental status exam; SCL = standardized checklists; FCR = formal clinician ratings; SDI = standardized diagnostic interviews; FME = formal mental status exams; FOC = formal observational coding. Frequency of assessment measured using a 5-point Likert-type scale where: 1 = never or almost never; 2 = rarely; 3 = sometimes; 4 = often; 5 = all or most of the time.

Multivariate Logistic Regression

We used Mplus 7.3 (Muthén & Muthén, 2012) to conduct a multivariate logistic regression to predict profile membership. We examined the relative importance of the following a priori predictors: clinician age, professional discipline, and the three ASA subscales. Because LPA requires complete data for all predictor variables in a multivariate regression, we excluded clinicians with missing data on any predictor variable, leaving N = 1,415 for the multivariate analyses. The logistic regression and odds ratios for predictors of profile membership are presented in Table 4. We selected broad-spectrum assessors as the reference profile so that we could compare clinicians who reported frequently using standardized measures against all other clinician profiles. Only the Benefit Over Clinical Judgment subscale score significantly predicted profile membership (OR = 0.68, p < .05). To facilitate interpretation, this odds ratio was reversed, which indicated that broad-spectrum assessors were 1.47 times more likely than unstandardized assessors to agree that standardized measures have benefits over clinical judgment. The four-class latent profile multivariate model had good fit (BIC = 28,303.07; adjusted BIC = 28,156.94; entropy = .97).

Table 4.

Multivariate Logistic Regression Predicting Profile Membership.

Multivariate analysis, OR [95% CI]

Predictor | Unstandardized assessors^a | Minimal assessors^b | Selective assessors^c
Professional characteristics
 Clinician age | 0.98 [0.97, 1.01] | 1.00 [0.97, 1.03] | 1.03 [0.99, 1.06]
 Professional discipline (1 = psychologist) | 0.95 [0.65, 1.39] | 0.51 [0.26, 1.01] | 1.24 [0.67, 2.29]
Attitudes Toward Standardized Assessment
 Benefit Over Clinical Judgment | 0.68* [0.49, 0.95] | 1.00 [0.55, 1.84] | 1.09 [0.63, 1.89]
 Psychometric Quality | 1.35 [0.88, 2.08] | 1.04 [0.49, 2.23] | 0.82 [0.39, 1.73]
 Practicality | 1.18 [0.79, 1.76] | 1.22 [0.59, 2.53] | 0.84 [0.43, 1.64]

Note. OR = odds ratio; CI = confidence interval. Broad-spectrum assessors (N = 165) are the reference group. Bayesian information criteria = 28,303.07; adjusted Bayesian information criteria = 28,156.94; entropy = .97.

^a N = 1,094. ^b N = 84. ^c N = 72.

*p < .05.

Bonferroni-Corrected Post Hoc Analysis

The ASA Practicality score, which includes an item about prior training in standardized measures along with other items about the ease of using standardized measures, was not a significant predictor in our initial model. Because training in standardized measures is central to the rationale for why clinician discipline is expected to predict assessment practice, and because such training has been a strong predictor in prior research (e.g., Hatfield & Ogles, 2004; Lyon et al., 2011), we examined the single ASA training item in a post hoc analysis along with the significant predictors from the initial multivariate model (see Table 5). In this post hoc model, the Benefit Over Clinical Judgment score no longer significantly predicted profile membership. However, the training in standardized assessment item was a significant predictor of both the unstandardized (OR = 0.68, p < .01) and minimal assessor groups (OR = 0.73, p < .05). Reversing these odds ratios indicated that, for each 1-point increase on this item (indicating higher levels of training), participants were 1.47 times more likely to be in the broad-spectrum group than in the unstandardized group and 1.37 times more likely to be in the broad-spectrum group than in the minimal assessor group.
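The reversed odds ratios reported here (and for the initial model above) are simple reciprocals of the tabled values; a minimal sketch:

```python
# Reversing an odds ratio swaps which group is treated as the reference:
# an OR of 0.68 for a comparison group vs. broad-spectrum assessors
# becomes 1/0.68 for broad-spectrum assessors vs. that comparison group.
def reverse_odds_ratio(odds_ratio: float) -> float:
    return 1.0 / odds_ratio

print(round(reverse_odds_ratio(0.68), 2))  # 1.47 (vs. unstandardized assessors)
print(round(reverse_odds_ratio(0.73), 2))  # 1.37 (vs. minimal assessors)
```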

Table 5.

Bonferroni-Corrected Post Hoc Multivariate Logistic Regression Predicting Profile Membership.

Multivariate analysis, OR [95% CI]
Unstandardized assessorsa Minimal assessorsb Selective assessorsc
Attitudes Toward Standardized Assessment
 Benefit Over Clinical Judgment 0.83 [0.66, 1.05] 1.06 [0.70, 1.62] 0.93 [0.61, 1.43]
 I have adequate training in standardized measures 0.68** [0.59, 0.79] 0.73* [0.58, 0.92] 0.82 [0.65, 1.04]

Note. OR = odds ratio; CI = confidence interval. Broad-spectrum assessors (N = 162) are the reference group. Bayesian information criteria = 27387.36; adjusted Bayesian information criteria = 27269.83; entropy = .97.

aN = 1,050. bN = 82. cN = 74.

*p < .05. **p < .01.

Discussion

Previous research on child clinician assessment practices has provided descriptive information on clinician characteristics and assessment practices based primarily on small samples of mental health professionals (Cashel, 2002; Palmiter Jr., 2004). More recently, Jensen-Doss and Hawley (2010, 2011) used a large, national, multidisciplinary survey to identify attitudinal, discipline, and degree variables associated with use of standardized diagnostic interviews. In a follow-up analysis of that sample, this study found that (a) unstandardized measures are used more frequently than standardized measures as a whole; (b) psychologists and psychiatrists use at least some types of standardized measures more frequently than social workers, MFTs, and counselors; and (c) child clinicians representing five major mental health professions can be classified into four distinct profile groups based on their overall pattern of assessment use. Additionally, although a small group of clinicians do regularly use standardized measures, this group is no more likely to be younger clinicians or psychologists. Rather, these broad-spectrum assessors are primarily distinguished from unstandardized and minimal assessors in that they report greater training in using standardized measures.

Our study provides an updated and multidisciplinary confirmation that standardized measures remain far less frequently used than unstandardized measures, a finding that echoes previous survey research. Standardized checklists were used “sometimes,” more often than most other standardized measures. However, consistent with previous survey research (Cashel, 2002; Gilbody et al., 2002; Hatfield & Ogles, 2004; Ionita & Fitzpatrick, 2014), unstructured clinical interviews, informal mental status exams, and informal behavioral observations were used more frequently than all forms of standardized assessment. Psychologists used most standardized measures (i.e., standardized diagnostic interviews, standardized checklists, and formal observational coding) more frequently than nonpsychologists. However, psychiatrists also used standardized checklists more frequently than all other nonpsychologist clinicians, and they used both formal and informal mental status exams more frequently than all other clinicians, psychologists included. These differences in assessment practices may be driven by discipline-specific differences in assessment training and the function of assessment work in each discipline (e.g., psychologists historically served as the comprehensive assessors and developed many of the original standardized measures; mental status exams were developed to provide psychiatrists with efficient measurement for brief office visits).

Although the multivariate analysis of variance results suggest that psychologists use some standardized measures more frequently than nonpsychologists, LPA and post hoc analyses indicate that the importance of training in standardized measures cuts across all mental health disciplines. We identified a large class of clinicians, nearly 77%, who primarily use unstandardized measures (i.e., unstandardized assessors). This finding is consistent with prior literature and further highlights the substantial research–practice gap in clinical assessment. We also hypothesized that we would identify a smaller subgroup of clinicians who would frequently use standardized measures. Through LPA, we did identify a smaller class of assessors (i.e., broad-spectrum assessors) who used standardized measures at least some of the time, in addition to frequent use of unstandardized assessment tools. We did not find a group who used standardized measures at the same rate as unstandardized measures. Although this finding may seem troublesome, it highlights the importance of informal procedures even among clinicians who regularly use standardized measures. Indeed, it seems hard to envision routine practice that would not entail informal observation and interviewing to accompany standardized assessment (e.g., noticing affect or mannerisms; inquiring about the reason for referral; gauging motivation for treatment). Future research could examine in further detail exactly how clinicians integrate the more formal, standardized assessment information with the more subjective and nuanced information that is always part of clinical observation and interactions.

Two other groups also emerged. Minimal assessors accounted for approximately 6% of clinicians and used all forms of assessment infrequently. It therefore appears that there are some clinicians who do not endorse incorporating any type of assessment into treatment. As it seems difficult to imagine treatment that does not involve at least an unstructured interview to determine why the family is seeking treatment, it is possible that these clinicians either did not conceptualize such activities as assessment or worked in positions where someone else conducted the treatment intake. Our final group, which we termed selective assessors, comprised the remaining 5.5% of clinicians who, like minimal assessors, infrequently used unstandardized interviews but, unlike minimal assessors, did endorse the use of other unstandardized strategies and some selective use of standardized strategies.

When we examined predictors of profile membership suggested by existing research (clinician age, discipline, training, and attitudes), we did not find that younger clinicians were more likely to use standardized measures compared with older clinicians. The mean age of the clinicians in our sample was in the early 50s, suggesting that many clinicians may have completed their training well before the case for evidence-based assessment (Mash & Hunsley, 2005) began influencing training. If we had a larger proportion of younger clinicians in our sample, we might have seen significant differences between broad-spectrum assessors and other clinicians based on age. We found modest support for attitudinal factors (p < .05; see Table 4), consistent with existing research (Jensen-Doss & Hawley, 2010, 2011). However, the clear winner in our post hoc analysis was training. This single item reflecting clinician perception of the adequacy of their prior training in the use of standardized assessments was, by far, the strongest predictor; unstandardized and minimal assessors were almost half as likely as broad-spectrum assessors to endorse adequate training (p < .01 and p < .05, respectively; see Table 5).

Taken as a whole, our findings suggest that a significant barrier to routine implementation of evidence-based assessment in youth mental health care is lack of training. The vast majority of clinicians, across disciplines, may need additional training about how to administer, score, and interpret standardized assessment tools and about the benefit of using such tools in their practice before they can be expected to change their assessment practice. In thinking about how to encourage more clinicians to become “broad-spectrum” assessors, training that incorporates both practical components (e.g., administration, interpretation) as well as experiential or attitudinal components (e.g., benefits over clinical judgment alone; availability of inexpensive or free measures; how and why such measures can be useful in daily clinical work) may be important. Focusing exclusively on how to administer or score measures may result in partial success, at best, in increasing use of standardized measures. As previously noted, Garland et al. (2003) found that following a mandate to administer standardized measures, most clinicians administered the measures, but continued to hold negative attitudes about the usefulness of the mandated measures and very few (8%) incorporated the assessment results into the client’s treatment plan. In other words, simple mandates to complete standardized assessments may result in clinical practices that follow the “letter of the law” without honoring the “spirit.” This finding suggests that compliance cannot simply be measured in terms of completion (i.e., administration, scoring) and that changing clinician attitudes and perceptions may play an important role in the success of assessment training. Indeed, Lyon et al. (2015) recently found that clinician attitudes about practicality of use and perceived skill at using standardized measures improved after completion of training and consultation and that actual use of standardized measures also increased. Unfortunately, clinicians’ use of standardized measures began to decline again after consultation ended.

It may be that trying to train practicing clinicians to use more standardized assessment tools will bear less fruit than trying to increase the focus on such methods within graduate training programs. As mentioned in the introduction, training in clinical psychology doctoral programs often does not involve adequate coverage of evidence-based assessment (Krishnamurthy et al., 2004; Mours et al., 2009; Pidano & Whitcomb, 2012; although see Ready & Veague, 2014, for information suggesting this may be improving), and little is known about whether these topics are addressed at other mental health training programs. In light of recent trends toward competency-based education across mental health disciplines, greater emphasis on training in standardized assessment as part of competency in assessment by graduate training programs and accrediting bodies may be an ideal approach to enhancing assessment training across mental health disciplines. Outreach to these professional organizations regarding the costs associated with not using standardized assessment strategies, including inaccurate diagnoses (e.g., Rettew et al., 2009) and increased risk of treatment failure (e.g., Bickman et al., 2011; Lambert et al., 2003), may help convince them to address these training gaps.

In addition to improving training, improving the clinical utility of assessment tools might also help increase their use. Consistent with the deployment-focused model of intervention development (Weisz, 2014), it may be beneficial to involve clinicians more fully in the process of measure development, particularly for those assessments intended to be integrated into routine clinical practice. Partnerships between researchers and clinicians from an early stage of measure development could facilitate communication, long-term professional partnerships, and a cross-fertilization of ideas resulting in psychometrically sound measures with excellent clinical utility. Although practicing clinicians with preexisting negative attitudes toward standardized measures may express little interest in participating in measurement development, it may be that younger generations of clinicians will demonstrate greater interest, especially if adequate graduate training in standardized measures becomes more widespread. In addition, efforts to disseminate measures, particularly those that are brief and inexpensive, should market or advertise these directly to clinicians, in addition to publishing in research journals. A number of such measures are currently available, such as those described in Beidas et al. (2015), but more attention is needed to increase clinician awareness of available assessment resources.

Finally, it is worth noting the limitations of the current research. General limitations of survey data are well established, such as reliance on unrepresentative samples, low response rates, response order effects, monomethod bias, and use of items that elicit social desirability biases (Krosnick, 1999). This study attempted to minimize several of these by using a survey methodology that included focus groups and pilot testing, and multiple, personalized mailings to improve response rates (Dillman, 1999). Our response rates exceeded those of the majority of previous surveys on clinician assessment practices (e.g., Cashel, 2002; Ionita & Fitzpatrick, 2014; Palmiter Jr., 2004), but still ranged from a high of 69.6% for social workers to a low of 36.6% for psychiatrists. The survey relied on retrospective self-report of clinicians whose memories may have been inaccurate. Although clinicians were asked about their assessment practices, the extent to which the reported assessment practices were used in an appropriate, reliable manner is unknown. In addition, the clinicians sampled were chosen based on their membership in a large professional guild organization. Although the guilds boasting the largest memberships within each discipline were selected, guild members may not represent the broader population of child clinicians. For example, guild members may have greater access to resources such as a network of like-minded colleagues or access to guild publications. If so, guild members may be more likely to use research supported techniques such as standardized measures. As such, the finding that less than 20% of the sample regularly used standardized measures may be an overestimate of the true population rate.

Despite these limitations, the current study serves as an important step forward in identifying and explaining patterns of assessment use in a large sample of youth-serving mental health clinicians. Our findings suggest a critical role for clinician training in supporting evidence-based assessment. Future research should examine the components of effective clinician training interventions that increase standardized assessment use among the majority of clinicians who rarely incorporate standardized assessment into their routine practice.

Acknowledgments

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The work reported here is based on Jonathan Cook’s doctoral dissertation and was supported by research grant R03 MH077752 (PI: Kristin Hawley) from the National Institute of Mental Health.

Footnotes

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

1. The current study uses the sample, measure development, and survey procedure described in Jensen-Doss and Hawley (2010).

References

  1. American Association for Marriage and Family Therapy. (2004). Marriage and family therapy core competencies. Alexandria, VA: Author.
  2. APA Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61, 271–285.
  3. Beidas RS, Stewart RE, Walsh L, Lucas S, Downey MM, Jackson K, . . . Mandell DS (2015). Free, brief, and validated: Standardized instruments for low-resource mental health settings. Cognitive and Behavioral Practice, 22, 5–19.
  4. Bickman L, Kelley SD, Breda C, Vides de Andrade AR, & Riemer M (2011). Effects of routine feedback to clinicians on mental health outcomes of youth: Results of a randomized trial. Psychiatric Services, 62, 1423–1429.
  5. Bickman L, Lambert EW, Andrade AR, & Penaloza R (2000). The Fort Bragg continuum of care for children and adolescents: Mental health outcomes over five years. Journal of Consulting and Clinical Psychology, 68, 710–716.
  6. Bieschke K, Bell D, Davis C, Hatcher R, Peterson R, & Rodolfa E (2009). Establishing and assessing core competencies in professional psychology: A call to action. Training and Education in Professional Psychology, 3(Suppl. 4), S1–S4.
  7. Cashel ML (2002). Child and adolescent psychological assessment: Current clinical practices and the impact of managed care. Professional Psychology: Research and Practice, 33, 446–453.
  8. Chambers DA, Ringeisen H, & Hickman EE (2005). Federal, state, and foundation initiatives around evidence-based practices for child and adolescent mental health. Child & Adolescent Psychiatric Clinics of North America, 14, 307–327.
  9. Chorpita BF, & Weisz JR (2009). Modular approach to therapy for children with anxiety, depression, trauma, or conduct problems (MATCH-ADTC). Satellite Beach, FL: PracticeWise.
  10. Costello E, Egger H, & Angold A (2005). 10-year research update review: The epidemiology of child and adolescent psychiatric disorders: I. Methods and public health burden. Journal of the American Academy of Child and Adolescent Psychiatry, 44, 972–986.
  11. Council on Social Work Education. (2012). Educational policy and accreditation standards. Retrieved from http://www.cswe.org/File.aspx?id=13780
  12. Dillman DA (1999). Mail and internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley.
  13. Eyberg S, & Funderburk B (2011). Parent-Child Interaction Therapy protocol. Gainesville, FL: PCIT International.
  14. Garland AF, Kruse M, & Aarons GA (2003). Clinicians and outcome measurement: What’s the use? Journal of Behavioral Health Services & Research, 30, 393–405.
  15. Gilbody SM, House AO, & Sheldon TA (2002). Psychiatrists in the UK do not use outcomes measures: National survey. British Journal of Psychiatry, 180, 101–103.
  16. Hatfield DR, & Ogles BM (2004). The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research and Practice, 35, 485–491.
  17. Hawley KM, Cook JR, & Jensen-Doss A (2009). Do noncontingent incentives increase survey response rates among mental health providers? A randomized trial comparison. Administration and Policy in Mental Health, 36, 343–348.
  18. Herman KC, Ostrander R, Walkup JT, Silva SG, & March JS (2007). Empirically derived subtypes of adolescent depression: Latent profile analysis of co-occurring symptoms in the Treatment for Adolescents with Depression Study (TADS). Journal of Consulting and Clinical Psychology, 75, 716–728.
  19. Hunsley J, & Mash EJ (2007). Evidence-based assessment. Annual Review of Clinical Psychology, 3, 29–51.
  20. Ionita G, & Fitzpatrick M (2014). Bringing science to clinical practice: A Canadian survey of psychological practice and usage of progress monitoring measures. Canadian Psychology/Psychologie Canadienne, 55, 187–196.
  21. Jensen AL, & Weisz JR (2002). Assessing match and mismatch between practitioner-generated and standardized interviewer-generated diagnoses for clinic-referred children and adolescents. Journal of Consulting and Clinical Psychology, 70, 158–168.
  22. Jensen-Doss A, & Hawley KM (2010). Understanding barriers to evidence-based assessment: Clinician attitudes toward standardized assessment tools. Journal of Clinical Child & Adolescent Psychology, 39, 885–896.
  23. Jensen-Doss A, & Hawley KM (2011). Understanding clinicians’ diagnostic practices: Attitudes toward the utility of diagnosis and standardized diagnostic tools. Administration and Policy in Mental Health and Mental Health Services Research, 38, 476–485.
  24. Jensen-Doss A, & Weisz JR (2008). Diagnostic agreement predicts treatment process and outcomes in youth mental health clinics. Journal of Consulting and Clinical Psychology, 76, 711–722.
  25. Kazdin AE (2008). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, 63, 146–159.
  26. Krishnamurthy R, VandeCreek L, Kaslow NJ, Tazeau YN, Miville ML, Kerns R, . . . Benton SA (2004). Achieving competency in psychological assessment: Directions for education and training. Journal of Clinical Psychology, 60, 725–739.
  27. Krosnick JA (1999). Survey research. Annual Review of Psychology, 50, 537–567.
  28. Lambert MJ, Whipple JL, Hawkins EJ, Vermeersch DA, Nielsen SL, & Smart DW (2003). Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clinical Psychology: Science and Practice, 10, 288–301.
  29. Lyon AR, Charlesworth-Attie S, Vander Stoep A, & McCauley E (2011). Modular psychotherapy for youth with internalizing problems: Implementation with therapists in school-based health centers. School Psychology Review, 40, 569–581.
  30. Lyon AR, Dorsey S, Pullmann M, Silbaugh-Cowdin J, & Berliner L (2015). Clinician use of standardized assessments following a common elements psychotherapy training and consultation program. Administration and Policy in Mental Health and Mental Health Services Research, 42, 47–60.
  31. Mash EJ, & Hunsley J (2005). Evidence-based assessment of child and adolescent disorders: Issues and challenges. Journal of Clinical Child & Adolescent Psychology, 34, 362–379.
  32. Mours JM, Campbell CD, Gathercoal KA, & Peterson M (2009). Training in the use of psychotherapy outcome assessment measures at psychology internship sites. Training and Education in Professional Psychology, 3, 169–176.
  33. Muthén LK, & Muthén BO (2012). Mplus user’s guide (7th ed.). Los Angeles, CA: Author.
  34. National Institute of Mental Health. (2008). The National Institute of Mental Health strategic plan (NIH Publication No. 08–6368). Washington, DC: National Institutes of Health.
  35. Overington L, Fitzpatrick M, Hunsley J, & Drapeau M (2015). Trainees’ experiences using progress monitoring measures. Training and Education in Professional Psychology, 9, 202–209. doi:10.1037/tep0000088
  36. Palmiter DJ Jr. (2004). A survey of the assessment practices of child and adolescent clinicians. American Journal of Orthopsychiatry, 74, 122–128.
  37. Pidano AE, & Whitcomb JM (2012). Training to work with children and families: Results from a survey of psychologists and doctoral students. Training and Education in Professional Psychology, 6, 8–17.
  38. Ready RE, & Veague HB (2014). Training in psychological assessment: Current practices of clinical psychology programs. Professional Psychology: Research and Practice, 45, 278–282. doi:10.1037/a0037439
  39. Rettew DC, Doyle A, Achenbach TM, Dumenci L, & Ivanova M (2009). Meta-analyses of agreement between diagnoses made from clinical evaluations and standardized diagnostic interviews. International Journal of Methods in Psychiatric Research, 18, 169–184.
  40. Rosenberg RP, & Beck S (1986). Preferred assessment methods and treatment modalities for hyperactive children among clinical child and school psychologists. Journal of Clinical Child Psychology, 15, 142–147.
  41. Tofighi D, & Enders CK (2007). Identifying the correct number of classes in growth mixture models. In Hancock GR & Samuelson KM (Eds.), Advances in latent variable mixture models (pp. 317–341). Greenwich, CT: Information Age.
  42. Walrath CM, Petras H, Mandell DS, Stephens RL, Holden EW, & Leaf PJ (2004). Gender differences in patterns of risk factors among children receiving mental health services: Latent class analyses. Journal of Behavioral Health Services & Research, 31, 297–311.
  43. Weisz JR (2014). Building robust psychotherapies for children and adolescents. Perspectives on Psychological Science, 9, 81–84.
  44. Weisz JR, & Bearman SK (2012). Behavioral and Affective Skills in Coping (BASIC). Unpublished treatment manual.