Author manuscript; available in PMC: 2018 Aug 29.
Published in final edited form as: Adm Policy Ment Health. 2011 Nov;38(6):476–485. doi: 10.1007/s10488-011-0334-3

Understanding Clinicians’ Diagnostic Practices: Attitudes Toward the Utility of Diagnosis and Standardized Diagnostic Tools

Amanda Jensen-Doss 1, Kristin M. Hawley 2
PMCID: PMC6114089  NIHMSID: NIHMS983630  PMID: 21279679

Abstract

Data on clinician diagnostic practices suggest they may not align with evidence-based guidelines. To better understand these practices, a multidisciplinary survey of 1,678 child clinicians examined attitudes toward the utility of diagnosis and standardized diagnostic tools. Psychiatrists were more likely than other disciplines to value diagnosis, whereas psychologists were more likely than others to value standardized diagnostic tools. Private practitioners held less positive views in both domains than other practitioners. Both attitude scales predicted self-reported diagnostic practices, although views of diagnostic utility were more strongly associated with diagnosing in general, whereas views of diagnostic tools better predicted standardized tool use.

Keywords: Diagnosis, Provider attitudes, Evidence-based assessment


Diagnosis plays a central role in the treatment of psychological distress. The assignment of a diagnosis from the Diagnostic and Statistical Manual of Mental Disorders (DSM; American Psychiatric Association 2000) is often required by clinics and third-party payers to authorize treatment. Diagnoses can also aid in treatment planning, as many interventions are designed for specific diagnostic groups. This role of diagnosis is becoming increasingly important as many mental health organizations turn to diagnosis-specific evidence-based treatments (EBTs) to improve service quality (e.g., Chorpita and Donkervoet 2005; Jensen-Doss et al. 2009). Finally, there is some evidence that an accurate diagnosis is an important precursor to treatment success (e.g., Jensen-Doss and Weisz 2008; Pogge et al. 2001).

Despite their importance, questions have been raised about the accuracy of the diagnoses generated by clinicians practicing in community settings. A recent meta-analysis of agreement between clinician-generated diagnoses and standardized diagnostic interviews (SDIs), research instruments with standardized sequencing of questions and algorithms to assign diagnoses, found that the average agreement for child diagnoses is kappa (κ) = 0.39 (Rettew et al. 2009), which is considered "poor" agreement (Landis and Koch 1977). Furthermore, studies comparing SDIs and clinician-generated diagnoses to external indicators of validity have found stronger support for the accuracy of SDIs (e.g., Basco et al. 2000).
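For readers unfamiliar with the statistic, the sketch below illustrates how chance-corrected agreement of this kind is computed; the helper function and toy diagnoses are hypothetical illustrations, not the meta-analytic procedure itself.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two categorical raters."""
    n = len(rater_a)
    # Raw proportion of cases where the two raters assign the same label.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical diagnoses for eight clients from a clinician and an SDI.
clinician = ["ADHD", "ADHD", "anxiety", "none", "ADHD", "none", "anxiety", "ADHD"]
sdi = ["ADHD", "anxiety", "anxiety", "none", "none", "none", "ADHD", "ADHD"]
print(round(cohen_kappa(clinician, sdi), 2))  # well below 1.0 despite some raw agreement
```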

In part because of these data, guidelines have been generated for evidence-based assessment (EBA) of adult (Hunsley and Mash 2005) and child (Mash and Hunsley 2005) psychopathology, typically providing suggestions for the domains to be assessed, the informants to involve, and the methods to gather data. Traditionally, the predominant diagnostic method has been the unstructured diagnostic interview (UDI), in which clinical judgment is used to guide question-asking and interpretation of information. Unfortunately, research has documented several information-gathering biases that could negatively impact the validity of this approach (Angold and Fisher 1999; Garb 1998, 2005). It is not surprising, therefore, that none of the EBA guidelines for child psychopathology specifically identify the UDI as an essential diagnostic method. Two other methods are consistently recommended. First, standardized rating scales are recommended for diagnostic screening (Klein et al. 2005; McMahon and Frick 2005; Silverman and Ollendick 2005; Youngstrom et al. 2005) or sometimes as a method of directly assessing diagnostic criteria (Pelham et al. 2005). Second, SDIs are often recommended for the second stage of in-depth assessment, following initial screening (Klein et al.; McMahon and Frick; Silverman and Ollendick), although questions have been raised about the utility of currently available SDIs for disorders such as attention-deficit hyperactivity disorder (Pelham et al. 2005) and pediatric bipolar disorder (Youngstrom et al. 2005).

Available data suggest that usual care clinician diagnostic methods often do not map onto these EBA recommendations. Surveys indicate that the UDI is the assessment method used most often by psychologists (e.g., Cashel 2002). A survey of psychiatrists revealed similar practices, with a majority of respondents saying they never use standardized assessment tools for case identification (Gilbody et al. 2002).

These data raise questions about why clinicians use the diagnostic methods they do. Some have suggested that pressures to assign a diagnosis for authorization of services play a role (e.g., Jensen and Weisz 2002). Indeed, several studies have documented that clinicians are more likely to assign diagnoses to clients who they believe will pay using managed care than to self-pay clients (e.g., Lowe et al. 2007; Pomerantz and Segrist 2006). Other data suggest that some clinicians might also feel pressured to under-diagnose clients; surveys of psychiatrists (Setterberg et al. 1991) and counselors (Mead et al. 1997) suggest that deliberate under-diagnosis to avoid stigma may be a common practice.

These external influences on the assignment of diagnoses raise questions about whether clinicians feel that the assignment of an accurate diagnosis is clinically useful. If clinicians do not feel diagnosis is valuable for treatment planning, it follows that they would not invest significant effort in the diagnostic process. Studies assessing clinicians’ views of the DSM suggest this could be the case. Frazer et al. (2009), for example, found that social workers said they primarily used the DSM for billing purposes. Similarly, surveys have shown that fewer than half of social workers (Kutchins and Kirk 1988), psychiatrists (Jampala et al. 1992), and psychologists (Miller et al. 1981) say that the DSM is useful for treatment planning. If clinicians are primarily using diagnosis for purposes such as authorization of services, which simply requires the assignment of some diagnosis (or one of a specific set of diagnoses), then selecting an accurate diagnosis among these options may seem less useful.

Finally, even if clinicians do feel diagnosis is clinically useful, it is possible that they face barriers to using standardized diagnostic tools, or have concerns about the usefulness of the tools themselves. Few data exist to test this possibility, particularly with respect to diagnostic methods specifically. Studies of clinician use of standardized outcome measures suggest that practical concerns, such as paperwork burden, influence measure selection (Garland et al. 2003; Hatfield and Ogles 2007), as do concerns about the measures themselves, such as their relevance to subgroups of clients like ethnic minorities.

A recent, large, multidisciplinary survey examined child clinicians’ attitudes toward standardized assessment tools in general and their relation to self-reported use of a range of such tools (Jensen-Doss and Hawley 2010). Clinicians rated their views of the psychometric qualities of these tools, their benefit over clinical judgment alone, and their practicality. Doctoral-level clinicians and psychologists provided more positive ratings on all three scales than master’s-level clinicians and non-psychologists, respectively, although degree was no longer significant when tested simultaneously with discipline. Private practitioners also had less positive ratings on the psychometric quality and clinical judgment scales than other providers. All three attitude scales were predictive of self-reported standardized assessment tool use, although practical concerns were the strongest, and the only independent, predictor of use.

The present study provides follow-up analyses on this survey sample, focusing specifically on clinicians’ diagnostic attitudes and practices. Prior studies have not simultaneously examined clinicians’ attitudes toward the utility of diagnosis and of specific diagnostic tools, so it is not clear which of these factors is more related to diagnostic practices. Understanding these attitudes would help inform efforts to encourage clinicians to engage in more evidence-based diagnosis. Consistent with the theory of planned behavior (Ajzen 1991), meta-analyses suggest that attitudes toward a given behavior are a good predictor of an individual’s intention to perform the behavior and that intentions are subsequently related to behavior (Ajzen; Armitage and Conner 2001; Godin and Kok 1996), suggesting that improving clinicians’ diagnostic attitudes could lead to improved diagnostic behaviors through the pathway of increased intention.

Understanding clinician characteristics associated with these attitudes would also help training efforts be more targeted. Initiatives to target diagnostic practices within organizations would particularly benefit from data on differences in attitudes between clinicians with different levels of training and from different disciplinary backgrounds, variables often associated with different roles in organizations. Prior surveys on diagnosis have typically included clinicians from only one discipline, and have generally not compared clinicians with different levels of training. In addition, given the different reimbursement and authorization requirements faced by private practitioners relative to agency employees, it is possible that private practitioners might differ in their views of the utility of diagnosis, as well as their views of specific diagnostic tools. Also, given clinician concerns about the utility of standardized assessment tools for diverse clients (Garland et al. 2003), clinicians who see more clients from low socioeconomic or minority status backgrounds might have less positive views of standardized diagnostic tools. Finally, exploring the relation between therapist demographic characteristics and attitudes would help identify clinicians who might be more or less open to training efforts.

The present study was designed to examine the attitudes that might underlie the diagnostic practices of child-serving clinicians. Utilizing a large, national, interdisciplinary sample, we examined clinician views about the utility of diagnosis and clinician views regarding standardized diagnostic instruments. We also examined the relation between these attitudes and therapists’ demographic (age, gender, ethnicity), professional (degree, discipline), and practice characteristics (private practice setting, proportion low income clients, proportion ethnic minority clients). Finally, we examined the association between these attitudes and the frequency of therapists’ self-reported diagnosing of clients and use of SDIs and standardized checklists.

Method

Participants

Participants were 1,678 child-serving counselors (19.8%), marriage and family therapists (MFTs; 17.2%), social workers (18.3%), psychologists (26.7%), and psychiatrists (18.0%) who participated in a survey about youth clinical care (Jensen-Doss and Hawley 2010). The sample was primarily female (63.8%) and Caucasian (90.3%) and was nearly evenly divided between master’s (47.3%) and doctoral level (52.7%) providers. Table 1 details participant characteristics.

Table 1.

Provider characteristics

Demographic characteristics
Female, % (N) a: 63.8% (1070)
Age, M (SD) b: 52.90 (10.07)
Ethnicity, % (N) c
 White/Caucasian (non-Hispanic): 90.3% (1512)
 Hispanic/Latino(a): 2.4% (41)
 Black/African American: 2.8% (47)
 Asian/Pacific Islander: 2.3% (39)
 Mixed/Other: 2.1% (35)
Professional characteristics
Highest degree completed, % (N) d
 Master’s: 47.3% (794)
 Doctoral: 52.7% (884)
Professional discipline, % (N) d
 Counselor: 19.8% (333)
 Marriage and family therapist (MFT): 17.2% (288)
 Social worker: 18.3% (307)
 Psychologist: 26.7% (448)
 Psychiatrist: 18.0% (302)
Practice characteristics
Practice setting, % (N) a,e
 Elementary, middle, or high school: 11.8% (198)
 Higher education setting: 9.8% (164)
 Outpatient clinic: 17.6% (295)
 Private practice: 60.5% (1014)
 Day treatment facility: 1.6% (27)
 Residential facility or group home: 3.8% (63)
 Inpatient hospital or medical clinic: 5.5% (93)
 Managed care organization: 1.6% (26)
 Other: 14.0% (235)
Percent of caseload who are ethnic minorities, M (SD) f: 32.44 (28.14)
Percent of caseload who are low income, M (SD) g: 34.86 (32.38)
Diagnostic practices h
Frequency of using assessment to diagnose clients, M (SD) i: 4.34 (1.02)
Frequency of standardized diagnostic interview use, M (SD) j: 2.02 (1.22)
Frequency of standardized checklist use, M (SD) k: 3.22 (1.41)
a n = 1677. b n = 1648. c n = 1674. d n = 1678.
e Percentages for practice setting do not sum to 100 because providers could choose more than one.
f n = 1646. g n = 1654.
h Diagnostic practices items were administered only to the 1,512 participants who indicated they conduct assessments with clients presenting for treatment.
i n = 1458. j n = 1453. k n = 1495.

Procedures

The tailored design method (Dillman 2000) was used to develop the survey. The survey was then piloted with a sample of 14 mental health providers. Providers were asked to complete the survey, provide comments in an open-ended section, and participate in focus groups to provide feedback regarding the measure (e.g., Do the items include jargon not commonly used by all disciplines?) and the planned recruitment methods. A second randomized pilot study of 500 providers was then used to determine the optimal level of survey incentives, indicating that a non-contingent $2 bill was the most cost-effective incentive (Hawley et al. 2009).

The final survey was mailed to 5,000 mental health providers, including 1,000 from each of five professional organizations (American Counseling Association; American Association for Marriage and Family Therapy; American Academy of Child and Adolescent Psychiatry; American Psychological Association; National Association of Social Workers). These organizations provided mailing lists of random, representative membership samples. Individuals within the US who indicated clinical interest in children, adolescents, or families were randomly selected for participation. Individuals who received the survey, but who did not provide services to youths, were asked to indicate this and return the survey uncompleted.

The final survey mailing featured several aspects of the tailored design method (Dillman 2000) intended to increase participation rates, including (a) personally addressed and hand-signed cover letters, with first-class stamps on both the individually addressed outgoing envelope and the return envelope; (b) a unique survey appearance, with landscape orientation and photographs of children to help the survey stand out from other mail; and (c) formatting features designed to help participants complete and return it with ease (e.g., important words were bolded or italicized and each section of the survey was grouped using borders). Clinicians received up to five mailings. The first was a pre-notice letter announcing the upcoming survey. The second included the survey, a $2 bill, and a return envelope. The third was a postcard that thanked those who had returned the survey and reminded non-respondents to participate. The fourth was sent to non-respondents only and included another copy of the survey. The fifth was sent to any remaining non-respondents and consisted of a third copy of the survey.

Of the 5,000 individuals selected for participation, 347 (6.9%) had incorrect addresses. Of the 4,653 individuals successfully contacted, 2,863 (61.6%) responded [1,639 (35.2%) did not respond and 151 (3.2%) declined participation]. Of the responders, 1,143 (37.9%) indicated they did not provide services to youths. Of the 1,720 remaining participants, six were excluded from this study because their highest degree was a bachelor’s degree, which we felt introduced unnecessary variability into the sample. Excluding these individuals, along with two others with missing degree data, yielded a final possible sample size of 1,712. Of these, 1,678 completed at least one of the survey items for the present study, leading to their inclusion in this sample. These individuals did not differ significantly from the 34 potential participants who did not complete any items on any of the demographic, professional, or practice characteristics listed in Table 1. Finally, items assessing clinicians’ diagnostic practices and their views of diagnostic tools were administered only to the 1,512 (90.1%) clinicians who indicated that they conduct assessments with youths presenting for treatment. These clinicians were less likely to be master’s level [χ2(1, N = 1678) = 26.53, P < 0.001] or a counselor [χ2(1, N = 1678) = 30.77, P < 0.001], and more likely to be a psychiatrist [χ2(1, N = 1678) = 30.33, P < 0.001] than the 166 clinicians who indicated they did not conduct assessments. They did not differ on any other demographic, professional, or practice characteristics, or on their ratings on the utility of diagnosis scale (see Measures).

Measures

Diagnosis Attitude Scales

Clinicians completed two diagnosis attitude scales written for the current study. Items were generated based on prior studies of clinician attitudes toward diagnosis (e.g., Jampala et al. 1992; Kutchins and Kirk 1988; Miller et al. 1981) and EBA (Garland et al. 2003; Gilbody et al. 2002) and were then subjected to the piloting procedures described above. All attitude items were rated on a five-point Likert scale from 1 (Strongly disagree) to 5 (Strongly agree) and any negative items were reverse coded such that higher scale scores indicated more positive attitudes. Scales were scored by averaging items; participants missing more than two items on a given scale were treated as missing.
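As a concrete illustration of this scoring rule, here is a minimal sketch; the item names and responses are hypothetical.

```python
import numpy as np
import pandas as pd

def score_scale(items: pd.DataFrame, reverse_cols, max_missing=2):
    """Score a 1-5 Likert scale as described above: reverse-code negative
    items (6 - rating), average the items, and treat respondents missing
    more than `max_missing` items as missing."""
    scored = items.copy()
    scored[reverse_cols] = 6 - scored[reverse_cols]
    scores = scored.mean(axis=1)                    # mean skips NaN by default
    too_sparse = scored.isna().sum(axis=1) > max_missing
    return scores.mask(too_sparse)                  # set over-sparse respondents to NaN

# Hypothetical responses to a five-item scale with one negative item (q4).
items = pd.DataFrame({"q1": [4, 5], "q2": [3, np.nan], "q3": [4, 4],
                      "q4": [2, 1], "q5": [5, 4]})
print(score_scale(items, reverse_cols=["q4"]))      # 4.0 and 4.5
```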

The utility of diagnosis scale consisted of five items assessing clinician attitudes toward the usefulness of diagnosis in treatment (e.g., "Accurate diagnosis is an important part of my treatment planning"; see Table 2). When the scale properties of these items were examined, the scale had somewhat low internal consistency (α = 0.62), which did not improve if items were omitted.

Table 2.

Descriptive statistics for the utility of diagnosis and standardized diagnosis scales and items

Scale or item   N   M (SD) a   d b
Utility of diagnosis scale 1634 3.15 (0.71) 0.21
 Accurate diagnosis is an important part of my treatment planning 3.96 (0.93) 1.04
 Most children and families come to work on problems of daily living rather than a diagnosisc 3.72 (1.07) 0.67
 It is sometimes necessary to assign a diagnosis that is not clinically indicated in order to qualify for servicesc 2.89 (1.22) −0.087
 Assigning a diagnosis is more important for authorization of services or obtaining insurance payment than for planning treatmentc 2.88 (1.23) −0.094
 It is sometimes necessary to assign a less serious diagnosis than is clinically indicated to avoid stigma associated with serious diagnosesc 2.72 (1.14) −0.24
Standardized diagnosis scale 1454 3.39 (0.54) 0.72
 Standardized measures help with accurate diagnosis 3.91 (0.77) 1.18
 Standardized measures help detect diagnostic comorbidity 3.67 (0.72) 0.94
 Standardized measures help with differential diagnosis 3.64 (0.78) 0.83
 Using clinical judgment to diagnose children is superior to using standardized assessment measuresc 3.15 (0.95) 0.16
 Standardized diagnostic interviews interfere with establishing rapport during an intakec 3.05 (1.08) 0.043
 Standardized measures over diagnose psychopathologyc 2.83 (0.89) −0.19
 Most standardized measures aren’t helpful because they don’t map onto DSM diagnostic criteriac 2.45 (0.84) −0.66
a Mean on a scale from 1 to 5.
b Cohen’s d effect size, comparing each mean to the neutral value of 3.
c Item was reverse-scored before it was included in the scale score.

However, given that this scale was short, which can impact the magnitude of alpha (e.g., Cortina 1993), we also examined the corrected inter-item correlations for this scale. All item correlations were above 0.30, exceeding the recommended cutoff of 0.20 for inclusion in a scale (Kline 1986).
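Both reliability statistics can be computed directly from a respondents-by-items matrix. The sketch below assumes a pandas DataFrame of item responses; the corrected correlation is implemented here as the item-total correlation with the item removed, one standard reading of that statistic.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def corrected_item_totals(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items."""
    return pd.Series({col: items[col].corr(items.drop(columns=col).sum(axis=1))
                      for col in items.columns})

# Example with the hypothetical `items` DataFrame from the scoring sketch above:
# cronbach_alpha(items), corrected_item_totals(items)
```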

Participants who conducted assessments completed the standardized diagnosis scale, which consisted of seven items assessing opinions about standardized diagnostic tools (e.g., "Standardized measures help with accurate diagnosis"; see Table 2). Items were drawn from a broader measure of attitudes toward standardized assessment tools in general, the Attitudes toward Standardized Assessment scales (ASA; Jensen-Doss and Hawley 2010), and had good internal consistency (α = 0.73).

Diagnostic Practices

The providers who indicated they conducted assessments were also asked to rate the frequency with which they assess for diagnoses on a scale from 1 (almost never) to 5 (almost always). Using the same frequency scale, they were also asked how often they use two types of standardized diagnostic tools: "Structured or standardized diagnostic interviews for child diagnosis" (SDIs) and "Standardized checklists for child/family symptoms or functioning." They were provided with a definition and examples of each type of measure.

Data Analysis Plan

The study questions were addressed utilizing t-tests and multiple regression. Prior to analysis, the continuous variables were examined for outliers and normality. There were no univariate outliers. Two independent variables, percent minority caseload and percent low income caseload, were positively skewed and platykurtic; they were therefore dichotomized at the median (25% for both variables). Frequency of SDI use was also positively skewed. As it was desirable to keep this variable continuous in order to facilitate comparison to the analyses of checklist use, an inverse transformation was used for this variable. Therapist ethnicity was also dichotomized into minority (1) versus Caucasian (0), as the sample was more than 90% Caucasian.
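A sketch of these preprocessing steps, with a hypothetical data frame standing in for the survey responses; the exact inverse-transform formula was not reported, so the common 1/x form is assumed here.

```python
import pandas as pd

# Hypothetical data frame standing in for the survey responses.
df = pd.DataFrame({"pct_minority": [10, 60, 25], "pct_low_income": [5, 80, 30],
                   "sdi_use": [1, 4, 2],
                   "ethnicity": ["Caucasian", "Minority", "Caucasian"]})

df["minority_caseload_hi"] = (df["pct_minority"] > 25).astype(int)   # median split at 25%
df["low_income_hi"] = (df["pct_low_income"] > 25).astype(int)        # median split at 25%
df["sdi_use_inv"] = 1 / df["sdi_use"]        # assumed inverse transform of the 1-5 rating
df["therapist_minority"] = (df["ethnicity"] != "Caucasian").astype(int)
```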

Given our large sample size, an alpha of 0.001 was employed; this P value corresponded to effect sizes that fell below conventions for a "small" effect size, so this was deemed appropriate to control for Type I error without missing meaningful patterns in the data. Cohen’s (1988) d was used as the effect size indicator for comparisons of means, with d = 0.20 considered a small effect, d = 0.50 a medium effect, and d = 0.80 a large effect. For the multiple regressions, R2, or the proportion of variance explained, was used as the effect size indicator; R2 values of 0.02, 0.13, and 0.30 are considered small, medium, and large, respectively (Cohen 1988).

Examination of missing data indicated that the rate of missingness was 5% or fewer for all variables, suggesting that any missing data procedure would likely yield similar results (Tabachnick and Fidell 2007). The analyses were therefore conducted using pairwise deletion.

Results

Clinician Attitudes toward the Utility of Diagnosis

Participants provided a mean rating of 3.15 (SD = 0.71) on the 1-to-5 utility of diagnosis scale. This was significantly different from a neutral rating of 3 [t(1633) = 8.49, P < 0.001], with the associated small effect size (d = 0.21) indicative of mildly positive views toward the utility of diagnosis. Examination of the item responses presented in Table 2 shows that clinicians strongly endorsed the notion that accurate diagnosis is important in treatment planning (a large effect, d = 1.04), but also had moderately strong beliefs that most clients come to treatment for issues other than a diagnosis (d = 0.67). Clinicians also disagreed that under-diagnosis is necessary to avoid stigma (d = −0.24). Other item effect sizes fell below the cutoff for a small effect.
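These statistics follow directly from the reported mean and standard deviation; a quick arithmetic check using the values above (small discrepancies reflect rounding in the reported M and SD):

```python
import math

m, sd, n, neutral = 3.15, 0.71, 1634, 3.0   # values reported above
d = (m - neutral) / sd                       # Cohen's d against the scale midpoint
t = (m - neutral) / (sd / math.sqrt(n))      # single-sample t statistic
print(round(d, 2), round(t, 1))              # ~0.21 and ~8.5, matching within rounding
```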

Provider degree, discipline, and private practice setting were significant (P < 0.001) univariate predictors of this scale (see Table 3). Doctoral-level clinicians provided higher ratings than master’s-level clinicians, a small effect (R2 = 0.02). Professional discipline explained 4.9% of the variance in ratings on this scale, also a small effect. Psychiatrists provided higher ratings than all other disciplines (all P’s < 0.001). Private practitioners provided significantly lower ratings than professionals working in other settings (R2 = 0.022), a small effect.

When all predictors were entered into a single multiple regression analysis, professional discipline continued to be a significant predictor, with the pairwise comparisons between psychiatrists and the other disciplines remaining significant (Table 3). Private practice setting also remained significant, but degree was no longer significant. The collective set of predictors explained 6.5% of the variability in the utility of diagnosis scale, a small effect.

Table 3.

Demographic, professional, and practice characteristics as predictors of attitudes

Predictor variable   Utility of diagnosis   Standardized diagnostic assessment
(for each scale, columns are: univariate B, univariate R2, multivariate B)
Demographic characteristics
 Age 0.005 0.004 −0.002 0.001 0.0002 0.001
 Gender (female = 1) −0.065 0.002 0.002 −0.051 0.002 −0.010
 Ethnicity (minority = 1) 0.093 0.001 −0.056 −0.036 0.0004 −0.052
Professional characteristics
 Highest degree (doctoral = 1) 0.20* 0.02 0.066 0.23* 0.04 0.15
 Professional discipline a 0.049 0.081
  Psychologist (1) vs. psychiatrist (0) −0.32* −0.25* 0.28* 0.36*
  Psychologist (1) vs. MFT (0) 0.12 0.081 0.39* 0.30*
  Psychologist (1) vs. social worker (0) 0.087 0.090 0.36* 0.29*
  Psychologist (1) vs. counselor (0) 0.094 0.090 0.29* 0.20*
  Psychiatrist (1) vs. MFT (0) 0.44* 0.33* 0.10 −0.057
  Psychiatrist (1) vs. social worker (0) 0.41* 0.34* 0.075 −0.066
  Psychiatrist (1) vs. counselor (0) 0.41* 0.34* 0.012 −0.15
  MFT (1) vs. social worker (0) −0.035 0.009 −0.029 −0.009
  MFT (1) vs. counselor (0) −0.029 −0.005 −0.092 −0.096
  Counselor (1) vs. social worker (0) −0.006 0.003 0.064 0.087
Practice characteristics
 Private practice (1) vs. other settings (0) −0.22* 0.022 −0.16* −0.12* 0.012 −0.17*
 Low income caseload (>25% = 1) 0.11 0.006 0.015 0.036 0.001 0.036
 Ethnic minority caseload (>25% = 1) 0.11 0.006 0.029 0.047 0.002 0.021
* P < 0.001.
a The analyses for professional discipline were run multiple times, changing the group that was treated as the reference group, in order to compare all possible pairs of disciplines to one another.

Clinician Attitudes toward Standardized Diagnostic Tools

Participants provided a mean rating of 3.39 (SD = 0.54) on the 1-to-5 standardized diagnosis scale. The single-sample t-test comparing this mean to the neutral rating of 3 was significant [t(1453) = 27.57, P < 0.001] and associated with a medium effect size (d = 0.72), indicative of moderately positive views toward standardized diagnostic tools. Ratings on this scale were significantly higher than those on the diagnosis utility scale [t(1434) = 10.98, P < 0.001, d = 0.35]. Item responses indicated that participants strongly agreed that standardized measures help with accurate diagnosis (d = 1.18), detection of comorbidity (d = 0.94), and differential diagnosis (d = 0.83), and disagreed that standardized checklists lack utility because they do not map onto DSM criteria (d = −0.66; see Table 2). Other item effect sizes fell below the cutoff for a small effect.

Because this scale was only administered to participants who indicated they conduct assessments, it is possible that the sample’s ratings on this scale would have been more negative had the clinicians who chose not to conduct assessments been included. To test the "worst case scenario" impact of this design feature, we conducted a follow-up analysis in which we assigned all clinicians who did not engage in assessment a value of 1 (the most negative rating possible on the scale).[1] Using this approach, the sample mean on the standardized diagnosis scale decreased to 3.20 (SD = 0.82, d = 0.24) on the 1-to-5 scale, a small rather than medium effect size, but still reflective of positive views toward standardized diagnostic instruments.
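A minimal sketch of this worst-case imputation, assuming `scores` is a hypothetical series of scale scores with missing values for the clinicians who were never administered the scale:

```python
import numpy as np
import pandas as pd

# Hypothetical scale scores; NaN marks clinicians who do not conduct assessments.
scores = pd.Series([3.4, 3.9, np.nan, 3.1, np.nan])

worst = scores.fillna(1.0)                          # assign the most negative possible rating
d_worst = (worst.mean() - 3.0) / worst.std(ddof=1)  # effect size vs. the neutral midpoint
```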

Provider degree, professional discipline, and private practice setting were significant (P < 0.001) univariate predictors of the standardized diagnosis scale (see Table 3). Doctoral-level clinicians provided higher ratings on this scale than master’s-level clinicians, a small effect (R2 = 0.04). Professional discipline explained 8.1% of the variance in ratings on this scale, also a small effect. Psychologists provided more positive ratings than all other disciplines (all P’s < 0.001). Private practitioners also provided significantly lower ratings than professionals working in other settings, although the difference did not exceed the cutoff for a small effect size (R2 = 0.012).

When all predictors were examined simultaneously, professional discipline continued to be a significant predictor of attitudes toward standardized diagnostic tools, with the pairwise comparisons between psychologists and the other disciplines remaining significant (Table 3). Private practice setting also remained significant, but degree did not. The collective set of predictors explained 12.1% of the variability in the standardized diagnosis scale, approaching a medium effect.

Relation of Clinician Attitudes to Diagnostic Practices

To examine the predictive validity of the attitude measures, we examined their individual and joint relation to diagnostic practices, including the frequency with which clinicians diagnose clients, use SDIs, and use standardized checklists, after controlling for clinician demographic (age, gender, ethnicity), professional (degree, discipline) and practice (private practice setting, low income caseload, ethnic minority caseload) characteristics. The results of these multiple regression analyses are presented in Table 4. Analyses utilizing the transformed versions of the diagnosis and SDI use variables yielded nearly identical results to analyses using the non-transformed data. The results using the non-transformed data are therefore reported, as they are more straightforward to interpret. When tested individually, the utility of diagnosis scale was a significant (P < 0.001), positive predictor of frequency of diagnosing (ΔR2 = 0.033), SDI use (ΔR2 = 0.008) and standardized checklist use (ΔR2 = 0.015), although only the relation with diagnosing exceeded the cutoff for a small effect size. The standardized diagnosis scale was also a significant (P < 0.001), positive predictor of frequency of diagnosing (ΔR2 = 0.012), SDI use (ΔR2 = 0.020), and checklist use (ΔR2 = 0.036), with the effects for the two instrument use variables falling in the small range.
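The following sketch shows the shape of one such hierarchical regression using statsmodels; the data frame and every variable name are hypothetical stand-ins, with ΔR2 computed as the gain in R2 when an attitude scale is added to the covariate-only model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in for the survey data; all names and values are hypothetical.
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "female": rng.integers(0, 2, n),
    "doctoral": rng.integers(0, 2, n),
    "discipline": rng.choice(["psychologist", "psychiatrist", "counselor",
                              "MFT", "social_worker"], n),
    "private_practice": rng.integers(0, 2, n),
    "utility_of_diagnosis": rng.normal(3.15, 0.71, n),
})
df["diagnose_freq"] = 3 + 0.3 * df["utility_of_diagnosis"] + rng.normal(0, 1, n)

covariates = "age + female + doctoral + C(discipline) + private_practice"
# Step 1: demographic, professional, and practice characteristics only.
base = smf.ols(f"diagnose_freq ~ {covariates}", data=df).fit()
# Step 2: add the attitude scale of interest.
full = smf.ols(f"diagnose_freq ~ {covariates} + utility_of_diagnosis", data=df).fit()

delta_r2 = full.rsquared - base.rsquared  # variance uniquely added by the attitude scale
print(round(delta_r2, 3))
```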

Table 4.

Regressions predicting frequency of diagnostic practices

Attitude scales entered separately Attitude scales entered simultaneously
B ΔR2 B ΔR2
Diagnoses clients
 Utility of diagnosis 0.27* 0.033 0.23* 0.034
 Standardized diagnosis 0.21* 0.012 0.13
Uses standardized diagnostic interviews
 Utility of diagnosis 0.16* 0.008 0.071 0.021
 Standardized diagnosis 0.34* 0.020 0.31*
Uses standardized checklists
 Utility of diagnosis 0.25* 0.015 0.16 0.042
 Standardized diagnosis 0.53* 0.036 0.47*

ΔR2 = change in R2 when the attitude scales were added to models controlling for therapist demographic, professional, and practice characteristics.
* P < 0.001.

When examined simultaneously, the two attitude items explained 3.4% of the variance in frequency of diagnosis, 2.1% of the variance in SDI use, and 4.2% of the variance in checklist use, all small effects. The utility of diagnosis scale was the only independent predictor of frequency of diagnosis. The standardized diagnosis scale was the only independent predictor of use of the two diagnostic tools.

Discussion

The present study examined attitudes toward the utility of diagnosis and toward standardized diagnostic tools in a large, multidisciplinary sample of child clinicians. Given the questions that have been raised about the accuracy of clinician-generated diagnoses, understanding the attitudes that underlie diagnostic practices might facilitate efforts to train clinicians in evidence-based diagnosis. On average, clinicians indicated that they see diagnosis as clinically useful and they have positive views toward the use of standardized tools in diagnosis.

Examination of item responses indicates some differences between the present results and what might be expected based on prior studies. For example, clinicians in this sample felt that diagnosis is an important part of treatment planning, which runs contrary to previous studies of clinicians’ views of the DSM (Jampala et al. 1992; Kutchins and Kirk 1988; Miller et al. 1981). This difference may be due to the present study asking about diagnosis generally, rather than the DSM specifically, or due to increasing acceptance of diagnosis by clinicians relative to those earlier surveys. Also unexpected based on prior studies (Mead et al. 1997; Setterberg et al. 1991), the clinicians sampled disagreed that under-diagnosis is necessary to avoid stigma. However, this difference may be due to item phrasing; previous studies asked whether respondents thought under-diagnosis was common, not whether they agreed with or engaged in the practice.

Both attitude scales were predicted by clinician degree, discipline and private practice setting. However, only discipline and practice setting were significant independent predictors, suggesting the more positive views held by doctoral-level clinicians may be a function of discipline or practice setting. For both the utility of diagnosis and standardized diagnosis scales, private practitioners held more negative attitudes than providers working in other settings, such as schools or outpatient clinics. It may be that private practitioners are less likely to work with specific assessment requirements than providers working in agencies. Without agency mandates or an agency culture promoting diagnosis, private practitioners may simply have fewer opportunities to form positive attitudes toward these practices.

When disciplinary differences in the two attitude scales were examined, two different patterns emerged. For the utility of diagnosis scale, psychiatrists reported significantly more positive views than all other disciplines. Given the central role that the American Psychiatric Association has taken in creating and disseminating the DSM, it is not surprising that psychiatrists are the most likely to see diagnosing as a useful part of their practice. Conversely, psychologists were more likely than all other disciplines to provide positive ratings on the standardized diagnosis scale, which likely reflects the increased emphasis on assessment in psychology training programs and in psychologists’ job duties relative to other disciplines. The fact that non-psychologists, who are most likely to be serving as the front-line clinicians in many mental health agencies, are less likely to have positive attitudes toward these standardized diagnostic tools suggests that, even when agencies have administrative support for implementing these tools, these efforts may not be as welcome or understood by non-psychologist clinicians. Designing evidence-based diagnosis implementation efforts with an eye toward addressing the concerns of front-line clinicians may help increase the likelihood of their success.

Both attitude scales were positively associated with clinician self-reported diagnostic practices, including frequency of diagnostic assessment, SDI use, and standardized checklist use, even after controlling for provider personal, professional, and practice characteristics. When the two scales were examined simultaneously, however, they appeared to be independently related to different aspects of the diagnostic process. The utility of diagnosis scale was independently predictive of the frequency of diagnostic assessment, but not the selection of specific diagnostic methods, whereas the opposite was true for the standardized diagnosis scale. This pattern of findings suggests that clinician opinions about the clinical utility of diagnosis are a construct distinct from their views of specific approaches to diagnosing. This indicates that efforts to get clinicians to assess for diagnoses in general might benefit from addressing their concerns about the utility of that process, but efforts to target their choice of diagnostic tools might need to focus more on their views of the instruments themselves. For example, the present findings suggest that one barrier to convincing clinicians to assign diagnoses is their belief that most families come to treatment to work on problems of daily living, rather than a diagnosis. Training clinicians to integrate diagnostic information into a case conceptualization that also considers issues such as environmental stressors might help them to see the utility of considering the role diagnoses might play in these problems of daily living.

The present study did have some limitations that warrant consideration in the interpretation of its findings. First, because of the branching format used in the larger survey, the standardized diagnosis scale was only administered to participants who indicated that they conduct assessments with youths presenting for treatment; ratings on this scale were therefore not available for the 10% of the sample who did not do so. These individuals did not differ on their ratings of diagnosis utility, but it is possible that they held more negative views of standardized diagnostic tools, a possibility that could not be tested. This means that the sample’s true views of standardized diagnostic tools may have been more negative on average than what was reported. However, our analyses suggested that, even if all of these non-assessors had reported the most negative views possible on this scale, the overall sample mean still would have been positive and reflective of a small effect size difference from neutral. Second, while a strength of the study is that it examined the link between diagnostic attitudes and behaviors, the reports of behaviors were limited to self-reports. Future studies would benefit from the incorporation of more objective information about clinician diagnostic behavior.

Third, while several predictors of attitudes were found, significant variability in attitudes remained unexplained, suggesting additional predictors exist that warrant inclusion in future studies. For example, perceived agency, supervisor, and colleague support of EBTs, clinician quality of training in EBTs, and institutional barriers (e.g., caseload, session length) have been found to be related to clinician attitudes toward EBTs (Jensen-Doss et al. 2009). Other practice characteristics might also be predictive of attitudes. For example, given that different measures exist for different types of problems and for different age groups, clinician attitudes might vary as a function of the presenting problems or age groups typically treated by the clinician.

Finally, the reliability of the utility of diagnosis scale was somewhat low. Low reliability can limit one’s ability to find significant results using a scale, given the increased error variance. However, the impact of this appears to have been limited, given that an identical number of significant findings was obtained for this scale as for the standardized diagnosis scale.

Despite these limitations, this study had several strengths. To our knowledge, this is the first study to simultaneously examine attitudes toward diagnosis and toward diagnostic tools, and the first to systematically examine predictors of these attitudes across a range of disciplines. The study also utilized a rigorous approach to developing the survey and recruiting participants, the tailored design method (Dillman 2000), which resulted in a response rate and sample size that exceeded the majority of previous surveys on clinician assessment practices. In addition, the rate of missing data was very low.

Diagnosis is playing an increasingly important role in the provision of mental health services, particularly as the field focuses on making clinical practice more evidence-based. Given that many clinicians may not generate diagnoses that match those assigned in research studies, additional focus on disseminating and implementing evidence-based diagnostic tools is likely needed. To the extent that clinicians are not engaged in evidence-based diagnosis, it may be difficult for them to make appropriate use of research findings in their practice (Jensen and Weisz 2002). In particular, as clinicians attempt to use diagnosis-specific EBTs, providing therapists with training on these treatments without associated training in evidence-based assessment of psychopathology may lead clinicians to use EBTs with inappropriate clients or without all of the relevant clinical information needed for their effective use (Jensen-Doss and Weisz 2008; Weisz and Addis 2006). This could, in turn, lead to clinicians having little success implementing these treatments and hinder future uptake of additional EBTs. The present data suggest that, while clinician views of the utility of diagnosis are associated with their diagnostic practices, their choice of diagnostic methods seems most strongly associated with their views of the tools themselves. Efforts to improve clinicians’ views of these tools in particular may help facilitate the success of future evidence-based practice efforts.

Footnotes

[1] We thank an anonymous reviewer for this suggestion.

Contributor Information

Amanda Jensen-Doss, University of Miami, PO Box 248185, Coral Gables, FL 33124-0751, USA.

Kristin M. Hawley, University of Missouri, Columbia, USA

References

1. Ajzen I (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.
2. American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: Author.
3. Angold A, & Fisher PW (1999). Interviewer-based interviews. In Shaffer D, Lucas CP, & Richters JE (Eds.), Diagnostic assessment in child and adolescent psychopathology (pp. 34–64). New York: Guilford Press.
4. Armitage CJ, & Conner M (2001). Efficacy of the theory of planned behaviour: A meta-analytic review. British Journal of Social Psychology, 40(4), 471–499.
5. Basco MR, Bostic JQ, Davies D, Rush AJ, Witte B, Hendrickse W, et al. (2000). Methods to improve diagnostic accuracy in a community mental health setting. American Journal of Psychiatry, 157(10), 1599–1605.
6. Cashel ML (2002). Child and adolescent psychological assessment: Current clinical practices and the impact of managed care. Professional Psychology: Research and Practice, 33(5), 446–453.
7. Chorpita BF, & Donkervoet C (2005). Implementation of the Felix consent decree in Hawaii: The impact of policy and practice development efforts on service delivery. In Steele RG & Roberts MC (Eds.), Handbook of mental health services for children, adolescents, and families (pp. 317–332). New York: Kluwer Academic/Plenum Publishers.
8. Cohen J (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
9. Cortina JM (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78, 98–104.
10. Dillman DA (2000). Mail and internet surveys: The tailored design method (2nd ed.). New York: John Wiley & Sons.
11. Frazer P, Westhuis D, Daley JG, & Phillips I (2009). How clinical social workers are using the DSM-IV: A national study. Social Work in Mental Health, 7(4), 325–339.
12. Garb HN (1998). Studying the clinician: Judgment research and psychological assessment. Washington, DC: American Psychological Association.
13. Garb HN (2005). Clinical judgment and decision making. Annual Review of Clinical Psychology, 1(1), 67–89.
14. Garland AF, Kruse M, & Aarons GA (2003). Clinicians and outcome measurement: What’s the use? The Journal of Behavioral Health Services and Research, 30(4), 393–405.
15. Gilbody SM, House AO, & Sheldon TA (2002). Psychiatrists in the UK do not use outcomes measures: National survey. British Journal of Psychiatry, 180(2), 101–103.
16. Godin G, & Kok G (1996). The theory of planned behavior: A review of its applications to health-related behaviors. American Journal of Health Promotion, 11, 87–98.
17. Hatfield DR, & Ogles BM (2007). Why some clinicians use outcome measures and others do not. Administration and Policy in Mental Health and Mental Health Services Research, 34(3), 283–291.
18. Hawley KM, Cook JR, & Jensen-Doss A (2009). Do noncontingent incentives increase survey response rates among mental health providers? A randomized trial comparison. Administration and Policy in Mental Health, 36, 343–348.
19. Hunsley J, & Mash EJ (2005). Introduction to the special section on developing guidelines for the evidence-based assessment (EBA) of adult disorders. Psychological Assessment, 17(3), 251–255.
20. Jampala VC, Zimmerman M, Sierles FS, & Taylor MA (1992). Consumers’ attitudes toward DSM-III and DSM-III-R: A 1989 survey of psychiatric educators, researchers, practitioners, and senior residents. Comprehensive Psychiatry, 33(3), 180–185.
21. Jensen AL, & Weisz JR (2002). Assessing match and mismatch between practitioner-generated and standardized interview-generated diagnoses for clinic-referred children and adolescents. Journal of Consulting and Clinical Psychology, 70(1), 158–168.
22. Jensen-Doss A, & Hawley KM (2010). Understanding barriers to evidence-based assessment: Clinician attitudes toward standardized assessment tools. Journal of Clinical Child and Adolescent Psychology, 39, 885–896.
23. Jensen-Doss A, Hawley KM, Lopez M, & Osterberg LD (2009). Using evidence-based treatments: The experiences of youth providers working under a mandate. Professional Psychology: Research and Practice, 40(4), 417–424.
24. Jensen-Doss A, & Weisz JR (2008). Diagnostic agreement predicts treatment process and outcomes in youth mental health clinics. Journal of Consulting and Clinical Psychology, 76(5), 711–722.
25. Klein DN, Dougherty LR, & Olino TM (2005). Toward guidelines for evidence-based assessment of depression in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 412–432.
26. Kline P (1986). A handbook of test construction. London: Methuen.
27. Kutchins H, & Kirk SA (1988). The business of diagnosis: DSM-III and clinical social work. Social Work, 33(3), 215–220.
28. Landis JR, & Koch GG (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.
29. Lowe J, Pomerantz AM, & Pettibone JC (2007). The influence of payment method on psychologists’ diagnostic decisions: Expanding the range of presenting problems. Ethics & Behavior, 17(1), 83–93.
30. Mash EJ, & Hunsley J (2005). Evidence-based assessment of child and adolescent disorders: Issues and challenges. Journal of Clinical Child and Adolescent Psychology, 34(3), 362–379.
31. McMahon RJ, & Frick PJ (2005). Evidence-based assessment of conduct problems in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 477–505.
32. Mead MA, Hoenshil TH, & Singh K (1997). How the DSM system is used by clinical counselors: A national study. Journal of Mental Health Counseling, 19, 383–401.
33. Miller LS, Bergstrom DA, Cross HJ, & Grube JW (1981). Opinions and use of the DSM system by practicing psychologists. Professional Psychology, 12, 385–390.
34. Pelham WE Jr., Fabiano GA, & Massetti GM (2005). Evidence-based assessment of attention deficit hyperactivity disorder in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 449–476.
35. Pogge DL, Wayland-Smith D, Zaccario M, Borgaro S, Stokes J, & Harvey PD (2001). Diagnosis of manic episodes in adolescent inpatients: Structured diagnostic procedures compared to clinical chart diagnoses. Psychiatry Research, 101(1), 47–54.
36. Pomerantz AM, & Segrist DJ (2006). The influence of payment method on psychologists’ diagnostic decisions regarding minimally impaired clients. Ethics & Behavior, 16(3), 253–263.
37. Rettew DC, Lynch AD, Achenbach TM, Dumenci L, & Ivanova MY (2009). Meta-analyses of agreement between diagnoses made from clinical evaluations and standardized diagnostic interviews. International Journal of Methods in Psychiatric Research, 18(3), 169–184.
38. Setterberg SR, Ernst M, Rao U, & Campbell M (1991). Child psychiatrists’ views of DSM-III-R: A survey of usage and opinions. Journal of the American Academy of Child & Adolescent Psychiatry, 30(4), 652–658.
39. Silverman WK, & Ollendick TH (2005). Evidence-based assessment of anxiety and its disorders in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 380–411.
40. Tabachnick BG, & Fidell LS (2007). Using multivariate statistics (5th ed.). Boston, MA: Allyn & Bacon/Pearson Education.
41. Weisz JR, & Addis ME (2006). The research-practice tango and other choreographic challenges: Using and testing evidence-based psychotherapies in clinical care settings. In Goodheart CD, Kazdin AE, & Sternberg RJ (Eds.), Evidence-based psychotherapy: Where practice and research meet (pp. 179–206). Washington, DC: American Psychological Association.
42. Youngstrom EA, Findling RL, Youngstrom JK, & Calabrese JR (2005). Toward an evidence-based assessment of pediatric bipolar disorder. Journal of Clinical Child and Adolescent Psychology, 34(3), 433–448.
