Abstract
It has been repeatedly demonstrated that clinicians rely more on clinical judgment than on research findings. We hypothesized that psychologists in practice might be more open to adopting empirically supported treatments (ESTs) if outcome results were presented with a case study. Psychologists in private practice (N = 742) were randomly assigned to receive a research review of data from randomized controlled trials of cognitive-behavioral treatment (CBT) and medication for bulimia, a case study of CBT for a fictional patient with bulimia, or both. Results indicated that including a case example made the EST evidence more compelling and increased clinicians’ interest in obtaining training. Despite these participants’ training in statistics, the statistical information had no influence on attitudes or training willingness beyond that of the anecdotal case information.
Keywords: empirically supported treatments, private practitioners, dissemination, case-based research
Evidence is building that efforts to identify and promote empirically supported treatments (ESTs) have had minimal impact on the practice of front-line practitioners (Arnow, 1999; Becker, Zeyfert, & Anderson, 2004; Crowe, Mussell, Peterson, Knopke, & Mitchell, 1999; Goisman, Warshaw, & Keller, 1999; Haas & Clopton, 2003; Mussell et al., 2000; von Ranson & Robinson, 2006). It is becoming increasingly clear that the identification and endorsement of psychotherapies as empirically supported is not sufficient to achieve successful dissemination (Cook, Weingardt, Jaszka, & Wiesner, 2008). If ESTs are to be adopted outside of academic circles, more effective dissemination efforts are required.1
One argument often promulgated in the literature is that clinically relevant research findings must be presented in a form practicing therapists can easily use. Researchers tend to write for other researchers, not for clinicians (Beutler, Williams, Wakefield, & Entwistle, 1995; Goldfried, Borkovec, Clarkin, Johnson, & Parry, 1999; Goldfried & Wolfe, 1996). Although the information and language used in research reports (e.g., end-state functioning, treatment fidelity, effect size) are critical for researchers communicating with other researchers, they may not be meaningful to the practicing clinician (Goldfried & Wolfe, 1998), and efforts must be made to communicate research findings in a manner that is readily understandable and relevant for the front-line practitioner. Despite extensive discussion of this issue, there is remarkably little research focusing on the difficult question of just how to make EST research more influential and compelling to practicing clinicians.
If clinicians are not using EST research to inform practice, from what sources are they drawing? Stewart and Chambless (2007) surveyed a random sample of members of APA Division 42 (Psychologists in Independent Practice) regarding their approach to treatment decisions, specifically the use of research on ESTs to inform practice. Consistent with prior research, clinicians reported that they often, but do not usually, use treatment materials informed by EST research findings. They described the most important influence on their work as their own past clinical experiences. On a 7-point scale (1 = Strongly Agree to 7 = Strongly Disagree), they strongly to moderately agreed (M = 1.53, SD = 0.91) that past clinical experiences affect their treatment decisions, and indicated that they usually use past experiences with patients to improve therapy skills and effectiveness. That clinicians prefer to rely on clinical experience rather than EST research to inform treatment decisions is consistent with prior research in the field (Morrow-Bradley & Elliott, 1986; Raine et al., 2004; von Ranson & Robinson, 2006). However, it is inconsistent with the thrust of evidence-based practice, which emphasizes the use of research to guide practice, albeit tempered by the therapist’s clinical expertise (Sackett et al., 2000; Spring, 2007).
How accurate is clinical judgment? The limitations of humans as information processors suggest that cognitive biases (Dawes, Faust, & Meehl, 1989) may limit the degree to which clinicians can accurately draw on their own clinical experiences, much less keep up with new developments in the field after a full day of seeing patients and completing paperwork. There is a large body of research on clinicians’ decision making regarding psychological assessments, much of which suggests clinical experience does not increase practitioners’ ability to reach valid conclusions if they rely on clinical experience rather than empirical data to reach these conclusions (Garb, 1998). In a related fashion, deciding how to treat a patient is the result of an assessment process and is likely subject to the same errors in judgment. Although clinicians value their clinical judgment as a guide to treatment, the available data suggest that they may not be particularly adept at predicting which treatments will lead to success or failure for their clients. Three studies are pertinent here.
Kadden, Cooney, Getter, and Litt (1989) asked therapists of inpatients with alcohol dependence to predict which of two aftercare treatment programs would be better for their patients. Patients were randomly assigned to one of the two treatments, and the authors found that empirical patient data (e.g., severity of psychopathology) predicted which treatment would work better for which patients. However, therapists were no better than chance at predicting which treatments would work for which of their patients, despite their extensive contact and experience with these patients. Schulte, Kunzel, Pepping, and Schulte-Bahrenberg (1992) randomly assigned patients with phobias to standardized treatment with exposure or to an individualized program of cognitive-behavioral therapy of the therapist’s own devising. Therapists were significantly more effective when they were constrained to use the standardized treatment than when they were allowed to devise their own treatment plan, presumably guided by clinical judgment. Hannan et al. (2005) compared the judgment of clinicians in a college counseling center to a research-derived algorithm for predicting treatment failure and found that clinicians were quite poor at predicting outcome, whereas the actuarial method worked very well. Hannan and colleagues noted that therapists rarely predicted deterioration even though it occurred in 42 of 550 patients. In contrast, the empirical method identified all of the patients who would become reliably worse, but generated numerous false-positive results. Nevertheless, even the false alarms had poorer outcomes than those not identified as likely treatment failures. Although more research is clearly needed, these three studies indicate that reliance primarily on clinical judgment, although prevalent, may misdirect therapy.
If evidence does not drive practice, then what is the point of research? An assumption of this paper is that treatment decisions in psychotherapy should be based, whenever possible, on empirical evidence. But if evidence is to drive practice, researchers need to make it more appealing to practicing clinicians. It has been repeatedly demonstrated that clinicians prefer to rely on clinical judgment when making treatment decisions. It is a product of our imperfect human cognition that a fascinating case history with personal details will affect clinical decision making more than a summary of well-controlled experimental studies. Furthermore, it would be an enormously difficult (and impractical) task to convince clinicians to abandon the belief that clinical experience is an excellent source upon which to base clinical judgment. Training sessions would need to be multi-faceted and interactive to be effective (Davis et al., 1999), and it is not clear whether any changes in thinking would endure without sustained interventions (Jameson, Stadter, & Poulton, 2007).
How can we use clinical experience to make EST research more influential? It is conceivable that practitioners reject treatment outcome data because they believe that although such treatments may work for the average treatment study patient, they are unlikely to work for the complex patients seen in their practices. It has been documented that people reject statistical reasoning in favor of anecdotal reasoning such as clinical experience (Borgida & Nisbett, 1977), an effect known as the vividness bias (Nisbett & Ross, 1980).
Ubel, Jepson, and Baron (2001) investigated this effect when both statistical information and anecdotal information are presented. Ubel et al. presented participants (who were prospective jurors at a county courthouse) with hypothetical statistical information about the percentage of angina patients who benefited from angioplasty (50%) and bypass surgery (75%). Participants were also given written testimonials from patients who had benefited or not benefited from either treatment: These testimonials were varied to be either proportionate or disproportionate with the statistical information. For example, those who received proportionate testimonials received 1 success and 1 failure for angioplasty, and 3 successes and 1 failure for bypass surgery. Those who received disproportionate testimonials received the same testimonials for angioplasty, but only 1 success and 1 failure for bypass surgery. In other words, the disproportionate testimonial success rate for bypass surgery was 50%, whereas one could expect 75% from the statistical information. Participants were then asked which of the two treatments they would choose. Because the statistical information derived from aggregated patient data is more accurate, participants should have discounted the testimonials, and chosen their treatment according to the presented statistical information. Of those participants who received the proportionate questionnaire, 44% chose bypass surgery. In contrast, only 30% of participants receiving the disproportionate questionnaire chose bypass surgery, suggesting that the anecdotal information significantly influenced treatment choices when presented in conjunction with statistical information.
There are several reasons why anecdotal information may have an undue influence on decisions. It may be easier for clinicians to identify with a specific patient than with a statistically average person, and whereas statistical information is abstract, anecdotal information is concrete. For these reasons, and given clinicians’ propensity to value clinical experience (their own or others’), a vividly detailed case study with a clear outcome might be more compelling than a summary of results from randomized controlled trials (RCTs). Stewart and Chambless (2007) found that providing practitioners with a summary of the treatment outcome literature affected how clinicians said they would treat a specific case of a patient with panic disorder. Participants who received a summary of the treatment outcome literature for panic disorder (recommending CBT or medication) were more likely to recommend using CBT than were control participants who did not receive the research summary following the description of the case patient. The effect was significant but small, which was expected given the minimal intervention. However, the effect size is noteworthy given the possibility of a ceiling effect: The majority of therapists in both experimental and control groups reported that they would use CBT for the case. Unfortunately, psychologists in the experimental group (who received the research summary) were no more willing than the control group to seek training in the EST, indicating the limitations of information consisting solely of statistics derived from group means. This preliminary research indicates that providing information can have some effect on treatment decisions.
The primary aim of the present study is to extend this finding by drawing on the judgment and decision-making literature and using a vivid case study. We hypothesized that clinicians would find EST statistics more compelling and would be more willing to receive training in ESTs if outcome statistics were presented with a case study. If clinicians read a case of a complex and comorbid patient who was treated successfully with a particular EST, they may be more inclined to believe that EST results can generalize to their patients. We were also interested in whether statistical information affects attitudes and training willingness above and beyond the case information alone. Although it is documented that people reject statistical reasoning in favor of anecdotal reasoning, the vast majority of the sample were Ph.D.-level psychologists, who had by nature of their degree received statistical training. It is possible this sample would pay more attention to the statistical information than would be expected from the general population. A secondary aim is to test whether Stewart and Chambless’s (2007) findings are replicated in the present sample. The final aim is to provide descriptive information on how many hours and how much money clinicians might be willing to devote to training in CBT for bulimia and to compare these findings to existing workshops.
Method
Participants
Using mailing labels purchased from APA, we sent a cover letter and survey to a randomly selected sample of 3,200 members of the American Psychological Association (APA) who identified themselves as practitioners in private practice. APA maintains a database of over 19,660 members who report being employed in private practice. We believed this would provide a more balanced sample of practitioners across APA than Stewart and Chambless (2007) obtained by selecting members of a specific division of APA (such as Division 42: Psychologists in Independent Practice, which has 4,672 members). Front-line clinicians practice in a number of settings, including, for example, community agencies and hospitals as well as private practice. To achieve greater sample homogeneity in light of limited resources, this study was limited to private practitioners. Thirteen envelopes were returned because of faulty mailing addresses. Of the 799 respondents, 57 were unusable (i.e., participants returned blank surveys indicating they did not have time, were not currently in practice, or had retired). We had a total of 742 usable responses to the mailing, for an effective response rate of 23%.
Measures
A questionnaire was developed comprising 62 self-report items and is presented in the Appendix. The survey is divided into five sections. Section 1 assessed practitioners’ demographic information, theoretical orientation, clinical experience, primary employment setting, and whether continuing education (CE) credits are required by the participant’s state. In Section 2, all participants read a description of the fictional case of Tracy, abstracted (with permission) from a published treatment case in the literature (Oltmanns, Neale, & Davison, 1999). This included excerpts from Tracy’s history and a description of her symptoms of bulimia nervosa. We endeavored to select a disorder for which the research evidence indicates a psychosocial treatment of choice. Bulimia was chosen because ESTs have been identified for this particular disorder (Chambless & Ollendick, 2001), and the NICE (National Institute for Health and Clinical Excellence, 2009) guidelines in the United Kingdom and the Cochrane Collaboration review on psychotherapies for bulimia (Hay, Bacaltchuk, & Stefano, 2004) designate CBT as the treatment of choice for bulimia. There is also evidence that ESTs for bulimia are rarely utilized in clinical practice (Arnow, 1999).
Section 3 comprised the outcome statistics. This section presented data from RCTs of CBT and of medication for bulimia and the response rates to each; outcome statistics were presented as percentage reductions in binge eating and purging. Section 4 contained the case treatment section, which included a 2-page description of Tracy’s treatment with CBT abstracted from the aforementioned case study.
Section 5 comprised the dependent measures in this study. In Section 5, participants were asked questions assessing their positive attitudes towards CBT. The attitude measures were modified from the Treatment Expectancy Scale (Borkovec & Nau, 1972), a widely used measure of expectancy and credibility administered to patients participating in randomized controlled psychotherapy trials. In the current survey this measure was reworded to assess attitudes practitioners held about CBT. Participants were asked to rate on 10-point Likert-type scales how appropriate CBT seemed, how confident they were that CBT would reduce the severity of bulimia, and how likely they would be to use CBT for a patient with bulimia. Section 5 also included questions assessing participants’ training willingness. Participants were asked how likely they would be to complete workshop and home-study training in CBT for bulimia and another treatment (of their choice) for bulimia. The latter was included to control for participants’ willingness to seek training in general and to see if they were more likely to seek training in CBT over another treatment for bulimia. Although some research indicates that workshops are not effective in changing practitioner behavior (Davis et al., 1999), a workshop may offer an introduction to a treatment and serve as a motivational gateway to gaining further training. Lastly, Section 5 comprised questions assessing training specifics. Participants were asked how many hours they would be willing to devote to training in CBT for bulimia in a workshop or a home-study program and how much they would be willing to spend on that training.
There were five versions of this questionnaire to satisfy relevant controls. The first two (case + statistics and statistics + case) versions included all sections of the survey and counterbalanced the order of the statistics portion (Section 3) and the description of the case treatment with CBT (Section 4). The third version (statistics only) omitted the description of Tracy’s treatment with CBT (Section 4). The fourth version (case only) omitted the statistics section (Section 3). The fifth version served as the baseline. It omitted the statistics section (Section 3) and Tracy’s treatment description (Section 4). See Table 1.
Table 1. Description of Questionnaire Types and Descriptive Data on Dependent Variables

| Section | Version 1: Case+Statistics (n = 117) | Version 2: Statistics+Case (n = 118) | Version 3: Statistics only (n = 169) | Version 4: Case only (n = 119) | Version 5: Baseline (n = 158) |
|---|---|---|---|---|---|
| 1. Demographics | ● | ● | ● | ● | ● |
| 2. Description of Tracy’s history and symptoms | ● | ● | ● | ● | ● |
| 3. CBT for bulimia statistical summary (statistics) | ● | ● | ● | | |
| 4. Case study of Tracy’s treatment with CBT (case) | ● | ● | | ● | |
| 5. Dependent measures | ● | ● | ● | ● | ● |
| Attitudes^a | 23.38 (5.32) | 23.71 (5.64) | 21.14 (6.23) | 24.08 (4.84) | 19.63 (5.97) |
| Training^b | 0.18 (1.69) | 0.30 (1.70) | −0.20 (1.67) | 0.22 (1.43) | −0.20 (1.41) |

Note. Values for the dependent variables are M (SD). Practitioners who reported they already had significant training in CBT for bulimia were removed (n = 61). Case+Statistics and Statistics+Case were combined after analyses indicated no order effect.
^a Higher numbers denote more positive attitudes about CBT (out of 30).
^b Scores indicate willingness to get training based on difference scores; higher numbers denote more willingness to get training in CBT compared to another treatment for bulimia.
Workshop Analysis
Given the hours and money participants indicate they are willing to devote to training in CBT for bulimia, are there existing workshops and home-study programs that meet their specifications? To determine the average cost and hours of a CE workshop, we analyzed CE workshop offerings on the APA website (www.apa.org/ce/) as well as the Institute offerings at the Association for Behavioral and Cognitive Therapies (ABCT) conferences. We also analyzed the List of Independent Study Programs on the APA website. These analyses provide descriptive data on existing and available workshops and home-study programs and permit an assessment of fit between available programs and the time and money practitioners report being willing to spend on training.
Results
Sample Characteristics
Of the practitioners, 64% were female. The mean age of the sample was 56.69 years (SD = 9.71, range = 28-87). In terms of highest professional degree, 80% of the practitioners had earned a Ph.D., 13% had received a Psy.D., 6% had received an Ed.D., and 1% reported they had earned a master’s degree. Practitioners had an average of 22.4 years (SD = 9.45) in practice. They saw patients an average of 24.12 hours (SD = 11.28) a week. The large majority (96%) of the practitioners worked in private practice. The two most commonly self-described primary theoretical orientations were cognitive-behavioral (39.8%) and psychodynamic (28.2%). An additional 19.3% reported themselves as eclectic, 5.5% of clinicians described themselves as humanistic/experiential, and 3.9% subscribed to family systems. An additional 3.4% chose the category other as their primary theoretical orientation.
To evaluate the representativeness of the sample, the above characteristics were compared to data on the 19,660 practitioners in APA who report private practice as their primary employment setting (American Psychological Association, 2008). The sample collected was virtually identical in terms of age, professional degree, and number of years in practice. Our sample had a somewhat larger percentage of women responding than the larger sample (65% versus 56%) and this is consistent with prior research indicating women are more likely than men to respond to surveys (Green, 1996). Information on theoretical orientation and place of employment was not available from the APA.
Power
Power analyses were conducted for the experimental study, specifically for attitudes about ESTs after participants read EST outcome data with a case versus outcome data without a case. The experimental manipulation was minimal, and a small effect size was expected. With a small effect size of f = .1, alpha set at .05, and a sample size of 681 participants (after excluding 61 participants who reported having received significant training in CBT, see below), the power for the primary analyses (section below) was estimated at .74.
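For readers who wish to verify this figure, a minimal sketch of the calculation follows. It assumes a single-degree-of-freedom contrast between two groups of roughly equal size (the exact error term the authors used is not specified); Cohen’s f = .10, alpha = .05, and N = 681 are taken from the text.

```python
# Power of a single-df F contrast, a minimal sketch assuming a two-group comparison.
from scipy.stats import f as f_dist, ncf

def contrast_power(f_effect, n_total, alpha=0.05, df1=1):
    """Power of a single-df F test with effect size Cohen's f (noncentrality = f^2 * N)."""
    df2 = n_total - df1 - 1                   # error degrees of freedom for a two-group contrast
    ncp = (f_effect ** 2) * n_total           # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)  # critical F at the chosen alpha
    return 1 - ncf.cdf(f_crit, df1, df2, ncp)

# Values from the text: f = .10, alpha = .05, N = 681 -> power of roughly .74
print(round(contrast_power(0.10, 681), 2))
```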
Positive Attitudes and Willingness to be Trained
There are two primary dependent variables in this study: the positive attitudes and willingness to be trained measures. The three attitudes questions (Section 5) are on the same 1-10 scale. These three items were summed to produce a measure of positive attitudes toward CBT for bulimia, where higher scores indicate more valuing of CBT for bulimia. Cronbach’s alpha was high at .93. The willingness to be trained in CBT measure is the difference between subjects’ ratings of willingness to be trained in CBT for bulimia and their willingness to be trained in another treatment (of their choice) for bulimia. This difference score controls for participants’ interest in training in general. We asked subjects about both workshop training and home-study training in CBT and another treatment for bulimia. The two difference scores were highly correlated (r = .70) and were summed to produce an overall score for willingness to be trained in CBT for bulimia. Practitioners who reported that they already had significant training in CBT for bulimia were removed (n = 61). Means and standard deviations on the dependent variables for all versions of the questionnaire are reported in Table 1.
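A minimal sketch of how the two dependent variables could be scored is shown below. The column names are hypothetical; only the scoring rules described above (summing the three attitude items, differencing the CBT and other-treatment training ratings, and summing the workshop and home-study difference scores) are taken from the text.

```python
# Sketch of scoring the dependent variables; column names are hypothetical placeholders.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def score_dependent_variables(df: pd.DataFrame) -> pd.DataFrame:
    """Build the attitude composite and the training-willingness difference score."""
    attitude_items = df[["cbt_appropriate", "cbt_confidence", "cbt_likely_use"]]  # hypothetical 1-10 items
    scored = pd.DataFrame(index=df.index)
    scored["attitudes"] = attitude_items.sum(axis=1)  # range 3-30; higher = more positive toward CBT
    workshop_diff = df["train_cbt_workshop"] - df["train_other_workshop"]  # hypothetical columns
    home_study_diff = df["train_cbt_home"] - df["train_other_home"]
    scored["training_willingness"] = workshop_diff + home_study_diff  # sum of the two difference scores
    print("Cronbach's alpha for the attitude items:", round(cronbach_alpha(attitude_items), 2))
    return scored
```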
We had two a priori hypotheses and conducted contrast analyses to address these two focused predictions, in addition to a third contrast to test whether our study replicated prior research by Stewart and Chambless (2007). Prior to hypothesis testing, we tested for an order effect for the case treatment section and the statistical information, as reflected in differences between Versions 1 and 2 of the questionnaire. Contrast analyses indicated no significant order effect between Versions 1 and 2 for either dependent variable. Accordingly, the data from these two versions were combined for the remainder of the analyses.
The primary hypothesis in this study is that clinicians provided with the case treatment section will find statistical information about ESTs more compelling and will be more likely than clinicians who did not receive the case treatment section to hold positive attitudes towards CBT for bulimia and to be willing to receive training. As predicted, participants who received Versions 1 or 2 (now collapsed) had more positive attitudes toward CBT for bulimia and were more willing to seek out training than participants who received only the statistical information (Version 3). See Table 2. The third planned comparison was to determine whether the statistical information had any impact. Contrary to our hypothesis that psychologists would attend to statistics by virtue of their doctoral training, the statistical information did not add to the effects of the case treatment section: Participants who received statistics plus the case treatment section (Versions 1 and 2) did not differ from participants who received only the case treatment section (Version 4) in their positive attitudes toward CBT for bulimia or their willingness to be trained. See Table 2.
Table 2. Planned Contrasts for Positive Attitudes and Willingness to be Trained in CBT

| Contrast and Dependent Variable | df | F | d | p |
|---|---|---|---|---|
| Case+Stats (V1) versus Stats+Case (V2) | | | | |
| Positive Attitudes | 733 | 0.24 | 0.04 | .62 |
| Training Willingness | 658 | 0.37 | 0.05 | .54 |
| Case+Stats (V1&2) versus Stats Only (V3) | | | | |
| Positive Attitudes | 734 | 15.6 | 0.30 | <.001 |
| Training Willingness | 658 | 6.06 | 0.19 | .04 |
| Case+Stats (V1&2) versus Case Only (V4) | | | | |
| Positive Attitudes | 732 | 0.78 | 0.06 | .38 |
| Training Willingness | 658 | 0.04 | 0.02 | .84 |
| Stats Only (V3) versus Baseline (V5) | | | | |
| Positive Attitudes | 735 | 6.17 | 0.18 | .01 |
| Training Willingness | 659 | 0.00002 | 0.001 | .99 |
A secondary hypothesis of the current study is that statistical information (in the form of a summary of the treatment outcome literature) has influence above and beyond receiving no information at all (baseline), a replication of Stewart and Chambless’s (2007) finding. A contrast analysis indicated that participants who received the statistical information (Version 3) had more positive attitudes about CBT for bulimia than participants who received no information beyond the description of the patient (Version 5). However, the statistical summary did not increase willingness to be trained in CBT for bulimia. See Table 2.
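To make the analytic approach concrete, the sketch below illustrates a single-degree-of-freedom planned contrast of the kind reported in Table 2, tested against a pooled within-group error term. The data are simulated and the exact error term used by the authors is not specified, so this is an illustration of the general technique rather than a reproduction of the reported values.

```python
# Illustrative planned contrast on simulated data (not the authors' dataset).
import numpy as np
from scipy.stats import f as f_dist

def planned_contrast(groups, weights):
    """F test for a single-df contrast over k group means, using pooled within-group error (MSE)."""
    k = len(groups)
    ns = np.array([len(g) for g in groups])
    means = np.array([np.mean(g) for g in groups])
    ss_error = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    df_error = ns.sum() - k
    mse = ss_error / df_error
    w = np.asarray(weights, dtype=float)
    psi = (w * means).sum()                      # contrast estimate
    F = psi ** 2 / (mse * (w ** 2 / ns).sum())   # F(1, df_error)
    p = 1 - f_dist.cdf(F, 1, df_error)
    return F, int(df_error), p

# Simulated groups standing in for Case+Stats (V1&2), Stats Only (V3), Case Only (V4), Baseline (V5)
rng = np.random.default_rng(0)
groups = [rng.normal(mu, 6, n) for mu, n in [(23.5, 235), (21.1, 169), (24.1, 119), (19.6, 158)]]
print(planned_contrast(groups, [1, -1, 0, 0]))   # Case+Stats versus Stats Only
```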
Theoretical Orientation, Pro-Research Attitudes, Years Out From Graduate School, and the Impact of Statistical Data
Overall, we found that statistical information had no influence above and beyond the case information. However, it is possible that the predicted relationship holds for only part of the sample. Accordingly, we investigated whether three moderators affected the results: theoretical orientation, pro-research attitudes, and years since completion of the degree. Subjects who reported having extensive training in CBT for bulimia (n = 61) were excluded.
Do the statistical data have a differential effect on practitioners of different theoretical orientations? This question was tested with two contrasts utilizing questionnaire type (case, case+statistics) and theoretical orientation (CBT, other) as the independent variables, and positive attitudes and willingness to be trained as the dependent variables. The analyses indicated significant and large main effects for orientation for both positive attitudes (F(1, 372) = 17.79, d = .44, p < .0001) and willingness to obtain training in CBT (F(1, 326) = 23.13, d = .53, p < .0001), indicating that CBT practitioners held more positive attitudes and were more likely to be interested in training in CBT for bulimia than practitioners of other orientations. Consistent with our previous results, there was no main effect for the experimental condition for either dependent variable (attitudes: F(1, 372) = .02, d = .01, p = .89; training: F(1, 326) = .56, d = .08, p = .45). There were no significant interaction effects for either dependent variable (both Fs = .00), suggesting that statistical data did not have a differential impact on practitioners of different theoretical orientations.
We were next interested in whether the statistical information had influence on those practitioners whose graduate training emphasized research outcome findings. We created a dichotomous research training variable by using a median split on the 1-7 graduate training variable (Mdn = 4.5)2. We employed two contrasts utilizing experimental condition (case, case+statistics) and graduate research training (high, low) as the independent variables, and positive attitudes and willingness to be trained as the dependent variables. The analyses indicated a significant but small main effect of graduate research training on positive attitudes toward CBT (F(1, 388) = 5.45, d = 0.24, p < .05), suggesting that practitioners who reported more graduate training in research held more positive attitudes about CBT for bulimia than those who reported less research training in graduate school. However, there was no main effect of research emphasis in graduate school on willingness to gain training in CBT for bulimia (F(1, 340) = 0.71, d = 0.09, p = .40). Consistent with previous overall results, there was no main effect for the experimental condition for either dependent variable (attitudes: F(1, 388) = 1.10, d = 0.10, p = .29; training: F(1, 340) = .004, d = 0.007, p = .95). There were no significant interaction effects (both Fs = 0.00), indicating that regardless of graduate training that emphasized research, the statistical information had no impact above and beyond the case.
Third, we investigated whether statistical information had an impact on practitioners who had more recently emerged from graduate school. It is plausible that more recently trained practitioners are more amenable to interpreting statistical results than more seasoned practitioners who are many years out from their academic training. This hypothesis was tested with two contrasts utilizing experimental condition (case, case+statistics) and years out of graduate school (high, low; Mdn = 25) as the independent variables, and positive attitudes and willingness to be trained as the dependent variables.3 There was no main effect for years out of graduate school for either dependent variable (attitudes, F(1, 382) = 0.11, d = 0.03, p = .74; training, F(1, 335) = 0.03, d = 0.02, p = .98). Additionally, there was no main effect for the experimental condition for either dependent variable (attitudes: F(1, 382) = 1.08, d = 0.11, p = .30; training: F(1, 335) = 0.002, d = 0.004, p = .97). There were also no significant interaction effects (both Fs = 0.00); newer practitioners were no more likely than more seasoned clinicians to be influenced by the statistical information above and beyond the case.
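The sketch below illustrates the general form of these moderator analyses as a 2 x 2 factorial model with an interaction term, using a median split on one simulated moderator (years since graduate school). Variable names and data are hypothetical, not the authors' dataset.

```python
# Rough sketch of a condition x moderator analysis on simulated data (hypothetical names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 400
data = pd.DataFrame({
    "attitudes": rng.normal(23, 6, n),                    # placeholder outcome
    "condition": rng.choice(["case", "case_stats"], n),   # experimental condition
    "years_out": rng.integers(1, 50, n),                  # years since graduate school
})
# Median split of the moderator into "high" and "low" groups
data["years_group"] = np.where(data["years_out"] > data["years_out"].median(), "high", "low")

model = smf.ols("attitudes ~ C(condition) * C(years_group)", data=data).fit()
print(anova_lm(model, typ=2))  # main effects of condition and moderator, plus their interaction
```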
Effect of the Case and CE Requirements on Training Specifics: Hours and Money
Excluding subjects who reported having extensive training in CBT for bulimia (n = 61), practitioners who received the case (n = 394, hereafter the target group) reported they would be willing to devote a median of 5.5 hours (lower quartile 0, upper quartile 8) for workshop training and 2 hours (lower quartile 0, upper quartile 6) for home study training in CBT for bulimia. Clinicians reported they would be willing to spend a median of $100 (lower quartile 0, upper quartile $175) on training.
To determine whether CE requirements affected the time and money practitioners were willing to devote to training in CBT for bulimia, Mann-Whitney U tests were carried out comparing participants who did and did not have state CE requirements on the training specifics variables. There were no significant differences between participants with and without CE requirements in the resources they were willing to devote to workshop (hours, z = −0.32, p = .75; dollars, z = −0.30, p = .76) or home-study (hours, z = −0.23, p = .82; dollars, z = −0.64, p = .52) programs.
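A brief sketch of this type of comparison with scipy's Mann-Whitney U test is shown below. The data are simulated placeholders, since only the test and the grouping variable (state CE requirement) are described in the text.

```python
# Sketch of a Mann-Whitney U comparison of workshop hours by CE requirement (simulated data).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
hours_ce_required = rng.integers(0, 13, 200)      # placeholder hours, states with CE requirements
hours_no_requirement = rng.integers(0, 13, 150)   # placeholder hours, states without CE requirements

u_stat, p_value = mannwhitneyu(hours_ce_required, hours_no_requirement, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.2f}")
```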
Existing CE Workshops and Home-Study Programs on ESTs
Given the time and money clinicians are willing to devote to receiving training in CBT for bulimia, and ostensibly other workshops on ESTs, are there existing workshops to meet their specifications? To determine the average cost and hours of a CE workshop, we analyzed six months (August 2008 – January 2009) of CE workshop offerings on the APA website (www.apa.org/ce/). Of the 851 offerings, 86 (10%) were treatment-related workshops, and 38 were EST workshops (44% of treatment workshop offerings, 4% overall). Excluding one basic training program in EMDR that was 40 hours and a clear outlier, these workshops averaged 6.69 hours (SD = 3.15) and $211.77 (SD = 121.04) in price. We also examined the Institute offerings at the Association for Behavioral and Cognitive Therapies (ABCT) conferences. Practitioners can attend these workshops, given by experts in the field, without registering or paying for the conference. In 2007 and 2008, 12 of the 15 offered workshops were treatment related (80%) and 9 of the 15 (60%) involved training in ESTs. For both years, a 5-hour workshop had a fee of $140 and a 7-hour workshop cost $175 for non-members.
To determine the average cost and hours of a home-study program for an EST, we analyzed the List of Independent Study Programs on the APA website. Of the 94 offerings (as of September 18, 2008), 27 (29%) were treatment related and 11 (12%) related to ESTs. The average home study program required 7.18 hours (SD = 3.15) and had a fee of $107.04 (SD = 42.44).
Discussion
How can we make EST statistics more compelling to the average practitioner? The results of the present study indicate that practitioners who received the case treatment example held more positive attitudes toward the EST described and were more willing to get trained in CBT for bulimia than those who did not receive the description of Tracy’s treatment with CBT. Unfortunately for the evidence-based practice movement, despite psychologists’ statistical training, the inclusion of statistical information had no influence on attitudes and training willingness beyond that of the anecdotal case treatment information, consistent with prior research indicating that lay people ignore statistical reasoning in favor of anecdotal/testimonial reasoning (Ubel et al., 2001). Nonetheless, by (paradoxically) using anecdotal information in the form of clinical experience, we may encourage clinicians to rely on, trust, and value empirical evidence and thereby foster their openness to ESTs.
Our results replicated the prior research of Stewart and Chambless (2007) in that providing statistical information affected clinicians’ attitudes towards the EST. Clinicians who received only the treatment summary (recommending CBT or medication) held more positive attitudes towards CBT for bulimia than did those who received no information at all, suggesting that statistical information unaccompanied by case treatment information enhances attitudes. Nonetheless, as Stewart and Chambless (2007) found, the present results indicated that the statistical information did not increase willingness to get training in CBT for bulimia, lending further support to the importance of adding case studies for willingness to get trained.
Articles on RCTs of psychotherapy typically do not include case studies of the treated patients. One major implication of this study is that editors of journals that publish RCTs providing data supportive of the efficacy of an EST should be encouraged to include space for a case study of a particular patient in the trial who was treated successfully. This is not a novel idea. The discrepancy between academic research reporting and clinical application has been noted repeatedly in the literature (Beutler et al., 1995; Chambless & Hollon, 1998; Goldfried & Wolfe, 1996). Bilsbury and Richman (2002) noted that a limitation of statistical reporting (such as that found in the RCT literature) is that the properties of individual cases cannot be deduced from the overall population, and yet this is the information practitioners want and need to know. For example, an RCT may show practitioners that a treatment helps 40% of clients in the population tested, yet it does not tell practitioners whether the client walking into their office is one of them (Edwards, Dattilio, & Bromley, 2004). Moreover, research using complex experimental and statistical methods yields results in the form of descriptive and inferential statistics, which may be difficult to comprehend and assimilate despite statistical training. By engaging the reader, case studies have an immediate appeal to clinicians, especially when the intervention is presented in a narrative format that allows them to see how a treatment actually works. As a result, clinicians may more quickly integrate the information into their frameworks of clinical knowledge (Dattilio, 2006).
The results of the present study provide initial evidence that case studies may be able to deliver RCT results in a way that is meaningful and influential to clinicians. Not only are case studies possibly influential as a dissemination method from researcher to clinician, several authors have championed the significance of case-based research in the process of building clinical knowledge (Edwards et al., 2004; Jones, 1993). Barlow (1981) argued persuasively that encouraging clinicians to conduct case-based research might benefit both clinician and researcher by creating a meaningful role for clinicians in the research process, and providing researchers with rich and detailed information from real clinical practice. In sum, a greater respect for and appreciation of case-based research could go a long way towards bridging the science-practice gap (Jones, 1993).
We tested several moderators in our analyses to see if they affected our overall finding that statistical information had no impact above and beyond the case information. None of the moderator interaction effects were significant. The statistical information had no differential impact regardless of theoretical orientation, graduate training in psychotherapy outcome findings, or years out of graduate school. In other words, despite training or influences that may engender more statistical literacy or interest, the overall finding that the statistical information had no influence above and beyond the case information was robust. It is unclear why ostensibly more empirically based clinicians are equally unaffected by data, and more research is needed on this point. However, these results are consistent with the vividness effect and speak to the influential power of case studies and the need for case material to be integrated into the treatment outcome literature.
Our results indicate that CE requirements do not influence the amount of time or money practitioners are willing to spend on an EST workshop. There is some indication in the literature that licensed psychologists tend not to attend CE programs just for the sake of earning needed credits and report attending CE programs even when they do not need credits (Sharkin & Plageman, 2003). This is a promising finding, suggesting that clinicians are motivated to learn and are not simply interested in accumulating credits towards continuing education. However, because CE requirements do not stipulate the types of training (aside from ethics) that are required, it is unknown whether a CE requirement specifically for evidence-based practice would change our results.
Clinicians who received the case reported they would be willing to devote a median of 5.5 hours and $100 for workshop training. Our analysis of available workshops on the APA website indicated that workshops averaged 6.7 hours and were priced at $211.77. A 5-hour workshop cost $140 and a 7-hour workshop cost $175 at ABCT. This suggests that although clinicians are generally willing to devote the time needed for workshops, available workshops may be priced somewhat above what they are willing to pay. Some proportion, however, were willing to allocate sufficient resources. Of the target group, 33 clinicians (8%) were willing to devote the time and money needed (or more) for an available workshop from APA, 115 (29%) were willing to devote the resources for the 5-hour workshop at ABCT, and 46 (12%) for the 7-hour workshop. Where home-study training in CBT for bulimia was concerned, practitioners were willing to devote a median of 2 hours and $100, whereas available home-study programs on the APA website required an average of 7.18 hours and $107.04. Thus, although clinicians can afford home-study training, overall they may not be willing to dedicate the time for some programs. However, some are. Of the target group, 58 clinicians (15%) were willing to get training at the average cost and time found for home-study programs. It is unknown from our results what deters clinicians from being willing to devote more time or money to continuing education, and more research is needed on clinicians’ willingness to gain training in ESTs. Nonetheless, it is promising that some clinicians are willing to allocate resources for training given the available offerings, and our results indicate that workshops within their parameters may be available.
More problematic is the availability of these workshops. Although EST workshops comprised 60% of Institute offerings at the 2007 and 2008 ABCT conferences, this is not surprising given the organization’s focus on behavioral and cognitive therapies, which appear prominently among the EST lists for many disorders (Chambless & Ollendick, 2001). By contrast, EST workshops comprised only 4% of available workshop offerings on the APA website, and 12% of home-study offerings. This suggests that although CE offerings on ESTs are available, they are not common and may be difficult to access for practitioners in many regions of the country. Lack of access to training has been commonly cited as a reason for the low utilization of ESTs for many disorders (Arnow, 1999), and more research is needed on the availability of EST training to the average practitioner.
One limitation of this study is that the sample may not be representative of practicing psychologists. It is possible that psychologists who join APA are not representative of psychologists in practice. Moreover, Ph.D. psychologists are only part of the mental health practice community, which includes master’s-level psychologists, social workers, and counselors of various sorts. Further, because only 23% of the sample responded, there is no way of knowing whether the responses can be generalized to describe the initial sample. Although our sample had slightly more female participants, it was otherwise virtually identical to the larger sample drawn from APA. Nonetheless, clinicians who respond to a survey describing an EST may be more sympathetic to EST research than non-responders. Our response rate (23%) is commensurate with surveys of our page length with no compensation, preliminary notifications, duplicate mailings, or follow-ups (Yammarino, Skinner, & Childers, 1991). It is possible that survey participation would have been higher had funding restrictions not limited further mailings and participant payments.
A second limitation of the present study is our use of bulimia nervosa. Bulimia was chosen for the case example because of our effort to select a disorder for which the research and policy guidelines (NICE, Cochrane) clearly converge on a treatment of choice. The majority (57%) of practitioners surveyed reported that they saw bulimic patients; however, a substantial percentage (43%) did not. Clinicians who saw bulimic patients reported these patients made up 5.28% (SD = 33.23) of their practices. Although we asked clinicians to assume they saw bulimic patients for the remainder of the questionnaire, it is possible that participants did not heed this instruction and therefore did not give legitimate responses to training willingness.4 It is also plausible that even clinicians who did see bulimic patients were not willing to devote resources to training in the treatment of a disorder that does not make up a large part of their practice. It is possible that willingness to spend time and money on training would have increased had we used an example of a highly prevalent disorder, such as depression.
A third limitation is our use of willingness to attend workshops in CBT as the operational definition of overall training willingness. There is evidence that workshop training does not influence practitioner behaviors or outcomes (Davis et al., 1999; Morgenstern, Blanchard, Morgan, Labouvie, & Hayaki, 2001). Although there are some data suggesting a 2-day workshop can increase clinicians’ knowledge, observational ratings indicated only modest changes in practice behavior and no meaningful impact on client behaviors (Miller & Mount, 2001). Moreover, it is important to note that a CE workshop in an EST is hardly sufficient to train therapists in a new treatment, and supervision beyond didactics is necessary to ensure minimum levels of competence (Morgenstern, Morgan, McCrady, Keller, & Carroll, 2001; Sholomskas et al., 2005). Our survey measured willingness to attend one particular workshop, and it may not capture practitioners’ willingness to submit themselves to the amount of training that is required when learning a new EST. Nonetheless, a workshop in CBT may serve as a beginning to more intensive and advanced training.
Conclusions and Future Directions
The present research suggests that providing clinicians with case studies of ESTs may have some impact on their attitudes, and it would be desirable to test the effects of more elaborate case studies than were used here. In addition, further research designed to elucidate in more detail how clinicians decide to approach further training is needed. More research is also needed on the barriers to seeking training in ESTs, and the availability of EST training to the average practitioner.
Footnotes
1. It is not the purpose of this article to argue for the merits or disadvantages of the EST approach. For a discussion of these issues, see Chambless and Ollendick (2001) and Norcross, Beutler, and Levant (2006).
2. This question was also tested with a regression analysis using research training in graduate school as a continuous independent variable. No significant interaction was observed (β = −.08, t(387) = −0.237, p = .81). Contrast analyses are reported due to the increase in power afforded by this technique.
3. The question was tested with a regression analysis using years out as a continuous variable. The interaction term was not significant (β = .05, t(381) = 0.721, p = .47).
4. No differences were found between those who reported seeing bulimics and those who did not. The results from the analyses of both samples indicated the same pattern of results as in the overall sample.
References
- American Psychological Association. Compilation of data from APA Directory, 2008 Edition. 2008. Unpublished data.
- Arnow BA. Why are empirically supported treatments for bulimia nervosa underutilized and what can we do about it? Journal of Clinical Psychology: In Session. 1999;55:769–779. doi: 10.1002/(sici)1097-4679(199906)55:6<769::aid-jclp9>3.0.co;2-h.
- Barlow DH. On the relation of clinical research to clinical practice: Current issues, new directions. Journal of Consulting and Clinical Psychology. 1981;49:147–155. doi: 10.1037//0022-006x.49.2.147.
- Becker CB, Zeyfert C, Anderson E. A survey of psychologists’ attitudes towards and utilization of exposure therapy for PTSD. Behaviour Research and Therapy. 2004;42:277–293. doi: 10.1016/S0005-7967(03)00138-4.
- Beutler LE, Williams RE, Wakefield PJ, Entwistle SR. Bridging scientist and practitioner perspectives in clinical psychology. American Psychologist. 1995;50:984–994. doi: 10.1037//0003-066x.50.12.984.
- Borgida E, Nisbett R. The differential impact of abstract versus concrete information on decisions. Journal of Applied Social Psychology. 1977;7:258–271.
- Borkovec TD, Nau SD. Credibility of analogue therapy rationales. Journal of Behavior Therapy and Experimental Psychiatry. 1972;3:257–260.
- Chambless DL, Hollon SD. Defining empirically supported therapies. Journal of Consulting and Clinical Psychology. 1998;66:7–18. doi: 10.1037//0022-006x.66.1.7.
- Chambless DL, Ollendick TH. Empirically supported psychological interventions: Controversies and evidence. In: Fiske ST, Schacter DL, Zahn-Waxler C, editors. Annual Review of Psychology. Vol. 52. Annual Reviews; Palo Alto, CA: 2001. pp. 685–716.
- Cook JM, Weingardt KR, Jaszka J, Wiesner M. A content analysis of advertisements for psychotherapy workshops: Implications for disseminating empirically supported treatments. Journal of Clinical Psychology. 2008;64:296–307. doi: 10.1002/jclp.20458.
- Crowe S, Mussell MP, Peterson C, Knopke A, Mitchell J. Prior treatment received by patients with bulimia nervosa. International Journal of Eating Disorders. 2000;25:39–44. doi: 10.1002/(sici)1098-108x(199901)25:1<39::aid-eat5>3.0.co;2-w.
- Dattilio FM. Does the case study have a future in the psychiatric literature? TBD. 2006. doi: 10.1080/13651500600649895.
- Davis D, Thomson O’Brien MA, Freemantle N, Wolfe FM, Mazmanian P, Taylor-Vaisey A. Impact of formal continuing medical education: Do conferences, workshops, rounds and other traditional continuing education activities change physician behavior or health care outcomes? Journal of the American Medical Association. 1999;282:867–874. doi: 10.1001/jama.282.9.867.
- Dawes RM, Faust D, Meehl P. Clinical versus actuarial judgment. Science. 1989;243:1668–1674. doi: 10.1126/science.2648573.
- Edwards DJA, Dattilio FM, Bromley DB. Developing evidence-based practice: The role of case-based research. Professional Psychology: Research and Practice. 2004;35:589–597.
- Floyd FJ, Widaman KF. Factor analysis in the development and refinement of clinical assessment instruments. Psychological Assessment. 1995;7:286–299.
- Garb HN. Studying the clinician: Judgment research and psychological assessment. American Psychological Association; Washington, DC: 1998.
- Goisman RM, Warshaw MG, Keller MB. Psychosocial treatment prescriptions for generalized anxiety disorder, panic disorder and social phobia, 1991-1996. American Journal of Psychiatry. 1999;156:1819–1821. doi: 10.1176/ajp.156.11.1819.
- Goldfried MR, Borkovec TD, Clarkin JF, Johnson LD, Parry G. Toward the development of a clinically useful approach to psychotherapy research. Journal of Clinical Psychology/In Session: Psychotherapy in Practice. 1999;55:1385–1405. doi: 10.1002/(sici)1097-4679(199911)55:11<1385::aid-jclp5>3.0.co;2-5.
- Goldfried MR, Wolfe BE. Psychotherapy practice and research: Repairing a strained alliance. American Psychologist. 1996;51:1007–1016. doi: 10.1037//0003-066x.51.10.1007.
- Goldfried MR, Wolfe BE. Toward a more clinically valid approach to therapy research. Journal of Consulting and Clinical Psychology. 1998;66:143–150. doi: 10.1037//0022-006x.66.1.143.
- Green KE. Sociodemographic factors and mail survey response. Psychology and Marketing. Special Issue: Psychology, Marketing, and Direct Mail. 1996;13:171–184.
- Haas HL, Clopton JR. Comparing clinical and research treatments for eating disorders. International Journal of Eating Disorders. 2003;33:412–420. doi: 10.1002/eat.10156.
- Hannan C, Lambert MJ, Harmon C, Nielsen SL, Smart DW, Shimokawa K. A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology. 2005;61:155–163. doi: 10.1002/jclp.20108.
- Hay PJ, Bacaltchuk J, Stefano S. Psychotherapy for bulimia nervosa and binging. Cochrane Database of Systematic Reviews. 2004;3:1–201. doi: 10.1002/14651858.CD000562.pub2.
- Jameson P, Stadter M, Poulton J. Sustained and sustaining continuing education for therapists. Psychotherapy: Theory, Research, Practice, Training. 2007;44:110–114. doi: 10.1037/0033-3204.44.1.110.
- Jones EE. Introduction to special section: Single-case research in psychotherapy. Journal of Consulting and Clinical Psychology. 1993;61:371–372. doi: 10.1037//0022-006x.61.3.381.
- Kadden RM, Cooney NL, Getter H, Litt MD. Matching alcoholics to coping skills or interactional therapies: Posttreatment results. Journal of Consulting and Clinical Psychology. 1989;57:698–704. doi: 10.1037//0022-006x.57.6.698.
- Miller WR, Mount KA. A small study of training in motivational interviewing: Does one workshop change clinician and client behavior? Behavioural and Cognitive Psychotherapy. 2001;29:457–471.
- Morgenstern J, Blanchard KA, Morgan TJ, Labouvie E, Hayaki J. Testing the effectiveness of cognitive-behavioral treatment for substance abuse in a community setting: Within treatment and posttreatment findings. Journal of Consulting and Clinical Psychology. 2001;69:1007–1017. doi: 10.1037//0022-006x.69.6.1007.
- Morgenstern J, Morgan TJ, McCrady BS, Keller DS, Carroll KM. Manual-guided cognitive-behavioral therapy training: A promising method for disseminating empirically supported substance abuse treatments to the practice community. Psychology of Addictive Behaviors. 2001;15:83–88.
- Morrow-Bradley C, Elliott R. Utilization of psychotherapy research by practicing psychotherapists. American Psychologist. 1986;41:188–197. doi: 10.1037//0003-066x.41.2.188.
- Mussell MP, Crosby RD, Crow SJ, Knopke AJ, Peterson CB, Wonderlich SA, et al. Utilization of empirically supported psychotherapy treatments for individuals with eating disorders: A survey of psychologists. International Journal of Eating Disorders. 2000;27:230–237. doi: 10.1002/(sici)1098-108x(200003)27:2<230::aid-eat11>3.0.co;2-0.
- National Institute for Health and Clinical Excellence. Mental health guidelines. 2006. Retrieved February 2, 2009, from http://www.nice.org.uk/Guidance/CG9.
- Nisbett RE, Ross L. Human inference: Strategies and shortcomings of social judgment. Prentice Hall; Englewood Cliffs, NJ: 1980.
- Oltmanns TF, Neale JM, Davison GC. Eating disorders: Bulimia. In: Case studies in abnormal psychology. 3rd ed. John Wiley & Sons; New York, NY: 1999.
- Persons JB, Silberschatz G. Are results of randomized controlled trials useful to psychotherapists? Journal of Consulting and Clinical Psychology. 1998;66:126–135. doi: 10.1037//0022-006x.66.1.126.
- Raine R, Sanderson C, Hutchings A, Carter S, Larkin K, Black N. An experimental study of determinants of group judgments in clinical guideline development. Lancet. 2004;364:429–437. doi: 10.1016/S0140-6736(04)16766-4.
- Sackett DL, Straus SE, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: How to practice and teach EBM. 2nd ed. Churchill Livingstone; London: 2000.
- Schulte D, Kunzel R, Pepping G, Schulte-Bahrenberg T. Tailor-made versus standardized therapy of phobic patients. Advances in Behaviour Research and Therapy. 1992;14:67–92.
- Sharkin BS, Plageman PM. What do psychologists think about mandatory continuing education? A survey of Pennsylvania practitioners. Professional Psychology: Research and Practice. 2003;34:319–323.
- Spring B. Evidence-based practice in clinical psychology: What it is, why it matters; what you need to know. Journal of Clinical Psychology. 2007;63:611–631. doi: 10.1002/jclp.20373.
- Stewart RE, Chambless DL. Does psychotherapy research inform treatment decisions in private practice? Journal of Clinical Psychology. 2007;63:267–281. doi: 10.1002/jclp.20347.
- Ubel PA, Jepson C, Baron J. The inclusion of patient testimonials in decision aids: Effects on treatment decisions. Medical Decision Making. 2001;21:60–68. doi: 10.1177/0272989X0102100108.
- von Ranson KM, Robinson KE. Who is providing what type of psychotherapy to eating disorder clients?: A survey. International Journal of Eating Disorders. 2006;31:27–34. doi: 10.1002/eat.20201.
- Yammarino FJ, Skinner SJ, Childers TL. Understanding mail survey response behavior: A meta-analysis. Public Opinion Quarterly. 1991;55:613–639.