Abstract
Clinical researchers and clinical practitioners share a goal of increasing the integration of research and clinical practice, which is reflected in an evidence-based practice (EBP) approach to psychology. The EBP framework involves the integration of research findings with clinical expertise and client characteristics, values, and preferences, and consequently provides an important foundation for conducting clinically relevant research, as well as empirically based and clinically sensitive practice. Given the critical role that early training can play in the integration of science and practice and in promoting the future of the field, the present article addresses predoctoral training programs as a context for adopting an EBP approach to clinical work. We address training in the three components of EBP and provide suggestions for curriculum development and practicum training that we hope will contribute to bridging the gap between research and practice.
Keywords: evidence-based practice, clinical training, scientist-practitioner, clinical scientist, supervision
The setting that is perhaps best poised for achieving the lofty but essential goal of bridging the gap between research and practice is the training setting. Although our discussion may touch on issues that are relevant to other levels of clinical training (e.g., predoctoral internships, postdoctoral fellowships, and continuing education programs), we focus on predoctoral training as it represents the first step in the clinical training progression, and it provides an ideal milieu for the integration of science and practice. Indeed, particularly in PhD graduate programs dedicated to the training of scientist–practitioners and clinical scientists, the goals are to teach, model, and provide hands-on experience in two key activities: clinical research and clinical practice. The training setting is thus replete with opportunities to seamlessly integrate practice and research. For example, psychology training clinics are often housed in the same setting in which basic coursework is provided, and faculty are often participating in their own research activities, as well as providing supervision. A curriculum that encourages the trainee to consume and produce basic and applied research, and a clinical practicum that integrates and translates this information, is likely to promote evidence-based practice (EBP) in external clinical settings, provide suggestions for clinically relevant research, and, consequently, shape the nature of the trainee’s professional development after graduate school. In fact, we hypothesize that trainees whose early clinical and research experiences embody the integration of science and practice are likely to adopt and maintain this approach as they progress through subsequent stages of professional development.
In the present article, we use the EBP framework to identify and recommend training opportunities that can bridge clinical practice and clinical research. EBP involves the integration of three primary components: the best research evidence; clinical expertise; and client characteristics, values, and preferences (Collins, Leffingwell, & Belar, 2007; Hunsley, 2007; Spring, 2007). Thus, EBP is a rather broad construct and is not synonymous with empirically supported treatments (ESTs), although ESTs are one type of research evidence considered within the EBP framework. Addressing each of these EBP components in training is no small task, and we recognize that there is some uncertainty about what training in EBP should entail (e.g., Bauer, 2007; Beidas & Kendall, 2010). Nevertheless, we believe that this framework (a) allows more in-depth and careful consideration of critical elements relevant to clinical practice that can improve the quality of services we provide and (b) highlights potential pitfalls and opportunities for scaffolding and improving the research-practice bridge in psychology training settings.
The present article is organized into three sections reflecting each of the EBP components: research evidence, clinical expertise, and client characteristics. Throughout, we consider issues, highlight recent advances and gaps in the literature, and provide practical recommendations related to curriculum development and supervision, as well as issues related to practicum performance, including the acquisition of clinical competencies and client response to treatment (Falender et al., 2004). Though PhD programs prepare students to function as both clinicians and researchers, we focus on suggestions most pertinent to clinical training, recognizing, of course, that preparedness for one can and does reinforce preparedness for the other. We believe that training programs are an ideal place for instilling skills in clinical research and clinical practice in trainees, the future of our field, by creating opportunities for their synergy. We agree with others (Bauer, 2007; Collins et al., 2007; DiLillo & McChargue, 2007; Hunsley, 2007; Spring, 2007) that an evidence-based approach to clinical training can provide the foundation for bridging the gap between clinical practice and research. In the following sections, we provide recommendations within the EBP framework to meet this important but challenging goal. A summary of our training suggestions is presented in Table 1.
Table 1.
Summary of Training Suggestions

Considering “Best” Research Evidence
- Skills to approach the ever-changing evidence
- What constitutes evidence and obtaining clinical evidence

Developing Clinical Expertise
- Treatment outset
- Case conceptualization and selection of interventions
- The therapeutic relationship, therapist emotions, and therapist variables
- Supervision: From clinical experience to clinical expertise

Relevant Client Characteristics
Considering “Best” Research Evidence
Skills to Approach the Ever-Changing Evidence
Among the three components of EBP, we believe that many predoctoral training programs are perhaps most successful at providing coursework that includes “best” research evidence that informs clinical practice (e.g., Spring, 2007). However, there are likely gaps in the curriculum that should be considered to permit students to best use the EBP framework to inform their clinical practice and clinical research efforts. For example, several authors (Bauer, 2007; Collins et al., 2007; DiLillo & McChargue, 2007; Drabick & Goldfried, 2000; Spring, 2007) have recommended training in clinical and research methods, as well as epidemiology, clinical trials, and qualitative methodology, and research skills that include secondary sources, systematic reviews, and informatics (e.g., resources, guidelines, electronic records) to enable students to find the best available evidence for the problems encountered in their clinical (and research) work. Indeed, we believe that a critical skill for EBP training is learning how to obtain and integrate the best available evidence.
This goal, though simply stated, is in fact quite complex. For example, it can be difficult to identify, synthesize, and critically evaluate the evidence relevant to clients’ presenting problems because of rapid changes in available evidence; differences in terms used to describe clinical phenomena (e.g., impulsivity may be construed as poor executive control, difficult temperament, sensation seeking, risk taking, or attention-deficit/hyperactivity disorder); the absence of translational efforts that facilitate relevant work across disciplines; and the ever-expanding information available on the World Wide Web. Thus, we believe that training in clinical research must include strategies for identifying and critically evaluating a variety of sources (Collins et al., 2007; Falender et al., 2004; Falzon, Davidson, & Bruns, 2010; Spring, 2007). Students are often taught to value primary sources of information (e.g., empirical studies, theoretical reviews, practice guidelines) above other sources (e.g., secondary sources that are summaries of primary sources, including both qualitative and quantitative [meta-analytic] reviews). Though primary sources of information are valuable, we recommend that students learn to approach and integrate multiple lines of evidence, including, but not limited to, primary sources.
Given expected changes in available evidence over time, training should include strategies for reviewing and critically evaluating the constantly evolving literature base relevant to clients’ presenting issues. Alongside this goal, students should be exposed to the literature regarding clinical decision-making and potential heuristics and biases that influence such decision-making (Dawes, Faust, & Meehl, 1989; Hunsley, 2007; Swets, Dawes, & Monahan, 2000). Although we know that this is a typical component of assessment training, we believe that opportunities to practice and examine these heuristics through supervision, case consultations, and case conference presentations would be useful for evaluating the effects of potential biases on clinical case management and the development of skills to search for and weigh the best evidence. Overall, this interactive approach is consistent with the EBP principle of lifelong learning and provides an important framework for systematically approaching the ever-changing evidence base (Spring, 2007). Further, we believe that this proposed approach to training facilitates the use of clinical research to appropriately inform clinical practice (Bauer, 2007; Goldfried & Wolfe, 1996; Stricker, 1992).
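One bias discussed in this decision-making literature, base-rate neglect, can be made concrete with a short calculation: the positive predictive value of even an accurate screening instrument depends heavily on the prevalence of the disorder in the population being assessed. The sketch below uses Bayes’ rule with hypothetical sensitivity, specificity, and base-rate values chosen only to illustrate the arithmetic, not figures from any cited study.

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Bayes' rule: P(disorder | positive screen)."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Hypothetical screen: 90% sensitive, 90% specific.
# In a low-prevalence setting, most positive screens are false alarms.
ppv_community = positive_predictive_value(0.90, 0.90, 0.05)  # ~0.32
# The same instrument performs far better when prevalence is high.
ppv_referred = positive_predictive_value(0.90, 0.90, 0.50)   # ~0.90
```

The point for training is that the identical test result warrants very different clinical confidence depending on setting, which is easy to overlook when reasoning from representativeness alone.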
What Constitutes “Evidence”
A related issue concerns what evidence students are taught to value and approach. ESTs are often viewed as one major source of evidence, and training in them is, consequently, a major aspect of clinical training (Calhoun, Moras, Pilkonis, & Rehm, 1998). However, there are other sources of evidence from which students can draw relevant clinical information. In addition, other approaches to psychology training integrate many of these EBP domains and principles and can illustrate a possible approach to implementing the EBP framework in clinical training settings. Developmental psychopathology, for example, provides an alternative, complementary framework for training that considers how to not only conceptualize the “best research evidence,” but also develop clinical expertise and take client characteristics into account. Here we briefly address the contribution of developmental psychopathology as a heuristic framework to obtain the best evidence.
The developmental psychopathology perspective is a framework that many psychologists use for understanding clinical issues across the life span. The developmental psychopathology perspective considers issues such as risk and resilience (i.e., thriving despite adversity), developmental pathways or course, contextual influences, determination of typical and atypical behaviors based on developmental demands, and translational efforts that can inform prevention and intervention efforts (Drabick & Kendall, 2010; Drabick & Steinberg, 2011; Hart & Marmorstein, 2009; Jensen & Hoagwood, 1997; Rutter & Sroufe, 2000; Steinberg & Avenevoli, 2000). Both the EBP and developmental psychopathology frameworks recommend considering multiple lines of evidence drawn from different research approaches and across different biological, psychological, and social domains, as well as gathering convergent and divergent evidence for hypotheses and case conceptualizations. Thus, the idiographic, client-centered approach used by both the EBP and developmental psychopathology frameworks provides implications for curriculum development in training programs. Specifically, this approach requires coursework that addresses (and teaches students to approach) biological, cognitive, emotional, developmental, contextual, and cultural domains that can influence individuals’ functioning, and thus the trainees’ assessments, case conceptualizations, and interventions.
Obtaining Evidence With Ongoing Assessment Data
Training in clinical assessment is a well-integrated, core component of most psychology training programs. However, we believe that two related areas may require additional attention to realize a more effective link between clinical practice and clinical research. First, it is unclear to what extent initial assessment findings influence case conceptualizations and related decisions about prognosis, prevention, and intervention (Bauer, 2007; Collins et al., 2007; Hunsley, 2007). Second, ongoing assessment to monitor client progress and the therapeutic alliance is important but receives less attention in training than initial clinical assessment (Borkovec, 2004; Collins et al., 2007; Hunsley, 2007). We believe that the use of systematic and ongoing assessment during treatment is critical to consider within an EBP framework; indeed, perhaps no aspect of the curriculum is better suited to model the synergy of science and practice than the use of ongoing assessment as a source of evidence. Broadly, training should highlight and illustrate strategies for linking ongoing assessment information to clinical intervention, particularly to determine whether changes in the intervention approach are needed (Lambert, in press; Youn, Kraus, & Castonguay, in press), and because incorporating ongoing assessment information into treatment (e.g., providing feedback to clients) can itself promote change (Harmon, Hawkins, Lambert, Slade, & Whipple, 2005).
Ongoing assessment information can be used in numerous additional ways. First, we encourage programs to formally integrate ongoing assessment data into client-centered research (e.g., controlled single case studies, time-series analyses) as part of the clinical practicum training. Students may conduct their own research that tracks symptom status of one or a few psychotherapy clients across baseline and intervention phases (Borckardt et al., 2008). Second, students can use intake and ongoing assessment data (e.g., depressive and anxious symptoms, hypothesized mediators such as dysfunctional attitudes) to examine group-level changes from pre- to posttherapy (i.e., effectiveness data), which can be qualitatively compared with efficacy data from other resources, such as randomized clinical trials, or quantitatively through benchmarking methods (e.g., Minami et al., 2009). Data also can be collected to examine client and therapist characteristics, as well as process issues, that may be associated with positive outcomes (Borkovec, 2002; Gard, Tremblay, DiLillo, & Pantesco, 2002). Recent efforts to promote this research infrastructure in training programs have included establishing faculty-student research teams to generate ideas for research, ensure that human subjects’ approval is obtained, and monitor ongoing research projects conducted in psychology training clinics, including maintenance of relevant outcome data (Sauer & Huber, 2007). We believe that actively participating in this research (a) teaches trainees (and clients) to operationalize the variables that clients want to change, (b) enhances trainees’ ability to develop a methodology to track change over time in conjunction with their clinical observational and interviewing skills, and (c) promotes case conceptualization as a set of testable hypotheses. 
Finally, we encourage the incorporation of client-centered research into practicum because it provides the trainee with the opportunity to sit in two chairs (i.e., those of the researcher and clinician) simultaneously (Wolfe, in press).
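As one sketch of the kind of client-centered effectiveness analysis described above, a trainee might compute a within-group pre-post effect size from clinic outcome data and compare it qualitatively against a benchmark drawn from the efficacy literature. The symptom scores and benchmark value below are hypothetical, chosen only to illustrate the computation; they are not drawn from Minami et al. (2009) or any other cited study.

```python
import statistics

def prepost_effect_size(pre_scores, post_scores):
    """Within-group effect size: mean symptom change divided by the
    standard deviation of pre-treatment scores."""
    mean_change = statistics.mean(pre_scores) - statistics.mean(post_scores)
    return mean_change / statistics.stdev(pre_scores)

# Hypothetical clinic data: symptom scores at intake and at termination.
pre = [28, 31, 25, 34, 29, 27, 33, 30]
post = [18, 22, 15, 26, 20, 17, 24, 21]
d = prepost_effect_size(pre, post)

# An illustrative benchmark standing in for an efficacy-trial estimate;
# a real benchmarking analysis would take this from the published literature.
BENCHMARK_D = 1.2
comparable_to_benchmark = d >= BENCHMARK_D
```

Even this simple exercise requires the trainee to operationalize the outcome variable, attend to measurement timing, and reason about what counts as a fair comparison group, precisely the research habits the practicum is meant to cultivate.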
In sum, there is a plethora of research that informs clinical practice. Given the speed with which this research base transforms, we believe that one of the most critical aspects of this EBP component is providing trainees with methodologies to approach and incorporate this information into their treatment, including the use of ongoing literature searches involving a variety of sources and collection of data with clients.
Developing Clinical Expertise
Although we recognize that clinical experience is an important foundation of clinical expertise, clinical expertise is a much more complicated construct than experience and requires supervision, knowledge of current research, and skill in developing and maintaining the therapeutic alliance and in assessing and treating clients, among other abilities (e.g., Collins et al., 2007). Several of these aspects of clinical expertise are addressed through training in ESTs (Calhoun et al., 1998). However, a consequence of the EST movement is the proliferation of manuals, particularly individual manuals for individual disorders. A discussion of the rationale, advantages, and disadvantages of manuals is beyond the scope of this article (see, e.g., Addis, Cardemil, Duncan, & Miller, 2006). Nevertheless, it is important to recognize potential concerns that have been raised regarding training in ESTs. For example, manuals often prescribe overlapping strategies for change, and there is little empirical basis to guide efforts to combine or select among particular ESTs (Chorpita, Daleiden, & Weisz, 2005). Related issues involve concerns about cognitively overloading trainees and the importance of thinking flexibly about the methods with which to attain the goals of an intervention, given the heterogeneity of client variables. Thus, in addition to training in the “treatment method” that is supported by ESTs (Chambless et al., 2006), the following section includes recommendations to train students in factors important in therapy implementation from treatment outset through the course of treatment (irrespective of specific treatment method); to approach case conceptualizations flexibly, guided by principles; to attend to the therapeutic relationship; to promote certain therapist variables; and to manage and make use of the trainee’s own emotions.
Treatment Outset
Core processes that cut across theoretical orientation and treatment modality (i.e., common factors) are an important aspect of clinical training (Spring, 2007), and the presence of these factors at the outset of therapy is positively associated with outcome. For example, as reviewed in DeFife and Hilsenroth (2011), trainees should learn to foster realistic and positive expectations in the client, to socialize the client to his or her role in treatment (role preparation), and to engage in a collaborative formulation regarding presenting problems and treatment goals (see also Hilsenroth & Cromer, 2007, for key behaviors associated with alliance formation during the assessment phase of therapy). These elements dovetail with suggestions put forth by Wampold (2007) that the most critical elements of treatment involve offering the client a more functional explanation for his or her problems and a related set of actions to ameliorate those problems, provided by a therapist who delivers that treatment with the expectation (e.g., based on the literature, clinical expertise, etc.) that it will be effective. Addressing the alliance at the outset of treatment may be particularly important in a training setting, given that attrition rates in psychology training clinics can be as high as 77.5% (Callahan, Aubuchon-Endsley, Borja, & Swift, 2009; Callahan & Hynan, 2005), compared with rates of about 40% to 60% in community outpatient clinics (Clarkin & Levy, 2003; Garfield, 1994).
Notably, a significant portion of the variance in premature termination in psychology training clinics may be accounted for by clients’ unrealistically high pretreatment expectations about the rate and speed of recovery (Callahan, Aubuchon-Endsley, et al., 2009; Swift & Callahan, 2008a), and evidence suggests that helping clients to acquire more realistic expectations about the degree of improvement that they can expect across sessions substantially decreases premature termination rates (Swift & Callahan, 2008b). Taken together, regardless of the predominant theoretical orientation, it is important that trainees learn to orient their clients to therapy by addressing, and providing brief education regarding, these essential elements that set the stage for the therapeutic process.
Case Conceptualization and Selection of Intervention
Because there is a limit to the amount of information that anyone can retain, and because the inundation of individual ESTs has led to many “treatments that work” (Minami et al., 2009; Nathan & Gorman, 1998; Stiles, Barkham, Mellor-Clark, & Connell, 2007), we do not recommend encouraging trainees to read and apply ESTs in isolation. Instead, there are a variety of advances in the field that augment training in ESTs, including (a) methodologies to identify shared techniques of empirically supported interventions (e.g., interpersonal psychotherapy and cognitive behavioral therapy both include problem solving and activity scheduling; Chorpita et al., 2005); (b) transdiagnostic approaches to treatment (e.g., Allen, McHugh, & Barlow, 2008); and (c) delineation of common change processes (e.g., principles of change; Castonguay & Beutler, 2006). First, identifying the most common techniques across validated treatments potentially can provide an empirically informed starting point, though this research is still in its infancy and the utility of this approach remains a question deserving of further empirical scrutiny. Second, training with transdiagnostic approaches can increase broad conceptualization skills and the generalizability of particular change strategies, enabling trainees to identify and treat core deficits that are common across diagnoses (e.g., enhancing emotion regulation and present-focused awareness skills). Finally, learning to conceptualize therapy goals from a lens of common principles of change that cut across treatment type may help to instill trainee confidence (i.e., reducing the focus on the delivery of one specific technique and broadening the focus to the goal of the intervention), which may be particularly useful for individuals who do not respond to the first change strategy used.
Indeed, we believe it is paramount to provide training in common change processes so that trainees learn to think with flexibility about the methods with which to attain the goals of an intervention. For example, one strategy is to teach trainees to conceptualize their clinical cases from a broader principles approach (e.g., Bauer, 2007; Helge Rønnestad & Ladany, 2006), an approach that is pertinent at treatment outset (assessment), as well as throughout the course of therapy (intervention). Principles of change refer to a level of abstraction that is more specific than theory but more general than the strategies or techniques (Castonguay & Beutler, 2006), and have been delineated broadly (Goldfried, 1980; Weinberger, 1995), as well as in reference to specific problem domains (e.g., depression, anxiety, personality disorders) that involve change processes pertinent to the relationship, client, and treatment model (Castonguay & Beutler, 2006). This argument is similar to suggestions to train in the “underlying spirit” of a treatment rather than overemphasize techniques (Miller, Yahne, Moyers, Martinez, & Pirritano, 2004).
Training to conceptualize from a principle approach encourages greater flexibility in choosing a technique to facilitate a particular change process (e.g., a “devil’s advocate” or “two-chair” technique to increase awareness of critical voices that prompt rumination; Boswell, Nelson, Nordberg, McAleavey, & Castonguay, 2010). Though the principle (e.g., increasing clients’ awareness about factors contributing to their life problems; Goldfried, 1980) may initially suggest a technique consistent with the theoretical orientation of the trainee or supervisor, the willingness to engage in functional equivalence (that is, turning to an alternative technique to attain the same treatment goal) may reduce the possibility of strains to the alliance. Indeed, thinking flexibly about an intervention may reduce a clinician’s frustration related to his or her client’s ambivalence or resistance, enabling the clinician to think creatively about the problem to be solved rather than continually attempting to inflexibly apply the same technique. This is an important goal, given that inflexible application of therapeutic strategies reduces the therapeutic alliance (e.g., responding to strains in the alliance by further attempts to persuade the client of the validity of the treatment rationale; Ackerman & Hilsenroth, 2001; Castonguay, Goldfried, Wiser, Raue, & Hayes, 1996; Henry, Strupp, Butler, Schacht, & Binder, 1993).
To aid in the development of flexible, empirically informed, and principle-oriented conceptualizations, we recommend training in methodologies to develop case conceptualization skills (Eells, 2007; Kuyken, Padesky, & Dudley, 2009). Indeed, case conceptualization skills, considered a “core competency” in psychotherapy (Page & Stritzke, 2006), are at the heart of training practitioners in EBP, as conceptualization sets the stage for how therapy will proceed and should be the roadmap to which the clinician refers throughout treatment. To bridge the gap between research and practice, we encourage graduate training programs and students’ supervisors to provide trainees with a methodology for approaching a conceptualization and treatment plan in a systematic way (Page & Stritzke, 2006; Page, Stritzke, & McLean, 2008). This foundation involves learning not only to provide evidence-based assessments (Hunsley & Mash, 2007; Mash & Hunsley, 2005), conduct systematic literature searches (Falzon et al., 2010), extrapolate from findings (DiLillo & McChargue, 2007), and incorporate client variables into a principle-oriented conceptualization and treatment plan, but also, importantly, to evaluate whether case formulations are reliable and valid. Page et al. (2008) provide an interesting illustration of how supervision can facilitate integrating science-informed clinical case formulation into graduate training. Specifically, the authors developed case vignettes with assessment data; identified requisite conceptualization domains to cover (e.g., problem list, predisposing factors, maintaining factors, problems potentially hindering treatment, strengths, and assets); and created a scoring system for rating students’ conceptualizations, normed on advanced undergraduate students, fellow trainees, and experienced clinicians, to enable comparisons across developmental levels.
Their goal is to create a standard set of videotaped interviews with clients with different psychological problems that vary in severity so that trainees can practice developing reliable case conceptualizations, compare their skills to benchmark ratings to identify areas of strength and weakness, and test the validity of these ratings by examining associations between benchmarking scores and treatment outcomes. Evidence that case conceptualization skills improve with training is encouraging (Kendjelic & Eells, 2007); moreover, such research on case conceptualization illustrates another application of an evidence-based approach to clinical practice and research in training.
The Therapeutic Relationship
Like case conceptualization, the therapeutic relationship is a ubiquitous aspect of therapy and accounts for roughly 10% of the variance in treatment outcome (Chambless et al., 2006), and thus needs to be addressed in training. Within a principle-oriented approach, the establishment of a relationship is a necessary prerequisite for conducting therapy (Goldfried, 1980; Weinberger, 1995), and at times, attending to the “here and now” of the relationship itself may be part of the intervention. Rather than engage in “either/or” dichotomous thinking (do we teach the relationship or the technique?), we believe that the answer necessitates a dialectical stance; as cogently argued by Goldfried and Davila (2005), it is not the technique or the relationship but rather both and their interaction. When to place a greater emphasis on the relationship or the technique depends on the conceptualization and the moment-to-moment interaction. Taking a dialectical stance thus involves monitoring the alliance and repairing alliance ruptures. Toward these ends, we encourage trainees to learn to monitor and quantitatively assess the alliance (consistent with our recommendation for ongoing assessment; e.g., the 4-item Working Alliance Inventory; Duncan et al., 2003), and to obtain supervision in repairing alliance ruptures (Safran & Muran, 2000; Safran, Muran, Samstag, & Stevens, 2001). Indeed, Miller, Duncan, and colleagues (2006) found that including and addressing clients’ responses to two four-item measures pertaining to the alliance and progress in treatment doubled the effect size of “treatment as usual.” Thus, these data suggest that monitoring clients’ progress and the therapeutic relationship is related to more positive outcomes than not monitoring these issues.
Therapist Emotions
Another major aspect of maintaining and repairing the alliance is the ability to detect and make use of one’s own (potentially negative) emotions in session (Binder & Strupp, 1997; McCullough, 2000; Wolf, Goldfried, & Muran, 2012). Though space limitations prohibit a full discussion, we suggest that helping the trainee to understand the function of these emotions is critical. On the one hand, negative emotions may be idiosyncratic to the trainee, in which case empathy development is needed (e.g., from frustration to compassion). On the other hand, one’s affective reaction may be directly tied to the client’s presenting problems, in which case it is important to receive supervision that enables the trainee to appropriately make use of this response as part of the intervention itself (McCullough, 2000), which may be a vehicle to promote change and/or to strengthen the alliance. Finally, in addition to skillfully and planfully making use of their negative emotional responses, trainees also need to become aware of and make use of their positive emotions in a planful way (e.g., reinforcing a client’s success by celebrating new learning). Besides promoting change, this type of process work can raise the client’s awareness about the impact that he or she has on others, which may decrease his or her feelings of helplessness and social disconnection (McCullough, 2000, 2006).
Therapist Variables
In addition to attention to emotions during sessions, training of evidence-based practitioners should involve the exploration of therapist variables, which account for roughly 6% to 9% of the variance in treatment outcome (Addis et al., 2006; Kim, Wampold, & Bolt, 2006). Because therapy is contextual and modified to fit the particular client, therapist qualities such as being flexible, honest, alert, and warm, which are associated with the alliance (Ackerman & Hilsenroth, 2001, 2003), should be modeled and supported. It is further important to address the trainee’s belief that the therapeutic approach is viable (Kendall & Beidas, 2007), which is also consistent with Wampold’s (2007) postulation that the therapist’s expectation that the therapy has value is a necessary ingredient in promoting change.
Both therapist and relationship variables can be assessed via observation and with standardized instruments (Beidas & Kendall, 2010), the combination of which can protect against common cognitive biases (e.g., favoring information that confirms hypotheses about the trainee or client), highlight clinically relevant areas that may not emerge from one strategy alone, and provide data to use in the context of research. Thus, an EBP approach indicates that trainees should learn to develop and monitor the therapeutic alliance and relevant therapist variables using observation, standardized measures, and feedback from supervision, and to integrate these multiple sources of evidence into an ongoing case conceptualization and treatment plan throughout the course of treatment. In addition, coursework can be used to teach these recommended approaches for developing clinical expertise, such as an intervention course for teaching principles of change and relevant client and therapist variables, as well as a Clinical Practicum for teaching approaches to case conceptualization and the development of the therapeutic alliance, consistent with arguments to promote learning with the use of multiple methods (blended learning; Cucciare, Weingardt, & Villafranca, 2008). Supervision is also a major outlet in which to model and teach these evidence-based activities that can move students from clinical experience to clinical expertise.
Supervision
The primary roles of a clinical supervisor are to (1) ensure that treatment services are implemented effectively, ethically, with integrity, with appreciation of diversity, and informed by empirical evidence; and (2) develop the trainee’s clinical competencies (Falender & Shafranske, 2004). Supervision provides an excellent opportunity for “active” learning strategies (e.g., role plays, coaching, feedback), which are advantageous relative to more “passive” strategies (e.g., didactic lectures) (Beidas & Kendall, 2010; Drabick & Goldfried, 2000). Indeed, the case-specific mentoring that occurs in supervision provides an excellent opportunity to strengthen the skill set, and spirit, of EBP, particularly by integrating research into practice, as well as by creating opportunities for practice-based research.
Although a growing area of interest, supervision has been relatively neglected empirically (Reiser & Milne, in press; Watkins, 2011). This dearth of attention may stem, in part, from a lack of formal training in models of supervision among most supervisors, which perpetuates the notion that supervision is more of an art than a science (Falender & Shafranske, 2004; Kavanagh et al., 2003; Scott, Ingram, Vitanza, & Smith, 2000). The accumulating data are encouraging, as the evidence for the validity of supervision (i.e., that it leads to expected outcomes) is growing (Helge Rønnestad & Ladany, 2006; Watkins, 2011). Indeed, empirical research demonstrates that supervision can benefit supervisees, particularly with regard to knowledge of treatment, skill acquisition, self-efficacy, and the supervisee’s therapeutic alliance (Hilsenroth, Defife, Blagys, & Ackerman, 2006; Watkins, 2011), as well as client outcomes such as decreased dropout rates, stronger working alliance, and symptom reduction (Bambling, King, Raue, Schweitzer, & Lambert, 2006; Callahan, Almstrom, Swift, Borja, & Heath, 2009; Stein & Lambert, 1995). Further, research has identified potentially harmful aspects of supervision, including dismissing supervisees’ ideas and emotions (Gray, Ladany, Walker, & Ancis, 2001) and ignoring or avoiding responsibility for behaviors that lead to supervisor-supervisee conflict (Nelson & Friedlander, 2001). These findings likely parallel the goals of therapeutic change (parallel process; see Helge Rønnestad & Ladany, 2006) and consequently reflect similar processes, such as the centrality of the supervisor-supervisee relationship, raising awareness of problematic supervisee behaviors, creating opportunities for corrective experiences, managing negative emotions effectively, and promoting skill acquisition and autonomy development.
Recent advances also have been made with regard to supervisory assessments of trainee competence, which is one vehicle for evaluating students’ development of expertise. Although competency assessments were previously somewhat idiosyncratic and program specific, recent efforts resulted in more systematic attempts to derive consensus models of competence-based training. In fact, together with other training councils, the Association of Psychology Postdoctoral and Internship Centers cosponsored a conference in 2002 examining competencies in professional psychology, leading to a systematic effort to define competencies across key domains and developmental levels (Falender et al., 2004; Rodolfa et al., 2005). Particularly for predoctoral programs, major advantages of this approach include a de-emphasis on somewhat arbitrarily defined “clinical hours” and increased standardization across programs (Cellucci & Reports, 2010). Such standardization also provides a basis from which to conduct research on the reliability and validity of these ratings (Rings, Genuchi, Hall, Angelo, & Cornish, 2009). This approach provides ample guidance for research efforts, though much work remains to be done. Nevertheless, the EBP framework provides a useful foundation for conceptualizing gaps in our knowledge of supervision and supervisory assessments and for conducting research to address these gaps.
Not only does the evidence base with regard to supervision demonstrate advances in the field that bridge the gap between science and practicum activities but, most germane to this article, it also provides methodologies to enhance an EBP approach within individual supervisory relationships. For example, Hilsenroth, DeFife, and colleagues (2006) demonstrated that trainees significantly increased use of psychodynamic-interpersonal techniques over the course of treatment with ongoing psychodynamic supervision, vis-à-vis independent clinical ratings made with the Comparative Psychotherapy Process Scale (CPPS; Hilsenroth, Blagys, Ackerman, Bonge, & Blais, 2005), a 20-item descriptive measure designed to assess therapist activity and techniques. This type of work is important, as it demonstrates objectively rated increases in competence in a particular set of behaviors and can be adapted within an individual supervisory relationship. For example, to encourage trainees to think about the “internal validity” of their treatment, increase awareness of their actual (in contrast to perceived) behavior, and provide a more objective way of assessing competence, an exercise could involve both supervisor and supervisee rating the trainee’s audiotaped or videotaped session using the CPPS to compare their understanding of the interventions used in session. We hypothesize that as training increases, (a) scores will increase on the CPPS, indicating greater use of characteristic aspects of the selected therapeutic orientation; (b) the interrater reliability between trainee self-assessment and supervisor assessment will increase; and (c) opportunities will arise to test whether increasing CPPS scores and interrater reliability are associated with improvement in client outcome (validity).
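The trainee–supervisor rating exercise described above can be sketched concretely. In the sketch below, the item ratings are hypothetical, made-up values (not actual CPPS data), and a simple Pearson correlation is used as one rough index of rater agreement; a formal analysis of interrater reliability would more typically use an intraclass correlation coefficient.

```python
# Illustrative sketch only: comparing a supervisor's and a trainee's
# item-level ratings of the same recorded session on a 20-item measure
# such as the CPPS. All ratings below are hypothetical.

def pearson_r(x, y):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical item ratings for one taped session (e.g., a 0-6 scale).
supervisor = [4, 5, 3, 6, 2, 4, 5, 3, 4, 6, 1, 2, 5, 4, 3, 5, 4, 2, 6, 3]
trainee    = [3, 5, 4, 5, 2, 4, 4, 3, 5, 6, 2, 2, 4, 4, 3, 5, 5, 3, 6, 3]

agreement = pearson_r(supervisor, trainee)
print(f"Trainee-supervisor agreement (r) = {agreement:.2f}")
```

Tracking such an agreement index across successive taped sessions would offer one concrete way to test the hypothesis that trainee–supervisor interrater reliability increases with training.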
Given the mismatch between self-perception and actual behavior in evaluating fidelity to a treatment (Beidas & Kendall, 2010; Carroll, Martino, & Rounsaville, 2010), such an activity may be a fruitful way to increase trainee self-awareness and reduce the discrepancy between perceived and actual behavior over time. Notably, such a strategy involves the use of videotaping or audiotaping therapy sessions, rather than relying on trainee self-report, as well as providing adherence ratings, both of which have been suggested as guidelines for training (Calhoun et al., 1998).
In addition to the use of actual recordings, we agree with suggestions to have more than one supervisor make competency assessments (e.g., based on adherence ratings during recorded sessions), given the systematic biases that supervisors have been shown to exhibit (Gonsalvez & Freestone, 2007; Gonsalvez & McLeod, 2008; Lazar & Mosek, 1993). In a related vein, we also suggest making both absolute and relative competency ratings, to track progress over time and to compare a trainee’s ability with that of peers, respectively. Another possible way to assess clinical competency is client response to treatment. We recognize that, in isolation, this approach may reflect idiosyncrasies in clients’ symptom levels, rather than therapist skill, and may underemphasize client changes that are not indexed by standardized measures. As such, we do not espouse a particular reliance on client outcome as a proxy for supervisory or trainee effectiveness (Helge Rønnestad & Ladany, 2006). However, consistent with EBP and the integration of multiple methods to obtain the best evidence, the use of client outcome data (vis-à-vis ongoing assessment ratings) can provide one method to assess therapist effectiveness. In sum, in an evidence-based approach to training, the performance of a trainee may be evaluated on the basis of his or her skills in delivering treatment, standardized competency ratings, and clients’ treatment outcomes.
Overall, to enhance the science within the art of supervision, we encourage supervisors to model a nonjudgmental stance of empirical curiosity. For example, a supervisor can model this approach by encouraging supervisees to consult the literature, collaborating about hypotheses, and, as discussed previously, collecting data with supervisees from their sessions. In other words, consistent with the evidence-based value of training supervisees to be lifelong learners, supervisors can encourage literature searches and hypothesis testing, rather than assuming the role of expert with the “correct” answer (Falender et al., 2004; Hunsley, 2007). Further, an evidence-based approach to supervision suggests that at least a subset of supervisory sessions may be audio- or video-recorded, analyzed, and critiqued (anonymously) vis-à-vis structured evaluation questionnaires, whether by supervisees, colleagues, or independent agencies, so that supervisors also have an opportunity to receive feedback (Gonsalvez & McLeod, 2008) and to identify discrepancies between perceived and actual behavior (Beidas & Kendall, 2010). We believe that engaging in these activities will increase the scientific zeitgeist of supervision, thereby continuing to instill in trainees the value of an EBP approach. Moreover, these supervisory activities can provide a model from which trainees can extrapolate to their own clinical work, consequently further reducing the gap between what gets taught in the class hour and what takes place during the therapy hour. These recommendations for supervision are consistent with the EBP framework in that we (a) expect supervisors and supervisees to obtain the best available evidence, (b) are interested in clinical expertise for both supervisor and supervisee, and (c) should take into consideration individual characteristics, values, and preferences (of both trainee and client) in determining how supervision should proceed.
It is to this last point regarding client characteristics that we now turn.
Relevant Client Characteristics
Client variables may account for as much as 40% of the variance in treatment outcome, or more (Addis et al., 2006). Thus, part of training in EBP must involve teaching trainees to integrate key client variables into their case conceptualization and treatment plan. Although a range of client characteristics are likely important, in the following sections we focus on individual difference variables, expectations for and willingness to engage in treatment, and client values and preferences.
It goes without saying that clients’ presenting problems are heterogeneous. Individuals with the same disorder may exhibit different symptom constellations; further, those who have the same disorder may have experienced different developmental pathways and risk processes (equifinality), and those with the same risk factors (e.g., abuse) may experience very different outcomes (multifinality; Cicchetti & Rogosch, 1996). As discussed previously, consistent with a developmental psychopathology perspective, EBP thus involves learning to approach the multifaceted “evidence,” which involves training in biological, cognitive, emotional, developmental, contextual, and cultural domains that may influence individuals’ functioning, course, and response to intervention. For example, the work of Westen and colleagues highlights patterns of personality functioning that predict significant variability in adaptive and psychiatric functioning, above and beyond DSM–IV (Diagnostic and Statistical Manual of Mental Disorders, 4th edition) diagnoses, that have direct implications for treatment targets (e.g., perfectionism, emotional constriction, and emotional dysregulation across the eating disorders; Hershenberg, Novotny, & Westen, 2006; Westen & Harnden-Fischer, 2001).
Additionally, knowledge of client characteristics included in controlled clinical trials, such as rates of comorbidity in targeted populations, may help a trainee to choose the most appropriate treatment and to adopt realistic, evidence-based expectations for a client’s response to treatment. For example, one of the largest randomized clinical trials for the treatment of chronic depression demonstrated that about one-third of the sample also had a history of anxiety disorders, one-third had a history of alcohol/substance abuse disorders, and two-thirds met criteria for a coexisting personality disorder (Keller et al., 2000). Thus, when assessing a client who may be seeking therapy for a long-standing mood disorder such as depression, a trainee should know to assess for the presence of concomitant psychological issues that may affect the client’s response to treatment (even if these problems are not identified by the client as targets for treatment) and to learn to tailor treatment plans to anticipate and flexibly accommodate these additional issues as they become salient.
Further, awareness of the effect sizes of controlled outcome studies with targeted populations also can be used to promote therapist awareness of how much change can be attained (on average) with a particular treatment for a particular population and over how many “dosages” of therapy. Consistent with our recommendation to collect ongoing assessment data (e.g., through online software packages; see Lambert, in press; Youn et al., in press), aggregate effect sizes of outcome data gathered from clients in psychology training clinics can be compared with those of clinical trials (Minami et al., 2008; Ogles, Lambert, & Fields, 2002), or response to treatment can be compared with functioning in non-clinical populations (Kraus, Seligman, & Jordan, 2005; Youn et al., in press) via benchmarking approaches. On an individualized level, trainees can compare their clients to expected recovery curves and examine client increases or decreases (measured in standardized scores) across relevant psychosocial and adaptive functioning domains. Statistically examining discrepancies between individual clients and average or expected trajectories of change may enhance trainees’ attention to client characteristics that either promote or hinder the speed of change in therapy and, again, enhance the trainee’s ability to flexibly address possible discrepancies in a planful way. Ideally, as part of selecting the appropriate treatment based on the best research evidence, client characteristics could be used to determine which treatment (or components of treatment) may be appropriate for a particular client and in which circumstances (e.g., Chorpita et al., 2005; Roth et al., 1996). Unfortunately, as a field, we are far from meeting the goal of treatment matching; thus, we recommend selecting treatments based on both evidence and client characteristics, with careful attention to issues related to diversity and cultural differences.
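The benchmarking logic described above can be illustrated with a minimal sketch. All numbers below are hypothetical, and the pre-post Cohen's d standardized on the pretreatment standard deviation is only one of several benchmarking conventions in this literature.

```python
# Illustrative sketch only: benchmarking a training clinic's aggregate
# pre-post outcome against a published clinical-trial effect size.
# All scores and the benchmark value below are hypothetical.

def cohens_d_prepost(pre_scores, post_scores):
    """Pre-post effect size: mean change divided by pretreatment SD."""
    n = len(pre_scores)
    mean_pre = sum(pre_scores) / n
    mean_post = sum(post_scores) / n
    var_pre = sum((x - mean_pre) ** 2 for x in pre_scores) / (n - 1)
    return (mean_pre - mean_post) / var_pre ** 0.5

# Hypothetical symptom-measure scores for a small clinic sample
# (higher scores = more symptomatic).
pre  = [28, 31, 25, 34, 29, 27, 33, 30]
post = [18, 22, 20, 25, 17, 21, 26, 19]

clinic_d = cohens_d_prepost(pre, post)
trial_benchmark_d = 1.2  # hypothetical benchmark from a published trial

print(f"Clinic d = {clinic_d:.2f} vs. trial benchmark d = {trial_benchmark_d}")
```

Comparing the clinic's aggregate d with a trial benchmark in this way gives trainees a rough, standardized reference point for how much change is typically attainable, while the text above notes the caveats of any such comparison.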
In addition to presumably more static client variables such as personality style, we believe it is important that all trainees learn to form hypotheses about clients’ dynamic patterns of interpersonal functioning and to monitor how those patterns are evidenced in the therapeutic relationship as part of their case conceptualization (McCullough, 2000, 2006). Such patterns may be conducive to therapy (e.g., the “good” client who completes her homework to please others) or may interfere with goals of therapy (e.g., the client who has unresolved issues with a parent and projects these issues onto the therapist), and these patterns may be linked to maintaining factors in the client’s presenting problems. Given the robust associations between interpersonal functioning and psychopathology, we encourage programs to teach trainees to examine the interpersonal patterns of the client; to consider how the therapeutic relationship may potentially parallel, reinforce, or exacerbate maladaptive interpersonal patterns (see, e.g., Levenson & Strupp, 1997; McCullough, 2000); and to obtain supervision to determine how to support a more adaptive pattern of relating in interpersonal relationships.
Similarly, trainees must learn to assess and address process-related variables that affect the timing and course of treatment. For example, to address readiness and motivation to change, we recommend that students learn the transtheoretical model of change (Prochaska & DiClemente, 1982) and suggest the possibility of training in motivational interviewing as one technique to address these stages of change (Miller & Rollnick, 1991). Further, because of the impact of client expectancies on treatment outcome (Arnkoff, Glass, & Shapiro, 2002), as we discussed earlier, we also recommend learning to directly address these expectancies at the onset of treatment (Callahan, Aubuchon-Endsley, et al., 2009), preparing clients for their roles in therapy, and developing a collaborative approach (DeFife & Hilsenroth, 2011), which may increase rates of client retention (Swift & Callahan, 2008b).
In terms of client characteristics, values, and preferences, trainees should learn to work with clients as shared decision-makers and recognize clients’ roles as stakeholders in therapy, which requires the development of a treatment plan that is sensitive and responsive to clients’ values (e.g., cultural diversity, spirituality, involvement of family) and preferences (e.g., acceptability of treatment, timing and frequency of intervention, use of medications, individual and/or group therapy; Bauer, 2007; Spring, 2007). One issue that requires attention is how to assess such values and preferences; indeed, there is a dearth of user-friendly, validated instruments for identifying client values and preferences that will inform the decisions that we make (Spring, 2007). However, such information will influence not only case conceptualization and treatment planning, but also trainees’ development of clinical expertise through their learning to assess and apply information about client values and preferences flexibly and effectively (Bauer, 2007). Training consequently should prepare students for attending to these individual client characteristics, as well as quantifying and monitoring these variables to evaluate how these factors affect treatment course and relevant outcomes.
Overall, we recognize that training programs likely differ in the extent to which client characteristics are incorporated into case conceptualization and intervention. For example, clinics training in predominantly interpersonal approaches may consider relevant personality and interpersonal variables that manifest themselves in the “here and now” of the client–therapist interaction as one of the most salient aspects of active intervention, whereas more cognitive behaviorally oriented approaches may focus largely on progress that occurs outside of session. Regardless of where the focus of change takes place (e.g., inside or outside of the session), an EBP approach includes the incorporation of client variables into the case conceptualization and treatment plan, which may directly impact the development of a strong alliance. We hypothesize that if assessing for and responding to client variables is reinforced in a training program, trainees might view responding to client ambivalence and even resistance as part of the intervention, rather than as something standing in the way of it. However, this is ultimately an empirical question, and tools for assessing and monitoring these variables will be useful for evaluating this hypothesis. In sum, trainees must not only learn what type of disorder the client has, but also know what kind of client has the disorder (Reed, Kihlstrom, & Messer, 2006). As such, training in EBP requires that trainees increase their own awareness, assessment, and implementation of knowledge related to client variables that will affect the timing, style, and content of their intervention.
Conclusions
EBP is the integration of the best available research with clinical expertise in the context of client characteristics, values, and preferences. Consistent with the spirit of EBP, we have provided suggestions for methodologies to increase our ability as a field to train students to function as empirically informed and clinically sensitive clinicians and clinical researchers, with a particular emphasis on clinical training (summarized in Table 1).
A few caveats must be kept in mind. First, because training is contextual, trainees and supervisors are embedded within larger systems. As such, the adoption of EBPs into training needs to be accepted by all faculty; much as with a key client variable such as resistance (Sanders & Murphy-Brennan, 2010), faculty motivation to change may need to be addressed. Second, further empirical study is needed to determine the appropriate focus of training content (Beidas & Kendall, 2010; Calhoun et al., 1998; McFall, 1991), including “how” it gets taught (e.g., use of multiple approaches to learning), as well as “what,” “how much” (e.g., duration), and “when” (e.g., sequence, spacing) (Chu, 2008; Cucciare et al., 2008).
Notably, in addition to documenting clinical notes and conducting single-case studies, there are numerous opportunities for psychotherapy research that can be incorporated into the clinical setting to answer these questions and to assess how well specific training activities are meeting these goals. For example, data collected through coursework, assessment, and supervision can be included in the clinic’s database and used to determine training variables that are associated with increases in trainee competency and client outcome. The overall goal would be to support a research infrastructure within the training clinic, to collect data that could be used to evaluate particular components of treatment or training, and to provide one resource that could be combined across different training clinics, consistent with Practice Research Networks (Borkovec, 2004; Drabick & Goldfried, 2000).
The larger issue in the field is the chasm between research and practice, though this gap is narrowing and may be viewed through either an optimistic or a pessimistic lens (Teachman et al., in press). We believe that there are ample opportunities within the current climate to train students to act and think in ways that will reduce the polarization both now and over time. Indeed, doing so will permit trainees to begin their own careers with this way of thinking and behaving firmly embedded in their approach to clinical practice and clinical research, and to train others to adopt this approach and associated strategies. We write this training piece with great optimism, given the wealth of resources at our disposal. We hope that our suggestions contribute to others’ thinking about ways to facilitate training in EBP and to further identify opportunities for bridging clinical research and clinical practice.
Acknowledgments
We would like to thank Marv Goldfried, Bethany Teachman, and Barry Wolfe for their helpful comments on earlier versions of this manuscript. Preparation of this manuscript was supported in part by NIMH 1K01 MH07317-01A2 awarded to Dr. Drabick.
Contributor Information
Rachel Hershenberg, Department of Psychology, Stony Brook University.
Deborah A. G. Drabick, Department of Psychology, Temple University.
Dina Vivian, Department of Psychology, Stony Brook University.
References
- Ackerman SJ, Hilsenroth MJ. A review of therapist characteristics and techniques negatively impacting the therapeutic alliance. Psychotherapy: Theory, Research, Practice, Training. 2001;38:171–185. [Google Scholar]
- Ackerman SJ, Hilsenroth MJ. A review of therapist characteristics and techniques positively impacting the therapeutic alliance. Clinical Psychology Review. 2003;23:1–33. doi: 10.1016/s0272-7358(02)00146-0. [DOI] [PubMed] [Google Scholar]
- Addis ME, Cardemil EV, Duncan BL, Miller SD. Does manualization improve therapy outcomes? In: Norcross JC, Beutler LE, Levant RF, editors. Evidence-based practices in mental health: Debate and dialogue on the fundamental questions. Washington, DC: American Psychological Association; 2006. pp. 131–160. [Google Scholar]
- Allen LB, McHugh RK, Barlow DH. Emotional disorders: A unified protocol. In: Barlow DH, editor. Clinical handbook of psychological disorders: A step-by-step treatment manual. 4th ed. New York, NY: Guilford Press; 2008. pp. 216–249. [Google Scholar]
- Arnkoff DB, Glass CR, Shapiro SJ. Expectations and preferences. In: Norcross JC, editor. Psychotherapy relationships that work: Therapist contributions and responsiveness to patients. New York, NY: Oxford University Press; 2002. pp. 335–356. [Google Scholar]
- Bambling M, King R, Raue P, Schweitzer R, Lambert W. Clinical supervision: Its influence on client-rated working alliance and client symptom reduction in the brief treatment of major depression. Psychotherapy Research. 2006;16:317–331. [Google Scholar]
- Bauer RM. Evidence-based practice in psychology: Implications for research and research training. Journal of Clinical Psychology. 2007;63:685–694. doi: 10.1002/jclp.20374. [DOI] [PubMed] [Google Scholar]
- Beidas RS, Kendall PC. Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice. 2010;17:1–30. doi: 10.1111/j.1468-2850.2009.01187.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Binder JL, Strupp HH. “Negative Process”: A recurrently discovered and underestimated facet of therapeutic process and outcome in the individual psychotherapy of adults. Clinical Psychology: Science and Practice. 1997;4:121–139. [Google Scholar]
- Borckardt JJ, Nash MR, Murphy MD, Moore M, Haw D, O’Neil P. Clinical practice as natural laboratory for psychotherapy research: A guide to case-based time-series analysis. American Psychologist. 2008;63:77–95. doi: 10.1037/0003-066X.63.2.77. [DOI] [PubMed] [Google Scholar]
- Borkovec TD. Training clinic research and the possibility of a national training clinics practice research network. The Behavior Therapist. 2002;25:98–103. [Google Scholar]
- Borkovec TD. Research in training clinics and practice research networks: A Route to the Integration of Science and Practice. Clinical Psychology: Science and Practice. 2004;11:211–215. [Google Scholar]
- Boswell JF, Nelson DA, Nordberg SS, McAleavey AA, Castonguay LG. Competency in integrative psychotherapy: Perspectives on training and supervision. Psychotherapy Theory, Research, Practice, Training. 2010;47:3–11. doi: 10.1037/a0018848. [DOI] [PubMed] [Google Scholar]
- Calhoun KS, Moras K, Pilkonis PA, Rehm LP. Empirically supported treatments: Implications for training. Journal of Consulting and Clinical Psychology. 1998;66:151–162. doi: 10.1037//0022-006x.66.1.151. [DOI] [PubMed] [Google Scholar]
- Callahan JL, Almstrom CM, Swift JK, Borja SE, Heath CJ. Exploring the contribution of supervisors to intervention outcomes. Training and Education in Professional Psychology. 2009;3:72–77. [Google Scholar]
- Callahan JL, Aubuchon-Endsley N, Borja SE, Swift JK. Pretreatment expectancies and premature termination in a training clinic environment. Training and Education in Professional Psychology. 2009;3:111–119. [Google Scholar]
- Callahan JL, Hynan MT. Models of psychotherapy outcome: Are they applicable in training clinics? Psychological Services. 2005;2:65–69. [Google Scholar]
- Carroll KM, Martino S, Rounsaville BJ. No train, no gain? Clinical Psychology: Science and Practice. 2010;17:36–40. [Google Scholar]
- Castonguay LG, Beutler LE. Principles of therapeutic change that work. New York, NY: Oxford University Press; 2006. [Google Scholar]
- Castonguay LG, Goldfried MR, Wiser S, Raue PJ, Hayes AM. Predicting the effect of cognitive therapy for depression: A study of unique and common factors. Journal of Consulting and Clinical Psychology. 1996;64:497–504. [PubMed] [Google Scholar]
- Cellucci T, Reports M. A competencies evaluation and tracking system for pre-doctoral practicum training. Poster presented at the American Psychological Association Annual Convention; San Diego, CA. 2010. [Google Scholar]
- Chambless DL, Crits-Christoph P, Wampold BE, Norcross JC, Lambert MJ, Bohart AC, Johannsen BE. What should be validated? In: Norcross JC, Beutler LE, Levant RF, editors. Evidence-based practices in mental health: Debate and dialogue on the fundamental questions. Washington, DC: American Psychological Association; 2006. pp. 191–256. [Google Scholar]
- Chorpita B, Daleiden E, Weisz J. Identifying and selecting the common elements of evidence based interventions: A distillation and matching model. Mental Health Services Research. 2005;7:5–20. doi: 10.1007/s11020-005-1962-6. [DOI] [PubMed] [Google Scholar]
- Chu BC. Empirically supported training approaches: The who, what, and how of disseminating psychological interventions. Clinical Psychology: Science and Practice. 2008;15:308–312. [Google Scholar]
- Cicchetti D, Rogosch FA. Equifinality and multifinality in developmental psychopathology. Development and Psychopathology. 1996;8:597–600. [Google Scholar]
- Clarkin JF, Levy KN. Influence of client variables on psychotherapy. In: Lambert MJ, editor. Handbook of psychotherapy and behavior change. 5th ed. New York, NY: Wiley & Sons; 2003. [Google Scholar]
- Collins FL, Leffingwell TR, Belar CD. Teaching evidence-based practice: Implications for psychology. Journal of Clinical Psychology. 2007;63:657–670. doi: 10.1002/jclp.20378. [DOI] [PubMed] [Google Scholar]
- Cucciare MA, Weingardt KR, Villafranca S. Using blended learning to implement evidence-based psychotherapies. Clinical Psychology: Science and Practice. 2008;15:299–307. [Google Scholar]
- Dawes RM, Faust D, Meehl PE. Clinical versus actuarial judgment. Science. 1989;243:1668–1674. doi: 10.1126/science.2648573. [DOI] [PubMed] [Google Scholar]
- DeFife JA, Hilsenroth MJ. Starting off on the right foot: Common factor elements in early psychotherapy process. Journal of Psychotherapy Integration. 2011;21:172–191. [Google Scholar]
- DiLillo D, McChargue D. Implementing elements of evidence-based practice into scientist-practitioner training at the University of Nebraska-Lincoln. Journal of Clinical Psychology. 2007;63:671–684. doi: 10.1002/jclp.20375. [DOI] [PubMed] [Google Scholar]
- Drabick DA, Goldfried MR. Training the scientist- practitioner for the 21st century: Putting the bloom back on the rose. Journal of Clinical Psychology. 2000;56:327–340. doi: 10.1002/(sici)1097-4679(200003)56:3<327::aid-jclp9>3.0.co;2-y. [DOI] [PubMed] [Google Scholar]
- Drabick DA, Kendall PC. Developmental psychopathology and the diagnosis of mental health problems among youth. Clinical Psychology: Science and Practice. 2010;17:272–280. doi: 10.1111/j.1468-2850.2010.01219.x.
- Drabick DA, Steinberg L. Developmental psychopathology. In: Brown B, Prinstein M, editors. Encyclopedia of Adolescence. Vol. 3. San Diego: Academic Press; 2011. pp. 136–142.
- Duncan BL, Miller SD, Reynolds L, Sparks J, Claud D, Brown J, Johnson LD. The session rating scale: Psychometric properties of a ‘working’ alliance scale. Journal of Brief Therapy. 2003;3:3–12.
- Eells TD. Handbook of psychotherapy case formulation. New York, NY: The Guilford Press; 2007.
- Falender CA, Cornish JAE, Goodyear R, Hatcher R, Kaslow NJ, Leventhal G, Grus C. Defining competencies in psychology supervision: A consensus statement. Journal of Clinical Psychology. 2004;60:771–785. doi: 10.1002/jclp.20013.
- Falender CA, Shafranske EP. Clinical supervision: A competency-based approach. Washington, DC: American Psychological Association; 2004.
- Falzon L, Davidson KW, Bruns D. Evidence searching for evidence-based psychology practice. Professional Psychology: Research and Practice. 2010;41:550–557. doi: 10.1037/a0021352.
- Gard G, Tremblay G, DiLillo D, Pantesco V. Facilitating research in training clinics: Aspiring to the scientist-practitioner ideal. The Behavior Therapist. 2002;25:103–106.
- Garfield SL. Research on client variables in psychotherapy. In: Bergin AE, Garfield SL, editors. Handbook of psychotherapy and behavior change. 4th ed. New York, NY: Wiley & Sons; 1994.
- Goldfried MR, Davila J. The role of relationship and technique in therapeutic change. Psychotherapy: Theory, Research, Practice, Training. 2005;42:421–430.
- Goldfried MR, Wolfe BE. Psychotherapy practice and research: Repairing a strained alliance. American Psychologist. 1996;51:1007–1016. doi: 10.1037//0003-066x.51.10.1007.
- Goldfried MR. Toward the delineation of therapeutic change principles. American Psychologist. 1980;35:991–999. doi: 10.1037//0003-066x.35.11.991.
- Gonsalvez CJ, Freestone J. Field supervisors’ assessments of trainee performance: Are they reliable and valid? Australian Psychologist. 2007;42:23–32.
- Gonsalvez CJ, McLeod HJ. Toward the science-informed practice of clinical supervision: The Australian context. Australian Psychologist. 2008;43:79–87.
- Gray LA, Ladany N, Walker JA, Ancis JR. Psychotherapy trainees’ experience of counterproductive events in supervision. Journal of Counseling Psychology. 2001;48:371–383.
- Harmon C, Hawkins EJ, Lambert MJ, Slade K, Whipple JS. Improving outcomes for poorly responding clients: The use of clinical support tools and feedback to clients. Journal of Clinical Psychology. 2005;61:175–185. doi: 10.1002/jclp.20109.
- Hart D, Marmorstein NR. Neighborhoods and genes and everything in between: Understanding adolescent aggression in social and biological contexts. Development and Psychopathology. 2009;21:961–973. doi: 10.1017/S0954579409000510.
- Helge Rønnestad M, Ladany N. The impact of psychotherapy training: Introduction to the special section. Psychotherapy Research. 2006;16:261–267.
- Henry WP, Strupp HH, Butler SF, Schacht TE, Binder JL. Effects of training in time-limited dynamic psychotherapy: Changes in therapist behavior. Journal of Consulting and Clinical Psychology. 1993;61:434–440. doi: 10.1037//0022-006x.61.3.434.
- Hershenberg R, Novotny C, Westen D. Personality subtypes and eating disorders: Replication and validation of a taxonomy. Emory University; 2006.
- Hilsenroth MJ, Blagys MD, Ackerman SJ, Bonge DR, Blais MA. Measuring psychodynamic-interpersonal and cognitive-behavioral techniques: Development of the comparative psychotherapy process scale. Psychotherapy: Theory, Research, Practice, Training. 2005;42:340–356.
- Hilsenroth MJ, Cromer TD. Clinician interventions related to alliance during the initial interview and psychological assessment. Psychotherapy: Theory, Research, Practice, Training. 2007;44:205–218. doi: 10.1037/0033-3204.44.2.205.
- Hilsenroth MJ, Defife JA, Blagys MD, Ackerman SJ. Effects of training in short-term psychodynamic psychotherapy: Changes in graduate clinician technique. Psychotherapy Research. 2006;16:293–305.
- Hunsley J, Mash EJ. Evidence-based assessment. Annual Review of Clinical Psychology. 2007;3:29–51. doi: 10.1146/annurev.clinpsy.3.022806.091419.
- Hunsley J. Training psychologists for evidence-based practice. Canadian Psychology. 2007;48:32–42.
- Jensen PS, Hoagwood K. The book of names: DSM-IV in context. Development and Psychopathology. 1997;9:231–249. doi: 10.1017/s0954579497002034.
- Kavanagh DJ, Spence SH, Strong J, Wilson J, Sturk H, Crown N. Supervision practices in allied mental health: A staff survey. Mental Health Services Research. 2003;5:187–195. doi: 10.1023/a:1026223517172.
- Keller MB, McCullough JP, Klein DN, Arnow B, Dunner DL, Gelenberg AJ, Zajecka J. A comparison of nefazodone, the cognitive behavioral-analysis system of psychotherapy, and their combination for the treatment of chronic depression. New England Journal of Medicine. 2000;342:1462–1470. doi: 10.1056/NEJM200005183422001.
- Kendall PC, Beidas RS. Smoothing the trail for dissemination of evidence-based practices for youth: Flexibility within fidelity. Professional Psychology: Research and Practice. 2007;38:13–20.
- Kendjelic EM, Eells TD. Generic psychotherapy case formulation training improves formulation quality. Psychotherapy: Theory, Research, Practice, Training. 2007;44:66–77. doi: 10.1037/0033-3204.44.1.66.
- Kim D, Wampold BE, Bolt DM. Therapist effects in psychotherapy: A random-effects modeling of the National Institute of Mental Health Treatment of Depression Collaborative Research Program data. Psychotherapy Research. 2006;16:161–172.
- Kraus DR, Seligman D, Jordan JR. Validation of a behavioral health treatment outcome and assessment tool designed for naturalistic settings: The Treatment Outcome Package. Journal of Clinical Psychology. 2005;61:285–314. doi: 10.1002/jclp.20084.
- Kuyken W, Padesky CA, Dudley R. Collaborative case conceptualization: Working effectively with clients in cognitive-behavioral therapy. New York, NY: The Guilford Press; 2009.
- Lambert MJ. Helping clinicians to use and learn from research-based systems: The OQ-Analyst. Psychotherapy. doi: 10.1037/a0027110. (in press).
- Lazar A, Mosek A. The influence of the field instructor-student relationship on evaluation of students’ practice. The Clinical Supervisor. 1993;11:111–120.
- Levenson H, Strupp HH. Cyclical maladaptive patterns: Case formulation in time-limited dynamic psychotherapy. In: Eells TD, editor. Handbook of psychotherapy case formulation. New York, NY: Guilford Press; 1997. pp. 84–115.
- Mash EJ, Hunsley J. Evidence-based assessment of child and adolescent disorders: Issues and challenges. Journal of Clinical Child & Adolescent Psychology. 2005;34:362–379. doi: 10.1207/s15374424jccp3403_1.
- McCullough JP. Treatment for chronic depression: Cognitive behavioral analysis system of psychotherapy. New York, NY: The Guilford Press; 2000.
- McCullough JP. Treating chronic depression with disciplined personal involvement: Cognitive behavioral analysis system of psychotherapy (CBASP). New York, NY: Springer Publishing Company, Inc; 2006.
- McFall RM. Manifesto for a science of clinical psychology. Clinical Psychologist. 1991;44:75–88.
- Miller SD, Duncan BL, Brown J, Sorrell R, Chalk MB. Using outcome to inform and improve treatment outcomes. Journal of Brief Therapy. 2006;5:5–22.
- Miller WR, Rollnick S. Motivational interviewing: Preparing people to change addictive behavior. New York, NY: Guilford Press; 1991.
- Miller WR, Yahne CE, Moyers TB, Martinez J, Pirritano M. A randomized trial of methods to help clinicians learn motivational interviewing. Journal of Consulting and Clinical Psychology. 2004;72:1050–1062. doi: 10.1037/0022-006X.72.6.1050.
- Minami T, Davies DR, Tierney SC, Bettmann JE, McAward SM, Averill LA, Wampold BE. Preliminary evidence on the effectiveness of psychological treatments delivered at a university counseling center. Journal of Counseling Psychology. 2009;56:309–320.
- Minami T, Wampold BE, Serlin RC, Hamilton EG, Brown GS, Kircher JC. Benchmarking the effectiveness of psychotherapy treatment for adult depression in a managed care environment: A preliminary study. Journal of Consulting and Clinical Psychology. 2008;76:116–124. doi: 10.1037/0022-006X.76.1.116.
- Nathan PE, Gorman JM, editors. A guide to treatments that work. New York, NY: Oxford University Press; 1998.
- Nelson ML, Friedlander ML. A close look at conflictual supervisory relationships: The trainee’s perspective. Journal of Counseling Psychology. 2001;48:384–395.
- Ogles BM, Lambert MJ, Fields SA. Essentials of outcome assessment. New York, NY: John Wiley & Sons, Inc; 2002.
- Page AC, Stritzke WGK, McLean NJ. Toward science-informed supervision of clinical case formulation: A training model and supervision method. Australian Psychologist. 2008;43:88–95.
- Page AC, Stritzke WGK. Clinical psychology for trainees: Foundations of science-informed practice. Cambridge, United Kingdom: Cambridge University Press; 2006.
- Prochaska JO, DiClemente CC. Transtheoretical therapy: Toward a more integrative model of change. Psychotherapy: Theory, Research & Practice. 1982;19:276–288.
- Reed GM, Kihlstrom JF, Messer SB. What qualifies as evidence of effective practice? In: Norcross JC, Beutler LE, Levant RF, editors. Evidence-based practices in mental health: Debate and dialogue on the fundamental questions. Washington, DC: American Psychological Association; 2006. pp. 13–55.
- Reiser R, Milne D. Supervising cognitive-behavioral psychotherapy: Pressing needs, impressing possibilities. Journal of Contemporary Psychotherapy. (in press).
- Rings JA, Genuchi MC, Hall MD, Angelo M-A, Cornish JA. Is there consensus among predoctoral internship training directors regarding clinical supervision competencies? A descriptive analysis. Training and Education in Professional Psychology. 2009;3:140–147.
- Rodolfa E, Bent R, Eisman E, Nelson P, Rehm L, Ritchie P. A cube model for competency development: Implications for psychology educators and regulators. Professional Psychology: Research and Practice. 2005;36:347–354.
- Roth A, Fonagy P, Parry G, Target M, Woods R. What works for whom? A critical review of psychotherapy research. New York, NY: Guilford Press; 1996.
- Rutter M, Sroufe LA. Developmental psychopathology: Concepts and challenges. Development and Psychopathology. 2000;12:265–296. doi: 10.1017/s0954579400003023.
- Safran JD, Muran JC, Samstag LW, Stevens C. Repairing alliance ruptures. Psychotherapy: Theory, Research, Practice, Training. 2001;38:406–412.
- Safran JD, Muran JC. Resolving therapeutic alliance ruptures: Diversity and integration. Journal of Clinical Psychology. 2000;56:233–243. doi: 10.1002/(sici)1097-4679(200002)56:2<233::aid-jclp9>3.0.co;2-3.
- Sanders MR, Murphy-Brennan M. Creating conditions for success beyond the professional training environment. Clinical Psychology: Science and Practice. 2010;17:31–35.
- Sauer E, Huber D. Implementing the Boulder model of training in a psychology training clinic. Journal of Contemporary Psychotherapy. 2007;37:221–228.
- Scott KJ, Ingram KM, Vitanza SA, Smith NG. Training in supervision: A survey of current practices. The Counseling Psychologist. 2000;28:403–422.
- Spring B. Evidence-based practice in clinical psychology: What it is, why it matters; what you need to know. Journal of Clinical Psychology. 2007;63:611–631. doi: 10.1002/jclp.20373.
- Stein DM, Lambert MJ. Graduate training in psychotherapy: Are therapy outcomes enhanced? Journal of Consulting and Clinical Psychology. 1995;63:182–196. doi: 10.1037//0022-006x.63.2.182.
- Steinberg L, Avenevoli S. The role of context in the development of psychopathology: A conceptual framework and some speculative propositions. Child Development. 2000;71:66–74. doi: 10.1111/1467-8624.00119.
- Stiles WB, Barkham M, Mellor-Clark J, Connell J. Effectiveness of cognitive-behavioural, person-centred, and psychodynamic therapies in UK primary-care routine practice: Replication in a larger sample. Psychological Medicine. 2007;38:677–688. doi: 10.1017/S0033291707001511.
- Stricker G. The relationship of research to clinical practice. American Psychologist. 1992;47:543–549.
- Swets JA, Dawes RM, Monahan J. Psychological science can improve diagnostic decisions. Psychological Science in the Public Interest. 2000;1:1–26. doi: 10.1111/1529-1006.001.
- Swift JK, Callahan JL. A delay discounting measure of great expectations and the effectiveness of psychotherapy client decision making. Professional Psychology: Research and Practice. 2008a;39:581–588.
- Swift JK, Callahan JL. Decreasing treatment drop-out through pre-treatment education. Poster presented at the 2008 OPA Annual Convention; Tulsa, OK. 2008b.
- Teachman BA, Drabick DA, Hershenberg R, Vivian D, Wolfe BE, Goldfried MR. Bridging the gap between clinical research and clinical practice: Introduction to the special section. Psychotherapy. doi: 10.1037/a0027346. (in press).
- Wampold BE. Psychotherapy: The humanistic (and effective) treatment. American Psychologist. 2007;62:857–873. doi: 10.1037/0003-066X.62.8.857.
- Watkins CE. Psychotherapy supervision since 1909: Some friendly observations about its first century. Journal of Contemporary Psychotherapy. 2011;41:57–67.
- Weinberger J. Common factors aren’t so common: The common factors dilemma. Clinical Psychology: Science and Practice. 1995;2:45–69.
- Westen D, Harnden-Fischer J. Personality profiles in eating disorders: Rethinking the distinction between Axis I and Axis II. American Journal of Psychiatry. 2001;158:547–562. doi: 10.1176/appi.ajp.158.4.547.
- Wolf A, Goldfried MR, Muran JC, editors. Transforming negative reactions to clients: From frustration to compassion. Washington, DC: American Psychological Association; 2012.
- Wolfe BE. Two chair dialogue between my research head and my therapist head. Psychotherapy. (in press).
- Youn SJ, Kraus D, Castonguay LG. The treatment outcome package: Facilitating practice and clinically relevant research. Psychotherapy. doi: 10.1037/a0027932. (in press).