Author manuscript; available in PMC: 2016 Jan 31.
Published in final edited form as: Child Youth Care Forum. 2014 Aug 7;44(1):133–157. doi: 10.1007/s10566-014-9274-x

Clinical Decision-Making in Community Children’s Mental Health: Using Innovative Methods to Compare Clinicians With and Without Training in Evidence-Based Treatment

Mary J Baker-Ericzén a, Melissa M Jenkins b, Soojin Park a, Ann F Garland a,c
PMCID: PMC4397566  NIHMSID: NIHMS619762  PMID: 25892901

Abstract

Background

Mental health professionals’ decision-making practice is an area of increasing interest and importance, especially in the pediatric research and clinical communities.

Objective

The present study explored the role of prior training in evidence-based treatments on clinicians’ assessment and treatment formulations using case vignettes. Specifically, study aims included using the Naturalistic Decision Making (NDM) cognitive theory to 1) examine potential associations between EBT training and decision-making processes (novice versus expert type), and 2) explore how client and family contextual information affects clinical decision-making.

Methods

Forty-eight clinicians across two groups (EBT trained=14, Not EBT trained=34) participated. Clinicians were comparable on professional experience, demographics, and discipline. The quasi-experimental design used an analog “think aloud” method where clinicians read case vignettes about a child with disruptive behavior problems and verbalized case conceptualization and treatment planning out-loud. Responses were coded according to NDM theory.

Results

MANOVA results were significant for EBT training status such that EBT trained clinicians’ displayed cognitive processes more closely aligned with “expert” decision-makers and non-EBT trained clinicians’ decision processes were more similar to “novice” decision-makers, following NDM theory. Non-EBT trained clinicians assigned significantly more diagnoses, provided less detailed treatment plans and discussed fewer EBTs. Parent/family contextual information also appeared to influence decision-making.

Conclusion

This study offers a preliminary investigation of the possible broader impacts of EBT training and potential associations with development of expert decision-making skills. Targeting clinicians’ decision-making may be an important avenue to pursue within dissemination-implementation efforts in mental health practice.

Keywords: clinical decision-making, clinician training, evidence-based treatment (EBT), naturalistic decision making (NDM), community-based services, behavior disorders


Given discouraging data regarding the limited effectiveness of routine mental health care for children (Trask & Garland, 2012; Weisz & Jensen, 2001; Weisz, 2004), there have been numerous national calls for improved dissemination and implementation of evidence-based treatments (EBTs) (Hogan, 2003). At present, efforts to implement EBTs largely rely on training clinicians in individual empirically-supported treatment manuals or protocols. To this end, implementation programs have trained increasing numbers of clinicians, providing an opportunity to further investigate the training methodologies being utilized in clinical practice (Becker & Stirman, 2011). To complement the individual treatment protocol training model, some researchers propose greater attention to a trans-treatment model of clinical skills, suggesting that empirically supported treatments share identifiable elements across several individual protocols and that the trans-treatment approach may be more effective in practice and more efficient in terms of training clinicians (Chorpita, Daleiden, & Weisz, 2005; Garland, Bickman, & Chorpita, 2010; McHugh, Murray, & Barlow, 2009). A major component of the common elements approach is the use of clinical decision-making, which often involves the use of an algorithm-type approach to guide decisions (Chorpita & Daleiden, 2014). This type of clinical skill development may involve the growth of “meta-cognitive skills” involved in clinical decision-making that can be used across problem areas to improve the effectiveness of treatment while reducing training demands.

Within the larger evidence-based framework, clinical decision-making plays a crucial role, so much so that definitions of evidence-based practice in medicine and psychology highlight clinical decision-making in their descriptions of what evidence-based practice is (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). Moreover, according to Patel, Kaufman and Arocha (2002), while the principal thrust of EBT is on the dissemination of cutting-edge clinical evidence, the mere availability of evidence does not guarantee that it will be applied appropriately. Effective decision-making skills and strategies are essential for successful delivery of evidence-based practices. Spring (2008) proposes decision-making as a lynchpin of the evidence-based practice process. Despite its apparent importance, there is unfortunately limited research regarding community-based mental health clinicians’ decision-making (Grasso, Rothschild, Genest, & Bates, 2003; Nath & Marcus, 2006).

The extant literature on mental health clinicians’ decision-making has found that, in general, clinicians’ predictions are inconsistent with actuarial, data-based predictions (Dawes, Faust, & Meehl, 1989; Elstein & Schwarz, 2002; Grove, 2005; Meehl, 1954, 1986), and that clinicians are vulnerable to faulty heuristics and cognitive errors in their clinical judgment (Ægisdóttir et al., 2006; Jenkins, Youngstrom, Washburn, & Youngstrom, 2011). For mental health clinicians, biases and errors are associated with minimal agreement across clinicians on clinical diagnoses and assessments, treatment planning, evaluation of progress (Galanter & Patel, 2005; Jensen & Weisz, 2002), and prescribing practices (Pappadopulos et al., 2002). There is also low agreement between clinicians and algorithm-based, data-driven decisions on these same constructs (Grove, Zald, Lebow, Snitz, & Nelson, 2000; Jenkins et al., 2011; Lewczyk, Garland, Hurlburt, Gearity, & Hough, 2003). While the potential biases of clinician judgment have been lamented, there is a need for more research examining mental health clinicians’ decision-making processes within pediatric community services.

Historically, evidence-based assessment (Mass, 2003) and treatment (Weisz, Jensen-Doss, & Hawley, 2006) have not been routinely implemented in community-based mental health care. However, efforts to disseminate and implement EBTs have increased dramatically, including the delivery of evidence-based clinical protocols, the use of treatment guidelines and decision support systems, as well as continuing professional education and/or ongoing clinical training through supervision and consultation methods (Herschell, Kolko, Baumann, & Davis, 2010; Spring, 2008). Training in individual EBT models often emphasizes more focused attention to relevant clinical symptoms or behaviors and systematic organization of clinical information (Schoenwald, et al., 2011). The extent to which these skills are internalized as meta-cognitive skills remains largely unknown. Research on the generalization of specific EBT training beyond the confines of delivery of an individual treatment protocol is needed. Such clinical skill extensions may include meta-cognitive skill consolidation.

Theoretical models and research methods developed in the field of cognitive science, and specifically for medical decision-making research, can be extended to mental health research. In contrast to mental health, much more is known about the psychology of decision-making within medical healthcare settings (see Chapman & Sonnenberg, 2000; Patel et al., 2002 for reviews); at present, these methods have been applied less frequently to mental health settings (Galanter & Patel, 2005). In particular, the cognitive theory of Naturalistic Decision Making (NDM) provides a relevant and useful framework for understanding clinical decision-making processes in health care settings, and by extension, mental health (Kahneman & Klein, 2009). NDM theory was developed to model decision-making processes in naturalistic situations, with scarce or unreliable information, using quality or effectiveness criteria within an evolving-dynamic system (as is usually the case in “real world” community service contexts) (Klein, 2008).

NDM theory differentiates “expert” and “novice” decision-makers based on operationalized criteria assessing decision-making processes (Glaser & Chi, 1988). These classifications do not appear directly related to years of experience as one might intuitively assume, suggesting these distinctions may be related to specific cognitive skill development (Klein, 1997). Thus, applying the expert/novice classification and comparison offers a useful method of assessment of clinical decision-making skills which may reflect meta-cognitive skills associated with training, including training in EBT models (Patel, Arocha, & Kaufman, 1994). This theory and potential application of it to child mental health may have direct implications for how to train clinicians to become experts in the field (Galanter & Patel, 2005).

To provide a summary of the expert/novice classification, we have summarized the results of two comprehensive review articles (Elliot, 2005; Glaser & Chi, 1988). These reviews extracted a list of operational themes associated with “expert” performance based on findings from the literature and present examples of decision-making processes used by experts and novices. These include: 1) type of reasoning, 2) organization of information and deriving hypotheses, 3) attention to information and level of abstraction, 4) finding solutions and 5) incorporating actuarial information and flexibility in application. Specifically, expert decision makers were found to use forward reasoning, in which the individual works forward from a hypothesis to find a problem solution, deciding on a course of action first and then collecting information to confirm or disconfirm it. Experts typically use a highly organized knowledge base, chunking information and disregarding less relevant information. Experts use inferences and principles from established mental models and, thus, make fewer cognitive errors and generate a small set of relevant hypotheses at a high level of abstraction. They also often use rules of thumb or heuristics correctly to find accurate solutions in a timely fashion and are usually flexible in responding to individual characteristics, incorporating these individual differences with actuarial evidence (efficacy/effectiveness data). Alternatively, a novice decision maker typically uses backward reasoning, working backwards from a hypothesis to evaluate different options for a solution and collecting a large amount of information before carrying out any plan of action. Novices are not well organized in their knowledge base, and they generate many hypotheses, many of which are unrelated to the specific problem. They often demonstrate difficulty discriminating relevant from irrelevant information, which results in errors of commission or an inability to make a decision. There is also a tendency for novices to stay focused on detail without using higher levels of abstraction, and they do not effectively use rules of thumb or heuristics to guide them. This approach results in more time taken to reach less effective solutions. Lastly, novices are less likely to adapt protocols to an individual’s characteristics because of uncertainty about being flexible during problem-solving.

Further, the NDM literature also suggests that “situation awareness,” related to screening information for situational relevance, is a core skill that experts use and novices do not (Hutton & Klein, 1999). Applying situation awareness during the decision-making process involves using perceptual-cognitive performance characteristics which include: 1) speed, 2) accuracy, 3) efficiency (completeness of action) and 4) planning (Burns, 2005; Klein, 1993). Examining situation awareness in pediatric mental health clinicians may be a particularly important component of clinical decision-making as there is typically a vast amount of information available to the clinician regarding the child, parent and family context (Baker-Ericzén, Hurlburt, Brookman-Frazee, Jenkins, & Hough, 2010). Although having extensive case information seems advantageous, it can also be problematic. Environments with many features or contexts have been shown to influence decision-making (Whyte, 2001). One can imagine that clinicians resembling novice decision-makers may be inundated with information on the child, parent and family, which may cause cognitive overload and substantially increase the potential for errors and inefficiencies in diagnosis and treatment. Skill in screening abundant information and attending to the most relevant pieces is therefore useful in clinical decision-making, especially within child mental health contexts. Often, training in an EBT involves the use of cognitive performance skills such as attention to relevant clinical cues as well as other “expert” decision-making processes such as forward reasoning approaches, organization of clinical knowledge, solution focus and flexibility in applying treatment strategies based on clinical cues (e.g. Multisystemic Therapy; Henggeler, Schoenwald, Borduin, Rowland & Cunningham, 1998).

To our knowledge, there have been no previous studies that have applied NDM theory to clinicians’ decision-making practices in pediatric mental health settings. Despite this current gap in the mental health literature, it seems highly plausible that expert clinician decision-makers are better positioned to yield positive treatment outcomes than novice decision-makers. Consistent with earlier descriptions, experts are more likely to synthesize relevant case information, engage in more streamlined hypothesis testing, and flexibly pursue treatment options (integrating individual differences as necessary), all within a timely manner. Notably, one study with mental health clinicians found that experts were more accurate in making mental health diagnoses according to the DSM-IV manual (Witteman & van der Bercken, 2007). Findings from studies in other fields (e.g., medicine, physics, military command and control) have also found that expert decision-makers perform at a superior level in making accurate diagnoses or decisions and achieve better outcomes or results than novices in their respective practices (Anzai, 1991; Boshizen & Schmidt, 1992; Feltovich, Spiro, & Coulson, 1997; Serfaty et al., 1997). Relatedly, a recent randomized controlled trial found that mental health professionals who received training in decision-making strategies provided more accurate diagnoses and committed fewer cognitive errors for complex case vignettes than clinicians who received only a brief tutorial on psychiatric symptoms (Jenkins, 2012). Overall, given the typically complex nature of child mental health (Baker-Ericzén et al., 2010), “expert” decision-making seems highly advantageous for addressing and effectively treating youths and their families. As noted earlier, it is also plausible that EBT training may develop clinical decision-making processes in clinicians that are similar to skills found in “expert” decision-makers.

The present study was designed to explore the potential influence of EBT training on clinical decision-making among community-based pediatric mental health clinicians using the cognitive theory of NDM and an established analog “think aloud” research method. The primary aim was to examine whether clinicians who have prior training in an EBT protocol differ in their clinical decision-making from clinicians who have not received such training, controlling for the therapist-identified primary theoretical orientation. We hypothesized that (1) participants’ clinical decision-making processes would be significantly different as a function of whether or not they had received prior training in EBT and that (2) clinicians with prior training in EBT (hereinafter referred to as “EBT trained”) would display decision-making similar to “expert” decision-making whereas clinicians without prior training in EBT (hereinafter referred to as “non-EBT trained”) would display decision-making similar to “novice” decision-making. An important secondary aim was to explore clinical situation awareness, particularly how complex contextual information impacts decision-making. We predicted that EBT trained clinicians would respond differently to contextual factors (e.g., maternal depression, household stressors) embedded in case vignettes than non-EBT trained clinicians, as evidenced by the number and type of diagnoses stated as well as the amount of attention given to child, parent and family factors.

Method

This study used a “think aloud” technique to examine clinical decision-making in community-based clinicians. The think aloud technique is considered an appropriate method for assessing details of decision-making processes and has been described as a model of verbalization (Ericsson & Simon, 1993). This method assesses how decisions are made and draws upon the cognitive information-processing model described by Newell and Simon (Newell & Simon, 1972). In short, it is postulated that information in short-term memory can be verbalized, and thus verbal information collected when participants are instructed to think aloud (vocalizing all their thoughts) accurately reflects their course of thought (Ericsson & Simon, 1993). Therefore, a participant’s verbal report of his/her clinical decision-making reveals a set and sequence of information that can be assessed without altering the participant’s cognitive processes (Fonteyn, Kuipers, & Grobe, 1993). According to established think aloud protocols (Patel & Arocha, 2001), participants were asked to verbalize their thoughts out loud without theorizing about their cognitive processes (refer to the procedure section below for details about the implementation of the think aloud method in this study). These verbalizations produce a verbal protocol which is then coded for analyses (as described below in the “coding scheme” section). The think aloud technique provides insight into how clinicians make decisions when faced with a clinical scenario.

The study was approved by the Rady Children’s Hospital, San Diego Institutional Review Board on human subject protection.

Sample

Forty-eight clinician participants consented and participated in the study. All clinicians were currently or recently (within the past year) practicing as pediatric clinicians in community mental health outpatient settings in one large metropolitan county in Southern California. Clinicians were recruited for study participation from posted flyers and by word-of-mouth at the community clinics. All clinicians identified themselves as having experience in working with children with disruptive behavior disorders and having been “trained” to provide therapy to this population. Each clinician self-selected to participate in the study.

A large number of clinicians initially responded to the study flyer, resulting in a total of 35 non-EBT trained clinicians and 6 EBT trained clinicians. One non-EBT clinician could not be reached for recruitment. More active recruitment methods were necessary to enroll additional EBT trained clinicians, as there turned out to be substantially fewer EBT trained clinicians available in the community clinics than non-EBT trained clinicians. These additional recruitment methods included contacting programs that had participated in EBT research or evaluation projects and announcing the study in staff meetings. The final EBT trained group included 14 clinicians.

EBT trained clinicians were defined as clinicians who self-reported completing a specific EBT training at some time during their clinical career. To qualify as EBT trained, the clinician had to have been trained in a child or family therapy approach cited in a published review as an EBT (i.e. Eyberg, Nelson, & Boggs, 2008). The training needed to include both didactics and ongoing supervision of at least one clinical case over time, with the clinician reporting that treatment fidelity was monitored and reached (the latter criterion is based on research suggesting that didactics alone may be insufficient; Herschell, et al., 2010). There were no restrictions on the clinical problem area for the EBT training.

All of the participating EBT trained clinicians reported training in a cognitive-behavioral or behavioral based EBT. The majority of the EBT trained clinicians (79%) were trained in an EBT for Disruptive Behavior Disorders (DBDs) (i.e. Incredible Years [IY], Parent-Child Interaction Therapy [PCIT], or Multisystemic Therapy [MST]), while a couple were trained in cognitive-behavioral treatment (CBT) for depression or anxiety (14%) and one clinician (7%) was trained in EBTs for autism (Applied Behavior Analysis [ABA] and Pivotal Response Training [PRT]). All non-EBT trained clinicians reported being informed of a number of EBTs either through a workshop, independent reading or a clinical supervisor but did not meet criteria for EBT trained.

Consistent with the clinician distribution of the community at large, groups were naturally balanced with regard to discipline (psychologists, marriage and family therapists [MFT], social workers [SW]) with the exception of psychiatrists; there were no EBT trained psychiatrists in the sample. All EBT trained clinicians endorsed a primary theoretical orientation of CBT and half of the non-EBT clinicians endorsed CBT as their primary orientation. Because all EBT trained clinicians also reported CBT as their primary orientation, CBT orientation was controlled for in comparisons and analyses. Table 1 lists clinician characteristics presented in three groups (EBT/CBT, non-EBT/CBT, non-EBT/non-CBT) for demographic comparisons. The non-EBT/CBT group endorsed their primary orientation as CBT but did not indicate any specific training or supervision experiences in any EBT model, so the level of knowledge and skill obtained in CBT is unknown.

Table 1.

Clinicians’ demographics and clinical factors by group

Non-EBT/Non-CBT
n=17
Non EBT / CBT
n=17
EBT & CBT
n=14
Total
n=48
Age: Mean (SD) 39.47 (11.34) 31.93 (6.52) 31.24 (3.25) 34.25 (8.57)
Range 27–59 24–52 27–36 24–59
Gender: Female (%) 16 (94%) 11 (65%) 13 (93%) 40 (83%)
Race/Ethnicity (%):
  White 9 (53%) 9 (53%) 8 (57%) 26 (54%)
  Hispanic 2 (12%) 1 (6%) 3 (21%) 6 (13%)
  Black 1 (6%) 0 1 (7%) 2 (4%)
  Asian/PI 3 (18%) 3 (18%) 2 (14%) 8 (17%)
  Other/mixed 2 (12%) 4 (24%) 0 6 (13%)
Licensed (%) 7 (41%) 7 (41%) 6 (43%) 20 (42%)
Training status (n=28) (%):
  Post-degree 7 (70%) 4 (40%) 6 (75%) 17 (61%)
  In-school 3 (30%) 6 (60%) 2 (25%) 11 (39%)
Years of Practice Mean (SD) 8.12 (7.69) 7.62 (7.97) 7.11 (2.81) 7.65 (6.64)
Range 1–25 1–29 3–14 1–29
Discipline Type (%)
  Psychology 0 7 (41%) 6 (43%) 13 (27%)
  Marriage & Family Therapy 9 (56%) 2 (12%) 5 (36%) 16 (33%)
  Social Work 5 (28%) 3 (18%) 3 (21%) 11 (23%)
  Psychiatry 3 (17%) 5 (29%) 0 8 (17%)
Primary Orientation (%)
  CBT 0 17 (100%) 14 (100%) 31 (65%)
  Family Systems 12 (70%) 0 0 12 (25%)
  Psychodynamic 4 (24%) 0 0 4 (8%)
  Others 1 (6%) 0 0 1 (2%)
Spanish Speaking (%) 4 (24%) 1 (6%) 8 (57%) 13 (27%)
Clinic Size (%)
  Small (1–3) 0 1 (6%) 0 1 (2%)
  Medium (4–9) 7 (41%) 3 (18%) 6 (35%) 16 (33%)
  Large (10+) 10 (59%) 13 (76%) 8 (47%) 31 (65%)

Based on one-way analysis of variance (ANOVA) testing, clinicians in the three groups were compared on demographic (e.g. age, race/ethnicity) and clinical training/experience (e.g. discipline, years of experience) variables and found equivalent (p > .05), with the exception of age and language spoken. Non-EBT/non-CBT clinicians were significantly older than the other two groups, p < .05, and there were more Spanish-speaking clinicians in the EBT/CBT group, p < .05. The total sample of clinicians was also generally representative of a larger group of clinicians sampled from a large usual care study in 6 community-based clinics in the same geographic area (Garland, et al., 2010).
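As a rough illustration of the group-equivalence check described above, the following Python sketch runs a one-way ANOVA across the three clinician groups on a continuous variable such as age. The dataframe, column names, and values are hypothetical stand-ins for illustration, not the study data.

```python
# Minimal sketch (hypothetical data): one-way ANOVA comparing the three
# clinician groups on a demographic variable such as age.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "group": ["non_ebt_non_cbt"] * 3 + ["non_ebt_cbt"] * 3 + ["ebt_cbt"] * 3,
    "age":   [39, 42, 37, 31, 33, 30, 30, 32, 31],   # toy values only
})

samples = [g["age"].to_numpy() for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p < .05 would indicate a group difference
```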

Procedure

Clinicians completed a demographic and clinical training history form prior to participating in the think aloud. Participants were unaware of their group status and study hypotheses. Academic attainment and training have been postulated as factors that affect decision-making (Bakalis, Bowman, & Porock, 2003) so information on the type of degree, training, years of experience and other sources of skill development (e.g., continuing education courses, supervisory trainings, conferences attended) was collected. Clinicians were then presented with the case vignettes one at a time. One hour was allotted for the procedure to control the time allowed for thinking and processing material (Van Someren, Barnard, & Sandberg, 1994) and to respect clinicians’ busy schedules. Clinicians ranged in the number of vignettes they completed within the hour (i.e. between 1 and 4; see results). The length of response per vignette was calculated as the number of written lines on the transcribed verbal response and used as an outcome variable.

Vignettes were presented randomly and counterbalanced for equal distribution across the entire participant sample (Vignette 1=33, Vignette 2=32, Vignette 3=34, Vignette 4=31). Dependent variables were calculated as averages across the vignettes, controlling for the variation in number of vignettes completed, as the sample size was not large enough to use multilevel modeling techniques.

During the think-aloud procedure, participants were asked to sit out of view of the research assistant to remove any potential biases from non-verbal cues. Participants were instructed to “Please verbalize out loud all thoughts that occur to you in both conceptualizing the case and in making a treatment decision. Also please state how you would treat the case.” During silences, the researcher prompted participants to “please think aloud” as advocated by Ericsson and Simon (Ericsson & Simon, 1993). No other prompts or probes were provided. Verbalizations were recorded using a digital recorder and video recorder. The verbal responses were transcribed verbatim into verbal protocols.

At the end of each session, a short debriefing interview took place. This interview allowed participants to comment about their experience with the think aloud method and to receive answers to any questions they had about the process.

Study Materials

The “think aloud” method involved presenting standardized written case vignettes individually to the clinician participants. Individual administration of vignettes allowed for controlling the order in which cues were presented, as recommended by Patel, Arocha, & Kaufmann (2001). The vignettes contained information that is typically available to clinicians at intake, as advocated by Fonteyn et al. (1993).

Four case vignettes were devised explicitly for this study with each case vignette presenting clinical information about a different child client with an identical DBD diagnosis (controlling for diagnosis and clinical information). Specifically, each youth exhibited symptoms consistent with oppositional defiant disorder (ODD) per the Diagnostic and Statistical Manual of Mental Disorders [DSM-IV TR] (American Psychiatric Association, 2000) (note: there is some overlap in symptoms between ODD and conduct disorder (CD) and DBD, Not Otherwise Specified (NOS) so the term DBD is used herein to indicate any of these specific conditions and to allow clinicians to report any of these correlated conditions). DBD was chosen as the diagnostic problem area for the study because DBDs are the most common presenting problems for children entering publicly-funded usual care (Garland, et al., 2001) and more EBTs have been developed for DBDs than for any other childhood disorder (Eyberg, et al., 2008). Vignettes were reviewed by an expert panel to assure that they each reflected a diagnosis of disruptive behavior disorders.

Contextual variables about the child, parent and family were also included in the vignette, with each vignette varying in the type and number of contextual features reported about the parent and family. Child clinical information was comparable in each vignette but vignettes varied slightly in demographics so that the vignettes appeared different. Demographic information such as child age and race/ethnicity was equally distributed and balanced during administration. In particular, each case reported equal child clinical information, which included child diagnostic symptoms occurring in the home and at school (i.e. noncompliant, aggressive/fighting, angry/argumentative, loses temper and blames others). However, each vignette reported unique parent and family contextual information with the exception of parental stress and feelings of incompetence which were reported as occurring in all vignettes. The vignettes were constructed to include parent and family contextual factors found prevalent within community mental health outpatient services for children with DBDs (i.e. parental depression, poor social support, family problems, marital discord) (Baker-Ericzén, Jenkins, & Brookman-Frazee, 2010). See Table 2 for a list of factors presented in each vignette.

Table 2.

Case vignettes: Child, parent and family factors stated

Vignette #
1
n = 36
2
n = 35
3
n = 37
4
n = 32

Youth
Demographics
11 year old
Hispanic
7 year old
White
11 year old
White
7 year old
Black
DBD Youth
Symptoms
Problem Behaviors*
Occurring in Home
X X X X
Problem Behaviors*
Occurring at School
X X X X
Total # Child Factors 2 2 2 2
Parent Issues Alcohol X
Depression X X X
Suicidality X
Medical health X
Parent incompetence X X X X
Stress X X X X
Total # Parent Factors 4 4 4 2
Family Issues Domestic Violence X
Cultural issues X
Financial problems X
Household stressors X
Low Tx expectations X
Marital problems X
Negative impact of
Household
X X
Resistance to Tx X
Sibling problems X
Low Social support X X X
Total # Family Factors 5 2 2 4

Notes: DBD= disruptive behavior disorder, Tx= treatment.

*

Problem behaviors include aggression, fighting, noncompliance, blame others, angry/loses temper, argues

Each vignette was reviewed by a panel of researchers and clinicians considered experts in DBDs and rated for consistency across variables. Note: More information about case vignettes is available from the first author.

Coding Scheme and Training

A coding scheme was developed for this study by applying each of the five main NDM expert/novice decision-making processes (1. type of reasoning, 2. organization of information and deriving hypotheses, 3. attention to information and level of abstraction, 4. finding solutions and 5. incorporating actuarial information and flexibility in application) to mental health practice. Table 3 provides an overview of how the NDM terms were operationally defined and then measured as mental health decision-making constructs. A coding manual was developed for this study that thoroughly explained each of the decision-making processes and how to code each, with multiple text examples for each code. In sum, “type of reasoning” was measured by the number of questions asked and a qualitative description of the types of questions, “organization of information and deriving hypotheses” was measured by the number and type of diagnoses, “attention to information and level of abstraction” was measured by the total number and type of factors stated across all vignettes, “finding solutions” was measured by treatment statements and “incorporating actuarial information and flexibility in application” was measured by statements of using EBTs and standardized assessments. As described in Table 3, this study defined the expert decision-making process of finding an “effective solution” as a clinician clearly stating how to treat the case and providing the rationale. The novice decision-making process of an “ineffective solution” was defined as a vague description of the treatment with limited to no rationale. A treatment extensiveness rating was constructed to code a solution on a scale from effective to ineffective following NDM theory. The extensiveness score was a rating (1 through 5) given by the coder after reading the entire verbal protocol, following extensive coding guidelines. A rating of “1” involved use of broad language to describe treatment with limited reasoning (e.g. “to do some play work with him and assess things”) and a rating of “5” involved a thorough description of treatment with multiple details provided stating what, why and how it would be done (e.g., “use Parent-Child Interaction Therapy and focus on the parent-child relationship heavily at first, teaching the mother how to follow the child’s lead and engage with him. The mother and child don’t have much positive interaction as of now and we need to empower the mother and help build the connection stronger prior to implementing behavioral control strategies…”).

Table 3.

Operationally defined natural decision-making theory terms and constructs measured for clinical psychology.

Natural Decision-Making Process
E= Experts, N= Novice
Form of Assessment in this MH Study

Type of Reasoning:
E=Forward reasoning (demonstrated by asking few relevant questions)

N=Backward reasoning (demonstrated by asking many irrelevant and relevant questions)

Number of questions (summated per verbal protocol and averaged across vignettes)

Qualitative description of questions
Organization of Information & Deriving Hypotheses
(Diagnoses)
E=Highly organized & accurate (generate minimal hypotheses-diagnoses with accuracy)

N=Poorly organized & less accurate (generate many
hypotheses-multiple diagnoses with inaccuracy)
Number of diagnoses
(summated all unique diagnoses stated and
averaged across vignettes)
% DBD Dx (DBD, ODD, CD, DBD NOS)
% each diagnostic classification
(Mood D/Os, Anxiety D/Os, ADHD, LD &
Other-Attachment D/O, Substance D/Os) (%s
summated for each vignette)
Attention to Information and Level of Abstraction:
E=High abstraction (attend to relevant information and high
level of abstraction-attend to child factors such as symptoms)

N=Low abstraction (attend to all information, focusing on
detail and low level of abstraction- attend to child, parent and
family factors such as context)
Total Number of factors attended to of any type
(across all vignettes)
Mean Number of Child factors attended to
(averaged across vignettes)*
Mean Number of Parent factors attended to
(averaged across vignettes)
Mean Number of Family factors attended to
(averaged across vignettes)
Finding a solution:
E=Timely solution, Effective solution (clearly stating use of an
effective treatment, providing details with rationale)

N=Extended time in generating solution or no solution,
Ineffective solution (stating use of ineffective treatment or
vague statement of treatment, providing limited to no details or
rationale)
% time discussing treatment
(# of treatment lines divided by total lines of
protocol and averaged across vignettes)

Score of treatment extensiveness (assigned
rating between 1–5 and averaged across
vignettes)
Incorporating Actuarial Information:
E= Incorporate actuarial information with individual’s
characteristics and demonstrate flexibility in applying it (use of
EBT and standardized assessments)

N= Unsure how to incorporate actuarial information with an
individual’s characteristic and challenged by applying it
flexibly (no use of EBT or standardized assessments)
Mean % of treatment plans including EBT
(across all vignettes per clinician: PCIT, PMT, PSST, MST, IY, CBT)§

Number of Standardized Assessments
indicated

Notes: DBD= disruptive behavior disorder, ODD= Oppositional Defiant Disorder, CD= Conduct Disorder, NOS= Not Otherwise Specified, ADHD= Attention Deficit Hyperactivity Disorder, LD= Learning Disorder, Dx = Diagnosis, D/O = Disorder;

*

A total of 6 child factors were reported for this study from clinician responses and included: 1) relationships with peers, 2) disruptive behaviors at home, 3) disruptive behaviors at school, 4) psychological or school testing/assessment, 5) academic performance, and 6) social skills,

A coding scheme developed in a previous study (Baker-Ericzén, Jenkins, et al., 2010) was used to code the occurrence of a set of 7 parent and 18 family factors.

A treatment plan scored 1 had limited detail (i.e. a broad description of the treatment plan with a short explanation); a plan scored 5 had details and explanations (i.e. the treatment was labeled, explained, and described in detail with defined steps or strategies). EBT= evidence-based treatment, PCIT= Parent Child Interaction Therapy, PMT= Parent Management Training, PSST= Problem Solving Skills Training, MST= Multisystemic Therapy, IY= Incredible Years Parent Training, CBT= Cognitive-Behavior Therapy

§

EBT determined from published lists according to EST definition (Eyberg, et al., 2008; Ollendick & King, 2004; Warren, et al., 2010).

Situation awareness, for the purposes of this study, was defined similarly to “attention to information” above but defined as the factor type (child, parent or family) and number of factors attended to in each vignette (statements made within a verbal protocol can be interpreted as attention to that information, Hutton and Klein, 1999). According to NDM theory, larger numbers indicate more attention which signals more relevance placed on that piece of information (Klein, 1993). A confirmative statement made about the child, parent or family was calculated as a “factor” (i.e., “the child has many behavior problems”, “the mother is depressed” or “this family has no support”). The total number of statements made was calculated for each factor type: child, parent and family, per vignette. These data were not aggregated across vignettes because each vignette was intentionally designed to vary the type of information provided on the parent/s and family. Refer to Table 2 for the details specifically included in each vignette per factor type. A total of 6 different details defined as a child factor were reported by clinicians in this study and included: 1) relationships with peers, 2) disruptive behaviors at home, 3) disruptive behaviors at school, 4) psychological or school testing/assessment, 5) academic performance, and 6) social skills. A coding scheme developed in a previous study (Baker-Ericzén, Jenkins, et al., 2010) was used to code the occurrence of a set of 7 possible details defined as a parent factor and 18 possible details defined as a family factor.
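To make the scoring described above concrete, the sketch below shows one way the coded variables might be aggregated: one row per verbal protocol (clinician by vignette), with the “% time discussing treatment” index computed as treatment lines divided by total lines, and the coded variables then averaged across vignettes per clinician. The dataset and column names are hypothetical assumptions for illustration only.

```python
# Minimal sketch (hypothetical coded data): one row per verbal protocol.
import pandas as pd

coded = pd.DataFrame({
    "clinician_id":    [1, 1, 1, 2, 2],
    "n_questions":     [14, 10, 18, 6, 9],
    "n_diagnoses":     [2, 1, 3, 1, 1],
    "treatment_lines": [10, 12, 8, 20, 25],
    "total_lines":     [55, 60, 50, 58, 62],
    "extensiveness":   [2, 3, 2, 4, 5],
})

# "% time discussing treatment" = treatment lines / total lines of the protocol.
coded["pct_tx_time"] = coded["treatment_lines"] / coded["total_lines"] * 100

# Average each coded decision-making variable across vignettes per clinician.
per_clinician = coded.groupby("clinician_id")[
    ["n_questions", "n_diagnoses", "pct_tx_time", "extensiveness"]
].mean()
print(per_clinician)
```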

Two research assistants served as coders. Both were blind to the clinician group. Coder training was conducted by two authors (MBE, SP) and consisted of the review of a coding manual, group discussions, individual training sessions, coding practice to criterion, and reliability checks. Coders were trained until reliability demonstrated ≥85%. Coders independently coded the transcripts with a sample double-coded for reliability checks.

Data Analysis

The 48 study participants generated a total of 140 verbal protocols; a verbal protocol represented the response to one case vignette. Analysis of participants’ verbal protocols was conducted by the two trained raters (described above). Inter-rater agreement was calculated for a random sample of 41% of the verbal protocols for each of the four vignettes. Cohen’s kappa was used to determine the level of agreement between the two raters, with an overall agreement of .80 (range of .72 to .84) (Cohen, 1960). All coefficients were considered acceptable (≥.70) (Miles & Huberman, 1994). Disagreements were resolved by discussion and consensus between the raters and the first author. Each verbal protocol was categorized according to the defined expert and non-expert mental health decision-making constructs (per the coding procedures described above). Frequency and proportion variables were calculated after coding and reliability checks were completed for all verbal protocols.
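A minimal sketch of the reliability computation is shown below, assuming two hypothetical lists of categorical codes assigned by the raters to the same sampled protocols; the labels are illustrative, not the study's actual codes.

```python
# Minimal sketch (hypothetical codes): Cohen's kappa for inter-rater agreement.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["DBD", "Mood", "DBD", "ADHD", "DBD", "LD", "DBD", "Mood"]
rater_2 = ["DBD", "Mood", "DBD", "DBD",  "DBD", "LD", "DBD", "Mood"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # coefficients >= .70 were treated as acceptable
```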

All analyses were conducted using SPSS V21. Analyses were first conducted on two control dependent variables (DVs) (1. number of vignettes completed, and 2. average length per response) using factorial ANOVAs per DV with the two independent variables (IVs) (1. theoretical orientation, and 2. EBT trained status), confirming no effects.

After initial sample comparisons, two-way between subjects MANOVA analyses tested for differences between the groups of clinicians by entering the two categorical independent variables as factors 1) theoretical orientation (CBT, Family systems, humanistic and psychodynamic), and 2) EBT trained status (EBT and non-EBT) with the multiple DVs. However, due to sample size limitations the interaction term could not be tested; main effects were evaluated. Checks for homogeneity of variance-covariance matrix assumption were done using Box’s M test. Levene’s test of equality of error variances was performed.
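The sketch below illustrates a two-factor MANOVA of this form in Python with statsmodels, entering EBT training status and theoretical orientation as factors and six decision-making DVs as outcomes (main effects only). The variable names and simulated values are assumptions for illustration; they are not the study dataset or its results.

```python
# Minimal sketch (simulated data): two-factor MANOVA with six DVs, main effects only.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 48
df = pd.DataFrame({
    "ebt_trained":       rng.choice(["EBT", "non_EBT"], size=n),
    "orientation":       rng.choice(["CBT", "family", "psychodynamic"], size=n),
    "n_questions":       rng.normal(12, 5, n),
    "n_diagnoses":       rng.normal(1.5, 0.8, n),
    "pct_tx_time":       rng.normal(30, 10, n),
    "tx_extensiveness":  rng.normal(3, 1, n),
    "pct_ebt_plans":     rng.uniform(0, 1, n),
    "n_std_assessments": rng.normal(0.1, 0.3, n),
})

model = MANOVA.from_formula(
    "n_questions + n_diagnoses + pct_tx_time + tx_extensiveness + "
    "pct_ebt_plans + n_std_assessments ~ ebt_trained + orientation",
    data=df,
)
print(model.mv_test())  # reports Wilks' lambda (and other statistics) per factor
```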

Follow-up univariate analyses were conducted for tests of between-subjects effects and adjusted for multiple comparisons using the Sidak correction, as suggested by Tabachnick and Fidell (2007). Six dependent variables (DVs) measuring 4 of the NDM decision-making processes were entered into the MANOVA models: 1. average number of questions, 2. average number of diagnoses, 3. average percentage of time discussing treatment, 4. average extensiveness score, 5. average percentage of treatment plans with EBTs indicated, and 6. average number of standardized assessments indicated. The decision-making process of attention to information was analyzed as part of situation awareness, described below.
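For the multiple-comparison adjustment, a small sketch of a Sidak correction applied to a set of follow-up univariate p-values is shown below; the p-values are placeholders, not the study's results.

```python
# Minimal sketch (placeholder p-values): Sidak adjustment for follow-up tests.
from statsmodels.stats.multitest import multipletests

univariate_p = [0.02, 0.05, 0.01, 0.04, 0.06, 0.72]  # one per dependent variable
reject, p_adjusted, alpha_sidak, _ = multipletests(univariate_p, alpha=0.05, method="sidak")
print("Sidak-adjusted alpha:", round(alpha_sidak, 4))
print("Adjusted p-values:", [round(p, 3) for p in p_adjusted])
```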

Exploratory analyses were conducted per vignette to examine the secondary research aim, clinical situation awareness, by investigating how child, parent and family cues may affect clinical decision-making between the two groups: EBT trained and non-EBT trained. First, a two-sample z-test between proportions was performed per vignette to determine whether there was a significant difference between EBT and non-EBT trained clinicians with respect to the percentage of clinicians stating each diagnosis. Next, independent sample t-tests tested for differences between the EBT trained and the non-EBT trained clinician groups on each contextual factor type (child, parent and family) per vignette. Due to the limited sample sizes per vignette and the exploratory nature of the study, a more generous alpha level of .05 for defining statistical significance was employed despite the relatively large number of statistical tests that were conducted. Effect sizes were calculated to provide an indication of the magnitude of the differences and to reduce inflation of results during interpretation.
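The exploratory per-vignette comparisons could be run along the lines of the sketch below: a two-sample z-test between proportions for the percentage of clinicians stating a given diagnosis, and an independent-samples t-test with a pooled-SD Cohen's d for the factor counts. All counts and scores are illustrative assumptions, not study data.

```python
# Minimal sketch (hypothetical data): per-vignette proportion and mean comparisons.
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

# z-test between proportions: clinicians in each group stating a given diagnosis.
count = np.array([12, 0])   # non-EBT, EBT clinicians stating the diagnosis
nobs = np.array([28, 13])   # clinicians in each group who responded to the vignette
z_stat, p_prop = proportions_ztest(count, nobs)

# Independent-samples t-test on family factors attended to, plus Cohen's d.
non_ebt = np.array([4.0, 5.0, 3.0, 6.0, 4.0, 2.0])
ebt = np.array([2.0, 3.0, 1.0, 2.0, 3.0])
t_stat, p_t = stats.ttest_ind(non_ebt, ebt)
pooled_sd = np.sqrt(((len(non_ebt) - 1) * non_ebt.var(ddof=1) +
                     (len(ebt) - 1) * ebt.var(ddof=1)) /
                    (len(non_ebt) + len(ebt) - 2))
cohens_d = (non_ebt.mean() - ebt.mean()) / pooled_sd
print(f"z = {z_stat:.2f} (p = {p_prop:.3f}); "
      f"t = {t_stat:.2f} (p = {p_t:.3f}), d = {cohens_d:.2f}")
```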

Results

Results revealed significant differences in a number of decision-making processes between clinicians trained in EBT versus those not trained in EBT. Supporting the face and ecological validity of the case vignettes, all clinicians (100%) stated the vignettes were representative of community care clients. The clinician groups did not differ on the control variables of number of vignettes completed, F(1, 48) = .124, p > .05, partial eta squared = .003, or average length of response across vignettes, F(1, 48) = .298, p > .05, partial eta squared = .007.

A two-way MANOVA revealed a significant multivariate main effect for clinician EBT training status, Wilks’ Λ = .674, F(6, 38) = 3.06, p < .02, partial eta squared = .325. There was no effect for theoretical orientation, Wilks’ Λ = .561, F(18, 38) = 1.36, p > .05, partial eta squared = .175. Therefore, hypothesis 1 was supported: EBT trained clinicians made clinical decisions significantly differently from non-EBT trained clinicians, controlling for primary theoretical orientation, based on the model with all six dependent variables.

Given the significance of the overall test, univariate main effects were examined for each DV to identify any specific group differences and to compare them to expert-type decision-making. Significant main effects for EBT training were obtained for four of these six DVs. There was a significant difference for the average number of diagnoses per response, F(1, 48) = 6.097, p < .02, partial eta squared = .124, for the average percent of time discussing treatment, F(1, 48) = 2.388, p < .05, partial eta squared = .053, for extensiveness of treatment, F(1, 48) = 7.072, p = .01, partial eta squared = .141, and for the percentage of responses indicating EBTs, F(1, 48) = 4.635, p < .05, partial eta squared = .097. Group mean differences are described below and provided in Table 4. These results support hypothesis 2, indicating that EBT trained clinicians made decisions that appear more consistent with expert decision-making strategies on many of the variables.

Table 4.

Comparisons of Non-EBT and EBT trained clinicians on Decision Making Strategies

Non-EBT (n=34) Mean (SD), EBT (n=14) Mean (SD), t, df, p, Cohen's d, Effect size r
# of vignettes completed 2.91 (.98) 3.00 (.96) −0.73 44 .43 .22 .11
Total length of response 57.09 (35.81) 58.20 (34.03) 0.27 44 .79 .08 .04
# Questions 15.06 (12.45) 8.43 (4.7) 1.92 44 .06 .58 .28
# Dx 1.79 (1.40) .81 (.76) 2.29 44 .03* .69 .33
% Time discussing Tx 24.94 (15.75) 36.36 (13.97) −2.38 44 .02* .72 .34
Extensiveness of Tx 2.88 (1.27) 3.93 (1.14) −2.26 44 .03* .68 .32
Mean % of EBT .16 (.39) .64 (.59) −3.408 47 .001*** .98 .44
# Standardized assessments .09 (.31) .11 (.43) −0.36 44 .72 .11 .05
Total # of factors attended 26.40 (8.86) 23.79 (5.60) 1.02 47 .31 .30 .15
Mean # of child factors 3.82 (1.94) 3.42 (1.66) .675 47 .50 .22 .11
Mean # of parent factors 2.15 (1.10) 1.82 (.61) 1.056 47 .29 .37 .18
Mean # of family factors 3.40 (1.13) 2.72 (.86) 2.025 47 .04* .68 .32

Notes: EBT= evidence based treatment, Dx= diagnoses, Tx= treatment; * = p<.05, ** = p<.01, *** = p<.001

Naturalistic Decision Making Strategies: EBT trained and Non-EBT trained Clinician Comparisons

Type of Reasoning

There was no statistically significant difference in the number of questions asked between the two groups, p = .06, but group means reveal a possible trend, as non-EBT trained clinicians’ average number of questions was almost double that of EBT trained clinicians (M = 15.06 non-EBT vs. M = 8.43 EBT) and the effect size difference was meaningful (Cohen’s d = .58). Qualitative review of the types of questions asked revealed that EBT trained clinicians asked a limited number of relevant clarifying questions (e.g., How long have the symptoms occurred?) throughout their responses. Non-EBT trained clinicians asked both relevant and irrelevant questions, such as detailed questions regarding contextual issues (e.g., How long was the father in jail? Who else is living in the home? Is the grandmother maternal or paternal?), at the beginning of the response prior to stating their case conceptualization or treatment plan, which resulted in a higher total number of questions.

Organization of Information and Deriving Hypotheses/Diagnoses

The average number of total lines per verbal protocol was similar across clinician groups (non-EBT M = 57.09 and EBT M = 58.20), indicating that clinicians stated a similar amount of content regardless of EBT training status. The EBT trained clinicians provided significantly fewer diagnoses; they gave an average of .8 diagnoses per vignette while non-EBT trained clinicians gave approximately twice as many (an average of 1.79). Groups did not differ on the percentage of clinicians who made no diagnosis per vignette, F(1, 48) = .066, p > .05. The specific diagnoses provided by clinicians in each group for each vignette are listed in Table 5. For each of the vignettes (all intended to reflect a DBD diagnosis), a number of different diagnoses were suggested by clinicians. The distribution of diagnoses suggested by non-EBT trained clinicians appears less specific, with diagnoses stated from many conditions including DBDs, ADHD, Learning Disorder (LD), mood disorders, anxiety and “other.” Mood disorder diagnoses were suggested significantly more often by non-EBT trained than by EBT trained clinicians.

Table 5.

Diagnoses made per case vignette by clinician group

Vignette Diagnostic Classification Non-EBT Trained (%) EBT Trained (%) p-value
1 Made Any Diagnosis 71 75 .57
DBD 40 38 .73
ADHD 32 25 .79
LD 43 25 .32
Mood** 43 0 .002**
Other 4 0 .52
2 Made Any Diagnosis 62 67 .87
DBD 14 31 .26
ADHD 36 15 .14
LD 32 31 .79
Mood 50 23 .07
Other 5 0 .35
3 Made Any Diagnosis 48 44 .79
DBD 32 44 .49
ADHD 14 0 .13
LD 18 11 .78
Mood* 29 0 .02*
Other 4 0 .52
4 Made Any Diagnosis 50 50 1
DBD 25 42 .27
ADHD 25 8 .13
LD 15 8 .47
Mood* 45 0 .002**
Other 0 0 ----
Notes: * indicates p<.05, ** indicates p<.01 at 90% confidence interval on Z-test for two sample proportions; EBT= evidence based treatment, DBD= disruptive behavior type disorders, ADHD= attention deficit hyperactivity disorder and impulsive control disorder, LD= learning disorders, Mood= depressive, anxiety and bipolar types of disorders, Other= substance and attachment disorders.

Attention to Information and Level of Abstraction

Each vignette described the child’s symptoms and functioning (child factors) as well as psychosocial information about the parent (parent factors) and broader contextual information about the family (family factors). When the data are combined across vignettes, EBT trained clinicians reported a similar total mean of child factors (M = 3.4) compared to non-EBT trained clinicians (M = 3.8) and parent factors (M = 1.8 EBT vs. M = 2.1 non-EBT), but significantly fewer family factors (EBT M = 2.7, non-EBT M = 3.4, t(47) = 4.102, p < .05). Refer to Table 4. Qualitative review of clinician statements also revealed that non-EBT trained clinicians expressed a desire to learn more about child, parent and family factors, making statements like “I need to know a lot more about the family… prior to making an assessment or treatment plan.”

Finding a Solution

There were significant differences between the two groups with regard to the NDM strategy of finding a solution, operationally defined for this study as a treatment plan. EBT trained clinicians spent more time discussing treatment (non-EBT M = 24.94, EBT M = 36.36), F(1, 48) = 2.388, p = .02, partial eta squared = .053, compared to non-EBT trained clinicians. EBT trained clinicians also provided more detail and clarity about the treatment plan, resulting in a significantly higher extensiveness rating of their treatment plans, with a mean extensiveness rating of 3.93 on a 5-point rating scale compared to 2.88 for non-EBT trained clinicians, F(1, 48) = 1.89, p = .03, partial eta squared = .129.

Incorporating Actuarial Information

EBT trained clinicians reported intent to use an EBT 64% of the time whereas non-EBT trained clinicians mentioned an EBT 16% of the time which represents a statistically significant difference, F(1, 48) = 4.01, p < .001, partial eta squared = .087. Specifically, EBT trained clinicians discussed the following EBTs within their treatment plans: Parent Child Interaction Therapy (PCIT), Parent Management Training (PMT), Incredible Years (IY), Problem-Solving Skills Training plus Parent Management Training (PSST+PMT), Multisystemic Therapy (MST), Cognitive Behavioral Therapy (CBT). There were no significant differences in the number of EBT treatments recommended by EBT trained clinicians across vignettes, suggesting that EBT trained clinicians applied EBTs broadly across patients presenting with a variety of characteristics.

Another form of incorporating actuarial information is the use of standardized assessment tools. Both groups of clinicians stated equally infrequently that they would use or would want information from a standardized assessment tool to assist in their diagnostic decisions, with an average number of assessments of less than one per response protocol (.09 non-EBT and .11 EBT, F(1, 48) = .401, p >.05, partial eta squared = .009).

Situation Awareness: Association of Parent and Family Factors on Decision Making, EBT Trained and Non-EBT Trained Clinician Comparisons

Overall, the number and types of diagnoses clinicians reported varied by vignette. For both clinician groups (EBT trained and non-EBT trained), there was a range in the percentage of clinicians who stated a child diagnostic condition. For example, 50% of all clinicians made at least one diagnosis for case vignette 4, while approximately 75% of EBT trained clinicians and 71% of non-EBT trained clinicians made at least one diagnosis for case vignette 1. Of those clinicians who made diagnoses, the EBT trained clinicians made approximately one child diagnosis per vignette regardless of the parent and family issues (range M = 1.1–1.75). In contrast, the non-EBT trained clinicians reported more child diagnoses when additional family contextual information was provided (V1 & V4) (V1 M = 2.75, V4 M = 2.5) and significantly more diagnoses on these two vignettes compared to EBT trained clinicians (V1, t(24) = 2.448, p < .05; V4, t(14) = 2.646, p < .05). Non-EBT trained clinicians’ assigned diagnoses also reflected a wider array of diagnostic classifications, a total of 13 different diagnoses across the 4 vignettes, while EBT trained clinicians usually gave 1 of 3 diagnoses (DBD, ADHD and/or LD) in each vignette, with a total of 4 different types of diagnoses given across the 4 vignettes. Interestingly, the inclusion of parent depression and/or elevated parent stress in the case vignettes was associated with an increased likelihood of assignment of a child mood disorder for the non-EBT trained clinicians, but not for the EBT trained clinicians (in V1, V3 & V4). One exception was case vignette 2, which described the child’s mother as significantly depressed with intermittent suicidality; a number of clinicians in both groups conceptualized the child as having depression/mood disorder (50% for non-EBT and 23% for EBT). Refer to Table 5.

As suggested by the situation awareness literature, the parent and family contextual information presented in each vignette appeared to be attended to differently by the two groups of clinicians. More specifically, information, defined as factors, appeared to be attended to in greater amounts by non-EBT trained clinicians compared to EBT trained clinicians on specific vignettes. Non-EBT trained clinicians attended to significantly more factors (child, parent and family combined) than EBT trained clinicians on vignette 1 (non-EBT M = 13.58, EBT M = 10.62, t(34) = 1.992, p = .05). There were also significant differences by type of factor by case vignette between the groups of clinicians; refer to Table 6 for comparisons. Specifically, the non-EBT trained clinicians stated significantly more family factors than the EBT trained clinicians on vignette 1. For two of the cases (V1 & V3), the non-EBT trained clinicians stated more child factors compared to EBT trained clinicians, which can be interpreted as an indicator of attention to more cues, which in turn may lead to different case conceptualizations (i.e. diagnoses).

Table 6.

Number of parent, family, child factors attended to in each vignette

Non-EBT (n=35)
Mean (SD)
EBT (n=14)
Mean (SD)
ES
Cohen’s d
ES
r
V1 Total 13.57 (3.98) 10.63 (2.26) * .91 .41
V1 Parent 2.89 (2.2) 1.71 (2.05) .56 .27
V1 Family 4.23 (2.78) 2.0 (2.25)** .88 .40
V1 Child 3.37 (2.86) 1.64 (2.02)* .70 .33
V2 Total 8.27 (3.01) 7.23 (2.77) .36 .18
V2 Parent 1.60 (1.56) 1.93 (1.49) .22 .11
V2 Family 1.31 (1.43) 1.36 (1.28) .04 .02
V2 Child 2.29 (2.33) 2.35 (1.91) .03 .01
V3 Total 5.75 (1.69) 6.56 (1.24) .55 .26
V3 Parent 1.34 (.998) 1.21 (1.12) .12 .06
V3 Family 1.57 (1.09) 1.14 (1.10) .39 .19
V3 Child 2.40 (2.21) 1.36 (1.34)* .57 .27
V4 Total 7.85 (2.68) 7.25 (2.96) .21 .11
V4 Parent 1.11 (1.13) 1.29 (.914) .18 .09
V4 Family 1.63 (1.82) 2.29 (1.82) .36 .18
V4 Child 1.74 (2.15) 1.79 (1.48) .03 .01

Notes: EBT= evidence based treatment, V= vignette; * = p<.05, ** = p<.01.

Discussion

Results of this naturalistic analog design study provide preliminary data to suggest that training in an EBT may be associated with differences in clinical decision-making skills among community-based mental health clinicians. Specifically, according to NDM theory classifications, EBT trained clinicians demonstrated decision-making skills more consistent with what the theory would predict for expert performance, scoring higher on the general dimensions of expert decision-making compared to non-EBT trained clinicians. First, EBT trained clinicians were more likely to use forward reasoning by using the relevant clinical information provided and by asking focused questions (e.g. about symptoms) to gather additional information. These clinicians asked half as many questions as non-EBT trained clinicians. Second, EBT trained clinicians appeared to organize the clinical information in a way that allowed them to conceptualize a diagnosis and treatment plan more accurately compared to non-EBT trained clinicians. Specifically, EBT trained clinicians attended to the child symptoms provided in the vignette and assigned on average one diagnosis, usually a disruptive behavior disorder (matching the vignette symptom presentation), while non-EBT trained clinicians were more likely to assign multiple diagnoses, including internalizing diagnoses and others (e.g. bipolar, attachment disorder, substance abuse). EBT trained clinicians spent more time discussing treatment during their response than non-EBT trained clinicians and stated more detailed, comprehensive treatment plans (e.g., more time spent on treatment and higher treatment extensiveness ratings) that were linked to an accurate diagnosis and case conceptualization of a DBD. Finally, EBT trained clinicians described using an EBT for childhood DBD more often in their treatment plans.

In general, there was substantial variation in what information clinicians attended to and how they made clinical decisions, especially between the EBT and non-EBT trained clinicians. Clinical reasoning, the cognitive process involved in clinical decision-making, is most efficient and accurate when the clinician can organize the information and recognize the "gist" of the clinical presentation. While expert-type decision makers rely on gist-based reasoning (Lloyd & Reyna, 2009), novice-type clinicians are described as treating pieces of information as isolated entities; they are less likely to spontaneously detect the theme or gist that connects the pieces (Stahl & Klauer, 2008). Data from this study suggest that EBT trained clinicians were better able to recognize the gist, generating an accurate DBD diagnosis more often than non-EBT trained clinicians.

Novices are also described as lacking sufficient knowledge and skill to discriminate between salient and non-salient cues (Benner, Tanner, & Chesla, 1992), a pattern that appeared among non-EBT trained clinicians, given the increased number of diagnoses and factors attended to on vignettes that included more parent and family factors (i.e., Vignette 1). The findings suggest that non-EBT trained clinicians may have more difficulty organizing large amounts of information (child, parent, and family based). Given the complexity of families seeking services in community-based mental health settings, including the high frequency of multiple parent and family contextual risk factors (Baker-Ericzén, Hurlburt, et al., 2010), a clinician's ability to efficiently and meaningfully synthesize case information and create an accurate case conceptualization, including a diagnosis and treatment plan, can be critical to delivering optimal care.

Findings from the current study provide a first look into the decision-making processes of community-based clinicians and demonstrate that the NDM framework of expert-type and novice-type decision makers can be applied to mental health practice. The similarities found in this exploratory study between expert-type decision makers and clinicians trained in an EBT are intriguing, raising the possibility that EBT training influences a clinician above and beyond the specific intervention strategies learned. The process of EBT training may also teach meta-cognitive skills such as attention to relevant cues, organization of information, and gist formulation, so that the experience of EBT training may generalize to improved decision-making skills that can be applied to new, complex cases.

The notion that EBT training may influence more global decision-making skills may not be far-fetched, as there are some consistencies between the processes involved in EBT training and those required for the development of expert decision-making. Cognitive decision-making theory holds that experts apply "skilled intuition as recognition" (Kahneman & Klein, 2009), as first postulated by Simon (1992). This recognition model implies that two conditions must be satisfied for an expert's skilled intuitive judgment (i.e., recognition) to occur: first, adequate valid cues must be available in the environment, and second, the individual must have had the opportunity to learn the relevant cues in a sufficiently high-validity environment with adequate opportunity to practice the skill (Kahneman & Klein, 2009). Training in EBT appears to satisfy these conditions. Specifically, it routinely involves close attention to specific cues, such as diagnosis, symptoms, or behaviors (simulating a "high validity" environment), and it provides ample opportunities for practice at both the case conceptualization and treatment levels (e.g., role plays and session-by-session supervision of the implementation of identified skills, as in the Incredible Years program; Webster-Stratton, 1984). Reviews of a number of EBT models report that each involves active learning strategies (e.g., behavioral role plays, coaching, experiential exercises) that are necessary to change therapist behavior (Beidas & Kendall, 2010; Herschell et al., 2010). Thus, EBT training may encourage meta-cognitive skill development that resembles that of expert decision-makers. Clearly, more research is needed to examine the relationship between EBT training and meta-cognitive skills.

A number of interventions, defined either as EBT programs or as trans-treatment approaches compiling common elements of evidence-based practice, are based on learned skills and the application of specific empirically supported therapeutic strategies (Chorpita, Becker, & Daleiden, 2007; Chorpita et al., 2005; Ollendick & King, 2004). Research on cognitive decision-making has found that people apply learned skills intuitively once those skills are internalized at the meta-cognitive level; without them, however, people produce incorrect intuitions fraught with cognitive errors stemming from heuristics and biases (Tversky & Kahneman, 1974). Of significance, it is suggested that adequate opportunities for learning skills and the meta-cognitive tools for their application should include prolonged practice and feedback that is both rapid and unequivocal (Kahneman & Klein, 2009). In addition, simulated training materials have been shown to improve decision-making skills (Cohen, Freeman, & Thompson, 1997). Taken together, clinicians' decision-making skills may be improved via training efforts that teach meta-cognitive tools such as rules or heuristics (Jenkins, 2012) and that train clinicians in the delivery of EBT strategies for different clinical profiles. Furthermore, even though many training modalities exist, current research highlights the importance of practice, feedback, and simulated materials (Herschell et al., 2010), consistent with the literature on meta-cognitive skill development.

Results of this study also contribute to a broader debate about whether duration of clinical experience is associated with clinical expertise in mental health (Garb, 1989). Some authors who study cognitive expertise estimate that it takes 10 years to become an expert in any field (Galanter & Patel, 2005); however, years of experience did not differ across our two groups (both averaged 7–8 years), only their training experiences did. Pending further investigation, these findings could suggest that clinical expertise may be altered by specific types of training. Although common sense suggests that novices become experts by gradually developing skills over time with enhanced knowledge and experience, Patel and colleagues (2001) have found that this is not always the case. Relatedly, research has shown that the validity of clinical judgment and the amount of clinical experience are unrelated (Dawes, 2006). Clearly, there is a need to study how expertise develops in mental health practice and to examine the direct impact of EBT training on clinical decision-making through randomized trials and longitudinal designs. As it stands, because of the non-experimental design, it is not clear from this study whether the observed decision-making differences result from particular individual experiences or from EBT training. It would also be valuable to examine whether "expert" decision-making could be trained more explicitly rather than relying on clinicians to distill it from EBT training. Future studies could compare the utility of explicit expert decision-making training and of specific EBT training against "usual care" clinician development.

There are a number of study limitations that warrant attention. First, the quasi-experimental design and the reliance on clinicians' retrospective reports of EBT training (or lack thereof) limit any causal attributions for the observed differences. There may be confounding factors associated with a history of EBT training that are also associated with clinical decision-making, such as self-selection or theoretical orientation. For example, all of the EBT trained clinicians endorsed a CBT theoretical orientation, whereas only half of the non-EBT trained therapists endorsed this orientation. We controlled for primary theoretical orientation in the analyses and determined that it did not account for the differences; however, future research should use a randomized, prospective design to better control for a variety of potential confounding factors.

The limited information we have about the nature of the specific EBT training is a study weakness; however, the variability in EBT training arguably strengthens the meaningfulness of the observed group differences. Specifically, there was variation in when and how clinicians received EBT training and in the extent to which they had implemented the specific EBT in clinical practice. Despite this heterogeneity within the EBT trained group and the relatively small number of total participants, group differences with meaningful effect sizes were detected between the EBT trained and non-EBT trained groups.

Further, due to the limited sample size, interaction analyses between theoretical orientation and EBT training status could not be conducted. The limited sample size also precluded rigorous individual vignette analyses. Therefore, the situation awareness analyses (child, parent, and family factors) should be considered exploratory in nature because Type I error could not be controlled. Due to these limitations, results should be interpreted cautiously.
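As an illustration of how this Type I error concern could be handled in future, better-powered work, the sketch below applies a Holm-Bonferroni family-wise correction to a set of vignette-level p-values. The p-values shown are hypothetical placeholders (the present study reports only the p < .05 and p < .01 thresholds), so the sketch demonstrates the procedure rather than re-analyzing these data.

    # Illustrative only: Holm-Bonferroni adjustment for a family of vignette-level
    # comparisons. The p-values below are hypothetical placeholders, not study data.
    from statsmodels.stats.multitest import multipletests

    hypothetical_p = [0.050, 0.008, 0.030, 0.040]   # one placeholder per contrast
    reject, p_adj, _, _ = multipletests(hypothetical_p, alpha=0.05, method="holm")

    for raw, adj, sig in zip(hypothetical_p, p_adj, reject):
        print(f"raw p = {raw:.3f} -> Holm-adjusted p = {adj:.3f}, significant: {sig}")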

Additionally, the think aloud method is innovative and has not been widely used to examine training within community mental health settings. To address this issue, we followed an established methodology that has been used to assess decision-making processes in other populations (e.g., physicians, military personnel, emergency workers) in community settings. Although analog data collection methods have limitations because they assess the phenomenon of interest indirectly, the think aloud approach has been validated as a way to understand cognitive processes (Ericsson & Simon, 1993).

Despite these limitations, preliminary findings from the present study represent a first step in an innovative line of research on community-based clinician decision-making. This study suggests a number of future directions for research and practice. First and foremost, clinical decision-making practices warrant greater consideration within dissemination and implementation efforts for EBTs. Based on study findings, EBT trained and non-EBT trained clinicians approach case conceptualization and treatment planning differently. Further study of how clinicians are trained within EBT models is warranted, with specific attention to potential meta-cognitive skills. This is especially important as academic programs have begun to implement formal methods for EBP and EBT training; community service programs will need to be equipped to continue this exposure and training for their staff (Leffler, Jackson, West, McCarty, & Atkins, 2012). However, models for ongoing training in EBTs are less well developed (Beidas & Kendall, 2010), and evidence suggests that the vast majority of required clinical supervision in psychotherapy across professional programs continues to be in non-evidence-based practices (Leffler et al., 2012).

Second, targeting clinicians' meta-cognitive skills may help change their behavior in practice. For example, Redelmeier (2005) suggests that merely increasing awareness of common cognitive errors can substantially improve patient care. Increasing the accuracy of clinical decision-making by targeting specific cognitive deficits, coupled with providing clinicians opportunities for practice and feedback, could greatly improve clinical decision-making and enhance the quality of clinical practice. Finally, paying greater attention to process rather than content in training models may be a meaningful approach that is both effective and acceptable to community clinicians.

In summary, within the context of current calls to improve the quality of care in community services through the dissemination and implementation of EBTs, it is important to appreciate that the impact of such training may extend beyond the specific clinical problem targeted by a given EBT program. This study offers preliminary evidence that such training may be associated with meta-cognitive skills specific to clinical decision-making strategies, thus reinforcing the potential benefits of EBTs for improving care more broadly. In addition, with further research and replication, these findings may point to innovative clinician intervention approaches that rely on developing meta-cognitive decision-making strategies. Given discouraging data regarding the average effectiveness of routine mental health care, exploration of a variety of approaches to potentially improve care is needed (Warren, Nelson, Mondragon, Baldwin, & Burlingame, 2010).

Acknowledgements

Support for this work comes from National Institute of Mental Health Mentored Research Scientist Development Award K01-MH69665 (M.B.E.) and National Institute of Mental Health Award R01-MH66070 (A.G.). The authors thank Greg McKoon, BA, Mary Garnand Mueggenborg, MSW, and Cynthia Fuller, BA, for technical support; Scott Roesch, PhD, for statistical consultation; and Lauren Brookman-Frazee, PhD, for her scientific review.

Footnotes

* The term evidence-based treatments (EBT) has supplanted the term empirically supported treatment (EST) because it is believed that the word evidence is more immediately and readily understood by non-psychologists of various medical and psychosocial health disciplines (Silverman & Hinshaw, 2008).

Contributor Information

Melissa M. Jenkins, Email: mmj@ucsd.edu.

Soojin Park, Email: hisoonjinp@gmail.com.

Ann F. Garland, Email: agarland@sandiego.edu.

References

1. Ægisdóttir S, White MJ, Spengler PM, Maugherman AS, Anderson LA, Cook RS, et al. The meta-analysis of clinical judgment project: Fifty-six years of accumulated research on clinical versus statistical prediction. The Counseling Psychologist. 2006;34(3):341–382.
2. American Psychiatric Association. Diagnostic and statistical manual of mental disorders. Revised 4th ed. Washington, DC: American Psychiatric Association; 2000.
3. Anzai Y. Learning and use of representations for physics expertise. Towards a general theory of expertise. 1991:64–92.
4. Bakalis N, Bowman G, Porock D. Decision making in Greek and English registered nurses in coronary care units. International Journal of Nursing Studies. 2003;40(7):749–760. doi: 10.1016/s0020-7489(03)00014-2.
5. Baker-Ericzén M, Hurlburt M, Brookman-Frazee L, Jenkins M, Hough R. Comparing child, parent and family characteristics in usual care and empirically supported treatment research samples for children with disruptive behavior disorders. Journal of Emotional and Behavioral Disorders. 2010;18(2):82–99.
6. Baker-Ericzén M, Jenkins M, Brookman-Frazee L. Clinician and parent perspectives on parent and family contextual factors that impact community mental health services for children with behavior problems. Child and Youth Care Forum. 2010. doi: 10.1007/s10566-010-9111-9.
7. Becker KD, Stirman SW. The science of training in evidence-based treatments in the context of implementation programs: Current status and prospects for the future. Administration and Policy in Mental Health and Mental Health Services Research. 2011:1–6. doi: 10.1007/s10488-011-0361-0.
8. Beidas RS, Kendall PC. Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice. 2010;17(1):1–30. doi: 10.1111/j.1468-2850.2009.01187.x.
9. Benner P, Tanner C, Chesla C. From beginner to expert: Gaining a differentiated clinical world in critical care nursing. Advances in Nursing Science. 1992;14(3):13–28. doi: 10.1097/00012272-199203000-00005.
10. Burns K. Mental models and normal errors. In: Montgomery H, Lipshitz R, Brehmer B, editors. How professionals make decisions. Mahwah, NJ: Lawrence Erlbaum Associates; 2005. pp. 15–28.
11. Boshuizen HPA, Schmidt HG. On the role of biomedical knowledge in clinical reasoning by experts, intermediates and novices. Cognitive Science. 1992;16:153–184.
12. Chapman GB, Sonnenberg FA. Decision making in health care: Theory, psychology, and applications. Cambridge, UK: Cambridge University Press; 2000.
13. Chorpita BF, Becker KD, Daleiden EL. Understanding the common elements of evidence-based practice: Misconceptions and clinical examples. Journal of the American Academy of Child & Adolescent Psychiatry. 2007;46(5):647–652. doi: 10.1097/chi.0b013e318033ff71.
14. Chorpita BF, Daleiden EL. Structuring the collaboration of science and service in pursuit of a shared vision. Journal of Clinical Child and Adolescent Psychology. 2014;43:323–338. doi: 10.1080/15374416.2013.828297.
15. Chorpita BF, Daleiden EL, Weisz JR. Identifying and selecting the common elements of evidence based interventions: A distillation and matching model. Mental Health Services Research. 2005;7(1):5–20. doi: 10.1007/s11020-005-1962-6.
16. Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement. 1960;20(1):37–46.
17. Cohen MS, Freeman JT, Thompson BB. Training the naturalistic decision maker. In: Zsambok CE, Klein G, editors. Naturalistic decision making. Mahwah, NJ: Erlbaum; 1997. pp. 257–268.
18. Croskerry P. Achieving quality in clinical decision making: Cognitive strategies and detection of bias. Academic Emergency Medicine. 2002;9(11):1184–1204. doi: 10.1111/j.1553-2712.2002.tb01574.x.
19. Dawes RM. Experience and validity of clinical judgment: The illusory correlation. Behavioral Sciences & the Law. 2006;7(4):457–467.
20. Dawes RM, Faust D, Meehl PE. Clinical versus actuarial judgment. Science. 1989;243:1668–1674. doi: 10.1126/science.2648573.
21. Elliot T. Expert decision-making in naturalistic environments: A summary of research. Australia: Defence Science and Technology Organisation, Systems Sciences Laboratory; 2005.
22. Elstein AS, Schwarz A. Clinical problem solving and diagnostic decision making: Selective review of the cognitive literature. British Medical Journal. 2002;324:729–732. doi: 10.1136/bmj.324.7339.729.
23. Ericsson KA, Simon HA. Protocol analysis: Verbal reports as data. Revised ed. Cambridge, MA: Bradford Books/MIT Press; 1993.
24. Eyberg SM, Nelson MM, Boggs SR. Evidence-based psychosocial treatments for children and adolescents with disruptive behavior. Journal of Clinical Child & Adolescent Psychology. 2008;37(1):215–237. doi: 10.1080/15374410701820117.
25. Feltovich PJ, Spiro RJ, Coulson RL. Issues of expert flexibility in contexts characterized by complexity and change. The MIT Press; 1997.
26. Fonteyn ME, Kuipers B, Grobe SJ. A description of think aloud method and protocol analysis. Qualitative Health Research. 1993;3(4):430–441.
27. Galanter CA, Patel VL. Medical decision making: A selective review for child psychiatrists and psychologists. Journal of Child Psychology and Psychiatry. 2005;46(7):675–689. doi: 10.1111/j.1469-7610.2005.01452.x.
28. Garb HN. Clinical judgment, clinical training, and professional experience. Psychological Bulletin. 1989;105(3):387–396. doi: 10.1037/0033-2909.105.3.387.
29. Garland AF, Bickman L, Chorpita BF. Change what? Identifying quality improvement targets by investigating usual mental health care. Administration and Policy in Mental Health and Mental Health Services Research. 2010;37(1–2):15–26. doi: 10.1007/s10488-010-0279-y.
30. Garland AF, Hough RL, McCabe KM, Yeh M, Wood PA, Aarons GA. Prevalence of psychiatric disorders in youths across five sectors of care. Journal of the American Academy of Child & Adolescent Psychiatry. 2001;40(4):409–418. doi: 10.1097/00004583-200104000-00009.
31. Glaser RE, Chi MTH. Overview. In: Chi MTH, Glaser RE, Farr MJ, editors. The nature of expertise. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
32. Grasso BC, Rothschild JM, Genest R, Bates DW. What do we know about medication errors in inpatient psychiatry? Joint Commission Journal on Quality and Patient Safety. 2003;29(8):391–400. doi: 10.1016/s1549-3741(03)29047-x.
33. Grove WM. Clinical versus statistical prediction: The contribution of Paul E. Meehl. Journal of Clinical Psychology. 2005;61(10):1233–1244. doi: 10.1002/jclp.20179.
34. Grove WM, Zald DH, Lebow BS, Snitz BE, Nelson C. Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment. 2000;12(1):19–30.
35. Henggeler SW, Schoenwald SK, Borduin CM, Rowland MD, Cunningham PB. Multisystemic treatment of antisocial behaviour in children and adolescents. New York: Guilford Press; 1998.
36. Herschell AD, Kolko DJ, Baumann BL, Davis AC. The role of therapist training in the implementation of psychosocial treatments: A review and critique with recommendations. Clinical Psychology Review. 2010;30(4):448–466. doi: 10.1016/j.cpr.2010.02.005.
37. Hogan MF. New Freedom Commission report: The President's New Freedom Commission: Recommendations to transform mental health care in America. Psychiatric Services. 2003;54(11):1467. doi: 10.1176/appi.ps.54.11.1467.
38. Hutton RJB, Klein G. Expert decision making. Systems Engineering. 1999;2:32–45.
39. Jenkins MM. Cognitive de-biasing and the assessment of pediatric bipolar disorder [Dissertation]. Chapel Hill: Psychology, University of North Carolina at Chapel Hill; 2012.
40. Jenkins MM, Youngstrom EA, Washburn J, Youngstrom JK. Evidence-based strategies improve assessment of pediatric bipolar disorder by community practitioners. Professional Psychology: Research and Practice. 2011;18:121–129. doi: 10.1037/a0022506.
41. Jensen AL, Weisz JR. Assessing match and mismatch between practitioner-generated and standardized interview-generated diagnoses for clinic-referred children and adolescents. Journal of Consulting and Clinical Psychology. 2002;70(1):158–168.
42. Kahneman D, Klein G. Conditions for intuitive expertise: A failure to disagree. American Psychologist. 2009;64(6):515–526. doi: 10.1037/a0016755.
43. Klein G. Developing expertise in decision making. Thinking & Reasoning. 1997;3(4):337–352.
44. Klein G. Naturalistic decision making. Human Factors: The Journal of the Human Factors and Ergonomics Society. 2008;50(3):456–460. doi: 10.1518/001872008X288385.
45. Klein GA. A recognition-primed decision (RPD) model of rapid decision making. In: Klein GA, Orasanu J, Calderwood R, Zsambok CE, editors. Decision making in action: Models and methods. Norwood, NJ: Ablex; 1993. pp. 138–147.
46. Klein GA. Sources of power: How people make decisions. Cambridge, MA: The MIT Press; 1999.
47. Leffler JM, Jackson Y, West AE, McCarty CA, Atkins MS. Training in evidence-based practice across the professional continuum. Professional Psychology: Research and Practice. 2012.
48. Lewczyk CM, Garland AF, Hurlburt MS, Gearity J, Hough RL. Comparing DISC-IV and clinician diagnoses among youths receiving public mental health services. Journal of the American Academy of Child & Adolescent Psychiatry. 2003;42(3):349–356. doi: 10.1097/00004583-200303000-00016.
49. Lloyd FJ, Reyna VF. Clinical gist and medical education: Connecting the dots. Journal of the American Medical Association. 2009;302(12):1332–1333. doi: 10.1001/jama.2009.1383.
50. Mass J. Evidence-based assessment of children with behavioral and emotional disorders. Report of Emotional and Behavioral Disorders in Youth. 2003;3:31–34.
51. McHugh RK, Murray HW, Barlow DH. Balancing fidelity and adaptation in the dissemination of empirically-supported treatments: The promise of transdiagnostic interventions. Behaviour Research and Therapy. 2009;47(11):946–953. doi: 10.1016/j.brat.2009.07.005.
52. Meehl PE. Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Minneapolis: University of Minnesota Press; 1954.
53. Meehl PE. Causes and effects of my disturbing little book. Journal of Personality Assessment. 1986;50(3):370–375. doi: 10.1207/s15327752jpa5003_6.
54. Miles MB, Huberman AM. Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: SAGE Publications; 1994.
55. Nath SB, Marcus SC. Medical errors in psychiatry. Harvard Review of Psychiatry. 2006;14(4):204–211. doi: 10.1080/10673220600889272.
56. Newell A, Simon HA. Human problem solving. Englewood Cliffs, NJ: Prentice-Hall; 1972.
57. Ollendick TH, King NJ. Empirically supported treatments for children and adolescents: Advances toward evidence-based practice. In: Barrett PM, Ollendick TH, editors. Handbook of interventions that work with children and adolescents: Prevention and treatment. West Sussex, England: John Wiley & Sons; 2004. pp. 3–25.
58. Pappadopulos E, Jensen PS, Schur SB, MacIntyre JC II, Ketner S, Van Orden K, et al. "Real world" atypical antipsychotic prescribing practices in public child and adolescent inpatient settings. Schizophrenia Bulletin. 2002;28(1):111–121. doi: 10.1093/oxfordjournals.schbul.a006913.
59. Patel VL, Arocha JF. The nature of constraints on collaborative decision making in health care settings. In: Salas E, Klein G, editors. Linking expertise and naturalistic decision making. Mahwah, NJ: Lawrence Erlbaum Associates; 2001. pp. 383–405.
60. Patel VL, Arocha JF, Kaufman DR. Diagnostic reasoning and medical expertise. The Psychology of Learning and Motivation: Advances in Research and Theory. 1994;31:187–252.
61. Patel VL, Arocha JF, Kaufman DR. A primer on aspects of cognition for medical informatics. Journal of the American Medical Informatics Association. 2001;8(4):324–343. doi: 10.1136/jamia.2001.0080324.
62. Patel VL, Kaufman DR, Arocha JF. Emerging paradigms of cognition in medical decision-making. Journal of Biomedical Informatics. 2002;35(1):52–75. doi: 10.1016/s1532-0464(02)00009-6.
63. Redelmeier DA. The cognitive psychology of missed diagnoses. Annals of Internal Medicine. 2005;142(2):115–120. doi: 10.7326/0003-4819-142-2-200501180-00010.
64. Sackett DL, Rosenberg WMC, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: What it is and what it isn't. British Medical Journal. 1996;312:71–72. doi: 10.1136/bmj.312.7023.71.
65. Schoenwald SK, Garland AF, Chapman JE, Frazier SL, Sheidow AJ, Southam-Gerow MA. Toward the effective and efficient measurement of implementation fidelity. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):32–43. doi: 10.1007/s10488-010-0321-0.
66. Serfaty D, MacMillan J, Entin EE, Entin EB. The decision-making expertise of battle commanders. Naturalistic decision making. 1997:233–246.
67. Silverman WK, Hinshaw SP. The second special issue on evidence-based psychosocial treatments for children and adolescents: A 10-year update. Journal of Clinical Child & Adolescent Psychology. 2008;37(1):1–7. doi: 10.1080/15374410701818293.
68. Simon HA. What is an "explanation" of behavior? Psychological Science. 1992;3(3):150–161.
69. Spring B. Health decision making: Lynchpin of evidence-based practice. Medical Decision Making. 2008;28(6):866–874. doi: 10.1177/0272989X08326146.
70. Stahl C, Klauer KC. A simplified conjoint recognition paradigm for the measurement of gist and verbatim memory. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2008;34(3):570–586. doi: 10.1037/0278-7393.34.3.570.
71. Tabachnick B, Fidell L. Using multivariate statistics. 5th ed. Boston: Allyn and Bacon; 2007.
72. Trask EV, Garland AF. Are children improving? Results from outcome measurement in a large mental health system. Administration and Policy in Mental Health and Mental Health Services Research. 2012;39(3):210–220. doi: 10.1007/s10488-011-0353-0.
73. Tversky A, Kahneman D. Judgment under uncertainty: Heuristics and biases. Science. 1974;185:1124–1131. doi: 10.1126/science.185.4157.1124.
74. Van Someren MW, Barnard YF, Sandberg JAC. The think aloud method: A practical guide to modelling cognitive processes. London: Academic Press; 1994.
75. Warren JS, Nelson PL, Mondragon SA, Baldwin SA, Burlingame GM. Youth psychotherapy change trajectories and outcomes in usual care: Community mental health versus managed care settings. Journal of Consulting and Clinical Psychology. 2010;78(2):144–155. doi: 10.1037/a0018544.
76. Webster-Stratton C. Randomized trial of two parent-training programs for families with conduct-disordered children. Journal of Consulting and Clinical Psychology. 1984;52(4):666–678. doi: 10.1037//0022-006x.52.4.666.
77. Weisz JR. Psychotherapy for children and adolescents: Evidence-based treatments and case examples. Cambridge, UK: Cambridge University Press; 2004.
78. Weisz JR, Jensen AL. Child and adolescent psychotherapy in research and practice contexts: Review of the evidence and suggestions for improving the field. European Journal of Child and Adolescent Psychiatry. 2001;10:12–18. doi: 10.1007/s007870170003.
79. Weisz JR, Jensen-Doss A, Hawley KM. Evidence-based youth psychotherapies versus usual clinical care: A meta-analysis of direct comparisons. American Psychologist. 2006;61(7):671–689. doi: 10.1037/0003-066X.61.7.671.
80. Witteman CLM, van der Bercken JHL. Intermediate effects in psychodiagnostic classification. European Journal of Psychological Assessment. 2007;23:56–61.
81. Whyte G. Perspectives on naturalistic decision making from organizational behavior. Journal of Behavioral Decision Making. 2001;14(5):383–384.
82. Youngstrom EA, Freeman AJ, Jenkins MM. The assessment of children and adolescents with bipolar disorder. Child and Adolescent Psychiatric Clinics of North America. 2009;18(2):353–390, viii–ix. doi: 10.1016/j.chc.2008.12.002.
