Published in final edited form as: Adm Policy Ment Health. 2016 Nov;43(6):945–956. doi: 10.1007/s10488-015-0709-y

What Predicts Clinician Dropout from State-Sponsored Managing and Adapting Practice Training

S Serene Olin 1, Erum Nadeem 1, Alissa Gleacher 1, James Weaver 1, Dara Weiss 1, Kimberly E Hoagwood 1,2, Sarah McCue Horwitz 1

Abstract

Dropout from system-wide evidence-based practice (EBP) trainings is high, yet few studies have examined what predicts it. This study examined multilevel predictors of clinician dropout from a statewide training on the Managing and Adapting Practice (MAP) program. Extra-organizational structural variables, intra-organizational variables, and clinician variables were examined. State administrative data and prospectively collected clinician participation data were used to predict dropout in a multivariable logistic regression analysis. Two characteristics were predictive: younger clinicians, and clinicians practicing in upstate-rural rather than downstate-urban areas, were less likely to drop out of training. Implications for research and policy are described.

Keywords: EBP training, Managing and adapting practice (MAP) system, Predictors of clinician dropouts, Children’s services

Introduction

Substantial investment by states to implement evidence-based practices (EBPs) has focused on workforce development (Bruns et al. 2008; Gleacher et al. 2011; Kerker et al. 2014). Despite this, successful dissemination of EBPs has been slow and the promised benefits of EBPs have not been consistently realized (Fixsen et al. 2013; McHugh and Barlow 2010). Nationally, strategies for scaling up EBPs have not been guided by empirical knowledge from implementation science (McHugh and Barlow 2010). A challenging aspect of EBP implementation relates to clinician training, in part because of the complexity of psychosocial treatments (McHugh and Barlow 2010). While didactic training is usually completed as planned, training clinicians to proficiency and maintaining that proficiency has been much less successful (Beidas and Kendall 2010; Beidas et al. 2011; Bellg et al. 2004). Hence, efforts to promote EBPs have had limited impact on frontline provider practice (Stewart et al. 2012).

To date, the majority of efforts to understand EBP uptake in child and adolescent mental health have focused on the active implementation phase, when clinician training generally occurs. This phase is distinguished from the earlier phases of exploration and preparation, and the later phase of sustainment (Aarons et al. 2011; Novins et al. 2013). Limited attention has focused on understanding the factors that motivate clinicians to participate in, or drop out of, non-mandated system-wide EBP trainings (Powell et al. 2014), and training dropout rates are quite high. Among practitioners serving youth in community-based clinics, commonly cited reasons for dropout from training include change in employment, maternity leave, and family issues (Chorpita and Daleiden 2014; Gleacher et al. 2011). In New York State (NYS), among clinicians who did not prematurely quit, completion rates in early statewide EBP trainings were slightly under 80 % (Gleacher et al. 2011). In another large-scale clinician training effort, data available for 504 of the more than 1700 clinicians who started training showed that only 74 % of these 504 clinicians were trained to proficiency (Southam-Gerow et al. 2014).

Theoretical models of implementation and of work-related training suggest that the reasons for failed effectiveness, in this case clinician dropout, may be complex (Aarons et al. 2011; Sitzmann and Weinhardt 2015; Wisdom et al. 2014). Within mental health, efforts to understand the interplay among multilevel factors that influence EBP implementation and training effectiveness at a system level are very limited (Olin et al. 2015). As counties, states, and the federal government continue to invest heavily in workforce development (Hoagwood et al. 2014; NCTSN 2015; Southam-Gerow et al. 2014), understanding what factors influence participation in, and dropout from, EBP trainings can provide critical information to the systems that sponsor these trainings (both funders and the employers of clinicians) about judicious selection of clinicians who are most likely to complete training and employ EBPs in their practices. Targeting training to those more likely to complete and apply it may also avoid costly missteps in efforts to improve care quality (Saldana et al. 2012; Wisdom et al. 2014). We thus sought to expand the knowledge base on clinician participation in a system-wide EBP training by examining factors associated with participation, and more specifically dropout from training, using prospectively collected data from a New York State training initiative on the Managing and Adapting Practice (MAP) system developed by Chorpita and Daleiden (2014).

Context for MAP Training in New York State

NYS has been a frontrunner in providing training and consultation on a range of EBPs for community-based providers. In 2006, the NYS Office of Mental Health (OMH) established an evidence-based treatment (EBT) training center for mental health providers statewide. In its first three years, the Evidence Based Treatment Dissemination Center (EBTDC) provided free training to 1210 community-based providers. The first four years of training focused on specific EBPs, including cognitive-behavioral therapy (CBT) for post-traumatic stress disorder and depression, and parent training for disruptive behavior disorders (Gleacher et al. 2011). To date, evaluation of the EBTDC has focused on clinician attendance at trainings. In the first two years of EBTDC training, approximately 80 % of clinicians who did not prematurely quit completed the training requirements, which included 75 % attendance on consultation calls and presentation of two cases; these completion rates decreased to 58 and 49 %, respectively, when clinicians who prematurely quit training were included in the denominator (Gleacher et al. 2011). EBTDC most recently selected the Managing and Adapting Practice (MAP) system as an evidence-informed model of care (Chorpita and Daleiden 2014).

MAP was selected for several reasons. First, MAP provides broader coverage of client populations than single-disorder EBPs, allowing clinicians to apply the evidence base to the wider range of clients seen in practice. Second, as organizations prepare to meet the challenges of a changing behavioral healthcare system focused on accountability and outcomes, clinician-friendly decision support tools that promote effective practices with measurable outcomes will become increasingly important. Third, MAP is currently being implemented in a number of mental health organizations and counties across the United States (e.g., Southam-Gerow et al. 2014), suggesting a potential for scalability. Thus, the MAP system is seen as an opportunity to improve clinical outcomes, enhance accountability, and increase the knowledge and skills of the workforce.

The MAP System

The MAP system is an evidence-informed approach to providing mental health services to youth. It is designed to guide and support practitioners in the selection, review, adaptation, or construction of promising treatments matched to particular child characteristics, based on the latest scientific findings. Specifically, MAP consists of (i) the PracticeWise Evidence-Based Services database, an online database that recommends formal evidence-based programs, or specific components of evidence-based treatments, based on the clinical problem and client characteristics; (ii) Practitioner Guides, which describe a broad range of evidence-informed clinical interventions and their components in a user-friendly way (the majority of these guides cover cognitive-behavioral and psychoeducational approaches); and (iii) a Clinical Dashboard, an Excel-based graphic display that tracks and monitors outcomes and the associated practices.

Study Goals

In this study, NYS administrative data and prospectively collected data on clinician training participation were combined to assess multilevel factors associated with clinician participation in, and more specifically dropout from, MAP training within the NYS children’s mental health system. Following the multilevel frameworks of Wisdom et al. (2014) and Sitzmann and Weinhardt (2015), we examined extra-organizational structural variables (e.g., region, urbanicity, clinic affiliation), intra-organizational variables (e.g., clinic, provider, and client profiles), and individual clinician variables (e.g., age, education, perception of work context) as predictors of MAP training dropout across the state. We hypothesized that variables more proximal to the individual clinician (e.g., practice characteristics and perception of organizational functioning) would be more predictive of clinician behavior than more distal system or extra-organizational variables, such as region or urbanicity.

We did not develop specific hypotheses about dropout based on the MAP system itself. While we expected better receptivity to MAP because of its broader coverage of client populations and its clinician-friendly decision support tools, the Excel-based clinical dashboard adds a technological demand that could reduce ease of use. We therefore did not expect dropout from MAP training to be either higher or lower than from other EBP trainings.

Method

MAP Rollout

In the first NYS rollout of MAP (September 2013–June 2014), training was made available, at no cost to clinicians or agencies, to all clinicians (including supervising clinicians) working in OMH-licensed clinics, using a train-the-trainer model. Two CBT experts at the New York University (NYU) Child Study Center developed proficiency with MAP through training, coaching, and consultation with PracticeWise LLC, the purveyor of MAP. In close consultation with the MAP developers, the NYS training protocol was modeled after the PracticeWise training model. Training included a five-day core MAP training, with two in-person training days book-ending three webinar-based trainings on common youth disorders (e.g., anxiety, depression, post-traumatic stress disorder, disruptive behavior). The in-person trainings were conducted across five regions of the state. The core training was followed by nine months of bi-weekly consultation calls with the NYU trainers in groups of eight to ten clinicians. In NYS, additional clinic-based support was provided through monthly MAP supervisor calls, which focused on supporting in-house supervisors in addressing clinical issues related to MAP implementation.

Certification was provided by EBTDC to participants who completed at least 27.5 h of core NYS MAP training plus at least 12 h (approximately 70 %) of consultation calls directed by an EBTDC trainer over the nine-month period. In addition, clinicians had to demonstrate proficiency by providing services to at least two clients using 10 distinct components of evidence-based practices; the EBTDC trainer provided ongoing consultation to facilitate clinician proficiency in using MAP. Certification also required submission of a MAP therapist portfolio, an achievement-based system for tracking and evaluating the experience and proficiency of clinicians using the MAP system. These certification criteria were adapted from PracticeWise for the NYS EBTDC. No incentives for participation were provided by the state.

Sample

One hundred eighty-six individuals registered for NYS MAP training, of whom 154 attended the first day of training. Of the 154 who started the training, 140 provided consent to participate in the study. These individuals included 91 clinicians, 27 supervisors, 9 clinic directors, 7 interns, and 6 who did not disclose their roles. The analytic sample excluded 13 individuals who were administrators (6 clinic directors and 7 supervisors) and did not provide direct services. Our final analytic sample thus included 127 eligible MAP participants. These participants represented 34 clinics across the state; the average number of participants per clinic was 3.9 (SD = 3.5).

Because clinicians were nested within clinics (ICC = 0.16), we assessed whether to employ random-effects models to account for the unbalanced panel structure of the data and potential unobserved heterogeneity. First, we tested whether clinician “nestedness” (i.e., whether a clinician was one of several associated with a clinic vs. the single clinician from an agency) was associated with clinician dropout using χ2 analysis. Second, we ran our modeling procedure with and without clinic-identifier random effects. Although dropout did not differ between nested and non-nested clinicians, and results did not differ between the two modeling approaches, we report the random-effects model on the grounds that the intra-class correlation coefficient was large enough to justify the added model complexity.
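As context for the reported ICC, a random-intercept logistic model conventionally defines the intra-class correlation on the latent logistic scale; the formula below is the standard textbook version, not one stated by the authors:

$$\mathrm{ICC} = \frac{\sigma_u^2}{\sigma_u^2 + \pi^2/3}$$

where $\sigma_u^2$ is the clinic-level intercept variance and $\pi^2/3 \approx 3.29$ is the variance of the standard logistic residual.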

Data Sources

Study data were extracted and merged from five sources. First, participating clinicians completed the MAP study survey measures on Day 1, prior to the start of training; these measures included items on demographics, clinician practice characteristics, perceptions of their clinic program’s climate, and work attitudes. Second, attendance logs from EBTDC provided data on clinician attendance, number of MAP cases, and portfolio submission. Third, the training attendance dataset compiled by the Community Technical Assistance Center (CTAC), another training, consultation, and educational resource center supporting mental health clinics in NYS, provided data on the prior EBP adoption behavior of the clinics at which the participating clinicians worked (Chor et al. 2014a). Fourth, the 2011 NYS OMH Patient Characteristics Survey (PCS; bi.omh.ny.gov/pcs/index) provided data on clinic client profiles, including “snapshot” data collected during a one-week period in October 2011 on the client populations served by OMH-licensed clinics. Fifth, the U.S. Department of Health and Human Services Area Health Resources Files (AHRF 2014) provided county demographic data.

Measures

Outcome

To align with the NYS MAP certification requirements described above, dropouts in this study were defined as clinicians who attended fewer than 70 % of consultation calls and did not submit a MAP portfolio for certification within 18 months of the inception of NYS MAP training in September 2013. All clinicians who began consultation calls had attended the requisite number of in-person training hours. Of the 74 clinicians who attended fewer than 70 % of consultation calls, 10 later successfully submitted a MAP portfolio for certification and therefore did not meet the criteria for dropout employed in this analysis. Sixty-five clinicians (51.2 %) met the criteria for MAP dropout. On average, clinicians who dropped out attended only a third of the required calls (33.4 ± 23.5 %, vs. 83.3 ± 7.9 % among completers) and had fewer MAP cases than completers (0.78 ± 0.96 vs. 2.3 ± 1.3, p < .001).
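As a concrete illustration, the dropout rule reduces to a simple boolean condition. The Python sketch below renders that definition on hypothetical records; the data frame and column names are illustrative placeholders, not the study’s actual variables.

```python
import pandas as pd

# Hypothetical participation records; column names are invented for illustration.
records = pd.DataFrame({
    "clinician_id": [1, 2, 3],
    "pct_calls_attended": [0.33, 0.85, 0.60],      # share of consultation calls attended
    "portfolio_within_18mo": [False, True, True],  # MAP portfolio submitted by month 18
})

# Dropout = attended fewer than 70 % of consultation calls AND never submitted
# a MAP portfolio within 18 months of training inception.
records["dropout"] = (
    (records["pct_calls_attended"] < 0.70) & ~records["portfolio_within_18mo"]
)
print(records)  # clinician 3 is not a dropout: low attendance, but a portfolio was submitted
```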

Clinician Demographics

Socio-demographic information included age, ethnicity (categorized as non-white vs. white), gender (female, male), and education (bachelor’s, master’s, doctorate). There were substantial missing data on age (24 %); a simple imputation method was used to retain these observations for later modeling. If a clinician was missing age data, the mean age of the other clinicians in their agency was imputed, provided there were two or more; otherwise, the overall sample mean was imputed.
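A minimal sketch of this two-step imputation rule follows, assuming invented variable names and interpreting “fewer than two clinicians” as fewer than two clinic colleagues with observed ages.

```python
import pandas as pd

# Illustrative data; 'agency' and 'age' stand in for the study's variables.
df = pd.DataFrame({
    "agency": ["A", "A", "A", "B", "B", "C"],
    "age":    [30.0, None, 40.0, None, 28.0, None],
})

sample_mean = df["age"].mean()  # overall mean over observed ages

def fill_age(ages: pd.Series) -> pd.Series:
    """Impute the agency mean if two or more colleagues reported an age;
    otherwise fall back to the overall sample mean."""
    observed = ages.dropna()
    fill = observed.mean() if len(observed) >= 2 else sample_mean
    return ages.fillna(fill)

df["age_imputed"] = df.groupby("agency")["age"].transform(fill_age)
print(df)
```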

Clinician Practice Characteristics

Clinician practice characteristics included employment status (full-time or part-time), licensure status (yes/no), weekly direct client contact hours, minutes of clinical supervision received weekly, number of clients on the current caseload, and number of MAP cases. Because MAP relies heavily on technology, clinicians were also asked to rate their experience with, and attitudes towards the utility of, information technology and computers on a 13-item measure (e.g., “Using computers and IT is a useful tool in clinical practice,” rated on a 4-point scale from Strongly Disagree to Strongly Agree). This technology measure was derived from a longer measure of technology adoption (Richardson 2011); its Cronbach’s alpha in our sample was 0.91. Because actual behavior and skill in using scientific knowledge may relate to dropout from NYS MAP training, clinicians’ self-reported use of, and skill in using, scientific resources to guide clinical practice was assessed with four items. Two items (“Do you refer to scientific resources to guide clinical practice?” and “In the past month, did you refer to scientific resources to guide clinical practice?”) were rated on a 7-point scale from With No Patients to With All Patients; one item (“Compared to your colleagues, do you use scientific resources to guide clinical practice [e.g., web-based resources, treatment guides/manuals, textbooks] more often than they do?”) was rated on a 7-point scale from No, Certainly Not to Yes, Certainly; and one item (“How strongly would you rate your skills in referring to scientific resources to guide clinical practice?”) was rated on a 7-point scale from Not At All to Very Strong. Cronbach’s alpha for this measure was 0.68.

Clinician Perception of Organizational Functioning and Work Attitudes

Four complete domains (76 items) from the Texas Christian University Survey of Organizational Functioning (TCU-SOF; Lehman et al. 2002) were used to assess individual clinicians’ perceptions of their clinic program’s functioning prior to MAP implementation. The four domains comprised 13 subscales: Staff Attributes (growth, influence, efficacy, adaptability), Job Attitudes (job satisfaction, burnout, leadership), Organizational Climate (mission, cohesion, autonomy, communication, stress, change), and Workplace Practices (focus on outcomes). Because the TCU-SOF was developed for drug treatment facilities, modifications in language (not content) were made to fit the mental health clinic context. All items were rated on a 5-point scale from Disagree Strongly to Agree Strongly. Individual-level scores, representing each clinician’s perception of organizational functioning, were used in this study. Scores for each subscale were obtained by averaging the responses to its set of items; each subscale score was then multiplied by 10 so that the final score ranged from 10 to 50 (e.g., an average response of 2.6 on a scale became a score of 26). Cronbach’s alpha for the 13 subscales ranged from 0.89 to 0.99.
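The scoring rule is simply a rescaled mean; a short illustration follows, with invented item names standing in for the actual TCU-SOF items.

```python
import pandas as pd

# Toy responses to one TCU-SOF subscale (items rated 1-5); item names are invented.
items = pd.DataFrame({
    "cohesion_1": [4, 2],
    "cohesion_2": [5, 3],
    "cohesion_3": [3, 3],
})

# Subscale score = mean of the item responses, multiplied by 10 (range 10-50).
cohesion_score = items.mean(axis=1) * 10  # e.g., a mean response of 2.6 -> score of 26
print(cohesion_score)
```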

Clinic-Level Variables

Clinic-level innovation, a marker of organizational openness to adopting new practices, was measured by the extent to which clinics had previously participated in EBP trainings (none, low participation, high participation). Low-participation clinics had adopted only hour-long lunchtime webinars, while high-participation clinics had attended at least one in-person training event (Chor et al. 2014a; Olin et al. 2015). The clinics’ client profiles from the PCS served as proxies for innovation-values fit (Klein and Sorra 1996) and were measured by the proportions of clients under 18 years, with severe emotional disturbances (SED), and whose visits were paid for by Medicaid or Medicaid Managed Care (MMC) insurance.

Extra-Organizational Variables

Clinics were originally categorized by OMH administrative region (downstate, comprising New York City and Long Island; upstate, comprising the Central, Hudson, and Western regions) and as rural or urban based on AHRF county rural–urban continuum codes. However, because downstate NYS is entirely urban, a new variable denoting region-urbanicity was created with three categories: downstate-urban, upstate-urban, and upstate-rural. Clinics are associated with parent agencies that operate as community-based, hospital-based, or state-operated (i.e., OMH) facilities.

Statistical Analyses

Analyses were performed using Stata version 11.2 (StataCorp 2009). Descriptive statistics were used to characterize participating clinicians, their practice characteristics and perceptions of organizational climate and work attitudes, and the clinics within which they worked. Chi square, Fisher’s exact, and t-tests were used to compare clinicians and their respective clinics by dropout status.

Multivariable logistic regression (yielding adjusted odds ratios, AORs) with clinic-level random effects was used to assess the independent fixed effects of the predictor variables on clinician dropout. Fixed-effects independent variables were selected for model entry if they were associated with dropout in bivariate analyses at p < .25. We chose this relaxed inclusion criterion because of the relatively small sample size and to protect against excluding potentially important variables whose unadjusted associations with dropout may be confounded (Hosmer and Lemeshow 2000). Independent variables were entered into the model sequentially, in ordered categories: extra-organizational variables, clinic-level variables, clinician practice characteristics, and clinician demographics. We excluded two variables from the tested model even though they met our inclusion criteria in bivariate analyses. The first was the number of MAP cases, which was effectively synonymous with dropout, our outcome variable (t = 7.228, p < .001). The second, proportion of youth served, was excluded primarily because approximately 25 % of client-level data were missing for this study sample. Prior work has shown that the proportion of youth served differs significantly by clinic affiliation (Olin et al. 2015). For example, within the population of 346 child-serving clinics in NYS, OMH facilities on average served a higher proportion of youth (68.7 vs. 39.6 %; t = −3.13; df = 325; p < .01) and of youth with serious emotional disturbance (76.3 vs. 35.0 %; t = −4.77; df = 306; p < .001) than other child-serving clinics. We thus considered clinic affiliation, for which we had complete data, a reasonable proxy for clientele served.

Akaike Information Criterion (AIC) values were used to determine whether the information gained at each successive variable entry compensated for the increased model complexity and potential over-fitting. Once the final model was determined, we also created a reduced model by removing statistically non-significant variables, to maximize the analytic sample size and produce a more parsimonious, interpretable model. Hypothesis tests were two-sided with alpha set to 0.05. We conducted post hoc analyses that examined all candidate predictor variables for differences by the statistically significant variables retained in the model: age (categorized as <37 vs. ≥37 years) and region-urbanicity.
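Purely as an illustration of the sequential-entry and AIC-comparison logic described in the two preceding paragraphs, the Python sketch below fits the nested models on synthetic stand-in data. The study itself fit random-effects logits in Stata; this sketch uses plain (fixed-effects) logistic regression and placeholder variable names, omitting the clinic-level random intercept for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; all variable names are placeholders, not the study's dataset.
rng = np.random.default_rng(0)
n = 127
df = pd.DataFrame({
    "dropout": rng.integers(0, 2, n),
    "region_urbanicity": rng.choice(["downstate_urban", "upstate_urban", "upstate_rural"], n),
    "affiliation": rng.choice(["community", "hospital", "omh"], n),
    "ebp_adoption": rng.choice(["none", "low", "high"], n),
    "tech_experience": rng.normal(39, 6, n),
    "age": rng.normal(37, 11, n),
})

# Enter variable blocks in the study's order: extra-organizational, clinic level,
# clinician practice characteristics, clinician demographics.
formulas = [
    "dropout ~ C(region_urbanicity) + C(affiliation)",                                            # Step 1
    "dropout ~ C(region_urbanicity) + C(affiliation) + C(ebp_adoption)",                          # Step 2
    "dropout ~ C(region_urbanicity) + C(affiliation) + C(ebp_adoption) + tech_experience",        # Step 3
    "dropout ~ C(region_urbanicity) + C(affiliation) + C(ebp_adoption) + tech_experience + age",  # Step 4
]

for formula in formulas:
    result = smf.logit(formula, data=df).fit(disp=0)
    # A lower AIC means the newly entered block adds enough information
    # to justify the extra parameters.
    print(f"AIC = {result.aic:7.1f}  {formula}")
```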

Missing data were considerable, so we compared model results on the observed data with those from an imputed dataset. We used a random forest-based missing value imputation method (Waljee et al. 2013), which predicts the missing values of a variable from the other variables, training on the cases where that variable is observed. The observed and imputed models did not differ, so we present results generated from the data as collected, with the exception of age, which was imputed as described above.
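The study cites Waljee et al. (2013), who evaluated missForest-style imputation; a comparable, commonly used stand-in is scikit-learn’s IterativeImputer with random-forest estimators, sketched below on synthetic data. This is an analogous technique for illustration, not the authors’ exact procedure.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# Synthetic numeric predictor matrix with ~20 % missingness; in the study this
# would be the candidate predictor variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(127, 4))
X[rng.random(X.shape) < 0.2] = np.nan

# Iteratively model each variable with missing values as a function of the
# other variables, using a random forest as the per-variable regressor.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10,
    random_state=0,
)
X_imputed = imputer.fit_transform(X)
```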

Results

Table 1 displays clinician demographics, associated practice characteristics, and perceptions of organizational functioning. Clinicians were on average 37.4 ± 10.8 years old, predominantly white (62.5 %) and female (83.9 %), and most held master’s degrees (87.6 %). The majority worked full-time (94.8 %) and were licensed (93.8 %); on average, they spent 20.9 ± 9.0 h per week in direct client contact, received 57.5 ± 22.4 min of weekly clinical supervision, and carried a caseload of 37.6 ± 43.4 clients. On average, clinicians endorsed mid-range experience with technology (39.4 ± 6.4) and use of scientific resources (17.9 ± 4.0); they averaged 1.5 ± 1.4 MAP cases.

Table 1.

Clinician demographics and associated practice and clinic characteristics

Columns, left to right: Overall (N = 127); Non-dropouts (n = 62; 48.82 %); Dropouts (n = 65; 51.18 %); p. Within each group, values are N and % (or M ± SD where noted).
Clinician demographics
 Age (M ± SD), missing = 31   37.4 10.8 32.7   8.1 41.6 11.3 <0.001a
 Ethnicity, missing = 31   0.225a
  Non-white   36 37.5 14 31.1 22 43.1
  White   60 62.5 31 68.9 29 56.9
 Gender, missing = 3   0.741
  Female 104 83.9 51 85 53 82.8
  Male   20 16.1   9 15 11 17.2
 Education, missing = 30   0.311
  Bachelor’s degree     4   4.1   1   2.2   3   5.9
  Doctorate/physician     8   8.3   2   4.4   6 11.8
  Master’s degree   85 87.6 43 93.5 42 82.4
Clinician practice characteristics
 Employment status, missing = 31   0.367
  Full-time   91 94.8 44 97.8 47 92.2
  Part-time     5   5.2   1   2.2   4   7.8
 Licensed, missing = 30   0.68
  No     6   6.2   2   4.4   4   7.8
  Yes   91 93.8 44 95.7 47 92.2
 Direct client contact hours (M ± SD), missing = 32   20.9   9.0 21.5   8.3 20.4   9.7   0.541
 Number of clients (M ± SD), missing = 39   37.6 43.4 40.4 52.3 35.1 33.7   0.568
 Minutes clinical supervision session (M ± SD), missing = 33   57.5 22.4 58.2 28.3 56.8 15.6   0.763
 Experience with technology (M ± SD), range 13–51, missing = 17   39.4   6.4 40.2   7.1 38.5   5.5   0.1801a
 Use of scientific resources (M ± SD), range 5–28, missing = 10   17.9   4.0 17.9   3.8 17.9   4.1   0.9804
 MAP cases     1.5   1.4   2.3   1.3   0.8   1.0 <0.001b
Clinician perception of organizational climate/work attitudes
 Staff attributes, missing = 9
  Growth (M ± SD), range 14–48   35.1   6.8 34.3   7.2 35.8   6.4   0.2602
  Efficacy (M ± SD), range 24–50   39.9   4.0 40.1   4.2 39.7   3.9   0.5799
  Influence (M ± SD), range 18–50   36.3   6.0 36.0   6.1 36.5   5.9   0.6421
  Adaptability (M ± SD), range 25–48   37.6   4.6 38.1   4.3 37.2   5.0   0.26
 Job attitudes, missing = 9
  Burnout (M ± SD), range 13–42   25.8   6.2 25.3   6.0 26.4   6.5   0.337
  Satisfaction (M ± SD), range 25–50   40.3   5.2 40.0   4.9 40.7   5.6   0.4738
  Director leadership (M ± SD), range 20–50, missing = 10   38.5   6.4 39.1   6.4 38.0   6.4   0.364
 Organizational climate, missing = 9
  Mission (M ± SD), range 18–50   36.3   6.2 36.7   5.6 35.9   6.7   0.4476
  Cohesion (M ± SD), range 17–50   38.7   6.4 39.2   6.2 38.3   6.7   0.4737
  Autonomy (M ± SD), range 18–50   35.4   5.2 35.5   4.7 35.3   5.6   0.8486
  Communication (M ± SD), range 10–50   33.4   7.4 33.9   7.4 33.0   7.3   0.5147
  Stress (M ± SD), range 18–50   33.7   8.2 32.9   8.4 34.5   8.0   0.3017
  Change (M ± SD), range 18–48   34.6   5.6 34.6   5.5 34.6   5.7   0.9371
 Workplace practices, missing = 9
  Focus on outcomes (M ± SD), range 20–50   37.2   6.6 37.0   7.2 37.3   6.0   0.828
Clinic innovation
 Prior EBP training participation   0.112a
  None   37 29.13 16 25.8 21 32.3
  Low   50 39.37 21 33.9 29 44.6
  High   40 31.5 25 40.3 15 23.1
Clinic client profile
 % clients under 18 years (M ± SD), missing = 27   71.4 34.1 76.6 34.1 65.3 33.3   0.098b
 % Medicaid and MMC visits (M ± SD), missing = 32   51.1 11.3 51.0   8.4 51.3 13.7   0.915
 % clients with severe emotional disturbance (SED) (M ± SD), missing = 30   39.8 22.5 40.2 23.9 39.3 21.0   0.846
Extra-organizational characteristics
 Region/urbanicity   0.108a
  Downstate urban   89 70.1 39 62.9 50 76.9
  Upstate rural   12   9.5   9 14.5   3   4.6
  Upstate urban   26 20.5 14 22.6 12 18.5
 Affiliation   0.14a
  Community   82 64.6 41 66.1 41 63.1
  Hospital   23 18.1 14 22.6   9 13.9
  OMH facility   22 17.3   7 11.3 15 23.1

Significant p values are bolded

a Variables that meet criteria for inclusion into our model

b Eligible variable excluded from the model to avoid multicollinearity

Individual clinician perception scores on staff attributes, job attitudes, organizational climate, and workplace practices were generally in the mid-range. Notably, clinician job attitudes in this sample were generally positive, with relatively low burnout (25.8 ± 6.2) and high satisfaction (40.3 ± 5.2). Over half of the clinicians (54.3 %) came from innovative clinics that were high adopters of prior EBP training initiatives. Clinicians came from clinics that, on average, served a large proportion of youth (71.4 ± 34.1 % under 18 years old), of whom 39.8 ± 22.5 % were classified as having severe emotional disturbance; 51.1 ± 11.3 % of visits were insured by Medicaid or Medicaid Managed Care. The clinicians’ agencies were located mostly in downstate-urban regions (70.1 %), followed by upstate-urban regions (20.5 %); the majority of clinicians came from community-based clinics (64.6 %), with the remainder roughly evenly split between hospital-based clinics (18.1 %) and OMH facilities (17.3 %).

Dropouts vs. Completers

Approximately half of the clinicians (51.2 %) who started NYS MAP training dropped out. In bivariate analyses, only two variables significantly differentiated dropouts from non-dropouts: dropouts were significantly older (41.6 ± 11.3 vs. 32.7 ± 8.1 years; t = −4.4; df = 94; p < .001) and presented fewer MAP cases on consultation calls (0.8 ± 1.0 vs. 2.3 ± 1.3; p < .001) (Table 1).

Table 2 presents the sequential random-effects logistic regressions modeling clinician dropout as a function of extra-organizational variables, clinic-level variables, clinician practice characteristics, and clinician demographics. Region-urbanicity and clinician age were the only significant predictors of dropout; clinic innovation (i.e., EBP adopter status) and clinician practice characteristics were not associated with dropout. The reduced random-effects logistic regression model in Table 2 describes clinician dropout status as a function of agency region-urbanicity and clinician age. Clinicians at upstate-rural clinics had reduced odds of dropping out relative to clinicians at downstate-urban clinics (AOR = 0.11, 95 % CI 0.02–0.77); equivalently, clinicians at downstate-urban clinics had about 9.1 times the odds of dropping out (1/0.11 ≈ 9.1). Each additional year of clinician age was associated with 1.13 times the odds of dropping out (AOR = 1.13, 95 % CI 1.06–1.19).

Table 2.

Stepwise random effects logistic regression model of clinician dropout

Columns, left to right: Step 1 (extra-organizational variables); Step 2 (extra-organizational + clinic EBP adoption); Step 3 (Step 2 + clinician practice characteristics); Step 4 (Step 3 + clinician demographics); Reduced model (region-urbanicity and imputed age). For each model, values are AOR, p, and the 95 % CI (lower, upper), in that order.
Region/urbanicity
 Downstate urban Ref. Ref. Ref. Ref. Ref.
 Upstate rural 0.26 0.112 0.05 1.37 0.31 0.160 0.06 1.59 0.34 0.183 0.07 1.66 0.17 0.045 0.03 0.96 0.11 0.026 0.02 0.77
 Upstate urban 0.59 0.399 0.18 1.99 0.66 0.476 0.21 2.05 0.70 0.539 0.22 2.21 0.79 0.713 0.22 2.80 0.50 0.306 0.14 1.88
Affiliation
 Community Ref. Ref.
 Hospital 0.92 0.909 0.23 3.68 0.86 0.811 0.24 3.05 0.73 0.613 0.22 2.45 0.48 0.297 0.12 1.89
 OMH facility 2.44 0.182 0.66 9.04 2.34 0.204 0.63 8.72 1.38 0.644 0.35 5.35 0.33 0.185 0.06 1.71
Clinic EBP adoption status
 None Ref.
 Low 1.05 0.939 0.32 3.40 0.89 0.848 0.27 2.97 0.80 0.730 0.23 2.83
 High 0.51 0.244 0.16 1.59 0.53 0.271 0.17 1.65 0.53 0.296 0.16 1.76
Clinician practice characteristics
 Experience with technology 0.97 0.343 0.91 1.03 0.96 0.206 0.90 1.02
 Clinician info
 Age (Imputed) 1.12 <0.001 1.06 1.19 1.13 <0.001 1.06 1.19
Model info
N 127 127 110 110 127
N groups 35 35 34 34 35
ICC 0.14 0.08 0.01 <0.01 0.19
Log likelihood −82.7 −81.8 −72.3 −63.6 −72.5
df 6 8 9 10 5
AIC 177.4 179.7 162.7 147.2 154.9
BIC 194.5 202.4 187.0 174.2 169.2

Significant p values are bolded

To better understand how the two significant predictors relate to other variables, we conducted post hoc analyses of factors that may be associated with age and region-urbanicity. Using the sample’s mean clinician age (37 years) as a cut-off to define older versus younger clinicians, several interesting findings emerged. Older clinicians had significantly fewer MAP cases than younger clinicians (1.0 ± 1.2 vs. 1.8 ± 1.2; t = 3.43; p < .001). Older clinicians were also disproportionately represented in OMH facilities, where 84.2 % of clinicians fell into the older category (Fisher’s exact p < .001). The opposite pattern was seen in hospital-affiliated and community-based clinics, where younger clinicians represented 70.6 and 65 % of clinicians, respectively. In terms of perceptions of organizational functioning and climate, older clinicians endorsed more positive staff attributes, including a greater sense of self-efficacy (40.8 ± 3.8 vs. 39.3 ± 3.3; t = −2.04; p = .04) and greater influence with others (38.5 ± 5.8 vs. 35.5 ± 5.1; t = −2.67; p = .01). Older clinicians also perceived their workplace practices to be more focused on outcomes related to patient improvement (38.6 ± 6.2 vs. 35.8 ± 6.8; t = −2.05; p = .04).

With respect to region-urbanicity, clinicians from upstate-rural clinics reported a greater sense of autonomy than clinicians from upstate-urban or downstate-urban areas (38.5 ± 4.9 vs. 36.4 ± 5.8 vs. 34.7 ± 4.9; F = 3.65; p = .03). Clinicians from upstate-rural areas were also more likely to come from innovative clinics that were high EBP training adopters (41.7 %, compared with 34.5 % from upstate-urban and 29.2 % from downstate-urban clinics; Fisher’s exact p = .013). Clinic affiliation also differed significantly by region (Fisher’s exact p < .001): none of the upstate-rural clinicians were affiliated with OMH facilities, compared to 16.9 % of downstate-urban and 26.9 % of upstate-urban clinicians, and three-quarters of upstate-rural clinicians (75 %) were from community-based clinics, compared to 30.8 % of upstate-urban clinicians.

Discussion

Unlike prior studies that focused on clinicians who completed EBP training, this study examined factors associated with dropout. Surprisingly, only clinician age and clinic location predicted dropout from training; none of the expected clinician practice or clinic-level variables were associated with dropout. In line with the theory of planned behavior (Ajzen 1991) and training engagement theory (Sitzmann and Weinhardt 2015), older clinicians were more likely to drop out, perhaps reflecting their greater sense of self-efficacy, influence, and perceived self-competence. Older clinicians reported that tracking weekly progress on selected measures was antithetical to how they were trained, and anecdotal reports from trainers suggested that older clinicians struggled with the technological demands of MAP, especially the use of the Excel-based dashboard to track progress. Our measures of related clinician practice variables, such as comfort with technology and use of scientific resources, did not distinguish older from younger clinicians; this may reflect the lack of specificity of our chosen measures. Given their greater sense of perceived efficacy, older clinicians likely relied on other sources of information (e.g., clinical intuition) to track client progress. These are hypotheses to be tested in future studies. To understand the importance of fluency with technology in the adoption and implementation of EBPs, much more selective measures of technology skill are needed. Further, if lack of ease with technology is an impediment to learning and applying EBPs, then training in EBPs must include technology training to be successful.

Our negative findings for other clinician characteristics are consistent with the literature (Bearman et al. 2013). While clinician characteristics have not been found to reliably predict EBP implementation, Bearman et al. (2013) found that supervision processes that included expert modeling and role-plays predicted practice use, especially for older therapists who were further from their formal training and less familiar with EBPs. More active learning strategies may be essential for changing practice among older clinicians. MAP training in NYS now includes a pre-training module on Excel, more active modeling of clinical strategies by NYS MAP trainers, and use of technology that allows clinicians to visually follow the clinical dashboards during consultation calls.

Clinic location was also predictive of dropout: clinicians in upstate-rural regions were less likely to drop out than clinicians from urban areas (both upstate and downstate). Since urbanicity is a proxy for training access (distance to the urban locations where trainings are typically held), clinicians from rural clinics likely have fewer opportunities to engage in trainings and hence may be more motivated to attend and complete them. Further, these rural-based clinicians tended to come from more innovative clinics (i.e., adopters of new practices) and so may have been more open to EBPs. Interestingly, none of the rural-based clinicians worked in OMH facilities, which are structured and operated differently and serve a more severe population (Olin et al. 2015). We speculate that MAP, which was originally developed for community-based clinicians, may need to be adapted to fit the training needs of clinicians in OMH facilities, which are predominantly residential treatment facilities. Moreover, older clinicians were disproportionately represented in OMH facilities, making efforts to tailor training even more critical for the uptake of new practices.

Implications for Workforce Training

The NYS MAP training, like many provider trainings, followed a prescribed protocol that did not take into account participant characteristics such as baseline knowledge, comfort and experience with technology, clinical competency, learning preferences, or workplace context. Our findings suggest ways to adapt NYS MAP training to improve clinician engagement in, and successful use of, new practices. In this study, age was the only clinician characteristic that significantly predicted dropout. However, age likely served as a proxy for other clinician practice characteristics, such as comfort with computer software (in this case, Excel-based dashboards). Since presentation of MAP cases hinged on dashboard usage, the fact that older clinicians tended to have fewer MAP cases than younger clinicians supports our speculation about a generation gap in technology use. Similarly, while clinic affiliation did not emerge as a significant predictor, post hoc analyses as well as anecdotes from trainees and trainers suggest that clinicians from residential treatment facilities serving more severe populations, who also tended to be older, could benefit from customized training; in particular, this might include integrating more frequent progress monitoring, since these facilities track progress daily rather than weekly.

Currently, flexible or adaptive trainings for providers are rare (Chorpita and Daleiden 2014). The literature suggests the importance of innovation fit, trialability, relevance, and ease of use (Chor et al. 2014b). In this study, we found the number of MAP cases to be effectively synonymous with dropout, so clinician difficulty in presenting MAP cases could serve as a proxy for engagement. EBTDC MAP trainers have since begun using this variable to identify potential dropouts and provide targeted technical assistance.

Finally, our data also show that changing practices, especially among older clinicians, can be challenging. Clinicians tend to practice the way they were trained and are more likely to use EBP strategies that integrate well with their theoretical orientations (Brookman-Frazee et al. 2010). This finding supports recommendations to make EBP training a core part of the curriculum in professional schools, especially schools of social work, which produce the majority of frontline providers in mental health services (Howard et al. 2007). It also suggests that targeting training to individuals who are less likely to drop out may be cost-effective.

Limitations

Because the state does not track the number of clinicians hired by these clinics, the possible pool of clinicians is unknown. All clinicians working in OMH-licensed clinics were invited to participate; however, the generalizability of our findings may be limited because this study is based on a volunteer group. As is not uncommon when using state administrative datasets, the state data did not completely coincide with the MAP rollout: the NYS OMH Patient Characteristics information is based on 2011 data, while our MAP dropout data are based on the first MAP rollout, which occurred between September 2013 and June 2014. Further, because of the paucity of available measures, the measure assessing comfort with and actual use of scientific resources to guide clinical practice (a key aspect of MAP) was developed by our investigative team.

Although clinician practice characteristics and clinicians’ perceptions of organizational functioning were included in this study, other potentially important contextual factors that might influence engagement in NYS MAP training were not measured. For example, NYS MAP participants reported numerous disincentives for implementing new practices such as MAP, including time demands and conflicting priorities. Learning to use new practices like the MAP system can be highly time consuming; in addition to the didactic training and consultation calls, many clinicians reported that productivity demands significantly interfered with using MAP in practice. Anecdotal reports from clinic administrators confirmed the patterns of dropout and decreased use of MAP elements by clinicians. As projected fiscal deficits increased productivity demands towards the latter part of NYS MAP training, clinician attendance on consultation calls dropped precipitously, leading OMH to shorten the originally planned year-long training period to nine months. Further, problems with workflow and practice compatibility interfered with using the clinical dashboard to guide treatment activities. While the dashboard was perceived as useful for charting treatment activities and client progress, it was seen as an “add-on” because it did not replace the progress notes required to document a visit. Several clinics were transitioning to electronic health records, and the inability to integrate the MAP dashboard into clinic electronic health records was a significant impediment to its uptake.

Clearly, training in and of itself is not a sufficient implementation strategy. Contextual factors related to organizational and system goals, demands, and conflicts are key factors often overlooked in efforts to scale up EBPs through workforce training (Sitzmann and Weinhardt 2015). The mental health field could draw on the business literature to use training initiatives more effectively as a strategy for enhancing a competitive edge and documenting goal achievement (Sitzmann and Weinhardt 2015).

As with other EBTDC training efforts, the MAP system was intuitive and easy to grasp for only a small minority of the clinical workforce; the vast majority struggled with case conceptualization, assessment, and goal tracking (Gleacher et al. 2011). The lack of preparedness of the mental health workforce for meeting the demands of effective practices has been noted repeatedly (Schoenwald et al. 2010; SAMHSA 2013). The technological demand of using an Excel-based dashboard was an added challenge, and, not surprisingly, a generation gap was evident, with younger clinicians showing more facility. It is thus unclear how well our findings, tied to the MAP system, generalize to the adoption of other EBPs, especially those with fewer or no technology demands.

Additionally, the literatures on implementation and on training engagement highlight the complex and dynamic interplay among innovation characteristics, individual clinician characteristics, and the work context (Chor et al. 2014b; Sitzmann and Weinhardt 2015). Empirical data to clarify the interplay among the key factors that promote successful implementation and affect client outcomes are needed to justify the substantial investment of time and resources by states, funders, agencies, and individual clinicians. Differences in the provision of EBPs across state systems are substantial (Bruns et al. 2015), so these findings may not generalize outside of NYS, although the EBP implementation barriers noted in other studies are similar to those reported here, and similar challenges have been found during other large-scale trainings (Chor et al. 2014a).

Conclusion

This study is, to our knowledge, the first to specifically examine multilevel predictors of dropout from system-wide EBP trainings, attending to contextual factors that influence clinician engagement. The implication is that a more targeted approach to workforce training is warranted and likely more cost-effective. Because of the authors’ close working relationship with NYS policy leaders, our findings have been used to restructure the state’s plans for future rollouts of NYS MAP training. This study also highlights the importance of systematically monitoring training participation, completion, and competence so that course corrections can be made. Given high staff turnover and the acknowledged limitations of workforce competencies in EBPs (SAMHSA 2013; Schoenwald et al. 2013), targeted workforce development is needed to avoid wasted resources.

The study also suggests the need for additional research on efficient strategies to enhance workforce competency, including comparative studies of different training modalities. As healthcare reform continues to influence the structure and funding of behavioral health, the pressure to create systems that are more cost-effective, accountable, and outcome-oriented will grow. The costs of retooling the workforce to deliver new practices are in direct conflict with these productivity demands. Flexible policies aimed at tailored, incentivized, and judiciously targeted trainings are likely preferable to blunt workforce development initiatives. Furthermore, research that can elucidate the comparative advantages of different training strategies is needed to keep pace with workforce demands. Attending to workforce competencies will be essential to meet the demands of these system-wide behavioral healthcare changes.

Acknowledgments

This study was supported by funding from the National Institute of Mental Health (P30 MH090322).

References

1. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):4–23. doi: 10.1007/s10488-010-0327-7.
2. Ajzen I. The theory of planned behavior. Organizational Behavior and Human Decision Processes. 1991;50(2):179–211.
3. Bearman SK, Weisz JR, Chorpita BF, Hoagwood K, Ward A, Ugueto AM, et al. More practice, less preach? The role of supervision processes and therapist characteristics in EBP implementation. Administration and Policy in Mental Health and Mental Health Services Research. 2013;40(6):518–529. doi: 10.1007/s10488-013-0485-5.
4. Beidas RS, Kendall PC. Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice. 2010;17(1):1–30. doi: 10.1111/j.1468-2850.2009.01187.x.
5. Beidas RS, Koerner K, Weingardt KR, Kendall PC. Training research: Practical recommendations for maximum impact. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(4):223–237. doi: 10.1007/s10488-011-0338-z.
6. Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M; Treatment Fidelity Workgroup of the NIH Behavior Change Consortium. Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology. 2004;23(5):443–451. doi: 10.1037/0278-6133.23.5.443.
7. Brookman-Frazee L, Haine RA, Baker-Ericzén M, Zoffness R, Garland AF. Factors associated with use of evidence-based practice strategies in usual care youth psychotherapy. Administration and Policy in Mental Health and Mental Health Services Research. 2010;37(3):254–269. doi: 10.1007/s10488-009-0244-9.
8. Bruns EJ, Hoagwood KE, Hamilton JD. State implementation of evidence-based practice for youths, part I: Responses to the state of the evidence. Journal of the American Academy of Child and Adolescent Psychiatry. 2008;47(4):369–373. doi: 10.1097/CHI.0b013e31816485f4.
9. Bruns EJ, Kerns SEU, Pullmann MD, Hensley SW, Lutterman T, Hoagwood KE. Research, data, and evidence-based treatment use in state behavioral health systems, 2001–2012. Psychiatric Services in Advance. 2015. doi: 10.1176/appi.ps.201500014.
10. Chor KHB, Olin SCS, Weaver J, Cleek AF, McKay MM, Hoagwood KE, Horwitz SM. Adoption of clinical and business trainings by child mental health clinics in New York State. Psychiatric Services. 2014a;65(12):1439–1444. doi: 10.1176/appi.ps.201300535.
11. Chor KHB, Wisdom JP, Olin SCS, Hoagwood KE, Horwitz SM. Measures for predictors of innovation adoption. Administration and Policy in Mental Health and Mental Health Services Research. 2014b:1–29. doi: 10.1007/s10488-014-0551-7.
12. Chorpita BF, Daleiden EL. Structuring the collaboration of science and service in pursuit of a shared vision. Journal of Clinical Child & Adolescent Psychology. 2014;43(2):323–338. doi: 10.1080/15374416.2013.828297.
13. Fixsen D, Blase K, Metz A, Van Dyke M. Statewide implementation of evidence-based programs. Exceptional Children. 2013;79(2):213–230.
14. Gleacher AA, Nadeem E, Moy AJ, Whited AL, Albano AM, Radigan M, et al. Statewide CBT training for clinicians and supervisors treating youth: The New York State evidence based treatment dissemination center. Journal of Emotional and Behavioral Disorders. 2011;19(3):182–192. doi: 10.1177/1063426610367793.
15. Hoagwood KE, Olin SS, Horwitz S, McKay M, Cleek A, Gleacher A, et al. Scaling up evidence-based practices for children and families in New York State: Toward evidence-based policies on implementation for state mental health systems. Journal of Clinical Child & Adolescent Psychology. 2014;43(2):145–157. doi: 10.1080/15374416.2013.869749.
16. Hosmer DW Jr, Lemeshow S. Applied logistic regression. 2nd ed. Hoboken, NJ: John Wiley & Sons; 2000.
17. Howard MO, Allen-Meares P, Ruffolo MC. Teaching evidence-based practice: Strategic and pedagogical recommendations for schools of social work. Research on Social Work Practice. 2007;17(5):561–568.
18. Kerker BD, Chor KHB, Hoagwood KE, Radigan M, Perkins MB, Setias J, et al. Detection and treatment of mental health issues by pediatric PCPs in New York State: An evaluation of Project TEACH. Psychiatric Services. 2014;66(4):430–433. doi: 10.1176/appi.ps.201400079.
19. Klein KJ, Sorra JS. The challenge of innovation implementation. Academy of Management Review. 1996;21(4):1055–1080.
20. Lehman WE, Greener JM, Simpson DD. Assessing organizational readiness for change. Journal of Substance Abuse Treatment. 2002;22(4):197–209. doi: 10.1016/s0740-5472(02)00233-7.
21. McHugh RK, Barlow DH. The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. American Psychologist. 2010;65(2):73–84. doi: 10.1037/a0018121.
22. National Child Traumatic Stress Network (NCTSN). 2015. Retrieved February 24, 2015, from http://www.nctsn.org.
23. Novins DK, Green AE, Legha RK, Aarons GA. Dissemination and implementation of evidence-based practices for child and adolescent mental health: A systematic review. Journal of the American Academy of Child and Adolescent Psychiatry. 2013;52(10):1009–1025.e18. doi: 10.1016/j.jaac.2013.07.012.
24. Olin SS, Chor KHB, Weaver J, Duan N, Kerker BD, Clark LJ, et al. Multilevel predictors of clinic adoption of state-supported trainings in children’s services. Psychiatric Services in Advance. 2015;34:1–7. doi: 10.1176/appi.ps.201400206.
25. Powell BJ, Proctor EK, Glass JE. A systematic review of strategies for implementing empirically supported mental health interventions. Research on Social Work Practice. 2014;24(2):192–212. doi: 10.1177/1049731513505778.
26. Richardson JW. Technology adoption in Cambodia: Measuring factors impacting adoption rates. Journal of International Development. 2011;23(5):697–710.
27. Saldana L, Chamberlain P, Wang W, Brown CH. Predicting program start-up using the Stages of Implementation Measure. Administration and Policy in Mental Health. 2012;39(6):419–425. doi: 10.1007/s10488-011-0363-y.
28. Schoenwald SK, Hoagwood KE, Atkins MS, Evans ME, Ringeisen H. Workforce development and the organization of work: The science we need. Administration and Policy in Mental Health. 2010;37:71–80. doi: 10.1007/s10488-010-0278-z.
29. Schoenwald SK, Mehta TG, Frazier SL, Shernoff ES. Clinical supervision in effectiveness and implementation research. Clinical Psychology: Science and Practice. 2013;20(1):44–59.
30. Sitzmann T, Weinhardt JM. Training engagement theory: A multilevel perspective on the effectiveness of work-related training. Journal of Management. 2015. Advance online publication. doi: 10.1177/0149206315574596.
31. Southam-Gerow MA, Daleiden EL, Chorpita BF, Bae C, Mitchell C, Faye M, Alba M. MAPping Los Angeles County: Taking an evidence-informed model of mental health care to scale. Journal of Clinical Child & Adolescent Psychology. 2014;43(2):190–200. doi: 10.1080/15374416.2013.833098.
32. StataCorp. Stata Statistical Software: Release 11. College Station, TX: StataCorp LP; 2009.
33. Stewart RE, Chambless DL, Baron J. Theoretical and practical barriers to practitioners’ willingness to seek training in empirically supported treatments. Journal of Clinical Psychology. 2012;68(1):8–23. doi: 10.1002/jclp.20832.
34. Substance Abuse and Mental Health Services Administration (SAMHSA). Report to Congress on the Nation’s Substance Abuse and Mental Health Workforce Issues. Rockville, MD: SAMHSA; 2013.
35. US Department of Health and Human Services. Area Health Resources Files (AHRF). 2014. Retrieved March 3, 2014, from http://ahrf.hrsa.gov.
36. Waljee AK, Mukherjee A, Singal AG, Zhang Y, Warren J, Balis U, et al. Comparison of imputation methods for missing laboratory data in medicine. BMJ Open. 2013;3(8):e002847. doi: 10.1136/bmjopen-2013-002847.
37. Wisdom JP, Chor KHB, Hoagwood KE, Horwitz SM. Innovation adoption: A review of theories and constructs. Administration and Policy in Mental Health and Mental Health Services Research. 2014;41(4):480–502. doi: 10.1007/s10488-013-0486-4.
