Author manuscript; available in PMC: 2017 May 1.
Published in final edited form as: Adm Policy Ment Health. 2016 May;43(3):426–440. doi: 10.1007/s10488-015-0642-0

Implementing a Measurement Feedback System in Community Mental Health Clinics: A Case Study of Multilevel Barriers and Facilitators

Alissa A Gleacher 1, Serene S Olin 1, Erum Nadeem 1, Michele Pollock 1, Vanesa Ringle 1, Leonard Bickman 2, Susan Douglas 2, Kimberly Hoagwood 1
PMCID: PMC4560680  NIHMSID: NIHMS669251  PMID: 25735619

Abstract

Measurement feedback systems (MFSs) have been proposed as a means of improving practice. The present study examined the implementation of an MFS, the Contextualized Feedback System (CFS), in two community-based clinic sites. Significant implementation differences across sites provided a basis for examining factors that influenced clinician uptake of CFS. Following the theoretical implementation framework of Aarons, Hurlburt and Horwitz (2011), we coded qualitative data collected from eighteen clinicians (13 from Clinic U and 5 from Clinic R) who participated in semi-structured interviews about their experience with CFS implementation. Results suggest that clinicians at both clinics perceived more barriers than facilitators to CFS implementation. Interestingly, clinicians at the higher implementing clinic reported a lower ratio of barriers to facilitators (2:1 vs. 3:1); these clinicians also reported a significantly higher level of organizational and leadership supports for CFS implementation. Implications of these findings are discussed.

Introduction

Quality improvement initiatives in healthcare are increasingly including the use of electronic tools such as measurement feedback systems (MFSs) to improve service delivery and monitor outcomes (APA Task Force on Evidence-Based Practice for Children and Adolescents, 2006; Bickman et al., 2011; Jensen-Doss & Hawley, 2010; New Freedom Commission on Mental Health 2003; Sapyta, Riemer, & Bickman, 2005). MFSs have been found to have a positive impact on outcomes in different subspecialties of medicine (Duncan & Pozehl, 2000; Goebel, 1997; Holmboe, Scranton, Sumption, & Hawkins, 1998; Leshan, Fitzsimmons, Marbella, & Gottlieb, 1997; Mazonson et al., 1996; Robinson, Thompson, & Black., 1996; Rokstad, Straand, & Fugelli, 1995; Tabenkin et al., 1995), education (Arco, 1997; Furman, Adamek, & Furman, 1992; Mortenson & Witt, 1998; Rose & Church, 1998; Tuckman & Yates, 1980), and mental health (Chorpita, Bernstein & Daleiden, 2008; Chorpita, Bernstein, Daleiden, 2011; Howe, 1996; Lambert et al., 2001; Lambert et al., 2005; Lambert, Hansen & Finch, 2001; Mazonson et al., 1996). While quality improvement tools, such as MFSs (Cebul, 2008), have been successfully applied for several decades (Kluger & Denisi, 1996; Rose & Church, 1998), their use is not widespread within children’s mental health services. In many states, mental health services are being structurally reorganized and integrated into general health systems. Consequently, the use of MFSs to track outcomes of services for youth and families will gain even more traction (Bruns, Hoagwood & Hamilton, 2008).

This paper focuses on the implementation of a specific MFS, the Contextualized Feedback System (CFS™; Bickman et al., 2011; Bickman, Kelley, & Athay, 2012), in two outpatient community-based mental health clinics in New York State. Clinicians and clients completed treatment progress information at every clinical encounter, and CFS provided feedback to agency personnel (director, supervisors, and therapists) on mental health progress and therapy process variables. This paper is a companion to Bickman and colleagues' paper in this issue; both focus on the implementation of the CFS within the same two clinics. Bickman et al.'s paper examines the impact of CFS implementation on client outcomes, focusing primarily on the degree of implementation measured quantitatively. In this paper, we focus on the complexities of the adoption and implementation process within these same agencies, examining multilevel factors from both a qualitative and quantitative perspective.

According to the theoretical framework of Aarons, Hurlburt, and Horwitz (2011), the adoption and implementation process is multilevel as well as multiphasic, with many challenges influencing how evidence-based practices are successfully implemented at different stages over time. This may account for the lag between the development of evidence-based practices and their widespread use. The process of adoption and implementation thus appears to be as critical to the overall effectiveness of a program as the specific treatment itself (Aarons et al., 2011; Fixsen, Blasé, Metz, & Van Dyke, 2013; Wisdom, Chor, Hoagwood, & Horwitz, 2013). Both inner context factors (e.g., organizational structures or processes, priorities, change readiness, openness to adoption, innovation-values fit, provider characteristics) and outer context factors (e.g., sociopolitical climate, funding, interorganizational networks, client advocacy) influence the adoption and implementation of innovations (Aarons et al., 2011). Studies that have examined barriers and facilitators to the implementation of evidence-based practices (EBPs) in children's services have highlighted multilevel factors, including organizational-level factors (e.g., leadership support for training), the fit between the innovation and the service context (e.g., ease of use), and individual-level provider and consumer factors (e.g., views of the usefulness of the intervention, competing demands, logistical issues) (Aarons et al., 2007; Aarons et al., 2009; Langley et al., 2010). The present study extends this work to elucidate the interplay between barriers and facilitators that may influence the implementation of a technologically sophisticated measurement feedback system. Given the rapid rise of interest in the use of technology for monitoring mental health outcomes, identification of distinct implementation barriers associated with technology is particularly timely.

In this study, we focus on inner-context factors because CFS specifically attends to customizing the training and implementation of the measurement feedback system to fit an organization's context. Research on organizational social context demonstrates that the context within agencies affects staff work attitudes and thereby an agency's ability to improve its services (Peters & Waterman, 1982; Osborne & Gaebler, 1992). Organizational social context has been found to affect service implementation and quality (e.g., Aarons and Sawitzky 2006; Brunette et al. 2008; Carr et al. 2003; Glisson 2008; Glisson and Durick 1988; Glisson and Green 2006; Glisson and Hemmelgarn 1998; Glisson et al. 2013a; Glisson et al. 2008a; Glisson et al. 2010; Greener et al. 2007; Guzzo et al. 1985; Neuman et al. 1989; Olin, Williams, et al., 2014; Parker et al. 2003; Robertson et al. 1993; Sheridan 1992; Shim 2010). In particular, Glisson and colleagues have found that a more positive organizational social climate improves uptake of new innovations (Glisson & Hemmelgarn, 1998; Glisson, 2002; Glisson & James, 2002). Thus, an effective innovation with proven efficacy in one context may fail to deliver expected effects in another due to barriers encountered in the different context. Importantly, strategies that target aspects of organizational culture and context can improve work environments, creating organizational contexts that support the uptake of new practices and improve youth outcomes (Glisson, Hemmelgarn, Green, Dukes, Atkinson & Williams, 2012; Glisson, Hemmelgarn, Green & Williams, 2013; Glisson, Schoenwald et al., 2010).

In line with existing implementation theories, the CFS training focused on engaging leadership in supporting CFS implementation. Perceptions of risk and its management by agency leadership can affect uptake of innovations. Panzano and Roth (2006) applied a risk-based decision-making framework to examine the decision to adopt evidence-based and research-guided practices within agencies. The authors noted that decisions to adopt EBP innovations are likely to be political and complex, representing both strategy and risks. The investigation examined data from 83 projects involving 66 organizations. Overall, they found that adoption practices were related to perceived risks and perceived capacity to manage risks. Compared to non-adopters of EBPs, ‘early-adopters’ perceived fewer risks and saw the risks as more manageable. Such perceptions during the adoption phase influence the implementation process.

The current study followed the theoretical framework of Aarons et al. (2011) to examine multilevel factors (organizational, innovation, staff, and client level) that influenced the uptake and implementation of the Contextualized Feedback System (CFS) (Bickman et al., 2011; Bickman et al., this issue). The differential uptake and impact of CFS at two clinic sites provided a context for examining factors that may be important in implementing measurement feedback systems such as CFS. In this paper, we describe data from qualitative interviews that were purposefully and systematically collected with the goal of augmenting Bickman et al.'s quantitative study (this issue). We hypothesized that implementation of CFS would be influenced by both barriers and facilitators at multiple levels, including characteristics of the outer context, organizational factors, provider factors, and the innovation (CFS) itself. Specifically, we expected that many of the barriers to CFS implementation would be similar across clinics. Further, following Panzano and Roth's (2006) concepts of risk management and strategic fit, we hypothesized that CFS implementation would be facilitated by leadership capacity to manage these risks, that is, the barriers associated with CFS implementation. We hypothesized that the clinic that implemented CFS more fully would have leadership support to overcome barriers and to align CFS implementation with the agency's mission.

Method

Thirty clinics with at least 5 clinicians who had been trained in EBPs through a state training initiative were invited to participate. Eleven applied for the project, and four were selected based on their application, experience, and success in implementing other state initiatives. These four clinics, representing 2 agencies, were enrolled in the study. Two clinics from one agency de-adopted 9 months into the study. This paper focuses on the remaining 2 clinics, both from the same agency, which implemented CFS for two years.

Description of CFS Implementation

CFS training and consultation followed the CFS Individual Site Training and Consultation Model previously described in Bickman et al. (this issue). In the current project, issues related to study resources and software development arose and notably altered the implementation process. First, there was a delay of over six months in the introduction of the technology due to development issues. Second, the developers introduced updates to the CFS program midway through the project. Third, at one of the sites (Clinic R), a senior administrator and her assistant assumed primary responsibility for facilitating participation in and understanding of CFS because they were more directly accessible to the staff and hoped to integrate and sustain CFS within the organization as part of the agency's quality improvement efforts. At the other site (Clinic U), project staff conducted in-person consultation on a monthly basis to facilitate participation and understanding of the program. Finally, the agency issued a mandate midway through the project for both sites requiring use of the program on all eligible cases, backing the mandate by including CFS use in performance evaluations. Therapists at both clinics continued to receive ongoing consultation, either in person or by telephone, on the use of CFS data from the individuals described above.

As previously described in Bickman et al. (this issue), clinicians at the two sites implemented CFS differently. Clinicians' questionnaire completion and feedback viewing rates at Clinic R were 50% higher than those at Clinic U. Mode of data entry also differed between clinics, with Clinic R caregivers and clients more frequently using the computerized system instead of paper and pencil. Clinicians' computerized data entry was equally high in both clinics (over 95%). Overall, Clinic R implemented CFS better, as measured for this study, than Clinic U (Bickman et al., this issue).

Recruitment of Participants for Qualitative Interviews

At the end of the project, a list of 21 staff who had used the CFS program was obtained from clinic supervisors and administrators. Study staff approached 13 clinicians from Clinic U and 5 from Clinic R; the remaining 3 of Clinic R's initial 8 clinicians were interns who had left their positions with the agency and could not be approached because no contact information for them existed. All of the clinicians approached agreed to participate. In-depth, semi-structured exit interviews were thus conducted with a total of 18 clinicians (13 from Clinic U and 5 from Clinic R).

Demographics

Demographic and pre-implementation measures were administered to and completed by all 21 participants at the initial training (Clinic U n = 13 and Clinic R n = 8). These measures are described further below.

Clinic Characteristics

Clinic U served a primarily urban catchment area, whereas Clinic R served a rural catchment area. Moreover, 82% of Clinic U's clinical population received Medicaid, compared to 30% of Clinic R's population (New York State Office of Mental Health Dashboard, 2011).

Clinician Characteristics

The sample of clinician participants at both clinics was primarily female (75%) and between 26 and 30 years of age (43%). Clinic U had a more ethnically diverse staff (Caucasian 46%, African-American 23%, Multi-Racial 31%, and Hispanic 13%). At Clinic R, therapists were primarily Caucasian (88%), with the remaining 12% identifying as Multi-Racial. While site differences in the age and racial/ethnic composition of clinicians were not statistically significant, there was a trend for clinicians at Clinic U to be younger and more racially diverse than those at Clinic R, with the racial composition of clinicians reflecting that of their clients.

The majority of clinicians held a Master's degree in Social Work, endorsed an unspecified theoretical orientation (48%), and had spent less than one year in their current position (47%). About a fifth (19%) of the clinicians had no prior experience providing services to children or youth, in their current workplace or elsewhere, before using CFS. This limited experience may be accounted for by the fact that 2 of the clinicians at Clinic R were interns. Finally, 52% of the clinicians were licensed to practice in the state in which they worked.

At pre-implementation, clinicians at Clinic R reported larger caseloads of children over the age of 11 than those at Clinic U, which is important to note because this was the target population for the implementation of CFS. At Clinic R, 60% of clinicians had more than ten clients 11–18 years old, compared to only 16% of clinicians at Clinic U. Although these pre-implementation differences were not statistically significant (likely due to the small sample), they may be clinically meaningful and may have influenced CFS implementation.

Measures
Semi-structured Interview

Interview questions were developed using Aarons et al. (2011) as a broad conceptual framework; in addition, we drew on Klein and Sorra's (1996) work on facilitators and barriers to technology implementation to develop more domain-specific questions. The key domains included: General Issues, Quality of Training, User Manuals and Technical Assistance, Management Support, Technical Issues, Clinical Use, and Other. These domains guided the development of the interview questions (see Appendix A). The project staff who administered the interviews were trained in qualitative interviewing techniques and routinely debriefed after completing interviews.

Data Collection and Analysis

Both qualitative and quantitative methods were used in this investigation. A trained research assistant conducted individual interviews over the telephone with participating staff at the two clinics over a four-month period. The semi-structured phone interviews lasted up to thirty minutes. Participants were asked about the process and experience of using the system, including characteristics of the innovation and implementation. Interviews were audiotaped and transcribed verbatim. A content analysis approach was used to analyze the data, in which codes were developed based on barriers and facilitators commonly identified in theoretical models of innovation implementation (Bernard & Ryan, 2009). The interviews were analyzed by research staff using Atlas.ti 6.2 software, which facilitates the coding, organization, and retrieval of qualitative data (Friese, 2011). To develop the initial code list, an iterative process was used in which codes were developed based on the theoretical model and modified through a process of independent coding, discussion, and refinement by members of the research team. Once the codes were established, two coders separately coded all interviews; any disagreements were resolved through consensus. The kappa coefficient indicated good reliability across coders (0.87 for all interviews), based on blinded double-coding of all text. After the completion of all coding, the team categorized coded material related to barriers and facilitators at multiple levels consistent with existing implementation frameworks (e.g., Aarons et al. 2011; Wisdom et al., 2013). Factors related to facilitators and barriers of CFS implementation were categorized into broad categories at the innovation, organization, and individual (staff and client) levels. An external environment category was not included in this particular investigation because the clinics were part of the same agency and experienced similar outer context factors, including clinic restructuring.
Positive comments were coded as facilitators whereas negative comments were coded as barriers. See Tables 1 and 2 for a list of barriers and facilitator codes and subcodes.
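
The interrater reliability statistic described above (Cohen's kappa) can be illustrated with a minimal sketch; the coder labels below are hypothetical, not the study's data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labeled identically
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies
    ca, cb = Counter(coder_a), Counter(coder_b)
    pe = sum(ca[k] * cb[k] for k in set(coder_a) | set(coder_b)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical codes assigned by two coders to ten interview excerpts
a = ["barrier", "barrier", "facilitator", "barrier", "facilitator",
     "barrier", "barrier", "facilitator", "barrier", "barrier"]
b = ["barrier", "barrier", "facilitator", "facilitator", "facilitator",
     "barrier", "barrier", "facilitator", "barrier", "barrier"]
print(round(cohens_kappa(a, b), 2))  # prints 0.78
```

In practice, disagreements flagged this way would then be resolved through consensus discussion, as in the procedure above.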

Table 1.

Clinician report on barriers to implementation of CFS




                                                Clinic U (N = 13)    Clinic R (N = 5)    Total (N = 18)
                                                n    Proportion      n    Proportion     n    Proportion

Innovation Barriers
    Time consuming 11 0.85 5 1.00 16 0.89
    Complicated design 11 0.85 4 0.80 15 0.83
    Burdensome technical requirements 6 0.46 3 0.60 9 0.50
    Difficult to understand language 5 0.38 2 0.40 7 0.39
    Difficult to apply clinically 4 0.31 3 0.60 7 0.39
    Does not consider user skills 2 0.15 1 0.20 3 0.17
Organizational Barriers
    Insufficient resources and structure 9 0.69 2 0.40 11 0.61
    Lack of implementation efforts 8 0.62 2 0.40 10 0.56
    Network between developers and adopters 2 0.15 0 0.00 2 0.11
Client Barriers
    Lack of readiness/capacity to adopt innovation 3 0.23 4 0.80 7 0.39
    Lack of computer literacy 4 0.31 1 0.20 5 0.28
    Client speed when completing surveys 2 0.15 0 0.00 2 0.11
    Parent lateness 1 0.08 0 0.00 1 0.06
Staff Barriers
    CFS implementation not a priority 2 0.15 1 0.20 3 0.17
    Individual characteristics 2 0.15 1 0.20 3 0.17
Other Barriers
    Insufficient training 6 0.46 2 0.40 8 0.44
    Research study nature 4 0.31 3 0.60 7 0.39
    System rolled out too early 1 0.08 2 0.40 3 0.17
Table 2.

Clinician report on facilitators to implementation of CFS




                                                Clinic U (N = 13)    Clinic R (N = 5)    Total (N = 18)
                                                n    Proportion      n    Proportion     n    Proportion

Organizational Facilitators
    Leadership champions implementation 6 0.46 5 1.00 11 0.61
    Implementation efforts 3 0.23 4 0.80 7 0.39
Innovation Facilitators
    Clinically applicable/relevant 4 0.31 2 0.40 6 0.33
    Design is user-friendly 1 0.08 2 0.40 3 0.17
    Help with technical requirements 2 0.15 0 0.00 2 0.11
    Sufficient time 1 0.08 1 0.20 2 0.11
    Relevant with user skills 1 0.08 0 0.00 1 0.06
Client Facilitators
    Individual client characteristic 2 0.15 1 0.20 3 0.17
Staff Facilitators
    Individual characteristics/experiences 2 0.15 0 0.00 2 0.11
    Implementation a priority to clinicians 0 0.00 1 0.20 1 0.06
Other Facilitators
    Clinical support 6 0.46 1 0.20 7 0.39
    Training support 2 0.15 3 0.60 5 0.28

Results

Factors related to CFS Implementation

Clinic R was found to be more successful in implementing and using CFS than Clinic U (Bickman et al., this issue). We thus examined and compared clinician-reported facilitators and barriers to CFS implementation across sites, as seen in Tables 1 and 2.

Barriers and Facilitators

Review of clinician interviews yielded 169 unique implementation themes, of which 119 were coded as barriers; barriers were thus reported more than twice as often as facilitators (n = 50). For a list of barrier and facilitator codes, definitions, and illustrative quotes, see Table 4. To account for the unequal number of clinicians across sites, comparisons were based on the mean number of barriers or facilitators reported per clinician. As seen in Table 3, the average number of coded barriers per clinician (m = 6.61, sd = 2.28) was greater than the average number of facilitators (m = 2.78, sd = 2.05). This was true at both clinics and across all categories (i.e., innovation, organization, client, staff, and other). Interestingly, clinicians at the higher implementing clinic (Clinic R) reported a lower ratio of barriers to facilitators (almost 2:1; 36 barriers to 20 facilitators) than clinicians at Clinic U (almost 3:1; 83 barriers to 30 facilitators), reflecting the greater number of facilitators reported per clinician at Clinic R.
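
The per-clinician normalization described above amounts to dividing each site's coded counts by its number of interviewed clinicians; a minimal sketch using the site totals from Tables 1 and 2:

```python
# Coded totals per site (summed from Tables 1 and 2) and interviewed clinicians
sites = {
    "Clinic U": {"clinicians": 13, "barriers": 83, "facilitators": 30},
    "Clinic R": {"clinicians": 5, "barriers": 36, "facilitators": 20},
}

for name, s in sites.items():
    mean_b = s["barriers"] / s["clinicians"]        # mean barriers per clinician
    mean_f = s["facilitators"] / s["clinicians"]    # mean facilitators per clinician
    ratio = s["barriers"] / s["facilitators"]       # barrier-to-facilitator ratio
    print(f"{name}: {mean_b:.2f} barriers, {mean_f:.2f} facilitators "
          f"per clinician (ratio {ratio:.1f}:1)")
```

Dividing by site size rather than comparing raw counts is what allows the 13-clinician and 5-clinician samples to be compared directly.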

Table 4.

Code Descriptions, Definitions, and Illustrative Quotes

Barriers: Any text relating to difficulties in implementation, including issues with the innovation, organization, and individuals.

Innovation Barriers: Any text relating to specific factors about the innovation, the Contextualized Feedback System, that hindered implementation.
    Time consuming: Any text in which time is mentioned as a factor that negatively affected use of CFS. Example: “…Well, it’s time consuming. And so oftentimes even if it was an effective and helpful addition to the clinical work, being able to fit it in and actually do all of the measures and get all the forms completed, that doesn’t always happen because of the time consuming nature…”
    Complicated design: Any statement about the design of the program (e.g., the layout, interface, graphics) being problematic. Example: “I do think that the program itself could be a little bit more helpful and specific in terms of finding information. I mean when you see ‘high risk’ you should be able to click on that and see…why immediately.”
    Burdensome technical requirements: Anything related to the innovation’s specific technological requirements, and any technical issues arising as a result of those requirements. Example: “I don’t know if it’s the network, the computers that we have, if they’re just outdated, but whatever the issue is that created a lot of the glitches in terms of…uploading things.”
    Difficult to understand language: Statements about the language of CFS being difficult to understand because it was confusing, repetitive, or not adaptable for clients who cannot read or do not speak English. Example: “Another huge barrier you’re probably aware of is Spanish language speakers or people who are not literate.”
    Difficult to apply clinically: Any text relating to the clinical application of CFS and how the program fits with the clinician’s practice; information on how helpful the program is in a clinical setting. Example (in reference to feedback reports): “…Sometimes I’d feel like the session wasn’t very good and the client felt like it was really good…so it was really helpful to use that data in that way…”
    Does not consider user skills: Statements about CFS not being user-friendly with respect to specific client characteristics. Example (in response to a question about difficulties when using CFS): “Giving a child who…is fifteen and is on the spectrum, they’re not doing it.”

Organizational Barriers: Any text on organization-level characteristics/decisions that impeded the implementation/use of CFS (e.g., allowing enough time for completion of CFS).
    Insufficient resources and structure: Statements concerning the clinic not having enough resources and structure (e.g., technical support, length of therapy sessions, caseload requirements). Example: “This came at the same time as clinic restructuring when our caseloads, actually, have come close to tripled at this point.”
    Lack of implementation efforts: Text on the lack of efforts by the agency to ease the process of adopting/implementing CFS (e.g., not allowing CFS to be completed outside of session, or not allowing more time in session). Example (in response to a question about agency support): “It’s not that I’m resistant, that I don’t want to use [CFS]. If there were other supports in place, then yeah, I would probably see more benefit from it.”
    Network between developers and adopters: Text that suggests that the agency’s relationship and collaboration with innovation developers, consultants, and adopters is an impediment to successful adoption/implementation. Example: “I wish they had asked us for our feedback before they decided they’re going to use this…it’s unfortunate that they went full speed ahead without any feedback from the line staff who have to use it on a daily basis.”

Client Barriers: Any text on implementation barriers that pertain specifically to clients.
    Lack of readiness/capacity to adopt innovation: References to clients not being ready or wanting to adopt CFS. Example: “Clients don’t like it. They don’t really want to fill out. A lot of them just check whatever.”
    Lack of computer literacy: Reports about clients who do not know how to use a computer. Example: “Clients can’t use computers very well.”

Staff Barriers: Any text on implementation barriers that pertain specifically to staff.
    CFS implementation not a priority: Reports on individual clinicians’ attitudes, motivations, and hesitations that impede the adoption and implementation of CFS. Example: “…among clinicians it’s not as much of a priority.”
    Individual characteristics: Individual clinician characteristics and experiences that impeded adoption/implementation of CFS. Example: “I know in my personal experience, sometimes I just forget to do [CFS]. Like I get caught up in a session…”

Other Barriers: Any statements about impediments to the adoption/implementation of CFS not captured by other codes.
    Insufficient training: Statements about the need for more training in order to successfully implement CFS. Example: “I think we needed more training, that’s all.”
    Research study nature: Statements about research characteristics of the project (e.g., randomization) that made implementation difficult. Example (in reference to a client being in a condition with less feedback): “We don’t see everybody’s responses…I look and I’m like, I can’t believe I can’t see this family’s responses. I want to see these people’s responses. And I know that’s all part of it.”
    System rolled out too early: Reports on materials or program not being completely ready for easy use/implementation. Example: “…Realizing that this was not an already debugged system…was extremely frustrating.”

Facilitators: Any text relating to factors that facilitated the adoption and implementation of CFS.

Innovation Facilitators: Any text relating to specific factors about the innovation, the Contextualized Feedback System, that facilitated implementation.
    Clinically applicable/relevant: Any text on the clinical applicability of the CFS. Example: “…For teenagers [clients] who aren’t going to be too forthcoming with like telling you stuff anyway sometimes, I think [the CFS] could be really helpful.”
    Design is user-friendly: Any text indicating that the CFS design is user-friendly. Example: “I love the intake part. It’s really self-explanatory. And it’s user-friendly…”

Organizational Facilitators: Any text on organization-level characteristics/decisions that facilitated the implementation/use of CFS (e.g., allowing enough time for completion of CFS).
    Leadership champions implementation: Statements relating to agency leaders who champion CFS adoption and implementation. Example (in reference to the CFS): “I think especially my supervisor makes it a priority and has us look at it during supervision. So I think they’re pretty active in trying to get us to use it and make it a part of treatment.”
    Implementation efforts: Any text on the agency’s training and efforts related to adoption/implementation of the CFS (i.e., making accommodations for using the CFS). Example (on the agency permitting an administrative assistant to help with the CFS): “the administrative assistants are helpful with…the clients, like giving us the computers and making sure they do it.”

Client Facilitators: Any text on implementation facilitators that pertain specifically to client characteristics.
    Individual client characteristic: Any text on specific individual client characteristics that facilitated implementation of the CFS. Example: “With one client…I definitely have time because he’s not really the type of kid who comes in with a lot crisis or like so much to talk about. He doesn’t really have his own agenda…”

Staff Facilitators: Any text on implementation facilitators that pertain specifically to staff characteristics.
    Individual characteristics/experiences: Reports about clinicians’ individual experiences facilitating adoption/implementation of CFS. Example: “I’m a pretty savvy person so I taught myself [to use CFS]. And then I was…asked to teach the other clinicians how to really use [CFS].”
    Implementation a priority to clinicians: Reports on individual clinicians’ attitudes, motivations, and readiness to implement the CFS. Example: “I think once [clinicians] got used to it and see the value of it, it was more helpful. And people started to really buy into it more.”

Other Facilitators: Any statements about facilitators to the adoption/implementation of CFS not captured by other codes.
    Clinical support: Any text on external clinical consultation help. Example: “Having [clinical consultant name] is always helpful, because she provided real clinical based support to engage with CFS in a way that would be helpful to us.”
    Training support: Statements about training being helpful. Example: “Training was very clear.”

Table 3.

Total coded barriers and facilitators to implementation of CFS


                     Clinic U (N = 13)         Clinic R (N = 5)          Total (N = 18)
                     n      Proportion  Mean   n      Proportion  Mean   n       Proportion  Mean

Innovation
    Barriers         39.00  0.47        3.00   18.00  0.50        3.60   57.00   0.48        3.17
    Facilitators     9.00   0.30        0.69   5.00   0.25        1.00   14.00   0.28        0.78
Organizational
    Barriers         19.00  0.23        1.46   4.00   0.11        0.80   23.00   0.19        1.28
    Facilitators     9.00   0.30        0.69   9.00   0.45        1.80   18.00   0.36        1.00
Client
    Barriers         10.00  0.12        0.77   5.00   0.14        1.00   15.00   0.13        0.83
    Facilitators     2.00   0.07        0.15   1.00   0.05        0.20   3.00    0.06        0.17
Staff
    Barriers         4.00   0.05        0.31   2.00   0.06        0.40   6.00    0.05        0.33
    Facilitators     2.00   0.07        0.15   1.00   0.05        0.20   3.00    0.06        0.17
Other
    Barriers         11.00  0.13        0.85   7.00   0.19        1.40   19.00   0.15        1.06
    Facilitators     8.00   0.27        0.62   4.00   0.20        0.80   12.00   0.24        0.67
Total
    Barriers         83.00  0.70        6.38   36.00  0.30        7.20   119.00  1.00        6.61
    Facilitators     30.00  0.60        2.31   20.00  0.40        4.00   50.00   1.00        2.78

Barriers

Staff reported a total of 119 uniquely coded barriers across all categories (i.e., innovation, organization, client, staff, and other; see Table 4 for exemplar quotes). There were no differences in the mean number of coded barriers reported by clinicians at Clinic U (m = 6.31, sd = 2.32) compared to those at Clinic R (m = 6.55, sd = 2.68).

By far, the most prevalent type of barrier related to characteristics of the implemented technology: 48% of the 119 coded barriers fell in the innovation category (see Table 3). Within the innovation category, barriers were coded into six subcategories, listed in order of prevalence as defined by the number of participants reporting a barrier in that subcategory (see Table 1). The most commonly reported innovation barriers were the system being time consuming (89%) and having a complicated design (83%). Other common innovation barriers included: a) burdensome technical requirements (50%), b) difficult-to-understand language (39%), and c) difficulty applying the system clinically (39%) (see Table 4).

The second most prevalent category of coded barriers was organizational, accounting for 19% of the 119 coded barriers (see Table 3). Organizational barriers comprised two main subcategories: insufficient resources and structure (61%) and lack of implementation efforts (56%). For example, one clinician described often having to complete measures outside of session, on their own time, because this was what the agency wanted.

The least prevalent barriers were at the client (13% of all barriers) and staff (5% of all barriers) levels. In the client category, the most frequently reported barriers were a) lack of readiness/capacity to adopt the innovation (reported by 39% of clinicians) and b) lack of computer literacy (reported by 28% of clinicians). One of the most notable differences between clinics occurred at this level: four of five clinicians (80%) in Clinic R, the more rural clinic, noted that clients’ lack of readiness and capacity to adopt CFS was a barrier, compared to only 3 of 13 clinicians (23%) at the more urban site (Clinic U). Only clinicians in Clinic U noted that client speed in completing surveys (2 of 13 clinicians) and lateness to appointments (1 of 13 clinicians) posed barriers. The majority of clients at Clinic U completed the forms in paper-and-pencil format (youth = 60.6%; caregiver = 78.2%), whereas the majority at Clinic R completed them on the computer (youth = 84%; caregiver = 60%).

With regard to staff level barriers, fewer than one fifth of clinicians across both sites noted barriers at the individual or personal level, such as not seeing CFS as a priority or being uncomfortable with technology. As one individual noted, “I am an older staff member and I am not that computer savvy.”

The Other category accounted for 15% of the 119 coded barriers and included the following subcategories, listed in order of prevalence based on the number of clinicians reporting them: a) insufficient training (44%), b) the project’s nature as a research study (39%), and c) the system being rolled out too early (17%). Examples describing the insufficient training barrier included: “It was a little too much to take at the training. I was feeling a little lost” and “I think we needed more training, that’s all.”

Facilitators

Across all 18 participants, there were 50 uniquely coded facilitators across the innovation, organizational, client, staff, and other categories (see Table 4 for quotes). Overall, Clinic R participants reported more facilitators on average (m = 4.00, sd = 2.35) than did participants at Clinic U (m = 2.31, sd = 1.80). A notable difference emerged at the organizational level: Clinic R clinicians reported more organizational facilitators (m = 1.80, sd = 0.45) than those at Clinic U (m = 0.69, sd = 0.48).

The facilitators most frequently reported by clinic staff fell in the organizational category, accounting for 36% of all coded facilitators. Within this category, all clinicians (100%) in Clinic R reported leadership championing of CFS as a facilitator, compared to fewer than half of clinicians at Clinic U (46%). Similarly, four of five clinicians (80%) in Clinic R perceived organizational support in the form of implementation efforts as a facilitator, compared to 3 of 13 (23%) in Clinic U. As an example of leadership facilitation, one person stated, “They’ve done everything that they could do to make everyone take this absolutely seriously.”

Innovation facilitators accounted for 28% of all reported facilitators. The most frequently noted facilitator in this category was the clinical applicability/relevance of CFS (33%). One individual described the program’s relevance and clinical utility in a case where the client divulged a clinical issue on the CFS measures but not directly to the clinician.

Other facilitators accounted for 24% of the total reported facilitators and encompassed the following two subcategories: a) clinical support (39%) and b) training support (28%). As with barriers, client and staff level factors accounted for the lowest proportion of facilitators coded, with each accounting for 6% of the total reported facilitators.

Discussion

States and other entities have been strong proponents of, and investors in, the dissemination and implementation of evidence-based practices to improve care quality. Understanding facilitators of and barriers to innovation implementation from the perspective of staff is critical because staff are typically the stakeholder group responsible for, and hence most challenged by, implementing such innovations. Given the challenges associated with implementing a new technology like CFS, it is not surprising that clinicians at both clinics identified two to three times as many barriers as facilitators. Notably, the pattern of barriers was similar across the clinics, and both reported proportionately more barriers than facilitators, regardless of level of implementation. In fact, clinicians in the higher implementing clinic reported a higher ratio of overall barriers to facilitators than those in the lower implementing clinic. The finding that higher implementing clinicians reported many more barriers may reflect their greater effort and experience in working through the challenges of integrating a new practice.

Interestingly, one distinguishing factor between the two clinics was the average number of organizational level facilitators of CFS implementation, with the higher implementing clinic reporting more. This finding is consistent with Panzano and Roth’s (2006) notion that perceived capacity for managing barriers may be key to adoption success and, by extension, implementation effectiveness. Thus, despite clinicians perceiving similar barriers across sites, those in the higher implementing clinic (Clinic R) reported more leadership support, including champions of CFS and concomitant onsite training and implementation support to help them implement the new technology over time. At Clinic R, an internal senior clinic administrator and her assistant provided ongoing consultation, clinical supervision, and technical support in the use of CFS; the senior administrator also provided more immediate oversight of CFS implementation. At Clinic U, a doctoral-level project staff person provided regularly scheduled consultation on a monthly or bimonthly basis. While the project staff may have been as technically competent as the internal senior agency staff, it is plausible that the internal staff had greater influence on clinician behavior. The greater day-to-day involvement of senior leadership at Clinic R may have generated greater engagement among all staff. Although staff at both clinics were required to complete CFS measures as part of performance evaluation, this requirement did not take effect until well into the implementation phase. Another possibility is that in-house staff support made facilitative supports more readily accessible, whereas Clinic U had to rely on external project staff providing in-person support on a scheduled basis.
There is emerging evidence that active and explicit leadership for innovations, both in terms of general support and setting expectations can have a positive impact on implementation (Hall & Yip, 2014; Martin et al., 2011; Unsworth, Cowie, & Green, 2012; Wolpert, Curtis-Tyler, & Edbrooke-Childs, 2014).

In this project, the most prevalent category of barriers was the innovation itself. As has been increasingly recognized in the literature, the fit between the innovation and the organization is a critical factor (Aarons et al., 2011). While the clinical utility of CFS was valued, the design and technological demands of using CFS posed significant challenges for the majority of users. Specifically, clinicians described that it took considerable time and effort to learn the system, implement it, and use its feedback effectively; time is a commodity in short supply at both clinics. In addition, challenges with CFS design and technology interfered with ease of use. For example, a number of individuals stated that it took too many clicks on the computer to get where they wanted to go. Over the duration of the study, many of these features were changed and system upgrades were made to address these issues. However, these early negative experiences likely created barriers by reducing clinicians’ openness to the system and their perception of its clinical utility.

Related to innovation characteristics, the CFS software was the same at both sites, but the feedback was implemented differently. Clinic U did not use computers or the provided tablets for youth or caregivers to answer the questionnaires at the end of the session. This resulted in significantly longer delays before clinicians received feedback, since the paper forms had to be entered by hand. The additional delay could have weakened the effectiveness of the feedback and hampered the implementation effort in Clinic U. Additionally, Clinic U had younger clinicians with smaller relevant caseloads; they reported less organizational leadership; and, because they used paper rather than web-based entry, they apparently experienced more problems, or had more difficulty resolving problems, with the computerized technology. These various factors could have affected attitudes and behaviors and hence CFS implementation.

In addition to innovation characteristics, organizational barriers were the second most commonly described. Conflicting priorities with organizational demands such as productivity or other agency-mandated paperwork were often highlighted. Lack of direct support from administration to implement CFS was also a significant barrier for clinicians in the lower implementing clinic. Clinicians also described their internal struggle between prioritizing the completion of CFS according to agency priorities and their belief that, even with the information CFS supplied, the burden of CFS completion sometimes interfered with the content and process of their therapy sessions.

Client and staff level factors were the least frequently mentioned facilitators and barriers, perhaps due to response bias. This finding may reflect the emphasis the CFS training placed on supporting staff and problem solving issues as they occurred. Alternatively, it may reflect the powerful influence of organizational social context on innovation implementation: staff may have emphasized organizational factors because of the critical role organizational culture (e.g., work priorities; expectations of frontline providers) played in their ability to take on and implement new technologies. The role of organizational factors in service and implementation quality has been well documented (e.g., Aarons et al., 2011).

Our case study supports this important role of organizational support in innovation implementation. Organizational factors were the second most commonly reported type of barrier and also the most commonly reported facilitator of CFS implementation; thus, organizational issues appear to exert both positive and negative effects on implementation. Interestingly, the only significant difference found between the two implementing clinics related to organizational support. Despite being part of the same agency and operating under the same agency mandate to implement CFS, the higher implementing clinic reported significantly more organizational facilitators than the lower implementing clinic. Consistent with the literature, agency leadership support, including having a champion, was endorsed as a facilitator by every clinician in the higher implementing clinic. By contrast, fewer than half of the clinicians in the lower implementing clinic perceived leadership support for this effort despite the mandate. Geographic proximity to leadership may have been a factor: Clinic R was co-located with the leadership’s administrative offices, while the other clinic was almost 100 miles away and thus had less direct access to agency support. The availability of leadership support, and leadership’s ongoing efforts to facilitate implementation, likely mitigated clinicians’ perception of barriers, creating a more receptive climate for implementing a new technology.

The Other category of barriers included aspects of the CFS training that may be unique to CFS as a technology and to its stage of development. When the CFS training was rolled out, the most current version of the system was still being finalized: clinicians and users were initially trained with screen shots rather than a fully working CFS program. After the initial training, a beta version was implemented and updated throughout the project. Participating clinicians reported feeling lost with some aspects of the training, especially early on, when they were asked to imagine how the system would fit their organization or clinical procedures.

Limitations

This is a case study of only two clinics and a small sample of clinicians. As mentioned earlier, two of the original four clinics that started the project de-adopted before implementing the full system, limiting our ability to get feedback from them on the implementation process. Moreover, due to clinician attrition at the two implementing clinics, the sample did not include all clinicians who implemented CFS. Our findings may thus be biased and of limited generalizability given the small, restricted sample of adopters. It is possible that de-adopters experienced different barriers and facilitators. Findings should thus be interpreted with caution.

Additionally, our crude counts of coded barriers and facilitators do not distinguish the relative importance of these factors in influencing CFS implementation. It is possible that some of the barriers and facilitators had a greater or differential impact on implementation than others. The relative impact of these various factors was not investigated.

As with many long-term implementation efforts, our study was subject to an ever-changing service system. During the course of the study, all clinics across the state were undergoing restructuring efforts that affected how they were funded and paid. This had a major impact on caseload, session length, and many other variables within each clinic. Interestingly, participants did not explicitly mention clinic restructuring as a cause of implementation difficulty, but the stressors imposed by the restructuring had, and continue to have, a large impact on most clinics across the state. This issue may be mitigated by the fact that clinic restructuring likely affected both clinics similarly, since they were part of the same parent agency.

Another study limitation relates to the use of semi-structured interviews rather than standardized measures to understand contextual issues around implementation. While semi-structured interviews have inherent limitations, the use of this methodology provided rich, detailed information not available from standard quantitative methods. Finally, the sole focus on clinicians and not other stakeholders from multiple levels of the organization is a limitation of the current study. Different stakeholders, especially in de-adopting clinics, are likely to have different perspectives about implementation, depending on their role. This would be an important area of future research.

Conclusion

This study highlights the importance of facilitating conditions, particularly organizational support and leadership, in creating an environment conducive to integrating new practices. It also points to the importance of not focusing exclusively on barriers to implementation. While disparate clinics may face similar barriers, the tipping point for improved implementation may be the presence of key facilitating organizational conditions that help users overcome the barriers that will inevitably arise. Future research on the adoption and implementation of innovations should examine the ratio of barriers to facilitators and the important role of organizational and leadership factors in tipping the balance toward successful installation of new practices in these complex child and adolescent healthcare systems.

Appendix A

Interview Guide

Hi, my name is ________. I’m speaking with you today because we want to get more information about aspects of the CFS project that have gone well for you, or aspects that could be improved. We will be recording these calls for note-taking purposes, but all of your responses are confidential and won’t be shared with other members of your agency. Your answers will be used to better understand this quality improvement initiative. We really appreciate your time and willingness to speak with us. Do you have any questions before we get started?

Let’s start with some general questions about your overall experience during this project

General Overview

  1. How would you describe your experience using CFS? What do you like or dislike?

  2. What supports have been helpful when using CFS?

  3. What have been some of the barriers to successful implementation?

Quality of Training, User manuals, and Technical Assistance

  1. What do you think of the support provided by the Columbia/Vanderbilt team? If subject needs prompting, ask the subject specifically about the phases of CFS contextualization:
    1. Training
    2. Consultation
    3. Technical support
  2. Would you work in collaboration with the Columbia/Vanderbilt team again? Why or why not?

Management Support

  1. How well has your agency supported the implementation of CFS? (For example: giving time, training, administrative support) If subject needs prompting, ask the following:
    1. Do you think CFS is a priority at the management level? Why or why not?
    2. How much do you feel like other clinic obligations or initiatives impact your CFS use?
  2. How have your co-workers responded to CFS?

  3. Is there someone at your agency that particularly helped support CFS? What does he or she do that is helpful?

Technical Issues

  1. How did you feel about the design of the program? (For example: the layout, interface, graphics, etc.) If subject needs prompting, ask the following:
    1. What are some of the technical difficulties, if any, with the program?
    2. Which components of CFS do you feel most comfortable using?
    3. Which components of CFS do you feel least comfortable using?
  2. What could have been done differently to make you feel more comfortable with the technical aspects of using CFS?

Now I’m going to ask you about the clinical application of CFS and how the program fits with your clinical practice.

Clinical Use

  1. Do you think the measures reported by the program accurately reflect your client’s current state? Does it mesh with what you see in session?

  2. Is there sufficient time in the course of a session for clients and caregivers to complete the CFS questionnaires? Is there sufficient time to discuss their responses?

  3. Did you feel comfortable interpreting CFS data? If not, what would be helpful to increase your comfort level?

  4. How does using CFS inform your clinical practice? Can you think of a time when it has (or hasn’t) changed the way you handled a case?

  5. Overall, do you believe a program like CFS could be helpful to you in a clinical setting? Why or why not?

  6. How do the kids you work with respond to the use of CFS? If subject needs prompting, ask the following:
    1. What aspects of the program do kids struggle with?
    2. What aspects of the program do kids use successfully?
  7. How do caregivers respond to the use of CFS? If subject needs prompting, ask the following:
    1. What aspects of the program do caregivers struggle with?
    2. What aspects of the program do caregivers use successfully?
  8. How could CFS be better adapted to suit the needs of your clients and their caregivers (the population you work with)?

Other

  1. Some of your new clients were randomly assigned to have data available all the time, and some only had data available every six months. What did you think about the randomization process? How did it affect your engagement with CFS?

  2. Is there any other feedback you’d like to offer regarding your experience with CFS?

Those are all of the questions I have for you today. Thank you again for participating! We have one final piece of this interview--it’s an online survey that you should be able to complete quickly. I’ll send you the link to the survey now. It would be great if you could complete the survey as soon as possible, while your experience with CFS is still fresh in your mind.

References

  1. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):4–23. doi: 10.1007/s10488-010-0327-7.
  2. Aarons GA, Sawitzky AC. Organizational climate partially mediates the effect of culture on work attitudes and staff turnover in mental health services. Administration and Policy in Mental Health and Mental Health Services Research. 2006;33(3):289–301. doi: 10.1007/s10488-006-0039-1.
  3. American Psychological Association (APA) Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. American Psychologist. 2006;61(4):271–285. doi: 10.1037/0003-066X.61.4.271.
  4. Arco L. Improving program outcome with process-based performance feedback. Journal of Organizational Behavior Management. 1997;17(1):37–64.
  5. Bernard HR, Ryan GW. Analyzing qualitative data: Systematic approaches. SAGE Publications; 2009.
  6. Bickman L, Douglas Kelley S, Athay M. Couple and Family Psychology: Research and Practice. 2012;1(4):274–284. doi: 10.1037/a0031022.
  7. Bickman L, Douglas Kelley S, Breda C, de Andrade AR, Riemer M. Effects of routine feedback to clinicians on mental health outcomes of youths: Results of a randomized trial. Psychiatric Services. 2011;62(12):1423–1429. doi: 10.1176/appi.ps.002052011.
  8. Brunette M, Asher D, Whitley R, Lutz W, Wieder B, Jones A, et al. Implementation of integrated dual disorders treatment: A qualitative analysis of facilitators and barriers. Psychiatric Services. 2008;59(9):989–995. doi: 10.1176/ps.2008.59.9.989.
  9. Bruns EJ, Hoagwood KE, Hamilton JD. State implementation of evidence-based practice for youths, part I: Responses to the state of the evidence. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(4):369–373. doi: 10.1097/CHI.0b013e31816485f4.
  10. Carr JZ, Schmidt AM, Ford JK, DeShon RP. Climate perceptions matter: A meta-analytic path analysis relating molar climate, cognitive and affective states, and individual level work outcomes. Journal of Applied Psychology. 2003;88(4):605. doi: 10.1037/0021-9010.88.4.605.
  11. Cebul RD. Using electronic medical records to measure and improve performance. Transactions of the American Clinical and Climatological Association. 2008;119:65–76.
  12. Chorpita BF, Bernstein A, Daleiden EL. Driving with roadmaps and dashboards: Using information resources to structure the decision models in service organizations. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35(1–2):114–123. doi: 10.1007/s10488-007-0151-x.
  13. Chorpita BF, Bernstein A, Daleiden EL. Empirically guided coordination of multiple evidence-based treatments: An illustration of relevance mapping in children’s mental health services. Journal of Consulting and Clinical Psychology. 2011;79(4):470–480. doi: 10.1037/a0023982.
  14. Duncan K, Pozehl B. Effects of performance feedback on patient pain outcomes. Clinical Nursing Research. 2000;9(4):379–397. doi: 10.1177/10547730022158645.
  15. Fixsen D, Blase K, Metz A, Van Dyke M. Statewide implementation of evidence-based programs. Exceptional Children. 2013;79(2):213–230.
  16. Friese S. ATLAS.ti 6 user manual. Berlin, Germany: ATLAS.ti Scientific Software Development GmbH; 2011.
  17. Furman CE, Adamek MS, Furman AG. The use of an auditory device to transmit feedback to student therapists. Journal of Music Therapy. 1992;29(1):40–53.
  18. Glisson C. The organizational context of children’s mental health services. Clinical Child and Family Psychology Review. 2002;5(4):233–253. doi: 10.1023/a:1020972906177.
  19. Glisson C. Interventions with organizations: The ARC model. In: Sowers K, Dulmus C, editors. The comprehensive handbook of social work and social welfare. New Jersey: Wiley; 2008.
  20. Glisson C, Durick M. Predictors of job satisfaction and organizational commitment in human service organizations. Administrative Science Quarterly. 1988;33(1):61–81.
  21. Glisson C, Green P. The effects of organizational culture and climate on the access to mental health care in child welfare and juvenile justice systems. Administration and Policy in Mental Health and Mental Health Services Research. 2006;33(4):433–448. doi: 10.1007/s10488-005-0016-0.
  22. Glisson C, Hemmelgarn A. The effects of organizational climate and inter-organizational coordination on the quality and outcomes of children’s service systems. Child Abuse and Neglect. 1998;22:401–421. doi: 10.1016/s0145-2134(98)00005-2.
  23. Glisson C, Hemmelgarn A, Green P, Williams NJ. Randomized trial of the availability, responsiveness and continuity (ARC) organizational intervention for improving youth outcomes in community mental health programs. Journal of the American Academy of Child and Adolescent Psychiatry. 2013a;52(5):493–500. doi: 10.1016/j.jaac.2013.02.005.
  24. Glisson C, James LR. The cross-level effects of culture and climate in human service teams. Journal of Organizational Behavior. 2002;23:767–794.
  25. Glisson C, Landsverk J, Schoenwald S, Kelleher K, Hoagwood KE, Mayberg S, et al. Assessing the organizational social context (OSC) of mental health services: Implications for research and practice. Administration and Policy in Mental Health and Mental Health Services Research. 2008a;35(1):98–113. doi: 10.1007/s10488-007-0148-5.
  26. Glisson C, Schoenwald SK, Hemmelgarn A, Green P, Dukes D, Armstrong KS, et al. Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology. 2010;78(4):537. doi: 10.1037/a0019160.
  27. Goebel LJ. A peer review feedback method of promoting compliance with preventive care guidelines in a resident ambulatory care clinic. Joint Commission Journal on Quality Improvement. 1997;23:196–202. doi: 10.1016/s1070-3241(16)30309-1.
  28. Greener JM, Joe GW, Simpson DD, Rowan-Szal GA, Lehman WE. Influence of organizational functioning on client engagement in treatment. Journal of Substance Abuse Treatment. 2007;33(2):139. doi: 10.1016/j.jsat.2006.12.025.
  29. Guzzo RA, Jette RD, Katzell RA. The effects of psychologically based intervention programs on worker productivity: A meta-analysis. Personnel Psychology. 1985;38(2):275–291.
  30. Hall DT, Yip K. Career cultures and climates in organizations. In: Schneider B, Barbera K, editors. The Oxford handbook of organizational climate and culture. New York: Oxford University Press; 2014.
  31. Holmboe E, Scranton R, Sumption K, Hawkins R. Effect of medical record audit and feedback on residents’ compliance with preventive health care guidelines. Academic Medicine. 1998;73(8):901–903. doi: 10.1097/00001888-199808000-00016.
  32. Howe A. Detecting psychological distress: Can general practitioners improve their own performance? British Journal of General Practice. 1996;46(408):407–410.
  33. Jensen-Doss A, Hawley KM. Understanding barriers to evidence-based assessment: Clinician attitudes toward standardized assessment tools. Journal of Clinical Child & Adolescent Psychology. 2010;39(6):885–896. doi: 10.1080/15374416.2010.517169.
  34. Klein KJ, Sorra JS. The challenge of innovation implementation. Academy of Management Review. 1996;21(4):1055–1080.
  35. Kluger AN, DeNisi A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin. 1996;119(2):254–284.
  36. Lambert M, Hansen N, Finch A. Patient-focused research: Using treatment outcome data to enhance treatment effects. Journal of Consulting and Clinical Psychology. 2001;69:159–172.
  37. Lambert MJ, Whipple JL, Smart DW, Vermeersch DA, Nielsen SL, Hawkins EJ. The effects of providing therapists with feedback on patient progress during psychotherapy: Are outcomes enhanced? Psychotherapy Research. 2001;11:49–68. doi: 10.1080/713663852.
  38. Lambert MJ, Harmon C, Slade K, Whipple JL, Hawkins EJ. Providing feedback to psychotherapists on their patients’ progress: Clinical results and practice suggestions. Journal of Clinical Psychology. 2005;61(2):165–174. doi: 10.1002/jclp.20113.
  39. Lehman WE, Greener JM, Simpson DD. Assessing organizational readiness for change. Journal of Substance Abuse Treatment. 2002;22(4):197–209. doi: 10.1016/s0740-5472(02)00233-7.
  40. Leshan LA, Fitzsimmons M, Marbella A, Gottlieb M. Increasing clinical prevention efforts in a family practice residency program through CQI methods. The Joint Commission Journal on Quality Improvement. 1997;23(7):391–400. doi: 10.1016/s1070-3241(16)30327-3.
  41. Martin AM, Fishman R, Baxter L, Ford T. Practitioners’ attitudes towards the use of standardized diagnostic assessment in routine practice: A qualitative study in two child and adolescent mental health services. Clinical Child Psychology and Psychiatry. 2011;16(3):407–420. doi: 10.1177/1359104510366284.
  42. Mazonson PD, Mathias SD, Fifer SK, Buesching DP, Malek P, Patrick DL. The mental health patient profile: Does it change primary care physicians’ practice patterns? The Journal of the American Board of Family Practice. 1996;9(5):336–345.
  43. Mortenson BP, Witt JC. The use of weekly performance feedback to increase teacher implementation of a pre-referral academic intervention. School Psychology Review. 1998;27:613–627.
  44. Neuman GA, Edwards JE, Raju NS. Organizational development interventions: A meta-analysis of their effects on satisfaction and other attitudes. Personnel Psychology. 1989;42(3):461–489.
  45. New Freedom Commission on Mental Health. Achieving the promise: Transforming mental health care in America: Final report. 2003. Retrieved from http://govinfo.library.unt.edu/mentalhealthcommission/reports/reports.htm.
  46. New York State (NYS) Office of Mental Health (OMH). NYS OMH dashboard. 2011. Retrieved from http://bi.omh.ny.gov/cmhp/dashboard.
  47. Olin SS, Williams N, Pollock M, Armusewicz K, Kutash K, Glisson C, Hoagwood KE. Quality indicators for family support services and their relationship to organizational social context. Administration and Policy in Mental Health and Mental Health Services Research. 2014;41(1):43–54. doi: 10.1007/s10488-013-0499-z.
  48. Osborne D, Gaebler T. Reinventing government: How the entrepreneurial spirit is transforming the public sector. New York: Plume; 1992.
  49. Panzano P, Roth D. The decision to adopt evidence-based and other innovative mental health practices: Risky business? Psychiatric Services. 2006;57(8):1153–1161. doi: 10.1176/ps.2006.57.8.1153.
  50. Parker CP, Baltes BB, Young SA, Huff JW, Altmann RA, Lacost HA, et al. Relationships between psychological climate perceptions and work outcomes: A meta-analytic review. Journal of Organizational Behavior. 2003;24(4):389–416.
  51. Peters TJ, Waterman RH. In search of excellence. New York: Harper & Row; 1982.
  52. Robertson PJ, Roberts DR, Porras JI. Dynamics of planned organizational change: Assessing empirical support for a theoretical model. Academy of Management Journal. 1993;36(3):619–634.
  53. Robinson MB, Thompson E, Black NA. Evaluation of the effectiveness of guidelines, audit and feedback: improving the use of intravenous thrombolysis in patients with suspected acute myocardial infarction. International Journal for Quality in Health Care. 1996;8:211–222. doi: 10.1093/intqhc/8.3.211. [DOI] [PubMed] [Google Scholar]
  54. Rokstad K, Straand J, Fugelli P. Can drug treatment be improved by feedback on prescribing profiles combined with therapeutic recommendations? A prospective, controlled trial in general practice. Journal of Clinical Epidemiology. 1995;48:1061–1068. doi: 10.1016/0895-4356(94)00238-l. [DOI] [PubMed] [Google Scholar]
  55. Rose DJ, Church RJ. Learning to teach: the acquisition and maintenance of teaching skills. Journal of Behavioral Education. 1998;8(1):5–35. [Google Scholar]
  56. Sapyta J, Riemer M, Bickman L. Feedback to clinicians: Theory, research, and practice. Journal of Clinical Psychology. 2005;61(2):145–153. doi: 10.1002/jclp.20107. [DOI] [PubMed] [Google Scholar]
  57. Sheridan JE. Organizational culture and employee retention. Academy of Management Journal. 1992;35(5):1036–1056. [Google Scholar]
  58. Shim M. Factors influencing child welfare employee’s turnover: Focusing on organizational culture and climate. Children and Youth Services Review. 2010;32(6):847–856. [Google Scholar]
  59. Tabenkin H, Steinmetz D, Eilat Z, Heman H, Dagan B, Epstein L. A peer review programme to audit the management of hypertensive patients in family practices in Israel. Family Practice. 1995;12(3):309–312. doi: 10.1093/fampra/12.3.309. [DOI] [PubMed] [Google Scholar]
  60. Tuckman BW, Yates D. Evaluating the student feedback strategy for changing teacher style. Journal of Educational Research. 1980;74(2):74–77. [Google Scholar]
  61. Unsworth G, Cowie H, Green A. Therapists’ and clients’ perceptions of routine outcome measurement in the NHS: A qualitative study. Counselling and Psychotherapy Research: Linking research with practice. 2012;12(1):71–80. [Google Scholar]
  62. Wisdom JP, Chor KHB, Hoagwood KE, Horwitz SM. Innovation adoption: A review of theories and constructs. Administration and Policy in Mental Health and Mental Health Services Research. 2013;41(4):480–502. doi: 10.1007/s10488-013-0486-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Wolpert M, Curtis-Tyler K, Edbrooke-Childs J. A qualitative exploration of patient and clinician views on patient reported outcome measures in child mental health and diabetes services. Administration and Policy in Mental Health and Mental Health Services Research. 2014 doi: 10.1007/s10488-014-0586-9. Advance online publication. [DOI] [PMC free article] [PubMed] [Google Scholar]