Author manuscript; available in PMC: 2013 Nov 26.
Published in final edited form as: Child Maltreat. 2011 Dec 5;17(1). doi: 10.1177/1077559511426908

Mixed Methods for Implementation Research: Application to Evidence-Based Practice Implementation and Staff Turnover in Community Based Organizations Providing Child Welfare Services

Gregory A Aarons 1,*, Danielle L Fettes 2, David H Sommerfeld 3, Lawrence Palinkas 4
PMCID: PMC3841106  NIHMSID: NIHMS525839  PMID: 22146861

Abstract

Many public sector service systems and provider organizations are in some phase of learning about or implementing evidence-based interventions. Child welfare service systems represent a context where implementation spans system, management, and organizational concerns. Research utilizing mixed methods that combine qualitative and quantitative design, data collection, and analytic approaches is particularly well-suited to understanding both the process and outcomes of dissemination and implementation efforts in child welfare systems. This paper describes the process of using mixed methods in implementation research and provides an applied example examining factors that affected staff retention during an evidence-based intervention implementation in a statewide child welfare system. We integrate qualitative data with previously published quantitative analyses of job autonomy and staff turnover during this statewide implementation project in order to illustrate the utility of mixed method approaches in providing a more comprehensive understanding of opportunities and challenges in implementation research.

Keywords: child welfare, implementation, dissemination, evidence-based practice, turnover, retention, fidelity


Mixed method research involves the integration of qualitative and quantitative method philosophies, designs, strategies, analytic approaches, and interpretations (Greene, 2006; Johnson, Onwuegbuzie, & Turner, 2007; Tashakkori & Teddlie, 2003). Mixed method research is increasingly being recognized as critical for studies of innovation implementation in health and human service settings (Demakis, McQueen, Kizer, & Feussner, 2000; Greenhalgh et al., 2010; Palinkas, Aarons et al., 2011; Soh et al., 2011; Stetler et al., 2006). For implementation research, mixed method designs are being utilized to develop a science base for understanding facilitators and barriers to implementation, for understanding the process and outcomes of implementation, and for testing novel implementation strategies (Palinkas, Aarons et al., 2011; Palinkas, Horwitz, Chamberlain, Hurlburt, & Landsverk, 2011; Waitzkin, Schillaci, & Willging, 2008). Mixed method studies have great potential to improve the quality and breadth of implementation research; however, few empirical studies have illustrated the advantages of mixed method approaches for understanding and improving implementation processes. In practice, many studies with qualitative and quantitative components lack effective “mixing” of results from these different and potentially complementary methods. Therefore, in this paper we illustrate how mixed method design, analysis, and integration of data can be used to better understand how large-scale evidence-based practice (EBP) implementation impacts the child welfare provider workforce.

Mixed Method Approaches

Mixed method research designs encompass the collection, analysis, and integration of quantitative and qualitative data and their interpretation. The central premise of these designs is that the use of mixed methods provides a more comprehensive, more detailed, and richer understanding of research issues than either approach alone (Creswell & Plano Clark, 2011; Robins et al., 2008; Waitzkin et al., 2008). In implementation studies, mixed method designs can be used to explore and obtain depth of understanding not possible with one approach and data source alone. For example, one can explore the reasons for success or failure to attain important implementation outcomes such as model fidelity or the reach of the intervention to service providers and to the appropriate clinical population. Mixed methods can also be used to identify strategies for facilitating implementation. For example, conceptual models and their components can be assessed through formative and confirmatory evaluation (Aarons, Hurlburt, & Horwitz, 2011; Mendel, Meredith, Schoenbaum, Sherbourne, & Wells, 2008; Stetler et al., 2006), and implementation approaches can be tested and evaluated (e.g., Glisson et al., 2010). Quantitative methods may be best used to test and confirm hypotheses based on an existing conceptual model, whereas qualitative data can increase breadth of understanding of predictors and quantitative outcomes (Creswell & Plano Clark, 2011; Teddlie & Tashakkori, 2003). Quantitative data can also be used for purposes such as identifying and selecting appropriate samples for qualitative data collection (e.g., Aarons & Palinkas, 2007).

Mixed Method Functions

Mixed method approaches are characterized by several functions (also known as “component features”). The functions of mixed methods depend on whether the two methods are being used to answer the same question, to answer related questions, or to answer different questions. In regard to the relative roles of one method in relation to the other, designs can emphasize quantitative or qualitative data as the primary method (indicated by abbreviations in all capital letters, i.e., “QUAN” for quantitative or “QUAL” for qualitative approaches), with the other being secondary (indicated by abbreviations in all lower case letters, i.e., “quan” or “qual”), or with both given equal emphasis within a given design. The sequencing of methods is another design element, and methods can be sequential or concurrent in regard to data collection, analysis, and interpretation. Sequential designs are indicated by one abbreviation preceding the other with an arrow in between (e.g., “qual → quan” or “quan → qual”), and concurrent methods are indicated by a plus sign (e.g., “qual + quan” or “quan + qual”). Where both emphasis and sequencing are indicated, combinations of the above indicators can be used (e.g., QUAN → qual to indicate emphasis on quantitative methods that precede qualitative methods).
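To make these notation rules concrete, the brief sketch below encodes them in Python; the function name and parameters are ours for illustration and are not part of the cited notation systems.

```python
# Illustrative only: encode the notation conventions described above
# (capital letters for the emphasized method, an arrow for sequential designs,
# a plus sign for concurrent designs). Function and parameter names are ours.
def design_label(first: str, second: str, sequential: bool,
                 emphasize_first: bool = False, emphasize_second: bool = False) -> str:
    a = first.upper() if emphasize_first else first.lower()
    b = second.upper() if emphasize_second else second.lower()
    connector = " → " if sequential else " + "
    return a + connector + b

print(design_label("quan", "qual", sequential=True, emphasize_first=True))  # QUAN → qual
print(design_label("qual", "quan", sequential=False))                       # qual + quan
```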

A number of sources provide rich descriptions of multiple approaches to mixed method designs (Greene, Caracelli, & Graham, 1989; Morse, 1991; Palinkas, Aarons et al., 2011; Teddlie, Tashakkori, & Johnson, 2008). For this paper, we focus on the functions of convergence, complementarity, and expansion to provide an example of mixed methods in examining staff turnover in a statewide child welfare system during implementation of an EBP.

Qualitative and quantitative data can be integrated through “triangulation,” a process for strategically utilizing multiple methods together, in order to examine convergence, expansion, and complementarity of qualitative and quantitative data sets (Creswell & Plano Clark, 2011; Teddlie & Tashakkori, 2003). Convergence is a strategy used to determine if qualitative and quantitative results provide the same answer to the same question. For example, do interview or observational data concur with quantitative data regarding factors influencing turnover? Complementarity is used to answer related questions for the purpose of evaluation or elaboration. In evaluative designs, quantitative data are used to evaluate outcomes while qualitative data are used to evaluate process. In elaborative designs, qualitative methods are used to provide depth of understanding and quantitative methods are used to provide breadth of understanding. Expansion is used to determine whether or not unanticipated findings produced by one data set can be explained by another. For example, can survey data that suggest reasons for turnover be further expanded on or explained by qualitative data?

Mixed Method Sequencing

The use of mixed methods can involve concurrent or sequential collection, analysis, and use of quantitative and qualitative data sets. For example, data sets can be merged, connected to one another, or embedded one within the other so that each plays a supportive role for the other. The illustrative study we describe below follows recommendations to place the results of each data set side-by-side to examine convergence, complementarity, and expansion (Palinkas, Aarons et al., 2011). Placing results side by side in the triangulation process can also provide a way to share results with stakeholders to determine if there is concurrence on the extent to which the two types of data sets have demonstrated convergence, expansion, or complementarity.

Mixed Method Sampling and Sample Selection

There are a number of choices that can be made in regard to sampling in mixed method research. While the complexities are beyond the purview of this paper, the major concepts are relatively simple. Common approaches to sampling for mixed method research include probability sampling, purposive sampling, convenience sampling, and mixed methods sampling (Teddlie & Yu, 2007). We briefly describe each below and refer the reader to the taxonomy described by Teddlie and Yu (2007) for a more detailed description of the issues and choices.

Probability sampling is typically used to promote the representativeness of a sample so that inferences can be made to a larger heterogeneous population (Babbie, 2010). Probability sampling can be random (i.e., individuals from a population), stratified (i.e., from target subgroups to assure representativeness), or cluster (i.e., from groups of people in units such as neighborhoods or regions). The particular approach to probability sampling depends on the goals of the study and the population to which inferences are to be made.

Purposive sampling may be utilized to identify and select specific groups of interest. One approach to purposive sampling is maximum variation sampling where the goal is to select participants with divergent perspectives, for example those with the most positive or negative views on a topic, issue, or experience (e.g., Aarons & Palinkas, 2007).

Convenience sampling involves utilizing samples that are readily available and willing to participate. However, there are drawbacks to this approach in that such participants may not be representative of populations of interest or may not be comparable to those who avoid or refuse participation (Babbie, 2010).

Mixed methods sampling can involve probability sampling and purposive sampling (Teddlie & Yu, 2007). It also includes consideration of whether quantitative and qualitative aspects of a study are sequential or concurrent and considers the multiple levels of interest in a study. As such, mixed methods sampling has a high degree of utility for studies across implementation phases of Exploration, Adoption Decision/Preparation, Active Implementation and Sustainment and across the outer context (e.g., child welfare system level) and inner context (e.g., child welfare organizational provider level) of child welfare and other public sector service systems (Aarons et al., 2011).
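As a rough illustration of the sampling strategies just described, the sketch below contrasts a stratified probability draw with a maximum variation (purposive) selection based on a survey score; the data, column names, and sample sizes are hypothetical and not drawn from the study.

```python
# A minimal sketch with simulated data: stratified probability sampling versus
# maximum variation (purposive) selection on a survey score. All column names
# (region, sc_value_score) and sizes are hypothetical.
import pandas as pd

staff = pd.DataFrame({
    "provider_id": range(120),
    "region": [f"R{i % 6 + 1}" for i in range(120)],               # six regions
    "sc_value_score": [(i * 37 % 100) / 20 for i in range(120)],   # stand-in attitude score
})

# Stratified probability sample: two providers drawn at random from each region.
stratified = staff.groupby("region").sample(n=2, random_state=0)

# Maximum variation (purposive) sample: providers with the most positive and
# the most negative perceived value of the practice.
max_variation = pd.concat([staff.nlargest(5, "sc_value_score"),
                           staff.nsmallest(5, "sc_value_score")])

print(len(stratified), len(max_variation))  # 12 and 10 providers selected
```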

Application of Mixed Method Design to Staff Turnover in Child Welfare

Staff turnover is an ongoing challenge in child welfare and mental health service systems and community-based service organizations that increases costs and limits effective organizational functioning. In particular, turnover has a negative impact on staff morale, short and long-term productivity, and organizational effectiveness (Gray, Phillips, & Normand, 1996; Jayaratne & Chess, 1984; Mowday, Porter, & Steers, 1982). Turnover is also associated with suboptimal work-team performance, low productivity (Argote, Insko, Yovetich, & Romero, 1995), organizational climate and culture (Aarons & Sawitzky, 2006; Glisson et al., 2008), and can impact the quality and outcomes of services provided by child mental health and social service staff working with clients (Glisson, Dukes, & Green, 2006).

Turnover rates in public sector child welfare, mental health and social services tend to be high and such organizations would benefit from better service provider retention. Turnover has implications for costs of recruitment and training and the quality of products and services (Glisson, 2002; Knudsen, Johnson, & Roman, 2003). Turnover can be a particularly serious concern in child welfare and human service agencies with annual rates often in excess of 25% and sometimes exceeding 50% (Aarons & Sawitzky, 2006; Gallon, Gabriel, & Knudsen, 2003; Glisson et al., 2006; Glisson & James, 2002; Howard & Gould, 2000; Landsman, 2007; Williams, Nichols, & Wilson, 2011).

For child welfare in particular, staff turnover has been a consistent and ongoing concern (DePanfilis & Zlotnik, 2008; Landsman, 2007; Williams et al., 2011). While studies often report staff perceptions without actually predicting retention or turnover, others have attempted to apply more comprehensive approaches. DePanfilis and Zlotnik’s (2008) review of multivariate studies addressing staff retention in child welfare found that provider factors of higher self-efficacy and lower emotional exhaustion, and organizational factors such as supervisor and co-worker support, salary, and benefits, were related to lower turnover. The authors noted, however, that the studies reviewed varied widely in regard to methods and measures and called for more rigorous research on retention strategies and for the use of common measures. While organizational-level intervention studies are extremely rare, Glisson, Dukes, and Green (2006) found that use of the “ARC” organizational intervention in child welfare and juvenile justice settings significantly reduced staff turnover. Still, most child welfare studies are either quantitative or qualitative, with a smaller number “mixing” the two methods.

The use of EBP, or even of practices consistent with the use of evidence, is also pertinent to the work described below. For example, relative to child welfare public agency staff, child welfare private agency staff reported the availability of more adequate data and reported more use of data in their practice (Collins-Camargo, Sullivan, & Murphy, 2011). While not EBPs or treatments per se, having adequate data and utilizing those data is consistent with the use of EBP. In some cases child welfare service providers have reported that the implementation and use of an EBP can increase negative perceptions of work processes, such as increased oversight (Aarons & Palinkas, 2007). Still, few studies have examined the relationship between EBP implementation and staff turnover. Our intent here is to bring together the issues of evidence-based practice and turnover, utilizing a mixed method design, in order to inform these issues in light of the increasing impetus for EBPs to be implemented in child welfare service systems in the United States and other countries.

The Present Study

The present study focuses on staff turnover during a statewide EBP implementation trial. Two common features of EBPs are a high degree of structure or manualization of the intervention and the use of some form of monitoring to ensure that the intervention is delivered with fidelity. The implementation of an EBP and fidelity monitoring represent significant changes to organizational structure and process that could serve to reduce perceived job autonomy and subsequently lead to increased turnover. Staff retention is of particular concern for child welfare systems and organizations when considering EBP implementation because of the increased resources required for initial training, certification, and provider support needed to promote adherence to particular intervention protocols. Taken together, the present implementation study uses a mixed method approach to examine the impact of EBP implementation and fidelity monitoring on staff retention by systematically disentangling model (EBP vs. services as usual [SAU]) and monitoring (fidelity monitoring vs. no monitoring) effects in the context of an effectiveness trial of a statewide system change and EBP implementation.

Methods

Study Context

Data used in the present study were collected across a three-year period as part of a larger five-year longitudinal mixed method study examining organizational factors likely to impact the statewide implementation of an EBP throughout a statewide network of private nonprofit community-based mental health and social service provider organizations contracted with the Oklahoma Children’s Service system (OCS). OCS is a regionalized statewide contracted community-based home-visiting family preservation and reunification service system for child-welfare cases. It serves approximately 1,500 new child-welfare referred families annually. In collaboration with the OCS system, investigators at the University of Oklahoma Health Sciences Center (OUHSC) conducted a randomized effectiveness trial and implemented an EBP to reduce child neglect, SafeCare® (SC) (Lutzker, 1990). The effectiveness of the model was experimentally tested, with SC implemented by region in a randomized fashion such that providers in three of six regions provided SC while providers in the other three regions continued to provide customary case-management services as usual (SAU). Teams were then randomized to fidelity monitoring or no monitoring. The present mixed method implementation study was conducted concurrently to identify factors that impede or facilitate real-world implementation of an EBP and to examine the bi-directional impacts of implementation on organizations and staff, and of organizational context on implementation effectiveness.

Study Design

A unique aspect of this study is the 2 × 2 experimental design, in which EBP vs. SAU was crossed with the level of fidelity monitoring (monitored vs. non-monitored). In this study, there were 21 teams of home-based service providers operating in six regions covering the entire state, with approximately one quarter of the teams operating in each study condition. For quantitative analyses, we examined both the independent effects of EBP and fidelity monitoring (framed as ongoing consultation for service providers) on provider turnover as well as the interaction of EBP and monitoring conditions on provider turnover. The four experimental groups were defined as follows: SC/M, participating in SafeCare and receiving fidelity monitoring; SC/NM, participating in SafeCare with no fidelity monitoring; SAU/M, services as usual and receiving fidelity monitoring; SAU/NM, services as usual with no fidelity monitoring.

Mixed Method Design Elements

The study utilized both component and integrated features of a mixed method design and included longitudinal concurrent (i.e., qualitative and quantitative data collected concurrently) and sequential (i.e., one data collection method followed by the other) processes. The primary analytic strategy involved sequential analysis, with quantitative hypothesis testing conducted first, followed by qualitative analysis (i.e., QUAN → qual). Convergence involved concurrent utilization of qualitative data to validate or confirm conclusions reached from quantitative analyses (QUAN + QUAL). Complementarity was used to obtain depth as well as breadth of understanding of the reasons for turnover or provider retention. Expansion involved sequential examination of data to further elucidate and explain the findings of the quantitative analyses (quan → QUAL).

Quantitative Data

Home-based service providers and supervisors employed by contracted agencies to provide SC or SAU were asked to complete biannual web surveys. The surveys took approximately 45–90 minutes to complete, and response rates ranged from 90.2% to 96.8%, with an average of 94.5% over the four waves of data collection included in these analyses. The organizational participation rate was 100%. Quantitative analyses presented as part of this paper were previously published (Aarons, Sommerfeld, Hecht, Silovsky, & Chaffin, 2009).

Quantitative Participants

A total of 153 home-based service providers were included in the analyses. Of these, 85.6% were female, 63.4% were Caucasian, 19.6% African American, 12.4% American Indian, and 4.6% Hispanic. At the time of their first survey, the mean age of the home-based providers was 36.8 years (SD=10.2). The highest educational attainment for the home-based providers consisted of high school graduate (0.7%), college graduate (41.8%), some graduate level education (25.5%), and master’s degree (32.0%). Their educational backgrounds included social work (39.9%), psychology (25.5%), human relations (13.1%), child development (7.2%), marriage and family therapy (5.2%) and “other” (8.2%). Mean job tenure at the time of the first survey was 31.1 (SD=36.7) months.

Quantitative Measures

Provider Demographics

A provider survey incorporated questions regarding home-based provider demographics including age, sex, race, education level, and job tenure (Aarons, 2004). Sex was binary coded to indicate whether the home-based provider was female. For this study, race was treated dichotomously indicating if the home-based provider was Caucasian or non-Caucasian. Provider education was also collapsed into a dichotomous measure indicating whether the participant had received at least some graduate level education. Job tenure represents length of employment (measured in months) and operates as the indicator of time in the quantitative analyses.

Job Autonomy

Job autonomy is defined as the degree of perceived control that an employee has over how they perform tasks and the degree to which they operate independently. Job autonomy varies across work contexts (Dobbin & Boychuk, 1999), has a direct effect on turnover intentions (Knudsen et al., 2003), and mediates the relationship between employment status, work attitudes, and performance (Marchese & Ryan, 2001). Job autonomy was quantitatively assessed with items drawn from previous organizational studies with good psychometric characteristics (Cronbach’s alphas .75–.81) (Hackman & Oldham, 1975; Knudsen et al., 2003; Marchese & Ryan, 2001; Spreitzer, 1995; Wang & Netemeyer, 2002). The 11-item job autonomy measure demonstrated very high internal consistency in our sample (Cronbach’s alpha = .94).
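For readers unfamiliar with the internal consistency statistic reported here, the sketch below applies the standard Cronbach's alpha formula to simulated item data; it is illustrative only and does not use the study's data.

```python
# Standard Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
# The 153 x 11 item matrix below is simulated, mirroring the scale's dimensions only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(153, 1))                 # shared "autonomy" factor
items = latent + 0.6 * rng.normal(size=(153, 11))  # 11 correlated items
print(round(cronbach_alpha(items), 2))
```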

Turnover Intentions

Turnover intention is the degree to which employees are considering leaving their current job and/or are actively seeking another job. Turnover intention is related to organizational characteristics, to actual decisions to terminate employment, and to other withdrawal behaviors such as tardiness and absenteeism (Halfhill, Huff, Johnson, Ballentine, & Beyerlein, 2002). Turnover intention is a reliable predictor of actual turnover behavior (Knudsen, Ducharme, & Roman, 2007). Turnover intentions were assessed with five items derived from organizational studies and adapted for use in human service agencies (Knudsen et al., 2003; Walsh, Ashford, & Hill, 1985). Respondents were asked about their intentions to leave or stay at their present job, measured on a five-point Likert scale (Cronbach’s alpha = .91).

Staff Turnover

For all home-based providers employed during at least one data collection wave from wave one (May, 2004) through wave three (February, 2006), we determined the presence and timing of participants’ turnover events (if any) through the start of the fourth survey wave (October, 2006). For providers not eligible for participation in subsequent survey waves due to leaving their agency, employment separation dates and departure reason (involuntary versus voluntary) were collected from the agencies and/or the provider.

With a primary study goal of examining provider volitional behavior during EBP implementation, voluntary turnover was the event of interest. Employment status changes involving promotions or transitions to other positions within the same agency not involving the direct provision of SafeCare removed the participant from continued observation as a SafeCare home-based provider, but were not considered to be turnover events. Based on these definitions, 57 of the 153 home-based providers recorded a voluntary turnover event during the study period.

Quantitative Analyses

We used survival analysis to examine provider turnover by allowing the timing of the event (or lack thereof) as well as any covariate changes over time to contribute to the analyses (Allison, 1984; Box-Steffensmeier & Jones, 2004; Cleves, Gould, & Gutierrez, 2004; Kammeyer-Mueller, Wanberg, Glomb, & Ahlburg, 2005; Willett & Singer, 1993). We report results from multivariate survival analysis models testing factors associated with the “hazard” or risk of provider turnover as a function of experimental condition. We utilized discrete-time exponential proportional hazards modeling (Willett & Singer, 1993).
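As a hedged illustration of this kind of analysis, the sketch below fits a generic Cox proportional hazards model with the lifelines package to simulated provider data; it is not the authors' exact discrete-time exponential specification, and all variable names and values are stand-ins rather than study data.

```python
# Generic proportional hazards sketch (not the authors' exact model) using the
# lifelines package on simulated data; variable names and values are stand-ins.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 153
df = pd.DataFrame({
    "sc": rng.integers(0, 2, n),                          # 1 = SafeCare condition
    "monitored": rng.integers(0, 2, n),                   # 1 = fidelity monitoring
    "job_autonomy": rng.normal(3.5, 0.7, n),
    "turnover_intentions": rng.normal(2.5, 0.9, n),
    "age": rng.normal(37, 10, n),
    "tenure_months": rng.exponential(30, n).round() + 1,  # observed employment time
    "turnover": rng.integers(0, 2, n),                    # 1 = voluntary turnover, 0 = censored
})
df["sc_x_monitored"] = df["sc"] * df["monitored"]         # condition interaction

cph = CoxPHFitter()
cph.fit(df, duration_col="tenure_months", event_col="turnover")
cph.print_summary()                                       # hazard ratios and confidence intervals
```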

Qualitative Data

Semi-structured interviews and focus groups were conducted at three different intervals over a 3-year period by the first and fourth authors and an experienced doctoral level medical anthropologist. An interview guide for case managers and trainers was designed to elucidate the experience of being trained and using SC with a focus on identifying barriers and facilitators to implementation. An interview guide was designed for use with administrators to focus on their experience with the statewide trial of SC, the nature of interactions between agencies and OUHSC investigators, experience in using SC and other EBPs, impact of SC on agency and staff, and requirements for implementation and sustainment of SC at the conclusion of the trial. Interview and focus group duration was approximately one hour.

All interviews were digitally recorded, transcribed, and checked for accuracy. Using the “Coding Consensus, Co-occurrence, and Comparison” methodology outlined by Willms et al. (1992) and rooted in grounded theory (i.e., theory derived from data and then illustrated by characteristic examples of data) (Glaser & Strauss, 1967), qualitative data were analyzed in the following manner. First, all data were reviewed to develop a broad understanding of content as it related to the project’s specific aims and to identify topics of discussion and observation. During this step, short descriptive statements or “memos” were prepared to document initial impressions of topics and themes and their relationships and to define the boundaries of specific codes (i.e., the inclusion and exclusion criteria for assigning a specific code) (Miles & Huberman, 1994). Second, materials in field notes, interviews, and archival sources were coded to condense the data into analyzable units. Segments of text ranging from a phrase to several paragraphs were assigned codes based on a priori themes (i.e., from the interview guide) or emergent themes (also known as open coding) (Strauss & Corbin, 1998). Third, codes were assigned to describe connections between categories and between categories and subcategories (also known as axial coding) (Strauss & Corbin, 1998). The final list of codes, or codebook, consisted of themes, issues, accounts of behaviors, and opinions relating to individual, organizational, and system characteristics that influenced implementation of the EBPs in each case study. Fourth, based on these codes, the computer program QSR NVivo (Fraser, 2000) was used to generate a series of categories arranged in a treelike structure connecting text segments grouped into separate categories of codes or “nodes.” This furthered the process of axial or pattern coding and allowed examination of the associations between different a priori and emergent categories. Fifth, by constantly comparing these categories with each other, the categories were further condensed into broad themes using a format that placed the EBPs within the framework of individual, organizational, and system characteristics.

Qualitative Participants

Participants in Wave 1 of data collection for this study were fifteen case managers and two ongoing consultants (i.e., trainer/coaches). There were 13 women and 4 men between the ages of 22 and 60 involved in the implementation of SC and ongoing fidelity monitoring of the intervention. Wave 1 case manager participants were selected by maximum variation sampling to represent those having the most positive and those having the most negative views of SC based on results of a web-based quantitative survey asking about the perceived value and usefulness of SC. Participants in Wave 2 were thirteen executive and program directors (8 women and 5 men) between the ages of 35 and 65 representing each of the agencies participating in the statewide implementation trial. A sampling strategy was not employed since every administrator eligible to participate agreed to do so. Participants in Wave 3 were 95 case managers (92.2% participation rate) from 21 teams (100% of teams), who participated in focus groups addressing issues related to their experiences with SC and ongoing consultation.

Institutional review board approval was obtained for this mixed method study from the appropriate academic institutions. Because participants were employees of community-based organizations (CBOs) contracted by the OCS to deliver home visitation services, we took particular care to protect confidentiality at the organizational, team, and individual levels. Confidentiality was assured through a number of means (e.g., separation of individual responses from identifying information, home-visitor interviews conducted apart from supervisor interviews, aggregating data so that teams or organizations could not be identified). As such, participation in the study did not impact work performance evaluations or individual, team, or organizational standing with the service system. Participants received an incentive for their participation in the study.

Mixed Method Analysis

We conducted quantitative and qualitative analyses separately and then combined data to illuminate issues related to turnover. We first present the quantitative results and then the combined results. Analyses for the current study were conducted within the three components of convergence, complementarity, and expansion (Creswell, 2008).

First, for convergence we examined whether the quantitative and qualitative data provided the same answers to the same question, with the quantitative analyses preceding the qualitative (i.e., QUAN → qual). We conducted this analysis via five questions in the current study: (1) Does SC implementation increase risk of turnover?, (2) Does fidelity monitoring increase risk of turnover?, (3) Is SC implementation coupled with fidelity monitoring associated with greatest risk of turnover?, (4) Does lower perceived job autonomy increase risk of turnover?, (5) Does higher turnover intention increase risk of turnover?

Second, we examined the complementarity of our data sources for understanding staff turnover in the context of EBP implementation. To do so, we asked whether different methods provided related answers to related questions via four sets of questions, with the analyses giving equal emphasis to quantitative and qualitative analyses (QUAN + QUAL): (1) Quantitative: Does SC implementation lead to increased turnover?; Qualitative: Does a low rate of turnover signify satisfaction with SC? (2) Quantitative: Does monitoring lead to increased turnover?; Qualitative: Does a low rate of turnover signify satisfaction with monitoring? (3) Quantitative: Does lower perceived job autonomy lead to increased turnover?; Qualitative: Did SC increase or decrease autonomy? (4) Quantitative: Do higher turnover intentions lead to increased turnover?; Qualitative: Did SC increase or decrease turnover intention?

Finally, we incorporated expansion, examining whether one method provides answers to questions raised by use of the other method. This was accomplished with one set of qualitative questions generating data to provide additional depth and insights into the variability and meaning of the quantitative data (i.e., quan → QUAL): Quantitative: Does SC and/or monitoring lead to increased turnover?; Qualitative: Why are they more likely to stay?

Results

The probability of a case manager staying with an agency for more than a year (12 months) was 86.2% in the SC/M group, 61.4% in the SC/NM group, 76.2% in the SAU/M group, and 75.7% in the SAU/NM group. Controlling for a range of factors, we found a 2.6 times greater likelihood of staff turnover in the combined other three experimental conditions relative to the SC/M condition (p < .05). Compared to the SC/M condition, the SC/NM and SAU/M conditions demonstrated a significantly higher risk of turnover (HR = 2.966, p < .01 and HR = 2.504, p < .05, respectively), and the SAU/NM group showed a marginally significant increase in risk (HR = 2.246, p = .07). We also found that case managers reporting greater perceived job autonomy had a reduced risk of turnover (p < .05). Higher turnover intentions were associated with a greater risk of turnover (p < .05), and older providers had a reduced likelihood of leaving an agency (p < .01) (see Aarons et al., 2009 for details on the quantitative approach, analyses, and results).

The first set of mixed methods analyses focused on convergence of data for answering several research questions noted above. As shown in Table 1, several analyses demonstrated how the merging of the qualitative and quantitative data provided convergence of findings for important research questions. Convergent findings were observed with respect to whether the risk of turnover was increased by EBP implementation, fidelity monitoring, or the combination of the two. As noted, the high degree of structure and manualization that accompanies EBPs may threaten stability and service provision, in particular within public sector service agencies, if accompanied by increased staff turnover. As such, we first asked whether EBP implementation increased the risk of turnover. Results from our quantitative study illustrated that home-based service providers in the EBP condition and with monitoring (SC support) had a greater likelihood of staying with their agencies. Qualitative results were consistent with this finding. Further, none of the service providers who were interviewed reported leaving or intending to leave their agency as a result of their inclusion in EBP implementation. Similarly, while the quantitative data indicated that home-based providers in the SC/M condition and the SAU/M condition had a greater likelihood of staying with their agencies for a longer period of time, none of the service providers in these conditions reported dissatisfaction with being monitored. Quantitative and qualitative data were also consistent with respect to the association between low autonomy/high supervisor micromanagement and turnover intention, and the association between turnover intention and actual turnover.

Table 1.

Mixed method Results Demonstrating Convergence of Findings

Question 1: Does SC implementation increase risk of turnover?
Quantitative answer: No. Home-based providers in the SC/M condition had a greater likelihood of staying with their agencies for a longer period of time.
Qualitative answers: No. Many of the providers reported satisfaction with the structure provided by the EBP. None of the providers interviewed reported leaving primarily because of their involvement in the EBP effectiveness trial.

Question 2: Does fidelity monitoring increase risk of turnover?
Quantitative answer: No. Home-based providers in the SC/M and SAU/M conditions had a greater likelihood of staying with their agencies for a longer period of time.
Qualitative answer: No. Many of the providers reported satisfaction with the support they received from monitors.

Question 3: Is SC implementation plus fidelity monitoring associated with the greatest risk of turnover?
Quantitative answer: No. Home-based providers in the SC/M condition had a greater likelihood of staying with their agencies for a longer period of time.
Qualitative answer: No. Many of the providers reported satisfaction with the support they received from monitors/consultants.

Question 4: Does lower perceived job autonomy increase risk of turnover?
Quantitative answer: Yes. Lower perceived job autonomy was associated with turnover.
Qualitative answer: Yes. Some providers reported intentions to leave due to supervisor micromanagement, but this was unrelated to the EBP.

Question 5: Does higher turnover intention increase risk of turnover?
Quantitative answer: Yes. Higher turnover intention was associated with turnover.
Qualitative answer: Yes. Some providers who reported intentions to leave during focus groups resigned from their positions within the following year because they felt unsupported by their supervisor.

We also used the qualitative and quantitative findings to demonstrate complementarity. Here, we examined related but not identical questions. As shown in Table 2, once we had established that home-based service providers in the SC with monitoring condition had a greater likelihood of staying with their agencies for a longer period of time, we asked whether the low rate of turnover signified satisfaction with SC. Results from the qualitative data collection clearly indicate that service providers were largely satisfied with SC. As noted above, many service providers reported appreciating the value of the structure provided by SC. In addition, many service providers felt that SC was a benefit to their families, as illustrated by one provider: “And, I think it, specifically, for kids under … six, seven, I think it really does benefit the families.”

Table 2.

Mixed method Results Demonstrating Complementarity of Findings

Question pair 1
Quantitative question: Does SC implementation lead to increased turnover?
Quantitative answer: Home-based providers in the SC/M condition had a greater likelihood of staying with their agencies for a longer period of time.
Qualitative question: Does a low rate of turnover signify satisfaction with SC?
Qualitative answers:
Yes: Some providers loved the structure provided by the EBP.
Yes: Many providers felt that there was some value to the EBP, and some felt it benefited their families.
No: Some providers disliked having to implement some of the EBP modules.
No: Many providers felt that the EBP was not appropriate for all families.
No: Some providers felt the EBP detracted from dealing with more immediate issues (e.g., crises).

Question pair 2
Quantitative question: Does monitoring lead to increased turnover?
Quantitative answer: Home-based providers in the SC/M and SAU/M conditions had a greater likelihood of staying with their agencies for a longer period of time.
Qualitative question: Does a low rate of turnover signify satisfaction with monitoring?
Qualitative answers:
Yes: Some providers loved the supervision that came with monitoring.
No: Some providers resented being monitored; according to administrator interviews, some of those providers subsequently left the agency.
No: Some providers disliked their ongoing consultants.

Question pair 3
Quantitative question: Does lower perceived job autonomy lead to increased turnover?
Quantitative answer: Yes. Lower perceived autonomy predicted greater turnover.
Qualitative question: Did SC increase or decrease autonomy?
Qualitative answers:
Decrease: Some providers reported that use of the EBP reduced their ability to respond to more immediate demands such as substance abuse or unemployment.
Increase: Most providers reported that the EBP gave them more structure to do what they were already doing, making them feel more competent at their jobs (thus increasing perceived autonomy).

Question pair 4
Quantitative question: Do higher turnover intentions lead to increased turnover?
Quantitative answer: Yes. Higher turnover intention predicted greater turnover.
Qualitative question: Did SC increase or decrease turnover intention?
Qualitative answers:
No: Most newer providers came in with the EBP as part of the work milieu and the service model, so it did not impact turnover intentions.
Yes: Some experienced staff felt that they already had the knowledge and tools to provide effective services.

However, the method of complementarity illustrates that satisfaction with SC was not universal. Service providers reported some discontent with particular components of SC, and others reported that it was not appropriate for all families: “The degree of the information I think is dependent on each family. But the gentleman I saw the other day in the health module, we went through that. The first aid kit, we went through the manual and talked about the stuff that was in it. The role-playing wasn’t appropriate for him.”

With regard to the monitoring aspect of SC, the quantitative findings showed that service providers with monitoring – in all service conditions – stayed with their agencies longer. This is reflected in the qualitative interviews as well, with many home-based service providers appreciating the supervision and consultation that accompanied the monitoring. Again, however, this was not universal – some service providers disliked their consultants and/or resented the monitoring: “I don’t understand how having someone come & follow me twice a month – I don’t get that … I’m uncomfortable” and “It just becomes another stressor.”

Finally, complementarity of findings was also illustrated with regard to turnover intentions and actual turnover. In the quantitative study, turnover intention was significantly related to actual turnover. Qualitative results showed that, in a select number of cases, service providers did leave their organizations because they did not value fidelity monitoring.

In our final set of mixed methods analyses, we incorporated the technique of expansion via two questions. Continuing with the quantitative findings that service providers in the SC with monitoring condition had the greatest longevity at their agencies, we asked why they were more likely to stay. In addition to previous qualitative findings reflecting the appealing nature of the structure of SC and its benefit to families, case managers reiterated the importance of the consultation aspect of SC, seeing it as “free supervision.” The EBP also provided informal support to the service providers inasmuch as the case managers identified as a group and relied on one another even outside of the EBP context, as noted by one provider: “I mean we would all run into different problems at different times. And it may not even be [a SC] problem. It may be a different problem. But we all try to give each other insight.” Using expansion also revealed differences in satisfaction across the study conditions. For example, non-SC service providers reported declines in morale, a lack of team identity, and other factors unrelated to the EBP.

Discussion

This study demonstrated the use of mixed methods to examine the issue of staff turnover during a large-scale EBP implementation across a statewide child welfare system. The mixed methods approach allowed for examination of the impact of implementation and fidelity monitoring on turnover and also for capturing provider and supervisor views on the more subtle issues related to staff turnover and retention during this implementation. Both quantitative and qualitative data were necessary in order to illuminate both the process and the outcomes of the implementation’s effects on staff turnover. Our data demonstrated three structural approaches to data collection and analysis (i.e., QUAN → qual; QUAN + QUAL; quan → QUAL) and three functions (i.e., convergence, complementarity, expansion) common in mixed methods research.

We found that EBP implementation along with supportive consultation to support fidelity was associated with significantly greater staff retention relative to EBP implementation without consultation or either of the services-as-usual conditions (i.e., with or without consultation). However, integration of qualitative data illuminated these findings and added information regarding the variability of provider reactions and responses to the implementation. In contrast, other studies have found that turnover can negatively impact evidence-based practice implementation, resulting in poorer practice fidelity (Woltmann et al., 2008). This is an important area for future research and for the development of strategies to improve staff retention and implementation effectiveness (Klein, Conn, & Sorra, 2001; Proctor et al., 2011).

For convergence, we found that service providers were generally satisfied with the EBP structure and process. In addition, rather than perceiving the ongoing consultation as reducing job autonomy, providers were satisfied with the additional support they received from the consultants. Thus, as service systems and organizations consider EBP implementation, it will be important to consider the EBP’s fit with expectations regarding what is acceptable and practical from system, organizational, and provider perspectives (Mendel et al., 2008; Raghavan, Bright, & Shadoin, 2008).

Convergence is particularly important within the context of implementation research because sample sizes (especially at the level of organizational units) may preclude hypothesis tests with sufficient power to draw definitive conclusions. For example, there may be a large number of service providers (e.g., n = 120) who are nested within teams (e.g., n = 21); however, if the team is the unit of analysis, the sample size is restricted. The issue of restricted sample size is an emerging concern in implementation science and implementation research study design. Frequently in implementation research, rather than focusing on the client or patient level, the interest is in implementation of a practice or intervention at the unit (e.g., team), organization (e.g., hospital), or service system (e.g., child welfare) level. As such, mixed method designs can increase the validity of studies by triangulating findings across quantitative and qualitative methods.
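To make the clustering point concrete, the short calculation below applies the standard design-effect formula, DEFF = 1 + (m − 1) × ICC, to the hypothetical numbers in this example; the intraclass correlation value is an assumption for illustration, not an estimate from the study.

```python
# Standard design-effect arithmetic (not from the paper): clustering of providers
# within teams shrinks the effective sample size. The ICC value is assumed.
n_providers = 120
n_teams = 21
icc = 0.10                                   # assumed intraclass correlation

avg_cluster_size = n_providers / n_teams     # ~5.7 providers per team
design_effect = 1 + (avg_cluster_size - 1) * icc
effective_n = n_providers / design_effect

print(f"design effect: {design_effect:.2f}, effective n: {effective_n:.0f} of {n_providers}")
```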

In regard to complementarity of quantitative and qualitative data, we found that while EBP implementation did not lead to increased turnover, the low rate of turnover did not always signify satisfaction with the EBP. Some service providers reported liking the structure of the EBP and felt that it benefited their clients, but some also disliked implementing certain modules, felt that the EBP was not appropriate for all of their families, or felt that being required to use the EBP interfered with crisis management. Similarly, while the quantitative data indicated that fidelity monitoring did not lead to increased turnover, the qualitative data suggested that the additional consultation was seen as positive by some providers but as invasive by others. There were also some personality conflicts in that some home visitors did not like their assigned consultant. We also found that implementation of the EBP served to both increase and decrease a sense of autonomy depending on the home visitor and client circumstances: the decrease was associated with limited discretion in dealing with emergent family issues, whereas the increase was associated with greater structure that supported self-efficacy and freed providers to address issues more efficiently. Although the quantitative data suggested that lower perceived autonomy was associated with a greater likelihood of turnover, the qualitative data suggested that some home visitors found that the EBP reduced their autonomy by limiting their ability to respond to more immediate issues in the home. Nevertheless, most home visitors believed the EBP gave them increased autonomy by providing more structure, thus making them feel more competent in their positions. Finally, there was evidence for an increase in turnover intention among more experienced workers, who reported that they already had the experience to carry out effective services, but decreased turnover intention among providers early in their careers or those who began their jobs with the EBP already in place and thus experienced no contrast in service models.

For expansion, we attempted to answer the question of why providers in the EBP plus consultation condition were more likely to stay. Qualitative data expanded on the quantitative data, indicating that, among other reasons, providers were more likely to stay because they liked the structure of the EBP and the support received from consultants. They also related that the EBP facilitated their sense of a distinct professional identity and helped them engage more effectively with their peers. In addition, we learned that declines in morale were due to factors not related to the EBP or consultation. Finally, we learned that the intention to leave their current job was more a function of general dissatisfaction with overall supervision from the organizational supervisor than of consultation related to the EBP. Thus, the qualitative data considerably enhanced our understanding of both reasons for staying in the organization and reasons for leaving, issues not captured in the quantitative data alone.

There are some additional considerations in conducting mixed methods research that should be noted. First, mixed method studies require consideration and integration of design elements not found in either approach alone. For example, for convergence it is important to have congruence in measures, while for complementarity or expansion it is important to consider the potential of each approach to add unique information. Thus, it is important to consider both the degree to which and the way in which quantitative and qualitative measures (e.g., surveys, interview guides) either overlap or diverge.

Mixed methods research may also require more involvement from participants than either alone. For example, engaging participants in multiple data collection activities can be challenging and time intensive. Quantitative and qualitative methods require different ways of thinking about research questions for both researchers and participants. For example, in contrast to completing quantitative surveys, providing participants the opportunity to “tell their stories” can be an aspect of research that is welcomed.

When conducting mixed method studies, researchers should be sensitive both to the burden of research and to what can be offered to research participants at the individual and organizational levels. For example, it may be possible to offer feedback at the team, organization, or system level that may be welcomed by participants. It is important to note, however, that feedback must be given in a way that does not jeopardize the validity of the research study and that care must be taken to protect the confidentiality of individuals, teams, organizations, and service systems.

In regard to our substantive findings, it is clear that EBP implementation has the potential to improve workforce outcomes as well as clinical outcomes. For policy makers and organization leaders this represents a potential offset of the initial costs of EBP implementation. Evidence-based practice implementation generally entails a number of costs, including training, materials, fidelity monitoring, and clinicians’ time away from clinical duties. Staff turnover is also costly, and if turnover can be reduced, there may be important cost savings for service systems and employers. Our findings suggest that policy makers and organization leaders should consider these benefits, in addition to those of EBP utilization, in making decisions about EBP implementation. In particular, decision makers should consider the fit and potential benefits of a given EBP across the outer and inner contexts of the service system for provider organizations, clinical and case-management service providers, and clients (Aarons, Hurlburt, & Horwitz, 2011).

Limitations

Some limitations of the present study should be noted. First, while our quantitative survey was comprehensive in regard to organizational context and provider perspectives, some issues identified in the qualitative data were not captured quantitatively and this limited our ability to identify more convergent issues. Second, our study began after initial SC implementation and some providers may have left prior to our study. Third, interviews and focus groups covered a range of issues related to factors impacting implementation of the EBP in this study and thus some of the qualitative data pertaining to turnover were explored as a function of probes and interaction between interviewers and respondents. More targeted interview questions might have yielded additional information.

Conclusions

The use of mixed methods research designs is critical for advancing implementation research toward a more comprehensive understanding of the issues and factors that impact, or result from, the growing dissemination of EBPs into child-welfare systems. We illustrated the utility of mixed methods for examining one implementation question: What is the impact of implementation on staff work perceptions and provider turnover? We demonstrated that mixed methods could illuminate different aspects of the implementation process and outcomes through the functions and sequencing of methods (Palinkas, Horwitz et al., 2011). In addition, we showed that EBP implementation can lead to system and organization benefits in addition to service and clinical outcome benefits.

We also demonstrated that using mixed methods can add to and deepen our understanding of the meaning of variability not captured in quantitative confidence intervals. For example, while our quantitative conclusion is that the EBP plus monitoring leads to better staff retention, qualitative data revealed some differences of opinion among service providers. While statistical significance indicates an overall probability for a finding, there is something to be learned from the variability that makes up standard deviations and confidence intervals. For implementation research, mixed methods can guide hypothesis generation and testing, contextualize results, and deepen our understanding of what it takes to successfully implement EBPs. Such knowledge is critical in planning for implementation of EBPs in public sector service systems, where there is much variability and there are many challenges in system and organizational change to improve child welfare practice and outcomes for children and families.

Table 3.

Mixed method Results Demonstrating Expansion of Findings

Question pair 1
Quantitative question: Does SC implementation and/or monitoring lead to increased turnover?
Quantitative answer: Home-based providers in the SC/M condition had a greater likelihood of staying with their agencies for a longer period of time.
Qualitative question: Why are they more likely to stay?
Qualitative answers:
Providers like the structure that SC provides to services.
Providers like the support they receive from monitors; they view it as “free” supervision.
EBP providers supported one another in application of the EBP and developed a distinct identity.
SAU providers reported declines in morale due to factors unrelated to the EBP (e.g., conflicts with supervisor, change in leadership, few opportunities for promotion or pay raise, lack of distinct team identity).

Question pair 2
Quantitative question: Does lower perceived job autonomy increase risk of turnover?
Quantitative answer: Yes. Lower perceived job autonomy was associated with turnover.
Qualitative question: Is job autonomy threatened by SC or other work conditions?
Qualitative answer: Yes. Some providers reported intentions to leave due to supervisor micromanagement; however, this may be more related to general work activities than to the EBP.

Acknowledgments

This study was supported by National Institute of Mental Health Grants R01MH072961, R21MH082731, and P30MH074678 and Centers for Disease Control grant R01CE001556. Quantitative analyses and results were published previously and some portions of the research reported here were previously presented and/or published.

Contributor Information

Gregory A. Aarons, University of California San Diego, Child & Adolescent Services Research Center at Rady Children’s Hospital, San Diego.

Danielle L. Fettes, University of California San Diego, Child & Adolescent Services Research Center at Rady Children’s Hospital, San Diego

David H. Sommerfeld, University of California San Diego, Child & Adolescent Services Research Center at Rady Children’s Hospital, San Diego

Lawrence Palinkas, University of Southern California, Child & Adolescent Services Research Center at Rady Children’s Hospital, San Diego

References

1. Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research. 2004;6(2):61–74. doi: 10.1023/b:mhsr.0000024351.12294.65.
2. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in child welfare. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):4–23. doi: 10.1007/s10488-010-0327-7.
3. Aarons GA, Palinkas LA. Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health and Mental Health Services Research. 2007;34:411–419. doi: 10.1007/s10488-007-0121-3.
4. Aarons GA, Sawitzky AC. Organizational climate partially mediates the effect of culture on work attitudes and staff turnover in mental health services. Administration and Policy in Mental Health and Mental Health Services Research. 2006;33(3):289–301. doi: 10.1007/s10488-006-0039-1.
5. Aarons GA, Sommerfeld DH, Hecht DB, Silovsky JF, Chaffin MJ. The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: Evidence for a protective effect. Journal of Consulting and Clinical Psychology. 2009;77(2):270–280. doi: 10.1037/a0013223.
6. Allison PD. Event history analysis: Regression for longitudinal event data. Beverly Hills, CA: Sage; 1984.
7. Argote L, Insko CA, Yovetich N, Romero AA. Group learning curves: The effects of turnover and task complexity on group performance. Journal of Applied Social Psychology. 1995;25(6):512–529.
8. Babbie E. The practice of social research. 12. Belmont, CA: Wadsworth Publishing Company; 2010.
9. Box-Steffensmeier JM, Jones BS. Event history modeling: A guide for social scientists. New York: Cambridge University Press; 2004.
10. Cleves MA, Gould WW, Gutierrez RG. An introduction to survival analysis using Stata. College Station, TX: Stata Press; 2004.
11. Collins-Camargo C, Sullivan D, Murphy A. Use of data to assess performance and promote outcome achievement by public and private child welfare agency staff. Children and Youth Services Review. 2011;33:330–339.
12. Creswell JW. Research design: Qualitative, quantitative, and mixed methods approaches. 3. Thousand Oaks, CA: Sage Publications; 2008.
13. Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 2. Thousand Oaks, CA: Sage; 2011.
14. Demakis JG, McQueen L, Kizer KW, Feussner JR. Quality Enhancement Research Initiative (QUERI): A collaboration between research and clinical practice. Medical Care. 2000;38:17–25.
15. DePanfilis D, Zlotnik JL. Retention of front-line staff in child welfare: A systematic review of research. Children and Youth Services Review. 2008;30(9):995–1008. doi: 10.1016/j.childyouth.2007.12.017.
16. Dobbin F, Boychuk T. National employment systems and job autonomy: Why job autonomy is high in the Nordic countries and low in the United States, Canada and Australia. Organization Studies. 1999;20(2):257–291.
17. Fraser D. QSR NVivo reference guide. Melbourne: QSR International; 2000.
18. Gallon SL, Gabriel RM, Knudsen JRW. The toughest job you’ll ever love: A Pacific Northwest treatment workforce survey. Journal of Substance Abuse Treatment. 2003;24(3):183–196. doi: 10.1016/s0740-5472(03)00032-1.
19. Glaser BG, Strauss AL. The discovery of grounded theory: Strategies for qualitative research. New York: Aldine de Gruyter; 1967.
20. Glisson C, Dukes D, Green P. The effects of the ARC organizational intervention on caseworker turnover, climate, and culture in children’s service systems. Child Abuse and Neglect. 2006;30(8):849–854. doi: 10.1016/j.chiabu.2005.12.010.
21. Glisson C, James LR. The cross-level effects of culture and climate in human service teams. Journal of Organizational Behavior. 2002;23:767–794.
22. Glisson C, Schoenwald S, Kelleher K, Landsverk J, Hoagwood K, Mayberg S. Therapist turnover and new program sustainability in mental health clinics as a function of organizational culture, climate and service structure. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35:124–133. doi: 10.1007/s10488-007-0152-9.
23. Glisson C, Schoenwald SK, Hemmelgarn A, Green P, Dukes D, Armstrong KS. Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. Journal of Consulting and Clinical Psychology. 2010;78(4):537–550. doi: 10.1037/a0019160.
24. Glisson CA. The organizational context of children’s mental health services. Clinical Child and Family Psychology Review. 2002;5(4):233–253. doi: 10.1023/a:1020972906177.
25. Gray A, Phillips VL, Normand C. The costs of turnover: Evidence from the British National Health Service. Health Policy. 1996;38(2):117–128. doi: 10.1016/0168-8510(96)00854-8.
26. Greene JC. Toward a methodology of mixed methods social inquiry. Research in the Schools. 2006;13(1):93–98.
27. Greene JC, Caracelli VJ, Graham WF. Toward a conceptual framework for mixed-method evaluation design. Educational Evaluation and Policy Analysis. 1989;11(3):255–274.
28. Greenhalgh T, Stramer K, Bratan T, Byrne E, Russell J, Hinder S. Adoption and non-adoption of a shared electronic summary record in England: A mixed-method case study. British Medical Journal. 2010;340:c3111. doi: 10.1136/bmj.c3111.
29. Hackman JR, Oldham GR. Development of the Job Diagnostic Survey. Journal of Applied Psychology. 1975;60(2):159–170.
30. Halfhill TR, Huff JW, Johnson DA, Ballentine RD, Beyerlein MM. Interventions that work (and some that don’t): An executive summary of the organizational change literature. In: Lowman RL, editor. The California School of Organizational Studies handbook of organizational consulting psychology: A comprehensive guide to theory, skills, and techniques. San Francisco, CA: Jossey-Bass; 2002. pp. 619–644.
31. Howard B, Gould KE. Strategic planning for employee happiness: A business goal for human service organizations. American Journal on Mental Retardation. 2000;105(5):377–386. doi: 10.1352/0895-8017(2000)105<0377:SPFEHA>2.0.CO;2.
32. Jayaratne S, Chess WA. Job satisfaction, burnout, and turnover: A national study. Social Work. 1984;29(5):448–453. doi: 10.1093/sw/29.5.448.
33. Johnson RB, Onwuegbuzie AJ, Turner LA. Toward a definition of mixed methods research. Journal of Mixed Methods Research. 2007;1(2):112–133.
34. Kammeyer-Mueller JD, Wanberg CR, Glomb TM, Ahlburg D. The role of temporal shifts in turnover processes: It’s about time. Journal of Applied Psychology. 2005;90(4):644–658. doi: 10.1037/0021-9010.90.4.644.
35. Klein KJ, Conn AB, Sorra JS. Implementing computerized technology: An organizational analysis. Journal of Applied Psychology. 2001;86(5):811–824. doi: 10.1037/0021-9010.86.5.811.
36. Knudsen HK, Ducharme LJ, Roman PM. Research participation and turnover intention: An exploratory analysis of substance abuse counselors. Journal of Substance Abuse Treatment. 2007;33(2):211–217. doi: 10.1016/j.jsat.2006.12.013.
37. Knudsen HK, Johnson JA, Roman PM. Retaining counseling staff at substance abuse treatment centers: Effects of management practices. Journal of Substance Abuse Treatment. 2003;24(2):129–135. doi: 10.1016/s0740-5472(02)00357-4.
38. Landsman M. Supporting child welfare supervisors to improve worker retention. Child Welfare. 2007;86(2):105–124.
39. Lutzker JR. Behavioral treatment of child neglect. Behavior Modification. 1990;14(3):301–315. doi: 10.1177/01454455900143005.
40. Marchese MC, Ryan JP. Capitalizing on the benefits of utilizing part-time employees through job autonomy. Journal of Business and Psychology. 2001;15(4):549–560.
41. Mendel P, Meredith L, Schoenbaum M, Sherbourne C, Wells K. Interventions in organizational and community context: A framework for building evidence on dissemination and implementation in health services research. Administration and Policy in Mental Health and Mental Health Services Research. 2008;35(1–2):21–37. doi: 10.1007/s10488-007-0144-9.
42. Miles MB, Huberman AM. Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage Publications; 1994.
43. Morse JM. Approaches to qualitative-quantitative methodological triangulation. Nursing Research. 1991;40(2):120–123.
44. Mowday RT, Porter LW, Steers RM. Employee-organization linkages: The psychology of commitment, absenteeism and turnover. New York: Academic Press; 1982.
45. Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):44–53. doi: 10.1007/s10488-010-0314-z.
46. Palinkas LA, Horwitz SM, Chamberlain P, Hurlburt MS, Landsverk J. Mixed-methods designs in mental health services research: A review. Psychiatric Services. 2011;62:255–263. doi: 10.1176/ps.62.3.pss6203_0255.
47. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons GA, Bunger A. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research questions. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(2):65–76. doi: 10.1007/s10488-010-0319-7.
48. Raghavan R, Bright CL, Shadoin AL. Toward a policy ecology of implementation of evidence-based practices in public mental health settings. Implementation Science. 2008;3:26. doi: 10.1186/1748-5908-3-26.
49. Robins CS, Ware NC, dosReis S, Willging CE, Chung JY, Lewis-Fernández R. Dialogues on mixed-methods and mental health services research: Anticipating challenges, building solutions. Psychiatric Services. 2008;59(7):727–731. doi: 10.1176/appi.ps.59.7.727.
50. Soh KL, Davidson PM, Gavin L, DiGiacomo M, Rolley JX, Soh KG. Factors to drive clinical practice improvement in a Malaysian intensive care unit: Assessment of organisational readiness using a mixed method approach. International Journal of Multiple Research Approaches. 2011;5(1):104–121.
51. Spreitzer GM. Psychological empowerment in the workplace: Dimensions, measurement, and validation. Academy of Management Journal. 1995;38(5):1442–1465.
52. Stetler CB, Legro MW, Rycroft-Malone J, Bowman C, Curran G, Guihan M. Role of “external facilitation” in implementation of research findings: A qualitative evaluation of facilitation experiences in the Veterans Health Administration. Implementation Science. 2006;1:23. doi: 10.1186/1748-5908-1-23.
53. Strauss A, Corbin J. Basics of qualitative research: Techniques and procedures for developing grounded theory. 2. Thousand Oaks, CA: Sage Publications; 1998.
54. Tashakkori A, Teddlie C. Issues and dilemmas in teaching research methods courses in social and behavioural sciences: US perspective. International Journal of Social Research Methodology: Theory & Practice. 2003;6(1):61–77.
55. Teddlie C, Tashakkori A. Major issues and controversies in the use of mixed methods in the social and behavioral sciences. In: Handbook of mixed methods in social and behavioral research. Thousand Oaks, CA: Sage Publications; 2003. pp. 3–50.
56. Teddlie C, Tashakkori A, Johnson B. Emergent techniques in the gathering and analysis of mixed methods data. In: Hesse-Biber SN, Leavy P, editors. Handbook of emergent methods. New York, NY: Guilford Press; 2008. pp. 389–413.
57. Teddlie C, Yu F. Mixed methods sampling: A typology with examples. Journal of Mixed Methods Research. 2007;1(1):77–100.
58. Waitzkin H, Schillaci M, Willging CE. Multimethod evaluation of health policy change: An application to Medicaid managed care in a rural state. Health Services Research. 2008;43(4):1325–1347. doi: 10.1111/j.1475-6773.2008.00842.x.
59. Walsh JP, Ashford SJ, Hill TE. Feedback obstruction: The influence of the information environment on employee turnover intentions. Human Relations. 1985;38(1):23–46.
60. Wang G, Netemeyer RG. The effects of job autonomy, customer demandingness, and trait competitiveness on salesperson learning, self-efficacy, and performance. Journal of the Academy of Marketing Science. 2002;30(3):217–228.
61. Willett JB, Singer JD. Investigating onset, cessation, relapse, and recovery: Why you should, and how you can, use discrete-time survival analysis to examine event occurrence. Journal of Consulting and Clinical Psychology. 1993;61(6):952–965. doi: 10.1037//0022-006x.61.6.952.
62. Williams SE, Nichols QI, Wilson T. A recent look at the factors influencing workforce retention in public child welfare. Children and Youth Services Review. 2011;33(1):157–160.
63. Willms DG, Best AJ, Taylor DW, Gilbert JR, Wilson DMC, Lindsay EA. A systematic approach for using qualitative methods in primary prevention research. Medical Anthropology Quarterly. 1992;4(4):391–409.
64. Woltmann EM, Whitley R, McHugo GJ, Brunette M, Torrey WC, Coots L. The role of staff turnover in the implementation of evidence-based practices in mental health care. Psychiatric Services. 2008;59(7):732–737. doi: 10.1176/appi.ps.59.7.732.
