Abstract
Although often discussed, empirical research on the role of leadership in the management and delivery of health services remains limited. The Implementation Leadership Scale (ILS) assesses the degree to which leaders are knowledgeable, proactive, perseverant, and supportive during evidence-based practice (EBP) implementation. The purpose of this study was to examine the psychometric properties of the ILS for leaders’ self-ratings using a sample of mental health clinic supervisors (N=119). Supervisors (i.e., leaders) completed surveys that included self-ratings of their implementation leadership. Confirmatory factor analysis, reliability, and validity of the ILS were evaluated. The ILS factor structure was supported in the sample of supervisors, and results demonstrated internal consistency reliability as well as convergent and discriminant validity. Cronbach’s alphas ranged from .92 to .96 for the ILS subscales, and alpha was .95 for the ILS overall scale. Replication of the factor structure and reliability of the ILS in a sample of supervisors supports its applicability with employees across organizational levels.
Keywords: Implementation, leadership, supervisors, evidence-based practice, measurement
The gap between the development of evidence-based practices (EBPs) and their subsequent effective delivery in allied healthcare settings is increasingly recognized as an important problem for implementation research (Aarons, Hurlburt, & Horwitz, 2011; Proctor et al., 2009). Effective implementation and sustainment are critical if EBPs are to translate into the intended benefits for patients. Although research has identified individual provider factors related to EBP implementation success (Aarons, 2004), numerous organizational factors are likely to have an even greater impact on the implementation of EBPs (e.g., Beidas et al., 2015). Such organizational factors include organizational culture, climate, and leadership (Aarons & Sommerfeld, 2012). Of these, leadership has been repeatedly identified as an essential component of the organizational context that influences organizational change, including the implementation of new innovations (Aarons, Ehrhart, Farahnak, & Sklar, 2014; Bass & Avolio, 1990).
Research on leadership and implementation is nascent, with the focus primarily on general leadership constructs (e.g., transformational leadership; Aarons & Sommerfeld, 2012; Michaelis, Stegmaier, & Sonntag, 2010). However, research in other contexts has considered leadership focused on the achievement of a specific strategic outcome. One such example is the customer service literature, where strategically focused customer service leadership has been shown to create a strong customer service climate, which in turn is associated with higher customer satisfaction (Schneider et al., 2005). Furthermore, a recent meta-analysis (Hong, Liao, Hu, & Jiang, 2013) demonstrated that such “service-oriented leadership” had stronger relationships with service climate than measures of general leadership. A similar strategic leadership approach can be applied to the effective implementation of EBPs in the form of implementation leadership (Aarons, Ehrhart, & Farahnak, 2014). The complexity of implementing EBPs can be highly challenging for leaders and requires skill sets that differ from, and complement, those needed for leading clinicians in the delivery of care as usual. Leaders may face specific implementation challenges such as being knowledgeable about and communicating the benefits of the new practice, allocating resources and supporting staff in EBP implementation, and being proactive and perseverant throughout the implementation process.
Answering the call for the identification of implementation constructs and development of brief and pragmatic implementation measures (Martinez, Lewis, & Weiner, 2014; Proctor et al., 2009), Aarons and colleagues (Aarons et al., 2014) developed the Implementation Leadership Scale (ILS) to assess specific leader behaviors that actively support effective implementation through the promotion of a strategic climate for implementing EBPs. The ILS was initially tested in a sample of mental health clinicians working in 93 different outpatient mental health programs in Southern California, who rated their primary leader. Factor analyses provided support for a 12-item scale with four subscales: 1) Proactive leadership: the degree to which the leader anticipates and addresses implementation challenges; 2) Knowledgeable leadership: the degree to which a leader has a deep understanding of EBP and implementation issues; 3) Supportive leadership: the degree of the leader’s support of followers’ adoption and use of EBP; and 4) Perseverant leadership: the degree to which the leader is consistent, unwavering, and responsive to EBP implementation. Confirmatory factor analyses supported the second-order factor structure in which the subscales served as indicators for an overall implementation leadership latent construct. Convergent and discriminant validity of the ILS was also supported in the clinician sample.
The purpose of the present study was to examine the ILS factor structure and psychometrics for first-level leaders’ (i.e., those who supervise direct service providers) self-ratings of implementation leadership. First-level supervisors are particularly influential in supporting new innovations because these leaders are on the frontline working directly with EBP providers as they integrate the EBP into their daily work with clients (Priestland & Hanig, 2005). Because the original ILS scale development and validation was conducted with mental health clinician data, it is important to determine whether the instrument’s psychometric characteristics hold with first-level supervisor self-reports. Such information is critical for comparing ratings across sources, whether for research or applied purposes. We hypothesized that the factor structure, reliability, and validity of the ILS would be supported with first-level supervisors’ (leaders’) self-reports.
Method
Participants
Participants were 136 mental health supervisors (i.e., leaders) from 31 different mental health organizations in California (n=87) and Pennsylvania (n=32). Of the 136 eligible participants, 119 completed the measures used in these analyses (87.5% response rate). The average age of participants was 45.2 years, and the majority were female (75.6%). Participants reported an average of 13.9 (SD=7.7) years of experience in mental health services and 5.9 (SD=4.5) years of tenure with their respective agency. Of the participants, 68.9% identified as Caucasian, 7.8% as African-American, 7.8% as Asian-American, and 16.0% as “other”; 16.0% identified as Hispanic. The majority held a Master’s degree (84.9%); approximately 9.0% held Ph.D., M.D., or equivalent degrees, 0.8% had completed some graduate work, 1.7% were college graduates, 2.5% had some college experience, and 0.8% had a high school diploma.
Data Collection Procedures
Data were collected in California and Pennsylvania and the studies were approved by the Institutional Review Boards of San Diego State University and the University of Pennsylvania, respectively. Participation was voluntary and informed consent was obtained from all participants. Details of the data collection for the two samples are presented below.
California Data Collection
This data collection occurred as part of a larger National Institutes of Health (NIH) study focused on implementation measure development. The research team first obtained permission from agency executive directors or their designees to recruit leaders and their followers for participation in the study. Eligible leaders were those who directly supervised staff on mental health treatment teams. Data were collected using online surveys or in-person (paper-and-pencil) surveys. For online surveys, each participant received a link to the web survey and a unique password via email. For in-person surveys, participants were given the paper form of the survey, and those agreeing to participate completed it at their team meetings. The survey took approximately 20–40 minutes to complete. Participants received an incentive ($30 US) following survey completion.
Pennsylvania Data Collection
Measures for the present study were included in a larger study of behavioral health system change (Beidas et al., 2015). Agency executives were provided with information about the study and agreed upon procedures for recruiting participants. The research team scheduled a two-hour visit at each agency, and data were collected using paper-and-pencil surveys. Research staff handed out surveys to all eligible participants and ensured completion before providing an incentive. When in-person data collection was not feasible, surveys were left with eligible staff and mailed back to the research team, or an online survey option was provided. Because the measures were part of a larger survey, the survey took approximately 60 minutes to complete. Participants received an incentive ($60 US) following survey completion.
Measures
Implementation Leadership Scale (ILS)
(Aarons, Ehrhart, & Farahnak, 2014). The ILS includes 12 items scored on a 0 (‘not at all’) to 4 (‘to a very great extent’) scale. The ILS includes four subscales: Proactive Leadership (α = .95), Knowledgeable Leadership (α = .96), Supportive Leadership (α = .95), and Perseverant Leadership (α = .96). The total ILS score (α = .98) was created by computing the mean of the four subscales. The complete ILS measure and scoring instructions can be found in the “additional files” associated with the original scale development study (Aarons, Ehrhart, & Farahnak, 2014).
Implementation Climate Scale (ICS)
(Ehrhart, Aarons, & Farahnak, 2014). The ICS includes 18 items scored on a 0 (‘not at all’) to 4 (‘to a very great extent’) scale. The ICS includes six subscales of three items each: Focus on EBP, Educational Support for EBP, Recognition for EBP, Rewards for EBP, Selection for EBP, and Selection for Openness. The total ICS score (α = .92) was created by computing the mean of the six subscales.
Evidence-based Practice Attitudes Scale (EBPAS-15)
(Aarons, 2004). The EBPAS-15 includes 15 items scored on a 0 (‘not at all’) to 4 (‘to a very great extent’) scale. The EBPAS-15 includes four subscales: Requirements (three items), Appeal (four items), Openness (four items), and Divergence (four items). The total EBPAS-15 score (α = .69) was created by reverse-coding the Divergence subscale and then computing the mean of the four subscales.
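To make the scoring concrete, the following is a minimal sketch (in Python) of the subscale-mean, total-score, and reverse-coding computations described above. The DataFrame and item column names are hypothetical placeholders; the actual item wording and item-to-subscale assignments are provided in the original scale publications, not here.

```python
# Minimal scoring sketch. Assumes a pandas DataFrame `df` with one row per
# supervisor and hypothetical item columns; column names are illustrative only.
import pandas as pd

# Hypothetical item columns for each ILS subscale (three items per subscale).
ils_subscales = {
    "proactive":     ["ils_pro_1", "ils_pro_2", "ils_pro_3"],
    "knowledgeable": ["ils_kno_1", "ils_kno_2", "ils_kno_3"],
    "supportive":    ["ils_sup_1", "ils_sup_2", "ils_sup_3"],
    "perseverant":   ["ils_per_1", "ils_per_2", "ils_per_3"],
}

def score_ils(df: pd.DataFrame) -> pd.DataFrame:
    scores = pd.DataFrame(index=df.index)
    # Each subscale score is the mean of its items (0-4 response scale).
    for name, items in ils_subscales.items():
        scores[f"ils_{name}"] = df[items].mean(axis=1)
    # The total ILS score is the mean of the four subscale scores.
    scores["ils_total"] = scores[[f"ils_{n}" for n in ils_subscales]].mean(axis=1)
    return scores

def reverse_code(score: pd.Series, max_score: int = 4) -> pd.Series:
    # On a 0-4 scale, reverse-coding maps 0->4, 1->3, ..., 4->0.
    return max_score - score

# EBPAS-15 total: reverse-code the Divergence subscale score, then average the
# four subscale scores (subscale scores are computed analogously to the ILS).
```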
Statistical Analyses
Using Mplus statistical software (Muthén & Muthén, 1998–2012), we conducted a confirmatory factor analysis (CFA) of ILS leader self-ratings, specifying the same factor structure previously found for follower ratings. Analyses adjusted for the nested data structure (leaders nested in programs) using maximum likelihood estimation with robust standard errors (MLR), which appropriately adjusts standard errors and chi-square values. In addition, examination of skewness revealed that some items showed minor departures from normality, which were also addressed by the MLR estimator. Although missing data were minimal, they were handled with full information maximum likelihood (FIML) estimation. Model fit was assessed using several empirically supported indices: the comparative fit index (CFI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). CFI values greater than .95, RMSEA values less than .06, and SRMR values less than .08 indicate acceptable model fit (Hu & Bentler, 1999). Consistent with the ILS development study (Aarons, Ehrhart, & Farahnak, 2014), we tested the higher-order model in which the four first-order factors served as indicators of an overall implementation leadership latent construct. We also examined the internal consistency reliability of each subscale and the total scale using Cronbach’s alpha. Convergent and discriminant validity were assessed by computing Pearson product-moment correlations of the ILS total scale score with the ICS and EBPAS-15 total scores.
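As an illustration of the reliability and validity computations described above, a minimal sketch follows (the CFA itself was estimated in Mplus and is not reproduced here). The DataFrames and column names are hypothetical placeholders, not the study's actual data files.

```python
# Sketch of the Cronbach's alpha and validity-correlation computations.
# Assumes `items` holds the item responses for one ILS subscale and `totals`
# holds per-supervisor ILS, ICS, and EBPAS-15 total scores (hypothetical names).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of scale total)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def validity_correlations(totals: pd.DataFrame) -> pd.Series:
    """Pearson correlations of the ILS total with the ICS total (convergent validity)
    and the EBPAS-15 total (discriminant validity)."""
    return pd.Series({
        "ILS-ICS": totals["ils_total"].corr(totals["ics_total"]),
        "ILS-EBPAS15": totals["ils_total"].corr(totals["ebpas15_total"]),
    })
```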
Results
The hypothesized second-order factor model demonstrated acceptable fit (χ2(50)=96.944, p < .001; CFI=.960; RMSEA=.089; SRMR=.050). Although the model met the recommended cutoffs for both the CFI and SRMR, it slightly exceeded the cutoff for RMSEA. However, Hu and Bentler (1998) recommended cautious interpretation of the RMSEA with smaller sample sizes and specifically recommended relying on a combination of the CFI and SRMR in such situations. We therefore deemed the second-order model to have acceptable fit. First-order factor loadings ranged from .83 to .96, and second-order factor loadings ranged from .74 to .95 (all loadings statistically significant, ps < .001). Internal consistency reliabilities were excellent: Proactive Leadership (α=.92), Knowledgeable Leadership (α=.96), Supportive Leadership (α=.93), Perseverant Leadership (α=.93), and the ILS total score (α=.95). As expected, the ILS total score was highly correlated with the ICS total score (r=.72), supporting convergent validity. The ILS total score was only weakly correlated with the EBPAS-15 total score (r=.24), supporting discriminant validity.
Discussion
This study provides support for the higher-order factor structure of the ILS for leader self-ratings. Leader self-ratings can provide important insight into how leaders perceive their own leadership behaviors when compared with peer or follower ratings. Organizations can use the ILS as a tool for leaders to assess their own leadership for EBP implementation at any stage of the implementation process as outlined in the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework (Aarons et al., 2011). Furthermore, first-level leader self-evaluations can serve as a metric within more formal leadership interventions aimed at equipping leaders with the tools and knowledge necessary for creating a climate for implementation (Aarons, Ehrhart, Farahnak, & Hurlburt, 2015). Such data can be compared with provider ratings to give leaders insight into the degree to which their own perspective of their implementation leadership is aligned with that of their followers and superiors. Alignment is also important because discrepancy between leader and follower ratings can affect organizational context (Aarons, Ehrhart, Farahnak, Sklar, et al., 2015).
Some limitations of the present study should be noted. First, this study was conducted with mental health organizations. Generalizability of these findings should be examined through replication in other health and allied health service sectors where EBP implementation occurs, such as nursing and substance use disorder treatment. Second, future research could examine whether the ILS factor structure holds for higher-level leaders (e.g., agency executives) to establish construct validity across hierarchical levels. Finally, future research should examine the relative validity of self-ratings versus ratings from other sources in predicting implementation outcomes.
Conclusions
This study demonstrated consistency in the factor structure and psychometrics of the ILS in a leader sample, suggesting that further tests of the measure’s generalizability, of its relationship with ratings of implementation leadership from other sources, and of its relationship with implementation outcomes are warranted. Leadership and organizational change interventions to improve the implementation and sustainment of EBPs should be further developed and should include validated measures such as the ILS in order to test whether improvements in these constructs advance implementation science and increase the public health impact of implementation initiatives (Aarons, Ehrhart, Farahnak, & Hurlburt, 2015).
Acknowledgements
This study was supported by National Institute of Mental Health grants R21MH098124 (PI: Ehrhart), R21MH082731 (PI: Aarons), R01MH072961 (PI: Aarons), K23MH099179 (PI: Beidas), P30MH074678 (PI: Landsverk), and R25MH080916 (PI: Proctor). The authors thank the community-based organizations, clinicians, and leaders that made this study possible.
Footnotes
Conflict of interest: The authors report no conflict of interest and certify that this manuscript has not been published or submitted elsewhere.
Compliance with Ethical Standards: This study was approved by the San Diego State University Human Research Protection Program.
References
- Aarons GA (2004). Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research, 6(2), 61–74.
- Aarons GA, Ehrhart MG, & Farahnak LR (2014). The Implementation Leadership Scale (ILS): Development of a brief measure of unit level implementation leadership. Implementation Science, 9(1), 45.
- Aarons GA, Ehrhart MG, Farahnak LR, & Hurlburt MS (2015). Leadership and organizational change for implementation (LOCI): A randomized mixed method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Implementation Science, 10(1), 11.
- Aarons GA, Ehrhart MG, Farahnak LR, & Sklar M (2014). Aligning leadership across systems and organizations to develop a strategic climate for evidence-based practice implementation. Annual Review of Public Health, 35(1), 255–274.
- Aarons GA, Ehrhart MG, Farahnak LR, Sklar M, & Horowitz J (2015). Discrepancies in leader and follower ratings of transformational leadership: Relationships with organizational culture in mental health. Administration and Policy in Mental Health and Mental Health Services Research, 1–12.
- Aarons GA, Hurlburt M, & Horwitz SM (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 4–23.
- Aarons GA, & Sommerfeld DH (2012). Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. Journal of the American Academy of Child and Adolescent Psychiatry, 51(4), 423–431.
- Bass BM, & Avolio BJ (1990). The implications of transformational and transactional leadership for individual, team, and organizational development. In Pasmore W & Woodman RW (Eds.), Research in organizational change and development (pp. 231–272). Greenwich, CT: JAI Press.
- Beidas RS, Marcus S, Aarons GA, Hoagwood KE, Schoenwald S, Evans AC, … Mandell DS (2015). Predictors of community therapists’ use of therapy techniques in a large public mental health system. JAMA Pediatrics, 169(4), 374–382.
- Ehrhart MG, Aarons GA, & Farahnak LR (2014). Assessing the organizational context for EBP implementation: The development and validity testing of the Implementation Climate Scale (ICS). Implementation Science, 9(1), 157.
- Hong Y, Liao H, Hu J, & Jiang K (2013). Missing link in the service profit chain: A meta-analytic review of the antecedents, consequences, and moderators of service climate. Journal of Applied Psychology, 98(2), 237–267.
- Hu LT, & Bentler PM (1998). Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychological Methods, 3(4), 424–453.
- Hu LT, & Bentler PM (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55.
- Martinez RG, Lewis CC, & Weiner BJ (2014). Instrumentation issues in implementation science. Implementation Science, 9(1), 118.
- Michaelis B, Stegmaier R, & Sonntag K (2010). Shedding light on followers’ innovation implementation behavior: The role of transformational leadership, commitment to change, and climate for initiative. Journal of Managerial Psychology, 25(4), 408–429.
- Muthén LK, & Muthén BO (1998–2012). Mplus user’s guide (7th ed.). Los Angeles, CA: Muthén & Muthén.
- Priestland A, & Hanig R (2005). Developing first-level leaders. Harvard Business Review, 83(6), 112–120.
- Proctor EK, Landsverk J, Aarons GA, Chambers D, Glisson C, & Mittman B (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research, 36(1), 24–34.
- Schneider B, Ehrhart MG, Mayer DM, Saltz JL, & Niles-Jolly K (2005). Understanding organization-customer links in service settings. Academy of Management Journal, 48(6), 1017–1032.
- Zohar D (2002). Modifying supervisory practices to improve subunit safety: A leadership-based intervention model. Journal of Applied Psychology, 87(1), 156–163.