Annals of Family Medicine. 2013 May;11(Suppl 1):S27–S33. doi: 10.1370/afm.1492

Facilitators of Transforming Primary Care: A Look Under the Hood at Practice Leadership

Katrina E Donahue 1,2, Jacqueline R Halladay 1,2, Alison Wise 3, Kristin Reiter 4, Shoou-Yih Daniel Lee 5, Kimberly Ward 2, Madeline Mitchell 2, Bahjat Qaqish 3
PMCID: PMC3707244  PMID: 23690383

Abstract

PURPOSE

This study examined how characteristics of practice leadership affect the change process in a statewide initiative to improve the quality of diabetes and asthma care.

METHODS

We used a mixed methods approach, involving analyses of existing quality improvement data on 76 practices with at least 1 year of participation and focus groups with clinicians and staff in a 12-practice subsample. Existing data included monthly diabetes or asthma measures (clinical measures) and monthly practice implementation, leadership, and practice engagement scores rated by an external practice coach.

RESULTS

Of the 76 practices, 51 focused on diabetes and 25 on asthma. In aggregate, 50% to 78% of practices improved on each clinical measure in the first year. The odds of making practice changes were greater for practices with higher leadership scores (odds ratios = 2.41–4.20). Among practices focused on diabetes, those with higher leadership scores had higher odds of performing nephropathy screening (odds ratio = 1.37; 95% CI, 1.08–1.74); no significant associations were seen for the intermediate outcome measures of hemoglobin A1c, blood pressure, and cholesterol. Focus groups revealed the importance of a leader, typically a physician, who believed in the transformation work (ie, a visionary leader) and promoted practice engagement through education and cross-training. Practices with greater change implementation also mentioned the importance of a midlevel operational leader who helped to create and sustain practice changes. This person communicated and interacted well with, and was respected by, both clinicians and staff.

CONCLUSIONS

In the presence of a vision for transformation, operational leaders within practices can facilitate practice changes that are associated with clinical improvement.

Keywords: primary care, quality improvement, practice facilitator, coaching, change, organizational, innovation, practice-based research

INTRODUCTION

Transforming primary care practices is a complex process that requires many factors for success. Practice leadership has been identified as one of the most critical factors1 because leaders can influence individual, team, and practice engagement.2,3 Vague leadership definitions in previous studies have limited the usefulness of existing evidence for guiding successful change in primary care practices, however.4

Leadership in health delivery organizations is often defined on the basis of hierarchical positions occupied by senior physicians and top managers.5–9 These top-level managers are typically the focus of analysis in studies of evidence-based practices, programs, and quality improvement (QI) initiatives.5–7,10 Hierarchical positions may not be the best or only indicator of effective leadership required for successful practice change, however.11 Recent work suggests a model of collective leadership, whereby change leaders build a coalition of people—physicians and nonphysicians—with complementary skills to support and facilitate change.12,13

One key role of practice leadership is to help members at all levels of an organization understand the importance of making purposeful and systematic changes to care processes. Research indicates that a broadly shared vision of and commitment to systemic change are important for success.14 Primary care practices whose leaders promote inclusiveness and create an environment for open and honest communication are more likely to implement systems changes such as those embedded in the Chronic Care Model.15 Other leadership behaviors that promote practice engagement in a transformation effort include daily huddles to communicate with the staff, regular meetings16 to reinforce clear rules, reserving time for reflection, and cross-training of staff to ensure a broad understanding of team roles and responsibilities.15

In this study, we defined leaders on the basis of their actions, rather than formal hierarchical positions, and used quantitative and qualitative analyses of primary care practices participating in a statewide QI initiative to examine the association of leadership with practice change and clinical improvement. We had several research questions: What are the associations between leadership and implementation of systematic changes in clinical care? Who are the leaders? And what do they do to facilitate the change?

METHODS

This study used mixed methods, involving quantitative analyses of existing QI data from Improving Performance in Practice (IPIP) for 76 practices and qualitative analyses of semistructured focus group interviews with clinicians, managers, and staff from a subsample of 12 practices.

Setting

North Carolina IPIP, started in 2005 and housed in the North Carolina Area Health Education Centers Practice Support Program, is a statewide initiative that assists primary care practices with dramatically improving the quality of care delivery (additional information on context is given in Supplemental Appendix 1, available online at http://annfammed.org/content/11/Suppl_1/S27/suppl/DC1).17 Key elements of IPIP include having practices report and reflect on common population-level quality measures, creating a system for documenting the degree of practice changes in care delivery, supporting regional quarterly collaborative dinner meetings modeled on the Institute for Healthcare Improvement’s Breakthrough Series model,18 and providing community-based practice coaches to work on site with practices to assist them with practice change. The IPIP program incorporates many key elements of the Chronic Care Model, a precursor to the patient-centered medical home model as defined by the National Committee for Quality Assurance (NCQA).

Quantitative Analyses

Data and Sample

Of the existing 200 practices, we excluded those that were involved in the pilot project (n = 17), had not worked with a practice coach for at least 12 months beginning after February 2008 (n = 71), or did not report clinical measures for at least 3 months (n = 36). Seventy-six practices (38%) met inclusion criteria.

Measures

Dependent variables included monthly diabetes and asthma clinical measures and monthly ratings of practice change by a coach. Diabetes measures included the percentage of sampled diabetes patients with a hemoglobin A1c level of less than 9%, blood pressure less than 130/80 mm Hg, low-density lipoprotein cholesterol level less than 100 mg/dL, yearly eye examinations, and annual nephropathy screening. Asthma measures included the percentage of asthma patients with an asthma control assessment, controller medicine use, influenza vaccination, and a bundled patient measure including all 3. Monthly practice change ratings by the coach, on a scale of 0 to 5, indicated the extent of implementation and use of patient registries, planned care templates, protocols, and patient self-management support tools. Leadership was rated monthly by the coach on a scale of 0 to 3 describing the extent of leadership support for QI activities in the practice: 0 was defined as “no management or leadership support exists,” 1 as a “single champion with no organized structure,” 2 as “special projects where temporary roles are assigned to staff,” and 3 as “organizational integration where QI work was integrated into daily routines, roles to support improvement were assigned to staff, performance evaluations were tied to improvement efforts, and leadership for improvement existed to select and launch new improvement efforts.” Practice engagement was also rated from 0, defined as “no activity,” to 3, defined as “active engagement,” in which an improvement team planned and discussed multiple tests, communicated findings to one another, and participated in collaborative activities such as conference calls and listservs. Further details of practice coach ratings can be found in Supplemental Appendix 2, available online at http://annfammed.org/content/11/Suppl_1/S27/suppl/DC1.
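For readers who want to work with these ratings programmatically, the following is a minimal sketch of how a month of coach ratings might be represented. The anchor text paraphrases the scale definitions above, and the class and field names (MonthlyCoachRating, practice_id, and so on) are illustrative, not taken from the IPIP data system.

```python
# Minimal sketch of one month of coach ratings for a practice.
# All names are hypothetical; the anchors paraphrase the scales in the text.
from dataclasses import dataclass

LEADERSHIP_ANCHORS = {
    0: "no management or leadership support exists",
    1: "single champion with no organized structure",
    2: "special projects where temporary roles are assigned to staff",
    3: "organizational integration of QI into daily routines and roles",
}

ENGAGEMENT_ANCHORS = {
    0: "no activity",
    3: "active engagement by an improvement team",
}

@dataclass
class MonthlyCoachRating:
    practice_id: str
    month: int            # months since QI implementation began
    practice_change: int  # 0-5: registries, planned care templates, protocols, self-management support
    leadership: int       # 0-3, per LEADERSHIP_ANCHORS
    engagement: int       # 0-3, per ENGAGEMENT_ANCHORS
```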

Analyses

For each practice, we estimated clinical improvement, defined as a positive trend over time, using a logistic regression model on clinical data from the first 12 months of QI implementation. We used a Williams scale parameter20 to account for overdispersion that can result from multiple patient outcomes at a given time point. We then calculated the percentage of practices with a positive time trend.
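As an illustration of this step, the sketch below fits a practice-level logistic trend in Python with statsmodels and then tallies the share of practices with a positive slope. The data frame df and its columns (practice_id, month, n_met, n_total) are hypothetical, and the Pearson chi-square scale option stands in for, but is not identical to, the Williams overdispersion adjustment used in the paper.

```python
# Sketch: per-practice trend in one clinical measure over the first 12 months.
# Assumes a pandas DataFrame `df` with hypothetical columns: practice_id,
# month (1-12), n_met (patients meeting the measure), n_total (patients sampled).
import pandas as pd
import statsmodels.api as sm

def positive_trend(practice_df: pd.DataFrame) -> bool:
    """Fit (successes, failures) ~ month and report whether the slope is positive."""
    y = practice_df[["n_met"]].assign(n_miss=practice_df["n_total"] - practice_df["n_met"])
    X = sm.add_constant(practice_df["month"])
    # scale="X2" estimates an overdispersion scale from the Pearson chi-square;
    # a rough stand-in for the Williams (1982) adjustment, not the same procedure.
    fit = sm.GLM(y, X, family=sm.families.Binomial()).fit(scale="X2")
    return fit.params["month"] > 0

trends = df.groupby("practice_id").apply(positive_trend)
print(f"{100 * trends.mean():.0f}% of practices improved on this measure")
```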

To evaluate the associations of leadership with clinical measures and practice change scores, we grouped leadership ratings into high (2 or 3) and low (0 or 1) categories. For clinical outcomes, we used a repeated-measures logistic regression model on all time points and clinics to estimate these associations. A compound symmetric working correlation was assumed for within-clinic correlation, and the Williams scale parameter was used as above. We further adjusted for time and clinician count (model 1). The associations of leadership with practice change scores were assessed using proportional odds models adjusting for time and time squared (model 1). Furthermore, we tested whether practice engagement mediated the impact of leadership on practice changes and clinical outcomes by including practice engagement in the models (model 2).
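The sketch below illustrates, under simplifying assumptions, the two model families described here: a GEE logistic model with an exchangeable (compound symmetric) working correlation for the clinical outcomes, with engagement added in a second model to probe mediation, and a proportional odds model for the ordinal practice change score. The column names (met, leader_hi, engage_hi, change_score) and the two data frames are hypothetical, the ordinal model ignores within-practice correlation, and this is not the authors' own code.

```python
# Sketch of the two model families described above (a simplification, not the
# authors' analysis code). Assumes two hypothetical long-format DataFrames:
#   patients: one row per sampled patient-month, with practice_id, month,
#             met (0/1 for the clinical measure), leader_hi (1 = leadership
#             rated 2-3, 0 = rated 0-1), engage_hi (0/1).
#   pm:       one row per practice-month, with practice_id, month, leader_hi,
#             and change_score (0-5) for one implementation tool.
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Model 1: repeated-measures logistic regression via GEE with an exchangeable
# (compound symmetric) working correlation within practices, adjusted for time.
# (The paper additionally adjusted for clinician count and used a Williams
# scale parameter; both are omitted here.)
m1 = smf.gee("met ~ leader_hi + month", groups="practice_id", data=patients,
             family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print(m1.summary())

# Model 2: add practice engagement; a smaller leadership coefficient here is
# consistent with engagement partly mediating the leadership association.
m2 = smf.gee("met ~ leader_hi + engage_hi + month", groups="practice_id",
             data=patients, family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print(m2.summary())

# Proportional odds model for the ordinal practice change score, adjusted for
# time and time squared (this simple version ignores within-practice correlation).
pm["month2"] = pm["month"] ** 2
po = OrderedModel(pm["change_score"], pm[["leader_hi", "month", "month2"]],
                  distr="logit").fit(method="bfgs", disp=False)
print(po.summary())
```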

Qualitative Analyses

Data and Sample

To select a subsample of practices for the qualitative analysis, we used clinical measures and practice change ratings to group the 76 practices described above into a 2 × 2 table (high vs low improvement in clinical measures, high vs low improvement in practice change scores). Clinical improvement was defined using the first 12 months of data: we ran a repeated-measures logistic regression analysis for each clinical outcome with a random intercept and slope for each practice, then averaged the standardized slopes (time trends) across all of a practice’s clinical outcomes. Practices with a mean greater than 1 were designated as having “high improvement”; all others, “low improvement.” A practice with high improvement in systems change was defined as one in which 2 or more of the coach ratings either started high and stayed high (remained at 4 or 5) or increased to a threshold of 4 at some point within the first year of implementation. Practices not meeting either criterion were designated as having low improvement in practice change.
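A compact way to express this sampling frame, assuming hypothetical inputs slopes (standardized first-year trends, one column per clinical outcome) and ratings (monthly coach scores per implementation tool), is sketched below.

```python
# Sketch of the 2 x 2 sampling frame used to pick interview practices.
# Hypothetical inputs:
#   slopes  - DataFrame, one row per practice, one column per clinical outcome,
#             holding the standardized first-year time trends.
#   ratings - long DataFrame with columns practice_id, tool, month, score (0-5),
#             the monthly coach ratings for each implementation tool.
import pandas as pd

def clinical_group(slopes: pd.DataFrame) -> pd.Series:
    """High clinical improvement = mean standardized slope across outcomes > 1."""
    return (slopes.mean(axis=1) > 1).map({True: "high", False: "low"})

def change_group(ratings: pd.DataFrame) -> pd.Series:
    """High practice change = at least 2 tools rated 4 or 5 at some point in the
    first year (which also covers tools that start at 4-5 and stay there)."""
    tool_max = ratings.groupby(["practice_id", "tool"])["score"].max()
    qualifying_tools = (tool_max >= 4).groupby("practice_id").sum()
    return (qualifying_tools >= 2).map({True: "high", False: "low"})

# Cross the two groupings to form the four quadrants, eg:
# quadrants = pd.crosstab(clinical_group(slopes), change_group(ratings))
```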

The research team met with a key IPIP leader, who trained the coaches, and with several practice coaches to review the classification results and decide which practices would be appropriate to interview. We excluded practices that had high staff turnover or were inactive, and purposefully selected 3 or 4 practices from each quadrant to participate in interviews. Thirteen practices were selected; 12 agreed to be interviewed, and 1 did not return calls.

In each of the 12 practices, we conducted 2 semistructured focus group interviews from April 2011 to May 2012—one with practice clinicians and administrators (practice managers, lead nurses, clinicians) (n = 49) and the other with practice staff (front office personnel, billing staff, and other nursing/clinical staff) (n = 50)—to maximize the diversity of perspectives and minimize single-source bias. Some of the questions in the interview guide were tailored for clinician informants and staff informants. All participants were asked about environmental conditions and intraorganizational dynamics that affected the adoption and implementation of IPIP (see Supplemental Appendix 3 for interview guide, available online at http://annfammed.org/content/11/Suppl_1/S27/suppl/DC1).

Analyses

Interviews were audio-taped and transcribed verbatim, with the exception of 1 group for which permission to audio-tape was denied. A transcript for the nonrecorded interview was prepared immediately from field notes by team members. Interviews were coded using both deductive and inductive methods.21,22 A codebook was initially developed based on the interview guide, and additional codes were discussed and added as new themes emerged. Four team members coded 2 common transcripts and met to reach consensus on code definitions and use. Two team members coded each of the 22 remaining transcripts, and differences were reconciled at team meetings. We used ATLAS.ti23,24 to support coding and generate reports. Throughout the coding process, the team met regularly to ensure coding consistency and to discuss emerging themes and subthemes seen in the data. Themes and subthemes were defined when supported by evidence from multiple sites and agreement among members of the coding team. Results were compared between practices with high vs low practice improvement. In this article, we focus on themes around leadership and how it helped facilitate practice change.

The Biomedical Institutional Review Board at the University of North Carolina reviewed and approved this project.

RESULTS

Quantitative Results

In terms of QI efforts, two-thirds of the 76 practices (67%) focused on diabetes and the rest focused on asthma (Table 1). Fifty-five percent of practices were family medicine practices, 26% were pediatric, and 13% were internal medicine. The median percentages of patients covered by Medicaid and with no insurance were 20% and 4%, respectively. One-half of the practices were located in rural settings and one-half used electronic health records. For each diabetes or asthma measure, between 50% and 78% of practices showed improvement (ie, a positive trend) in the first year.

Table 1.

Practice Characteristics

Characteristic All Practices (N=76) Interviewed Practices (n=12)
Quality focus, No. (%)
 Diabetes 51 (67) 7 (58)
 Asthma 25 (33) 5 (42)
Service area, No. (%)
 Rural 37 (49) 6 (50)
 Urban 39 (51) 6 (50)
Clinicians, No. (%)
 ≤3 18 (24) 5 (42)
 4–6 26 (34) 2 (17)
 ≥7 32 (41) 5 (42)
Practice specialty, No. (%)
 Family medicine 42 (55) 7 (58)
 Pediatric medicine 20 (26) 4 (33)
 Internal medicine 10 (13) 1 (8)
 Mixed 4 (5) 0 (0)
Practice type, academic, No. (%) 6 (8) 2 (17)
Insurance
 Medicaid, median % 20 30
 Uninsured, median % 4 8
Affiliated with CCNC Medicaid Network, No. (%) 65 (85) 12 (100)
Practice visits per day, median No. 60 43
Use of EHR, No. (%) 38 (50) 9 (75)
PCMH recognition by NCQA, No. (%)
 Have recognition 22 (29) 6 (50)
 Actively working on recognition 17 (23) 4 (34)
Improved in first year: diabetes measures, No. (%)
 Hemoglobin A1c <9% 25 (50)
 LDL cholesterol <100 mg/dL 23 (55)
 Blood pressure <130/80 mm Hg 33 (73)
 Annual eye examination 35 (78)
 Nephropathy screening 34 (77)
Improved in first year: asthma measures, No. (%)
 Severity assessed 17 (68)
 Annual influenza vaccine 19 (76)
 Bundled measure (assessed, influenza vaccine, controller medication use) 16 (70)

CCNC = Community Care of North Carolina; EHR = electronic health record; LDL = low-density lipoprotein; NCQA = National Committee for Quality Assurance; PCMH = patient-centered medical home.

Note: For the All Practices column, the number of practices having data was fewer than 76 for the measures of Medicaid insurance (n = 67), uninsured (n = 68), and practice visits per day (n = 64).

Tables 2 and 3 show the associations of leadership with clinical measures and with practice change scores for implementation of various tools, respectively. Leadership was significantly associated with only 1 clinical measure, the proportion of patients having nephropathy screening (odds ratio [OR] = 1.51; 95% CI, 1.20–1.90). Inclusion of practice engagement reduced these odds (OR = 1.37; 95% CI, 1.08–1.74), but the association remained significant. The odds of making practice changes were greater for practices with higher leadership scores at any given time (ORs = 1.92–6.78). Inclusion of practice engagement, which was also significantly associated with making practice changes, reduced these odds (ORs = 2.41–4.20), but the association remained significant for all changes except registry implementation.

Table 2.

Association of Higher Leadership With the Proportion of Patients Within a Practice Achieving Various Clinical Measures

Model Nephropathy Screening Yearly Eye Examination LDL Cholesterol <100 mg/dL Blood Pressure <130/80 mm Hg Hemoglobin A1c <9%
Model 1
 Leadership 1.51 (1.20–1.90) 1.13 (0.93–1.37) 1.02 (0.87–1.20) 1.07 (0.91–1.26) 1.07 (0.92–1.25)
  P value <.001 .23 .77 .43 .39
Model 2
 Leadership (adjusted for engagement) 1.37 (1.08–1.74) 1.04 (0.86–1.25) 1.06 (0.91–1.23) 1.10 (0.94–1.28) 1.08 (0.93–1.26)
  P value .01 .68 .48 .25 .31
 Engagement (adjusted for leadership) 1.26 (1.06–1.51) 1.21 (1.02–1.43) 0.93 (0.82–1.05) 0.94 (0.80–1.11) 0.97 (0.77–1.23)
  P value .01 .03 .25 .49 .83

LDL = low-density lipoprotein.

Note: Values are odds ratios (95% CIs) and P values. Values were adjusted for time.

Table 3.

Association of Higher Leadership With Practice Levels of Implementation of Various Tools

Model Registry Templates for Planned Care Protocols Self-Management Support
Model 1
 Leadership 1.92 (1.07–3.42) 6.78 (4.02–11.44) 5.23 (2.99–9.14) 3.66 (2.26–5.91)
  P value .03 <.0001 <.0001 <.0001
Model 2
 Leadership (adjusted for engagement) 1.24 (0.66–2.34) 4.20 (2.44–7.23) 3.53 (1.99–6.25) 2.41 (1.54–3.79)
  P value .50 <.001 <.001 <.0001
 Engagement (adjusted for leadership) 2.50 (1.41–4.42) 3.30 (1.87–5.82) 2.81 (1.66–4.77) 2.80 (1.74–4.50)
  P value .002 <.001 <.001 <.001

Note: Values are odds ratios (95% CIs) and P values. Values were adjusted for time.

Qualitative Results

Among the 12 practices interviewed, 5 had 3 or fewer clinicians and 7 had 4 or more (range = 1–32). Seven practices had high ratings of practice change by the coach. One-half had NCQA recognition as a patient-centered medical home. These practices were similar to the quantitative analysis sample except for higher rates of electronic health record use and Community Care of North Carolina Medicaid network membership, likely because of a time lag between collection of the quantitative and qualitative data.

Leadership-related themes from the focus groups included having (1) someone with a vision about the importance of the work, (2) a middle manager who implemented the vision, and (3) a team who believed in and were engaged in the work.

Visionary Leadership

Across all practices, including those with relatively high and low practice change ratings, there was general support for participating in IPIP from lead clinicians and administrators. In all cases, these top-level leaders provided the vision for practice transformation. In many cases, they saw the importance of QI, making systematic changes in care delivery, engaging the assistance of practice coaches in program implementation, and using population-level patient data to gauge improvements.

I think it needs to be done. I think every practice should be doing quality improvement. And the fact that you’ve got someone who can kind of come in and teach your whole practice how to do it, why not? (practice L clinician).

Many also took a strategic view, recognizing opportunities for increased reimbursement in the future.

I understand the value of quality improvement…I was all ready to do it because I knew that this information that’s being gathered is going to be parlayed into how much money you’re going to get for seeing patients in the future. And so, why not get involved now? (practice G clinician).

Although practice management provided the vision for change, patterns across the practices suggested that visionary leadership is necessary but not sufficient for successful implementation.

Leading From the Middle

All practices had leaders who initiated the change, but practices with high and low practice change ratings reported very different “operational” leaders. Operational leaders in practices with low practice change ratings were generally the same clinicians, practice managers, or both who introduced the change. In contrast, in practices with high practice change ratings, implementation was led by someone other than the lead physician or top manager. Specifically, the top management in those practices delegated the operational authority to a middle manager to carry out activities specified in the initiative.

So you need to have a designated person that works with different groups in your practice to be successful, somebody working with the nurses, coordinating, calling, working with the (practice coach). And somebody that knows how to talk with providers and how to fill out the form and all of those things (practice H leader).

The staff recognized the critical role of the middle manager as well. The middle manager was frequently a nurse or nurse practitioner who interacted daily with both the lead physician/top manager and the clinical and front office staff.

But if it’s going to happen, either you as the one sharing the ideas better find a way to make it happen or just do it. X (staff nurse) is usually the one that makes it happen; she does it (practice E staff).

Team Orientation

One strategy of the successful operational leaders was to develop and engage teams of practice staff in transformation efforts, echoing the quantitative findings that practice engagement was significantly associated with most of the clinical and practice change measures. Staff on well-functioning teams were comfortable with change and cross-training. For example, in one small practice, a laboratory technician would jump in to help nurses with their tasks. Team accountability was a common comment among practices with high practice change ratings.

So if I’m not going to be here, I’ll have the nurses (do it)… (instead of) waiting on them if they are busy, I can go ahead and do some of those things. (We) work together as a group; she’s busy, I’m not (practice B staff).

Staff at several practices joked about how their job description was more of a guideline and how they were allowed to have expanded roles.

We’re a very team-oriented practice…you kind of have to be in some ways for it to work. We all can pretty much do each other’s roles (practice E staff).

Practice engagement also meant everyone had a role and was accountable for that role. When asked about lessons learned, clinicians and management noted the importance of having “everyone involved” and “making everyone take a piece of the responsibility in the process” (practice L).

DISCUSSION

After 12 months of working with a practice coach, the majority of practices participating in IPIP had improvements in clinical measures and practice change ratings. Higher ratings of leadership were associated with greater adoption and depth of implementation of practice system changes, but the effect was mediated partly by practice engagement. Having midlevel, operational leaders “in the trenches” within a practice appeared to facilitate practice change and ultimately clinical process outcomes amenable to systems interventions.

Existing health care research has primarily focused on the roles of physicians and top managers in innovation implementation.11 As teamwork designs become popular in health care, however, those in middle management positions may be particularly influential in facilitating organizational change.11 This influence may stem from middle managers’ overseeing team activities, mediating between organizational strategy and the day-to-day activities of staff members, serving as direct role models of implementation activities, being positioned to disseminate innovation information widely in an organization, and helping interpret information in a way that is relevant to each member of the team. Their strategic location between top managers and front-line employees gives them the ability to bridge gaps in information that might otherwise impede innovation implementation.11 In essence, middle managers work as boundary spanners25,26 who link stakeholders from different levels of an organization, integrate the work of the teams in care delivery processes, and drive momentum for change.

What type of top-level leadership facilitates these middle managers? Our findings suggest it is leaders with a vision for change that includes all members of the organization. In practices that have implemented the Chronic Care Model, the concept of inclusive leadership has been discussed.15 Other leadership research has also noted the importance of supporting teams so that members can manage themselves.2 Encouraging nontraditional ideas and activities is one of several important components of successfully making major changes in an organization.27

There are several limitations to our data. Some of the analyses are limited to the first year of reported data. Although practice changes occur in the first year and are affected by leadership, a longer time frame and a larger sample are needed to better assess whether and how clinical outcomes are affected. There was also potential bias in the selection of practices for the qualitative interviews. The voices of practices that had high turnover or had dropped out from IPIP are not represented. The time difference between the quantitative data collection and qualitative interviews may also increase the risk of recall bias.

In summary, certain aspects of leadership are helpful to move practices forward in primary care transformation. The vision of top-level practice managers is essential in setting the strategic direction and validating the value of QI activities for a practice; it is this vision that allows change to happen. Middle managers are critical to successful implementation, however. These middle managers test and implement innovations, empower individuals to participate in transformation activities, foster accountability and a culture of teamwork, and serve as the link between leadership and staff. They act as the glue and the catalyst that make change a reality in primary care practices.

Acknowledgments

We gratefully acknowledge the assistance of the following individuals in this research: Ann Lefebvre, MSW, CPHQ; Laura Brown, MPH; Warren Newton, MD, MPH; Samuel Cykert, MD; Darren DeWalt, MD, MPH; Beat Steiner, MD, MPH; Greg Randolph, MD, MPH; and the North Carolina IPIP practices and North Carolina IPIP practice coaches.

Footnotes

Conflicts of interest: authors report none.

Funding support: This work was supported by Agency for Healthcare Research and Quality grant R18 HS019131. Alison Wise received additional support from NIH/NIEHS Training grant T32ES007018.

Disclaimer: The authors of this report are responsible for its content. Statements in this presentation should not be construed as endorsements by the Agency for Healthcare Research and Quality or the US Department of Health and Human Services.

References

  • 1.Homer CJ, Baron RJ. How to scale up primary care transformation: what we know and what we need to know? J Gen Intern Med. 2010;25(6):625–629 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Hackman JR. Leading Teams: Setting the Stage for Great Performance.Boston, MA: Harvard Business School Press; 2002 [Google Scholar]
  • 3.Pfeffer J, Davis-Blake A. Administrative succession and organizational performance: how administrator experience mediates the succession effect. Acad Manage J. 1986;29(March):72–83 [Google Scholar]
  • 4.Podolny JM, Khurana R, Hill-Popper M. Revisiting the meaning of leadership. In: Staw B, Kramer RM, eds. Research in Organizational Behavior. Vol 26 New York, NY: Elsevier; 2005:1–36 [Google Scholar]
  • 5.Fischer LR, Solberg LI, Zander KM. The failure of a controlled trial to improve depression care: a qualitative study. Jt Comm J Qual Improv. 2001;27(12):639–650 [DOI] [PubMed] [Google Scholar]
  • 6.Levinson W, D’Aunno T, Gorawara-Bhat R, et al. Patient-physician communication as organizational innovation in the managed care setting. Am J Manag Care. 2002;8(7):622–630 [PubMed] [Google Scholar]
  • 7.Palinkas LA, Schoenwald SK, Hoagwood K, Landsverk J, Chorpita BF, Weisz JR; Research Network on Youth Mental Health An ethnographic study of implementation of evidence-based treatments in child mental health: first steps. Psychiatr Serv. 2008;59(7):738–746 [DOI] [PubMed] [Google Scholar]
  • 8.Kralovec PJ. Clinical quality improvement without fear. Health Forum J. 1990;33(4):32–34 [PubMed] [Google Scholar]
  • 9.Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q. 1998;76(4):625–648, 511 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Flanagan ME, Ramanujam R, Doebbeling BN. The effect of provider- and workflow-focused strategies for guideline implementation on provider acceptance. Implement Sci. 2009;4(1):71. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Birken SA, Lee SY, Weiner BJ. Uncovering middle managers’ role in healthcare innovation implementation. Implement Sci. 2012;7(1):28. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Chreim S, Williams BE, Janz L, Dastmalchian A. Change agency in a primary health care context: the case of distributed leadership. Health Care Manage Rev. 2010;35(2):187–199 [DOI] [PubMed] [Google Scholar]
  • 13.Gallagher K, Nutting PA, Nease DE, Jr, et al. It takes two: using coleaders to champion improvements in small primary care practices. J Am Board Fam Med. 2010;23(5):632–639 [DOI] [PubMed] [Google Scholar]
  • 14.Nutting PA, Gallagher KM, Riley K, White S, Dietrich AJ, Dickinson WP. Implementing a depression improvement intervention in five health care organizations: experience from the RESPECT-Depression trial. Adm Policy Ment Health. 2007;34(2):127–137 [DOI] [PubMed] [Google Scholar]
  • 15.Bowers KW, Robertson M, Parchman ML. How inclusive leadership can help your practice adapt to change. Fam Pract Manag. 2012; 19(1):8–11 [PMC free article] [PubMed] [Google Scholar]
  • 16.Bray P, Cummings DM, Wolf M, Massing MW, Reaves J. After the collaborative is over: what sustains quality improvement initiatives in primary care practices? Jt Comm J Qual Patient Saf. 2009;35(10):502–508 [DOI] [PubMed] [Google Scholar]
  • 17.Newton WP, Lefebvre A, Donahue KE, Bacon T, Dobson A. Infrastructure for large-scale quality-improvement projects: early lessons from North Carolina Improving Performance in Practice. J Contin Educ Health Prof. 2010;30(2):106–113 [DOI] [PubMed] [Google Scholar]
  • 18.Institute for Healthcare Improvement. 2012. [Accessed May 2012]. http://www.ihi.org.
  • 19.SAS Institute Inc SAS 9.2 Companion for Windows, 2nd ed Cary, NC: SAS Institute Inc; 2010 [Google Scholar]
  • 20.Williams DA. Extra-binomial variation in logistic linear models. Appl Stat. 1982;31(2):144–148 [Google Scholar]
  • 21.Miles MB, Huberman AM. Qualitative Data Analysis: An Expanded Sourcebook. Thousand Oaks, CA: Sages Publications, Inc; 1994 [Google Scholar]
  • 22.Strauss A, Corbin J. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Thousand Oaks, CA: Sage Publications, Inc; 1998 [Google Scholar]
  • 23.Allison PD. Logistic Regression Using SAS: Theory and Application. Cary, NC: SAS Institute Inc; 1999 [Google Scholar]
  • 24.ATLAS/ti. Version 4.2. [Computer software]. Berlin, Germany: Scientific Software Development; 1999 [Google Scholar]
  • 25.Etz RS, Cohen DJ, Woolf SH, et al. Bridging primary care practices and communities to promote healthy behaviors. Am J Prev Med. 2008;35(5 Suppl):S390–S397 [DOI] [PubMed] [Google Scholar]
  • 26.Gittell JH. Coordinating mechanisms in care provider groups: relational coordination as a mediator and input uncertainty as a moderator of performance effects. Manage Sci. 2002;48:1408–1426 [Google Scholar]
  • 27.Kotter JP. Leading Change. Boston, MA: Harvard Business Review Press; 1996 [Google Scholar]
