Abstract
Process evaluations assess program structures and implementation processes so that outcomes can be accurately interpreted. This article reports the results of a process evaluation of Partnerships for Health, an initiative targeting interprofessional primary healthcare teams to improve chronic care in Southwestern Ontario, Canada. Program documentation, participant observation, and in-depth interviews were used to capture details about the program structure, implementation process, and experience of implementers and participants. Results suggest that the intended program was modified during implementation to better meet the needs of participants and to overcome participation barriers. The elements of program activities perceived as most effective included a series of off-site learning/classroom sessions, practice-based/workplace information-technology (IT) support, and practice coaching, because they provided: dedicated time to learn how to improve chronic care; team-building/networking within and across teams; hands-on IT training/guidance; and flexibility to meet individual practice needs. This process evaluation highlighted key program activities that were essential to the continuing education (CE) of interprofessional primary healthcare teams as they attempted to transform primary healthcare to improve chronic care.
Keywords: Continuing education, evaluation research, health services research, interprofessional care, interprofessional learning, team-based practice, work-based learning
Introduction
Chronic diseases account for 60% of all deaths worldwide and have substantial cost implications; therefore, a priority for primary healthcare reform internationally is to improve the effectiveness of chronic care delivery (Coleman, Mattke, Perrault, & Wagner, 2009; Friedberg, Hussey, & Schneider, 2010; Hutchison, Levesque, Strumpf, & Coyle, 2011; World Health Organization, 2005). Due to the complexity of chronic care, chronic care models such as the Chronic Disease Prevention and Management (CDPM) framework were developed. Based on the chronic care model by Wagner et al. (2001), the CDPM framework promotes a planned, integrated, and population-based approach with emphasis on practical, supportive, and evidence-based interactions between informed/activated patients and prepared/proactive clinical teams (Ontario Ministry of Health and Long-Term Care, 2007). Because traditional continuing education (CE) approaches have limited impact on care delivery and patient outcomes (Davis et al., 1999), it has been suggested that changing chronic care requires an approach that combines classroom and workplace learning opportunities and that recognizes the role of interprofessional teams, patient empowerment, cost reduction, and sustainability by emphasizing quality improvement (QI) (Friedberg et al., 2010; Gilbert, Staley, Lydall-Smith, & Castle, 2008; Reeves, Goldman, Burton, & Sawatzky-Girling, 2010; Solberg, 2007; Wagner et al., 2001).
The Institute for Healthcare Improvement developed an educational approach, the Breakthrough Series QI methodology (Institute for Healthcare Improvement, 2003), that brings together multiple teams to participate in a short-term (6- to 15-month) learning system that includes three face-to-face meetings (i.e. learning sessions) with expert presentations and time to work within a team to plan practice changes and to share lessons learned across teams. Between the learning sessions, termed action periods, participants work with additional team members to test and implement changes in their workplace using the Model for Improvement, which was designed to encourage graduated re-design (i.e. testing new approaches on a small scale prior to broad implementation) (Bricker et al., 2010; Langley, 2009).
Building on these educational and chronic care models, the Partnerships for Health Project was implemented (2008–2011) to improve diabetes care in Southwestern Ontario, Canada. Interprofessional primary healthcare teams participated in educational activities (off-site or practice-based learning sessions), supportive activities (teleconference calls, practice coaching, information-technology (IT) support, and web-based tools), and reporting activities (documentation of QI efforts and clinical data). To be eligible, practice-based primary care teams had to incorporate a community-based healthcare professional into their team for the duration of the program such as a case manager, diabetes educator, or physiotherapist. The goal was to enhance opportunities for improvement across the continuum of care.
Evidence supports the use of chronic care models and QI methodologies, but the need to adapt them to different settings and diseases raises questions about their generalizability and effectiveness (Gilbert et al., 2008; Schouten et al., 2008). To date, evaluation studies of QI initiatives have failed to capture details about program activities, the implementation process, and the perceptions of multiple stakeholders so that clear inferences about program impact can be made (Crabtree et al., 2011; Dehar, Casswell, & Duignan, 1993; Schouten et al., 2008). The aim of this study was to conduct a process evaluation of Partnerships for Health to capture program details that would allow for an accurate interpretation of program outcomes and help refine future programs (Dehar et al., 1993). The focus was on: (1) obtaining details about the program’s origin, structure, and implementation process, participant characteristics, and participation rates to determine whether the program was implemented as intended; and (2) capturing the perceptions of implementers and participants regarding the effectiveness of program activities.
Methods
The study employed a process evaluation approach (Dehar et al., 1993) that used three qualitative methods (program documentation; participant observation; and in-depth interviews) to examine whether the program was implemented as intended and to determine which program activities were perceived as most effective. At the outset, a logic model was developed in collaboration with program implementers to provide a visual representation of how the program was intended to work and to ensure the alignment of evaluation measures to the intended program activities (Chen, 2005; Cooksy, Gill, & Kelly, 2001). Also, an action/change model was created to describe the hypothesized causality within the program by delineating the connections among the program’s rationale, development, implementation, and intended outcomes (Chen, 2005). The results of the outcome evaluation of the “Partnerships for Health” initiative are reported elsewhere (Harris et al., 2013).
Data collection
The logic model provided clarity about the intended program activities and helped to ensure that program documentation data (e.g. log books, handouts, and reports) were collected continuously throughout program implementation. Participant observation data were collected by an evaluator during program activities, such as the learning sessions, coaching sessions, and steering committee meetings, to capture details about implemented activities, contextual factors influencing adherence to intended activities, and participation rates. The evaluator recorded detailed field notes. A negative consent (opt-out) option was provided to all participants.
Post-program in-depth interviews with implementers were conducted to obtain their perspectives about their team’s functioning and the administration and/or implementation processes, including challenges encountered, critical elements of success, and the reasoning behind changes made from intended to implemented activities. Finally, interviews with program participants (e.g. administrative staff, case managers, family physicians, nurses, pharmacists) were conducted post-program to capture their views about the program, including their expectations, the value of each activity, and the implementation processes used. Signed consents were obtained. All implementers were eligible for an interview, and program participants were eligible if they participated in at least one learning session during the evaluation period. A purposive sampling approach was used to ensure maximum variation for funding models (Hutchison et al., 2011), professional roles, location (practice-based: physician employed versus community-based: agency employed), and selected program stream.
Data analysis
The program documentation and participant observation data were summarized and aligned to the intended program activities listed in the logic model by two independent analysts. Immersion and crystallization techniques (Borkan, 1999) were used to complete a thematic analysis of interview data, supported by NVivo 8 (QSR International, Doncaster, VIC, Australia). Obtaining documentation directly from the implementers, field notes, verbatim transcription, and independent and team analyses increased the trustworthiness and credibility of the findings (Borkan, 1999; O’Cathain, Murphy, & Nicholl, 2010). Analyses for all methods were completed independently; however, rather than presenting the results of each method sequentially, the results were integrated through triangulation, the convergence of inter-method findings, to provide a comprehensive picture of the program (O’Cathain et al., 2010). Triangulation was facilitated by displaying the results of each method alongside the anticipated activities and outcomes in the logic model.
Ethical issues
The study was approved by The University of Western Ontario Research Ethics Board before data collection activities commenced.
Results
Program documentation and participant observation data were obtained throughout program implementation. Ten implementers and 93 program participants (84% female) were interviewed (Table I). The triangulated results are presented below using the following sub-headings: “program origins, structure and implementation process”; “participant characteristics and participation rates”; and “perceptions about program activities”.
Table I. Interviewed program implementers and program participants.

Program implementers | Total
---|---
Co-manager | 3
Practice coach | 3
Administrative staff | 2
Clinical advisor | 1
E-health lead | 1
Total | 10

Program participants | Practice-based | Community-based | Total
---|---|---|---
Health professionals | | |
Physician | 16 | 0 | 16
Registered nurse | 13 | 0 | 13
Diabetes nurse educator | 1 | 9 | 10
Dietitian | 7 | 3 | 10
Case manager | 0 | 7 | 7
Pharmacist | 6 | 1 | 7
Nurse practitioner | 6 | 0 | 6
Social worker | 6 | 0 | 6
Health promoter | 2 | 0 | 2
Registered practical nurse | 2 | 0 | 2
Physiotherapist | 0 | 1 | 1
Administrative staff | | |
Office manager/Executive director | 8 | 4 | 12
Administrative assistant | 1 | 0 | 1
Total | 68 | 25 | 93
Program origins, structure and implementation process
The program was designed as a CDPM Framework demonstration project using diabetes as a proxy for chronic diseases. A steering committee (government and community stakeholders) oversaw the program and a program implementation team (three co-managers, an e-health lead, three practice coaches, two administrative staff, and two clinical advisors) was responsible for program development, administration, and implementation. The evaluation team was contracted to conduct a comprehensive (process and outcome) external evaluation of the program.
Implementers sent recruitment packages to all family physicians in Southwestern Ontario and used a mix of outreach activities. Participation was voluntary, but eligibility required a team presence (at least one physician, one practice-based team member, and a community-based healthcare provider). The overall program goal was to demonstrate how practice-based and community-based primary healthcare providers could work together, share information across the continuum of care, advance the use of and linkages among information systems, and engage patients in self-management to improve chronic care. All participating teams were intended to participate in the same program, but during implementation, four new program streams were developed to better meet the preferences of potential participants (e.g. less time spent offsite, flexible participation dates and times) and to increase enrolment. The overall program goal remained the same for all streams; however, each stream included different proportions of educational, supportive, and reporting activities. The program streams are described in Table II and the educational, supportive, and reporting activities are described in the sub-sections below.
Table II. Intended and implemented activities by program stream.

Program streams | Pre-work: offsite | Pre-work: manual | Offsite learning sessions: number | Offsite learning sessions: length | Practice-based action periods: number | Practice-based action periods: length (m) | Overall duration | Round of implementation during the life of the program | Implemented activities
---|---|---|---|---|---|---|---|---|---
A – Learning collaborative | 1 d | ✓ | 3 | 2 d | 3 | 4 | 12 m | 2 | 7/8 (88%)
B – Spread collaborative | – | ✓ | 1 + 1 | 1.5 d + 1 d | 1 | 6 | 6 m | 3 | 6/6 (100%)
C – Knowledge transfer | – | ✓ | 1 | 1 d | 0 | 0 | 1 d | 4 | 4/4 (100%)
D – Practice coaching | – | ✓ | 1 | 2 h* | 0 | 0 | 2 h | ** | N/A
E – Web-based program | – | ✓ | On-line modules | Self-directed | – | – | Online access as needed | – | Modules developed and accessible

d = day; h = hour; m = months.
*Practice-based session.
**Educational (classroom) versus supportive (practice-based) practice coaching could not be determined.
Educational activities
Educational activities (i.e. pre-work sessions, learning sessions, and instruction manuals) aimed to educate participants about chronic and interprofessional care approaches, QI methodologies to redesign care processes, advanced access to care, diabetes clinical practice guidelines, advanced use of IT systems, patient self-management, and spread/sustainability strategies. Documentation review and participant observation data revealed the introduction of refresher sessions, focused on the review of QI mechanisms and progress, education regarding topics of interest, and review of spread and sustainability strategies, as well as spread-day sessions dedicated to discussing spread and sustainability strategies. Interview data suggested that these sessions were introduced to address concerns about participants’ ability to spread and sustain QI efforts and outcomes. Overall, program documentation, participant observer notes, and interview data suggested that educational activities were implemented as intended with respect to their frequency (88–100%) and content. Table II outlines the differences between the intended and implemented activities of each program stream.
Supportive activities
Supportive activities are summarized in Table III. They were made available to all participants to enhance comprehension of the educational content, support QI efforts in the practice/workplace, and increase participants’ abilities to complete monthly reports. Supportive activities were modified during implementation to: (1) accommodate program changes and the increasing number of participants; (2) better meet participants’ learning needs; and (3) overcome participation barriers. For example, teleconferences evolved from practice coaching sessions to guest speaker events and data sharing sessions; practice coaching became more practice-based and hands-on; and web-based tools were created to improve opportunities for networking/sharing of lessons-learned and to facilitate data-reporting. Program documentation, participant observation, and interview data indicated that supportive activities were made available as intended (e.g. 90% of the intended monthly teleconferences were implemented and practice coaching/IT support/web-based tools were available as needed).
Table III. Supportive activities.

Supportive activity | Description
---|---
Teleconferences | Monthly discussions about learning session content, lessons-learned, QI efforts, data tracking, new material (guest speakers), and/or troubleshooting.
Practice coaching | Assigned coaches met with each team (on-site, telephone, email) as needed to discuss questions/concerns/progress, data extraction/reporting, and tools/resources, to facilitate communication with program managers and other participating teams, and to keep participants motivated to do QI work.
IT training/support | Provided strategically/as needed to facilitate team collaboration using electronic communication within/external to the practice, and to develop a QI mechanism by advancing the use of technology.
Web-based tools | Provided program information, videos of expert faculty presentations, diabetes care/information management resources, a communication forum for interaction with implementers and other participants, and reporting forms for clinical indicators/QI efforts.
Reporting activities
Participants were asked to complete monthly reports about their QI efforts and clinical data (14 pre-set diabetes indicators). For the clinical data, implementers encouraged participants to start with a small number of patients and gradually placed more emphasis on including the practice’s entire diabetes population. Reporting templates were provided and modified based on participant feedback. Reports were sought monthly as intended using a variety of communication strategies (e.g. e-mail, telephone, and meetings).
Program participant characteristics and participation rates
Implementers recruited 106 teams from 47 primary healthcare sites across Southwestern Ontario to participate in the program. Seventy-eight teams (12 stream A teams, 14 stream B teams, 10 stream C teams, 9 stream D teams, 1 stream E team, and 32 teams that participated in supportive activities in association with a stream A team [i.e. same practice site]) were included in the evaluation. Twenty-eight teams (14 sites) were excluded because they were recruited outside the evaluation time period. Beyond enrolment in a specific program stream, program documentation data on attendance/participation rates in specific educational or supportive activities were limited. In terms of reporting activities, program documentation revealed that 27 (84%) sites documented their QI efforts (mean = 18; median = 10; range = 1–72 documented QI efforts per team), which were focused on team communication, medical directives, patient communication and education, diabetes pre-planned visits, and patient identification. Self-reported clinical data were submitted by 22 (69%) sites, but the quality and completeness of the data varied.
Interview findings
Perceptions about program activities
Data gathered from the 103 individuals interviewed for the study (see Table I) indicated various barriers and facilitators to participation, including unclear and inconsistent communication about program deliverables and time commitment, as well as team characteristics such as the involvement of both administrative and clinical team members, and leadership:
Being the administrator and the person in charge of the facility and the staffing, I could make the decision, ‘Yes, you can work an extra 2 hours to do that’ and ‘yes, reception, your job description now is changed to this.’ So I could legislate some of the change within the staff which made it easier to move [forward]. (Office Manager)
Barriers to participation related to resources were identified such as team composition, staff turnover, organizational changes/physical space/structure, information technology capacity, funding models, and time constraints. These factors were described as being beyond the control of the program:
We had other issues that had nothing to do with the program. We had just formed a new group. We had moved locations, started the electronic medical record implementation. There were just a lot of things going on. (Physician)
Team functioning, including having a shared vision among team members, positive interactions and collaboration, was said to influence participation:
It’s just getting everybody organized and to get everybody on board … to shift the focus, to get people involved in the meetings and to realize that we do have to take time away from patient care to have the meetings. (Nurse Practitioner)
Finally, participants talked about organizational structures and privacy concerns related to partnerships with community-based team members as affecting participation.
Overall program
Participants said that the program: increased their awareness of opportunities for QI; helped build their knowledge, skills, and confidence related to chronic care; provided motivation and support to reach their goals; taught them how to use data to focus their QI efforts; and enhanced their appreciation of the time, continued effort, and support required for system-level change.
The series of off-site learning sessions separated by relatively short (∼4-month) practice-based action periods, IT support, and practice coaching were consistently reported as critical elements of the program:
The initial information session was very important for us to get enthusiasm and impetus … for the project, and then we had another day … It was good to get the enthusiasm going again because things kind of fell off. Whenever we would meet as a group with the facilitators and the coaches, that was really worthwhile. (Physician)
Participants associated program effectiveness with: time to learn about how to improve chronic care; opportunities for team-building/networking within and across teams to build on lessons-learned; hands-on IT training/guidance to facilitate clinical data collection and interpretation; and flexibility to tailor QI efforts to meet individual practice needs. A decrease in perceived effectiveness was experienced when these elements were not available.
With respect to program implementation, participants would have appreciated: more acknowledgement of challenges faced in practice (e.g. diverse professional training, workload, scarce resources, patient characteristics); increased integration of success stories to facilitate understanding of program tools and their applicability; extra time dedicated to team-building, networking, and collaboration; and further support in gaining organizational/leadership buy-in:
There was so much focus on the data points and the outcomes, that it was easy for us to forget about looking at the people individually, not only the patients, but the individual team members and how we are working together. (Case Manager)
Finally, participants talked about their fear of losing momentum upon program completion because even with the support of implementers, they had insufficient time to conduct QI work in practice.
Educational activities
Participants described the pre-work and learning sessions as overwhelming, but expressed greater comfort with, and more positive views about, their added value (educational content and format) by the end of the program. The series of off-site learning sessions (streams A and B) were described as bringing teams together to learn: how to improve diabetes prevention and management; the value of an interprofessional team approach; and the importance of tracking individual and population-based data to inform QI work and improve patient outcomes:
They did all their sessions about all the different components of how to manage chronic diseases … That was really helpful … The process stuff like the tests of change and how to approach some of the little projects … I found that to be helpful. (Social Worker)
Off-site sessions were said to: provide teams with focused and dedicated time to network, troubleshoot, and plan QI activities; motivate and energise teams; and create a sense of togetherness; whereas single off-site (stream C) and practice-based (stream D) sessions were described as providing less opportunity for initial team development, consensus building around QI process and outcome measures, and interprofessional learning. Similarly, the pre-work manual and educational handouts (streams C, D, and E) were described as useful, but they required self-directed learning that was challenging without dedicated time to devote to this work.
Supportive activities
The practice-based action periods provided the time needed for teams to take advantage of supportive activities and apply new knowledge in practice; however, participants found that the length of the action periods influenced their effectiveness (e.g. periods longer than 4 months resulted in a loss of momentum). Practice coaching was described as contributing to team-building efforts, enhancing participants’ understanding of the theory and applicability of QI methodologies, and providing valuable knowledge to build on lessons-learned:
They would talk and show us different things that have been tried in different areas. I found that helpful … learning from other participant’s trial and error. (Nurse Practitioner)
However, the style of coaches was said to impact their effectiveness; some coaches were too directive and others were not directive enough, particularly early on in the program.
Teleconferences that involved presentations by guest speakers were viewed as informative and beneficial, especially for those involved in shorter, less intensive educational activities (streams C, D, and E). However, the large number of teams on the teleconferences increased the variability in teams’ QI focus and stage of progress, making group discussions difficult:
[Early in the program] it was more like one on one, there were only three [teams] … We were pushed to really think … Now it’s more, we listen in and hear a guest speaker. Not that that’s terrible. But, in the early days, it gave us momentum. It was more effective than it is now. (Case Manager)
Participants described the IT support and web-based tools as instrumental in understanding the need for data quality, establishing QI mechanisms (identifying patients, building a registry, tracking key indicators), coping with limitations of charting systems, and facilitating reporting/interpretation and inter-team collaboration/communication:
They [e-health lead and practice coaches] were hugely helpful in getting the IT training and support we needed … That was invaluable and they took us, they leap-frogged us to a much higher functionality … That was invaluable! (Physician)
Reporting activities
Participants and implementers described the retrieval of clinical data as challenging due to the limitations of the data collection tools, the deficiencies in data quality, and the restrictions of charting systems. Participants talked about implementers’ lack of understanding of their capacity to effectively enter/capture data, and the problems introduced by the flexibility of reporting activities (i.e. inclusion/omission of patient data based on ease of access resulting in monthly reports with changing denominators that impacted the accuracy of patient population profiles needed to guide QI work):
Sometimes we weren’t sure we were getting the real data from our report, the correct data … When you see them trend month-to-month you can see some big changes … [but,] you can have one more diabetes patient come on, … and it changes your percentage so drastically that you feel like you’re going backwards. (Office Manager)
As for the documentation and reporting of QI efforts, participants described it as a time-consuming process without added value.
Discussion
This process evaluation provides details about a CE program, its implementation process, and the perceptions of implementers and participants related to the effectiveness of program activities. Partnerships for Health was a complex and dynamic program implemented in “real world” clinical practice settings and was modified during implementation to better address the needs of participants and overcome participation barriers. The data revealed that it was specific elements of program activities and the way activities were combined that led to a perception of effectiveness rather than one activity versus another (i.e. educational, supportive, reporting). Elements of program activities perceived to be the most effective at improving chronic care were, in part, consistent with those previously identified in the literature: series of off-site/classroom learning sessions separated by relatively short practice-based/workplace action periods; IT support; and practice coaching (Marsteller et al., 2007; Moretti, Kalucy, Hordacre, & Howard, 2010; Ovretveit, 2002; Ovretveit et al., 2002). These elements spanned all three program activities.
The coming together of multiple teams for off-site learning sessions (streams A and B) was described as the most effective activity because it facilitated interaction and created a sense of “togetherness” that was enabling, energising, and motivating through the sharing of strategies to tackle common challenges in QI. The challenges identified in this study, such as data entry and retrieval, organizational/leadership buy-in, and a lack of time/staff/practice resources, were similar to those previously reported (Hammick, Freeth, Koppel, Reeves, & Barr, 2007). The introduction of program streams with different proportions and lengths of time dedicated to classroom versus workplace learning successfully increased enrolment and potential for impact, but elements of activities viewed as critical to overcoming QI challenges were lost (e.g. devoted time to pre-work, collaborative faculty, learning session interactions, and team-building). In some instances, supportive activities compensated for shorter/less intense educational activities (e.g. practice coaching to build on the successes of other teams), but contrary to suggestions by Mills and Weeks (2004), they could not replace them. Further study of the impact of different program streams on outcomes will strengthen conclusions about critical program activities and the ideal ratio of classroom versus workplace learning.
Because activities were optional, participants’ preferences influenced participation. Participants preferred more directive coaching styles, pre-work sessions over the manual, a series of sessions over single sessions, opportunities to network, and hands-on practice coaching support to help ensure progress and maintain momentum. This would suggest the need for a combination of classroom and workplace learning with a higher proportion dedicated to classroom learning. That being said, as previously described by Wilson, Berwick, and Cleary (2003), participants appreciated the flexibility in the program to tailor their QI work to address individual practice needs and characteristics. Thus, a balance between didactic and learner-centred approaches is needed to support the sharing of individual skills, knowledge, and context, and to provide examples of how to integrate new knowledge and apply it in individual workplaces.
An assessment of participants’ needs and readiness during the recruitment process may help strategically align practices to program activities that best suit their needs rather than their preferences. It may also help participants balance the time spent on knowledge acquisition and team-building against the time spent on data collection/reporting. The concept of assessment prior to, and flexibility in, the delivery of QI initiatives has been described previously, and assessment tools have been developed (Coleman et al., 2009; Duckers, Wagner, & Groenewegen, 2008; Schroder et al., 2011; Wilson et al., 2003). Future studies are needed to assess how these tools can be used to effectively align teams to program streams with different proportions of classroom and workplace learning activities.
This study has a number of limitations, including the availability and quality of program documentation, the use of self-report data, and the challenges of studying an evolving program. A developmental or formative evaluation approach may have been better suited to this type of program; however, in keeping with a utilization-focused evaluation approach, the evaluation team strove to meet the evaluation goals of the stakeholders within the prescribed project timeline (Patton, 1994). The process evaluation approach was strengthened by the fact that the evaluation team was external to the program team, data collection occurred concurrently with program implementation, and logic/causal models were employed (Chen, 2005; Petrosino, 2000; Rush & Ogborne, 1991). Also, multiple qualitative methods were applied to capitalize on the strengths of each method, and triangulation was undertaken to merge the data rather than simply collecting data and publishing the results of each method separately (O’Cathain et al., 2010). This study may enhance opportunities to validate the findings from the outcome evaluation component of Partnerships for Health by allowing inferences to be tentatively drawn between program development and program outcomes (Bamberger, Rao, & Woolcock, 2010; Harris et al., 2013).
Concluding comments
CE for healthcare providers is critical to improving care and patient outcomes, but traditional CE approaches are inadequate. This study captured the complexity of a QI initiative’s program structure and implementation process, and the perceptions of implementers and participants related to the effectiveness of program activities. Findings revealed a multi-faceted structure that included both classroom and workplace learning components and that brought together interprofessional teams. This study also revealed the changes made during implementation and identified series of off-site/classroom learning sessions, practice-based/workplace IT support, and practice coaching as the most effective elements of program activities. The program details captured will facilitate drawing causal inferences between the program and its outcomes. Finally, this study contributes valuable information about program activities that were essential to the CE of interprofessional primary healthcare teams as they attempt to transform primary healthcare to improve chronic care.
Declaration of interest
The authors report no conflicts of interest. The authors alone are responsible for the writing and content of this article.
References
- Bamberger M., Rao V., Woolcock M. Using mixed methods in monitoring and evaluation. In: Tashakkori A., Teddlie C., editors. Mixed methods in social & behavioural research. 2nd ed. Thousand Oaks, CA: Sage; 2010. pp. 613–641.
- Borkan J. Immersion/crystallization. In: Crabtree B.F., Miller W.L., editors. Doing qualitative research. 2nd ed. Thousand Oaks, CA: Sage; 1999. pp. 179–194.
- Bricker P.L., Baron R.J., Scheirer J.J., DeWalt D.A., Derrickson J., Yunghans S., Gabbay R.A. Collaboration in Pennsylvania: Rapidly spreading improved chronic care for patients to practices. Journal of Continuing Education in the Health Professions. 2010;30:114–125. doi: 10.1002/chp.20067.
- Chen H. Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage; 2005.
- Coleman K., Mattke S., Perrault P.J., Wagner E.H. Untangling practice redesign from disease management: How do we best care for the chronically ill? Annual Review of Public Health. 2009;30:385–408. doi: 10.1146/annurev.publhealth.031308.100249.
- Cooksy L.J., Gill P., Kelly P.A. The program logic model as an integrative framework for a multimethod evaluation. Evaluation and Program Planning. 2001;24:119–128.
- Crabtree B.F., Chase S.M., Wise C.G., Schiff G.D., Schmidt L.A., Goyzueta J.R., Malouin R.A., et al. Evaluation of patient centered medical home practice transformation initiatives. Medical Care. 2011;49:10–16. doi: 10.1097/MLR.0b013e3181f80766.
- Davis D., O’Brien M.A.T., Freemantle N., Wolf F.M., Mazmanian P., Taylor-Vaisey A. Impact of formal continuing medical education: Do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? Journal of the American Medical Association. 1999;282:867–874. doi: 10.1001/jama.282.9.867.
- Dehar M.A., Casswell S., Duignan P. Formative and process evaluation of health promotion and disease prevention programs. Evaluation Review. 1993;17:204–220.
- Duckers M., Wagner C., Groenewegen P. Developing and testing an instrument to measure the presence of conditions for successful implementation of quality improvement collaboratives. BMC Health Services Research. 2008;8:172. doi: 10.1186/1472-6963-8-172.
- Friedberg M.W., Hussey P.S., Schneider E.C. Primary care: A critical review of the evidence on quality and costs of health care. Health Affairs. 2010;29:766–772. doi: 10.1377/hlthaff.2010.0025.
- Gilbert M., Staley C., Lydall-Smith S., Castle D.J. Use of collaboration to improve outcomes in chronic disease. Disease Management & Health Outcomes. 2008;16:381–390.
- Hammick M., Freeth D., Koppel I., Reeves S., Barr H. A best evidence systematic review of interprofessional education: BEME guide no. 9. Medical Teacher. 2007;29:735–751. doi: 10.1080/01421590701682576.
- Harris S., Paquette-Warren J., Roberts S., Fournie M., Thind A., Ryan B., Thorpe C., et al. Results of a mixed-methods evaluation of partnerships for health: A quality improvement initiative for diabetes care. Journal of the American Board of Family Medicine. 2013;26:711–719. doi: 10.3122/jabfm.2013.06.120211.
- Hutchison B., Levesque J.F., Strumpf E., Coyle N. Primary health care in Canada: Systems in motion. Milbank Quarterly. 2011;89:256–288. doi: 10.1111/j.1468-0009.2011.00628.x.
- Institute for Healthcare Improvement. The breakthrough series: IHI’s collaborative model for achieving breakthrough improvement. Boston, MA: Institute for Healthcare Improvement; 2003. Retrieved from http://www.IHI.org
- Langley G.J. The improvement guide: A practical approach to enhancing organizational performance. 2nd ed. San Francisco, CA: Jossey-Bass; 2009.
- Marsteller J.A., Shortell S.M., Lin M., Mendel P., Dell E., Wang S., Cretin S., et al. How do teams in quality improvement collaboratives interact? Joint Commission Journal on Quality & Patient Safety. 2007;33:267–276. doi: 10.1016/s1553-7250(07)33031-6.
- Mills P.D., Weeks W.B. Characteristics of successful quality improvement teams: Lessons from five collaborative projects in the VHA. Joint Commission Journal on Quality & Safety. 2004;30:152–162. doi: 10.1016/s1549-3741(04)30017-1.
- Moretti C., Kalucy E., Hordacre A., Howard S. South Australian divisions of general practice supporting diabetes care: Insights from reporting data. Australian Journal of Primary Health. 2010;16:60–65. doi: 10.1071/py09057.
- O’Cathain A., Murphy E., Nicholl J. Three techniques for integrating data in mixed methods studies. British Medical Journal. 2010;341:c4587. doi: 10.1136/bmj.c4587.
- Ontario Ministry of Health and Long-Term Care. Preventing and managing chronic disease: Ontario’s framework. 2007. Retrieved from http://www.health.gov.on.ca/en/pro/programs/cdpm/pdf/framework_full.pdf
- Ovretveit J. How to run an effective improvement collaborative. International Journal of Health Care Quality Assurance. 2002;15:192–196.
- Ovretveit J., Bate P., Cleary P., Cretin S., Gustafson D., McInnes K., McLeod H., et al. Quality collaboratives: Lessons from research. Quality and Safety in Health Care. 2002;11:345–351. doi: 10.1136/qhc.11.4.345.
- Patton M.Q. Developmental evaluation. Evaluation Practice. 1994;15:311–319.
- Petrosino A. Answering the why question in evaluation: The causal-model approach. The Canadian Journal of Program Evaluation. 2000;15:1–24.
- Reeves S., Goldman J., Burton A., Sawatzky-Girling B. Synthesis of systematic review evidence of interprofessional education. Journal of Allied Health. 2010;39:198–203.
- Rush B., Ogborne A. Program logic models: Expanding their role and structure for program planning and evaluation. The Canadian Journal of Program Evaluation. 1991;6:93–106.
- Schouten L.M., Hulscher M.E., van Everdingen J.J., Huijsman R., Grol R.P. Evidence for the impact of quality improvement collaboratives: Systematic review. British Medical Journal. 2008;336:1491–1494. doi: 10.1136/bmj.39570.749884.BE.
- Schroder C., Medves J., Paterson M., Byrnes V., Chapman C., O’Riordan A., Pichora D., Kelly C. Development and pilot testing of the collaborative practice assessment tool. Journal of Interprofessional Care. 2011;25:189–195. doi: 10.3109/13561820.2010.532620.
- Solberg L.I. Improving medical practice: A conceptual framework. Annals of Family Medicine. 2007;5:251–256. doi: 10.1370/afm.666.
- Wagner E.H., Glasgow R.E., Davis C., Bonomi A.E., Provost L., McCulloch D., Carver P., Sixta C. Quality improvement in chronic illness care: A collaborative approach. Joint Commission Journal on Quality Improvement. 2001;27:63–80. doi: 10.1016/s1070-3241(01)27007-2.
- Wilson T., Berwick D.M., Cleary P.D. What do collaborative improvement projects do? Experience from seven countries. Joint Commission Journal on Quality & Safety. 2003;29:85–93. doi: 10.1016/s1549-3741(03)29011-0.
- World Health Organization. Preventing chronic diseases: A vital investment. WHO global report. Geneva, Switzerland: World Health Organization; 2005. Retrieved from http://www.who.int/chp/chronic_disease_report/full_report.pdf