Abstract
Objectives
To synthesize lessons learned from the experiences of Agency for Healthcare Research and Quality-funded patient safety projects in implementing safe practices.
Data Sources
Self-reported data from individual and group interviews with Original, Challenge, and Partnerships in Implementing Patient Safety (PIPS) grantees, from 2003 to 2006.
Study Design
Interviews with three grantee groups (n=60 total) implementing safe practice projects, with comparisons on factors influencing project implementation and sustainability.
Data Collection
Semi-structured protocols contained open-ended questions on lessons learned and more structured questions on factors associated with project implementation and sustainability.
Principal Findings
The grantees shared common experiences, frequently identifying lessons learned regarding structural components needing to be in place before implementation, components of the implementation process, components of interventions' results needed for sustainability, changes in timelines or activities, unanticipated issues, and staff acceptance/adoption. Also, fewer Original grants had many of the factors related to project implementation/sustainability than the PIPS or Challenge grantees had.
Conclusions
Although much of what was reported seemed like common sense, surprisingly few projects actually planned for or expected many of the barriers or facilitators they experienced during their project implementation. Others implementing practice improvements likely will share the experiences and issues identified by these implementation projects and can learn from their lessons.
Keywords: Patient safety, safe practices, implementation, program evaluation
Background
In 2002, the U.S. Congress gave the Agency for Healthcare Research and Quality (AHRQ) the mandate to lead federal patient safety improvement activities. In response, AHRQ formulated a strategy with an explicit commitment to improving quality and safety in health care through a combination of scientific research and promotion of improvement (AHRQ 2003). In September 2002, AHRQ contracted with RAND to serve as its Patient Safety Evaluation Center and perform a longitudinal, formative evaluation of its patient safety initiative (Farley et al. 2008; Farley and Battles 2008, in this issue).
This paper reports the results of one component of this broader evaluation—assessment of the experiences and lessons learned from AHRQ-funded projects that focused on the implementation of patient safety practices. Within our process evaluation, these projects were assessed as contributing to the Development of Effective Practices and Tools component of the system framework (see Farley and Battles 2008, in this issue).
Our goal was to inform future related work by AHRQ, other policy makers, and health care providers by synthesizing lessons learned from the experiences of these projects, assessing how their experiences varied, and identifying factors associated with successful implementation. To our knowledge, this is one of the first reported evaluations of patient safety implementation projects. This paper also describes the data collection methods used and how we adjusted those methods in response to changes in data needs during the course of our evaluation. Future evaluations of patient safety implementation projects might learn from our choice of tools and data collection processes.
The Implementation Projects Addressed
We addressed three groups of projects in this evaluation component, as shown below (see Sorbero et al. 2008, in this issue, for details on the sets of funded projects).
Original Grants
In fiscal years (FY) 2000 and 2001, AHRQ funded 81 patient safety projects in seven project groups. Of these 81 projects, we identified 39 that implemented at least one patient safety intervention. These 39 projects were included in the evaluation of implementation experiences reported here. (We refer to these projects here as the “Original Grants.”)
Challenge Implementation Grants
AHRQ awarded 13 Challenge Grants in FY 2003, seven of which were funded to implement and evaluate proven patient safety practices in a variety of health care institutions and health care systems.1 These seven Challenge Implementation Grant projects were included in this portion of our evaluation.
Partnerships in Implementing Patient Safety (PIPS) Grants
In FY 2005, AHRQ awarded 17 PIPS grants to assist health care institutions in implementing safe practices. One of the unique grant requirements was the development of “toolkits” that would assist others in any future implementation of the practices being tested. All of these grants were included in this portion of our evaluation.
METHODS
Our evaluation was formative—not only because we created a feedback loop to AHRQ to share lessons learned after evaluating each group of grantees—but also in the sense that we modified and refined our methods as we learned from our early assessments of grantees' experiences. The methodological changes enabled us to collect richer and more precisely defined data, allowing improved comparisons across project groups. The Original and Challenge grants were awarded early in the patient safety initiative. Therefore, we could collect data from them at two points in time to capture their early implementation experiences as well as their later assessments regarding project success and sustainability. Because the PIPS grants were awarded close to the end of the 4-year term of the evaluation, we could only gather data from them once, in the second year of their projects.
For the Original grants and Challenge grants, we collected qualitative data in the first year of the projects to characterize their early experiences. Specifically, we conducted telephone interviews using semi-structured protocols to collect data from the Original grants in 2003. In 2004, we also used semi-structured protocols during site visits to the Challenge grant project locations, allowing us to gain richer insights into the dynamics of the implementation process and related issues. Site visits were financially feasible for this project group because it comprised only seven projects; they were not feasible for the larger group of 39 Original projects. This is the first of two major differences in methods used, in this case a difference across groups.
We returned to the Original (in 2006) and Challenge (in 2005) grant projects in the last year of the evaluation soon after most of them had completed their work to obtain their retrospective views on their implementation experiences. In this second round of interviews, which were done by telephone, we collected both qualitative data and quantifiable data that allowed us to make more systematic comparisons across groups than was possible with only qualitative data. The development and use of the questions to collect quantifiable data is the other major methodological modification, in this case a difference in the type of data collected.
For the PIPS grants, we collected data only once using telephone interviews conducted in the second year of the projects (2006). We used a semi-structured interview protocol similar to that used for the second data collection for the Original and Challenge grants to obtain both qualitative and quantifiable data on their implementation experiences.
In all interviews except those conducted during the Challenge grant site visits, we interviewed the project Principal Investigators (PIs). At the site visits, we obtained data from PIs as well as other project staff; institutional leadership; end users (physicians, nurses, and pharmacists); and other stakeholders. All data collection was done in accordance with the human subjects requirements of RAND's Institutional Review Board (IRB).
Telephone Interviews on Early Implementation Experiences
The protocols used for the interviews on early experiences of the three project groups covered many of the same topics, focusing on what was working well and where they had faced challenges. The protocol for each group also included questions tailored to the specific circumstances of each grantee group. For example, we asked the Original grantees what difficulties they had with implementing their projects, how they addressed the difficulties, and which implementation issues arose that were unanticipated. For the PIPS grantees, we specifically asked about factors that influenced health care worker acceptance or adoption of the patient safety intervention(s).
Site Visits
We conducted 1.5-day site visits to the seven Challenge Grant projects in 2004. Being on-site, we could delve into many more areas of interest than was possible in telephone interviews. We conducted semi-structured interviews with the PIs and key project staff as well as individual or group interviews with between 25 and 75 other key stakeholders for each grantee. The interviews were similar to those conducted with the Original and PIPS grantees, except that we also asked questions specific to this grant portfolio, probing further into grantees' experience with facilitators of and barriers to implementing their interventions. During the site visits, we also conducted participant observations of the projects and institutional “walk-arounds” to observe the larger project settings (Silverman, Ricci, and Gunter 1990; Yin 2003). Senior investigators experienced in site visit data collection and analysis conducted the data collection.
Evidence-Informed Factors Influencing Project Implementation and Sustainability
Results from the first round of interviews with the Original and Challenge grantees revealed that their implementation experiences were quite similar, leading us to expect more of the same results with subsequent groups of projects. Seeing the need to compare groups more systematically on a defined set of factors, we developed more structured questions addressing a set of factors that an extensive review of the literature identified as important for successful implementation of practice changes or quality improvement.
Synthesizing information from >300 peer-reviewed articles, we identified 12 factors in two domains: (1) infrastructure to support implementation and (2) support for the implementation process (Tharp-Taylor and Farley, submitted for publication) (Table 1). We then developed a set of questions for each factor to be asked in the project interviews. All items within a factor had the same scale so responses could be aggregated at the factor level. Items in many factors were scored on Likert response scales (1=“not at all” and 5=“a great deal”) (results shown in Table 4, as discussed later). Items for the other factors had yes/no response options.
Table 1. Evidence-Informed Factors Influencing Project Implementation and Sustainability*

Infrastructure

1. Patient safety culture†
   - Places a priority on ensuring safe care
   - Facilitates reporting of errors and events
   - Provides nonpunitive environment re: errors
   - Allows anonymous error reporting
   - Responds actively when issues are identified
   - Enables staff to share information to learn from errors
   - Communicates with physicians and staff about patient safety
   - Communicates with patients about patient safety
2. Patient safety standards
   - Documented in protocols or guidelines
   - Published and disseminated widely
   - Clear and easy to understand
3. Incentives or rewards for patient safety
   - Provide financial incentives
   - Provide nonfinancial incentives
   - Offer recognition for efforts taken
4. Data system effectiveness†
5. Culture of excellence†
   - Emphasis on meeting quality performance standards
   - Structure and process to support quality improvement
   - Involved staff in quality improvement
   - Management style that supports quality improvement

Implementation Process

1. Resource support for project
   - Adequate time to carry out tasks related to the project
   - Adequate funding to carry out the project
   - Autonomy to carry out the project
2. Types of stakeholders represented on the project team
3. Implementation team performance†
   - Defined a strategy and plan for the project
   - Persevered in implementing the project
   - Collaborated effectively across disciplines
   - Felt empowered by the organization's leadership
4. Degree of end-user involvement†
   - Shaping project vision
   - Planning for start-up
   - Implementing the project
   - Making revisions during the implementation
   - Promoting/marketing the project
5. Financial support
   - To complete project activities
   - To sustain the project
6. Monitoring performance outcomes†
   - Use of quantified measures
   - Analyzing trends
   - Reporting data to stakeholders regularly
7. Leadership support/involvement for the project†
   - Shaping project vision
   - Planning for start-up
   - Making revisions during implementation
   - Requesting project updates from the team
   - Providing guidance and feedback to the team
   - Assisting in removing barriers to implementation
   - Promoting/marketing the project
*These were identified in a review of the literature as being important to the success of implementation projects. However, not surprisingly, many of these factors also were identified in the early grantee interviews, indicating that the grantees were sharing experiences already reported by others in published papers.

†Likert scale response options (1=“not at all” and 5=“a great deal”) were used for these items. Items for the other factors had yes/no response options.
Table 4. Grantee Reports of Evidence-Informed Implementation Success Factors*
Implementation Success Factor | Original Grants (n=39)† | Challenge and PIPS Grants (n=21)† |
---|---|---|
A. Organizational infrastructure scales | ||
Degree to which the following elements are in place | Percent Reporting Score of 4 or 5‡ | |
1. A patient safety culture | 9 | 45 |
2. An effective data system | 46 | 42 |
3. A culture of excellence (quality improvement) | 42 | 65 |
The existence of | Scale Average (Scale Maximum) | |
4. Patient safety standards | 2.1 (of 3) | 2.7 (of 3) |
5. Incentives for safety improvement efforts | 1.3 (of 3) | 1.9 (of 3) |
B. Planning and implementation scales | ||
Scale Average (Scale Maximum) | ||
1. The degree of leadership's involvement in the project | 3.4 (of 10) | 3.6 (of 10) |
2. Whether or not organizations' leadership provided autonomy and adequate resources to support project | 2.3 (of 3) | 2.9 (of 3) |
3. Whether or not project's financial resources are sufficient | 1.3 (of 2) | 1.5 (of 2) |
4. Types of stakeholders on the project team | Percent Having Stakeholder on Team | |
Senior management | 66 | 61 |
Mid-level management | 91 | 90 |
Physicians | 89 | 90 |
Nurses | 86 | 86 |
Patients | 16 | 19 |
Community representatives | 38 | 14 |
Information technology | 71 | 71 |
Legal | 44 | 19 |
5. Extent to which the project team has | Percent Reporting Score of 4 or 5‡ | |
Defined a strategy and plan for the project | 88 | 95 |
Persevered in implementing the project | 97 | 100 |
Collaborated effectively across disciplines | 91 | 100 |
Felt empowered by the organization's leadership | 81 | 95 |
6. Degree of end-user involvement | 28 | 33 |
7. Extent to which data will be used for monitoring performance outcomes | 70 | 95 |
*Factors identified in a review of the literature as important for project implementation and sustainability.

†Given the small sample sizes, we did not test whether differences between groups were statistically significant.

‡Response scale: 1=“not at all” to 5=“a great deal.”

PIPS, Partnerships in Implementing Patient Safety.
These questions were included in the second round of interviews with the Original and Challenge grantees as well as in the only interviews done with the PIPS grantees. As such, the PIs of the Original and Challenge grant projects were reporting on mature projects and could reflect on their full implementation experience. All but one of the Challenge grants participated in these interviews. The questions were e-mailed in advance of the interviews so respondents could have the structured questions and response scales in front of them during the interview.
Data Analysis
To examine the qualitative data collected from open-ended questions, we conducted content analysis to identify key issues raised. Drawing on grounded theory (Glaser and Strauss 1967; Miles and Huberman 1994), we first read the texts to identify themes (Blumer 1969). Themes can be considered abstract constructs that researchers identify before, during, and after data collection; they come from literature reviews, theoretical models, prior experiences, and the text itself. We looked for repetitions across informants, shifts in content, and examples that suggest processes, actions, assumptions, and consequences (Glaser and Strauss 1967; Miller and Crabtree 1992; Jehn and Doucet 1996, 1997; Strauss and Corbin 1998; Ryan and Bernard 2000, 2003). We also explored both “within-grantee group” and “cross-grantee group” thematic similarities and differences. To identify the former, we reviewed themes for each set of grantees, examining the degree to which themes were central or peripheral to them. To identify cross-group patterns, we examined the degree to which the same themes were salient to different groups.
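To make the cross-group comparisons concrete, the sketch below (in Python) shows one way such theme tallies could be organized; the coded segments and theme labels are hypothetical illustrations, not the study's actual codes.

```python
from collections import Counter

# Hypothetical coded interview data: (grantee_group, theme) pairs produced by
# content analysis. Theme labels are illustrative, not the study's codes.
coded_segments = [
    ("Original", "staff buy-in"), ("Original", "leadership support"),
    ("Challenge", "staff buy-in"), ("Challenge", "existing partnerships"),
    ("PIPS", "staff buy-in"), ("PIPS", "existing partnerships"),
]

# Within-group salience: how often each theme appears in each group's interviews.
by_group: dict[str, Counter] = {}
for group, theme in coded_segments:
    by_group.setdefault(group, Counter())[theme] += 1

# Cross-group patterns: themes raised by more than one grantee group.
theme_groups: dict[str, set[str]] = {}
for group, counts in by_group.items():
    for theme in counts:
        theme_groups.setdefault(theme, set()).add(group)
shared_themes = {t: sorted(g) for t, g in theme_groups.items() if len(g) > 1}

print(shared_themes)
# {'staff buy-in': ['Challenge', 'Original', 'PIPS'],
#  'existing partnerships': ['Challenge', 'PIPS']}
```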
To analyze responses to the more structured evidence-informed implementation factors, we first constructed factor score indexes that combined responses to individual items in each factor (e.g., all the questions about patient safety culture). As responses were in two formats (1–5 scale or yes/no responses), two types of indexes were built. For scaled responses, average scores were calculated and then category scores (1–5) were defined for each respondent using the average score. For yes/no responses, the number of “yes” responses was summed across all the relevant questions for each factor. Index scores are reported for all but two of the success factors. For team membership, we present the percentage of respondents that reported having each type of stakeholder on its implementation team. For team performance, we report the categorized scores for each element within that factor.
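As an illustration of the index construction, the following sketch implements the two index types described above. The item responses are hypothetical, and because we do not specify here exactly how average scores map to category scores, the rounding rule in the sketch is an assumption.

```python
from statistics import mean

def likert_index(responses: list[int]) -> int:
    """Average a factor's 1-5 Likert items, then assign a 1-5 category score.

    Rounding the average is an assumed mapping; the text states only that
    category scores were defined from the average score.
    """
    return round(mean(responses))

def yes_no_index(responses: list[bool]) -> int:
    """Count the 'yes' responses across a factor's items."""
    return sum(responses)

# Hypothetical responses for one respondent:
culture_items = [4, 5, 3, 4]           # patient safety culture (Likert items)
standards_items = [True, True, False]  # patient safety standards (yes/no items)

print(likert_index(culture_items))    # 4 -> counts toward "score of 4 or 5"
print(yes_no_index(standards_items))  # 2, against a scale maximum of 3
```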
RESULTS
The Original, Challenge Implementation, and PIPS grants focused on a wide variety of patient safety issues and targeted a mix of special populations (Table 2). They also operated in a variety of health care settings, the most frequent being inpatient acute care units and outpatient clinics or provider offices.
Table 2. Number of Patient Safety Issues Addressed and Special Populations Targeted, by Type of Grant

| Original Grants | Challenge Implementation | PIPS |
---|---|---|---|
Patient safety issue | |||
Medication ordering/administration | 15 | 4 | 9 |
Nosocomial infections | 3 | 2 | 1 |
Falls/pressure ulcers | 3 | ||
Nurse staffing | 1 | ||
Provider fatigue, working conditions | 3 | 1 | |
Surgical/invasive procedure errors | 6 | 1 | |
Diagnostic/treatment errors | 9 | 3 | |
Equipment/device failure | 1 | 1 | |
Ordering/administering blood | 1 | 1 | |
Care procedures and coordination* | 1 | 3 | |
Wrong patient/procedure/test | |||
General patient safety | 25 | 2 | |
Hand offs | 1 | 3 | |
Other issues | 3 | ||
Total number of issues studied | 70 | 13 | 20 |
Average number per project | 1.7 | 1.9 | 1.2 |
Special populations targeted | | |
Elderly | 4 | 3 | 4 |
Minority populations | 10 | 1 | 3 |
Low income | 7 | 1 | 2 |
Health vulnerable | 3 | 4 | 3 |
Other vulnerable | 2 | 1 | 1 |
*Care procedures and coordination include errors in the admitting process (such as applying the wrong patient identification bracelet), misplaced documentation (such as “lost” medical records), failure to notify patients of a positive test result, failure to register a patient in the emergency department resulting in delayed care and an adverse outcome, and similar errors.
AHRQ, Agency for Healthcare Research and Quality; PIPS, Partnerships in Implementing Patient Safety.
Lessons Learned from Project Implementation
Table 3 summarizes the main types of lessons identified in the qualitative interviews—grantees' perceptions of facilitators of, or barriers to, successful project implementation. These results highlight the similarities in the lessons learned reported by the three groups of projects. We organized these lessons into three categories: (1) structural components that (ideally) should be in place as an intervention is being introduced, (2) components of the implementation process, and (3) components of the intervention's results that are needed for sustainability.
Table 3. Key Facilitators of Project Implementation Identified by the Three Grantee Groups
Key Facilitator* | Identified by Original Grants | Identified by Challenge Grants | Identified by PIPS Grants |
---|---|---|---|
Structural components in place before implementation | |||
Existing relationships among involved organizations | X | X | X |
Prior experience with relevant technologies | X | ||
Institutional commitment and leadership | X | X | X |
Prior experience in performance improvement | X | X | |
Integrated data systems | X | X | |
Supportive organizational culture | X | X | X |
Existing trust among participating stakeholders | X | X | X |
Components of implementation process | |||
Choice of feasible and simple intervention | X | X | |
Champions designated to drive the process | X | X | |
Interdisciplinary and skilled staff team | X | X | |
Sufficient staff to execute intervention | X | X | |
Team commitment and perseverance | X | X | X |
Financial support for the change process | X | X | |
Participative planning process for the intervention | X | ||
Sufficient time to develop/execute intervention | X | X | X |
Sufficient expected return on investment from change | X | ||
Buy-in from all disciplines involved or affected | X | X | |
Effective transition to new practices | X | X | |
Management of inter-organization activities | X | X | X |
Management of evolving technologies | X | X | X |
Technical assistance and training | X | X | |
Change in organizational culture | X | X | |
Open communication with stakeholders | X | X | |
Effective use of data for transparency | X | X | |
Benefits that accrue to all involved | X | X | |
Plan for HIPAA privacy requirements | X | ||
Components of the intervention results needed for sustainability | |||
Evidence of effects on costs and outcomes | X | ||
Infrastructure to sustain new practices | X | ||
Evaluation integrated into regular business process | X | X | |
Dissemination of results to other organizations | X |
*Grantees reported that having these elements was a facilitator of project implementation and that not having them was a barrier.
HIPAA, Health Insurance Portability and Accountability Act; PIPS, Partnerships in Implementing Patient Safety.
Structural Components in Place before Implementation
Many grantees felt they were able to implement their projects successfully because their institutions had distinct resources and characteristics. For example, many grantees reported it was important to have existing relationships with partnering organizations (because many of the interventions involved multiple partners). Given the time required for individuals from different institutions to become acquainted, it would have been difficult to build those relationships while the project was implementing practice and culture changes. Rural and smaller grantees almost always reported having existing relationships, and the trust these brought allowed them to pool their otherwise-limited resources more quickly. Having prior experience in performance improvement was also seen as a key structural component, especially among smaller institutions, which were less likely to have had such experience. Grantees reported that this inexperience often led to project delays, budgetary issues, and an inability to robustly assess changes in their facilities' outcomes.
Components of the Implementation Process
The grantees we interviewed identified several components of the implementation process that were integral to their projects' success. In general, they reported that simplicity won out over complexity. For example, they felt it was important to keep the project scope narrow or to use small-scale pilots initially, with later replication on a larger scale once promising interventions were identified and methods were refined. Grantees reported that interventions with multiple components met with mixed levels of success, with some project components deemed successful while others were less so. Grantees noted that research methods could be applied more rigorously at a few sites, so smaller-scale projects were more likely to result in higher-quality, well-documented data. Likewise, they felt it important to focus on a limited set of outcomes.
Almost all grantees emphasized either the need to designate “champions” to drive the implementation process or to obtain “buy-in” from persons of all professional disciplines involved in the project. This often stems from the differing cultures among professional disciplines, some of which have tended to focus on individual performance when unanticipated events occur. As such, institutions implementing patient safety culture or practices sometimes face “push-back” because of fear among health professionals that creating nonpunitive systems will create a tolerance of failure, foster carelessness, and reduce accountability. Although opinions differed on how to change culture, several grantees advised the following, based on their experiences: (1) change peoples' minds with data rather than argument; (2) let the success of projects help shape attitudes and allow a patient safety culture to emerge over time; and (3) make the patient the guiding focus to mitigate the “turf” wars that could develop among professional groups.
Another key component of successful interventions was having technical assistance. For example, most grantees from smaller institutions reported they had no experience with writing grants or conducting interventions, so they needed help across several project activities, such as managing human subjects committees and budgets. However, almost all grantees stressed that team perseverance and commitment to the intervention often counterbalanced the challenges experienced while implementing it. Additionally, although it would seem that all projects would consider finances to be important for project implementation, only the Challenge grantees reported it was important that their projects have a sufficient expected return on investment. Likewise, only Challenge and PIPS grantees mentioned it was important to have financial support for their projects.
Components of the Interventions' Results Needed for Sustainability
Grantees identified several components they felt were needed to sustain the intervention beyond the project period. These included evidence of their interventions' effects on costs and outcomes and creation of an infrastructure that can support continued use of the new practices. Establishing a system of regular monitoring and performance evaluation can help keep the project on track to meet project goals. As part of the monitoring process, one grantee offered that it is important to choose an achievable short-run aim to enhance the possibility of early success and establish a “backbone” for continued activities. Grantees also thought it important to disseminate results to others to continue to build support for the project. Reasons cited for lack of sustainability included financial constraints, lack of staff buy-in, limited staff time, insufficient resources (e.g., leadership support or infrastructure), and burden created by substantial data collection or management requirements.
Grantees reported three additional types of lessons learned, as shown below.
Challenges Resulting in Shifts in the Project Timeline or Activities
Owing to the many challenges involved in implementing their interventions, several grantees reported time delays in implementation. The most frequently mentioned challenges included obtaining buy-in from participating clinicians and institutions, obtaining IRB approvals, working with information technology vendors and other outside contractors, timing conflicts with participating providers' schedules, recruiting and staffing issues, and Health Insurance Portability and Accountability Act (HIPAA) and other legal issues. IRB issues were especially prevalent among projects with partnering organizations because each organization often had its own requirements and approval procedures, so project teams had to manage several disparate IRB processes. Additionally, we found that evaluating vendor products (e.g., handheld devices, intelligent pumps) could be extremely time consuming because it could take months for vendors to provide prototypes to be tested in practice. Grantees also reported that implementing new technology took longer than planned, often because of testing requirements they had not anticipated.
Many grantees reported that they modified their project activities as a result of the challenges they faced. Some extended their timelines because they had underestimated staffing needs and the time required to complete tasks. Others added personnel or downsized their projects by reducing the number of patients or sites involved. One grantee identified the staff implications of new technology, noting that “A good rule of thumb is the higher the probability that a technology can be used to avoid human errors, the more human resources are needed for successful implementation.”
Unanticipated Issues
Few projects actually planned for or expected many of the barriers or facilitators they experienced during implementation. For example, grantees reported that they had not anticipated how important it was to obtain buy-in from physicians. They also did not anticipate that provider buy-in needed to be a continuous process rather than something that occurred only at project inception. This was a particular issue for projects involving medical staff who rotate across units (e.g., residents and other trainees) and, as such, were not involved throughout the project. Similarly, one grantee offered that it is important to avoid controversial interventions, because it is much easier to get projects up and going by selecting those that will pass the “nod” test early on.
Staff Adoption or Acceptance
The grantees reported that it was more difficult to change staff practices than they had realized, with some personnel refusing to let go of their traditional practices to adopt new routines. One grantee said it was important to select interventions that are highly supported by evidence because they are less open to debate and more readily accepted by staff.
After hearing from Original and Challenge grantees that staff acceptance was one of the lessons they learned, we asked PIPS grant leadership specifically about this issue. They reported that staff whose workloads increased or whose workflow changed as a result of the intervention were more likely to resist change. Additionally, one grantee reported it was difficult to obtain staff acceptance because the project was imposed on staff by the hospital network leadership and they did not see the need for the intervention. Alternatively, several grantees said that involving end users in the design and implementation process was integral to their acceptance of the intervention.
Evidence-Informed Factors Influencing Implementation and Sustainability
We used the structured questions on the 12 factors identified as influencing implementation to compare grantees' reports of the degree to which each factor was in place for their projects. We found that the Challenge and PIPS grantees had similar scores and that these scores tended to differ from the Original grantees' scores. Therefore, we aggregated the index scores for the Challenge and PIPS grantees and compared them with the Original grantees' scores (Table 4). Given the small sample sizes, we did not test whether these differences were statistically significant; instead, we describe observed differences below.
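The “percent reporting a score of 4 or 5” statistic shown in Table 4 can be computed from the respondent-level category scores. A minimal sketch, using hypothetical scores rather than the study's data:

```python
# Hypothetical category scores (1-5) for one factor, one score per respondent.
original_scores = [2, 3, 4, 2, 1, 3, 3, 2, 5, 3]
challenge_pips_scores = [4, 5, 3, 4, 2, 5, 3]

def pct_high(scores: list[int]) -> float:
    """Percent of respondents whose category score is 4 or 5."""
    return 100 * sum(s >= 4 for s in scores) / len(scores)

print(f"Original: {pct_high(original_scores):.0f}%")             # Original: 20%
print(f"Challenge/PIPS: {pct_high(challenge_pips_scores):.0f}%")  # Challenge/PIPS: 57%
```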
In general, many grantees lacked many of the factors found to be important to implementation. The majority had low scores for having most organizational infrastructure elements in place and for the involvement of their organizations' leadership. Fewer than a third reported a high degree of end-user involvement. However, most reported strong project team performance.
When comparing the Original grants with those funded later (the Challenge and PIPS grants), the former group appeared to have fewer factors in place to support project implementation or sustainability. Specifically, fewer Original grantees than Challenge/PIPS grantees perceived four of the five organizational factors as being available: a patient safety culture, a culture of excellence, patient safety standards, and incentives to encourage safety improvement efforts. Fewer differences were found between the Original grants and the other two groups for factors related to the implementation process. The Original grantees were, however, less likely than the Challenge/PIPS grantees to report that their organization provided autonomy and adequate resources to support the project. The Original grantees also were less likely to report that data would be used for monitoring performance outcomes and that their project team felt empowered by their organizations' leadership.
The three groups of grantees appeared to be similar regarding the reported availability of other factors that could influence implementation or sustainability, including leadership involvement in their projects, financial investments in the projects, team performance, and the degree of end-user involvement. They also had similar types of stakeholders on their project teams, with two exceptions: more Original grantees reported having community representatives and legal staff on their teams than the Challenge or PIPS grantees did.
DISCUSSION
Empirical Results
Many of the AHRQ patient safety grantees in the three sets of grants reported similar qualitative information on lessons they learned regarding project implementation. They identified several structural components that they learned should have been in place before implementing their projects. They also identified many similar components of the implementation process that were barriers or facilitators to implementation. Finally, they reported similar issues related to project sustainability.
Although most of what we found is common sense, surprisingly few projects actually planned for or expected many of the barriers or facilitators they experienced during implementation. For example, project teams need to be sure they are ready to implement before they attempt to do so, and they have to accept that they are going to run into challenges and be willing to make mid-course corrections. Project teams also need to realize how time consuming and difficult implementing a project can be.
When we gathered more quantified data on the set of factors that contribute to implementation success, we found that many grantees lacked several of those factors. In addition, the Original grantees tended to have lower scores on these factors than the other two groups, suggesting less readiness to implement change. Perhaps the Original grantees had steep learning curves because they were the first projects undertaken in AHRQ's patient safety initiative, with limited previous experience by others to guide them. By the time the Challenge and PIPS grants began, patient safety had become a more visible priority and more was known about implementing safer practices. Grantees also differed in some surprising ways. For example, fewer of the later grantees (Challenge and PIPS) reported having community representatives or legal staff on their project teams. We do not know the reasons for this difference, but it may relate to their focus on fewer practices than the Original grantees addressed.
Methods Used
The choice of methods used and the data we collected evolved over time in response to what grantees were reporting. Specifically, the similarity of the grantees' reports to open-ended questions on lessons learned or facilitators and barriers led us to seek a more systematic method for documenting and comparing implementation experiences. Based on results of our extensive literature review, we designed a set of more structured measures with which to assess grantees' experiences. We then used these structured measures in the second interviews with the Original and Challenge grantees and in the only PIPS interviews conducted, enabling us to compare the groups' experiences systematically.
Limitations
We were unable to statistically compare differences between groups of grantees in the degree to which they felt particular factors accounted for the success of their project implementation. However, differences were apparent between the grantees funded earlier versus later, suggesting that those funded later had higher levels of readiness to implement projects effectively. Additionally, we did not explore how project experiences varied with projects' demographic characteristics. Although this would have been appropriate for a research study of characteristics predicting their experiences, the purpose of this analysis was to identify general experiences and issues to help guide others in their implementation efforts. Finally, relying on notes instead of transcribed recordings may introduce post hoc bias into the interpretation of the data. We took careful notes and finalized them quickly after each interview, so we believe any such bias is limited. Further, the consistency of the information obtained from the grantees gives us confidence in the results.
CONCLUSION
The three sets of AHRQ projects we examined were addressing a diversity of patient safety issues and practices, yet they shared many experiences. It is likely that others implementing similar practice improvements also will share these experiences and issues. To our knowledge, this is one of the first reported evaluations of patient safety implementation projects. We did not find a “magic formula” for effective implementation—each project has unique characteristics and issues that it needs to manage to achieve safer health care practices. Even the best-prepared organization can expect to encounter obstacles that must be managed, and many unanticipated issues are likely to arise. Additionally, future evaluations might consider using both quantifiable and qualitative data to enable systematic comparisons of projects' experiences while best capturing the dynamics of how organizations and teams experience the implementation process.
Acknowledgments
Joint Acknowledgment/Disclosure Statement: The research described in this manuscript was undertaken as one part of a larger evaluation project funded under contract No. 290-02-0010 with the Agency for Healthcare Research and Quality (AHRQ). Pursuant to that contract, AHRQ had the right to review and comment on this manuscript before its publication.
This manuscript was developed with data collected and/or analyzed under contract with AHRQ. The information and opinions expressed herein reflect solely the position of the authors. Nothing herein should be construed to indicate AHRQ support or endorsement of its contents.
Disclosures: None.
We would like to thank Jake Dembosky for his assistance with this project.
NOTE
The other six Challenge Grants were limited to testing specific risk assessment methodologies for identifying patient safety problems and, therefore, were assessed separately; the results are not included in this paper.
Supporting Information
Additional supporting information may be found in the online version of this article:
Appendix SA1: Author Matrix.
Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.
References
- Agency for Healthcare Research and Quality. AHRQ's Patient Safety Initiative: Building Foundations, Reducing Risk. An Interim Report to the United States Senate Committee on Appropriations. Rockville, MD: AHRQ; 2003.
- Blumer H. Symbolic Interactionism: Perspective and Method. Englewood Cliffs, NJ: Prentice-Hall; 1969.
- Farley DO, Battles JB. Evaluation of the AHRQ Patient Safety Initiative: Framework and Approach. Health Services Research. 2008. DOI: 10.1111/j.1475-6773.2008.00931.x.
- Farley DO, Damberg CL, Ridgely MS, Sorbero MES, Greenberg MD, Haviland AM, Teleki SS, Mendel P, Bradley LA, Dembosky JW, Fremont A, Nuckols TK, Shaw RN, Straus S, Taylor SL, Yu H, Tharp-Taylor S. Assessment of the AHRQ Patient Safety Initiative: Final Report—Evaluation Report IV. Santa Monica, CA: RAND Corporation; 2008. No. TR-563-AHRQ.
- Glaser BG, Strauss AL. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago, IL: Aldine; 1967.
- Jehn K, Doucet L. Developing Categories from Interview Data: Text Analysis and Multidimensional Scaling, Part 1. Field Methods. 1996;8(2):15–6.
- Jehn K, Doucet L. Developing Categories for Interview Data: Consequences of Different Coding and Analysis Strategies in Understanding Text, Part 2. Field Methods. 1997;9(1):1–7.
- Miles MB, Huberman AM. Qualitative Data Analysis: An Expanded Sourcebook. 2d edition. Thousand Oaks, CA: Sage Publications; 1994.
- Miller WL, Crabtree BF. Primary Care Research: A Multimethod Typology and Qualitative Road Map. In: Crabtree BF, Miller WL, editors. Doing Qualitative Research. Vol. 3. Thousand Oaks, CA: Sage Publications; 1992. pp. 3–28.
- Ryan GW, Bernard HR. Data Management and Analysis Methods. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. 2d edition. Thousand Oaks, CA: Sage Publications; 2000. pp. 769–802.
- Ryan GW, Bernard HR. Techniques to Identify Themes. Field Methods. 2003;15(1):85–110.
- Silverman M, Ricci EM, Gunter MJ. Strategies for Increasing the Rigor of Qualitative Methods in Evaluation of Health Care Programs. Evaluation Review. 1990;14(1):57–73.
- Sorbero MES, Ricci K, Lovejoy S, Haviland AM, Smith L, Bradley LA, Hiatt L, Farley DO. Assessment of Contributions to Patient Safety Knowledge by the AHRQ-Funded Patient Safety Projects. Health Services Research. 2008. DOI: 10.1111/j.1475-6773.2008.00930.x.
- Strauss AL, Corbin JM. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. 2d edition. Thousand Oaks, CA: Sage Publications; 1998.
- Tharp-Taylor S, Farley DO. Development of Evidence Regarding Important Factors for Successful Implementation of Health Care Practice Improvements, submitted for publication.
- Yin RK. Case Study Research: Design and Methods. Beverly Hills, CA: Sage Publications; 2003.