Health Services Research. 2009 Apr;44(2 Pt 2):701–716. doi: 10.1111/j.1475-6773.2008.00927.x

Training a Patient Safety Work Force: The Patient Safety Improvement Corps

Stephanie S Teleki, Cheryl L Damberg, Melony E S Sorbero, Rebecca N Shaw, Lily A Bradley, Denise D Quigley, Allen M Fremont, Donna O Farley
PMCID: PMC2677036  PMID: 21456112

Abstract

Objective

To evaluate the short-term effects of the Patient Safety Improvement Corps (PSIC), an Agency for Healthcare Research and Quality–sponsored program to train state teams in patient safety skills/tools, and to assess its contribution to building a national infrastructure supporting effective patient safety practices.

Data Source

Self-reported information gathered from (1) group interviews at the end of each year; (2) individual telephone interviews 1 year later; (3) faxed information forms 2 years later.

Study Design

Program evaluation of immediate and short-term process and impact (use of skills/tools, information sharing, changes in practice).

Data Collection

Semistructured interviews; faxed forms.

Principal Findings

One year after training, approximately half of Year 1 and 2 state agency representatives reported they had initiated or modified legislation to strengthen safe practices, and modified adverse event oversight procedures. Approximately three-quarters of hospital representatives said training contributed to modifications to adverse event oversight procedures and promotion of patient safety culture. Two years posttraining, approximately three-quarters of Year 1 trainees said they continued to use many skills/tools.

Conclusions

The PSIC contributed to building a national infrastructure supporting effective patient safety practices. Expanded training is needed to reach a larger fraction of the population for which this training is important.

Keywords: Patient safety, infrastructure, training program, program evaluation


The Patient Safety Improvement Corps (PSIC) is a nationwide training program that is a key component of the patient safety initiative operated by the Agency for Healthcare Research and Quality (AHRQ). The goal of the PSIC is to increase the number and capacity of health care professionals with core patient safety skills/tools, given a deficit of such individuals. AHRQ developed the PSIC based upon feedback from states concerning their anticipated patient safety responsibilities and lack of resources to address them (Teleki 2007).

This paper presents methods and findings of an evaluation of the PSIC undertaken within the larger evaluation of AHRQ's patient safety initiative. It specifically addressed the infrastructure component of the system framework used in that larger evaluation (see Farley and Battles 2008, in this issue). Our goal was to assess the PSIC's contribution to building a national infrastructure supporting effective patient safety practices.1

Design of the PSIC

The PSIC was designed to provide training to teams from all U.S. states and the District of Columbia over 3 years (2003–2006). It aimed to develop participants’ skills to (1) conduct effective investigations of reports of medical errors, (2) prepare meaningful reports on findings, (3) develop and implement sustainable system interventions, (4) measure and evaluate the impact of interventions, and (5) ensure sustainability of effective interventions (AHRQ 2006). AHRQ contracted and collaborated with the Department of Veterans Affairs National Center for Patient Safety (NCPS) to provide training, given its experience implementing patient safety education and practices.

AHRQ developed the format and curriculum based upon feasibility study findings, in consultation with experts and key stakeholders. AHRQ specified that participants should be teams of individuals from state agencies with oversight responsibility for patient safety, and up to two of each state's selected hospital partners, for a total of approximately four participants per state. AHRQ originally envisioned focusing on state agency representatives but included hospital representatives at the state agencies' request, so that these diverse stakeholders could build collaborative relationships.

The training was a 1-year program, repeated for 3 years with three different trainee groups. Each annual program consisted of three 1-week, in-person sessions; homework assignments between sessions; and an improvement project. The didactic sessions provided training on practical applications of patient safety science, change management, medical errors reporting and analysis, medical/legal issues, and application of skills/tools. Examples of topics include the following: Root Cause Analysis (RCA); Healthcare Failure Mode and Effect Analysis (HFMEA); human factors engineering; and patient safety culture.2

Instructors were NCPS staff and experts from AHRQ and the private sector. The NCPS also facilitated technical assistance conference calls. The program was tuition-free; travel expenses were reimbursed; and participants received resource materials. In total, 52 teams from 49 states and the District of Columbia received training. In Year 1, 15 teams participated; in Year 2, 21; and in Year 3, 16. Two states, Maryland and Massachusetts, sent teams in both Years 1 and 2. Louisiana was unable to participate due to Hurricane Katrina.

Evaluation Goals

We aimed to assess the extent to which participants gained knowledge and skills from the training and to document how they used those skills to improve patient safety. We also aimed to provide real-time feedback to AHRQ and the NCPS. We used this information to assess the PSIC's cumulative contributions toward building a national patient safety infrastructure for effective practices.

Methods

We employed a longitudinal study design, gathering data from and tracking the progression of each of the three trainee groups. Because the program was voluntary, it was important to track the specific characteristics and experiences of each group so that we could assess any differences in state teams across years. We did not use a randomized controlled design because the program trained teams from all the U.S. states, precluding the availability of other states to use as controls. In addition, because the number of participants in the population of teams was small, we did not randomly sample, which would have further reduced the sample size.

Table 1 summarizes data collection methods and numbers of participants involved. In the first data collection step for each group, we conducted in-person, group interviews with most of the teams during their final training session each year to assess immediate program impact. We used semistructured protocols that included questions about practical aspects, skills/tools developed, immediate challenges, perceived benefits, and information sharing. We also asked each participant to assess retrospectively his/her growth in knowledge and skills (i.e., entering and leaving training).

Table 1.

Evaluation Data Collection for Each of the Three Patient Safety Improvement Corps (PSIC) Trainee Groups

Group interviews: Year 1, 73% (11 of 15 groups); Year 2, 57% (12 of 21 groups); Year 3, 75% (12 of 16 groups)
One-year follow-up (telephone interview): Year 1, 72% (38 of 53 individuals); Year 2, 64% (58 of 91 individuals); Year 3, NA
Second-year follow-up (faxback form): Year 1, 66% (25 of 38 individuals); Year 2, NA; Year 3, NA

NA, not applicable; no data were collected given that RAND's evaluation period had ended.

In the first year, we used an exploratory strategy to identify pertinent issues for examination in a more structured way in subsequent years. The protocol used for the first group interviews comprised open-ended questions. Using information obtained from these interviews, we refined our tools to collect more quantitative data on specific topics in the second and third years.

Approximately 1 year after each of the Year 1 and 2 groups completed training, we performed the second data collection step. We conducted individual follow-up telephone interviews with a minimum of two graduates from each team (at least one state and one hospital representative).3 Our goal was to determine whether changes to practices were occurring 1 year following training completion. Questions focused on the short-term usefulness of skills/tools, how these were used in practice, and the impact of training on actions.

Finally, 2 years after the Year 1 group completed training, we collected updated information via a faxback form from members of this group who had participated previously in the telephone interviews. Through this follow-up, we aimed to begin assessing sustainability of changes that occurred in the first year following training. To enable comparisons 1 and 2 years posttraining, we purposefully sought feedback from individuals from whom we had collected data the prior year. The form contained questions about sustained use of skills/tools, networking, and information sharing.

We recruited for the group interviews by asking teams to volunteer at the beginning of the third training session each year. We interviewed a substantial proportion of the entire population of state teams: 11 of the 15 Year 1 teams (73 percent); 12 of the 21 Year 2 teams (57 percent); and 12 of the 16 Year 3 teams (75 percent).

For the telephone interviews, we recruited study participants from all participating teams using NCPS's participation lists. Within each state team, we sorted individuals by the type of entity they represented (i.e., state agency or hospital) and aimed to recruit at least one representative of each type. We obtained high response rates: 93 percent (38/41) and 85 percent (58/68) for the Year 1 and 2 groups, respectively. As such, we interviewed a substantial proportion of the population of all participants: 38 of the 53 Year 1 trainees (72 percent) and 58 of the 91 Year 2 trainees (64 percent). In both years, we achieved our goal of interviewing at least two members of each state team—at least one of whom was a state representative and one, a hospital representative. For the 2-year follow-up data collection, we sent faxback forms to all Year 1 trainees who had participated in the 1-year telephone interviews a year earlier (n=38), and 25 of these trainees (66 percent) returned forms.

RAND researchers reviewed all qualitative data and conducted content analyses to identify and compare themes. Counts were tabulated for quantitative responses. Using data collected 1 and 2 years after the program's end, we focused on the percentages of respondents reporting use of skills/tools, and on changes in specific actions undertaken because of training. For year-to-year comparisons, we conducted two-tailed tests of statistical significance at the 95 percent confidence level. However, due to small sample sizes, we did not anticipate adequate power to detect differences.
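The article does not name the specific test procedure used for these year-to-year comparisons. As a minimal sketch, assuming a two-tailed two-proportion z-test (with Fisher's exact test as a small-sample check), the following Python snippet illustrates the kind of comparison reported in Table 2, using counts reconstructed from the reported percentages for patient safety culture survey use (29 percent of 38 Year 1 trainees versus 57 percent of 58 Year 2 trainees).

# Illustrative sketch only: the paper reports two-tailed significance tests at the
# 95 percent confidence level but does not name the procedure; the two-proportion
# z-test and Fisher's exact test here are assumptions for illustration.
from statsmodels.stats.proportion import proportions_ztest
from scipy.stats import fisher_exact

# Approximate "yes" counts reconstructed from the reported percentages (Table 2).
year1_yes, year1_n = round(0.29 * 38), 38   # about 11 of 38 Year 1 trainees
year2_yes, year2_n = round(0.57 * 58), 58   # about 33 of 58 Year 2 trainees

# Two-tailed z-test for the difference between two independent proportions.
z_stat, p_value = proportions_ztest(
    count=[year1_yes, year2_yes], nobs=[year1_n, year2_n], alternative="two-sided"
)

# Fisher's exact test on the 2x2 table as a more conservative small-sample check.
_, p_fisher = fisher_exact(
    [[year1_yes, year1_n - year1_yes], [year2_yes, year2_n - year2_yes]]
)

print(f"z = {z_stat:.2f}, p (z-test) = {p_value:.3f}, p (Fisher) = {p_fisher:.3f}")
# A p-value below .05 corresponds to a year-to-year difference flagged as
# statistically significant at the 95 percent confidence level.

With these reconstructed counts, both tests return p-values of roughly .01, in line with the value reported in the text, although the exact counts and procedure used in the original analysis are not stated.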

Results

Across the 3 years, participants from state agencies held a variety of positions (e.g., director of hospital programs, assistant attorney general), whereas the hospital representatives typically had explicit responsibilities for patient safety and quality improvement. Hospital representatives were more likely to have titles such as “patient safety officer” in later years than in the first year. In Year 3, AHRQ relaxed eligibility requirements to encourage participation by states that had not yet participated. Consequently, more Year 3 teams included representatives from Quality Improvement Organizations (QIOs).4

The experience levels of Year 1 trainees varied widely. Some trainees reported they had used or taught others about patient safety tools, designed interventions, and evaluated them, while others reported being exposed to these concepts for the first time. Using what we learned from Year 1 trainees, we collected more quantitative data in Years 2 and 3 regarding participants’ prior experience. Most Year 2 and 3 participants (91 percent of those interviewed in each of these years; p=1) reported modest-to-high levels of understanding of patient safety issues at training outset, rating their experience level as 3 or higher on a scale of 1–5, with 5 being highest experience. However, fewer than 12 percent of Year 2 and 3 participants rated themselves highly (i.e., 5) in terms of experience with specific tools, interventions, and evaluation techniques (p=.59, .69, and .45, respectively).5

Impact of Training on Skills and Use of Tools

As shown in Table 1, due to the time limitation of the overall evaluation, we were able to assess only immediate experiences for all three groups. We could only track experiences 1 year after training for two groups (Years 1 and 2), and experiences 2 years after training for one group (Year 1).

Immediate Impact

Immediately upon completing the program, nearly all Year 1 trainees said they had acquired valuable skills, and most voiced confidence in using them. In the group interviews conducted at the final Year 2 and 3 training sessions, when we asked participants to rate their skill levels more specifically than we did in Year 1, many rated their skill levels highly. For example, when asked about their ability to select the appropriate tool(s) to investigate an error or near miss, most Year 2 and 3 trainees rated themselves a 4 or 5 on a scale of 1–5 (with 5 being the highest skill level) upon program completion (91 percent in Year 2, 80 percent in Year 3; p=.21), and attributed their high ratings directly to participation in the PSIC.

One Year Later

One year after course completion, the first two trainee groups reported valuing and regularly using the skills/tools learned (Table 2). For example, substantial proportions of Year 1 and 2 trainees reported regular use of RCA (79 and 78 percent, respectively; p=1), and human factors engineering (71 and 81 percent, respectively; p=.37).

Table 2.

Percentage of Patient Safety Improvement Corps (PSIC) Trainees Reporting Use of Each Skill or Tool at 1 and 2 Years Posttraining, for the First and Second Year Trainees

Columns: Tool or Skill; Percent Saying Used 1 Year After Training [Year 1 (n=38), Year 2 (n=58)]; Percent Saying Used 2 Years After Training* [Ever Used (n=25), Currently Use (n=25)]
Risk assessment
Root cause analysis 79 78 76 68
Health care failure mode and effect analysis 58 48 76 40
Probabilistic risk assessment 13 14 32 12
VA's Safety Assessment Code 42 29 40 24
Measurement tools
Patient safety culture survey and tools 29** 57** 56 40
Patient safety indicators 42 33 64 52
Analysis of patient safety data 42 31 60 60
Reporting of adverse events and near misses 79** 55** 72 72
Safety management tools
Human factors engineering 71 81 72 60
Tools to identify high-alert medications 50 34 48 24
Tools to assess patient safety business case 18** 48** 48 8
Tools to evaluate patient safety programs 21 26 40 28
* Responses are from the first-year (Year 1) PSIC trainees, collected in the 2-year posttraining update.

** Year-to-year differences statistically significant at the p<.05 level.

For two skills/tools, reported use was statistically significantly higher for the Year 2 group than the Year 1 group at the equivalent point in time. Twenty-nine percent of the Year 1 group said they had used the patient safety culture survey and related tools 1 year after training, versus 57 percent of Year 2 participants (p=.01). Year 2 trainees remarked that their interest in measuring culture reflected greater acceptance by clinical staff and administrators of the important role culture plays in improving safety. As patient safety increased in national prominence, culture was viewed less as a “soft” issue and more as a serious one meriting attention, trainees said. Similarly, only 18 percent of Year 1 participants said they used tools to assess the business case for patient safety 1 year later, compared with 48 percent of Year 2 trainees (p=.01). Year 2 trainees noted that their need to establish a business case reflected widespread hospital budgetary challenges.

However, for tools to assist with reporting adverse events and near misses, reported use was statistically significantly lower for the Year 2 than the Year 1 group at the equivalent point in time (55 and 79 percent, respectively; p=.03). A few Year 2 trainees mentioned they were not actively using these tools because well-functioning systems were already in place.

Two Years Later

Two years posttraining, Year 1 trainees reported continued use of many of the skills/tools taught—especially RCA (76 percent), HFMEA (76 percent), human factors engineering (72 percent), and reporting adverse events and near misses (72 percent) (Table 2). These participants particularly valued RCA and HFMEA for pinpointing areas for targeted improvement. Hospital representatives noted that learning about human factors engineering made them better purchasers of medical equipment and more aware of potential safety gaps in equipment currently used.

Generally, hospital team members providing direct patient care were more likely to report using skills/tools, and to express greater confidence applying them, than those not on the front lines of patient care. Nonetheless, state regulators and hospital administrators said it was instructive for them to learn about the skills/tools. These individuals said that, before the PSIC, they had not fully appreciated the complexities of performing analyses or developing interventions. Representatives from all types of organizations noted that this awareness helped bridge communication gaps between hospitals and state regulators.

Impact on Patient Safety Actions

According to Year 1 and 2 participants, the training had a substantial effect on states’ and hospitals’ actions. As shown by responses to the telephone interviews summarized in Tables 3 and 4, specific actions were taken by states and hospitals, respectively, within the first year following training.

Table 3.

How the Patient Safety Improvement Corps (PSIC) Training Influenced Patient Safety Actions by States, Reported in 1-Year Follow-up Interviews with the Year 1 and 2 Trainees, 2005 and 2006

Columns: Patient Safety Action; Percentage of States Responding “Yes”* [Year 1 Trainees (n=15)**, Year 2 Trainees (n=18)]
Initiation of or influence on regulation(s)/legislation 47 56
Modification of hospital oversight procedures when an adverse event occurs (e.g., change content of Root Cause Analysis [RCA]) 47 56
Modification of an existing state reporting system to improve how it captures patient safety issues or how information is reported to others 33 22
New membership in or formation of a patient safety coalition of stakeholders 20 50
Creation of a state-wide reporting system 20 17
* Entities labeled as Quality Improvement Organizations (QIOs) or “other” were reclassified as either states or hospitals based on their core functions. Counts for hospital and state-specific questions vary depending on the respondent's ability to answer the question.

** No year-to-year differences presented in this table were found to be statistically significant at the p<.05 level.

Table 4.

How the Patient Safety Improvement Corps (PSIC) Training Influenced Patient Safety Actions by Hospitals, Reported in 1-Year Follow-up Interviews with Year 1 and 2 Trainees, 2005 and 2006

Columns: Patient Safety Action; Percentage of Hospitals Responding “Yes”* [Year 1 Trainees (n=23)**, Year 2 Trainees (n=40)]
Modification of processes to review/analyze adverse events or errors 83 73
Promotion of patient safety culture 78 83
Sharing data across organizations to better understand causes of error 52 50
Other changes in review of adverse events 48 48
Other state- or organization-wide initiatives 48 50
New membership in or formation of a patient safety group of stakeholders 35 45
Creation of institutional adverse event reporting system 30 13
* Entities labeled as Quality Improvement Organizations (QIOs) or “other” were reclassified as either states or hospitals based on their core functions. Counts for hospital and state-specific questions vary depending on the respondent's ability to answer the question.

** No year-to-year differences presented in this table were found to be statistically significant at the p<.05 level.

For both Years 1 and 2, actions that state agency participants most frequently reported taking because of training were initiation of or modifications to legislation to strengthen patient safety practices (47 and 56 percent, respectively; p=.87), and modification of adverse event oversight procedures (47 and 56 percent, respectively; p=.87). For example, participants said they promoted legislation to make findings of RCAs and HFMEAs nondiscoverable to promote a more open, nonthreatening, blame-free culture. Regarding adverse event oversight procedures, state agency participants encouraged use of more rigorous RCA methods, learned through the PSIC.

The patient safety action for which state agency representatives trained in Years 1 and 2 differed most notably was new membership in or formation of a patient safety coalition. This action was taken by 20 percent of Year 1 state trainees and by 50 percent of those from Year 2 (p=.16). This growth suggests increasing awareness by state-government staff of the need for diverse stakeholders to work together.

Participants from hospitals in both Years 1 and 2 reported that training was an important factor in modifications made to adverse event oversight procedures (83 and 73 percent, respectively; p=.55) and in promoting a stronger patient safety culture (78 and 83 percent, respectively; p=.94). Like the state agency representatives, hospital team members encouraged use of more rigorous RCA methods in adverse event oversight procedures. In addition, hospital administrators noted a tendency to take patient safety culture more seriously because of the training.

The action for which hospital representatives trained in Years 1 and 2 differed most notably was the creation of institutional adverse event reporting systems. This action was taken by 30 percent of Year 1 hospital representatives and 13 percent from Year 2 (p=.16). Year 2 trainees noted that the PSIC did not substantially influence actions in this area because hospitals already had created such systems before training in response to increased national and local attention.

Other Activities Undertaken

One year after training ended, both the Year 1 and 2 participants said they had trained others within their organizations, communities, and/or state in the use of PSIC skills/tools (87 and 91 percent, respectively; p=.71). These trainees underscored that such activities have given increased visibility to patient safety issues throughout their states.

All graduates noted the value of having like-minded peers with whom to share ideas during and after training, and viewed relationships formed during training as significant, ongoing resources. During the year following training, almost all Year 1 and 2 trainees had communicated with their own team members (97 and 97 percent, respectively; p=1), and substantial numbers had communicated with members of other teams (39 and 36 percent, respectively; p=.91), the NCPS (63 and 53 percent, respectively; p=.47), and AHRQ (32 and 28 percent, respectively; p=.85). Furthermore, 48 percent of Year 1 trainees reported being in contact with at least some of these individuals 2 years later.

Challenges Faced by Trainees

Trainees noted barriers to making changes in their home organizations, ranging from lack of resources (e.g., time) to lack of an established patient safety culture. They emphasized a need for follow-up training, and for training of more diverse participants, including front-line clinicians and high-level decision makers (e.g., CEOs, legislators) who have the authority to drive change at higher organizational levels. They also identified the need for representatives from the Centers for Medicare and Medicaid Services (CMS) and the Joint Commission to learn about these issues and methods by attending training, given the prominent roles they play in setting accreditation standards that drive quality improvement.

In many instances, both state and hospital representatives said they were the “lone individual” within their institutions championing patient safety improvements, making success difficult. Some recommended that teams include more than one representative from each organization to address this concern.

Overall Assessment by Trainees

According to many trainees, the PSIC played an instrumental role in improving their skill sets and changing attitudes about patient safety within their organizations, and often more broadly. Trainees across all years expressed increased confidence and a more in-depth appreciation of the complexities of patient safety because of the PSIC. One year after the training ended, the overwhelming majority of Year 1 and 2 participants (92 and 95 percent, respectively; p=.91) rated the training as highly helpful in improving processes to monitor and improve patient safety, giving it 7 points or higher on a 10-point scale. Two years after training, 92 percent of Year 1 trainees continued to rate the training similarly.

Discussion

Empirical Findings

Our evaluation tracked and compared the PSIC's progression and impact on a broad scale over 3 years. We were especially interested in tracking differences among participant groups depending on when they were trained. Some differences regarding use of skills/tools 1 year after training were large enough to reach statistical significance, despite small sample sizes for each group. These differences suggest a growing awareness among trainees of patient safety issues, mirroring increasing awareness nationally over time.

Even differences that were not statistically significant suggest some distinctions among the groups that can be instructive for future planning. For example, programs that roll out over multiple years may need to revise curricula annually to address participants’ changing perspectives and needs. Additionally, multiyear, voluntary programs like the PSIC may need to consider the unique situations of later-year participants, who may be slower to engage due to specific challenges.

The findings of this evaluation suggest that AHRQ's investment in the PSIC has provided an important start to building a national resource of health personnel trained in patient safety skills. AHRQ invested US$7 million in the program, in response to documentation that such training was needed; it reached approximately 250 individuals in state governments and hospitals. These participants have become a nationwide network with shared training and experiences in patient safety improvement, who have continued to interact with each other following training. In addition, trainees reported training others, further expanding this network.

Through the training, participants reported improving their knowledge and skills, and subsequently applying the training to improve patient safety practices in their organizations. Our findings on early sustainability for the first group of participants also suggest that the PSIC may have lasting effects on practices, although further assessment of sustainability (for both later groups and longer time periods) is needed before firm conclusions can be reached.

However, we note that the PSIC has not yet achieved the depth of coverage needed to secure extensive and lasting improvements in patient safety practices and outcomes. Graduates identified a need to strengthen and expand this network through continued training, both to further refine skills and stay abreast of new tools and to extend training to others. As they noted, there are many more “back home” who still need training.

In this context, AHRQ faces decisions regarding the PSIC's future, including which audiences it needs to reach and how best to structure training. Feedback from participants suggests that the PSIC model could be improved and expanded by (1) developing training modules focusing on key decision makers whose commitment is needed to achieve improvements (e.g., senior management, state legislators); (2) developing training modules focusing on unique needs of hospital versus state representatives (i.e., more “hands-on” for the former, more “big picture” for the latter); (3) providing postgraduate training for former trainees to keep skills and knowledge current and encourage continued interactions among them; and (4) replicating this model for larger numbers of health care personnel to build a critical mass of trained individuals.

Various types of training could be conducted by AHRQ in collaboration with other organizations, to further strengthen the national patient safety infrastructure while leveraging AHRQ resources and enhancing return on investment. One example is a train-the-trainer program that AHRQ offered after completing the 3 years of state team training, to reinforce dissemination of practices.

Methodological Considerations

Several methodological considerations are exemplified in the PSIC evaluation, especially the importance of matching methods to the nature of the program component being examined. In the case of the PSIC, we examined a one-of-a-kind program intended to change the patient safety knowledge, skills, and practices of participants, and to train teams from all U.S. states. We sought to employ an evaluation design that (1) gathered self-report information from participants about training experiences and subsequent safety activities, (2) allowed us to identify differences in experiences across the three groups, and (3) followed each group as long as possible to assess sustainability of training effects on practices.

The methodological design that best fulfilled these evaluation needs was a longitudinal evaluation tracking each of the groups annually for as long as possible within the time constraints imposed by the overall evaluation. Because a central part of the work was to assess information on participants’ skill status and experiences, the key source of this information was the participants themselves. This approach had the strength of being able to document participants’ self-reported experiences and perceptions. It also had some limitations inherent to the study design and its role as only one of many assessments being conducted within the overall evaluation.

The ideal design for many evaluations is considered to be the randomized controlled design, which enables inference regarding intervention effects by controlling for confounding factors that also might affect outcomes. However, this design was not feasible—or appropriate—for the PSIC evaluation for several reasons. Because the PSIC was a national program including teams from all states, there were no other states that could serve as controls. Further, participants’ growth in knowledge and skills was best assessed using before-and-after measures for each individual; comparisons with others who did not participate would not add useful information and could be confounded by other training or learning they obtained that was unknown to the study. Finally, to assess actions undertaken by participants within each state, it would have been virtually impossible to identify meaningful controls at the individual level whose actions could be compared with those of PSIC participants. Instead, we opted for the more direct method of asking participants about the actions they took and whether they could attribute them to the training.

A significant limitation of this assessment method was the time constraint created by the 4-year term of the overall patient safety evaluation. Ideally, we would have followed all three groups from the start of training through at least 2 years after its end, to fully assess the sustainability of training effects on practices. However, because the PSIC evaluation had to end with the overall evaluation, we could collect data for only 3 years. We were able to capture the most data about immediate impressions (i.e., for all three groups), less data about experiences 1 year posttraining (i.e., for the Year 1 and 2 groups), and the least about experiences 2 years posttraining (i.e., for the Year 1 group only). Despite this truncation of the ideal study timeline, we obtained useful information. We also could document training effects on subsequent actions taken by participants and obtain early information on the sustainability of those actions. However, we were not able to capture trends for later groups regarding the sustainability of practices initiated in their organizations following training. AHRQ is currently funding an independent impact analysis to examine longer-term effects.

Another limitation was incomplete coverage of participants in the interviews conducted. Our group interviews at the end of each year could not include all participating teams, due to constraints in evaluation budget and meeting time availability. In the telephone interviews, we interviewed at least two individuals from every team, enabling us to cover all teams, but we did not interview every member of all teams. Thus, there could be nonresponse bias in our data, if experiences of trainees interviewed differed systematically from those of trainees who were not. However, the consistency of data collected across years gives us confidence in the integrity of our findings that participants learned and highly valued the skills/tools taught, shared information, and took actions to improve safety.

It is possible that participants may not have recalled information accurately or may have knowingly provided inaccurate responses—issues inherent to any self-reported data. We mitigated these issues by collecting data close in time to the experiences addressed in the interviews, and by informing participants that responses would remain confidential and be used to improve future training activities. We found that participants were sincere and thoughtful in answering questions. For example, when asked whether they could attribute an action directly to PSIC participation, participants generally considered the question carefully and sometimes responded “no.”

Conclusion

Through its team training approach, the PSIC created a core group who report they are using their new skills and educating others. This training has made a start toward strengthening the infrastructure required to support patient safety improvements, although the number of individuals trained is small relative to the national need for patient safety expertise. The next challenge is to build upon this start by expanding the network of trained personnel across the country through continued PSIC training and other similar programs, coupled with ongoing evaluation and refinements of training processes to ensure effectiveness.

Acknowledgments

Joint Acknowledgment/Disclosure Statement: The research described in this manuscript was undertaken as one part of a larger evaluation project funded under contract with the AHRQ (contract no. 290-02-0010). Pursuant to that contract, AHRQ had the right to review and comment on this manuscript before its publication. The information and opinions expressed herein reflect solely the position of the authors. Nothing herein should be construed to indicate AHRQ support or endorsement of its contents. We thank the trainees for their time; AHRQ staff Marge Keyes and James Battles, and VA NCPS staff Caryl Lee for their commitment to the evaluation process; and RAND colleagues Chau Pham, Stephanie Taylor, Stacy Fitzsimmons, Shannah Tharp-Taylor, and Alison DeCristofaro for their data-collection and analysis contributions.

Disclosures: None.

Disclaimers: No disclaimers need to be made in the manuscript.

Notes

1. Additional results available in Teleki et al. (2006) and Farley et al. (2006).

2. Patient safety culture: a commitment to safety permeating all levels of an organization.

3. The terms “Year 1,” “Year 2,” and “Year 3” trainees/participants correspond to participants in the 2003–2004, 2004–2005, and 2005–2006 training years, respectively.

4. QIOs: a national network directed by the Centers for Medicare and Medicaid Services (CMS) working with consumers, physicians, hospitals, and other caregivers to refine care delivery systems (CMS 2007).

5. We did not collect comparable data for Year 1 trainees; see “Methods” and “Discussion.”

Supporting Information

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

Appendix SA2: Summary of PSIC Trainees.

Appendix SA3: Year 2 and 3 PSIC Trainees’ Experience with Patient Safety Prior to Training.

Appendix SA4: Self-Reported Skill Level at End of PSIC Training, Team Interviews, May 2005 and May 2006.

hesr0044-0701-SD1.doc (65.5KB, doc)
hesr0044-0701-SD2.doc (175KB, doc)

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

References

  1. Agency for Healthcare Research and Quality (AHRQ). 2006. “Patient Safety Improvement Corps: An AHRQ/VA Partnership (Fact Sheet).” [Accessed July 2, 2007]. Available at http://www.ahrq.gov/about/psimpcorps.htm.
  2. Centers for Medicare and Medicaid Services (CMS). 2007. Definition of Quality Improvement Organization (QIO). [Accessed January 30, 2007]. Available at http://www.cms.hhs.gov/QualityImprovementOrgs/
  3. Farley D O, Battles J B. 2008. “Evaluation of the AHRQ Patient Safety Initiative: Framework and Approach.” Health Services Research. DOI 10.1111/j.1475-6773.2008.00927.x.
  4. Farley D O, Damberg C L, Ridgely M S, Sorbero M E S, Greenberg M D, Haviland A M, Teleki S S, Mendel P, Bradley L A, Dembosky J W, Fremont A, Nuckols T K, Shaw R N, Straus S, Taylor S L, Yu H, Tharp-Taylor S. 2007. Assessment of the AHRQ Patient Safety Initiative. Final Report, Evaluation Report IV. Santa Monica, CA: RAND Corporation.
  5. Teleki S S. 2007. Personal communication with Marge Keyes, Patient Safety Team Leader, Center for Quality Improvement and Patient Safety, AHRQ.
  6. Teleki S S, Damberg C L, Sorbero M E S, Fremont A M, Bradley L, Farley D O. 2006. Evaluation of the Patient Safety Improvement Corps: Experiences of the First Two Groups of Trainees. RAND TR-407-AHRQ. Santa Monica, CA: RAND Corporation.
