Author manuscript; available in PMC: 2023 Mar 10.
Published in final edited form as: Am J Med Qual. 2021 Jul-Aug;36(4):209–214. doi: 10.1177/1062860620945024

Healthcare Project Improvement Design: Proficiency Among University Faculty

Angela F Gardner 1, Tiffany B Kindratt 2, Venetia Orcutt 1, Patrice Griffith 1, Lona Sandon 1, Heather Salinas 1, Gary Reed 1, Raymond L Fowler 1
PMCID: PMC9999462  NIHMSID: NIHMS1870417  PMID: 32757762

Abstract

Our purpose was to measure faculty members' 1) knowledge of quality improvement and patient safety (QIPS); 2) attitudes and beliefs about their own QI skills; and 3) self-efficacy toward participating in, leading, and teaching QIPS. Faculty completed an online survey. Questions assessed demographic and academic characteristics, knowledge, attitudes/beliefs, and self-efficacy. Knowledge was measured using the Quality Improvement Knowledge Application Tool-Revised (QIKAT-R). Participants provided free-text responses to questions on clinical scenarios. Almost half of participants (n=236) self-reported that they were moderately or extremely comfortable with QIPS skills. Few were very (20%) or most (15%) comfortable teaching QIPS. Ninety-one participants attempted the QIKAT-R, and 78 participants completed it. The mean score was 16.6 (SD=5.6). Despite positive attitudes and beliefs about their own QIPS skills, our results demonstrate a general lack of knowledge among surveyed faculty members. Faculty development efforts are needed to improve proficiency in participating in, leading, and teaching QIPS projects.

INTRODUCTION

Leaders in the practice and teaching of medicine must actively participate in identifying process issues and other obstacles to safe patient care, and to that end must take active steps to remove barriers and improve processes. This movement toward quality improvement and patient safety (QIPS) began in 2000 with two Institute of Medicine (IOM) reports, To Err is Human1 and Crossing the Quality Chasm.2 To Err is Human created awareness of the problem by highlighting as many as 98,000 preventable deaths a year in US hospitals from medical errors.1 Crossing the Quality Chasm began to create solutions by providing guidance for patient-clinician relationships, promoting evidence-based practice, and better aligning financial incentives.2 Not only is there an ethical and moral mandate for clinicians to provide the highest quality care, but there are also evolving cultural norms around patient safety and financial incentives for quality patient care. To meet these changing clinical norms, accreditation bodies3-5 require that providers be trained to participate in QIPS projects in their clinical practices. However, few studies have evaluated whether faculty who entered practice before the QIPS movement are prepared to train and mentor future providers on these concepts. Several articles have described QIPS curricula for faculty development and resident training.6-12 These articles offer guidance on developing short-term and longitudinal faculty development programs, both separately and through co-learning with residents. While some studies have shown improvements in QIPS knowledge,8,9,11,12 none were designed to assess the need for training faculty across an entire institution.

To fill this gap in the published literature, the University of Texas Southwestern (UTSW) Quality Improvement and Patient Safety (QIPS) Collaborative set out to assess faculty members' knowledge of QIPS, their attitudes and beliefs about their own QI skills, and their self-efficacy toward participating in, leading, and teaching QIPS at a major medical education institution. The results will be used to guide future training for faculty across the campus.

METHODS

Participants and Setting

During summer 2016, the authors recruited University of Texas Southwestern Medical Center (UTSW) faculty to participate in a cross-sectional study. The Office of Quality, Safety, and Outcomes Education sent surveys to 3,000 clinical and nonclinical faculty members via official campus email addresses. Surveys were administered using REDCap.13,14 Responding to all elements of the survey took an estimated 30 minutes. All responses were anonymous.

Survey Measures

The survey contained four sections designed to collect the relevant data: Demographic and Academic Characteristics; Knowledge of QIPS; Attitudes and Beliefs about their own QI skills; and Self-Efficacy towards participating in, leading, and teaching QIPS. Each section is described below.

Demographic and Academic Characteristics

We collected basic demographic (age and gender) and academic characteristics. Academic characteristics included highest degree attained (Bachelor’s, Master’s, MD/DO, PhD, or MD/DO and PhD), school appointment (medical, health professions, graduate), specialty, rank (instructor to full professor), tenure status (not tenured, tenure track, tenured), length of time in profession (less than one year to greater than 10 years), and length of time at this institution (less than one year to greater than 10 years).

Knowledge of QIPS

We measured faculty members' knowledge using elements of the validated Quality Improvement Knowledge Application Tool-Revised (QIKAT-R).15 Prior use of the QIKAT-R indicated that three scenarios was the optimal number for survey completion.16 We chose three short scenarios from inpatient, primary care, and anesthesia settings. For each QI scenario, faculty members were asked to identify a problem and devise a project to address it. Free-text responses were to include the following:

  1. the aim of the project;

  2. the item(s) to be measured; and

  3. a change that would address the system-level issue.

Knowledge results were scored by two judges using the QIKAT-R15 scoring rubric. The judges were fourth-year medical students in the UTSW Quality Improvement Distinction program who had been trained to use the scoring rubric.

Attitudes and Beliefs about QI Skills

Based on previous research and input from subject matter experts, we assessed how comfortable faculty members were in their current skills with the following 14 aspects of quality improvement and patient safety:

  1. writing a clear problem statement;

  2. identifying the best professional knowledge;

  3. identifying best practices and comparing them to their local practice;

  4. identifying key stakeholders needed to improve a problem;

  5. recognizing facilitators and barriers to implementing change;

  6. studying the process;

  7. using measurement to improve a problem;

  8. making changes in a system;

  9. identifying whether a change leads to an improvement in a problem;

  10. identifying how data is linked to specific processes;

  11. using small cycles of change;

  12. implementing a structured plan to test a change;

  13. using a QI model as a systematic framework for trial and learning; and

  14. building the next improvement upon prior success or failure experiences.

Self-Efficacy towards Participating In, Leading, and Teaching QIPS

Faculty members were asked to rate how comfortable they would be participating in “an actual quality improvement project in clinic or the hospital, working with faculty, residents, nurses, students, and administrators, based on your current knowledge of the principles of quality improvement” on a five-point Likert scale (1=not comfortable at all; 2=somewhat comfortable; 3=comfortable; 4=very comfortable; 5=most comfortable). Using the same scale, faculty members also rated how comfortable they would be leading such a project. The self-efficacy items were developed by subject matter experts within the UTSW Office of Quality, Safety and Outcomes Education and had previously been administered to medical, engineering, and nursing students.16

Data Analysis

We used frequencies and percentages to report demographic/academic characteristics, knowledge, attitudes/beliefs, and self-efficacy results. We used Stata 14.0 for analysis.17 Free-text responses were reviewed and scored by two judges using the QIKAT-R15 rubric. Rater agreement was calculated as the percentage of individual ratings on which the two judges agreed. The higher of the two judges' composite scores for each participant was used to calculate mean scores.
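As an illustration of these calculations, the sketch below shows how percent rater agreement and the per-participant composite scores could be computed. This is a hypothetical example in Python rather than the Stata code the authors used; the data structures and function names are assumptions, not part of the study.

```python
"""Hypothetical sketch (not the authors' code): percent rater agreement and
per-participant composite QIKAT-R scores, taken as the higher of the two
judges' totals. Data structures and names are illustrative assumptions."""
from statistics import mean, median, stdev

def percent_agreement(judge_a, judge_b):
    """Share of individual rating opportunities on which both judges agreed.

    judge_a[pid] and judge_b[pid] each hold the 27 individual ratings
    (3 scenarios x 9 rating opportunities) assigned to participant pid.
    """
    agreements = total = 0
    for pid in judge_a:
        for r_a, r_b in zip(judge_a[pid], judge_b[pid]):
            total += 1
            agreements += int(r_a == r_b)
    return agreements / total

def composite_scores(judge_a, judge_b):
    """For each participant, keep the higher of the two judges' total scores."""
    return {pid: max(sum(judge_a[pid]), sum(judge_b[pid])) for pid in judge_a}

# Example use, once real rating data are loaded into judge_a and judge_b:
#   scores = composite_scores(judge_a, judge_b)
#   print(percent_agreement(judge_a, judge_b))      # ~0.79 in this study
#   print(mean(scores.values()), median(scores.values()), stdev(scores.values()))
```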

RESULTS

Demographic and Academic Characteristics

Demographic and academic characteristics of participants who completed the survey are reported in Table 1. Over half of survey participants were male (55.5%). Most were aged 40 years and older, with 24.3% aged 20-39 years, 21.3% aged 40-49 years, 28.2% aged 50-59 years, and 26.2% aged 60 years or older. The majority had an MD or DO degree (84.1%), had faculty appointments in the medical school (93.6%), and were active in training residents (77.2%). Most reported practicing in their primary occupation for more than 10 years (69.4%), and approximately half had been UTSW faculty members for more than 10 years (49.6%).

Table 1:

Demographic and academic characteristics, n=236.

n (%)
Gender
 Male 116 (55.5)
 Female 93 (44.5)
Age
 20-39 years 49 (24.5)
 40-49 years 43 (21.3)
 50-59 years 57 (28.2)
 60+ years 53 (26.2)
Highest Level of Degree
 Master's 5 (2.2)
 MD or DO 195 (84.1)
 PhD (or equivalent) 14 (6.0)
 MD/DO and PhD 18 (7.8)
Faculty Appointments
 Medical School 204 (93.6)
 Health Professions School 10 (4.6)
 Graduate School 4 (1.8)
Top 5 specialties/affiliations
 Internal Medicine 50 (20.3)
 Anesthesiology 30 (12.2)
 Pediatrics 30 (12.2)
 Obstetrics/Gynecology 17 (6.9)
 Radiology 12 (4.9)
Level of trainees
 Pre-Medical 1 (0.4)
 Health Professions Students 10 (4.3)
 Medical Students 34 (14.7)
 Other Graduate Students 8 (3.5)
 Residents 179 (77.2)
Academic Rank
 Faculty Associate 1 (0.4)
 Instructor 5 (2.2)
 Assistant Professor 72 (31.4)
 Associate Professor 59 (25.8)
 Professor 92 (40.2)
Tenure Status
 Not tenure-track 187 (82.0)
 Tenure-track 14 (6.1)
 Tenured 27 (11.8)
Years in Primary Occupation
 Less than 1 year 7 (3.0)
 At least 1 year but less than 5 years 28 (12.1)
 At least 5 years but less than 10 years 36 (15.5)
 Greater than 10 years 161 (69.4)
Years as faculty at UT Southwestern
 Less than 1 year 21 (9.5)
 At least 1 year but less than 5 years 53 (23.9)
 At least 5 years but less than 10 years 38 (17.1)
 Greater than 10 years 110 (49.6)

Knowledge of QIPS

The knowledge survey had two distinct parts: the first was multiple-choice, and the second was free text. Of the 280 respondents, 91 attempted the QIKAT-R clinical scenario questions (32.5%) and 79 answered all three questions in all three clinical scenarios (28.2%). Each of the 79 responses had 27 possible rating opportunities (nine for each of the three scenarios), for a total of 2,133 possible ratings. The judges agreed on 1,690 of the 2,133 ratings, representing 79% rater agreement. Among those who completed this section (n=79), the mean total composite score was 16.6 (SD=5.6; median=17). Scores ranged from 4 to 27. A frequency distribution of knowledge assessment scores is presented in Figure 1.

Figure 1:

Knowledge Assessment Score Distribution
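As a quick, purely illustrative arithmetic check, the rating counts and agreement rate reported above can be reproduced directly from the numbers in the text:

```python
# Illustrative check of the rating counts and agreement rate reported above.
completed = 79                      # participants who answered all three scenarios
opportunities = completed * 3 * 9   # 3 scenarios x 9 rating opportunities each
agreements = 1690                   # individual ratings on which both judges agreed

print(opportunities)                           # 2133 possible ratings
print(round(100 * agreements / opportunities)) # 79 (% rater agreement)
```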

Attitudes and Beliefs about QI Skills

Attitudes and beliefs about QI skills are reported in Table 2. Almost half of participants were extremely comfortable writing a clear problem statement (47.5%), identifying the best professional knowledge (47.7%), and identifying best practices and comparing them to their local practice (42.7%). Approximately half were moderately comfortable identifying best practices and comparing them to their local practice (46.8%); identifying key stakeholders needed to improve a problem (49.6%); recognizing facilitators and barriers to implementing change (50.9%); and studying the process (45.9%). Similarly, nearly half were moderately comfortable using measurement to improve a problem (45.9%); making changes in a system (49.6%); identifying whether a change leads to an improvement in a problem (50.0%); and identifying how data is linked to specific processes (45.0%). Only 19.2% of participants were extremely comfortable using a QI model as a systematic framework for trial and learning, and 29.2% were extremely comfortable building their next improvement upon prior successes or failures.

Table 2:

Attitudes and beliefs towards quality improvement and patient safety, n=236.

Questions N (%)
Question Stem: How comfortable are you in your current skills with the following aspects of quality improvement and patient safety

Writing a clear problem statement (goal, aim)
 Not at all 4 (1.8)
 Slightly 20 (9.1)
 Moderately 95 (41.6)
 Extremely 105 (47.5)
Mean (SD) 3.34 (0.72)
Identifying the best professional knowledge
 Not at all 2 (0.9)
 Slightly 19 (8.6)
 Moderately 94 (42.7)
 Extremely 105 (47.7)
Mean (SD) 3.4 (0.68)
Identifying best practices and comparing these to your local practice
 Not at all 5 (2.3)
 Slightly 18 (8.2)
 Moderately 103 (46.8)
 Extremely 94 (42.7)
Mean (SD) 3.3 (0.72)
Identifying key stakeholders needed to improve a problem
 Not at all 6 (2.7)
 Slightly 21 (9.6)
 Moderately 109 (49.6)
 Extremely 84 (38.2)
Mean (SD) 3.23 (0.73)
Recognizing facilitators and barriers to implementing change
 Not at all 4 (1.8)
 Slightly 23 (10.5)
 Moderately 112 (50.9)
 Extremely 81 (36.8)
Mean (SD) 3.23 (0.70)
Studying the process
 Not at all 6 (2.7)
 Slightly 40 (18.2)
 Moderately 101 (45.9)
 Extremely 73 (33.2)
Mean (SD) 3.10 (0.79)
Using measurement to improve a problem
 Not at all 7 (3.2)
 Slightly 41 (18.6)
 Moderately 101 (45.9)
 Extremely 71 (32.3)
Mean (SD) 3.07 (0.80)
Making changes in a system
 Not at all 8 (3.6)
 Slightly 48 (21.8)
 Moderately 109 (49.6)
 Extremely 55 (25.0)
Mean (SD) 2.96 (0.78)
Identifying whether a change leads to an improvement in a problem
 Not at all 9 (4.1)
 Slightly 32 (14.6)
 Moderately 110 (50.0)
 Extremely 69 (31.4)
Mean (SD) 3.09 (0.79)
Identifying how data is linked to specific processes
 Not at all 8 (3.5)
 Slightly 49 (22.3)
 Moderately 99 (45.0)
 Extremely 64 (29.1)
Mean (SD) 3.00 (0.81)
Using small cycles of change
 Not at all 16 (7.3)
 Slightly 58 (26.4)
 Moderately 94 (42.7)
 Extremely 52 (23.6)
Mean (SD) 2.83 (0.87)
Implementing a structured plan to test a change
 Not at all 15 (6.8)
 Slightly 48 (21.8)
 Moderately 98 (44.6)
 Extremely 59 (26.8)
Mean (SD) 2.91 (0.87)
Using a QI model (PDSA, Lean, Six Sigma models) as a systematic framework
 Not at all 53 (24.2)
 Slightly 58 (26.5)
 Moderately 66 (30.1)
 Extremely 42 (19.2)
Mean (SD) 2.44 (1.06)
Building your next improvement upon prior success or failure experiences
 Not at all 13 (5.9)
 Slightly 44 (20.1)
 Moderately 98 (44.8)
 Extremely 64 (29.2)
Mean (SD) 2.97 (0.86)

Self-Efficacy towards Participating In, Leading, and Teaching QIPS

All participants reported that they were at least somewhat comfortable participating in a QI project (mean=3.68). Thirty percent of faculty members were most comfortable participating in a QI project, whereas 14% were only somewhat comfortable. Seventeen percent of faculty members were not comfortable at all leading a QI project, and fewer than half were very comfortable (21%) or most comfortable (21%) leading one. Moreover, only 15% were most comfortable and 20% very comfortable teaching their colleagues about QI.

DISCUSSION

Academic leaders in medicine must guide efforts to ensure the quality of patient care. This study, conducted at a major academic institution, revealed gaps in the knowledge base of the individuals who are expected to measure, teach, and lead QI efforts, both academically and clinically. Because it would be impractical to assess the actual QI projects being conducted by thousands of faculty members, the survey served as a proxy for measuring understanding of the necessary elements of a well-designed QI project.

Located in Dallas, Texas, UTSW is an academic medical center that integrates education, biomedical research, and clinical care. The institution's faculty has received six Nobel Prizes and includes 22 members of the National Academy of Sciences, 17 members of the National Academy of Medicine, and 14 Howard Hughes Medical Institute Investigators. The faculty of 3,000 members is responsible for groundbreaking medical advances and is committed to translating science-driven research quickly into new clinical treatments. UTSW clinicians provide care in about 80 specialties to more than 100,000 hospitalized patients and 600,000 emergency department cases and oversee an additional 2.2 million outpatient visits a year. With an academic mission of this breadth and scope, it could reasonably be assumed that many faculty would be proficient in designing and completing quality improvement projects.

Even though the higher of the two judges' composite scores was used for each respondent, these results demonstrate minimal knowledge of, experience with, and ability to describe a well-constructed QI project, despite respondents' generally positive views of their own abilities. The scoring reveals a gap between perceived and actual ability to describe a quality improvement project. With a mean score of 16.6 and a median of 17, approximately half of those responding fell below an arbitrary 70% standard (roughly 19 of the 27 possible points) for designing and describing quality improvement projects. Of the 280 respondents, only 3 (1.1%) completed the knowledge section with a perfect score (27), and 26 scored 70% or higher (Figure 1).

Limitations

Limitations of this study include the inability to link the level of comfort asserted by a participant with that participant's actual ability. This might be addressed in future surveys through alteration of the software, but in this version, anonymity could be guaranteed only through aggregate responses. In addition, there was a relatively low completion rate, with 79 people completing the entire survey. However, the overall response rate (~33%) is consistent with average response rates for online surveys in higher education.18 One might postulate that the descriptive narrative required more effort than respondents were willing to give, but this is conjecture. While we measured faculty members' years in their primary occupation and years as a faculty member at UTSW, we did not measure whether faculty members entered practice before or after the QIPS movement began in 2000,1,2 or whether QIPS content was embedded in their training. Future studies should include additional questions to determine when faculty members began practicing (regardless of primary occupation) and their prior exposure to QIPS. Additionally, the original 280 participants may represent a selection bias toward confidence in their knowledge of quality improvement.

Conclusion

This study demonstrates that providers think they are familiar and comfortable with QI projects yet cannot demonstrate competency to trained scorers. It follows that academicians are unlikely to be able to teach content that they themselves cannot demonstrate. Patient safety and outcomes may suffer as a result, because quality analysis of clinical care and the related clinical processes is ultimately vital to protecting those we serve. As never before, future medical professionals will be cast into the role of assuring the value of care to the patient and to the healthcare system. It is imperative that they be trained by faculty who demonstrate QIPS competency. Future faculty development efforts should include QIPS education, with periodic reassessment of abilities.

Funding Acknowledgement

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Footnotes

Prior Abstract Presentations: None

REFERENCES

  • 1.Institute of Medicine (US) Committee on Quality of Health Care in America; Kohn LT, Corrigan JM, Donaldson MS, eds. To Err is Human: Building a Safer Health System. Washington (DC): National Academies Press (US); 2000. [PubMed] [Google Scholar]
  • 2.Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington (DC): National Academies Press (US); 2001. [PubMed] [Google Scholar]
  • 3.Liaison Committee on Medical Education, Association of American Medical Colleges and American Medical Association. Standards for Accreditation of Medical Education Programs Leading to the MD Degree. 655 K Street NW, Suite 100, Washington D.C., 20001. 2019. www.lcme.org. [Google Scholar]
  • 4.Accreditation Review Commission on Education for the Physician Assistant, Inc. Accreditation Standards for Physician Assistant Education, 5th ed. 12000 Findley Road, Suite 275, Johns Creek, Georgia, 30097. 2019. www.arc-pa.org. [Google Scholar]
  • 5.Accreditation Council for Education in Nutrition and Dietetics of the Academy of Nutrition and Dietetics. ACEND Accreditation Standards for Nutrition and Dietetics Coordinated Programs. 120 South Riverside Plaza, Suite 2190, Chicago, IL, 60606. 2016. www.eatright.org/ACEND. [Google Scholar]
  • 6.Aysola J, Myers JS. Integrating Training in Quality Improvement and Health Equity in Graduate Medical Education: Two Curricula for the Price of One. Acad Med. 2018;93(1):31–34. doi: 10.1097/ACM.0000000000002021 [DOI] [PubMed] [Google Scholar]
  • 7.van Schaik SM, Chang A, Fogh S, et al. Jump-Starting Faculty Development in Quality Improvement and Patient Safety Education: A Team-Based Approach. Acad Med. 2019;94(11):1728–1732. doi: 10.1097/ACM.0000000000002784 [DOI] [PubMed] [Google Scholar]
  • 8.Kroker-Bode C, Whicker SA, Pline ER, et al. Piloting a patient safety and quality improvement co-curriculum. J Community Hosp Intern Med Perspect. 2017;7(6):351–357. Published 2017 Dec 14. doi: 10.1080/20009666.2017.1403830 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Myers JS, Tess A, Glasheen JJ, et al. The Quality and Safety Educators Academy: fulfilling an unmet need for faculty development. Am J Med Qual. 2014;29(1):5–12. doi: 10.1177/1062860613484082 [DOI] [PubMed] [Google Scholar]
  • 10.Rodrigue C, Seoane L, Gala RB, Piazza J, Amedee RG. Implementation of a faculty development curriculum emphasizing quality improvement and patient safety: results of a qualitative study. Ochsner J. 2013;13(3):319–321. [PMC free article] [PubMed] [Google Scholar]
  • 11.Wong BM, Goguen J, Shojania KG. Building capacity for quality: a pilot co-learning curriculum in quality improvement for faculty and resident learners. J Grad Med Educ. 2013;5(4):689–693. doi: 10.4300/JGME-D-13-00051.1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Yanamadala M, Criscione-Schreiber LG, Hawley J, Heflin MT, Shah BR. Clinical Quality Improvement Curriculum for Faculty in an Academic Medical Center. Am J Med Qual. 2016;31(2):125–132. doi: 10.1177/1062860614558086 [DOI] [PubMed] [Google Scholar]
  • 13.CTSA NIH Grant UL1-RR024982.
  • 14.Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap) – A metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Singh MK, Ogrinc G, Cox KR, et al. The Quality Improvement Knowledge Application Tool revised (QIKAT-R). Acad Med. 2014;89(10):1386–1391. [DOI] [PubMed] [Google Scholar]
  • 16.Componation P, Ferreira S, Reed G, Rhee C, Phelps E. Education background and team assignment as influencing factors on healthcare process improvement program training feedback. In: Proceedings of the 2017 International Annual Conference of the American Society for Engineering Management. Huntsville, AL; October 2017. [Google Scholar]
  • 17.StataCorp. 2015. Stata Statistical Software: Release 14. College Station, TX: StataCorp LP. [Google Scholar]
  • 18.Nulty DD. The adequacy of response rates to online and paper surveys: what can be done? Assess Eval High Educ. 2008;33(3):301–314. [Google Scholar]
