Abstract
Background
One way to provide performance feedback to hospitalists is through dashboards, which deliver performance data measured against agreed-upon standards. Despite the growing trend of providing feedback on quality metric performance, there remain limited data on the means, frequency and content of feedback that should be provided to frontline hospitalists.
Objective
The objective of our research is to report our experience with a comprehensive feedback system for frontline hospitalists, as well as the change in our quality metrics after its implementation.
Design, setting and participants
This quality improvement project was conducted at a tertiary academic medical centre among our hospitalist group consisting of 46 full-time faculty members.
Intervention or exposure
A monthly performance feedback report was distributed to provide ongoing feedback to our hospitalist faculty, including an individual dashboard and a peer comparison report, complemented by coaching to incorporate process improvement tactics into providers’ daily workflow.
Main outcomes and measures
The main outcome of our study is the change in quality metrics after implementation of the monthly performance feedback report.
Results
The dashboard and rank order list were sent to all faculty members every month. An improvement was seen in the following quality metrics: length of stay index, 30-day readmission rate, catheter-associated urinary tract infections, central line-associated bloodstream infections, provider communication component of Hospital Consumer Assessment of Healthcare Providers and Systems scores, attendance at care coordination rounds and percentage of discharge orders placed by 10:00.
Conclusions
Implementation of a monthly performance feedback report for hospitalists, complemented by peer comparison and guidance on tactics to achieve these metrics, created a culture of quality and improved the quality of care delivered.
Keywords: healthcare quality improvement, hospital medicine, performance measures, quality improvement
Introduction
The Institute for Healthcare Improvement’s Triple Aim approach to optimising health system performance includes improving the patient experience of care (including quality and satisfaction), improving population health and reducing the per capita cost of healthcare.1 Since its inception more than 20 years ago, hospital medicine has been integral to providing high-quality care in the inpatient setting.2 Hospitalist participation in achieving local quality priorities is recognised by hospital executives as a key area of collaboration between hospital medicine groups and hospital administration.3
Healthcare spending constitutes nearly 18% of the gross domestic product in the USA.4 An estimated 30% of this expenditure is considered wasteful.5 The emphasis on quality was operationalised with the Centers for Medicare & Medicaid Services’ 2016 Inpatient Prospective Payment System (IPPS). The IPPS paved the way for a shift in reimbursement from quantity to quality with a reimbursement model based on quality metrics.6 Providers and hospitals have a significant opportunity for financial gain by improving quality metrics, as well as a substantial risk of losing money if they fail to meet quality goals.7
While outcomes on many quality metrics are related to hospital-based factors,8 variability in physician performance is also associated with variation in quality outcomes.9 10 Despite the emphasis on life-long self-directed learning and improvement, research has consistently shown that physicians have a limited ability to accurately assess their own performance.11 12
Gerteis et al13 asserted that performance feedback is pivotal to data-driven models of quality improvement. A Cochrane review of audit and feedback found a median absolute improvement of 4% in guideline-concordant care,14 whereas a recent systematic review15 and meta-analysis16 demonstrated that audit and feedback had a modest but significant effect on quality outcomes.
One way to provide feedback to hospitalists is through dashboards that deliver data on individual performance measured against agreed-upon standards or peer comparison.14 The evidence on whether dashboards improve quality of care is mixed.15 17 18 While dashboards have so far been used primarily in the outpatient setting,6 some hospitalist groups have deployed them in the inpatient setting with improvement in performance metrics.19–21
Despite the growing trend of providing feedback on quality metric performance, there remain limited data on the means, frequency and content of feedback that should be provided to physicians, particularly frontline hospitalists. There is also a paucity of published information on best practices for the mentoring and support that should accompany sharing of performance metrics with hospitalists. The Agency for Healthcare Research and Quality (AHRQ) has issued guidance addressing these areas. We report our experience with a comprehensive feedback system for frontline hospitalists, created and implemented using the AHRQ’s best practices for designing and operationalising confidential physician performance reports.22 We also report the change in our quality metrics after the implementation of this system.
Methods
This quality improvement project was conducted in the section of hospital medicine at the Medical College of Wisconsin in Milwaukee, USA. Our hospitalist group consists of 46 full-time equivalent (FTE) faculty members. Of the 46 members, 40 are daytime faculty, and the rest exclusively work night shifts. All daytime hospitalists were included in this project. The purpose of our performance feedback report was to provide ongoing feedback to our hospitalist faculty on their individual performance on outcome and process measures, along with a reminder of section targets and tactics for improvement. A performance feedback report for hospitalists was created based on the guidelines provided by the AHRQ. There are two documents included in the feedback report: an individual dashboard and a peer comparison report. The performance metrics chosen for this quality improvement project are presented in table 1.
Table 1. Performance metrics included in the performance feedback report

Category | Metric used |
Patient satisfaction | Hospital Consumer Assessment of Healthcare Providers and Systems scores for overall and provider communication domains |
Discharge metrics | Percentage of discharge summaries completed within 24 hours; percentage of discharge orders placed by 10:00 |
Service metrics | Proportion of division meetings attended; proportion of section meetings attended |
Quality metrics | Length of stay; readmission rate |
Infection metrics | Number of central line-associated bloodstream infections; number of catheter-associated urinary tract infections |
These metrics were chosen in active discussion with hospitalists who are leaders within the section of hospital medicine and the leadership team of our hospital partner. We considered other metrics such as mortality rate and utilisation of inpatient admission order sets for specific diseases for inclusion in the performance feedback report. Mortality rate was excluded because our group has historically done very well on mortality metrics. Order set utilisation rate was excluded because we did not have a robust mechanism to capture these data on an ongoing basis.
Included on each hospitalist’s dashboard are the individual provider’s performance on each metric and the fiscal year-to-date target goals for each metric (see figure 1). The rank order list arranges all faculty by performance on the following metrics: Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) scores, readmission rates, length of stay (LOS), attendance at care coordination rounds (CCRs) and discharge orders placed by 10:00 (see figure 2). The names of faculty meeting quality metric targets are unmasked, as is the name of the faculty member receiving the report. The names of underperforming faculty are masked. Because at least 5, and usually 10 or more, hospitalists underperform on each quality metric in any given month, it is not easy to determine who these hospitalists are. In any case, such ‘relative social ranking’ is an accepted way to provide confidential performance feedback to physicians.22
The methods by which data are collected and integrated to form the dashboard and rank order list are described below.
Dashboard data sources
We integrated data from multiple sources to develop hospitalist dashboards for the specified fiscal year-to-date period. Sources of data for the dashboard are listed in table 2. All data extraction and transformation were performed using Microsoft SQL Server Management Studio and RStudio.
Table 2. Sources of data for the dashboard

Performance metric | Data source |
Patient satisfaction | Hospital Consumer Assessment of Healthcare Providers and Systems data from the Centers for Medicare & Medicaid Services |
Discharge metrics | Electronic medical record (Epic Clarity) |
Readmission rate | Electronic medical record (Epic Clarity)* |
Length of stay | Vizient† |
Service metrics | Administrative reports generated by the section of hospital medicine |
CAUTI/CLABSI events | Hospital data, adjudicated by the infection control department |

*Data obtained after being vetted by Vizient.
†Vizient is a national database of clinical data.
CAUTI, catheter-associated urinary tract infection; CLABSI, central line-associated bloodstream infection.
Rank order list data sources
Six measures are currently displayed in the rankings: readmissions, LOS, HCAHPS scores (overall and provider communication), discharge orders placed by 10:00 and attendance at CCRs. Data for these measures are pulled from multiple sources, including Vizient (readmissions and LOS), Epic Clarity (patient discharge time) and Press Ganey (HCAHPS). Each provider’s National Provider Identifier (NPI) is included in the download when the data are pulled, ensuring a unique identifier for each provider. Each pulled data set is then uploaded into its own table within SQL Server. SQL Views are created to evaluate the ranking of each provider within each of the six measures. The SQL Views also assign colours based on the percentile of the provider’s performance or whether the provider meets the goals established by the hospitalist section. Microsoft Visual Studio is then used to assemble the data from the SQL Views into an HTML page, using the NPI to join the six data sets. We then created a separate HTML page for each provider, allowing each provider to see their own score and rank while ‘masking’ the scores of other providers who have not met the established goal.
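The paper does not include the view definitions themselves. The following is a minimal T-SQL sketch of one such ranking view, using hypothetical table, column and goal names (hcahps_provider_scores, section_goals and so on), to illustrate the rank, percentile, colour and masking pattern described above; the per-provider HTML step would then unmask only the recipient’s own row.

```sql
-- Minimal sketch of one ranking view (hypothetical schema).
-- hcahps_provider_scores(npi, provider_name, score): one row per provider.
-- section_goals(metric, goal): the section's target for each metric.
CREATE VIEW vw_rank_hcahps_provider AS
SELECT
    s.npi,
    s.score,
    RANK() OVER (ORDER BY s.score DESC) AS rank_in_section,
    -- Colour coding: green at or above the section goal, else by percentile.
    CASE
        WHEN s.score >= g.goal THEN 'green'
        WHEN PERCENT_RANK() OVER (ORDER BY s.score) >= 0.5 THEN 'yellow'
        ELSE 'red'
    END AS colour,
    -- Names of providers below goal are masked on everyone else's page.
    CASE WHEN s.score >= g.goal THEN s.provider_name ELSE 'masked' END AS display_name
FROM hcahps_provider_scores AS s
CROSS JOIN (SELECT goal FROM section_goals WHERE metric = 'HCAHPS_PROVIDER') AS g;
```

One such view per measure, each keyed on the NPI, would let the HTML assembly step join the six result sets into a single ranked report per provider.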
The performance feedback report, consisting of both the dashboard and rank order list, was sent to each individual provider on a monthly basis. Two data analysts generated the dashboard and rank order list for each hospitalist, and one administrative assistant in the section of hospital medicine collated the reports from both analysts into a performance feedback report for each hospitalist. In accordance with AHRQ guidelines, specific tactics to address each metric were shared with providers on a regular basis through a monthly newsletter and a Quality Improvement (QI) guide. The section of hospital medicine monthly newsletter listed each quality metric, its current fiscal year target, its current state and specific improvement tactics applicable to it. Some of these tactics were derived from existing best institutional practices. For instance, following appropriate indications for inserting Foley catheters, sending urinalysis samples only after replacing chronic Foley catheters and not ordering a urinalysis with reflex culture were among the tactics listed in the monthly newsletter to reduce catheter-associated urinary tract infections (CAUTIs). Tactics were also derived from existing QI projects. For instance, we had an ongoing QI project on calling primary care providers with a verbal sign-out when discharging a patient at high readmission risk; a reminder regarding this tactic was included in the monthly section newsletter and in the daily QI guide.

The purpose of the ‘daily QI guide’ was to provide a framework for incorporating tactics into a provider’s daily workflow by suggesting the best point in the workday for each tactic. For instance, the best time to write care coordination notes is before starting bedside rounds, so that discharge information is available to members of the care coordination team early in the day. Each hospitalist received his or her own laminated copy at the beginning of the year, and copies of the QI guide were posted in all common work areas.
The dashboard and rank order list were complemented by peer mentorship and support during monthly faculty meetings. The status of each quality metric was discussed, and specific tactics for improvement were reinforced on an ongoing basis. In addition, the names of the top performers on individual quality metrics were shared with the group quarterly during section meetings, and top performers were invited to share the best practices that led them to achieve high scores.
Finally, individualised in-person feedback on performance metrics was provided by the section chief in annual one-on-one meetings. During these meetings, the section chief discussed with the faculty member their performance on each metric, reviewed tactics and brainstormed barriers to achieving targets. After discussing these barriers, an individualised action plan was developed with each faculty member to improve his or her performance on quality metrics.
A discussion of performance metrics and tactics to achieve targets was included in the onboarding process for new faculty. This was key because our section onboarded 11 FTEs over the 1-year period covered by this report. The section chief then met with each new hire at 3 months to provide individualised one-on-one feedback and develop an individualised action plan.
Results
Starting July 2018, the dashboard and rank order list were sent to all faculty members every month. An improvement was seen in the following quality metrics after the introduction of performance feedback reports: LOS index, 30-day readmission rate, CAUTIs, central line-associated bloodstream infections (CLABSIs), provider communication component of HCAHPS scores, attendance at CCRs and percentage of discharge orders placed by 10:00. Table 3 shows the section’s performance on these metrics before (July 2017 to June 2018) and after (July 2018 to June 2019) the introduction of the performance feedback report.
Table 3. Section performance on quality metrics before and after the introduction of the performance feedback report

Quality metric | FY 2018 (July 2017 to June 2018) | FY 2019 (July 2018 to June 2019) |
HCAHPS provider communication | 74% | 74.9% |
Discharge summaries completed within 24 hours | 84.7% | 86% |
Discharge orders before 10:00 | 14.6% | 31.7% |
30-day readmission rate | 18.2% | 17.4% |
Length of stay index* | 0.97 | 0.90 |
CAUTI† | 11 | 1 |
CLABSI† | 7 | 5 |
Attendance at care coordination rounds | 61% | 67% |

*Observed to expected ratio.
†Number of events.
CAUTI, catheter-associated urinary tract infection; CLABSI, central line-associated bloodstream infection; FY, fiscal year; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems.
Control charts show trends in the 30-day readmission rate (figure 3), number of CAUTIs (figure 4), attendance at CCRs (figure 5), discharge orders placed by 10:00 (figure 6) and discharge summaries completed within 24 hours (figure 7) in the year before and the year after the introduction of performance feedback reports. A run of six or more data points on one side of the centre line of a control chart is called a ‘shift’ and indicates that the variation is due to a non-random change in the process rather than to random variation inherent in that process.23 Due to changes in how data are extracted and reported in our healthcare system, we did not have access to month-by-month data on HCAHPS by provider, CLABSIs or LOS index. We also correlated the change in individual providers’ performance on discharge orders placed by 10:00 (figure 8), 24-hour discharge summary completion rate (figure 9), provider HCAHPS scores (figure 10) and 30-day readmission rate (figure 11) with their baseline performance on the same metrics. Using Pearson’s correlation coefficient, there was a trend towards a negative correlation between baseline and change in proportion of discharge orders placed by 10:00 (r=−0.2675, p=0.08) and a statistically significant negative correlation between baseline and change in provider HCAHPS scores (r=−0.6713, p<0.00001), baseline and change in readmission rates (r=−0.8127, p<0.00001) and baseline and change in 24-hour discharge summary completion rate (r=−0.3327, p=0.02). We did not have provider-level data for LOS index and attendance at CCRs, and the number of CAUTI/CLABSI events per provider was too low to calculate a meaningful correlation.
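As an illustration of this analysis (the paper does not show the authors’ actual code, and the calculation could equally have been run in RStudio, which was part of the toolchain), Pearson’s r can be computed directly in T-SQL from a hypothetical table provider_metric(npi, baseline, delta) holding each provider’s baseline value and before/after change for one metric:

```sql
-- Pearson's r between baseline performance and subsequent change.
-- provider_metric(npi, baseline, delta) is a hypothetical table with FLOAT
-- columns. STDEVP is the population standard deviation, so the expression
-- implements r = (E[xy] - E[x]E[y]) / (sigma_x * sigma_y); NULLIF guards
-- against division by zero when a column has no variance.
SELECT
    (AVG(baseline * delta) - AVG(baseline) * AVG(delta))
    / NULLIF(STDEVP(baseline) * STDEVP(delta), 0) AS pearson_r
FROM provider_metric;
```

A negative pearson_r, as reported above, indicates that providers with lower baseline performance showed larger improvements.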
Discussion
Our study provides a practical approach to delivering feedback that can help improve quality outcomes.18 Once the templates had been set up, the time analysts spent generating the dashboard and rank order lists was considered manageable by their respective divisions. By spreading the tasks of generating and disseminating the feedback report among three individuals, we were able to create a sustainable process for auditing and disseminating feedback data.24
The goal of our feedback system, in combination with tactics for improvement, was to help faculty identify areas of workflow and practice improvement and to give faculty opportunities to implement tactics in quality domains in which they were consistently and significantly underperforming. While there is no consensus on the most effective way to implement provider feedback, feedback systems tend to be more effective when baseline performance is low,11 15 18 when feedback is provided by a supervisor or trusted colleague,15 18 when it is provided more than once,15–18 and when it is delivered in both verbal and written formats and includes explicit targets and an action plan.15 16 18 We incorporated many of these elements: tactics to improve performance were brainstormed in one-on-one meetings between frontline hospitalists and the section chief and through peer mentoring during monthly section meetings, and performance feedback data and improvement tactics were sent out monthly. Anecdotal feedback on the performance feedback reports and one-on-one meetings was positive, particularly from hospitalists straight out of residency training, who reported that the reports and meetings were helpful as an objective guide to potential areas of improvement.
We provided feedback to our hospitalists on readmission rates, overall and provider-specific HCAHPS scores, CAUTI/CLABSI events, LOS index, discharge orders placed by 10:00 and discharge summaries completed within 24 hours. The choice of these metrics was driven by discussion between the section of hospital medicine and our hospital management and reflected the top financial and operational priorities of our hospital, namely patient satisfaction, readmissions, healthcare-associated infections, LOS and hospital capacity. As the average financial support provided by hospitals is as high as $166 806 per FTE academic hospitalist (State of Hospital Medicine 2020 report, Society of Hospital Medicine),25 it is important to align performance improvement efforts with the needs and goals of the health system. We saw an improvement in all metrics included in the performance feedback reports except for overall HCAHPS scores. Overall HCAHPS scores are determined by patient responses to 19 questions across 7 domains, ranging from the quietness of the hospital environment to communication with doctors and nurses.26 We saw a slight improvement in our provider HCAHPS scores despite a decline in the overall HCAHPS score for our section between the before and after periods of our study.
We saw an inverse correlation between improvement on performance metrics and baseline performance on those metrics. This is consistent with previous literature suggesting greater improvement with feedback when baseline performance is low.15 We saw a weak correlation between baseline proportion of discharge orders placed by 10:00 and improvement on that metric, a moderate correlation between baseline and improvement in the proportion of discharge summaries completed within 24 hours, and a strong correlation between low baseline provider HCAHPS scores and improvement in provider HCAHPS scores. We also saw a strong correlation between high baseline 30-day readmission rates and a decline in 30-day readmission rates in the year after the introduction of performance feedback reports. These findings suggest that the potential for improvement is highest among hospitalists with the lowest baseline performance.
We made active efforts to engage the group as we started providing monthly feedback on performance metrics. This was particularly important because our inpatient census increased significantly over the same period in which we introduced the feedback system. We worked with our hospital management to increase our workforce allocation, increase the number of hospitalist teams and bring patient encounter numbers down to levels the group considered safe and manageable. This allowed the group to focus on tactics to improve performance metrics.
We solicited and incorporated feedback on ongoing additions and changes to the feedback system, and several changes were implemented as a result. For instance, a monthly readmission meeting was created to review patients readmitted within 72 hours; the purpose of these meetings was to discuss what could have been done differently, from both a clinical and a systems-based perspective, to prevent the readmission. In addition, each hospitalist was provided the medical record numbers (MRNs) of their patients with 30-day readmissions; these MRNs were added to the monthly feedback report to give faculty the opportunity to review the charts and reflect on what could have been done differently. We also changed the target for discharge summaries completed within 24 hours from 90% to 85%. All of these changes were suggested by frontline faculty.
We actively emphasised to the group, in electronic and in-person communication, that performance on many quality metrics depends on systems of care rather than solely on individual performance. In particular, 30-day readmissions,27 LOS index,28 hospital-acquired infections and HCAHPS scores29 were specifically identified as metrics that cannot be ‘fixed’ solely by an individual provider. For example, 30-day readmission rates tend to be multifactorial and may depend on system factors such as a patient’s ability to obtain and attend a follow-up appointment and to refill medications after discharge. The emphasis in faculty communication was on incorporating the clearly identified tactics assigned to each metric rather than on the metric itself. For example, an individual faculty member has more control over attendance at CCRs30 than over LOS. Process metrics that are under greater control of an individual provider were identified and communicated as such: percentage of discharge orders placed by 10:00, CCR attendance and percentage of discharge summaries completed within 24 hours. In addition, we tied a nominal portion (1%–2%) of total faculty compensation to meeting targets on these three process metrics. Faculty compensation was not affected by performance on systems-based metrics.
Our project has several limitations. First and foremost, it was conducted at a single tertiary academic medical centre, which limits generalisability. However, our group is fairly representative of other academic hospital medicine groups, as our hospitalists practise at a tertiary care hospital with both medical students and resident physicians. Another limitation is that other quality improvement projects addressing these same quality metrics were conducted simultaneously. Consequently, we are unable to establish a causal relationship between the performance feedback reports and the improvement seen in several of the quality metrics. An increase in the number of hospitalists hired by the section made our census per team more manageable; this could also have improved quality metrics by allowing staff more time to focus on their patients. The lack of information on balancing measures for each quality metric is also a limitation. For instance, rushing to complete discharge summaries within 24 hours can result in poorly constructed summaries; we did not evaluate the quality of our discharge summaries and are unable to determine whether this occurred. We did see an improvement in our LOS index despite an improvement in the proportion of discharge orders placed by 10:00. This was reassuring, as one study on improving the timing of discharge orders found an increase in LOS, with patients staying longer so that they could be discharged early the following day.31 A final limitation is that we did not formally survey our hospitalists on their reaction to the performance feedback reports. While faculty feedback was sought and incorporated on an ongoing basis via one-on-one and group meetings, it may be valuable to survey faculty on the impact and utility of the feedback reports.
Conclusion
We designed and implemented a monthly performance feedback report containing individual performance metrics and peer comparison for our frontline hospitalists. Feedback reports were complemented by ongoing guidance on tactics to help achieve quality targets, celebration of individual successes and peer mentoring on best practices by high performers. We saw an improvement in our group’s performance on almost all targeted process and outcome metrics. A performance feedback report that delivers individual performance metrics complemented by guidance on tactics to achieve these metrics can help create a culture of quality that can improve the quality of care delivered by a hospitalist group.
Footnotes
Twitter: @ankursegon
Contributors: BB contributed to the drafting of the paper, literature review and data analysis. SN contributed to the drafting of the paper and data collection and analysis. NW contributed to the drafting of the paper and data collection and analysis. RW contributed to the drafting of the paper and data collection and analysis. YS contributed to generating the project idea and drafting of the manuscript. AS contributed to generating the project idea, project implementation, data analysis and drafting of the manuscript.
Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Disclaimer: This project was deemed IRB exempt by our institution.
Competing interests: None declared.
Patient and public involvement: Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication: Not required.
Provenance and peer review: Not commissioned; externally peer reviewed.
Data availability statement: Data are available upon request.
References
1. The IHI Triple Aim. Institute for Healthcare Improvement. Available: http://www.ihi.org/Engage/Initiatives/TripleAim/Pages/default.aspx [Accessed 22 Apr 2020].
2. Wachter RM, Goldman L. Zero to 50,000 - the 20th anniversary of the hospitalist. N Engl J Med 2016;375:1009–11. doi:10.1056/NEJMp1607958
3. White AA, McIlraith T, Chivu AM, et al. Collaboration, not calculation: a qualitative study of how hospital executives value hospital medicine groups. J Hosp Med 2019;14:662–7. doi:10.12788/jhm.3249
4. National health expenditure data: historical. Centers for Medicare & Medicaid Services. Available: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical [Accessed 17 Dec 2020].
5. Shrank WH, Rogstad TL, Parekh N. Waste in the US health care system: estimated costs and potential for savings. JAMA 2019;322:1501–9. doi:10.1001/jama.2019.13978
6. IPPS proposed rule is more of the same with emphasis on quality. Hosp Case Manag 2015;23:92–4.
7. Make quality metrics work for you. ACP Hospitalist. Available: https://acphospitalist.org/archives/2014/06/quality-metrics.htm [Accessed 17 Dec 2020].
8. Burns LR, Wholey DR. The effects of patient, hospital, and physician characteristics on length of stay and mortality. Med Care 1991;29:251–71. doi:10.1097/00005650-199103000-00007
9. Gutacker N, Bloor K, Bojke C, et al. Should interventions to reduce variation in care quality target doctors or hospitals? Health Policy 2018;122:660–6. doi:10.1016/j.healthpol.2018.04.004
10. Romano P, Hussey P, Ritley D. Selecting quality and resource use measures: a decision guide for community quality collaboratives. 2010. Available: www.ahrq.gov [Accessed 17 Dec 2020].
11. Davis DA, Mazmanian PE, Fordis M, et al. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA 2006;296:1094. doi:10.1001/jama.296.9.1094
12. Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med 1991;66:762–9. doi:10.1097/00001888-199112000-00012
13. Gerteis M, Peikes D, Ghosh A, et al. Uses and limitations of claims-based performance feedback reports: lessons from the comprehensive primary care initiative. J Healthc Qual 2018;40:187–93. doi:10.1097/JHQ.0000000000000099
14. Ivers NM, Barrett J. Using report cards and dashboards to drive quality improvement: lessons learnt and lessons still to learn. BMJ Qual Saf 2018.
15. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2012:CD000259. doi:10.1002/14651858.CD000259.pub3
16. Hysong SJ. Meta-analysis: audit and feedback features impact effectiveness on care quality. Med Care 2009;47:356–63. doi:10.1097/MLR.0b013e3181893f6b
17. Grimshaw JM, Thomas RE, MacLennan G. Effectiveness and efficiency of guideline dissemination and implementation strategies. Int J Technol Assess Health Care 2005.
18. Ivers NM, Grimshaw JM, Jamtvedt G, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med 2014;29:1534–41. doi:10.1007/s11606-014-2913-y
19. Patel S, Rajkomar A, Harrison JD, et al. Next-generation audit and feedback for inpatient quality improvement using electronic health record data: a cluster randomised controlled trial. BMJ Qual Saf 2018;27:691–9. doi:10.1136/bmjqs-2017-007393
20. Fantasy physician leagues? Introducing the physician equivalent of the QBR (quarterly metric-based rating). SHM Abstracts. Available: https://www.shmabstracts.com/abstract/fantasy-physician-leagues-introducing-the-physician-equivalent-of-the-qbr-quarterly-metric-based-rating/ [Accessed 8 Apr 2020].
21. Linking physician schedules to the electronic health record to provide real-time, individualized data feedback. SHM Abstracts. Available: https://shmabstracts.org/abstract/linking-physician-schedules-to-the-electronic-health-record-to-provide-real-time-individualized-data-feedback/ [Accessed 17 Dec 2020].
22. Agency for Healthcare Research and Quality. Confidential physician feedback reports: designing for optimal impact on performance. Available: www.ahrq.gov [Accessed 17 Dec 2020].
23. Provost LP, Murray SK. The health care data guide: learning from data for improvement. San Francisco, CA: Jossey-Bass, 2011.
24. Foy R, Eccles MP, Jamtvedt G, et al. What do we know about how to do audit and feedback? Pitfalls in applying evidence from a systematic review. BMC Health Serv Res 2005;5:50. doi:10.1186/1472-6963-5-50
25. Society of Hospital Medicine. State of Hospital Medicine report 2020. Available: https://www.hospitalmedicine.org/ [Accessed 17 Dec 2020].
26. HCAHPS: Patients’ Perspectives of Care Survey. Centers for Medicare & Medicaid Services. Available: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/HospitalHCAHPS [Accessed 17 Dec 2020].
27. Kripalani S, Theobald CN, Anctil B, et al. Reducing hospital readmission rates: current strategies and future directions. Annu Rev Med 2014;65:471–85. doi:10.1146/annurev-med-022613-090415
28. Baek H, Cho M, Kim S, et al. Analysis of length of hospital stay using electronic health records: a statistical and data mining approach. PLoS One 2018;13:e0195901. doi:10.1371/journal.pone.0195901
29. McFarland DC, Ornstein KA, Holcombe RF. Demographic factors and hospital size predict patient satisfaction variance: implications for hospital value-based purchasing. J Hosp Med 2015;10:503–9. doi:10.1002/jhm.2371
30. The impact of a multidisciplinary care coordination protocol on patient-centered outcomes at an academic medical center. Journal of Clinical Pathways. Available: https://www.journalofclinicalpathways.com/article/impact-multidisciplinary-care-coordination-protocol-patient-centered-outcomes-academic
31. Rajkomar A, Valencia V, Novelero M, et al. The association between discharge before noon and length of stay in medical and surgical patients. J Hosp Med 2016;11:859–61. doi:10.1002/jhm.2529