Key Points
Question
What is the annual cost to an acute care hospital of measuring and reporting quality metric data, excluding resources spent on quality improvement interventions?
Findings
Preparing and reporting data for 162 unique quality metrics required an estimated 108 478 person-hours, costing an estimated $5 038 218.28 (2022 USD) in personnel costs plus an additional $602 730.66 in vendor fees. Claims-based and chart-abstracted metrics used the most resources per metric, while electronic metrics consumed far less.
Meaning
Policy makers should consider reducing the number of metrics and shifting to electronic metrics, when possible, to optimize resources spent in the overall pursuit of higher quality.
Abstract
Importance
US hospitals report data on many health care quality metrics to government and independent health care rating organizations, but the annual cost to acute care hospitals of measuring and reporting quality metric data, independent of resources spent on quality interventions, is not well known.
Objective
To evaluate externally reported inpatient quality metrics for adult patients and estimate the cost of data collection and reporting, independent of quality-improvement efforts.
Design, Setting, and Participants
Retrospective time-driven activity-based costing study at the Johns Hopkins Hospital (Baltimore, Maryland). Hospital personnel involved in quality metric reporting were interviewed between January 1, 2019, and June 30, 2019, about quality reporting activities during the 2018 calendar year.
Main Outcomes and Measures
Outcomes included the number of metrics, annual person-hours per metric type, and annual personnel cost per metric type.
Results
A total of 162 unique metrics were identified, of which 96 (59.3%) were claims-based, 107 (66.0%) were outcome metrics, and 101 (62.3%) were related to patient safety. Preparing and reporting data for these metrics required an estimated 108 478 person-hours, with an estimated personnel cost of $5 038 218.28 (2022 USD) plus an additional $602 730.66 in vendor fees. Claims-based (96 metrics; $37 553.58 per metric per year) and chart-abstracted (26 metrics; $33 871.30 per metric per year) metrics used the most resources per metric, while electronic metrics consumed far less (4 metrics; $1901.58 per metric per year).
Conclusions and Relevance
Significant resources are expended exclusively for quality reporting, and some methods of quality assessment are far more expensive than others. Claims-based metrics were unexpectedly found to be the most resource intensive of all metric types. Policy makers should consider reducing the number of metrics and shifting to electronic metrics, when possible, to optimize resources spent in the overall pursuit of higher quality.
This study examines externally reported inpatient quality metrics for adult patients and estimates the cost of data collection and reporting, independent of quality-improvement efforts.
Introduction
Health care costs in the US are high and rising, with expenditures increasing from 8.9% of gross domestic product in 1980 to 19.7% in 2020.1 In response, policy makers have promoted payment models encouraging value, seeking to optimize care quality while reducing costs.2 Pay-for-performance programs and risk- and quality-adjusted capitation models are leading examples of this strategy.3,4
Multiple government and private organizations (eg, the Centers for Medicare & Medicaid Services [CMS], The Leapfrog Group) have created and promoted quality metrics and ratings, but these efforts have generally not been coordinated. Over the past several years, concern has grown about the financial burden to the US health care system of multiple requirements for quality measurement and reporting.5,6 Health care administration costs are estimated to make up 15% to 30% of total national health care spending,7 and pay-for-performance approaches that unintentionally incentivize increased spending on chart review and coding optimization may contribute. To date, few reports have quantified all of the costs of quality measurement and reporting8,9 independent of resources spent working to improve quality and safety.
Quantifying the burden of reporting could increase understanding of the overall “cost-effectiveness” of quality measurement, creating opportunities for policy makers to improve hospital resource allocation efficiency and care quality. It is particularly important in capitated, value-based systems, such as Maryland’s Global Budget Revenue model, which is often seen as a demonstration case for future national programs.10 To inform such policymaking, we identified quality and safety metrics reported annually at the Johns Hopkins Hospital, a large academic medical center in Maryland. We then used time-driven activity-based costing11,12 to estimate annual personnel time and financial cost to the hospital of quality metric reporting (excluding time spent designing or implementing quality improvement interventions). The Johns Hopkins Hospital reports metrics generally relevant throughout the US as well as Maryland-specific metrics. We focused on metrics imposed by government agencies and by widely followed national rating organizations.
Methods
Definitions
As defined by the National Quality Forum, a metric (or measure) is a “numeric quantification of health care quality for a designated healthcare provider” consisting of a unique set of specifications outlining how it should be built and calculated.13 For example, the Patient Safety Indicator 04 is a unique metric. In 2018, the Johns Hopkins Hospital reported Patient Safety Indicator 04 results to 4 organizations (CMS, The Leapfrog Group, the Maryland Health Care Commission, and US News & World Report).
Quality Metric Inventory and Characterization
We used a database compiled and routinely updated by our hospital quality department to assemble a comprehensive list of metrics reported in 2018 to 7 government and nationally prominent health care rating organizations. We included only adult inpatient and emergency department metrics. We excluded metrics used only for specialty-specific (eg, cardiac catheterization) registries or certification because such metrics may not be relevant to many hospitals. Similarly, we excluded commercial payer metrics, which, in 2018, consisted of reports issued in response to independent requests from payers. We estimate that the collective effort was relatively small (<5% of the time devoted to metrics included in this study). Moreover, there was wide variation from payer to payer and from year to year. In excluding both specialty-specific registry and commercial payer reporting, we aimed to create a more generalizable base case for hospitals nationally, accepting some potential underestimation in these 2 highly variable aspects.
Metrics were characterized using 3 frameworks: (1) method of ascertainment (claims-based, chart-abstracted, electronic [measured and reported through the electronic health record], survey or direct reporting from patients or staff, or summative [metrics reporting overall assessments of hospital care and assembled from combinations of metrics from the prior categories]); (2) the Donabedian model of structure-process-outcome,14 modified to include patient satisfaction and payment categories; and (3) the National Academy of Medicine (formerly the Institute of Medicine) quality domains of safety, effectiveness, efficiency, timeliness, equity, and patient-centeredness.15 The Donabedian14 and National Academy of Medicine15 frameworks for categorizing quality metrics are 2 of the most widely used schemata for conceptualizing care quality. For each framework, 1 author (S.J.M.) assigned a characteristic to each metric and a second author (S.A.B.) reviewed the assignment. Further details on measure characterization are provided in Supplement 1.
Data Collection and Time-Driven Activity-Based Cost Analysis
From January 1, 2019, to June 30, 2019, 1 investigator (S.J.M.) conducted in-person, semi-structured interviews with quality metric reporting personnel. No individuals who were asked to be interviewed declined. To encourage unbiased reporting about time spent, we promised anonymity, recording only role type and not names or other identifiers. We completed 75 interviews, collecting information on the work of 168 personnel. Of these, the largest share (n = 46) was in clinical leadership (eg, departmental physician vice chair), followed by quality improvement and clinical documentation (Table 1). Twenty-two individuals were physicians and 59 (including 88% of hospital quality improvement staff) had nursing degrees.
Table 1. Personnel Interviewed and Included in a Study on the Cost of Quality Metric Reporting^a

| Category | Personnel interviewed | Personnel represented in interviews | Example roles within category (description relevant to time since completing training) | Qualifications | Clinical and administrative role |
| --- | --- | --- | --- | --- | --- |
| Clinical leadership | 21 | 21 | Department vice chairs for quality and safety (typically ≥10 y since training, associate or full professor rank) | MD | Yes |
| | 6 | 13 | Certified wound care nurse (≥10 y bedside experience) | RN | Yes |
| | | | Utilization review nurse (≥5 y bedside experience) | RN | No |
| | 1 | 12 | Infection control practitioner (varying experience) | Varied (RN and/or master's or doctoral degree) | No |
| Quality improvement (QI) | 17 | 30 | QI specialist, QI team lead (≥5 y bedside experience) | RN and/or master's degree | No |
| | 1 | 1 | Project manager (varying experience) | Bachelor's degree, MBA | No |
| | 3 | 3 | Director, assistant director (≥5 y in quality improvement) | Master's or doctoral degree | No |
| Analytics | 5 | 8 | Systems engineers and systems architects (varying experience, range of <3 y to ≥10 y) | Bachelor's and/or master's degree | No |
| | 3 | 9 | Project managers and safety data coordinators (1-5 y experience) | Bachelor's and/or master's degree | No |
| | 4 | 4 | Analysts (experience varying from 2 to ≥10 y) | Bachelor's and/or master's degree | No |
| Health information management (coding department)^b | 1 | 8 | Coding validator (varying experience) | Bachelor's degree (few RN) | No |
| | 2 | 2 | Director (≥10 y in coding and documentation) | Master's degree | No |
| Clinical documentation excellence | 3 | 16 | Clinical documentation specialist, documentation educators (3 to ≥5 y bedside experience) | RN | No |
| | 1 | 1 | Senior director (≥10 y in clinical documentation and ≥5 y bedside experience) | Master's degree | No |
| | 1 | 15 | Analysts (varying experience) | Bachelor's and/or master's degree | No |
| Financial | 2 | 2 | Directors (≥10 y in hospital financial management) | Master's degree | No |
| | 1 | 20 | Billing coordinators (varying experience) | Bachelor's degree | No |
| Executive leadership | 3 | 3 | Director-level and senior vice president–level hospital leaders (≥20 y in health care and ≥15 y clinical experience for MD) | MD, doctorate, and/or master's degree in business or health care | Yes |
| Total | 75 | 168 | | | |

^a In all, 75 interviews represented the work of 168 personnel.
^b Health information management (coding) leaders are involved in designing clinical documentation queries and other data collection tools critical for collecting and validating quality data.
Based on a previously described model for outpatient metrics,8 we assessed time spent on quality reporting in the following mutually exclusive categories: (1) entering information exclusively for the purposes of quality reporting, (2) reviewing quality reports from external entities, (3) tracking quality metric specifications, (4) developing and implementing data collection processes, and (5) collecting and validating data to be used in quality measurement. Respondents recalled hours spent on data-reporting activities in the previous calendar year (2018) by themselves and by any direct-reporting staff whose time they felt confident reporting. They were instructed to report only time spent on quality metric data acquisition and reporting, excluding all time spent designing and implementing quality-improvement interventions. Respondents could provide time estimates in whatever format fit their work (eg, hours per week, "X hours twice per year"); the interviewer later annualized all estimates for cost estimation. Respondents were asked to differentiate time spent among metric types (eg, claims-based, chart-abstracted); if they were unable to, they could attribute their total time as "uncategorized" by type of metric. Because summative metrics (eg, the CMS star rating) are higher-level metrics formed from combinations of other metrics whose time was already counted, we did not attribute additional personnel time to them, but we retained these metrics because they are widely recognized.
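Because respondents answered in whatever unit fit their work, the annualization step amounted to a simple unit conversion. A minimal sketch follows; the figures and function name are hypothetical, and applying the study's 48-week work year to weekly estimates is our illustrative assumption, not the study's documented procedure.

```python
# Illustrative only: annualizing the heterogeneous time formats
# respondents used (eg, "2 hours/week" or "5 hours twice a year").
# The 48-week year mirrors the salary assumption described below.

def annualize(hours_per_occurrence: float, occurrences_per_year: float) -> float:
    """Convert a recurring time estimate to person-hours per year."""
    return hours_per_occurrence * occurrences_per_year

weekly = annualize(2, 48)    # "2 hours/week" -> 96 person-hours/year
episodic = annualize(5, 2)   # "5 hours twice a year" -> 10 person-hours/year
```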
Because quality data-reporting activities overlap with other aspects of hospital data acquisition and reporting, including billing, accreditation, and research, we excluded time and costs related to these activities to ensure that we present as conservative an estimate as possible about quality metric reporting. Importantly, we included staff (mostly clinical documentation specialist) time devoted to generating clinical documentation queries for quality metrics. Although many documentation queries are generated for billing, respondents felt they could accurately apportion time. In contrast, we conservatively excluded physician and advanced care clinician time answering queries because we would not expect them to distinguish billing from quality-related queries.
Personnel costs were calculated using time-driven activity-based costing.16,17 With the goal of estimating nationally relevant figures, we acquired representative salary data (excluding benefits and bonuses) from the 2017-2018 Association of American Medical Colleges annual faculty survey, institutional human resources national median salary references, and Glassdoor.com, using nationwide median salaries for analogous positions. All currency values were adjusted for inflation to 2022 US dollars using the US city-average Consumer Price Index.18 Mean individual hourly salaries were estimated by dividing annual salary by annual hours worked (weekly hours multiplied by an assumed 48-week work year). Total personnel cost to the institution was calculated by multiplying these hourly rates by reported hours spent on quality data-reporting tasks. A sensitivity analysis repeated this calculation using the minimum and maximum salary estimates. We estimated 2018 calendar year costs of contracts with analytics and ratings firms by averaging the contract costs for fiscal years 2018 and 2019.
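A minimal sketch of this time-driven activity-based costing calculation is below. All inputs (the salary, weekly hours, reporting hours, and sensitivity bounds) are hypothetical; the CPI values are the published BLS annual averages but are included here only for illustration.

```python
# Sketch of the per-person costing described above, with hypothetical inputs.
# The study drew salaries from the AAMC faculty survey, institutional HR
# references, and Glassdoor.com, then adjusted to 2022 USD via the CPI.

CPI_2018 = 251.107  # BLS US city-average annual CPI-U, 2018
CPI_2022 = 292.655  # BLS US city-average annual CPI-U, 2022

def hourly_rate(annual_salary: float, weekly_hours: float) -> float:
    """Hourly salary, assuming a 48-week work year (per the study)."""
    return annual_salary / (48 * weekly_hours)

def reporting_cost_2022(annual_salary: float, weekly_hours: float,
                        annual_reporting_hours: float) -> float:
    """Annual personnel cost of quality-reporting time, in 2022 USD."""
    cost_2018 = hourly_rate(annual_salary, weekly_hours) * annual_reporting_hours
    return cost_2018 * (CPI_2022 / CPI_2018)

# Hypothetical QI specialist: $95,000/y salary, 40 h/wk, 300 h/y of
# quality-reporting work; sensitivity bounds from min/max salary estimates.
point = reporting_cost_2022(95_000, 40, 300)
low = reporting_cost_2022(80_000, 40, 300)
high = reporting_cost_2022(120_000, 40, 300)
print(f"${point:,.2f} (certainty interval ${low:,.2f}-${high:,.2f})")
```

Summing such per-person figures across all interviewed roles and metric types yields the institution-level totals reported in the Results.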
Results
Reported Metrics
A total of 162 unique metrics were identified that the hospital reported to 7 measuring organizations in 2018 (Table 2), comprising 271 reports in total (some metrics were reported to multiple organizations). The majority of reports (191 [70.5%]) contributed to publicly available hospital quality reporting but did not affect payment; 73 (26.9%) were included in pay-for-performance programs, and 7 (2.6%) were monitored by external agencies but were neither publicly reported nor included in payment programs. Of the 162 unique metrics (a complete list is available in the eTable in Supplement 2), the most common method of ascertainment was claims-based (96 metrics [59.3%]), followed by survey or direct reporting from patients or staff (32 [19.8%]), chart-abstracted (26 [16.0%]), electronic (4 [2.5%]), and summative (4 [2.5%]) (Figure 1A). When evaluated using the Donabedian framework, the most common category was outcome, with 107 (66.0%) metrics (Figure 1B). When characterized by National Academy of Medicine domains, safety was the most frequent domain, with 101 (62.3%) metrics (Figure 1C).
Table 2. Hospital-Based Metrics Reported by a Large Academic Medical Center in Maryland by Method of Ascertainment (No. of Metrics Reported to Each Government or Independent Rating Organization^a)

| Ascertainment method | HSCRC^b | CMS | MHCC^b | The Leapfrog Group | TJC | USNWR | ANCC^c | Total^d | Unique metrics^e |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Claims-based | 62 | 30 | 30 | 10 | 0 | 5 | 0 | 137 | 96 |
| Chart-abstracted | 9 | 20 | 15 | 8 | 13 | 1 | 0 | 66 | 26 |
| Electronic | 0 | 4 | 0 | 0 | 4 | 0 | 0 | 8 | 4 |
| Survey or direct reporting from patients or staff | 8 | 16 | 12 | 15 | 0 | 2 | 3 | 56 | 32 |
| Summative | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 4 | 4 |
| Total | 80 | 71 | 57 | 34 | 17 | 9 | 3 | 271 | 162 |

Abbreviations: TJC, The Joint Commission; USNWR, US News & World Report.
^a All metrics are hospital metrics, not physician practice metrics (eg, they are not under the Centers for Medicare & Medicaid Services [CMS] Merit-Based Incentive Payment System).
^b The Maryland Health Services Cost Review Commission (HSCRC) and Maryland Health Care Commission (MHCC) are regulatory bodies in Maryland that oversee the state's unique CMS waiver program that monitors hospital costs and quality statewide.
^c The American Nurses Credentialing Center (ANCC) is responsible for the Magnet Recognition Program.
^d Values for row totals sum to more than the number of unique metrics because of instances of reporting the same metric to more than 1 organization.
^e Among the 162 unique metrics, 97 are reported by hospitals in all 50 states and 65 are reported only to the HSCRC and/or MHCC.
Figure 1. Categorization of Unique Quality Metrics (N = 162) Reported by a Large Academic Medical Center in Maryland.
The Donabedian14 and National Academy of Medicine (NAM)15 frameworks for categorizing quality metrics are two of the most widely used schemata for conceptualizing care quality. Metric codes are commonly used brief names applied by metric developers including Centers for Medicare & Medicaid Services (CMS), The Joint Commission, the Agency for Healthcare Research and Quality, and others. Additional detail on specific metrics can be found in the eTable in Supplement 2. AMI indicates acute myocardial infarction; CLABSI, central line–associated bloodstream infection; HAI, hospital-acquired infection; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; PSI, Patient Safety Indicators; and USNWR, US News & World Report.
Person-Hours and Personnel Cost of Quality Data Reporting
Across all metric categories (claims-based, chart-abstracted, electronic, survey or direct reporting from patients or staff, and uncategorized [time respondents could not differentiate by metric type]), a total of 108 478 person-hours were spent annually, the majority of which were consumed by data collection and validation (71 078 hours [65.5%]), followed by reviewing reports (15 758 hours [14.5%]), developing processes (13 680 hours [12.6%]), tracking specifications (7019 hours [6.5%]), and entering information (943 hours [0.9%]) (Figure 2A). By metric category, claims-based (80 218 hours [73.9%]) and chart-abstracted (17 975 hours [16.6%]) metrics took up most of the time, followed by uncategorized time (10 068 hours [9.3%]), electronic metrics (159 hours [0.1%]), and survey or direct reporting from patients or staff (59 hours [0.05%]) (Table 3).
Figure 2. Estimated Person-Hour and Personnel Cost by Quality Reporting Activity at a Large Academic Medical Center in Maryland.
A, Reported annual person-hours spent on various tasks for different types of quality metrics. B, Estimated total annual personnel cost to the institution of various aspects of quality data reporting and different types of quality metrics. C, Estimated per-metric annual personnel cost to the institution of various aspects of quality data reporting and different types of quality metrics. Whiskers represent upper and lower certainty intervals from salary ranges. Note the logarithmic scale on the y-axis. Vendor fees for the 2018 calendar year ($602 730.66 in total, as detailed in the text) are excluded. All dollar amounts were inflation-adjusted to 2022 USD using the US city-average Consumer Price Index (Bureau of Labor Statistics).
Table 3. Estimated Total and Per-Metric Person-Hours and Personnel Costs by Type of Quality Metric at a Large Academic Medical Center in Maryland

| Metric type | Unique metrics | Person-hours per year | Person-hours per metric per year | Personnel cost per year (certainty interval), $^a | Personnel cost per metric per year (certainty interval), $ |
| --- | --- | --- | --- | --- | --- |
| Claims-based | 96 | 80 218 | 836 | 3 605 144.01 (2 377 146.05-7 601 599.78) | 37 553.58 (24 761.94-79 183.33) |
| Chart-abstracted | 26 | 17 975 | 691 | 880 653.76 (545 462.54-1 648 438.78) | 33 871.30 (20 979.33-63 401.49) |
| Electronic | 4 | 159 | 40 | 7606.31 (4401.77-16 836.27) | 1901.58 (1100.44-4209.07) |
| Survey or direct reporting from patients or staff | 32 | 59 | 2 | 8622.89 (3878.16-16 777.35) | 269.47 (121.19-524.29) |
| Summative^b | 4 | | | | |
| Uncategorized^c | | 10 068 | | 536 191.31 (327 491.74-1 241 080.24) | |
| Total | 162 | 108 478 | | 5 038 218.28 (3 258 380.27-10 524 732.43) | |

^a These data exclude vendor fees for the 2018 calendar year, which totaled $602 730.66, as detailed in the text. All dollar amounts are inflation-adjusted to 2022 USD using the US city-average Consumer Price Index (Bureau of Labor Statistics).
^b Because summative metrics (eg, the Centers for Medicare & Medicaid Services star rating) are higher-level metrics formed from combinations of other metrics whose time was already counted, no time or personnel cost was attributed to them; they were retained in reporting for completeness and because they are widely recognized.
^c Some respondents were unable to categorize their time by metric type and chose to report their time as "uncategorized."
Accounting for the annual salaries of the individuals involved, the total annual personnel cost of quality data reporting was estimated to be $5 038 218.28, with claims-based metrics making up an estimated $3 605 144.01 (71.6%) of that cost (Table 3). Chart-abstracted metrics also made up a considerable fraction of the total personnel cost ($880 653.76 [17.5%]), followed by uncategorized time ($536 191.31 [10.6%]), survey or direct reporting from patients or staff metrics ($8622.89 [0.2%]), and electronic metrics ($7606.31 [0.2%]). Data collection and validation was the costliest activity ($3 087 698.08 [61.3%]), followed by reviewing reports ($855 179.39 [17.0%]) (Figure 2B-C). A sensitivity analysis using the lowest and highest salary estimates resulted in a certainty interval of $3 258 380.27 to $10 524 732.43 (Table 3).
Claims-based and chart-abstracted metrics required the most per-metric person-hours (836 and 691 person-hours per metric per year, respectively) and highest costs ($37 553.58 and $33 871.30 per metric per year, respectively), while electronic metrics required less (40 person-hours and $1901.58 per metric per year) (Table 3). Personnel cost and person-hour resources that participants could not differentiate by metric type (uncategorized) were excluded from the per-metric analysis. Total 2018 calendar year vendor fees (inflation-adjusted to 2022) for US News & World Report, Vizient, and a patient experience survey vendor were estimated to be $602 730.66 (total number presented to protect proprietary contractual information).
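Each per-metric figure is simply the category's annual personnel cost divided by its count of unique metrics; for example, for claims-based metrics:

$$
\frac{\$3\,605\,144.01}{96\ \text{metrics}} \approx \$37\,553.58\ \text{per metric per year.}
$$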
Discussion
At a single academic medical center, 162 unique metrics were reported to external organizations in 2018. Using time-driven activity-based costing, it was conservatively estimated that preparing and reporting data for external quality assessments for 1 hospital consumes approximately 108 000 person-hours at a personnel cost of approximately $5.0 million per year, with an additional approximately $600 000 in related vendor contracts. These estimates notably exclude time spent on quality improvement activities and only pertain to quality data preparation and reporting. The most resource-intensive measures were claims-based (836 person-hours/metric; $37 553.58/metric) and chart-abstracted (691 person-hours/metric; $33 871.30/metric) metrics.
The $5.6 million per year spent on quality reporting is a small fraction of the Johns Hopkins Hospital's $2.4 billion in annual expenses,19 although the proportional burden may be higher at smaller hospitals. Regardless, the current findings, when extrapolated to the more than 4100 acute care, nongovernment hospitals in the US,20 suggest annual nationwide expenditures in the billions of dollars simply for reporting quality data.
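That order of magnitude follows from a rough back-of-the-envelope extrapolation, under the strong assumption that this single academic center's burden is representative of acute care hospitals generally:

$$
4100\ \text{hospitals} \times \$5.6\ \text{million per hospital} \approx \$23\ \text{billion per year.}
$$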
The overall $5.6 million per year is remarkably close to an estimate cited in a National Academy of Medicine survey of health executives on the cost of their hospitals' quality measurement infrastructures.21 Another study surveyed administrators at 3 academic medical centers about the annual cost of collecting and validating data for the CMS sepsis core measure alone, finding figures ranging from $134 000 to $2 million.22 Although the current study examined the question with in-depth cost accounting, the concordance is reassuring. Several studies have addressed the outpatient setting. A health data analytics firm that supports primary care accountable care organizations reported 108 metrics across its various accountable care organization contracts.23 Another group estimated that, nationally, outpatient practice quality data reporting in 3 specialties costs $15.4 billion annually and requires 15 hours of staff time per physician per week.8 Finally, a study reported the annual cost to primary care practices of responding to insurer queries for 4 quality reporting programs to be less than $100 to $4300 per physician.24
Another finding from the current study that bears emphasizing is the resource intensity of claims-based and chart-abstracted metrics. Although chart-abstracted metrics might be expected to be resource intensive given the manual chart review, claims-based metrics surprisingly represented the most time-consuming type of metric, despite being generated from administrative data “collected anyway” for billing. It is possible that this stems from the challenge in accurately representing patient health status in International Classification of Diseases–coded data. Validating administrative data often requires confirming the specificity of a diagnosis (eg, pneumonia or sepsis resulting from pneumonia) and whether risk-modifying comorbidities were present on admission. Although the order or volume of comorbidity codes or the “present on admission” indicator may not affect billing, these details can significantly alter performance on risk-adjusted claims-based metrics. Thus, hospitals have invested in a large infrastructure (mostly clinical documentation specialists and quality specialists) to review documentation and increase the specificity and accuracy of coding. Note that optimizing codes represents data collection and validation, not actual care quality improvement, because accurately depicting current care is distinct from making efforts to modify current care practices.
Conversely, electronic clinical quality measures (automatically reported from electronic health record data) were found to be much less resource intensive. Notably, there were only 4 electronic clinical quality measures that the hospital reported to external organizations in 2018. This low volume may reflect the newness of this type of measure and the difficulty of designing high-quality electronic clinical quality measures. Certainly not all key areas of health care quality could be explored with electronic clinical quality measures, but there may be a role for increasing investment in the design of more effective ones, potentially with direct involvement of electronic health record vendors.
Notably, the total annual personnel cost for survey or direct reporting from patients or staff metrics ($8622.89) would be much higher when including the related vendor fees. Because vendor fees may vary widely, these costs were kept separate in reporting.
As pay-for-performance systems continue to grow in US health care, we must consider quality measurement’s cost-effectiveness. Although this study does not broach this nuanced issue, fundamental and revealing work on quality metric cost-effectiveness is underway, primarily in outpatient settings.8,24,25,26 Recent work has demonstrated that high performance by primary care physicians in Medicare’s Merit-based Incentive Payment System may not correlate with higher-quality patient care, but is strongly associated with physician practice size and health system enrollment, implicating the impact of a large infrastructure to collect, validate, and report data for ambulatory care metrics.26
As efforts to refine the implementation of quality measurement continue, policy makers, quality metric designers, and hospital executives must consider the costs associated with current hospital quality metric reporting. Possible avenues for reducing the burden of inpatient quality reporting include reducing the number of overall metrics and investing in the development of electronic metrics that may be immediately representative of patient illness and care provided without requiring multiple reviews of diagnosis code sets or extensive chart reviews.
Limitations
This study has several limitations. First, the results likely underestimate total resources and cost involved in quality reporting for several reasons. Time spent on activities impacting quality metrics but primarily directed at other goals (registry abstraction, accreditation, billing, and research) was excluded, as well as physician and advanced care clinician time spent answering queries. Vendor fees for staff engagement surveys, such as the safety culture survey, were also not included because these overlap with human resources aims and are not strictly quality data reporting. Next, although a systematic approach was used in the interview process, it is possible that there were individuals not included in this estimate. Private insurer metrics were additionally not considered, and this work was limited to considering inpatient quality metrics. Finally, bonuses and benefits were not included in salary estimates. These decisions were made a priori with the aim of producing a conservative, but more generalizable, base case estimate. Second, individual recall guided time estimates, which has some inherent unreliability but is the standard used by previous studies. Third, the interviews and data collection were completed by a single individual, which could introduce bias in both respondents’ responses and the interviewer’s interpretation of the responses. Fourth, this study only includes data from a single hospital, one that is large and has a dedicated quality review department, which may not be generalizable to all US hospitals. However, most hospitals have some form of quality review staff with a reasonably similar review process.
Fifth, these cost estimates include 65 Maryland-specific metrics required for pay-for-performance in Maryland under a CMS waiver.4 However, it is unlikely that the estimate is highly inflated compared with the rest of the country because 60 of the 65 Maryland-specific metrics are individual potentially preventable conditions within the Maryland Hospital Acquired Conditions program. Although represented as 60 additional metrics, the bulk of documentation review and validation for these conditions is performed globally, with risk-adjustment covariates and exclusions applying broadly across the program. Senior hospital quality leaders at the Johns Hopkins Hospital estimate that 5% or less of quality reporting effort is spent, collectively, on Maryland Hospital Acquired Conditions reporting and that, even including the other 5 Maryland-specific metrics, Maryland-specific metrics should inflate the annual reporting cost by no more than 10% relative to non-Maryland hospitals, making the cost estimates in this article relatively generalizable.
Conclusions
In 2018, the Johns Hopkins Hospital reported results for 162 unique quality metrics to government and nationally prominent private agencies, 97 of which were applicable to hospitals nationwide. Preparing quality data for submission to these external organizations required approximately 108 000 person-hours and approximately $5.0 million in personnel costs, plus approximately $600 000 for related vendor contracts, excluding all time devoted to quality improvement interventions. Claims-based metrics were the most resource intensive, consuming approximately 800 person-hours and more than $37 000 per metric per year. Future innovation should consider reducing metric volume and developing additional non–claims-based electronic metrics.
References
1. US Centers for Medicare & Medicaid Services. NHE fact sheet. Published December 15, 2021. Accessed March 30, 2022. https://www.cms.gov/research-statistics-data-and-systems/statistics-trends-and-reports/nationalhealthexpenddata/nhe-fact-sheet
2. Porter ME. What is value in health care? N Engl J Med. 2010;363(26):2477-2481.
3. McKethan A, Jha AK. Designing smarter pay-for-performance programs. JAMA. 2014;312(24):2617-2618.
4. Maryland Department of Health. Health Services Cost Review Commission (HSCRC) overview. Accessed April 22, 2022. https://hscrc.maryland.gov/Pages/default.aspx
5. Rosenbaum L. Reassessing quality assessment: the flawed system for fixing a flawed system. N Engl J Med. 2022;386(17):1663-1667.
6. Schuster MA, Onorato SE, Meltzer DO. Measuring the cost of quality measurement. JAMA. 2017;318(13):1219-1220.
7. The role of administrative waste in excess US health spending. Health Affairs Research Brief. 2022. Accessed March 28, 2023. https://www.healthaffairs.org/do/10.1377/hpb20220909.830296/full/
8. Casalino LP, Gans D, Weber R, et al. US physician practices spend more than $15.4 billion annually to report quality measures. Health Aff (Millwood). 2016;35(3):401-406. doi:10.1377/hlthaff.2015.1258
9. Blanchfield BB, Acharya B, Mort E. The hidden cost of regulation. Jt Comm J Qual Patient Saf. 2018;44(4):212-218.
10. Sharfstein JM, Stuart EA, Antos J. Maryland's all-payer health reform: a promising work in progress. JAMA Intern Med. 2018;178(2):269-270. doi:10.1001/jamainternmed.2017.7709
11. Richman BD, Kaplan RS, Kohli J, et al. Billing and insurance-related administrative costs. Health Aff (Millwood). 2022;41(8):1098-1106.
12. Tseng P, Kaplan RS, Richman BD, Shah MA, Schulman KA. Administrative costs associated with physician billing and insurance-related activities at an academic health care system. JAMA. 2018;319(7):691-697. doi:10.1001/jama.2017.19148
13. National Quality Forum. Glossary of terms. 2022. https://www.qualityforum.org/Measuring_Performance/Submitting_Standards/NQF_Glossary.aspx
14. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44(3):166-206.
15. Institute of Medicine, Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. National Academies Press; 2001.
16. Kaplan RS, Anderson SR. Time-driven activity-based costing. Harv Bus Rev. 2004;82(11):131-138.
17. Kaplan RS. Improving value with TDABC. Healthc Financ Manage. 2014;68(6):76-83.
18. Division of Consumer Prices and Price Indexes, US Bureau of Labor Statistics. Consumer Price Index (CPI) tables. Published March 10, 2022. Accessed May 26, 2022. https://www.bls.gov/cpi/tables/
19. Johns Hopkins Hospital. Form 990: return of organization exempt from income tax. OMB No. 1545-0047. https://apps.irs.gov/pub/epostcard/cor/520591656_202006_990_2021051818121576.pdf
20. American Hospital Association. Fast facts on US hospitals, 2022. Accessed April 22, 2022. https://www.aha.org/statistics/fast-facts-us-hospitals
21. Dunlap NE, Ballard DJ, Cherry RA, et al. Observations From the Field: Reporting Quality Metrics in Health Care. National Academies Press; 2016. Accessed December 10, 2022. https://nam.edu/observations-from-the-field-reporting-quality-metrics-in-health-care/
22. Wall MJ, Howell MD. Variation and cost-effectiveness of quality measurement programs: the case of sepsis bundles. Ann Am Thorac Soc. 2015;12(11):1597-1599.
23. Rubin I, Israel J. The case for aligning quality measurement. Health Affairs Forefront. December 5, 2022. Accessed May 3, 2023. https://www.healthaffairs.org/content/forefront/case-aligning-quality-measurement
24. Halladay JR, Stearns SC, Wroth T, et al. Cost to primary care practices of responding to payer requests for quality and performance data. Ann Fam Med. 2009;7(6):495-503. doi:10.1370/afm.1050
25. Meltzer DO, Chung JW. The population value of quality indicator reporting: a framework for prioritizing health care performance measures. Health Aff (Millwood). 2014;33(1):132-139.
26. Bond AM, Schpero WL, Casalino LP, Zhang M, Khullar D. Association between individual primary care physician merit-based incentive payment system score and measures of process and patient outcomes. JAMA. 2022;328(21):2136-2146.
Supplementary Materials
eMethods
eTable 1
Data sharing statement