Journal of the American Medical Informatics Association: JAMIA. 2024 Dec 26;32(2):318–327. doi: 10.1093/jamia/ocae289

New indices to track interoperability among US hospitals

Catherine E Strawley 1, Julia Adler-Milstein 2, A Jay Holmgren 3, Jordan Everson 4
PMCID: PMC11756636  PMID: 39724921

Abstract

Objectives

To develop indices of US hospital interoperability to capture the current state and assess progress over time.

Materials and Methods

A Technical Expert Panel (TEP) informed selection of items from the American Hospital Association Health IT Supplement survey, which were aggregated into interoperability concepts (components) and then further combined into indices. Indices were refined through psychometric analysis and additional TEP input. Final indices included a “Core Index” measuring adoption of foundational interoperability capabilities, a “Pathfinder Index” representing adoption of advanced interoperability technologies and auxiliary exchange activities, and a “Friction Index” quantifying barriers. The first 2 indices were scored from 0 (no interoperability) to 100 (full interoperability); the Friction Index was scored 0 (no friction) to 100 (maximum friction). We calculated indices annually from 2021 to 2023, stratifying by hospital characteristics.

Results

Items within components created reliable and meaningful measures, and associations between components within indices followed the TEP’s expectations. Weighted mean scores for the Core (2023), Pathfinder (2022), and Friction (2023) Indices were 61, 57, and 30, respectively. Hospitals with 500+ beds (large), not designated as critical access, in metropolitan areas, and using market leading electronic health records had statistically significant higher mean scores on all indices. Index values also improved modestly over time.

Discussion

Hospitals performed best on the Core Index. Given recent policy and programmatic initiatives, we anticipate continued improvement across all indices.

Conclusion

Ongoing index tracking can inform policy impact evaluations and highlight persistent interoperability disparities across hospitals.

Keywords: hospital interoperability, data exchange, public health, application programming interface, health information exchange

Background and significance

For over a decade, US health information technology (health IT) policy has been focused on various ways to support interoperability, defined in the 21st Century Cures Act of 2016 as technology that “enables the secure exchange of electronic health information” without special effort, “allows for complete access, exchange, and use of all electronically accessible health information for authorized use,” and “does not constitute information blocking.”1 The Cures Act pushed interoperability forward by prohibiting practices likely to interfere with the access, exchange, and use of electronic health information, creating requirements for developers and providers to support easy patient access to their health information, and authorizing the creation of a Trusted Exchange Framework and Common Agreement (TEFCA) to enable nationwide exchange of health information. In tandem, the Centers for Medicare & Medicaid Services (CMS) adjusted the Medicare and Medicaid EHR Incentive Programs under the 2015 Medicare Access and CHIP Reauthorization Act (MACRA) to further prioritize interoperability, and, in 2018, renamed its initiatives the Medicare and Medicaid Promoting Interoperability (PI) Programs.2,3 The Office of the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP)’s 2020 Cures Act Final Rule subsequently implemented many of the provisions of the Cures Act by establishing new interoperability requirements and updated electronic health record (EHR) certification criteria for health IT developers to support data exchange.4

Despite this focus on interoperability, there are few widely used comprehensive measures available to assess interoperability progress by healthcare delivery organizations. Measuring the state of hospital interoperability is necessary to evaluate the effectiveness of existing policies and identify needs for new or revised policy. Hospitals’ progress in enabling interoperability has been examined by the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (hereafter, ASTP) and in the literature based on engagement with 4 major exchange activities: finding, sending, receiving, and integrating health information.5–7 However, this assessment does not capture the full breadth of high-value interoperability capabilities, does not address all components of the Cures Act definition of interoperability, and may not reflect areas targeted by recent policy efforts. For example, exchange of data representing social determinants of health (SDOHs) and support for electronic public health reporting were only recently accelerated during the COVID-19 pandemic.8–10 Furthermore, existing measures do not reflect the continued concern that, while hospitals may have the capabilities to enable exchange, beneficial exchange without “special effort” remains uncommon.11,12

Comprehensive indices, like those widely used to track basic and comprehensive EHR adoption over the past decade and those used in other sectors (eg, the Consumer Confidence Index, the Dow Jones Industrial Average), serve an important role in high-level assessments of the current state and trajectory of progress.13–15 In the context of hospital interoperability, which is complex and includes many components, headline index numbers provide a straightforward way to describe the state of interoperability, and tracking indices over time allows for a more granular understanding of progress (or lack thereof). Tracking indices for hospitals of varying sizes, resources, and affiliations further supports examination of whether progress is occurring equitably across organizations.

Objective

We therefore sought to develop a set of indices, analogous to the consumer price index, stock market indices, and other measures of value over time, that can accurately and simply communicate a holistic sense of hospital interoperability. We focused on hospitals because they provide critical services, they are the largest facilities in health care, they often serve as the anchor for large health systems, and data on hospital interoperability are readily available from existing national surveys. Our goal was to develop a single index or small number of indices that would meet the following criteria: encompass a breadth of dimensions of interoperability in a logical and hierarchical structure; capture incremental progress in nationwide hospital interoperability through a continuous scale; be easily updated to reflect new technologies, interoperability needs, and policy priorities; and be used to assess the effects of policies, including disparities among hospitals. This work also serves as a model for the development of similar interoperability indices among other types of healthcare delivery organizations (eg, physician offices, behavioral health providers) as well as other relevant entities in the health IT ecosystem (eg, digital health companies).

Materials and methods

Selection of data source, index framework development, and technical expert feedback

Given our objective to develop and implement indices using existing data, the study team assessed sources that capture a national sample of hospitals and broad dimensions of interoperability and that are collected at least annually. We selected the American Hospital Association (AHA) Health IT Supplement survey, which is distributed annually to the CEOs of all US hospitals, regardless of AHA membership, and contains questions developed with input from ASTP and intended to measure the adoption and use of health IT in US hospitals. The respondent is asked to complete the survey, online or via mail, or to delegate completion to the most knowledgeable person in the organization. The survey regularly receives a high response rate, including 54% in 2022 and 50% in 2023. Study team members reviewed the 2014-2023 AHA Health IT Supplement surveys and identified candidate survey items relevant to interoperability. The study team (primarily J.A.M. and A.H.) then developed an initial structure to organize these items into broader groupings.

Next, we convened a Technical Expert Panel (TEP) consisting of 6 experts on interoperability and measurement. Experts were identified by the study team, by consultation with knowledgeable parties, and through a snowball-style approach. The final list of experts represented a cross-section of knowledgeable parties with the expertise to speak to the measurement process. Individuals were recruited from health information organizations, health systems, trade groups, and technology companies to achieve multiple vantage points on the index content and minimize bias from any one party; participants were offered an honorarium (see Acknowledgments for list of TEP members). Over a series of 5 videoconference meetings between May and August 2023, the TEP reviewed existing survey instruments to develop and confirm a set of related indices. Two study team members (J.A.M. and A.H.) facilitated each TEP meeting by presenting the survey content, facilitating discussion, and seeking consensus.

In the first meeting, the TEP discussed and agreed upon the conceptual design of 3 mutually exclusive indices: (1) foundational interoperability technologies and practices (the Core Index); (2) the adoption of novel interoperability technologies, including practices relevant to engaging patients, the use of application programming interfaces (APIs), and information exchange with public health (the Pathfinder Index); and (3) challenges experienced with interoperability (the Friction Index) (Figure 1). In the 3 subsequent meetings, TEP members reviewed survey items from each index and decided: (1) whether the item should be included (ie, whether it is relevant to interoperability); (2) if so, in which index it fit best; and (3) how response options for the item should be scored. For example, one set of survey items, originally slated for the Pathfinder Index, related to hospitals’ use of automated reporting, manual reporting, or a mix of both for public health data. The TEP felt that these survey items reflected aspects of both the Pathfinder Index and Friction Index and that they provided only modest insight beyond items relating to the technology used to submit public health data; these items were ultimately omitted. After each TEP meeting, the study team revised the index design based on the feedback received. Final approval of the index design was received during the fifth TEP meeting. Additional detail regarding TEP recruitment and the content of TEP sessions is available in Appendix S1.

Figure 1.

This figure is a hierarchical diagram displaying each of the three indices, the components they were aggregated from, and the survey items each component comprises. From left to right, the first column is titled “Hospital Interoperability Indices”, the second column is titled “Index Components”, and the third column is titled “Individual Hospital-level Survey Questions.” The hierarchy shows that the “Core Index” contains the components “Clinical Interoperable Exchange”, “Clinical Information Availability and Use”, and “Breadth of Exchange Partners.” The “Clinical Interoperable Exchange” component comprises the following survey items: “Finding data electronically”, “Sending data electronically”, “Receiving data electronically”, and “Integrating data without manual intervention.” The “Clinical Information Availability and Use” component contains the following survey items: “Information Available to Clinicians” and “Information Used by Clinicians.” The “Breadth of Exchange Partners” component contains the following survey items: “Hospitals: Send information”, “Hospitals: Receive information”, “Ambulatory Providers: Send information”, “Ambulatory Providers: Receive information”, “Long-term Care: Send information”, “Long-term Care: Receive information”, “Behavioral Health: Send information”, and “Behavioral Health: Receive information.” The “Pathfinder Index” contains the components “Clinician / Health System APIs”, “Patient Engagement”, “Social Determinants of Health (SDOH)”, and “Public Health Data Submitted by EHR / HIE.” The “Clinician / Health System APIs” component contains the following survey items: “Integrate Third-Party Data to EHR”, “Provide EHR Data to Third-Party Apps”, and “Provide non-EHR Data to Third-Party Apps.” The “Patient Engagement” component contains the following survey items: “Enable Downloading Patient Information”, “Enable Importing Patient Information”, “Enable Sending Patient Information”, “Enable Patients to Access Apps via API”, “Enable 
Patients to Access Apps via FHIR”, and “Enable Submissions of Patient Generated Health Data via FHIR.” The “Social Determinants of Health (SDOH)” component contains the following survey items: “Receive SDOH Data from Healthcare Orgs”, “Receive SDOH Data from Community / Social Service Orgs”, “Use Individual-level SDOH Data”, and “Use Population-level SDOH Data.” The “Public Health Data Submitted by EHR / HIE” component contains the following survey items: “Syndromic Surveillance”, “Immunization Registry”, “Electronic Case Reporting”, “Public Health Registry”, “Clinical Data Registry”, “Electronic Reportable Laboratory Result”, and “Hospital Capacity Reporting.” The “Friction Index” contains the components “Barriers to Exchange”, “Numerous Methods of Exchange”, and “Experience of Information Blocking.” The “Barriers to Exchange” component contains the following survey items: “Barriers to Receiving Information”, “Barriers to Sending Information”, and “Other Barriers to Exchange.” The “Numerous Methods of Exchange” component contains the following survey items: “Number of Methods to Receive Information”, “Number of Methods to Find Information”, and “Number of Methods to Send Information.” The “Experience of Information Blocking” component contains the following survey items: “By Developers”, “By Health Information Exchanges / Networks”, “By Healthcare Providers”, “Number of Healthcare Provider Information Blocking Methods Experienced”, and “Number of Developer Information Blocking Methods Experienced.”

Hospital interoperability indices conceptual model. API, application programming interfaces; EHR, electronic health record; FHIR, Fast Healthcare Interoperability Resources; HIE, health information exchange; SDOH, social determinants of health.

The TEP also identified important concepts missing from the indices (for which the AHA IT Supplement did not have an associated item). Of note, the TEP identified information security, data accuracy, data standardization, data quality, workforce challenges, and the state of exchange with social service organizations as important concepts that were not reflected in the indices due to limitations in availability of questions in the survey instruments. These topics will inform future measurement efforts.

While the TEP discussed the possibility of applying weights to different items within an index (ie, to give certain items more importance), TEP participants ultimately decided that there was no systematic basis on which to justify weights. Therefore, we applied equal weights to each item, and created components such that hospitals with minimum performance on each item received a score of zero, and hospitals with the maximum performance on all items received a score of 100. Each component was then also assigned equal weight and aggregated to produce the final score for each index, which also ranged from 0 to 100. A score of 100 on the Core or Pathfinder Index represents perfect interoperability according to the respective measures, while the same score indicates the worst possible experience with friction (the greatest number of major challenges to interoperability) on the Friction Index.
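
This equal-weight aggregation can be sketched as follows. The function names are illustrative (not from the study's code), and the sketch assumes each survey item has already been rescaled so that minimum performance is 0 and maximum performance is 1, as described above.

```python
def component_score(item_scores):
    """Equal-weight items: with each item rescaled to [0, 1], the
    component score is the mean of its items, rescaled to 0-100."""
    return 100 * sum(item_scores) / len(item_scores)

def index_score(component_scores):
    """Equal-weight components: the index is the mean of its
    component scores, which already range from 0 to 100."""
    return sum(component_scores) / len(component_scores)

# A hospital at the minimum on every item scores 0; at the maximum, 100.
worst = index_score([component_score([0.0, 0.0, 0.0]), component_score([0.0])])
best = index_score([component_score([1.0, 1.0, 1.0]), component_score([1.0])])
```

Because items are equally weighted within components and components equally weighted within the index, a hospital halfway on every item lands at 50 regardless of how many items each component holds.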

We then used data from the 2022 IT Supplement survey to create the Pathfinder Index and 2023 survey data to construct the Core and Friction Indices. Different years of data were used due to differences in the availability of survey items used to inform each index. Two members of the study team (J.E. and C.S.) then evaluated the psychometric reliability and validity of the initial indices, using the respective years of data from which each index was created. To establish internal consistency reliability, we calculated item-rest correlations and Cronbach’s alpha, using the “psych” package in R (4.2.2), to determine whether items within each component were related to one another such that they appear to empirically measure a shared concept.16,17 To assess construct validity, we calculated Spearman correlation coefficients between components within and across indices. These correlations were used to evaluate whether the components of each index are (1) closely related such that quantitative results indicate that they represent a shared higher-level concept, supporting convergent validity, or (2) not closely related quantitatively, indicating that they represent distinct aspects of the overarching index, supporting discriminant validity.18
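
The study used the R “psych” package for these reliability statistics; as a rough illustration of the quantities involved, Cronbach’s alpha and item-rest correlations can be computed directly. This is a sketch, not the authors’ code, and the example data are hypothetical.

```python
from statistics import pvariance, mean

def cronbach_alpha(items):
    """items: list of equal-length score lists, one list per survey item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var = sum(pvariance(it) for it in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

def pearson(x, y):
    """Plain Pearson correlation between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def item_rest_correlation(items, i):
    """Correlate item i with the summed score of the remaining items."""
    rest = [sum(vals) for vals in zip(*(it for j, it in enumerate(items) if j != i))]
    return pearson(items[i], rest)
```

With perfectly redundant items, both statistics equal 1.0; lower values, like the 0.27-0.39 item-rest correlations reported below, indicate items that track the rest of their component only loosely.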

Each index was finalized through an iterative process. The study team identified items with unexpected psychometrics (eg, very low correlation of components within the same index) and presented these issues to the TEP. The TEP and study team then discussed the acceptability of these results in light of conceptual expectations for the relationship between the items and components and identified strategies to address each item. We then generated a final set of psychometrics that are reported in the results below.

Final index construction

The final set of indices included a Core Index measuring levels of adoption of foundational interoperability capabilities, a Pathfinder Index representing the extent to which hospitals have adopted more advanced technologies for interoperability and engage with auxiliary exchange, and a Friction Index that quantifies the extent to which hospitals face barriers to interoperability (Figure 1). Each index comprises components that represent more specific interoperability dimensions, and each component comprises several survey items. For example, the Core Index accounts for the frequency with which hospitals use a variety of methods to send summary of care records, an item contributing to the “Clinical Interoperable Exchange” component. As part of its “Social Determinants of Health (SDOH)” component, the Pathfinder Index includes a survey item representing the types of organizations from which hospitals receive data on patients’ social needs. Finally, an example of items included in the Friction Index is a survey question about which issues hospitals experience when sending, receiving, or querying information to/from other hospitals (eg, difficulty matching patients, data formatting concerns), which is aggregated into the “Barriers to Exchange” component. Appendix S2 provides a more detailed breakdown of the specific AHA IT Supplement questions informing individual items and index components, including descriptive statistics of survey responses to each of the items, as well as further details about our approach to constructing the index and updating it over time.
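
As an illustration of the index-component-item hierarchy, the Core Index portion of Figure 1 can be encoded as a simple mapping; the labels are taken from the figure, and the encoding itself is only a sketch.

```python
# Core Index portion of the hierarchy in Figure 1: each component
# maps to the survey items it aggregates.
CORE_INDEX = {
    "Clinical Interoperable Exchange": [
        "Finding data electronically",
        "Sending data electronically",
        "Receiving data electronically",
        "Integrating data without manual intervention",
    ],
    "Clinical Information Availability and Use": [
        "Information Available to Clinicians",
        "Information Used by Clinicians",
    ],
    "Breadth of Exchange Partners": [
        f"{partner}: {direction} information"
        for partner in ("Hospitals", "Ambulatory Providers",
                        "Long-term Care", "Behavioral Health")
        for direction in ("Send", "Receive")
    ],
}
```

The item counts per component (4, 2, and 8) match those reported for the Core Index in the Results section.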

Analysis

We focused our analysis on non-federal acute care hospitals because those hospitals were eligible for Federal EHR incentives, because a broader population might suppress measures of reliability and validity by introducing variation from different types of hospitals, and because these hospitals are often the focus of analysis of hospital interoperability. We then created non-response weights in each year of AHA IT Supplement data (2021-2023) using a logistic regression to predict the likelihood that a hospital in the full AHA Annual Survey responded to the survey based on the hospital’s size, ownership, teaching status, system membership, availability of a cardiac intensive care unit, urban status, and region. Hospital weights were the inverse of these response probabilities. These weights were then integrated into all analyses described below to generate nationally representative results.
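
The inverse-probability weighting step can be sketched as follows. The probabilities here are illustrative placeholders; in the study they come from the logistic regression on hospital characteristics described above.

```python
def nonresponse_weights(response_probs):
    """Each responding hospital is weighted by the inverse of its
    predicted probability of responding to the survey."""
    return [1.0 / p for p in response_probs]

def weighted_mean(scores, weights):
    """Survey-weighted mean, used to generate nationally
    representative index results."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hospitals that were less likely to respond count more in the
# analysis, offsetting non-response bias in the respondent pool.
w = nonresponse_weights([0.8, 0.5, 0.25])
```

A hospital with a predicted 25% response probability thus stands in for 4 similar hospitals, while a hospital with an 80% probability stands in for 1.25.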

Using 2022 IT Supplement data for the Pathfinder Index and 2023 data for the Core and Friction Indices, we constructed histograms and calculated means, medians, 25th percentile values, and 75th percentile values for the 3 indices and their respective components to depict the spread of hospitals’ scores. To determine whether indices varied across hospitals of different types, we constructed histograms and calculated the mean value for each index across stratifications of hospital size (small, medium, large), critical access hospital (CAH) status (yes vs no), core-based statistical area (CBSA) type (metropolitan, micropolitan, rural), and primary EHR used (market leading: Epic, Cerner, or Meditech vs non-market leading).5 We selected these variables for stratification because they offer a representation of hospitals’ resource availability.

Lastly, to capture longitudinal progress in performance, we calculated mean scores for each index and its respective components from 2021 through 2023, as data availability allowed. However, differences in survey questions over time made it infeasible to assess trends in the Pathfinder and Friction Indices (see survey items available by year in Table S1). When items were not included in an earlier year, we imputed the value, setting it equal to the first year that the data were observed. We also developed an approach to smooth indices that included new items over time, as detailed in Part 5 of Appendix S2.
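
The imputation rule for items missing in earlier years can be sketched as below; this is a simplified illustration of the carry-back step only, and the full smoothing approach is detailed in Part 5 of Appendix S2.

```python
def backfill_item(values_by_year):
    """values_by_year: dict of year -> item value, with None for years
    in which the item was not fielded. Years before the first
    observation take the first observed value, per the imputation
    rule described in the text; later gaps are left as-is."""
    years = sorted(values_by_year)
    first_observed = next(
        values_by_year[y] for y in years if values_by_year[y] is not None
    )
    filled, seen = {}, False
    for y in years:
        v = values_by_year[y]
        if v is not None:
            seen = True
        filled[y] = v if seen else first_observed
    return filled
```

For an item first fielded in 2022, for example, the 2021 value is set equal to the 2022 value, so year-over-year trends are not driven by the item's introduction.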

Results

Three indices

Core Index

The Core Index measures the level of adoption of foundational interoperability capabilities. It comprises 3 components representing (1) hospitals’ interoperable exchange of patient information with other healthcare organizations (measured through 4 survey items), (2) the availability and use of exchanged information to inform patient care (2 items), and (3) breadth of exchange partners, including long-term care and behavioral health providers (8 items).

Pathfinder Index

The Pathfinder Index quantifies hospitals’ implementation of more advanced technologies and adoption of auxiliary interoperability activities, aligning with newer policy relating to the standardization of advanced exchange technologies (ie, APIs) and a greater emphasis on public health following the onset of the COVID-19 pandemic. The index includes 4 components: (1) hospital support for APIs used by applications serving clinicians and the health system (measured through 3 survey items); (2) patient engagement, including support for APIs to enable use of data by patients and enabling submission of patient-generated health data (6 items); (3) the exchange and use of SDOH information (4 items); and (4) submission of public health data for 7 activities (eg, syndromic surveillance) using the EHR or a health information exchange (HIE) (7 items).

Friction Index

The Friction Index serves as a numeric representation of the extent and severity of challenges faced by hospitals engaging in health data exchange. The index includes 3 components: (1) barriers a hospital experiences to exchange health information (measured through 3 survey items), (2) a hospital’s need to use numerous methods (eg, HIE, national network, and point-to-point interfaces) to exchange health information (3 items), and (3) a hospital’s experience of information blocking from various actors (3 items).

Psychometric properties of index construction framework

In the final indices, Cronbach’s alpha within components varied from 0.39 for the experience of information blocking component of the Friction Index to 0.92 for the breadth of exchange partners component of the Core Index (Table S1). Item-rest correlations generally fell within the range of 0.40-0.80, indicating moderate to high correlations between individual items and other items in a component. There were 3 exceptions: the item-rest correlation for the hospital capacity reporting item in the public health component of the Pathfinder Index was 0.39. The item-rest correlations for hospitals’ experiences with information blocking by HIEs and by healthcare providers in the information blocking component of the Friction Index were 0.37 and 0.27, respectively.

Spearman correlations (ρ) for components within indices varied (Figure 2). Core Index components were positively correlated (ρ ranged from 0.44 to 0.59). Components of the Pathfinder Index were less correlated with one another, with ρ ranging from 0.23 to 0.35. These correlations indicate that both the Core and Pathfinder Index were moderately to well correlated but not duplicative. The Friction Index components were not closely correlated with one another, with ρ ranging from −0.08 to 0.14, indicating little relationship between components of the Friction Index.

Figure 2.

The figure displays a heatmap of Spearman correlation coefficients for all components included across all three indices. Component names are displayed on both the horizontal and vertical axes in the following order: “Clinical Interop Functions”, “Data Use”, “Breadth of Exchange”, “API”, “Patient Engagement”, “SDOH”, “Public Health”, “Barriers”, “Methods”, and “Info Blocking”. The component labels are color coded as follows (according to their respective index): in green for the Core Index – Clinical Interop Functions, Data Use, and Breadth of Exchange; in blue for the Pathfinder Index – API, Patient Engagement, SDOH, and Public Health; and in purple for the Friction Index – Barriers, Methods, and Info Blocking. The legend scale shows coefficient ranges from -1.0 to 1.0, with -1.0 being colored as dark red, 0 being white, and 1.0 colored as dark blue. Correlation coefficients between these numbers are displayed on a gradient. In addition to a color scale showing the correlation between corresponding components on the horizontal and vertical axes, the number for the Spearman correlation coefficient is displayed. The values displayed are as follows: Clinical Interop Functions vs. Data Use (0.59), Clinical Interop Functions vs. Breadth of Exchange (0.5), Data Use vs. Breadth of Exchange (0.44), Clinical Interop Functions vs. API (0.34), Data Use vs. API (0.33), Breadth of Exchange vs. API (0.21), Clinical Interop Functions vs. Patient Engagement (0.44), Data Use vs. Patient Engagement (0.38), Breadth of Exchange vs. Patient Engagement (0.29), API vs. Patient Engagement (0.24), Clinical Interop Functions vs. SDOH (0.37), Data Use vs. SDOH (0.36), Breadth of Exchange vs. SDOH (0.24), API vs. SDOH (0.23), Patient Engagement vs. SDOH (0.32), Clinical Interop Functions vs. Public Health (0.28), Data Use vs. Public Health (0.33), Breadth of Exchange vs. Public Health (0.22), API vs. Public Health (0.35), Patient Engagement vs. Public Health (0.3), SDOH vs. 
Public Health (0.32), Clinical Interop Functions vs. Barriers (-0.02), Data Use vs. Barriers (-0.03), Breadth of Exchange vs. Barriers (-0.1), API vs. Barriers (-0.06), Patient Engagement vs. Barriers (-0.12), SDOH vs. Barriers (0.02), Public Health vs. Barriers (-0.15), Clinical Interop Functions vs. Methods (0.73), Data Use vs. Methods (0.59), Breadth of Exchange vs. Methods (0.5), API vs. Methods (0.44), Patient Engagement vs. Methods (0.44), SDOH vs. Methods (0.36), Public Health vs. Methods (0.38), Barriers vs. Methods (-0.08), Clinical Interop Functions vs. Info Blocking (-0.05), Data Use vs. Info Blocking (-0.05), Breadth of Exchange vs. Info Blocking (-0.07), API vs. Info Blocking (0.19), Patient Engagement vs. Info Blocking (-0.2), SDOH vs. Info Blocking (-0.13), Public Health vs. Info Blocking (0.07), Barriers vs. Info Blocking (0.14), Methods vs. Info Blocking (0.0).

Spearman correlations between hospital interoperability indices components. Notes: Consistent with the color coding in Figure 1, components in green text correspond to the Core Index (2023), components in blue text correspond to the Pathfinder Index (2022), and components in purple text correspond to the Friction Index (2023). API, application programming interfaces; SDOH, social determinants of health.

Several components were correlated with components in different indices. The correlations across components of the Pathfinder Index and components of the Core Index ranged from 0.21 to 0.44. The Core Index components correlated with the methods of exchange component in the Friction Index (ρ ranged from 0.50 to 0.73).

Index scores

The mean scores for the Core, Pathfinder, and Friction Indices were 61 in 2023, 57 in 2022, and 30 in 2023, respectively (Figure 3). However, because the Core Index exhibited considerable left skewness, its median (71) was substantially higher than its mean.

Figure 3.

This figure is a table containing information regarding the distribution of scores on each of the three indices and their respective components. The first column of the table contains index and component labels (the column label is “Index and Components”), the second column displays histograms to show the distribution of scores for each index and component (the column label is “Distribution”), column three displays weighted means (column labeled “Mean”), column four displays weighted 25th percentile scores (column labeled “25th pctl”), column five displays weighted median scores (column labeled “Median”), and the final column displays weighted 75th percentile scores (column labeled “75th pctl”). The first row displays values for the Core Index, showing a left-skewed distribution with a mean of 61, 25th percentile score of 46, median of 71, and 75th percentile score of 81. The subsequent three rows display index performance values for the components of the Core Index: “Clinical Interoperable Exchange”, “Clinical Information Availability and Use”, and “Breadth of Exchange Partners.” The histogram for the “Clinical Interoperable Exchange” component shows a dramatically left-skewed distribution, with a mean of 77, 25th percentile score of 63, median of 88, and 75th percentile score of 100. The histogram for the “Clinical Information Availability and Use” component shows some left skewing, and this component has a mean of 65, 25th percentile score of 25, median of 75, and 75th percentile score of 100. The histogram for the “Breadth of Exchange Partners” component appears bimodal, with a mean of 42, 25th percentile score of 19, median score of 50, and 75th percentile score of 100. The subsequent row in the table displays the distribution of scores for the Pathfinder Index. The histogram is relatively normal, with a weighted mean of 57, 25th percentile score of 43, median of 57, and 75th percentile score of 77. 
The “Clinical/Health System APIs” component of the Pathfinder Index has a left-skewed histogram with a weighted mean of 61, 25th percentile score of 33, median of 67, and 75th percentile score of 100. The “Patient Engagement” distribution is left skewed, with a weighted mean of 64, 25th percentile score of 50, median of 67, and 75th percentile score of 100. The histogram for the “Social Determinants of Health” component shows a large portion of scores at the low end of the distribution (with scores equal to 0), with a weighted mean of 42, 25th percentile score of 0, median of 50, and 75th percentile score of 75. The “Public Health Data Submitted by EHR/HIE” component has a weighted mean of 57, 25th percentile score of 43, median of 57, and 75th percentile score of 100. The next row displays the distribution of scores for the Friction Index, for which the histogram shows evidence of right-skewing. The Friction Index has a weighted mean of 30, 25th percentile score of 21, median of 31, and 75th percentile score of 37. Within the Friction Index, the “Barriers to Exchange” component histogram is relatively normal, with a weighted mean of 42, 25th percentile score of 26, median of 42, and 75th percentile score of 81. The “Methods of Exchange” component histogram displays mild right-skewing, and the scores have a weighted mean of 37, 25th percentile score of 15, median of 40, and 75th percentile score of 55. Finally, the “Experience of Information Blocking” component histogram displays right skewing, with a weighted mean of 11, 25th percentile and median scores of 0, and a 75th percentile score of 33.

Hospital performance on the hospital interoperability indices and components. Notes: Calculated means and percentiles reflect survey weights. API, application programming interfaces; EHR, electronic health record; HIE, health information exchange.

Values on the individual components of each index varied widely. Among the Core Index components, hospitals performed best on the clinical interoperable exchange component (mean = 77; 95% CI: 76-78), followed by the clinical information availability and use component (mean = 65; 95% CI: 64-67), and scored much lower on the breadth of exchange partners component (mean = 42; 95% CI: 41-43).

On the Pathfinder Index, hospitals performed similarly on the clinical/health system APIs (mean = 66; 95% CI: 65-68), patient engagement (mean = 64; 95% CI: 63-65), and public health data submission (mean = 57; 95% CI: 56-58) components; but the mean score was lower for the SDOH component (mean = 42; 95% CI: 41-44).

Scores on the Friction Index components were similar for the barriers to exchange and methods of exchange components (mean = 42 and 95% CI: 41-43, and mean = 37 and 95% CI: 36-38, respectively), but the mean score was much lower for the experience of information blocking component (mean = 11; 95% CI: 10-12).
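This excerpt does not state the exact scoring formula, but the index design described in the Methods (survey items aggregated into components, components combined into indices scored 0 to 100) can be sketched as follows. The item lists below are hypothetical stand-ins, not the actual survey items, and the equal-weight, share-of-items scoring is an assumption for illustration; the actual item weighting may differ.

```python
def component_score(item_responses):
    """Score a component 0-100 as the share of its binary survey items a
    hospital reports affirmatively (illustrative scoring only)."""
    return 100 * sum(item_responses) / len(item_responses)

def index_score(component_scores):
    """Equal-weight mean of component scores: 0 (no interoperability)
    to 100 (full interoperability)."""
    return sum(component_scores) / len(component_scores)

# Hypothetical hospital: three Core Index components, each built from
# made-up binary item responses (1 = capability reported, 0 = not).
exchange_items = [1, 1, 1, 0]      # e.g., send, receive, find, integrate
availability_items = [1, 1, 0, 0]
breadth_items = [1, 0, 0, 0]

core = index_score([
    component_score(exchange_items),      # 75.0
    component_score(availability_items),  # 50.0
    component_score(breadth_items),       # 25.0
])
print(round(core))  # -> 50
```

Under this sketch, a hospital reporting every item in every component would score 100, and one reporting none would score 0, matching the stated range of the Core and Pathfinder Indices.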

Index scores by hospital type and year

Index scores varied significantly across hospitals with differing characteristics (Figure 4). Hospitals that were larger, were not CAHs, were located in metropolitan areas, and used a market-leading EHR had statistically significantly higher mean scores on the Core and Pathfinder Indices than their counterparts. These differences were driven by a substantially larger proportion of hospitals with fewer resources scoring near the bottom of each index's distribution, rather than by a shift in the modal response. The same groups of hospitals also had higher mean scores on the Friction Index, indicating that they experienced greater friction.

Figure 4.

The figure is a table displaying the distribution of scores on each of the three indices, stratified by four hospital characteristics: hospital size (large, medium, small), critical access hospital (CAH) status, Core Based Statistical Area (CBSA) type (metro, micro, rural), and EHR used (market-leading vs non-market-leading). Each cell contains a histogram of scores with a vertical line marking the weighted mean. The weighted means are:

| Characteristic | Category | Core Index | Pathfinder Index | Friction Index |
|---|---|---|---|---|
| Hospital size | Large (ref. group) | 73 | 67 | 33 |
| | Medium | 67 | 62 | 31 |
| | Small | 55 | 52 | 28 |
| CAH status | CAH (ref. group) | 53 | 49 | 27 |
| | Not a CAH | 65 | 61 | 31 |
| CBSA type | Metro (ref. group) | 67 | 62 | 31 |
| | Micro | 57 | 55 | 30 |
| | Rural | 49 | 47 | 26 |
| EHR | Market-leading (ref. group) | 69 | 63 | 32 |
| | Non-leading | 33 | 36 | 25 |

Hospital performance on the hospital interoperability indices by hospital characteristic. Notes: All weighted means (represented by red vertical lines) were significantly different from that of their respective reference groups. Market-leading EHRs include Cerner, Epic, and MEDITECH’s EHRs. CAH, critical access hospital; CBSA, core-based statistical area; EHR, electronic health record.

Hospitals’ performance on these measures also varied across time (Table 1).

Table 1.

Trends in weighted means of hospital interoperability indices, 2021-2023.

| | 2021 (n = 2364) | 2022 (n = 2541) | 2023 (n = 2539) | Expected 2024 | Expected 2025 |
|---|---|---|---|---|---|
| **Core (smoothed)^a** | 56 | 60 | 61 | | New data available |
| Clinical interop functions | 69 | 75 | 77 | | |
| Data availability and use (all items included in 2021) | 58 | 62 | 65 | | |
| Breadth of exchange partners | No items included | No items included | 42 | | |
| **Pathfinder** | | 57 | | New data available | |
| Clinician/health system APIs | No items included | 66 | No items included | Update | |
| Patient engagement (smoothed)^b | 58 | 64 | 67 | Update | |
| Social determinants of health | No items included | 42^c | 42 | Update | |
| Public health | 50^c | 57 | 61 | Update | |
| **Friction** | | | 30 | | New data available |
| Barriers to exchange | 45^c | 50^c | 42 | | |
| Methods of exchange | 33 | 36 | 37 | | |
| Information blocking | 18 | 12 | 11 | | |
^a The "Breadth of Exchange Partners" component was not included in 2021 or 2022. To calculate the Core Index in those years, we assumed that "Breadth of Exchange Partners" had the same value in 2021 and 2022 as in 2023.

^b Five of 6 items included on the 2022 and 2023 AHA IT Supplement were also included in 2021. To calculate the patient engagement component in 2021, we assumed that the sixth item, which captured whether the hospital supported patients' ability to "Submit patient-generated data (eg, blood glucose, weight) through apps configured to meet Fast Healthcare Interoperability Resources (FHIR) specifications," had the same value in 2021 as in 2022.

^c Substantially different response options or question phrasing from 2023; see Appendix S2 for information on the differences.

The mean score on the Core Index increased from 56 to 61 between 2021 and 2023. Where we could track components of the Pathfinder and Friction Indices over time, most exhibited improvement. For example, the information blocking component of the Friction Index declined from 18 in 2021 to 11 in 2023 (Table 1).
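Footnote a of Table 1 describes the smoothing used for the Core Index: the breadth-of-exchange-partners value observed in 2023 (42) is carried back to 2021 and 2022. Assuming an equal-weight mean of components (an assumption not stated in this excerpt, though it reproduces the published values of 56, 60, and 61), the smoothed series can be sketched as:

```python
# Published weighted mean component scores from Table 1; the breadth of
# exchange partners component was first measured in 2023.
core_components = {
    2021: [69, 58],      # clinical interop functions; data availability and use
    2022: [75, 62],
    2023: [77, 65, 42],  # breadth of exchange partners added in 2023
}
BREADTH_2023 = 42  # carried back to 2021-2022 per Table 1, footnote a

def smoothed_core(year):
    """Equal-weight mean of Core components, imputing the 2023 breadth
    value for years in which that component was not measured."""
    comps = list(core_components[year])
    if len(comps) < 3:
        comps.append(BREADTH_2023)
    return round(sum(comps) / len(comps))

print([smoothed_core(y) for y in (2021, 2022, 2023)])  # -> [56, 60, 61]
```

Because the imputed component is held constant, year-to-year movement in the smoothed Core Index reflects only the components measured in all three years.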

Discussion

The 3 indices developed in this study—Core, Pathfinder, and Friction—represent holistic and meaningful measures of hospital interoperability. These indices are intended to capture and simply convey progress in interoperability, with grounding in expert deliberation to identify and group interoperability concepts into a logical hierarchical structure and psychometric analyses to characterize their reliability and validity.

Advantages of index construction

This approach builds on existing measures of hospital interoperability, including commonly used measures focused on engagement in 4 domains of interoperability.5 In contrast to those measures, the indices we developed capture diverse concepts that represent the breadth of interoperability: the use of health data; the number and type of partners with which data are exchanged; the use of APIs (including standards-based APIs); patient engagement with data; public health reporting; and challenges to exchange. Furthermore, these indices are continuous measures, whereas existing metrics are binary. Continuous measures better reflect incremental progress, such as increasingly frequent use of interoperability, which previous measures may not have captured.5 Another important advantage of these indices is the use of psychometric analysis to characterize the relationships between included items. Components within the Core Index, which primarily capture interoperability between healthcare delivery organizations, were highly correlated, aligning with expectations. In contrast, correlations between the components of the Pathfinder Index were lower, as expected, because the Pathfinder Index represents newer and more diverse areas of policy and technology development. Finally, components of the Friction Index were not well correlated and likely represent distinct facets of friction. Given that these components are not well correlated in cross-sectional data, it will be important to monitor whether they improve in parallel or at varied rates in the future.

Hospital performance

Hospitals’ performance on the Pathfinder Index in 2022 was modestly lower than on the Core Index in 2023. Lower scores on the Pathfinder Index relative to the Core Index likely reflect the legacy of the HITECH Act, by far the largest public financial investment in health IT in the United States, which incentivized adoption specifically for acute care treatment purposes. However, public policy and recent events have galvanized progress on technologies captured by the Pathfinder Index. Monitoring the relative rate of improvement on these 2 indices could inform where further policy interventions are needed or whether progress continues organically under the existing policy regime.1

The Friction Index represents a useful complement to the Core and Pathfinder Indices by capturing the extent of challenges experienced when engaging in interoperability; on this measure, a larger score represents greater challenges. In the cross-sectional data, hospitals with higher scores on the Core and Pathfinder Indices also had higher (ie, worse) scores on the Friction Index. If this pattern continued over time, we might expect increases in all indices: as interoperability becomes more common, so does friction. In contrast, a positive outcome would be increases in the Core and Pathfinder Indices accompanied by decreases in the Friction Index. Furthermore, because the components of the Friction Index exhibit low convergent validity (ie, are not well correlated), it will be important to monitor changes in both the overall Friction Index and its specific components over time. This is particularly important because specific policy interventions are likely to affect the components differently. For instance, while mean scores on the information blocking component have declined (an improvement) since information blocking regulations took effect in 2021, increases in scores on the methods of exchange component indicate progressively greater use of multiple methods to exchange information, and barriers to exchange remain common. By facilitating connectivity across networks, the TEFCA may reduce the need for these multiple methods, lowering this component of friction in the future.19 Our team intends to track scores on the methods of exchange component to evaluate how they change given the establishment of this new infrastructure for nationwide HIE.

Disparities in index scores

These data indicate that hospitals with fewer resources—as captured by hospital size, critical access status, location, and use of a market-leading EHR—had statistically significantly lower scores on the Core and Pathfinder Indices (representing worse performance). These findings parallel recent work, reiterate important disparities between hospitals, and support the validity of these aggregate indices relative to prior work.5 These disparities reinforce the need for targeted policy to ensure that hospitals that have limited resources and that disproportionately care for groups that have been marginalized can apply interoperable health IT to quantify and target upstream and preventable causes of health crises.20–22

Limitations

Index construction may not capture all relevant aspects of, and challenges related to, interoperability. First, component themes were limited to those already represented among the survey items in the Health IT Supplement. Because of ASTP's involvement in developing the Health IT Supplement, additional survey items reflecting new interoperability themes can be added in the future, but new themes will be reflected in the indices only after a delay, given the time required to develop (eg, cognitively test) and field new questions.

Additionally, the themes reflected in the indices reflect the subject matter expertise of the research team and TEP members. The concepts included in the indices may be biased based on the experiences of those involved in index design, which may have affected the level of importance placed on each interoperability topic area or affected whether a topic was reflected in the indices at all. We note that the intention of including TEP members in the index design process was to minimize the impact of the study team’s bias on item selection by including a wider variety of perspectives, although we acknowledge that this approach may not have eliminated bias completely.

Future work

The indices are designed such that additional concepts can be added over time as new interoperability technologies proliferate. In the coming years, we intend to update the items informing these indices to reflect hospitals' adoption of novel technologies and implementation of new processes, as well as to incorporate important topics identified by the TEP. In consultation with the survey developers, the research team will work to develop, test, and field questions on these concepts in future iterations of the AHA Health IT Supplement survey and will reconvene a TEP to inform question development and inclusion in the indices. Each of the 3 index scores will be recalculated biennially (Core and Friction in odd years and Pathfinder in even years), both for individual hospitals and to obtain mean scores representing overall nationwide performance. Evaluation of hospitals' index scores can also highlight performance disparities as they persist or shift over time.

This work focused on hospitals given the longstanding survey efforts of these organizations. The process may serve as a model for the development of additional indices, using other data and measuring interoperability for other types of healthcare delivery organizations (eg, provider organizations). However, we anticipate that other indices may vary in structure.

Conclusion

Through TEP guidance and psychometric analysis, we developed a set of comprehensive national indices to represent the state of US hospital interoperability. The final indices capture progress on foundational interoperability capabilities (the Core Index), newer and more diverse interoperability capabilities (the Pathfinder Index), and difficulty encountered in engaging in interoperable exchange (the Friction Index). We found that, on average, hospitals performed better on the Core Index than on the Pathfinder Index. Better-resourced hospitals tended to score higher than their counterparts on all indices: better performance on the Core and Pathfinder Indices, but greater friction on the Friction Index. Between 2021 and 2023, hospitals' performance on the Core Index, as well as on the components of the Pathfinder and Friction Indices, generally improved. Continued tracking of these index scores over time and across hospital characteristics will offer opportunities to highlight progress in the widespread use of interoperable technologies, to track the impact of policies as they are implemented, and to target new policies.

Supplementary Material

ocae289_Supplementary_Data

Acknowledgments

We would like to acknowledge the following Technical Expert Panel (TEP) participants, who contributed subject matter expertise in the development of 3 indices representing US hospital interoperability: (1) Jeff Chin—Director, Data Collaboratives & Governance, Michigan Medicine; (2) Mari Savickis—Vice President, Public Policy, CHIME; (3) Craig Behm—President & CEO, CRISP; (4) Ries Robinson—CEO Rodin Scientific, LLC (formerly CEO Graphite Health); (5) Chantal Worzala—Principal, Alazro Consulting; and (6) Lorren Pettit—Vice President, Digital Health Analytics, CHIME.

Contributor Information

Catherine E Strawley, Office of the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology, Washington, DC 20201, United States.

Julia Adler-Milstein, Division of Clinical Informatics & Digital Transformation, Department of Medicine, University of California San Francisco, San Francisco, CA 94143, United States.

A Jay Holmgren, Division of Clinical Informatics & Digital Transformation, Department of Medicine, University of California San Francisco, San Francisco, CA 94143, United States.

Jordan Everson, Office of the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology, Washington, DC 20201, United States.

Author contributions

Catherine E. Strawley contributed to the conception of the manuscript, analysis, visualization, and interpretation of the data, and drafting and critical revision of the manuscript. Julia Adler-Milstein contributed to project administration, conception and design of the indices, interpretation of the data, and critical revision of the manuscript. A Jay Holmgren contributed to project administration, conception and design of the indices, interpretation of the data, and critical revision of the manuscript. Jordan Everson contributed to project supervision, conception and design of the indices and manuscript, analysis and interpretation of the data, and drafting and critical revision of the manuscript. All authors provided final approval of the manuscript and agree to be accountable for all aspects of the work.

Supplementary material

Supplementary material is available at Journal of the American Medical Informatics Association online.

Funding

This work was supported through a contract funded by the Office of the Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology (ASTP).

Conflicts of interest

The authors have no conflicts of interest.

Data availability

The AHA data used in this study are available for purchase from the AHA at https://www.ahadata.com/aha-data-resources.

References



Articles from Journal of the American Medical Informatics Association : JAMIA are provided here courtesy of Oxford University Press
