Abstract
Background
In the neonatal intensive care unit (NICU), predefined acuity-based team care models are restricted to core roles and neglect interactions with providers outside of the team, such as interactions that transpire via electronic health record (EHR) systems. These unaccounted interactions may be related to the efficiency of resource allocation, information flow, communication, and thus impact patient outcomes. This study applied network analysis methods to EHR audit logs to model the interactions of providers beyond their core roles to better understand the interaction network patterns of acuity-based teams and relationships of the network structures with postsurgical length of stay (PSLOS).
Methods
The study used the EHR log data of surgical neonates from a large academic medical center. The study included 104 surgical neonates, for whom 9,206 unique actions were performed by 457 providers in their EHRs. We applied network analysis methods to model EHR provider interaction networks of acuity-based teams in NICU postoperative care. We partitioned each EHR network into three subnetworks based on interaction types: (1) interactions between known core providers who were documented in scheduling records (core subnetwork); (2) interactions between core and noncore providers (extended subnetwork); and (3) interactions between noncore providers (extended subnetwork). For each core subnetwork, we assessed its capability to replicate the predefined core-provider relations documented in scheduling records. We further compared each EHR network, as well as its subnetworks, using standard network measures to determine their differences in network topologies. We conducted a case study in which we learned, from EHR log data, the interaction networks of providers caring for 15 neonates who underwent gastrostomy tube placement surgery and measured the association of these networks with PSLOS using a proportional-odds model.
Results
The provider networks of four acuity-based teams (two high and two low acuity), along with their subnetworks, were discovered. We found that beyond capturing the predefined core-provider relations, EHR audit logs can also reveal a large number of relations between core and noncore providers or among noncore providers. Providers in the core subnetwork exhibited a greater number of connections with each other than with providers in the extended subnetworks. Many more providers in the core subnetwork serve as hubs than in the other types of subnetworks. We also found that high-acuity teams exhibited more complex network structures than low-acuity teams, with the high-acuity teams generating 6,416 interactions among 407 providers compared with 931 interactions among 124 providers for the low-acuity teams. In addition, we discovered that the high-acuity and low-acuity teams shared more than 33 and 25% of their providers with each other, respectively, but exhibited different collaborative structures, demonstrating that NICU providers shift across acuity teams and exhibit different network characteristics. Results of the case study show that providers whose patients had lower PSLOS tended to disperse patient-related information to more colleagues within their network than those who treated higher-PSLOS patients (p = 0.03).
Conclusion
Network analysis can be applied to EHR log data to model acuity-based NICU teams, capturing interactions between providers within the predesigned core team as well as those outside of it. In the NICU, dissemination of information may be linked to reduced PSLOS. EHR log data provide an efficient, accessible, and research-friendly way to study provider interaction networks. Findings should guide improvements in EHR system design to facilitate effective interactions between providers.
Keywords: provider interaction network, electronic health record, audit log, neonatal intensive care unit, care team, network analysis, network structure, postsurgical length of stay, proportional-odds model
Introduction
Patients admitted to the neonatal intensive care unit (NICU) include high-risk infants, who may suffer from a variety of complex diseases1,2 including prematurity, heart disease, respiratory failure, renal insufficiency, surgical problems, genetic or metabolic illnesses, neurologic problems, visual and hearing impairments, hyperbilirubinemia, infections, and hypo/hyperglycemia.3 The average cost for an infant hospitalized in the NICU is $3,000 per day.4 The birth hospitalization for a premature infant in the NICU can exceed $250,000 due to the increased risk of complications, amplified complexity, longer length of stay, higher resource utilization, readmission, and the need for additional health and social services.4–7 Beyond high health care costs, NICU patients are at greater risk for medical complications and errors in care. A recent study found that potentially modifiable factors contributed to 31% of NICU deaths.8 The variety and complexity of problems that NICU patients experience contribute to variability in the use of clinical resources, providers, and consultants.
Health care organizations (HCOs) often use acuity-based care team models9–14 to match appropriate clinical resources to the individual needs of each infant. Under these models, the composition (which often includes an attending neonatologist, fellow, residents, and nurse practitioner), skills, and hierarchy of the team are designed with the intention of providing safe, high quality care for each infant while optimizing available clinical resources across all patients in the NICU. However, the design of acuity-based teams currently focuses on the core or nucleus of the team9–12 (e.g., what types of individuals are absolutely necessary) and may not reflect the many interactions that occur in-person or virtually across providers in different teams and consulting services. The designed team does not explicitly consider how providers beyond these teams actually collaborate, communicate, and coordinate with the core (and each other) to provide care.11,12
Several studies have modeled the interactions among providers using direct observation of care delivery in the NICU15–18; however, these studies are costly and labor intensive, which limits their scope and thus generalizability. In addition, they fail to capture interactions that occur virtually via health information systems, such as an electronic health record (EHR). Several studies utilized administrative claims data to learn provider interaction networks and workflows19–22; however, by relying on claims data, the collaborations correspond only to those identified by payments for the primary problems ailing a patient at the time of care. Moreover, claims data only capture the interactions of providers and patients based on billed conditions and procedures, but fail to capture other types of interactions (e.g., writing clinical notes, administering medications, and simply checking on patient status), which are events critical to patient care but are rarely viewed outside of a health care system. To address the challenges raised by claims data, several studies have leveraged EHR log data and data mining technologies to infer clinical workflows23,24 and care team structures.25–27 However, these studies limited care team structure or workflow exploration to high levels of abstraction, such as across an entire health care system.23–27 Few studies have focused on modeling care team structures in specific settings such as the NICU; those that have relied on progress notes in the EHR to model handoff patterns between NICU providers.28 Yet these studies only considered NICU nurses and neglected the roles of other providers (e.g., neonatologists, pediatric surgeons, or respiratory therapists).
We designed a pilot study to leverage EHR log data to better understand team structures and collaboration across a variety of provider types and to explore opportunities to design or refine care team structures and processes in the NICU. We specifically designed a data mining algorithm and network analysis methodology for EHR log data to model the interactions of NICU care teams in an urban children’s hospital, which enabled the inference of interaction network structures. We modeled broad provider interaction networks for four predesigned high (2) and low (2) acuity core teams and characterized their structure via standard network measures. We focused on a sample of surgical neonates, who were cared for by providers from the acuity-based NICU core teams pre- and postoperatively. Because infants with surgical procedures require complex clinical care inside and outside the NICU environment and demand the allocation of large amounts of hospital resources, we anticipated broad team models with significant complexity, making these patients a suitable cohort to test model development and performance. We conducted a case study to validate the effectiveness of provider interaction networks learned from EHR log data via their relationships with postsurgical length of stay (PSLOS), a metric with direct cost implications. In the case study, we focused on neonates who underwent gastrostomy tube placement surgery and received postoperative care from NICU providers. These patients’ need for high resource allocation and complex clinical care made them an excellent cohort for studying the effectiveness of EHR provider interaction networks.
Methods
Setting and Acuity-Based Care Teams
This study focuses on the 98-bed, Level IV (regional), academic NICU at Vanderbilt University Medical Center (VUMC), which is composed of two geographically distinct units in adjacent hospitals: a 20-bed unit containing mostly inborn preterm infants and a 78-bed unit for infants with medical and surgical conditions, including infants transported in. The NICU is part of a larger, 1,019-bed, comprehensive medical center comprising multiple hospitals. The NICU receives approximately 1,500 admissions annually and is supported by a full range of medical and surgical subspecialty services. Patients are assigned to one of five predefined core care teams based upon their acuity and underlying pathology. More complex or critically ill infants are admitted and managed by either the Blue (mainly chronically ill infants) or Red (mainly cardiac or surgical infants) teams. Patients with low-acuity health conditions (e.g., a convalescing preterm infant) are assigned to the Green or Yellow teams. The White team (excluded from the analysis) cares for inborn premature infants in a 20-bed unit and transfers any infant requiring surgery to one of the other four teams.
Datasets
The data span November 1, 2016 to December 31, 2017 and include 104 infants with noncardiac surgical procedures (e.g., gastrostomy or bronchoscopy). Care team specialty and acuity information for all of the patients were collected for the Red, Blue, Yellow, and Green teams. These teams managed all of the surgical infants. The White team was not analyzed because it does not care for surgical patients.
The NICU core team assigned to each patient, including the core providers (physician, nurse practitioner, fellow, physician assistant, registered nurse, and resident physician), was known in advance from internally posted scheduling records. The schedule data include 17,314 assignments of core providers to a specific acuity team (color) on a specific date and period. There were 78 core providers affiliated with 18 provider groups (e.g., NICU clinical fellow, neonatology physician, pediatric fellow) involved during the study period. Provider groups are predefined by the health care system; each is a set of providers who have similar job titles or clinical responsibilities.
Table 1 summarizes the patient population in terms of age, corrected gestational age, and weight on the surgical day. When a patient had multiple surgeries, we only included their first surgery in our study to ensure that the team comparative study was not biased toward patients with consecutive surgeries. A total of 81 patients were assigned to the high-acuity teams (Red and Blue), while 23 patients were assigned to the low-acuity teams (Yellow and Green). No patient was assigned to the White team. The average corrected gestational age of the observed patients was less than 36 weeks, reflecting that the majority of surgical infants admitted to the NICU were premature.
Table 1.
Summary information for the infants in this study
| Item | | Blue | Red | Yellow | Green |
|---|---|---|---|---|---|
| # of patients (n = 104) | | 41 | 40 | 11 | 12 |
| Gestational age (wk) | Average | 33.4 | 35.3 | 33.4 | 32.2 |
| | Q1 | 27 | 32.3 | 29.7 | 27.7 |
| | Median | 35.3 | 36.7 | 34.3 | 33.2 |
| | Q3 | 38.6 | 38.7 | 37.4 | 37.3 |
| Age (d) | Average | 64.8 | 38.39 | 50.33 | 63 |
| | Q1 | 5 | 2 | 6 | 16 |
| | Median | 27 | 9.5 | 56 | 69 |
| | Q3 | 133 | 56 | 63 | 105 |
| Weight (kg) | Average | 3.28 | 3.24 | 3.33 | 3.43 |
| | Q1 | 3.42 | 2.71 | 2.71 | 3.09 |
| | Median | 3.27 | 3.08 | 3.32 | 3.30 |
| | Q3 | 4 | 3.41 | 4.03 | 3.85 |
| # of providers committing actions to EHRs | | 213 | 288 | 63 | 81 |
| # of unique provider groups | | 71 | 79 | 31 | 38 |
| # of unique EHR interactions | | 2,904 | 3,364 | 1,288 | 1,659 |
Abbreviation: EHR, electronic health record.
We extracted all provider interactions with the patient’s EHR, including opening and modifying records (e.g., documentation [including procedure notes], observations, measurements, diagnoses, and orders), along with timestamps from the EHR system.23–27 Data were extracted from the VUMC health care electronic systems for 3 perioperative days: 1 day before surgery, the day of surgery, and 1 day after surgery.
The EHR data included 457 providers, affiliated with 110 provider groups (e.g., cohorts of NICU nurses on day or night shift, perioperative services, Pediatric Surgery, Anesthesiology, Urology, or Ophthalmology). The providers performed 9,206 unique interactions with the EHRs.
Study Design
The study’s framework (Fig. 1) consisted of four components: (1) modeling provider interaction networks from EHR and observation data; (2) learning core and extended subnetworks; (3) assessing the ability of the core subnetwork to capture predefined core-provider relationships; and (4) performing an overall network analysis on each EHR network, as well as its subnetworks, using standard network measures to determine their differences in network topologies.
Fig. 1.
An overview of the framework for learning provider interaction networks from the data in EHR and observation and the comparison of differences in network structures. EHR, electronic health record. Leveraging patient-team (A), and patient-provider pairs (B) to learn provider networks (C).
Provider Interaction Networks in EHR
We grouped patients based on the four predefined core teams (colors) from the observation data (shown in Fig. 1A). For each group of patients, we created a bipartite subgraph of providers and patients from the EHR log data. We represented the relationships between providers and patients in the graph as a matrix, such that each cell contained the number of actions a provider performed in the patient’s EHR within the investigated 3 days (1 day before surgery, the day of surgery, and 1 day after surgery) (as shown in Fig. 1B). Next, we applied term frequency-inverse document frequency (TF-IDF) normalization to the matrix to characterize the strength of interactions between providers. TF-IDF was selected as a normalization strategy because it weighs the number of actions a provider performed in a patient’s EHR against the inverse of the total number of patients whose EHRs the provider acted upon, thus describing the affinity of the provider to the patient.29 This technique is widely used in information retrieval and natural language processing to normalize relations between words, documents, and document collections or, in our case, between patients, providers, and teams.
The interaction strength (edge weight) of two providers within a care team was defined as the similarity between the two providers’ row vectors in the matrix. This similarity reflects how many patients the providers had in common. We applied cosine similarity to the providers’ vectors to infer the strength of an interaction. Cosine similarity has been shown to be an effective measure of similarity between two vectors in the biomedical domain30 and has been widely applied (e.g., measuring the similarity of symptoms or effects between diseases31). Using the strength of interactions, we built an EHR network of providers for each care (color) team. An interaction strength greater than 0 indicates that an edge exists between the two providers; in other words, both interacted with the same patients’ charts. To better understand interaction structures between providers taking care of patients with different levels of acuity, we built four EHR provider interaction networks (each corresponding to a color core team), retaining only interactions whose strength (cosine similarity) exceeded 0.2 to remove noise from the network structures.
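The normalization and edge construction described above can be sketched as follows. The counts, provider names, and patient names are hypothetical, and the words-documents analogy in the text is read here as patients-as-terms and providers-as-documents; this is an illustrative reading, not the study's exact implementation.

```python
import math

# Hypothetical toy data: EHR action counts, provider -> {patient: # of actions}.
actions = {
    "np_a":    {"pt1": 12, "pt2": 3},
    "nurse_b": {"pt1": 8,  "pt2": 5},
    "rt_c":    {"pt3": 7},
}

patients = sorted({p for row in actions.values() for p in row})
n_providers = len(actions)

# IDF down-weights patients touched by many providers (patients play the
# role of terms, providers the role of documents).
idf = {p: math.log(n_providers / sum(1 for row in actions.values() if p in row))
       for p in patients}

# TF-IDF row vector for each provider over the patient "vocabulary".
vectors = {prov: [row.get(p, 0) * idf[p] for p in patients]
           for prov, row in actions.items()}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Interaction strength (edge weight); keep only edges with strength > 0.2.
providers = list(actions)
edges = {}
for i in range(len(providers)):
    for j in range(i + 1, len(providers)):
        s = cosine(vectors[providers[i]], vectors[providers[j]])
        if s > 0.2:
            edges[(providers[i], providers[j])] = s
```

Here the two providers who share patients pt1 and pt2 end up connected by a strong edge, while the respiratory therapist with a disjoint patient set stays disconnected.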
Comparison of Provider Interaction Distribution between Any Two EHR Networks
Since the high- and low-acuity teams dealt with different types of patients, we hypothesized that (1) the distribution of provider interactions in a high-acuity team differs significantly from that in a low-acuity team, and (2) the distributions of provider interactions within the two high-acuity (or the two low-acuity) teams are very similar. To test these hypotheses, we conducted a two-sided Wilcoxon rank-sum test for each pair of networks.32 The Wilcoxon rank-sum test is a nonparametric alternative to the two-sample t-test, designed to handle data that are not Gaussian distributed.
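The rank-sum test can be sketched in a few lines. The version below uses the normal approximation and omits the tie correction (an assumption for brevity; a vetted statistical library should be preferred in practice):

```python
import math

def wilcoxon_rank_sum(x, y):
    """Two-sided Wilcoxon rank-sum test (normal approximation, no tie correction)."""
    pooled = list(x) + list(y)
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):                     # assign tie-averaged ranks
        j = i
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1                 # average 1-based rank of the tie block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                       # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

Identical samples yield z = 0 and p = 1, while clearly separated samples yield a small p-value.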
Core- and Extended-EHR Subnetworks
We partitioned each EHR provider interaction network into three subnetworks based on interaction types. The first subnetwork, which we refer to as the core EHR subnetwork, contains only interactions between known core providers documented in the scheduling records (predefined core teams). The second subnetwork only contains interactions between core and noncore providers, and the third contains interactions between noncore providers. We refer to the subnetworks containing noncore providers as the extended-EHR subnetworks.
Assessing the Ability of EHR Provider Networks to Capture Predefined Relations between Core Providers
To evaluate the utility of the learned EHR provider network, we compared interaction relations in EHR provider networks to predefined core-provider relations in predefined care teams (e.g., Blue, Red, Green, or Yellow) as documented in scheduling records. We refer to all interaction relations in a provider network as EHR Coverage, and all predefined core-provider relations in a color team as Schedule Coverage.
To measure the ability of an EHR provider network to capture interaction relations occurring among core providers in a color team (Schedule Coverage), we developed a metric of capability as

Cap_EHR_Network = Percentage_schedule_coverage / Percentage_EHR_coverage

where Percentage_schedule_coverage is defined as the percentage of relations in the Schedule Coverage matching interactions in the EHR Coverage, and Percentage_EHR_coverage is defined as the percentage of interactions in the EHR Coverage matching relations in the Schedule Coverage. The bigger the value of Cap, the greater the ability of an EHR provider network to capture relations in a core team and to discover novel provider interactions (EHR Coverage - Schedule Coverage) beyond the core team.
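The Cap formula itself did not survive extraction; one plausible reading, consistent with the two coverage percentages defined in the text, is their ratio. A sketch under that assumption, with hypothetical edge lists:

```python
def capability(ehr_edges, schedule_edges):
    """Cap of an EHR network, read as the ratio of the two coverage
    percentages: how fully the network replicates Schedule Coverage,
    relative to how small a share of its interactions those relations are.
    This ratio form is an assumption, not the paper's verbatim formula."""
    ehr = {frozenset(e) for e in ehr_edges}        # undirected EHR interactions
    sched = {frozenset(e) for e in schedule_edges} # predefined core relations
    matched = ehr & sched
    pct_schedule_coverage = len(matched) / len(sched)
    pct_ehr_coverage = len(matched) / len(ehr)
    return pct_schedule_coverage / pct_ehr_coverage

# Hypothetical example: both schedule relations are recovered, plus two
# novel interactions beyond the core team.
sched = [("a", "b"), ("b", "c")]
ehr = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]
cap = capability(ehr, sched)
```

Under full schedule coverage this ratio reduces to the number of EHR interactions over the number of predefined relations, which is roughly in line with the magnitudes reported in the Results.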
Network Analysis for Each EHR Provider Interaction Network and Its Subnetworks
To identify differences in network structures (topologies) between acuity-based EHR networks and their corresponding subnetworks, we analyzed differences according to eight network measures.
Network Measures and Their Interpretations in the Virtual Interactions in the EHR
As a running example, Fig. 2 provides a network of 8 providers and 10 edges. Before diving into the details, we introduce the notion of distance between providers and its relationship with interaction strength. According to the definition of interaction strength, two providers are close when they have a similar affinity to patients; a smaller distance indicates a more similar affinity. We use distance to represent relationships between providers in the example graph. For instance, provider 2 has a more similar affinity to patients with provider 3 than with provider 1, hence distances of 0.2 and 0.7, respectively.
Fig. 2.
An example provider interaction network.
Betweenness Centrality
Betweenness centrality is defined as the number of times a provider is on the shortest path between all pairs of the other providers.33 In Fig. 2, providers 1 and 5 are on eight and zero shortest paths, respectively, which indicates that provider 1 has a larger betweenness centrality than provider 5. For instance, provider 1 is on the shortest path (2 → 1 → 6) between providers 2 and 6. In this scenario, provider 1 is the most efficient bridge connecting providers 2 and 6. Clinically, this indicates that provider 1 is most likely to aid communication between providers 2 and 6.
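Betweenness can be computed with Brandes' algorithm; a sketch for unweighted, undirected graphs follows (the three-node test graph is hypothetical, not the Fig. 2 network):

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality on an unweighted, undirected graph
    given as {node: set_of_neighbors}; returns unnormalized scores."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        preds = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                    # BFS from s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}        # undirected: halve double count
```

On a simple chain a-b-c, the middle provider b sits on the single shortest path between a and c and scores 1, while the endpoints score 0.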
Closeness Centrality
Closeness centrality is the inverse of the normalized sum of the length of the shortest paths between the provider and all other providers in the graph.34,35 The more central a provider is, the closer he/she is to all other providers. It should be recognized that the distance between two providers is inversely proportional to the similarity of affinity to patients (i.e., a closer distance indicates a more similar affinity to patients). A central provider shares similar patients with other providers who also share similar patients with the remaining providers.
Eigenvector Centrality
Eigenvector centrality is a measure of the number of connections to other providers who themselves have high eigenvector scores.36 A provider’s score is proportional to the sum of the scores of the providers they connect to; formally, the scores are the entries of the eigenvector associated with the largest eigenvalue of the network’s adjacency matrix. A provider who frequently shares patients with others tends to have a high eigenvector score. Thus, a provider with a high eigenvector centrality can be seen as someone who has many patients in common with other providers that frequently share patients with others.
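A minimal power-iteration sketch of eigenvector centrality, run on a hypothetical four-provider graph (a triangle with one pendant provider; not the Fig. 2 network):

```python
import math

def eigenvector_centrality(adj, iters=200):
    """Power iteration: repeatedly replace each score by the sum of its
    neighbors' scores, renormalizing to unit length each round."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        nxt = {v: sum(x[u] for u in adj[v]) for v in adj}
        norm = math.sqrt(sum(s * s for s in nxt.values())) or 1.0
        x = {v: s / norm for v, s in nxt.items()}
    return x

# Hypothetical graph: triangle a-b-c, plus pendant provider d attached to a.
g = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
scores = eigenvector_centrality(g)
```

Provider a, whose neighbors are themselves well connected, scores highest; the pendant provider d scores lowest, even though d's only neighbor is the top-scoring node.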
Authority
A provider’s authority corresponds to the normalized sum of the hub values of providers that shared patients with the provider.37 The hub value of a provider is measured as the normalized sum of the authority values of providers he/she shares patients with. The initial hub and authority values of a provider are predefined and both values are updated in a mutually recursive manner until a final measurement is achieved. From a clinical perspective, providers with a high authority score are expected to also have a high eigenvector centrality. This is because both measures indicate that the provider has many patients in common with other providers who themselves frequently share patients with other providers.
Cluster Coefficient
A provider’s cluster coefficient is the number of connections that exist among their adjacent providers divided by the number of connections that could possibly exist between them.38 As shown in Fig. 2, provider 2 has three adjacent providers: providers 1, 3, and 5. The number of connections between providers 1, 3, and 5 is one (provider 1 → provider 3), while the maximum possible number of connections is three (provider 1 → provider 3; provider 1 → provider 5, and provider 3 → provider 5). As such, provider 2’s cluster coefficient is one-third. One can think of the cluster coefficient as a quantification of how close a provider’s neighbors are to being a clique of clinicians (e.g., a small group of clinicians with shared interests in common patients). A provider with a large cluster coefficient is one who shares patients with providers who also share patients with each other.
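The worked example above translates directly into code; the graph below encodes only the fragment of Fig. 2 that the text describes (provider 2's neighborhood):

```python
def clustering(adj, v):
    """Local cluster coefficient of v: realized links among v's neighbors
    over the maximum possible number of such links."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return links / (k * (k - 1) / 2)

# Fragment of Fig. 2 from the text: provider 2's neighbors are 1, 3, and 5,
# and only the 1-3 link exists among them.
frag = {1: {2, 3}, 2: {1, 3, 5}, 3: {1, 2}, 5: {2}}
```

As in the text, one realized link out of three possible gives provider 2 a coefficient of one-third.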
Modularity
Modularity measures the amount of effort required to divide a network into modules,39 which, in this study, we refer to as subteams. The amount of effort is determined by two factors: (1) connections between nodes within modules, and (2) connections between nodes in different modules. The higher the modularity, the denser the connections within modules and the sparser the connections across modules. The example network in Fig. 2 has a high modularity when divided into two subteams: {1, 2, 3, 5} and {4, 6, 7, 8}.
Graph Density
Graph density is defined as the total number of edges within the network divided by the number of edges that could exist.40 As shown in Fig. 2, the number of edges is 10, and the maximum number of edges for the network is 28. Thus, the graph density is 10/28 or 0.357. The denser the network, the more interactions occur among providers within it.
Degree
A provider’s degree is the total number of edges with which they are affiliated. The weighted degree is the sum of the edge weights. In this study, the weight of an edge between two providers is the strength of the interaction between them.
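Degree, weighted degree, and graph density are straightforward to compute; a short sketch (the edge tuples are hypothetical):

```python
def density(n_nodes, n_edges):
    """Undirected graph density: edges present over the C(n, 2) possible."""
    return n_edges / (n_nodes * (n_nodes - 1) / 2)

def degrees(weighted_edges):
    """Plain and weighted degree from a list of (u, v, weight) edges,
    where weight is the interaction strength between the two providers."""
    deg, wdeg = {}, {}
    for u, v, w in weighted_edges:
        for node in (u, v):
            deg[node] = deg.get(node, 0) + 1
            wdeg[node] = wdeg.get(node, 0.0) + w
    return deg, wdeg
```

For the Fig. 2 numbers (8 providers, 10 edges), `density(8, 10)` reproduces the 10/28 value given in the text.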
Comparative Study on Network Structure of Core- and Extended-EHR Subnetworks
As mentioned above, we categorized each of the learned EHR provider networks into three subnetworks: one core subnetwork and two extended subnetworks. We conducted intra- and inter-network analyses: the intra-analysis compares the three subnetworks within each of the four EHR provider interaction networks, and the inter-analysis compares a given subnetwork type (core or extended) across the four networks. First, we measured the differences in the subnetworks of an EHR provider network using six of the network measures: (1) degree, (2) weighted degree, (3) betweenness centrality, (4) closeness centrality, (5) authority, and (6) eigenvector centrality. We leveraged Gephi to calculate the values of the six metrics for each subnetwork.41 Second, we compared the subnetworks (i.e., core or extended) across the EHR provider networks (i.e., Blue, Red, Yellow, and Green). We then applied a min–max normalization to each network measure to enable comparison on a common scale.
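The min–max step maps each measure onto [0, 1]; a simple sketch:

```python
def min_max(values):
    """Rescale a network measure to [0, 1] so subnetworks can be
    compared on a common scale; constant inputs map to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

For example, degrees of 2, 4, and 6 rescale to 0, 0.5, and 1.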
Comparison of EHR Provider Interaction Networks
We performed several types of investigations to characterize the differences in the network structures among the four EHR provider networks. First, we modeled the differences between the networks of similar acuity levels (i.e., Blue vs. Red or Yellow vs. Green). Second, we modeled the differences between high and low acuity networks (i.e., Blue vs. Yellow, Blue vs. Green, Red vs. Yellow, and Red vs. Green).
We conducted pairwise comparisons (using a pair of networks) to identify common and unique measures (e.g., providers, provider groups, subteam structures, degree, and centralities) of the four EHR provider networks.
Modeling EHR Provider Networks for Gastrostomy Patients and Measuring Their Relationships with PSLOS
We extracted the EHR data of 18 NICU gastrostomy patients from the day prior to each patient’s surgery until postoperative day 30. Fifteen of the 18 were discharged home within 30 days after surgery, so we restricted the case study to those 15 patients; looking beyond 30 days (e.g., 60, 90, or 120 days) would have introduced too much variation. The mean postsurgical length of stay (PSLOS) of the 15 patients was 23.28 days, with the lowest PSLOS being 12 days. We also acquired general patient demographic data, such as age, date of discharge, date of surgery, and weight.
Since the case study aims to validate the effectiveness of EHR provider networks via their relationships with the patient outcome (PSLOS), we built patient-level EHR provider networks. For each patient’s episode, we created a simplified sequence dataset by ordering provider actions based on their timestamps, starting from the day prior to the patient’s surgery until postoperative day 30 or the patient’s discharge date. Based on the sequences, we identified a relationship between two providers whenever their actions occurred consecutively (provider B used the patient’s EHR after provider A). We weighted each provider interaction (edge) by the frequency with which it occurred. We followed this process for each patient’s EHR and finished with 15 learned patient-level provider networks. We used the standard network metrics introduced above, including degree, betweenness centrality, closeness centrality, and authority, to quantify the structure of each patient-level provider network.
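The sequence-to-network step can be sketched as follows. The action log and provider names are hypothetical, and ignoring a provider immediately following themselves is an assumption made here for clarity:

```python
from collections import Counter

def sequence_network(action_log):
    """Patient-level provider network: weight an edge by how often the
    two providers' EHR actions occurred consecutively in time."""
    ordered = [prov for _, prov in sorted(action_log)]   # order by timestamp
    edges = Counter()
    for a, b in zip(ordered, ordered[1:]):
        if a != b:                                       # skip self-succession
            edges[frozenset((a, b))] += 1
    return edges

# Hypothetical timestamped action log for one infant's chart.
log = [(1, "rn_day"), (2, "neo_md"), (3, "rn_day"), (4, "pharm"), (5, "pharm")]
net = sequence_network(log)
```

Here the nurse and neonatologist alternate twice, producing an edge of weight 2, while the consecutive pharmacist actions add no self-edge.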
Most network metrics’ distributions and the PSLOS distribution did not follow standard distributions, so we appealed to rank-based measures of association. We modeled patient PSLOS as a function of each network metric, controlling for patient age and weight, using a proportional-odds logistic regression model. The R programming language, and specifically the rms package, was used for all statistical analyses.42,43
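For reference, the proportional-odds model has the standard form below (a sketch; the exact covariate coding used in the original analysis is not stated in the text):

```latex
% Proportional-odds (ordinal logistic) model for PSLOS with J ordered levels:
% one intercept \alpha_j per cut-point, a single slope vector shared across all cut-points.
\log \frac{P(\mathrm{PSLOS} \le j)}{P(\mathrm{PSLOS} > j)}
  = \alpha_j - \left( \beta_1 \,\mathrm{metric} + \beta_2 \,\mathrm{age} + \beta_3 \,\mathrm{weight} \right),
  \qquad j = 1, \dots, J - 1
```

The single shared slope vector is what makes the odds "proportional": the effect of a network metric on the odds of a shorter stay is assumed constant across all cut-points of PSLOS.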
Results
EHR Provider Interactions and Strength
Fig. 3 depicts the distributions of the learned provider interactions as a function of their strength. All four teams exhibited a greater number of interactions with strength ≥0.5 than below 0.5. The low-acuity teams (Yellow and Green) had a greater number of such interactions than the high-acuity teams (Blue and Red).
Fig. 3.
Distributions of provider interactions as a function of interaction strength by teams (Blue, Red, Yellow, Green).
The distributions of interactions for the four teams did not exhibit any statistically significant differences: Blue versus Red (p = 0.9273); Yellow versus Green (p = 0.6403); Blue versus Yellow (p = 0.0661); Blue versus Green (p = 0.1643); Red versus Yellow (p = 0.1170); and Red versus Green (p = 0.6403). These findings suggest that the four teams follow similar interaction distributions. In particular, Blue versus Red and Yellow versus Green shared highly similar distributions.
The Ability of EHR Provider Networks to Represent Relations in Schedule Coverage and Novel Relations Beyond the Predefined
If providers worked in the same team during the same period of a day (e.g., daytime or nighttime), we assumed their relationships were predefined. The hospital scheduling records indicated there were 238 predefined provider relations between 57 core providers in Blue, 87 relations between 41 core providers in Red, 33 relations between 23 core providers in Green, and 30 relations between 18 core providers in Yellow. The EHR indicated there were 2,763 interactions between 213 providers in Blue, 3,653 interactions between 288 providers in Red, 349 interactions between 63 providers in Yellow, and 582 interactions between 81 providers in Green (each count includes both core and noncore providers). The ability of EHR provider networks to represent the relations predefined in the scheduling records, and those beyond the predefined, is characterized by Cap_EHR_Network. The values of Cap_EHR_Network for the Blue, Red, Yellow, and Green EHR networks are 10.22, 40, 10.63, and 20.75, respectively, which indicates that beyond relations among core providers in the Schedule Coverage, many more relations between core and noncore providers or among noncore providers (EHR Coverage - Schedule Coverage) were also captured by the EHR networks.
Comparative Results of the Core and Extended Subnetworks
The network structures for the four EHR provider networks, as well as their corresponding core and extended subnetworks, are depicted in Supplementary Figs. A1–A4. In each figure, red represents core providers and blue represents noncore providers. Within each network, there are three types of interactions: (1) between core providers; (2) between core and noncore providers; and (3) between noncore providers. Fig. 4 depicts the Blue EHR provider network with annotated examples of core and noncore providers. The Blue team (which often managed chronically ill patients, who frequently have chronic lung disease) is uniquely connected with respiratory therapy, hematology/oncology, diagnostic radiology, otolaryngology, and pediatric endocrinology. Fig. 5 depicts the differences in network metrics (degree, weighted degree, closeness centrality, betweenness centrality, eigenvector centrality, and authority) of the core and extended subnetworks within each EHR network. There are several notable findings. First, providers in the core subnetwork have a higher degree and weighted degree, on average, than those in the extended subnetworks. This suggests that the connections among core providers are more frequent than those among noncore providers or between core and noncore providers. However, the degree of providers in the extended subnetworks was larger than that of providers in the core subnetwork for the Green team, which demonstrates that noncore providers in the Green team are much more active than core providers.
Fig. 4.
The structure of the Blue EHR provider interaction network. The network consists of three types of interactions: (i) between core providers, (ii) between core and noncore providers, and (iii) between noncore providers. Core providers are colored red, and noncore providers are colored blue. EHR, electronic health record.
Fig. 5.
Average degree, weighted degree, closeness centrality, betweenness centrality, authority, and eigenvector centrality of providers in the core and extended subnetworks of the Blue, Red, Yellow, and Green networks.
Fig. 5 shows that providers in the extended subnetworks exhibited the smallest closeness and betweenness centrality. This suggests that subnetworks that include core providers tend to have higher closeness and betweenness centrality. Providers in the Blue and Red core subnetworks exhibit the largest authority and eigenvector centrality, which indicates that these subnetworks contain more providers who had many patients in common with colleagues who themselves frequently shared patients. For the low acuity teams (Yellow and Green), providers in the extended subnetworks exhibit larger authority and eigenvector centrality than providers in the core subnetworks.
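The core/extended partition underlying this comparison can be sketched in a few lines. The edge list and core-provider set below are hypothetical, and interactions are treated as undirected edges:

```python
from collections import defaultdict

# Sketch: partition an EHR interaction network into core-core,
# core-noncore, and noncore-noncore subnetworks, then compare
# average degree across the three subnetworks.

def split_subnetworks(edges, core):
    parts = {"core": [], "extended_mixed": [], "extended_noncore": []}
    for u, v in edges:
        if u in core and v in core:
            parts["core"].append((u, v))
        elif u in core or v in core:
            parts["extended_mixed"].append((u, v))
        else:
            parts["extended_noncore"].append((u, v))
    return parts

def avg_degree(edges):
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(deg.values()) / len(deg) if deg else 0.0

# Hypothetical providers: three core (MD1, NP1, RN1), two noncore (RT7, SW2).
core = {"MD1", "NP1", "RN1"}
edges = [("MD1", "NP1"), ("MD1", "RN1"), ("NP1", "RN1"),
         ("RN1", "RT7"), ("RT7", "SW2")]
parts = split_subnetworks(edges, core)
print({k: avg_degree(v) for k, v in parts.items()})
# → {'core': 2.0, 'extended_mixed': 1.0, 'extended_noncore': 1.0}
```

Here the densely connected core triad yields a higher average degree than either extended subnetwork, the pattern reported for the Blue, Red, and Yellow teams.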
EHR Provider Network Structures
The network measures for the four EHR networks are reported in Table 2 and Fig. 6. The top provider groups, in terms of the percentage of providers within each network, are depicted in Fig. 7.
Table 2.
Characteristics of the four EHR provider interaction networks
Metric | Blue | Red | Yellow | Green
---|---|---|---|---
Number of providers | 213 | 288 | 63 | 81 |
Number of edges | 2,763 | 3,653 | 349 | 582 |
Number of communities | 8 | 9 | 5 | 6 |
Graph density | 0.124 | 0.092 | 0.18 | 0.186 |
Modularity | 0.574 | 0.556 | 0.563 | 0.598 |
Abbreviation: EHR, electronic health record.
Fig. 6.
Average degree, weighted degree, closeness centrality, betweenness centrality, authority, cluster coefficient, and eigenvector centrality of providers in Blue, Red, Yellow, and Green networks.
Fig. 7.
Distribution of providers within each provider group in the Blue, Red, Yellow, and Green EHR provider networks. The provider groups are ranked from highest to lowest provider percentage. The top 12 and 6 provider groups are depicted for the high and low acuity networks, respectively. EHR, electronic health record. Legend colors label provider types and correspond to the colors in the pie charts.
As expected, the high acuity EHR networks were composed of a greater number of providers, interactions (edges), and communities than the low acuity networks (Table 2). Additionally, the structures of the high acuity networks were more complex than those of the low acuity networks (Supplementary Figs. A1–A4). However, the graph density of the low acuity networks was much larger than that of the high acuity networks (Table 2), which suggests that providers in low acuity networks are more tightly coupled and, thus, work more closely on the same patients.
Although the high and low acuity networks differed in the complexity of their network structures, both have clear community structures, with modularity scores greater than 0.5 (Table 2). The composition of the communities within each of the four acuity networks is depicted in Supplementary Figs. A5–A8 and Supplementary Tables A1–A4.
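The graph density and modularity values reported in Table 2 can be computed directly from an edge list. The stdlib sketch below uses toy data (tools such as Gephi, which the study used, report the same quantities):

```python
from collections import defaultdict

# Sketch: graph density and Newman modularity for a given partition of
# an undirected edge list. The toy graph has two 3-node cliques joined
# by a single bridge edge.

def density(n_nodes, n_edges):
    # fraction of possible undirected edges that are present
    return 2 * n_edges / (n_nodes * (n_nodes - 1))

def modularity(edges, community):
    # Q = sum_c [ e_c / m - (d_c / 2m)^2 ], where e_c is the number of
    # intra-community edges and d_c the total degree within community c
    m = len(edges)
    e_c = defaultdict(int)
    d_c = defaultdict(int)
    for u, v in edges:
        d_c[community[u]] += 1
        d_c[community[v]] += 1
        if community[u] == community[v]:
            e_c[community[u]] += 1
    return sum(e_c[c] / m - (d_c[c] / (2 * m)) ** 2 for c in d_c)

edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(density(6, len(edges)), 3))       # → 0.467
print(round(modularity(edges, community), 3))  # → 0.357
```

Denser networks (like Yellow and Green) have more of their possible edges present; a modularity above 0.5, as for all four teams here, indicates clearly separated communities.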
High-Acuity EHR Networks
Table 3 summarizes the provider differences between pairs of EHR networks. The two high acuity networks shared a large number of providers. For instance, 44% of the providers in the Blue network also cared for infants in the Red network. Similarly, 33% of the providers in the Red network cared for infants in the Blue network. This is expected, since providers often shift across acuity teams in the NICU and cross over at night. The provider groups in common between each pair of networks are reported in Supplementary Table A5, which details which types of providers work across networks.
Table 3.
Differences in the providers between pairs of EHR provider networks
Team A/Team B | Providers in common | Unique providers in team A | Unique providers in team B
---|---|---|---
Blue/Red | 94 (Blue: 44%; Red: 33%) | 119 | 194 |
Yellow/Green | 20 (Yellow: 31%; Green: 25%) | 43 | 61 |
Blue/Yellow | 28 (Blue: 13%; Yellow: 44%) | 175 | 35 |
Blue/Green | 41 (Blue: 19%; Green: 51%) | 172 | 40 |
Red/Yellow | 30 (Red: 10%; Yellow: 48%) | 258 | 33 |
Red/Green | 38 (Red: 13%; Green: 47%) | 250 | 43 |
Abbreviation: EHR, electronic health record.
Fig. 7 shows the 16 provider groups with the largest proportion of providers for the two high acuity networks. These networks share various provider groups, including inpatient nurse practitioners, neonatology, anesthesiologists, pediatrics respiratory care providers from the day or night shift, the pharmacy inpatient service from the day or night shift, the NICU clinical staff leader (CSL) group, and a variety of NICU nurses from the day or night shift. Additionally, the two high acuity networks include groups of specialists, including pediatric neurology, pediatric pulmonary, pediatric surgery, pediatric urology, pediatric radiology, otolaryngology, and ophthalmology. The details of these specialist groups are provided in Supplementary Table A5. Beyond variation in network composition, the two high acuity networks also exhibit variation in their network measures. Although the providers in the Red and Blue networks exhibited a similar betweenness centrality and cluster coefficient (Fig. 6), they differ in degree, weighted degree, closeness centrality, authority, and eigenvector centrality. Providers in the Red network achieved higher values of these network measures than those in the Blue network. This is most likely because the Red network is generally assigned surgical infants with complex comorbidities and incorporates a larger group of specialists. By contrast, the Blue network serves a more homogeneous patient population, which limits the number of consultants needed.
The larger betweenness centrality suggests that the Red network was composed of more providers (reflecting its diverse patient population) who shared patients with many other providers. The larger eigenvector centrality and authority suggest that the Red network contained more providers who had many patients in common with colleagues who themselves frequently shared patients. It is notable that the Red network had a smaller closeness centrality than the Blue network, suggesting that providers in the former more frequently shared patients than providers in the latter.
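Eigenvector centrality, which rewards providers connected to other well-connected providers, can be sketched via power iteration. The toy network below uses hypothetical provider IDs:

```python
# Sketch: eigenvector centrality by power iteration on an undirected
# toy network. Scores are normalized so the top provider scores 1.0.

def eigenvector_centrality(edges, iters=100):
    nodes = sorted({n for e in edges for n in e})
    nbrs = {n: [] for n in nodes}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    x = {n: 1.0 for n in nodes}
    for _ in range(iters):
        # each node's new score is the sum of its neighbors' scores
        x_new = {n: sum(x[m] for m in nbrs[n]) for n in nodes}
        norm = max(x_new.values()) or 1.0
        x = {n: v / norm for n, v in x_new.items()}
    return x

# Hypothetical providers: a triangle (MD1, NP1, RN1) plus RT7 attached to RN1.
edges = [("MD1", "NP1"), ("MD1", "RN1"), ("NP1", "RN1"), ("RN1", "RT7")]
scores = eigenvector_centrality(edges)
top = max(scores, key=scores.get)
print(top)  # RN1 scores highest: it is both well connected and connected
            # to the other well-connected providers
```

The same logic is why a team with more mutually patient-sharing providers, as reported for Red, shows larger eigenvector centrality overall.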
Both networks contained unique provider groups. As shown in Supplementary Table A6, the Blue network is uniquely connected with hematology/oncology, obstetrics/gynecology, diagnostic radiology, otolaryngology, and pediatric endocrinology. By contrast, the Red network has unique connections to cardiology, infectious disease, emergency care, pediatric critical care, urology, and neurology. These connections reflect the network-specific division of patients based on their specific diagnoses and the need for consulting services. For instance, in the Blue network, the respiratory therapists were one of the five roles with the largest degree, weighted degree, betweenness centrality, eigenvector centrality, and authority and the smallest closeness centrality, as shown in Supplementary Table A7. This reflects the high prevalence of infants with chronic lung disease on the Blue network. Pediatric anesthesiologists exhibited the largest degree, weighted degree, betweenness centrality, eigenvector centrality, and authority on the Red network. This indicates that anesthesiologists frequently shared patients with other providers in the Red network. This may be because the Red network mainly provides care to critically ill patients, who require closer involvement with pediatric anesthesiologists.
Low-Acuity EHR Networks
The Yellow and Green networks shared over 25% of their providers (Table 3). Fig. 7 shows that the top provider groups in common include anesthesiologists and surgery nurse practitioners (group 20), neonatal nurse practitioners (group 30), neonatology, and anesthesiology. Other provider groups shared between the two low acuity networks included the pharmacy inpatient day and night shifts, the NICU CSL group, the NICU nurse cohort day and night shifts (e.g., cohorts 23, 25, and 26), and the pediatrics respiratory care day and night shifts (e.g., cohorts 3 and 4; Supplementary Table A5).
The two low acuity networks also exhibited differences in their community structures. As shown in Table 2, the Green network was composed of clearer community structures than the Yellow network (modularity scores of 0.598 and 0.563 for Green and Yellow, respectively) possibly reflecting the more homogeneous nature of the Green network infants, who tend to be feeders and growers.
Providers on the Yellow network had larger degree, closeness centrality, authority, and eigenvector centrality scores and lower betweenness centrality, weighted degree, and cluster coefficient than those on the Green network (Fig. 6). It is notable that Green providers had much larger betweenness scores than Yellow providers. This suggests that the providers in the Green network shared more patients with a substantially greater number of providers. By contrast, providers on the Yellow network had higher eigenvector centrality, authority, and closeness centrality than those on the Green network. This indicates that there are many more authoritative providers (e.g., a neonatologist or NICU nurse practitioner) on the Yellow network than on the Green.
Each of the networks had their own unique set of provider groups. Unique groups in the Yellow network included pediatric anesthesiology, pediatric infectious disease, and perioperative service (Supplementary Table A6). By contrast, unique groups in the Green network included orthopedics, internal medicine, gastroenterology laboratory, NICU CSL group, pediatric neurology, and plastic surgery (Supplementary Table A6).
High versus Low Acuity EHR Networks
Over 44% of the providers in the low acuity networks (Yellow and Green) also appeared in the high acuity networks (Blue and Red; Table 3). However, less than 19% of the providers in the high acuity networks appeared in the low acuity networks. The networks at different acuity levels shared certain provider groups, including neonatology, anesthesiology, general surgery, anesthesia and surgery nurse practitioners, neonatal nurse practitioners, the holding room/postanesthesia care unit group, the NICU CSL group, NICU cohorts, and respiratory care cohorts (Fig. 7; Supplementary Table A5).
There were several notable differences between high and low acuity networks. First, high acuity networks contained substantially more members than low acuity networks (Blue and Red were composed of 213 and 288 providers, respectively, while Yellow and Green were composed of 63 and 81, respectively; Table 2). The larger number of providers reflects the increased complexity of patients and the need for specialist care.
Second, high acuity networks contained more communities than low acuity networks (Blue and Red contained 8 and 9, respectively, while Yellow and Green contained 5 and 6, respectively).
Third, low acuity networks exhibited larger authority, closeness centrality, cluster coefficient, and eigenvector centrality scores than high acuity networks. This suggests that the providers in low acuity networks are more tightly coupled, in that they shared more patients with other providers who themselves frequently shared patients. Although low acuity networks had a smaller number of providers and clusters, the density of their connections was higher than those in high acuity networks (Yellow and Green exhibited densities of 0.180 and 0.186, respectively, while Blue and Red exhibited densities of 0.124 and 0.092, respectively).
Fourth, high acuity networks comprised a greater variety of provider groups than low acuity networks. Provider groups unique to high acuity networks included hematology/oncology, pediatric cardiology, obstetrics/gynecology, pediatric otolaryngology, pediatric ophthalmology, pediatric pulmonology, pediatric anesthesiology, pediatric neurology, pediatric endocrinology, and pediatric urology (Supplementary Table A6), reflecting more complex clinical problems that required subspecialist involvement.
Gastrostomy Patient-Level EHR Provider Networks—Test Results of Relations between PSLOS and Network Metrics via the Proportional-Odds Model
When controlling for patient age and weight, the degree average was the only network metric significantly associated with PSLOS at the 0.05 significance level. To better interpret the results, we scaled the degree averages by the interquartile range. With each interquartile range unit increase of degree average, the odds of a longer PSLOS decreased by approximately 88%. In simpler terms, higher degree average was associated with shorter PSLOS within our patient sample.
To understand the relationship between degree and PSLOS from another perspective, we dichotomized our patient sample by the approximate scaled degree average, at a median value of 1.7. For the “smaller” degree average group, the average PSLOS was 25.89 days, whereas for the “larger” degree average group, the average PSLOS was 20.67 days. A Wilcoxon rank-sum test of the two groups’ PSLOS found that the “smaller” degree average group is more likely to have a larger PSLOS than the “larger” degree average group (p = 0.03). This demonstrates the inverse relationship between degree average and PSLOS. Fig. 8 depicts boxplots of PSLOS for the high and low degree groups of networks: on the left, all networks whose degree average is greater than 1.7; on the right, all networks whose degree average is less than 1.7.
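The dichotomized comparison above can be illustrated with a Wilcoxon rank-sum test. The sketch below uses a normal approximation without tie correction, and the PSLOS values are illustrative only, not the study data (in practice one would use scipy.stats.ranksums or R's wilcox.test, as in the study's R-based analysis):

```python
import math

# Sketch: two-sided Wilcoxon rank-sum test comparing PSLOS between
# a low-degree group and a high-degree group (normal approximation).

def rank_sum_test(a, b):
    combined = sorted((v, g) for g, vals in (("a", a), ("b", b)) for v in vals)
    # assign average ranks to tied values
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + j + 1) / 2  # ranks are 1-based
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(ranks[k] for k, (_, g) in enumerate(combined) if g == "a")
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

low_degree = [31, 28, 26, 24, 30]   # illustrative PSLOS (days), longer stays
high_degree = [22, 19, 21, 18, 23]  # illustrative PSLOS (days), shorter stays
z, p = rank_sum_test(low_degree, high_degree)
print(round(z, 2), p < 0.05)
```

With these illustrative values the low-degree group has uniformly longer stays, so the test rejects at the 0.05 level, mirroring the direction of the study's finding.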
Fig. 8.
Boxplots of the postsurgical length of stay for high (degree >1.7) and low (degree <1.7) degree group of networks.
The providers with the highest degrees tended to be (Attending) Physicians, Registered Nurses, and Nurse Practitioners, and many high-degree providers had EHR interactions with other top-degree providers. For example, following a basic metabolic panel (testing for blood calcium levels, among other laboratory tests) ordered by a top-degree Neonatal Physician, a top-degree Nurse Practitioner ordered a potassium chloride oral solution for the patient.
Discussion
Provider interaction networks extracted from EHR log data allow for rich characterizations of care team structures in the NICU. Our results have several notable implications.
First, our approach provides an opportunity to identify provider interactions that occur in an ad hoc or virtual manner via health information systems. Our approach has the added benefit of analyzing EHR utilization data instead of direct observational data or administrative claims. This is notable because EHR log data capture details of activities for a wide range of providers, including almost all hospital employees, and the log data can be collected automatically, substantially reducing the cost and bias of studying provider interaction networks. Thus, our approach provides an efficient, accessible, and research-friendly way to study provider interaction networks.
Second, the learned network structures may assist HCOs in refining their care team models, especially with respect to relationships beyond core providers, such as those between core and noncore providers or among noncore providers. We found many more EHR provider interactions occurring beyond the current core-team models predefined in the scheduling records. For instance, we found that the high acuity teams (e.g., Blue, as shown in Fig. 4) included physicians from Otolaryngology, Pulmonology, Endocrinology, Neurology, Hematology, and Radiology. Unfortunately, in the reality of a busy NICU, these interactions are often conducted via the EHR and are potentially fraught with miscommunication and failure to close the loop.44 Adding direct communication between these physicians and providers in the NICU coordination workflow could potentially improve work efficiency. The learned network structures can inform HCOs about which providers play an important role in controlling the information passed between providers, which providers are the closest to work with, and which are linked by other important providers.
Third, we conducted a case study on the EHR provider networks of gastrostomy patients and found a significant association between PSLOS and the average degree of the provider networks. Providers treating low PSLOS patients dispersed patient-related information to more colleagues than providers treating high PSLOS patients.
There are several limitations in this pilot study that should be recognized, which can serve as guidelines for future investigations. First, EHR log data do not capture the actions and interactions of every provider. There are several reasons why this is the case: (1) not every provider interacts with a patient’s EHR during their clinical service and (2) EHR systems do not document all interactions. Future studies should integrate other resources, such as phone calls or communication logs,45 to further refine network structures.
Second, the number of surgical infants in this study was small, and the cohort was unique, as we focused on a rare type of patient (i.e., those who suffered from severe health conditions that required surgical intervention). Future investigations may benefit from considering the network characteristics generated by all patients in the NICU.
Third, the interactions in the extended subnetworks learned from EHR data were not interpreted by human experts in a formalized manner. Transforming the learned interactions between core and noncore providers or among noncore providers into concrete information exchange is a nontrivial challenge. For instance, we found that the Blue and Red teams are associated with physicians from other divisions or departments, including Hematology, Urology, and Neurology; however, we did not verify with experts whether the detected interactions were meaningful. Qualitative methods, including surveys or focus group interviews with content experts, will be necessary to investigate such issues.
Fourth, the investigation into the EHR provider interaction networks does not consider the patient’s health conditions. Although the learned four EHR provider interaction networks correspond to different acuity levels, for instance, the Blue is for chronically ill infants, and Green is for low acuity health conditions (e.g., a convalescing preterm infant), formalized patient risk stratification models46 are still required to further understand the impact of EHR provider interaction networks on the management of patients.
Fifth, the content of a provider’s interaction with a patient’s EHR was not investigated. This is because the EHR audit logs used in this work are from our homegrown EHR system, StarPanel, which did not record the details of the events a provider committed to a patient’s EHR. For instance, we cannot determine which artifacts of a patient’s EHR are affiliated with a provider’s EHR activities. We modeled interactions between providers via their common accesses to a patient’s EHR during a specific period, which may not capture the true relationships between providers. For instance, suppose provider A accessed a patient’s medication list and provider B added new content to a progress note of the same patient within a period (e.g., 30 minutes). These two providers did not exchange information or communicate virtually, but a relationship between them was artificially created by our approach. We plan to consider the content of a provider’s EHR activities to learn more clinically meaningful provider relations in future studies. Specifically, we propose to define common tasks that characterize provider activities (e.g., reading progress notes or adding content to notes), along with their contextual information (e.g., patient visit type, care phase, architectural location, timestamp) in the EHR from a clinical perspective, and then learn networks upon the defined tasks.
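The common-access linkage described above, including the 30-minute window, can be sketched as follows; the audit-log records are hypothetical tuples of (provider, patient, minutes since midnight):

```python
from collections import defaultdict
from itertools import combinations

# Sketch: infer provider-provider interactions from audit-log entries
# whenever two providers touch the same patient's EHR within a fixed
# time window. As noted in the limitations, co-access within a window
# does not guarantee actual communication.

WINDOW = 30  # minutes

def infer_edges(log):
    by_patient = defaultdict(list)
    for provider, patient, t in log:
        by_patient[patient].append((t, provider))
    edges = set()
    for accesses in by_patient.values():
        accesses.sort()
        for (t1, p1), (t2, p2) in combinations(accesses, 2):
            if p1 != p2 and abs(t2 - t1) <= WINDOW:
                edges.add(frozenset((p1, p2)))
    return edges

# Hypothetical log: only RN1 and MD1 access the same patient within 30 min.
log = [("RN1", "pt1", 600), ("MD1", "pt1", 615),
       ("NP1", "pt1", 700), ("MD1", "pt2", 620), ("RT7", "pt2", 900)]
edges = infer_edges(log)
print(sorted(sorted(e) for e in edges))  # → [['MD1', 'RN1']]
```

This makes the limitation concrete: NP1 accessed the same patient as RN1 and MD1 but outside the window, so no edge is created, while a within-window co-access creates an edge even if no communication actually occurred.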
Sixth, our network analysis-based framework incorporated only basic network analysis approaches rather than more advanced techniques. Since the focus of this work is to develop a common framework to validate provider interactions learned from EHRs and to compare differences in collaborative structures between networks consisting of core providers, noncore providers, or both, we started with basic network analysis approaches. More advanced approaches, including node2vec, Uniform Manifold Approximation and Projection (UMAP), and graph neural networks, are anticipated to be incorporated into the framework to learn higher-quality provider interactions.
Seventh, we only analyzed scheduling records for core providers in the NICU and did not investigate the scheduling records of other types of providers, such as anesthesiologists or surgeons. Over 90% of EHR interactions were relations between core and noncore providers or among noncore providers, and we would need to extract other types of scheduling records to verify those interactions. Due to the complexity of the protocol for extracting and analyzing scheduling data beyond the NICU, we leave such a study to future work.
Conclusion
In this paper, we showed that the actions of care providers, as documented in EHR audit logs, are amenable to network analysis in a manner that reveals interaction networks. Specifically, we learned the EHR provider networks for four predefined NICU care teams and quantified their structures via standard network metrics. Our comparative analysis of the four EHR networks and their corresponding subnetworks (including only core providers, only noncore providers, or both) indicated differences in collaborative network structure. We found that, beyond fully capturing the provider relations in the core-provider scheduling records, EHR audit logs also reveal a large number of provider relations between core and noncore providers or among noncore providers. In addition, providers in core subnetworks are more likely to have virtual interactions with each other than those in other types of subnetworks. The results of this study indicate that the EHR can be used along with network analysis to model acuity-based teams and measure their team structures over the course of care for surgical neonates. Through our case study, we also found that, in the NICU, dissemination of information through the EHR may be associated with reduced PSLOS. In future work, we will translate these results into actionable criteria to improve resource utilization, team-based care, and workload at the provider and team levels. Specifically, we will test the weak, medium, and strong provider interactions learned from EHRs, consolidate roles with network analytics, and engage HCOs to define roles and activities for an important NICU process.
Supplementary Material
Acknowledgment
The authors would like to thank Cindy Kim for assisting with the case study learning the interaction networks of providers caring for neonates who underwent gastrostomy tube placement surgery. They further thank Dr. Jonathan S. Schildcrout for providing direction on the design of the statistical models.
Funding
This research was supported, in part, by the National Library of Medicine of the National Institutes of Health under Award Number R01LM012854.
Footnotes
Conflict of Interest
None declared.
References
- 1. Sahni R, Polin RA. Physiologic underpinnings for clinical problems in moderately preterm and late preterm infants. Clin Perinatol 2013;40(04):645–663
- 2. Mwaniki MK, Atieno M, Lawn JE, Newton CRJC. Long-term neuro-developmental outcomes after intrauterine and neonatal insults: a systematic review. Lancet 2012;379(9814):445–452
- 3. Chavez-Valdez R, McGowan J, Cannon E, Lehmann CU. Contribution of early glycemic status in the development of severe retinopathy of prematurity in a cohort of ELBW infants. J Perinatol 2011;31(12):749–756
- 4. Kornhauser M, Schneiderman R. How plans can improve outcomes and cut costs for preterm infant care. Manag Care 2010;19(01):28–30
- 5. Petrou S, Eddama O, Mangham L. A structured review of the recent literature on the economic consequences of preterm birth. Arch Dis Child Fetal Neonatal Ed 2011;96(03):F225–F232
- 6. Mangham LJ, Petrou S, Doyle LW, Draper ES, Marlow N. The cost of preterm birth throughout childhood in England and Wales. Pediatrics 2009;123(02):e312–e327
- 7. National Perinatal Information System/Quality Analytic Services. Available at: www.npic.org. Prepared by March of Dimes Perinatal Data Center; 2011. Accessed February 18, 2020
- 8. Jacob J, Kamitsuka M, Clark RH, Kelleher AS, Spitzer AR. Etiologies of NICU deaths. Pediatrics 2015;135(01):e59–e65
- 9. Brodsky D, Gupta M, Quinn M, et al. Building collaborative teams in neonatal intensive care. BMJ Qual Saf 2013;22(05):374–382
- 10. Vandenberg KA. Individualized developmental care for high risk newborns in the NICU: a practice guideline. Early Hum Dev 2007;83(07):433–442
- 11. Sneve J, Kattelmann K, Ren C, Stevens DC. Implementation of a multidisciplinary team that includes a registered dietitian in a neonatal intensive care unit improved nutrition outcomes. Nutr Clin Pract 2008;23(06):630–634
- 12. Salera-Vieira J, Tanner J. Color coding for multiples: a multidisciplinary initiative to improve the safety of infant multiples. Nurs Womens Health 2009;13(01):83–84
- 13. White RD, Smith JA, Shepley MM; Committee to Establish Recommended Standards for Newborn ICU Design. Recommended standards for newborn ICU design, eighth edition. J Perinatol 2013;33(Suppl 1):S2–S16
- 14. Milette I, Martel MJ, da Silva MR, Coughlin McNeil M. Guidelines for the institutional implementation of developmental neuroprotective care in the NICU. Part B: recommendations and justification. A joint position statement from the CANN, CAPWHN, NANN, and COINN. Can J Nurs Res 2017;49(02):63–74
- 15. Profit J, Sharek PJ, Kan P, et al. Teamwork in the NICU setting and its association with health care-associated infections in very low-birth-weight infants. Am J Perinatol 2017;34(10):1032–1040
- 16. Barbosa VM. Teamwork in the neonatal intensive care unit. Phys Occup Ther Pediatr 2013;33(01):5–26
- 17. O’Brien K, Bracht M, Macdonell K, et al. A pilot cohort analytic study of family integrated care in a Canadian neonatal intensive care unit. BMC Pregnancy Childbirth 2013;13(01, Suppl 1):S12
- 18. Bracht M, O’Leary L, Lee SK, O’Brien K. Implementing family-integrated care in the NICU: a parent education and support program. Adv Neonatal Care 2013;13(02):115–126
- 19. Uddin S, Khan A, Piraveenan M. Administrative claim data to learn about effective healthcare collaboration and coordination through social network. Paper presented at: 48th Hawaii International Conference on System Sciences; Kauai, Hawaii: IEEE; 2015:3105–3114
- 20. Cunningham FC, Ranmuthugala G, Plumb J, Georgiou A, Westbrook JI, Braithwaite J. Health professional networks as a vector for improving healthcare quality and safety: a systematic review. BMJ Qual Saf 2012;21(03):239–249
- 21. Uddin S, Hossain L, Hamra J, Alam A. A study of physician collaborations through social network and exponential random graph. BMC Health Serv Res 2013;13:234
- 22. Uddin S. Exploring the impact of different multi-level measures of physician communities in patient-centric care networks on healthcare outcomes: a multi-level regression approach. Sci Rep 2016;6:20222
- 23. Chen Y, Lorenzi N, Nyemba S, Schildcrout JS, Malin B. We work with them? Healthcare workers’ interpretation of organizational relations mined from electronic health records. Int J Med Inform 2014;83(07):495–506
- 24. Chen Y, Xie W, Gunter CA, et al. Inferring clinical workflow efficiency via electronic medical record utilization. Paper presented at: AMIA Annual Symposium Proceedings; San Francisco, CA: 2015;2015:416
- 25. Chen Y, Patel MB, McNaughton CD, Malin BA. Interaction patterns of trauma providers are associated with length of stay. J Am Med Inform Assoc 2018;25(07):790–799
- 26. Chen Y, Lorenzi NM, Sandberg WS, Wolgast K, Malin BA. Identifying collaborative care teams through electronic medical record utilization patterns. J Am Med Inform Assoc 2017;24(e1):e111–e120
- 27. Chen Y, Kho AN, Liebovitz D, et al. Learning bundled care opportunities from electronic medical records. J Biomed Inform 2018;77:1–10
- 28. Gray JE, Davis DA, Pursley DM, Smallcomb JE, Geva A, Chawla NV. Network analysis of team structure in the neonatal intensive care unit. Pediatrics 2010;125(06):e1460–e1467
- 29. Aizawa A. An information-theoretic perspective of tf–idf measures. Inf Process Manage 2003;39(01):45–65
- 30. Ye J. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses. Artif Intell Med 2015;63(03):171–179
- 31. Roque FS, Jensen PB, Schmock H, et al. Using electronic patient records to discover disease correlations and stratify patient cohorts. PLOS Comput Biol 2011;7(08):e1002141
- 32. Wilcoxon F, Katti SK, Wilcox RA. Critical values and probability levels for the Wilcoxon rank sum test and the Wilcoxon signed rank test. Selected Tables in Mathematical Statistics 1970;1:171–259
- 33. Brandes U. A faster algorithm for betweenness centrality. J Math Sociol 2001;25(02):163–177
- 34. Kempe D, Kleinberg J, Tardos É. Maximizing the spread of influence through a social network. Paper presented at: Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Washington, DC: 2003:137–146
- 35. Okamoto K, Chen W, Li XY. Ranking of closeness centrality for large-scale social networks. In: Preparata FP, Wu X, Yin J (eds). Frontiers in Algorithmics. FAW 2008. Lecture Notes in Computer Science, Vol 5059. Berlin, Heidelberg: Springer
- 36. Bonacich P. Some unique properties of eigenvector centrality. Soc Networks 2007;29(04):555–564
- 37. Tichy NM, Tushman ML, Fombrun C. Social network analysis for organizations. Acad Manage Rev 1979;4(04):507–519
- 38. Barabási AL, Jeong H, Néda Z, et al. Evolution of the social network of scientific collaborations. Phys A 2002;311(3–4):590–614
- 39. Newman MEJ. Modularity and community structure in networks. Proc Natl Acad Sci U S A 2006;103(23):8577–8582
- 40. Scott J. Social network analysis. Sociology 1988;22(01):109–127
- 41. Bastian M, Heymann S, Jacomy M. Gephi: an open source software for exploring and manipulating networks. Paper presented at: Third International AAAI Conference on Weblogs and Social Media; San Jose, CA: 2009
- 42. R Foundation for Statistical Computing. R: A language and environment for statistical computing. 2016. Accessed January 31, 2019
- 43. Harrell FE Jr. Regression modeling strategies. 2019. Available at: http://biostat.mc.vanderbilt.edu/rms. Accessed January 31, 2019
- 44. Partnership for Health IT Patient Safety. Closing the loop: using health IT to mitigate delayed, missed, and incorrect diagnoses related to diagnostic testing and medication changes. 2018. Accessed October 30, 2018
- 45. Rucker DW. Using telephony data to facilitate discovery of clinical workflows. Appl Clin Inform 2017;8(02):381–395
- 46. Chandler AE, Mutharasan RK, Amelia L, Carson MB, Scholtens DM, Soulakis ND. Risk adjusting health care provider collaboration networks. Methods Inf Med 2019;58(2–03):71–78