Abstract
Objective
Predictive analytics are potentially powerful tools, but to improve healthcare delivery, they must be carefully integrated into healthcare organizations. Our objective was to identify facilitators, challenges, and recommendations for implementing a novel predictive algorithm that aims to prospectively identify patients with high preventable utilization and proactively involve them in preventive interventions.
Materials and Methods
In preparation for implementing the predictive algorithm in 3 organizations, we interviewed 3 stakeholder groups: health systems operations (eg, chief medical officers, department chairs), informatics personnel, and potential end users (eg, physicians, nurses, social workers). We applied thematic analysis to derive key themes and categorize them into the dimensions of Sittig and Singh’s original sociotechnical model for studying health information technology in complex adaptive healthcare systems. Recruitment and analysis were conducted iteratively until thematic saturation was achieved.
Results
Forty-nine interviews were conducted in 3 healthcare organizations. Technical components of the implementation (hardware and software) raised fewer concerns than alignment with sociotechnical factors. Stakeholders wanted decision support based on the algorithm to be clear and actionable and incorporated into current workflows. However, how to make this disease-independent classification tool actionable was perceived as a challenge, and appropriate patient interventions informed by the algorithm appeared likely to require substantial external and institutional resources. Stakeholders also described the criticality of trust, credibility, and interpretability of the predictive algorithm.
Conclusions
Although predictive analytics can classify patients with high accuracy, they cannot advance healthcare processes and outcomes without careful implementation that takes into account the sociotechnical system. Key stakeholders have strong perceptions about the facilitators and challenges that shape successful implementation.
Keywords: predictive analytics, implementation, user-centered design, quality, healthcare utilization
INTRODUCTION
Predictive analytics using electronic health record (EHR) or administrative data can accurately predict in-hospital mortality, 30-day unplanned readmission, prolonged length of stay, triage, decompensation, and treatment optimization for multiple organ failure.1–4 Similar techniques could be applied to identify condition-independent populations such as high-need, high-cost (HNHC) patients, the subset of patients that accounts for the majority of healthcare spending.5–8 HNHC patients often have complex needs related to multiple comorbidities, social determinants of health, and functional limitations.6,9,10 These patients may be better served by improved primary care access,11,12 care coordination programs, and social services.13–16 An important barrier to utilizing these services is identifying HNHC patients early enough to intervene. In the current project, a collaboration between 2 clinical research networks, our team is developing a novel HNHC predictive algorithm to identify patients at high risk of becoming HNHC in the coming months or years.
However, a predictive model cannot advance healthcare without being transformed into some form of clinical or organizational decision support and implemented within healthcare organizations. Recent predictive algorithms have demonstrated a superior ability to classify patients, but progress has been slow in leveraging classification power to improve outcomes.17 A tool to identify HNHC patients might pose challenges for healthcare organizations because the optimal action for these patients may not be obvious. The best organizational recipient of the information is also uncertain. For this innovation, therefore, the “5 Rights” of clinical decision support are still unclear (right information, right person, right time, right format, right channel).18 One critical lesson from previous health information technology (HIT) implementation is that end users (eg, nurses, physicians, and support staff) must be involved to make new technology useful, usable, and actionable.18–21
Therefore, during the HNHC algorithm development, we conducted a concurrent preimplementation qualitative study with key stakeholders in 3 healthcare organizations. Our goal was to determine how HNHC information should be presented, to which stakeholders, when they will need it, and to identify facilitators and challenges to implementation.18 We also sought to identify novel aspects of this proposed implementation that were not covered by current conceptual models.
METHODS AND MATERIALS
Setting
This study took place within 3 healthcare systems, 2 in Florida and 1 in New York City. Within each organization, interviews targeted broad participant groups in ambulatory multispecialty practice and within the hospital setting. These healthcare systems were chosen for their varied patient populations and organizational structures, in addition to their affiliation with 2 Clinical Research Networks (CRNs). CRNs are partnerships organized through the National Patient-Centered Clinical Research Network (PCORnet) to enable clinical research that is faster, easier, less costly, and more relevant to patients’ needs. CRNs facilitate patient-centered research through secure, central data repositories including administrative, clinical, Medicare claims, Medicaid claims, commercial claims, and social determinants of health data from different healthcare systems. The CRN was integral to the HNHC algorithm development portion of the project (not reported in detail here). The IRBs at Weill Cornell Medicine and the University of Florida determined that this study was not human subjects research.
The HNHC predictive algorithm
We developed a predictive model for HNHC Medicare fee-for-service and dual-eligible patients. HNHC patients were defined as those in the top 10% of total annual healthcare spending in a year. The development of this model was largely based on our previous work on a taxonomy for HNHC patients.10 Predictors of this model include patient age and 9 indicators representing clinically meaningful HNHC patient categories, including serious medical illness (eg, end stage cancer), frailty, serious mental illness, single high-cost condition (eg, HIV), single condition with high pharmacy costs (eg, Crohn’s disease), chronic pain, end-stage renal disease, opioid use disorder, and multiple chronic conditions.10 In ongoing work using logistic regression and machine learning methods for prediction, we have found that this model achieved good discrimination (C-statistics ranged from 0.72 to 0.82), good accuracy (Brier score ranged from 0.16 to 0.21), and good calibration (little difference between the predicted and observed risk by risk deciles) (unpublished data). In addition, we are collaborating with a local Medicare accountable care organization to implement this model to inform opportunities for quality improvement and cost reduction.
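To make the model’s form concrete, the following is a minimal sketch (in Python, on synthetic data) of fitting and evaluating a logistic model of the shape described above; the variable names, effect sizes, and data are illustrative assumptions, not the study’s model or data.

```python
# Minimal sketch of the model form described above: logistic regression on
# age plus 9 binary HNHC category indicators. All data here are synthetic;
# the actual model was trained on Medicare claims data (not shown).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(65, 95, n)
indicators = rng.binomial(1, 0.15, (n, 9))  # eg, frailty, serious mental illness, ...
X = np.column_stack([age, indicators])

# Synthetic outcome: HNHC = top 10% of simulated annual spending
spend = np.exp(0.02 * age + indicators @ rng.uniform(0.2, 0.8, 9) + rng.normal(0, 1, n))
y = (spend >= np.quantile(spend, 0.90)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]

print(f"C-statistic: {roc_auc_score(y, p):.2f}")     # discrimination
print(f"Brier score: {brier_score_loss(y, p):.2f}")  # accuracy

# Calibration: compare mean predicted vs observed risk within risk deciles
deciles = np.digitize(p, np.quantile(p, np.linspace(0.1, 0.9, 9)))
for d in range(10):
    mask = deciles == d
    print(f"decile {d}: predicted={p[mask].mean():.3f}, observed={y[mask].mean():.3f}")
```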
Study design and sample
The study utilized semi-structured interviews with 3 key stakeholder groups: operational personnel, informatics personnel, and potential end users (Table 2). Each site’s principal investigator initially reached out to high-level operational and informatics leadership via e-mail. We then employed a snowball sampling approach to contact additional interviewees, conducting analysis and recruitment iteratively until we achieved qualitative saturation of themes.22
Table 2. Stakeholder groups, included roles, and number of interviews analyzed (n) by site

| Stakeholder group | Included roles | Florida CRN (n) | New York City CRN (n) |
|---|---|---|---|
| End users of the predictive algorithm (EU) | Primary care provider (including practice leads), hospitalist, nurse practitioner (primary care), care manager (RN), health coach (RN), case manager/social worker | 6 | 9 |
| Informatics (INF) | Chief information officer, chief/associate medical information officer, chief analytics officer | 3 | 4 |
| Operations (OPS) | ACO director, chief transformation officer, director/associate director of population health, chief quality and patient safety officer, chief/associate medical directors, VP care integration, director of care management, director of community health, chief scientific officer, department chairs (family medicine, internal medicine, etc.) | 8 | 17 |
| Total analyzed | | 18 | 30 |
Abbreviations: CRN, clinical research network; EU, end user; INF, informatics; OPS, operations; RN, registered nurse.
Data collection
The semi-structured interview guides were based on Sittig and Singh’s model, which appeared to be relevant to our goal of informing the implementation of a predictive algorithm.23
Our team includes PhD-level researchers and clinicians with extensive experience using qualitative methods (ELA, JSA, NCB). Interview guides contained probes regarding dimensions of the model including external rules and regulations, internal organization, clinical content, human-computer interface, hardware and software infrastructure, people, workflow and communication, system measurement, and monitoring. We developed an interview guide for each stakeholder group using a common organizational structure (Table 1).
Table 1. Common organizational structure of the interview guides

| Section | Description |
|---|---|
| Background and current activities | Current efforts related to predictive analytics and identifying HNHC patients |
| Perceived usefulness (operational and end users only) | Understanding how identifying HNHC patients may be helpful, at what point in the care process it may be useful, and to whom |
| Resource needs and constraints (informatics only) | Details related to the informatics resources (ie, personnel) necessary to develop the predictive algorithm, possibilities for presenting the score in current information systems, providing training, and making the score actionable |
| Barriers and facilitators | Challenges and enablers to implementing and meaningfully utilizing predictive analytics, specifically for identifying HNHC patients. Interviewer also probed regarding methods for measuring success |
Abbreviation: HNHC, high-need, high-cost.
A single interviewer (NCB) conducted all interviews, with up to 3 other team members (ELA, JSA, KB, LTD) joining each interview as available. One research team member from the Florida team (KB) participated in all interviews with Florida-based interviewees (excluding 1 instance) to provide local context. New York City-based interviews were conducted in person or via phone. All interviews with Florida-based interviewees were conducted via phone. Interviews lasted 30–45 minutes. All interviews were audio recorded and professionally transcribed. One interview was excluded from the analysis due to a recording device malfunction.
Data analysis
Qualitative data were analyzed using thematic analysis and the constant comparative process.24 We developed the initial codebook by reading transcripts as a group, iteratively and inductively eliciting themes. Once we had developed an initial list of codes, we organized the codes into higher-level categories.25 We analyzed 5 transcripts as a group to develop familiarity with the codebook. After that, at least 2 team members coded each of the remaining interview transcripts through consensus coding. The research team met weekly to discuss the transcripts analyzed and update the coding scheme as needed. Saturation was assessed based on whether new codes were needed to account for distinct themes. The team determined that saturation had probably been reached after the first 35 interviews but, because the majority of this initial group of interviews was recruited from the New York network, we continued interviewing at the Florida sites until we were confident we had interviewed a similarly representative group of stakeholders.26
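As a rough illustration of the saturation logic just described, the check reduces to asking whether successive transcripts still contribute previously unseen codes. The sketch below (Python) is illustrative only and is not the study team’s actual tooling; the code names are hypothetical.

```python
# Illustrative saturation check: count how many previously unseen codes
# each successive transcript contributes. When the count stays at zero,
# that is one signal that thematic saturation may have been reached.
def new_codes_per_transcript(coded_transcripts):
    seen = set()
    counts = []
    for codes in coded_transcripts:
        fresh = codes - seen          # codes not seen in any earlier transcript
        counts.append(len(fresh))
        seen |= codes
    return counts

# Hypothetical example with 4 coded transcripts
transcripts = [{"workflow", "trust"}, {"trust", "resources"},
               {"workflow", "payment"}, {"trust", "payment"}]
print(new_codes_per_transcript(transcripts))  # [2, 1, 1, 0]
```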
Following coding of all interview transcripts, our team reviewed the data on a code-by-code basis to ensure consistency, develop a list of key emerging themes, and categorize themes as facilitators or challenges to using the HNHC algorithm.24 Themes were classified under the dimensions of the Sittig and Singh framework.23 Team meetings were used to discuss and determine any modifications to the framework that were suggested by our qualitative data.
The research team analyzed interview transcripts using NVivo 12 (QSR International Pty Ltd., 2018).
RESULTS
The resultant sample contained 49 interviewees; 1 interview was excluded (as described above), producing an analyzed sample of 48 (Table 2).
The themes have been organized within the Sittig and Singh model,23 with each theme fitting logically within pre-established model dimensions.
Description of themes
We present each dimension as a subheading and describe the related key themes, supported by participant quotes. Each quotation is followed by a participant ID indicating the participant’s role (ie, end user [EU], informatics [INF], or operations [OPS]) and a label identifying whether the quote represents a challenge or a facilitator.
Hardware and software infrastructure
Participants were generally not worried about their organization’s ability to replicate any predictive analytics and integrate an algorithm into an EHR. They did note that local customization might be necessary given local differences in the structure of clinical content. For example, similar information may be stored in different fields or formats across systems.
If you’re talking about the universe of [EHR vendor] customers, obviously there are differing technical aptitudes and implementation agility. – INF01 [Challenge]
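One way to accommodate such local differences is a per-site mapping from a shared concept to the field that stores it in each system; the sketch below (Python) illustrates the idea, with all site and field names being hypothetical.

```python
# Hypothetical per-site field mapping: the same clinical concept may be
# stored under different field names or formats across systems.
SITE_FIELD_MAP = {
    "site_a": {"primary_diagnosis": "dx_code_1"},
    "site_b": {"primary_diagnosis": "icd10_primary"},
}

def get_primary_diagnosis(site: str, record: dict) -> str:
    """Look up the concept via the site-specific field name."""
    return record[SITE_FIELD_MAP[site]["primary_diagnosis"]]

print(get_primary_diagnosis("site_b", {"icd10_primary": "E11.9"}))  # -> E11.9
```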
Participants also described concerns related to how the clinical content would be utilized in the predictive algorithm to create meaningful, reliable knowledge.
There would be huge problems there because the [REDACTED] data is far in arrears … then the scores aren’t contemporary anymore. – INF04 [Challenge]
Clinical content
Anomalies in the data sources used to create the predictive algorithm—in particular, timeliness and data quality—were of great concern and a potential challenge to implementation. This mirrored the hardware/software concern related to how the predictive algorithm would utilize the clinical content.
If no one codes for diabetes, and no one codes for amputated leg, it’s a refresh [ie, the data may be overwritten]. So, the patient is no longer diabetic, or has grown back a leg. – OPS06 [Challenge]
External rules, regulations, and pressures
External rules and regulations presented both facilitators and challenges to using the predictive algorithm. External drivers predominantly pertained to payment structures. Participants in systems with more value-based contracts saw reimbursements as a facilitator, while those in fee-for-service dominated settings viewed the payment structure as a challenge to using the HNHC predictive algorithm.
But there’s certainly more pressure now and more focus on it now than there was five years ago … there are places that have 30, 40 percent of their dollars at risk. – OPS04 [Facilitator]
There’s not that much of a return on investment for us, from a financial point of view … there’s not that much that’s driving full adoption. – OPS04 [Challenge]
Multiple participants also mentioned that external rules drive potential responses to the information. One specific barrier was that eligibility criteria for programs such as care management or housing assistance were complex, driven by payers, grants, and other funding sources.
If … the funder is only going to give the resource to vulnerable patients for these preventable reasons, then why ask me to identify 100 patients, two of which are going to qualify? Just go ahead and find those two. – EU02 [Challenge]
Internal policies, procedures, and culture
There was also concern that the HNHC classifier would identify patients for whom there were no available resources.
Worst-case scenario would be you tell me I have a high-risk patient for whom I should be intervening on, but I’m not armed with any additional interventions to give …. Then it feels … bad to say, ‘Hey, they’re really sick.’ And you’re like, ‘I know that!’ – EU02 [Challenge]
Perceived resource constraints commonly related to social determinants of health, such as housing, food insecurity, or mental health needs.
If you screen for food insecurity and you go, ‘Great, my patients are hungry and they’re going to go to the ER for a sandwich’, unless you can give them a resource, we don’t want to ask.
Multiple participants described how stakeholders and organizations might use the information provided by the analytic model. Possible goals included identifying previously unknown subsets of at-risk patients, helping care managers prioritize patient lists, and helping organizations advocate for additional resources on the basis of their prevalence of HNHC patients.27
Even if we systematize it and take it out of that subjectivity … that would be a godsend. – INF01 [Facilitator]
I could easily see us going to that payer and saying, ‘Well, our risk model … shows your patient population is higher risk. We need to do more intervention, so we need more money.’ – OPS04 [Facilitator]
Challenges that could inhibit the organization’s use of a predictive algorithm included the possibility that the at-risk patients identified would be clinically obvious.
I know the medical conditions of my patients and how severe they are. So, having that in my face probably doesn’t really add that much. – EU02 [Challenge]
Participants also noted the large number of existing risk prediction algorithms and the challenge of differentiation.
What makes your tool actually any more accurate or unique compared to the 500 other vendors out there? – OPS09 [Challenge]
Workflow and communication
A strong theme was that the information should not create more work for providers. In addition, many of the appropriate responses to the HNHC classification (such as home or telephone visits, or referring to social services) were seen as potentially out of practice scope for these sorts of providers. Therefore, stakeholders suggested that the score should be integrated into team-based activities and used as a means for delegating tasks to appropriate staff.
They’re [primary care providers] looking for things that can offload them, not burden them more. – OPS05 [Facilitator]
Our participants reinforced lessons from other HIT implementations that champions in the clinical environment were crucial and would be important for instilling ownership over action within local departments.
So, there needs to be a clinician champion, particularly as you look within specific departments or institutes …. You can’t do this from a complete top down. – OPS10 [Facilitator]
Participants also mentioned that an implementation toolkit may be helpful as the algorithm expands to new sites to help new adopters make customization decisions.
I’ve learned … that this closing the loop is what makes the sale … sometimes, we’re handed a package with the implementation science done. – OPS10 [Facilitator]
Lastly, participants expressed that dissemination and implementation of the predictive algorithm would be best executed through face-to-face communication and leveraging clinical champions.
It does require pretty much face-to-face communication. Sending an e-mail that this tool is now available … won’t register. – OPS20 [Facilitator]
People
Most respondents thought that the primary recipients of HNHC information would be care managers or social workers. Most also thought the information should be easily available to all care providers to help with care coordination.
If everybody is in the loop about it then of course that would be for the best. – EU01 [Facilitator]
Several participants identified the patient as a stakeholder who should receive information to justify why they were receiving specific services. Physicians were also seen as playing a crucial role in endorsing interventions and introducing them to the patient.
The provider’s endorsement certainly helps with the care management …. There has to be a conversation with the patient around there is a need here and the care manager is going to help us … improve your health. – OPS19 [Facilitator]
Clear communication about the HNHC information to the patient could address a challenge, which was that a patient’s preferences regarding where to seek care represented a possible barrier to high-quality care.
If my patient is at home and chooses to go to the emergency room, how am I going to stop him from high utilizing? If I’ve made him aware that he can come here and see me … – EU04 [Challenge]
Human-computer interface (HCI)
Adoption would require easily interpretable information, linked to clear actions. The majority of participants agreed that a categorical score would better facilitate interpretability and actionability than a numeric score.
For the nurses to be able to kind of plan their care, I think it does help for it to be categorical. It makes it easier than trying to get a percentage or a number and then, having to reference something else. – EU09 [Facilitator]
What would be helpful is not just the identification of a patient but suggestions on what would be the most appropriate resources that the patient would need. – OPS20 [Facilitator]
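A hypothetical sketch of the categorical presentation participants preferred follows: the algorithm’s predicted probability is mapped to a small set of named tiers. The thresholds and labels here are illustrative assumptions; real cut points would be set locally.

```python
# Hypothetical mapping from a predicted probability to a categorical tier.
# Thresholds (0.30, 0.10) are arbitrary placeholders, not validated cut points.
def risk_tier(p: float) -> str:
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if p >= 0.30:
        return "High risk of becoming HNHC"
    if p >= 0.10:
        return "Moderate risk of becoming HNHC"
    return "Low risk of becoming HNHC"

print(risk_tier(0.42))  # -> High risk of becoming HNHC
```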
It was also seen as important to be able to begin the process of an intervention within the workflow of viewing the algorithm’s output. A few participants remarked that they would be open to an action occurring automatically based on the output of the algorithm.
Recommendations may be helpful as well as an easy, automatic referral to case management would be great. – OPS20 [Facilitator]
Participants also wanted the score to be easily accessible within their current workflow in the EHR.
Make sure that’s in their workflow. If you expect someone to go to a third-party system or to a website, you’ve lost. – INF02 [Facilitator]
Lastly, participants heavily emphasized the importance of transparency in building trust in the predictive algorithm. Specifically, they wanted high-level summaries of the included variables in the algorithm to help them understand why patients received a particular score.
One of the big questions that comes up is like how did you get to this number? And then typically ask for like the big details. If there’s detail on what went into it, for sure it’d be a confidence booster. – EU01 [Facilitator]
I think the best predictive models these days are black boxes … you can tell people what most important set of variables are, but not tell them how they’re actually … used. Because it’s too complicated. – EU04 [Facilitator]
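One simple way to provide such a summary, sketched below under the assumption of a linear (logistic) model, is to rank the variables pushing a given patient’s score upward by coefficient times predictor value. The feature names and numbers are illustrative, and this is only one of several possible attribution approaches, not the study’s implementation.

```python
# Hypothetical patient-level transparency summary for a fitted logistic model:
# rank contributions to the log-odds as coefficient x predictor value.
import numpy as np

feature_names = ["age (scaled)", "frailty", "serious mental illness",
                 "chronic pain", "multiple chronic conditions"]
coefs = np.array([0.8, 1.2, 0.9, 0.4, 1.1])  # illustrative fitted coefficients
patient = np.array([0.6, 1, 0, 1, 1])        # one patient's predictor values

contributions = coefs * patient
for i in np.argsort(contributions)[::-1][:3]:
    print(f"{feature_names[i]}: +{contributions[i]:.2f} to log-odds")
```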
Systems measurement and monitoring
Key metrics that could be tracked and improved with implementation of the predictive algorithm included patient outcomes and utilization-based metrics (eg, emergency department visits, unplanned readmission).
Obviously, the nirvana or the Holy Grail is better patient outcomes …. If the patients are doing better … that is … a good metric. Obviously, we would like to reduce cost. – OPS02 [Facilitator]
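As a hedged sketch of how such tracking might start, the snippet below compares a utilization metric before and after implementation; the records and field names are hypothetical placeholders, and a real evaluation would need an appropriate comparison design.

```python
# Illustrative monitoring of a utilization-based metric (ED visits per patient)
# before vs after implementation. Data and field names are placeholders.
from statistics import mean

def ed_visits_per_patient(records):
    """Mean number of emergency department visits per patient in a period."""
    return mean(r["ed_visits"] for r in records)

pre = [{"ed_visits": 3}, {"ed_visits": 1}, {"ed_visits": 4}]
post = [{"ed_visits": 2}, {"ed_visits": 1}, {"ed_visits": 2}]
print(f"ED visits/patient: pre={ed_visits_per_patient(pre):.2f}, "
      f"post={ed_visits_per_patient(post):.2f}")
```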
Differences by sites and stakeholder groups
As a post hoc analysis, we examined themes by healthcare system and by stakeholder group (operations, informatics, and end users) and did not encounter any contradictions in themes between groups, although in some cases emphasis differed. For example, all groups agreed that external rules and regulations were central to adoption of the predictive algorithm. However, in 2 of the 3 healthcare systems, a relatively high percentage of value-based reimbursement contracts was seen as a facilitator, whereas in the third system, which had fewer value-based contracts, the reimbursement structure was perceived as a barrier to prioritizing identification of HNHC patients. Additionally, participants from all locations agreed that local customizations may be necessary, given unique information infrastructures, and went on to describe the nuances of their particular systems. For example, 1 health system typically created dashboards to convey predictive risk algorithms, whereas another typically delivered predictive algorithms in the patient’s record via “hover-overs” or “best practice alerts” (a reminder function in the Epic EHR). Unsurprisingly, operational personnel were most likely to discuss external and intraorganizational drivers; informatics stakeholders described elements of the hardware/software infrastructure; and end users tended to focus on people, communication, and workflow.
DISCUSSION
Table 3 summarizes key takeaways related to sociotechnical considerations for implementing predictive algorithms and denotes the model dimensions that must be considered for each takeaway. Table 3 highlights Sittig and Singh’s concept that model dimensions are highly coupled and that considering how they interact is imperative to system success.23 We describe each key takeaway in detail below. Lastly, we discuss the implications of this qualitative study for our local implementation.
Table 3. Key takeaways and the Sittig and Singh model dimensions implicated by each

| Key takeaway | Hardware/software | Clinical content | External regulations | Internal organization | Workflow & comm. | People | HCI | System M&M |
|---|---|---|---|---|---|---|---|---|
| Take lessons from previous health information technology implementations | X | X | X | X | X | X | X | X |
| Ensure the institution has localized, compelling use cases | X | X | X | X | X | X | | |
| Transparency related to the analytics is key to trust and use | X | X | X | X | | | | |
| Recommendations for actions may be helpful, depending on context | X | X | X | X | X | X | | |
| Actions must be supported with resources | X | X | X | X | X | | | |
| Stakeholders desire a feedback loop including clinically meaningful outcomes | X | X | X | X | X | X | X | X |
| Carefully consider the role of the patient | X | X | X | X | X | X | | |
Abbreviation: HCI, Human-computer interface.
Summary of key takeaways
Take lessons from previous HIT implementations
Stakeholder perspectives on developing and implementing predictive algorithms to identify HNHC patients had many similarities with lessons from other forms of HIT but also included novel, context-specific concerns. We found that this implementation raised few concerns about technical issues such as hardware or software. Instead, alignment with sociotechnical dimensions of work was a top concern. Stakeholders wanted decision support based on the algorithm to be clear, actionable, and incorporated into current workflows. The support required to make the information actionable touched on content, organizational (external/institutional), infrastructural (hardware/software), care team, end user, and patient dimensions. Content pertained to clinical, information, and social issues. Organizational factors related predominantly to payment structures, which are generally beyond the healthcare organization’s control. Hardware and software infrastructure were not a major concern, but an implementation toolkit was seen as helpful to new adopters. People and workflow needed to be considered at the care team level, with physicians serving as secondary recipients and patients themselves receiving some information (probably indirectly via their physicians or care teams).
Ensure the institution has localized, compelling use cases
The purpose of the new predictive algorithm within each organization was an area of active discussion. This theme underscores the importance of remaining driven by a problem (eg, access, processes, resources) in creating tools that support cognitive work.28 This means resisting the urge to say, “We have the data; therefore, we should do something with it.” Our finding supports the broader lesson: start informatics innovations by identifying a problem, not the data.
Furthermore, other model dimensions must align to demonstrate a need for the implementation of predictive analytics. For example, 1 of the healthcare systems had more external financial pressures (ie, more value-based contracts) that made the utility of the HNHC predictive algorithm more compelling.
Transparency related to the analytics is key to trust and use
Transparency was considered a facilitator of trust in the information. End users required insight into the drivers of the proposed HNHC score, such as the weights attributed to contributing clinical and social variables. End users of a predictive algorithm usually are not informed about its inputs (ie, predictors and coefficients); instead, they are typically presented only with a score indicating risk. Our finding raises the possibility that end users should be involved in reviewing parts of the algorithm development and implementation process. End users may provide feedback about the clinical interpretation of the predictors, as well as data quality issues that might not be known to the developers. Such a process also gives developers an opportunity to explain why unexpected variables were included, thus improving the trust and credibility of the output. Previous studies have demonstrated the value of determining how a predictive algorithm’s output aligns with end users’ perceptions,29 especially in light of increasing awareness that purely data-driven algorithms may inadvertently perpetuate inequalities or bias.30
Recommendations for actions may be helpful, depending on context
Predictive analytics have demonstrated success in accurately identifying subsets of patients.1–4 However, stakeholders desired recommendations for interventions to prevent the patient from becoming HNHC, in addition to identification. Because HNHC status is condition-independent and can stem from many factors, the resultant interventions may not be as straightforward as those for clinically recognized diseases and conditions, such as sepsis. For condition-independent, socially situated risks such as HNHC, recommendations for interventions tailored to the patient’s specific risk factors may be necessary. However, the need for recommended actions will vary from case to case based on the goal of the predictive algorithm.
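A minimal sketch of such tailoring, assuming a simple lookup from risk drivers to candidate interventions, is shown below; the categories and suggested actions are illustrative assumptions only, not clinical recommendations from this study.

```python
# Hypothetical mapping from HNHC risk drivers to candidate interventions.
INTERVENTIONS = {
    "frailty": "home-based primary care referral",
    "serious mental illness": "behavioral health care coordination",
    "chronic pain": "multidisciplinary pain management program",
    "food insecurity": "community food-resource referral",
}

def suggest_interventions(drivers):
    """Return candidate interventions for a patient's identified risk drivers."""
    return [INTERVENTIONS[d] for d in drivers if d in INTERVENTIONS]

print(suggest_interventions(["frailty", "food insecurity"]))
```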
Actions must be supported with resources
Making the information from the HNHC algorithm actionable, however, was perceived as a challenge because appropriate interventions appeared likely to require resources that were not always within the control of the stakeholders or even their healthcare organizations. This concern is particularly important for risks that involve both social and clinical factors, such as HNHC. Without providing sufficient resources to intervene with the identified patients, the predictive algorithm becomes frustrating to the user and can breed distrust.
Stakeholders desire a feedback loop including clinically meaningful outcomes
Predictive analytics were also seen as more credible to end users if their impact on utilization and health outcomes was demonstrated as part of implementation and tracked over time. Suggestions from this study included demonstrating that the HNHC algorithm could reduce (or had reduced) emergency department visits, length of stay, and other health system outcomes. This finding suggests that simply reporting the algorithm’s C-statistic or Brier score to show its discrimination and accuracy is unlikely to be convincing to end users. Although it may be challenging to link the algorithm to actions and actions to outcomes, feedback from our stakeholders indicates that it is important to have conversations early to identify metrics that indicate success and to ensure that these metrics can be accurately measured.
Carefully consider the role of the patient
Multiple stakeholders questioned whether patients identified as HNHC would be made aware of their status and how this may impact the flow of their care. For example, care managers stressed the importance of having the primary care provider’s endorsement. A patient may not respond well to a care manager they have never met calling to offer services because the patient is at risk of becoming HNHC in the future. First, patients may be sensitive to cost impacting decisions regarding their care. Second, the idea of future risk may be hard for the patient to conceptualize. The overarching goal of predictive analytics in medicine is to improve the quality, safety, and efficiency of patient care. Therefore, considering how the patient identified by a predictive algorithm factors into the process is imperative.
Implications for our local implementation
Our institutions are early in the process of determining how the HNHC algorithm may be implemented, but the findings of the qualitative study have already had important implications. Specifically, the algorithm development team is working with an accountable care organization (ACO) to implement the algorithm. The ACO payment model involves financial incentives for ACOs that lower growth in healthcare costs while meeting performance standards on quality.31 Therefore, ACOs have a strong business case for identifying patients at risk of becoming HNHC and proactively intervening. ACOs also invest in resources that may be needed to better serve HNHC patients, such as care managers and social workers.
Limitations
Our study focused on a specific predictive algorithm, targeted at proactively identifying HNHC patients, which may limit generalizability to other predictive algorithms. We did, however, highlight how many of our findings align with lessons learned from the implementation of HIT more broadly. Our findings are also novel in that we included multiple stakeholder groups (not only end users) from 3 different healthcare systems.
Our sample is also limited in that all 3 healthcare systems included are large, academic, not-for-profit systems. Therefore, the generalizability of our findings may be limited to these types of institutions. Participants were located in the northeastern and southern United States, which may miss nuanced differences from other geographic regions.
Lastly, our goal was to reach saturation of qualitative content across all sites and participant groups, meaning we did not assess our data for saturation within each stakeholder group and healthcare system. We describe the differences found between systems and stakeholder groups, but our ability to clearly delineate all differences is limited due to the sampling strategy.
As with any qualitative study, findings are customized to the stakeholders and settings included, and should be interpreted as hypothesis-generating rather than hypothesis-confirming.
CONCLUSIONS
Predictive analytics have the promise to advance the quality, safety, and equity of healthcare delivery. However, to deliver on this promise, implementers will need to: identify optimal ways to deliver the results (via clinical decision support, reports and dashboards, or other approaches); understand clinical and business workflows in order to select the best front-line and secondary recipients of the information; and determine when and in what format to deliver it. Some of this work can be informed by frameworks developed during the implementation of EHRs and clinical decision support. Our diverse groups of stakeholders suggested that technical know-how and computing infrastructure were no longer a major challenge, but challenges remain in aligning the case for predictive analytic models with internal and external drivers of the healthcare system, as well as in integrating the information with current people and processes. Key takeaways for implementing predictive analytics included: remaining driven by the clinical application (not the availability of data), establishing trust through transparency and evidence of effectiveness, and ensuring the analytics are tied to interventions that are clear and easily implemented.
Future work should assess how our findings resonate with predictive algorithms in other areas of application. We should also explore how end users may be better integrated into the algorithm development process to improve trust and efficiency of algorithm integration into clinical workflows.
FUNDING
This work was funded by grant HSB-1604-35187 from the Patient-Centered Outcomes Research Institute to RK (principal investigator).
AUTHOR CONTRIBUTIONS
RK secured funding for this work with input from ELA and JSA. NCB, ELA, and JSA designed the interview guide and study. NCB collected the data with assistance from LTD and KB, under the advisement of ELA and JSA. NCB, LTD, AT, and JSA participated in the data analysis and interpretation of findings. NCB drafted the manuscript, and all authors contributed to refining all sections and critically editing the paper.
CONFLICT OF INTEREST STATEMENT
None declared.
REFERENCES
- 1. Rajkomar A, Oren E, Chen K, et al. Scalable and accurate deep learning with electronic health records. NPJ Digit Med 2018; 1: 18.
- 2. Levin S, Toerper M, Hamrock E, et al. Machine-learning-based electronic triage more accurately differentiates patients with respect to clinical outcomes compared with the emergency severity index. Ann Emerg Med 2018; 71 (5): 565–74.e2.
- 3. Henry KE, Hager DN, Pronovost PJ, Saria S. A targeted real-time early warning score (TREWScore) for septic shock. Sci Transl Med 2015; 7 (299): 299ra122.
- 4. Fletcher GS, Aaronson BA, White AA, Julka R. Effect of a real-time electronic dashboard on a rapid response system. J Med Syst 2018; 42 (1): 1–10.
- 5. Bates DW, Saria S, Ohno-Machado L, Shah A, Escobar G. Big data in health care: using analytics to identify and manage high-risk and high-cost patients. Health Aff (Millwood) 2014; 33 (7): 1123–31.
- 6. Figueroa JF, Joynt Maddox KE, Beaulieu N, Wild RC, Jha AK. Concentration of potentially preventable spending among high-cost Medicare subpopulations: an observational study. Ann Intern Med 2017; 167 (10): 706–13.
- 7. Hayes SL, Salzberg CA, McCarthy D. High-need, high-cost patients: who are they and how do they use health care? A population-based comparison of demographics, health care use, and expenditures. New York, NY: The Commonwealth Fund; 2016. Accessed January 8, 2019.
- 8. Mitchell E. Statistical Brief #497: concentration of health expenditures in the U.S. civilian noninstitutionalized population, 2014. Agency for Healthcare Research and Quality, Medical Expenditure Panel Survey. https://meps.ahrq.gov/data_files/publications/st497/stat497.shtml. Accessed September 10, 2019.
- 9. Blumenthal D, Chernof B, Fulmer T, Lumpkin J, Selberg J. Caring for high-need, high-cost patients—an urgent priority. N Engl J Med 2016; 375 (10): 909–11.
- 10. Zhang Y, Grinspan Z, Khullar D, et al. Developing an actionable patient taxonomy to understand and characterize high-cost Medicare patients. Healthc (Amst) 2020: 100406. doi: 10.1016/j.hjdsi.2019.100406. [Epub ahead of print]
- 11. Goins S, Ledneva T, Conroy M. New York State all payer potentially preventable emergency room visits 2011–2012. https://www.health.ny.gov/statistics/sparcs/sb/docs/sb4.pdf. Accessed August 12, 2019.
- 12. Agency for Healthcare Research and Quality. Chartbook on care coordination: preventable emergency department visits. https://www.ahrq.gov/research/findings/nhqrdr/chartbooks/carecoordination/measure2.html. Accessed August 12, 2019.
- 13. Popejoy LL, Stetzer F, Hicks L, et al. Comparing aging in place to home health care: impact of nurse care coordination on utilization and costs. Nurs Econ 2015; 33 (6): 306–13.
- 14. Biernacki PJ, Champagne MT, Peng S, Maizel DR, Turner BS. Transformation of care: integrating the registered nurse care coordinator into the patient-centered medical home. Popul Health Manag 2015; 18 (5): 330–6.
- 15. Figueroa JF, Jha AK. Approach for achieving effective care for high-need patients. JAMA Intern Med 2018; 178 (6): 845–6.
- 16. Long P, Abrams M, Milstein A. Effective Care for High-Need Patients. Washington, DC: National Academies Press; 2017.
- 17. Shah NH, Milstein A, Bagley SC. Making machine learning models clinically useful. JAMA 2019; 322 (14): 1351–2.
- 18. Osheroff JA, Teich J, Levick D, et al. Improving Outcomes with Clinical Decision Support: An Implementer's Guide. Chicago, IL: HIMSS; 2012.
- 19. Levy-Fix G, Kuperman GJ, Elhadad N. Machine learning and visualization in clinical decision support: current state and future directions. arXiv preprint arXiv:1906.02664; 2019.
- 20. Jeffery AD, Novak LL, Kennedy B, Dietrich MS, Mion LC. Participatory design of probability-based decision support tools for in-hospital nurses. J Am Med Inform Assoc 2017; 24 (6): 1102–10.
- 21. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood) 2014; 33 (7): 1148–54.
- 22. Noy C. Sampling knowledge: the hermeneutics of snowball sampling in qualitative research. Int J Soc Res Methodol 2008; 11 (4): 327–44.
- 23. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010; 19 (Suppl 3): i68–74.
- 24. Strauss A, Corbin J. Basics of Qualitative Research. Thousand Oaks, CA: Sage; 2015.
- 25. Saldana J. The Coding Manual for Qualitative Researchers. Los Angeles, CA: SAGE Publications; 2015.
- 26. Miles MB, Huberman AM, Saldana J. Qualitative Data Analysis: A Methods Sourcebook. 3rd ed. Thousand Oaks, CA: Sage; 2014.
- 27. Rogers EM. Diffusion of Innovations. New York, NY: Simon and Schuster; 2010.
- 28. Woods D, Roth EM. Cognitive engineering: human problem solving with tools. Hum Factors 1988; 30 (4): 415–30.
- 29. Benda NC, Blumenthal HJ, Hettinger Z, et al. Human factors design in the clinical environment: development and assessment of an interface for visualizing emergency medicine clinician workload. IISE Trans Occup Ergon Hum Factors 2018; 6 (3–4): 1–28.
- 30. Khullar D. A.I. could worsen health disparities. New York Times. February 2, 2019: A23.
- 31. Centers for Medicare and Medicaid Services. The ACO investment model. 2020. https://innovation.cms.gov/initiatives/ACO-Investment-Model/. Accessed January 16, 2020.