Abstract
Objectives
Assess understanding of impactibility modelling definitions, benefits, challenges and approaches.
Design
Qualitative assessment.
Setting
Two workshops were developed. Workshop 1 considered impactibility definitions and terminology through moderated open discussion, along with the potential pros and cons and the factors that would be best to assess. In workshop 2, participants appraised five approaches to impactibility modelling identified in the literature.
Participants
National Health Service (NHS) analysts, policy-makers, academics and members of non-governmental think tank organisations identified through existing networks and via a general announcement on social media. Interested participants could enrol after signing informed consent.
Outcome measures
Descriptive assessment of responses to gain understanding of the concept of impactibility (defining impactibility analysis), the benefits and challenges of using this type of modelling and the most relevant approach to building an impactibility model for the NHS.
Results
37 people attended one or two workshops in small groups (maximum 10 participants): 21 attended both workshops, 6 only workshop 1 and 10 only workshop 2. Discussions in workshop 1 showed that impactibility modelling is not clearly understood, with it generally viewed as a cross-sectional way to identify patients rather than as an iterative process of follow-up. Recurrent factors arising from workshop 2 were the shortage of benchmarks; incomplete access to/recording of primary care data and social factors (which were seen as important to understanding amenability to treatment); the need for outcome/action suggestions as well as the data; and the risk of increasing healthcare inequality.
Conclusions
Understanding of impactibility modelling was poor among our workshop attendees, but it is an emerging concept for which few studies have been published. Implementation would require formal planning and training and should be performed by groups with expertise in the procurement and handling of the most relevant health-related real-world data.
Keywords: public health, health policy, organisational development
STRENGTHS AND LIMITATIONS OF THIS STUDY.
The number of participants was small, but a wide range of stakeholders were represented.
The area of research is quite new, and little information was available to help participants prepare for workshops.
During workshops all comments were heard and/or seen by all participants, allowing immediate review, feedback and discussion.
Introduction
Of the modifiable contributors to population health, medical care addresses only 10%‒20%, while individual health behaviours contribute around 30%, socioeconomic factors roughly 40% and the physical environment around 10%.1 Understanding where, when and to whom the right care should be delivered therefore requires the organisation of large amounts of information. In its 2019 Long Term Plan, National Health Service (NHS) England proposed new policies that will create closer networks between primary medical and community health services, provide urgent community response and recovery support, and strengthen links with care homes.2 This approach is intended to promote the so-called triple aim: improving population health, quality of care and cost control by minimising the number and duration of hospital stays. The introduction of integrated care services aims to provide access to care in the most clinically appropriate settings, including community care and other out-of-hospital services, such as mental health and care home services, while reducing the burden on emergency care services.
Traditionally, the identification of patients who will benefit from treatments has relied on risk stratification. This started with clinicians considering their own patients, but for larger populations they do not have sufficient information to predict the need and capacity for unplanned care.3 With increasing electronic health data capture, data modelling at the population level has become possible. Risk stratification models aim to predict which individuals are at risk of an adverse outcome in whom proactive intervention might mitigate that risk,4 or, for population health planning, to assess the distribution, health needs and experiences of different cohorts of patients at risk. However, this type of modelling is generally based on current or previous healthcare activity and uses a limited number of variables.5–8 The outputs are therefore narrow predictions (eg, risk of hospitalisation in patients with a specific disease), and only those in the highest risk group or groups tend to be offered preventive care. The outputs do not, however, indicate which people are most likely to respond to the care offered.9–12 This is a key reason why risk stratification has not consistently led to improvements in health outcomes across the population.13
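As a minimal sketch of the conventional approach described above, the example below fits a regression on a small number of prior-activity variables and flags only the top-risk stratum for preventive care. The variables, synthetic data and 95th-centile cut-off are illustrative assumptions rather than any specific NHS model.

```python
# Minimal sketch of conventional risk stratification: a regression on a few
# prior-activity variables, with a fixed threshold selecting the top stratum.
# All variables and data are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.normal(65, 12, n),   # age (illustrative)
    rng.poisson(2, n),       # comorbidity count (illustrative)
    rng.poisson(0.5, n),     # emergency admissions in prior year (illustrative)
])
# Synthetic outcome: emergency admission within 12 months (for demonstration only).
logit = -4 + 0.03 * X[:, 0] + 0.4 * X[:, 1] + 0.8 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Conventional stratification: offer preventive care only to the top 5% by predicted risk.
threshold = np.quantile(risk, 0.95)
high_risk = risk >= threshold
print(f"Patients flagged for preventive care: {high_risk.sum()} of {n}")
```

The contrast with impactibility modelling is that nothing in this output indicates who within the flagged stratum would actually respond to the care offered.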
Although no algorithm will be entirely accurate, risk stratification can return high proportions of true and false positives and negatives if thresholds and sample sizes are not optimised to keep misclassification to a minimum.14 Additionally, findings are potentially subject to regression to the mean: patients who are assigned to the extreme strata by chance and are likely to move to a lower-risk stratum without any intervention cannot be differentiated from those who require care. By contrast, assessment of impactibility conceptually aims to refine these outputs by identifying patients within a stratum for whom the care being offered is likely to have the most impact,15–17 in other words, deploying ‘the right care at the right place and at the right time’.2 It considers whether existing care is adequate (ie, whether it is sufficient and without redundancy or whether changes need to be made to maximise effectiveness), accessible and being given to the people who will benefit the most. High impactibility would therefore be achieved by prioritising patients predicted to be most amenable to care over those unlikely to respond (eg, those likely to be admitted to hospital despite preventive care or those who overuse services without beneficial outcomes) and by maximising adherence to treatment.15 A hypothetical example in figure 1 illustrates how impactibility analysis can be used to adjust the allocation of healthcare resources towards the people most likely to benefit and the innovations to which they are most likely to be amenable. In the current care of patients with chronic obstructive pulmonary disease, substantial resources and costs are allocated to interventions that have little effect on population health because they treat more severe disease. Impactibility modelling suggests that greater population benefit will be achieved if more people can be helped to stop smoking and to take up and/or access pulmonary rehabilitation. In integrated care services, these might involve primary care and community services. Furthermore, these changes are predicted to lessen the need for expensive later treatments that have less effect on long-term well-being, lowering the reliance on hospital care and the overall budget needed.
Figure 1.
Earlier intervention driven by impactibility analysis can improve the triple aim: population health, quality of care and cost control by minimising the number and duration of hospital stays. (A) In a notional view of the current approach to managing chronic obstructive pulmonary disease, the height of each triangle indicates the degree of contribution each intervention makes to population health, while the width indicates the cost contribution. As use of more complex treatments and emergency and hospital care is expensive, most of the cost yields only a quarter of the population health benefit. (B) If impactibility modelling is applied to identify individuals who are most likely to benefit and the innovations to which they are most likely to be amenable, resources can be reallocated to reduce hospital burden and spending, increase the impact made on population health and improve people’s experience of care.
The Health Economics Unit helps to design impactibility models for NHS England. As part of that work, it has been asked to create a model that has practical applications and is based on a well-researched and transparent methodology. As an initial step, Orlowski et al 18 performed a systematic literature review to assess how impactibility modelling is being used in population health management. Reports were limited in number and difficult to compare, but several types of approach were identified, as well as some factors that need further study. The most studied was the propensity to succeed approach, in which the model is intended to identify traits associated with good engagement with and/or outcomes from particular preventive interventions. Risk stratification was enhanced by including information such as sociodemographic factors, history of medication adherence and engagement with health services (eg, enrolment in programmes when invited). Other elements that might improve the allocation of care include disease type (eg, ambulatory care-sensitive conditions) or analysis of gaps in detection and care.
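As a hypothetical sketch of how the propensity to succeed idea could sit on top of an existing risk score, the example below weights a conventional risk estimate by a crude engagement score built from the kinds of variables mentioned above (medication adherence, previous programme uptake). The fields, weights and scoring are assumptions for illustration only, not a published model.

```python
# Hypothetical propensity-to-succeed overlay on an existing risk score.
# All field names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Patient:
    pid: str
    admission_risk: float          # output of an existing risk stratification model (0-1)
    medication_adherence: float    # eg, proportion of days covered (0-1)
    prior_programme_uptake: float  # enrolled when previously invited (0-1)

def propensity_to_succeed(p: Patient) -> float:
    """Crude engagement/benefit score from historical behaviour (assumed weights)."""
    return 0.6 * p.medication_adherence + 0.4 * p.prior_programme_uptake

def impactibility_priority(p: Patient) -> float:
    """Risk weighted by likelihood of engaging with and benefiting from care."""
    return p.admission_risk * propensity_to_succeed(p)

cohort = [
    Patient("A", admission_risk=0.42, medication_adherence=0.9, prior_programme_uptake=1.0),
    Patient("B", admission_risk=0.48, medication_adherence=0.2, prior_programme_uptake=0.0),
    Patient("C", admission_risk=0.18, medication_adherence=0.8, prior_programme_uptake=1.0),
]
# Patient B is highest risk but scores lowest on propensity to succeed, so this
# ranking would deprioritise them -- the inequality concern raised in workshop 2.
for p in sorted(cohort, key=impactibility_priority, reverse=True):
    print(p.pid, round(impactibility_priority(p), 3))
```

A score built this way risks deprioritising people with poor past engagement, which is one reason participants later flagged inequality as a concern for this approach.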
To complement the literature review, we organised a series of workshops in which participants discussed definitions and benefits of and challenges and approaches to impactibility modelling to inform model design. The aim was to explore what NHS analysts, policy-makers, academics and members of non-governmental think tank organisations understand impactibility analysis to be and the ways in which it might be applied. Here, we summarise the responses and findings and consider how impactibility might be implemented routinely in NHS care planning.
Methods
Workshop formats
The researchers were members of the Health Economics Unit, part of the NHS Midlands and Lancashire Commissioning Support Unit, which specialises in health economics and data analysis (including real-world data) to provide evidence bases for health system and industry decision-making, health service design and potential impact on patients. The workshops were performed as part of the unit’s work towards designing impactibility models for NHS England. Potential participants were identified through existing networks and via a general announcement on social media. People who expressed an interest were sent information and a consent form, and the final set of participants represented varied stakeholders (figure 2).
Figure 2.
Attendance of participants in workshops 1 and 2.
Two workshops were developed. Each was held multiple times so that small numbers of people could attend. Workshop 1 considered impactibility definitions and terminology through moderated open discussion. Several weeks before the workshops, participants were emailed information packs and the systematic literature review report by Orlowski et al 18 so that they could familiarise themselves with the terminology, definitions and approaches in use. Participants’ comments and questions were recorded in writing during the workshops by the moderators (RF and HH) and were collated for interpretation. After the workshops, participants could provide further comments via online surveys. The workshop design could be iteratively altered based on experience to improve productivity.
In workshop 2, themes were discussed based on a framework of five options for modelling identified in the systematic review by Orlowski et al:18 health conditions amenable to preventive care; health needs/gaps analysis; propensity to succeed; behavioural response models; and assessment by healthcare professionals plus modelling (table 1).14 17 19–40 Participants were asked to post ideas and comments on data quality, evidence bases, ease of use by the end user, practicality, ethics and cost, and to arrange feedback under the theme or themes they felt most appropriate. A collaborative online Miro board (Miro, Amsterdam, Netherlands/San Francisco, California, USA) was used to collect comments and questions and to review and comment on others’ posts. Notes of verbal discussions were also taken by the workshop moderators.
Table 1.
Types of impactibility model
| Approach | Benefits | Limitations |
| --- | --- | --- |
| Health conditions amenable to preventive care | | |
| Health needs/gap analysis | | |
| Propensity to succeed models | | |
| Behavioural response models | | |
| HCP’s assessment of an individual’s ‘likelihood to benefit’ | | |

HCP, healthcare professional.
All feedback collected during the workshops was reviewed by two authors (AO and RA) to assess fit with the themes and to reveal the prevailing views, considerations and topics of interest.
Owing to COVID-19 restrictions, most of the workshops were performed as online group meetings. All workshops, except one that had one participant, were attended by groups of 4‒10 people. Overall, 37 people attended, among whom 21 attended both workshops 1 and 2, 6 attended only workshop 1 and 10 attended only workshop 2 (figure 3).
Figure 3.
Stakeholders represented in workshops. NHS, National Health Service; MLCSU, Midlands and Lancashire Commissioning Support Unit; CCG, Clinical Commissioning Group; NEL, North East London; UCL, University College London; CSU, Commissioning Support Unit; PHE, Public Health England.
Data analysis
The primary aim of this study was to investigate understanding and perceptions about impactibility analysis and its applications in healthcare. As this was exploratory research and all data were qualitative, it was deemed appropriate to present the findings descriptively.
Patient and public involvement
We did not involve patients or the public in the design, conduct, participant recruitment, reporting or dissemination plans of our research, as this was an exploratory study of how healthcare professionals understood impactibility modelling and owing to the timing of the study during the COVID-19 pandemic. Nevertheless, we drew on studies in the literature that reflect patients’ priorities, experiences and preferences and that helped to create the themes for the workshops.
Results
Defining impactibility analysis
It was clear from the workshops that impactibility as a modelling concept is not clearly understood. One participant raised its use by actuaries in the USA to calculate insurance premiums. In that situation, actuaries use impactibility analysis to select which conditions and patients to exclude from insurance cover, and attendees were unsure how such modelling could be used in an NHS environment. However, the key difference for the NHS is that, rather than excluding patients, impactibility analysis would be used to identify people to include in care management programmes.
Another important consideration was that relevant data for impactibility might become available only when a patient enters the health system with symptoms or a disease, leading participants to question whether opportunities to make a difference would be missed. This problem is known to arise with risk stratification, as (often arbitrary) decision thresholds have to be met to trigger the next ‘action’ (eg, an electronic record alert or change to care), but patients close to the thresholds are deemed ineligible.14 Additionally, the specificity of thresholds to a dataset, population or outcome can limit the opportunities for identification of patients at risk.14 Impactibility modelling would aim to use well-calibrated models that incorporate a comprehensive set of predictors, including continuous variables, to provide whole-population background data, such as socioeconomics41 and individual well-being,42 while allowing the application of flexible (personalised) thresholds14 against which amenability may be judged at different time points. Where data are not available, it will be necessary to work with proxies, but care gap analysis can provide useful information to formulate these.15
Impactibility analysis was generally viewed as a cross-sectional method to identify patients. However, this type of modelling is better considered as an iterative process to monitor changes in the population and how they affect amenability (eg, due to changes in circumstances, outcomes or response), as well as to make use of new information/data. For example, Mattie et al 19 designed a two-part impactibility model that compared costs between two sets of patients, one using and one not using a digital health platform. Machine learning models were then trained to categorise new patients as impactable versus not impactable. The initial sensitivity was 0.65‒0.77. The authors expected that as the information obtained expanded, the parameters of the model could be reviewed and honed to improve accuracy in future assessments.
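The two-stage pattern can be sketched as follows. This is a schematic illustration of the general approach on synthetic data, not a reproduction of Mattie et al’s model; the feature set, labelling rule and choice of classifier are assumptions.

```python
# Schematic two-stage 'impactable vs not impactable' pattern on synthetic data:
# stage 1 derives an impactability label (here synthetic; in practice from
# observed cost/outcome differences), stage 2 trains a classifier to predict it.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4_000
features = rng.normal(size=(n, 6))               # illustrative patient features

# Stage 1 (assumed): label patients 'impactable' based on cost differences
# between intervention users and matched non-users; synthetic here.
coef = np.array([0.8, -0.5, 0.3, 0.0, 0.6, -0.2])
impactable = rng.binomial(1, 1 / (1 + np.exp(-(features @ coef))))

# Stage 2: train a classifier to predict impactability for new patients.
X_train, X_test, y_train, y_test = train_test_split(
    features, impactable, test_size=0.3, random_state=1)
clf = GradientBoostingClassifier().fit(X_train, y_train)

# Sensitivity (recall for the impactable class) is the metric reported above.
sensitivity = recall_score(y_test, clf.predict(X_test))
print(f"Sensitivity on held-out patients: {sensitivity:.2f}")
```

In practice, the stage 1 label would come from observed cost or outcome differences rather than a synthetic rule, and the classifier would be re-fitted as new outcome data accumulate, consistent with the iterative view described above.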
It became clear that to support shared understanding, it will be crucial to clarify and standardise the language used to explain impactibility as a concept. We suggest that the definition provided by Geraint Lewis be adopted: ‘impactibility models…aim to identify the subset of at-risk patients for whom preventive care is expected to be successful’.15
Benefits of impactibility
Given that the use of impactibility analysis is poorly understood, it was perhaps not surprising that few benefits were identified by participants. However, the opportunity to reduce inequality was a repeated consideration. Indeed, this is thought to be one of the key benefits of using impactibility models. Multiple factors drive poor outcomes and inequalities, some of which may be controlled by the patient but many of which are beyond their control,2 and participants expressed that impactibility analysis could allow some of these complexities to be ‘unpacked’.
Challenges of building an impactibility model for the NHS
Timeliness of data, quality of data and access to wider sociological information beyond commonly available items, such as demographics and comorbidities, were all raised as relevant challenges. Participants also felt that the need to show a return on investment could be difficult to overcome. A conflict arises due to the juxtaposition of the long-term view needed to make inroads into disease prevention/optimised care (5‒10 years) and short-term budget cycles (1‒2 years). This tension will be eased only by system buy-in, although it might be ameliorated to some degree by the ability to create localised plans.
Finally, participants were wary of biases (eg, measurement errors and sampling biases). All algorithms carry biases because the variables included are frequently chosen on the basis of clinical expertise and regression modelling to decide the next steps. This approach means that not all variables of interest might be included, and the degree to which decision thresholds reflect physician behaviour rather than faithful representations of patient physiology is not clear.43 Additionally, no dataset is perfect. Therefore, to maximise algorithm performance, initial applications should involve well-curated data, followed by validation on real-world data to assess whether the variables used are likely to improve or negatively affect decision-making.15 36 39
Approaches to modelling
Some examples of comments on the different approaches to modelling are provided in table 2. Using health conditions amenable to preventive care was popular because registries already exist for some diseases and prospective hospital data are available. However, access to data from primary care and on social factors potentially poses more of a problem. An issue raised was the number of conditions that might be considered within the model and whether that might make applying the findings overwhelming. Testing the proof of concept with individual disease models was suggested.
Table 2.
Sample comments from workshop 2, by modelling approach
Health conditions amenable to preventive care:
- ‘Practical, as can link to existing segments and can use existing classification algorithms by assigning avoidable risk score/probabilities [and be] deployed as an open-source model to be ‘tailored’ and ‘enhanced’ with wider system data’
- ‘The sensitivity of common conditions (particularly metabolic syndrome, type 2 diabetes, etc) is well studied’
- ‘International evidence and data in well-managed disease registries are available [although] this can and should be done with data that is routinely processed today’
- ‘If using multiple conditions in the model, I think we would struggle to have something that is detailed enough to inform decision-making without overwhelming the user’
- ‘Sampling methods would need to be used, as certain conditions are specifically correlated to certain ethnicities, certain geographies, etc’

Health needs/gaps analysis:
- ‘It is practical, but tends to be more qualitative in approach, with metrics defined from more of an inclusion into a business intelligence tool’
- ‘Can see this being more useful if the ‘model’ was a guided approach to finding gaps rather than giving an output’
- ‘limited benchmark data’
- ‘We haven't talked about identifying social context factors—for example, research shows that people living alone, people living with somebody with multimorbidity, [and] children with parents with mental ill health, all have significantly higher health care use than those who do not’
- ‘Data is often scarcer and lower quality for the highest-need populations (recent refugees, no-fixed-abode/homeless individuals, etc), making health-gap quantification challenging’

Propensity to succeed:
- ‘Could access to technology be seen as an enabler?’
- ‘I think that PPIE is really key. They will have a much better idea of what ‘patient characteristics’ or context will affect a patient’s impactibility, as well as how best to improve these for patients. I think it would be a very big missed opportunity to not involve patients in determining an impactibility model’
- ‘Potential to widen inequalities as people might be a product of their current system experience’
- ‘Evidence primarily derived from non-universal health systems that can cherry pick patient cohort’
- ‘Quite easy to group then target, and this group will also then be likely to engage (again, though, health inequalities)’
- ‘[Need] the right interpreters of the models and approaches. Who can engage with the tools and the clients and help make sense of them? Do we have enough of these people? Probably.’

Behavioural response models:
- ‘Could see it as something leading user though inputs on who is their population, and the output is these are your hardest to reach segments/potential conditions to target, signposting to evidence base of how’
- ‘Is there an opportunity to bring in social media data?’
- ‘Could be seen as judgemental’
- ‘Potential risks for exclusion?’
- ‘Actually, might be useful just as a simple toolkit to know who will respond most to what’
- ‘Issues may arise as people may be denied access to treatments based on behavioural traits.’
- ‘…this is challenging, but seems a useful approach’

HCP assessment plus modelling:
- ‘I think currently there are no readily available data that could replace a clinician’s knowledge and judgment on ‘impactibility’’
- ‘Even with improvements in data quality, individuals are always going to have needs and preferences which will require clinical judgement to decide on best courses of action.’
- ‘Would require a lot of training/support to ensure standardisation of responses/assessments’
- ‘This is subjective and potentially vulnerable to bias’
- ‘Cheap, as this could be part of assessment process [but] cost to staff time could be considered.’
- ‘Subjectivity—access to treatment should not be determined by one person’s judgment.’
- ‘[Risk of] incentivisation of treatment decisions for clinicians—puts them in a difficult position’
- ‘Subjective judgements have little protection from unfairness or inconsistency. If not done systematically and algorithmically, this option could lead to very different treatment of patients in different areas’

No change:
- ‘No implementation or development costs’
- ‘Localities will continue to iterate their existing models [anyway] rather than adopt a new one’
- ‘Ease of use probably a close proxy for ‘felt need’. End users will tolerate ‘difficult to use’ if they see the gain as great and vice versa. So, where does ‘impactibility model’ rank for imagined end users, relative to other, similar tools/approaches at their disposal for improving population health? I really don’t know…and guess it depends entirely on their sense of how far current approaches are working/time and resource available/willingness to ‘follow the data’, etc. Apply this, ‘no model’ probably wins! Followed by ‘clinician judgement’’
- ‘Arguably, more to lose for not doing this as the system is not optimised in its current format. It might be difficult, but should we not try do better?’
- ‘Data quality/capture won’t improve until used—no model not an option’
HCP, healthcare professional; PPIE, patient and public involvement and engagement.
Health needs/gap analysis was viewed as an unwieldy option because there could be many underlying reasons for needing healthcare, and it was uncertain whether documentation would be accurate enough to provide robust data. The issue of benchmarks was also raised. These are available for some disorders (eg, through the Quality and Outcomes Framework, a pay-for-performance scheme in primary care in the NHS) but not all, and the quality of data varies. Additionally, there was a concern that variation between groups and regions could be too great to make findings widely applicable.
The propensity to succeed approach seemed to raise concerns because it was felt that it could be affected by bias (eg, one participant suggested racial bias, stating ‘lots of the literature has a bias to white participants [raising the] risk that the assumptions in the model are, therefore, incorrect for other communities which are often already experiencing inequality’). Increased inequality was another concern, as several participants questioned whether patients showing less propensity to succeed would still receive appropriate care. In contrast to these responses, the literature review indicated that this was a promising approach if broad information, such as sociodemographic factors, medication adherence and previous programme engagement, could be included.18
Behavioural response modelling was generally well received, and participants felt that this could prove to be the most relevant tool. However, similarly to propensity to succeed modelling, some concerns were expressed about how behavioural analysis might exclude people if they were predicted not to respond to treatments.
The fifth approach, healthcare professionals’ assessment of an individual’s likelihood to benefit, was generally viewed as an add-on that could act as a filter for other model findings. Clear guidelines on what factors to assess, how to translate findings into objective measures and a potentially high time burden (and thereby high cost) were concerns.
Generally, participants supported a change from the current situation. Some concern was raised about the quality of data, but comments suggested that there was willingness to try something new. The most recurring issues were the shortage of benchmarks; incomplete access to/recording of primary care data and social factors (which were seen as important to understanding amenability to treatment); the need for outcome/action suggestions as well as providing the data; and the risk of increasing inequality by the introduction or perpetuation of biases.
Discussion
Impactibility analysis is a developing field that so far remains mostly conceptual, with few practical studies having been done on application and implementation.18 It is not surprising, therefore, that participants in our study did not clearly understand the concept. Although there was positivity towards the idea of change, concerns were raised particularly about health inequality and the need for new interventions and improved data quality. The theory behind impactibility modelling, though, is that health inequality could be reduced by helping resources to be accessed efficiently and earlier. It is also possible that many current treatment strategies will be relevant with better allocation. Therefore, an iterative process that uses the expanding data gathered from patients who move in and out of strata over time could potentially lead to improvements without major change or upheaval.
Understanding whether a person is likely to respond to a treatment due to their physical or behavioural characteristics might reduce the risk of resources being wasted on blanket interventions offered to patients irrespective of whether they will engage with or complete them. If care gaps can be identified, a weighted allocation of evidenced-based solutions could potentially be used to close them. Therefore, not only would the number of patients treated appropriately increase but more could move into lower-risk strata or avoid moving into higher-risk strata. Concerns were raised in our workshops about widening inequality, particularly for groups of people unlikely to engage with care (eg, people in higher indices of deprivation, ethnic minorities). However, it is more likely that the impactibility approach would capture such subgroups and highlight them for prioritisation.15 By contrast, they are often missed by risk stratification because less care accessed is mistakenly equated to healthier status by algorithms.44 Furthermore, if risk is assumed equal for all people within a given stratum and the budget is preassigned equally, the money that is not spent because the intervention is not optimal for everyone is generally not reallocated to maximise care for patients who will respond. That said, impactibility analysis must at all costs avoid trying to make people amenable to intervention against their will (intervention may be offered but should not be preassigned) or at the cost of others’ healthcare.
Given the low number of practical studies of impactibility analysis and the lack of clarity about the optimum modelling approaches, we suggest 10 key principles to consider when designing an impactibility model (box 1). Implementation should involve specialist groups or organisations with access to and experience in using real-world data in order to develop and test appropriate approaches. The use of commercial analysis packages or non-specialist big data analytics companies is unlikely to yield the most applicable findings because the aim of impactibility is to understand the needs of real patients in the current system. Without access to such data, there are also important challenges to deriving meaningful insights that could lead to health, financial and ethical equity. Several organisations, such as the Health Economics Unit and the Association of Professional Healthcare Analysts, have the appropriate expertise to develop and test a range of possible approaches in a managed experimental/learning environment that reflects the real-world setting. Furthermore, as Goldacre et al 45 note, the NHS employs around 10 000 data analysts, but traditionally they have had little guidance, few opportunities for progression and little formal literature to help maximise their potential. Goldacre et al suggest they present an opportunity to develop a 21st-century NHS analyst workforce that can deliver innovative analyses relevant to current healthcare needs.
Box 1. Key principles for the design of impactibility models.
Efficient resource allocation: The optimum impactibility model aims to reduce health inequality by facilitating efficient and early access to resources, potentially improving outcomes without significant systemic upheaval.
Individualised treatment response prediction: The model should discern a person’s likelihood of responding to treatment based on their physical and behavioural characteristics, avoiding resource wastage on universally applied interventions.
Targeted interventions for care gaps: Weighted allocation of evidence-based solutions should address identified care gaps, enhancing the appropriateness of treatment and potentially reducing health inequality.
Inclusivity and prioritisation of subgroups: The model should prioritise subgroups, often overlooked by traditional risk stratification, ensuring equitable access to interventions, especially for groups less likely to engage with care.
Ethical considerations: The analysis must prioritise individual consent, refraining from coercive interventions and safeguarding ethical principles to prevent compromise of others’ healthcare.
Specialised development and testing: Implementation should involve specialised groups or organisations experienced in using real-world data, steering clear of generic commercial analysis packages or non-specialist big data analytics companies.
Flexibility for localised data input: Models should be adaptable to localised data, allowing users to input their own data for increased relevance, acknowledging that meaningful findings may emerge at the individual level.
Improved data standardisation and completeness: To strengthen predictive power, efforts should be directed toward improving standardisation, completeness and linkage of data, ensuring meaningful insights for effective impactibility analysis.
Enhanced patient experience and streamlined pathways: The model should contribute to time-saving for healthcare professionals, improved patient experience through personalised care, and streamlined diagnostic and treatment pathways.
Clarification of the model’s purpose: Given concerns about potential misuse, clarity on the model’s purpose, especially within the context of NHS England, is crucial for its successful integration into healthcare practices.
Not all data for impactibility analyses need to have national coverage, as models could be created that allow users to plug in their own local and regional data to maximise relevance. Some of the most meaningful findings will be at the individual level. For example, Hsueh et al 20 investigated whether a machine learning impactibility model could help to personalise a postdischarge management programme aimed at preventing readmission to hospital. They wanted to increase the accuracy of predicting who would respond to the programme to maximise the effectiveness of care planning. They used a large set of care management records that included a wide range of goals, such as tobacco cessation, knowledge of healthy eating, medication adherence, actions to resolve care gaps and fall prevention, and six intervention categories (eg, education, screening). These were assessed along with covariates such as age, sex and time in the programme. The authors found that education and referral strategies planned at the individual level were most likely to lead to achieving care management goals. That said, improvements in standardisation, completeness and linkage of data will substantially strengthen predictive power.
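A schematic of this kind of individual-level personalisation is sketched below. It is not Hsueh et al’s model: the covariates, intervention categories and synthetic outcome are assumptions, used only to show how a per-patient ‘best intervention’ could be read off a fitted goal-attainment model.

```python
# Schematic individual-level care-plan personalisation on synthetic data:
# fit one model of goal attainment across intervention categories, then pick,
# per patient, the category with the highest predicted attainment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
categories = np.array(["education", "screening", "referral", "monitoring"])
n = 3_000

age = rng.normal(70, 10, n)                          # illustrative covariate
days_in_programme = rng.integers(10, 365, n).astype(float)
intervention = rng.choice(categories, n)
one_hot = (intervention[:, None] == categories).astype(float)

X = np.column_stack([age, days_in_programme, one_hot])
# Synthetic outcome: whether the care management goal (eg, medication adherence) was met.
goal_met = rng.binomial(1, 0.3 + 0.2 * (intervention == "education"))

model = LogisticRegression(max_iter=1000).fit(X, goal_met)

def best_intervention(patient_age: float, patient_days: float) -> str:
    """Predict goal attainment under each category and return the most promising one."""
    rows = np.column_stack([
        np.full(len(categories), patient_age),
        np.full(len(categories), patient_days),
        np.eye(len(categories)),
    ])
    probs = model.predict_proba(rows)[:, 1]
    return str(categories[np.argmax(probs)])

print(best_intervention(72.0, 120.0))
```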
Impactibility analysis has several theoretically appealing factors. For instance, time could be saved for healthcare professionals if efficiency can be improved by targeted treatment. Patients might have an improved experience of care if it can be personalised for their needs. Clarification of the most appropriate diagnostic and treatment pathways could prevent unnecessary duplication, such as repeat scans, treatments and additional interventions that could potentially cause harm. Factors such as cultural preferences or practices and demographics are recognised as important in the exclusion of groups with vulnerabilities,15 36 37 46–48 and analysis using broader information might lessen some of the issues of self-categorisation by patients, which can skew who seeks and takes treatment,49 and help new patients to be reached.
Strengths of these workshops, particularly workshop 2, were that all comments were heard and/or seen by all participants, allowing review and feedback at the time, and that a wide range of healthcare professional stakeholders involved in policy-making and budget setting for the NHS were represented. A limitation of this work was that COVID-19 precluded in-person workshops. These factors might have influenced the type of data that could be collected and the depth of engagement. For instance, not all participants were able to use the tools provided in the online meeting software. For these reasons, a descriptive approach to the data was most practical. Furthermore, given the timing, it was not possible to include patients and the public in the design of this study and we had to rely on studies in the literature to highlight their priorities, experiences and preferences.18
The area of research is quite new in the UK, although the impactibility approach has been used by insurance companies in the USA to exclude patients from payment plans. This was mentioned as an example in one workshop and made participants unsure about the benefits in the UK. If other participants had similar knowledge, it might have influenced responses. However, it must be borne in mind that impactibility for NHS England will be used to enable more patients to receive appropriate treatment rather than to exclude patients for not being the right fit. Finally, the findings are highly applicable to current care, but access to robustly recorded, well-linked, standardised data is inconsistent and could prevent them from being implemented in policy.
Conclusions
Impactibility represents a move away from cross-sectional risk stratification by statistical thresholds towards a scaled likelihood of response to treatment based on a wide range of considerations. Advancing methods mean that there is already the capacity to analyse large and complex data and to use multiple variables and thresholds in ways that might align more accurately with the current clinical context.14 The policy changes proposed in the NHS Long Term Plan2 will only enhance the amount and type of data available. Impactibility analysis might enable incorporation of more of the 80% of modifiable factors beyond the current reach of the healthcare system. As this concept is still being developed, it is vital that model development and testing are performed by specialists with access to and understanding of relevant real-world data.
Supplementary Material
Footnotes
@DrAlexBottle
Contributors: AO: conception of the study, developed and performed the workshops, interpretation of the data, writing of the report. RF: developed and performed the workshops, reviewed the drafts. HH: developed and performed the workshops, reviewed the drafts. RA: writing of the report, interpretation of the data. VC: developed workshops, writing of the report. JP: developed workshops, writing of the report. SS: developed workshops, writing of the report. AB: developed workshops, writing of the report and is guarantor.
Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests: None declared.
Patient and public involvement: Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review: Not commissioned; externally peer reviewed.
Data availability statement
All data relevant to the study are included in the article.
Ethics statements
Patient consent for publication
Not applicable.
References
- 1. Hood CM, Gennuso KP, Swain GR, et al. County Health Rankings: relationships between determinant factors and health outcomes. Am J Prev Med 2016;50:129–35. 10.1016/j.amepre.2015.08.024
- 2. NHS England. The NHS Long Term Plan, 2019. Available: https://www.longtermplan.nhs.uk/wp-content/uploads/2019/08/nhs-long-term-plan-version-1.2.pdf
- 3. Allaudeen N, Schnipper JL, Orav EJ, et al. Inability of providers to predict unplanned readmissions. J Gen Intern Med 2011;26:771–6. 10.1007/s11606-011-1663-3
- 4. Midlands and Lancashire Commissioning Support Unit. Is risk stratification likely to improve the use of NHS resources? 2021. Available: https://www.strategyunitwm.nhs.uk/sites/default/files/2021-10/RiskStratification-StrategyUnitPaper.pdf
- 5. Bottle A, Aylin P, Majeed A. Identifying patients at high risk of emergency hospital admissions: a logistic regression analysis. J R Soc Med 2006;99:406–14. 10.1177/014107680609900818
- 6. Donzé J, Aujesky D, Williams D, et al. Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med 2013;173:632–8. 10.1001/jamainternmed.2013.3023
- 7. van Walraven C, Escobar GJ, Greene JD, et al. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol 2010;63:798–803. 10.1016/j.jclinepi.2009.08.020
- 8. Billings J, Blunt I, Steventon A, et al. Development of a predictive model to identify inpatients at risk of re-admission within 30 days of discharge (PARR-30). BMJ Open 2012;2:e001667. 10.1136/bmjopen-2012-001667
- 9. Billings J, Dixon J, Mijanovich T, et al. Case finding for patients at risk of readmission to hospital: development of algorithm to identify high risk patients. BMJ 2006;333:327. 10.1136/bmj.38870.657917.AE
- 10. Woodhams V, de Lusignan S, Mughal S, et al. Triumph of hope over experience: learning from interventions to reduce avoidable hospital admissions identified through an academic health and social care network. BMC Health Serv Res 2012;12:153. 10.1186/1472-6963-12-153
- 11. Lewis G, Kirkham H, Duncan I, et al. How health systems could avert ‘triple fail’ events that are harmful, are costly, and result in poor patient satisfaction. Health Affairs 2013;32:669–76. 10.1377/hlthaff.2012.1350
- 12. Bernstein RH. New arrows in the quiver for targeting care management: high-risk versus high-opportunity case identification. J Ambul Care Manage 2007;30:39–51. 10.1097/00004479-200701000-00007
- 13. Bardsley M, Blunt I, Davies S, et al. Is secondary preventive care improving? Observational study of 10-year trends in emergency admissions for conditions amenable to ambulatory care. BMJ Open 2013;3:e002007. 10.1136/bmjopen-2012-002007
- 14. Wynants L, van Smeden M, McLernon DJ, et al. Three myths about risk thresholds for prediction models. BMC Med 2019;17:192. 10.1186/s12916-019-1425-3
- 15. Lewis GH. ‘Impactibility models’: identifying the subgroup of high-risk patients most amenable to hospital-avoidance programs. Milbank Q 2010;88:240–55. 10.1111/j.1468-0009.2010.00597.x
- 16. Steventon A, Billings J. Preventing hospital readmissions: the importance of considering ‘impactibility,’ not just predicted risk. BMJ Qual Saf 2017;26:782–5. 10.1136/bmjqs-2017-006629
- 17. Freund T, Wensing M, Geissler S, et al. Primary care physicians’ experiences with case finding for practice-based care management. Am J Manag Care 2012;18:e155–61.
- 18. Orlowski A, Snow S, Humphreys H, et al. Bridging the impactibility gap in population health management: a systematic review. BMJ Open 2021;11:e052455. 10.1136/bmjopen-2021-052455
- 19. Mattie H, Reidy P, Bachtiger P, et al. A framework for predicting impactability of digital care management using machine learning methods. Popul Health Manag 2020;23:319–25. 10.1089/pop.2019.0132
- 20. Hsueh PYS, Das S, Maduri C, et al. Learning to personalize from practice: a real world evidence approach of care plan personalization based on differential patient behavioral responses in care management records. AMIA Annu Symp Proc 2018;2018:592–601.
- 21. Steventon A, Ariti C, Fisher E, et al. Effect of telehealth on hospital utilisation and mortality in routine clinical practice: a matched control cohort study in an early adopter site. BMJ Open 2016;6:e009221. 10.1136/bmjopen-2015-009221
- 22. Steventon A, Bardsley M, Billings J, et al. Effect of telehealth on use of secondary care and mortality: findings from the whole system demonstrator cluster randomised trial. BMJ 2012;344:e3874. 10.1136/bmj.e3874
- 23. Steventon A, Tunkel S, Blunt I, et al. Effect of telephone health coaching (Birmingham Ownhealth) on hospital use and associated costs: cohort study with matched controls. BMJ 2013;347:f4585. 10.1136/bmj.f4585
- 24. Guthrie E, Afzal C, Blakeley C, et al. CHOICE: choosing health options in chronic care emergencies. Programme Grants Appl Res 2017;5:1–272. 10.3310/pgfar05130
- 25. Buja A, Rivera M, Soattin M, et al. Impactibility model for population health management in high-cost elderly heart failure patients: a capture method using the ACG system. Popul Health Manag 2019;22:495–502. 10.1089/pop.2018.0190
- 26. Lewis G. Next steps for risk stratification in the NHS. 2015. Available: https://www.england.nhs.uk/wp-content/uploads/2015/01/nxt-steps-risk-strat-glewis.pdf
- 27. Knabel T, Louwers J. Intervenability: another measure of health risk: by coupling predictive modeling with evidence-based medicine, health plans can identify patients who will benefit the most from care management intervention (Decision Support). Available: https://www.thefreelibrary.com/Intervenability%3A+another+measure+of+health+risk%3A+by+coupling...-a0119182675
- 28. Farley TA, Dalal MA, Mostashari F, et al. Deaths preventable in the U.S. by improvements in use of clinical preventive services. Am J Prev Med 2010;38:600–9. 10.1016/j.amepre.2010.02.016
- 29. Navratil-Strawn JL, Hawkins K, Hartley SK, et al. Using propensity to succeed modeling to increase utilization and adherence in a nurse Healthline telephone triage program. J Ambul Care Manage 2016;39:186–98. 10.1097/JAC.0000000000000103
- 30. Hommer CE, Hawkins K, Ozminkowski RJ, et al. Propensity to succeed: a new method to identify individuals most likely to benefit from a depression management program. Am J Geriatr Psychiatry 2013;21:S152–3. 10.1016/j.jagp.2012.12.201
- 31. Hawkins K, Ozminkowski RJ, Mujahid A, et al. Propensity to succeed: prioritizing individuals most likely to benefit from care coordination. Available: https://www.liebertpub.com/doi/epdf/10.1089/pop.2014.0121
- 32. DuBard CA, Jackson CT. Active redesign of a Medicaid care management strategy for greater return on investment: predicting impactability. Popul Health Manag 2018;21:102–9. 10.1089/pop.2017.0122
- 33. Ozminkowski RJ, Wells TS, Hawkins K, et al. Big data, little data, and care coordination for Medicare beneficiaries with Medigap coverage. Big Data 2015;3:114–25. 10.1089/big.2014.0034
- 34. Flaks-Manov N, Srulovici E, Yahalom R, et al. Preventing hospital readmissions: healthcare providers’ perspectives on ‘impactibility’ beyond EHR 30-day readmission risk prediction. J Gen Intern Med 2020;35:1484–9. 10.1007/s11606-020-05739-9
- 35. Horvath B, Silberberg M, Landerman LR, et al. Dynamics of patient targeting for care management in Medicaid: a case study of the Durham Community Health Network. Care Manag J 2006;7:107–14. 10.1891/cmj-v7i3a001
- 36. Shadmi E, Freund T. Targeting patients for multimorbid care management interventions: the case for equity in high-risk patient identification. Int J Equity Health 2013;12:70. 10.1186/1475-9276-12-70
- 37. Freund T, Gondan M, Rochon J, et al. Comparison of physician referral and insurance claims data-based risk prediction as approaches to identify patients for care management in primary care: an observational study. BMC Fam Pract 2013;14:157. 10.1186/1471-2296-14-157
- 38. Freund T, Wensing M, Mahler C, et al. Development of a primary care-based complex care management intervention for chronically ill patients at high risk for hospitalization: a study protocol. Implement Sci 2010;5:70. 10.1186/1748-5908-5-70
- 39. Freund T, Mahler C, Erler A, et al. Identification of patients likely to benefit from care management programs. Am J Manag Care 2011;17:345–52.
- 40. Fleming MD, Shim JK, Yen IH, et al. Patient engagement at the margins: health care providers’ assessments of engagement and the structural determinants of health in the safety-net. Soc Sci Med 2017;183:11–8. 10.1016/j.socscimed.2017.04.028
- 41. Department for Levelling Up, Housing and Communities; Ministry of Housing, Communities and Local Government. English indices of deprivation. Available: https://www.gov.uk/government/collections/english-indices-of-deprivation
- 42. Office for National Statistics. Personal well-being in the UK: April 2019 to March 2020. Available: https://www.ons.gov.uk/peoplepopulationandcommunity/wellbeing/bulletins/measuringnationalwellbeing/april2019tomarch2020#personal-well-being-interactive-maps
- 43. Beaulieu-Jones BK, Yuan W, Brat GA, et al. Machine learning for patient risk stratification: standing on, or looking over, the shoulders of clinicians? NPJ Digit Med 2021;4:62. 10.1038/s41746-021-00426-3
- 44. Obermeyer Z, Powers B, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Available: https://www.science.org/doi/10.1126/science.aax2342?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200pubmed
- 45. Goldacre B, Bardsley M, Benson T, et al. Bringing NHS data analysis into the 21st century. J R Soc Med 2020;113:383–8. 10.1177/0141076820930666
- 46. Ross S, Curry N, Goodwin N. Case management: what it is and how it can best be implemented. 2011. Available: https://www.lincolnshirecommunityhealthservices.nhs.uk/application/files/9515/0642/5278/The_Kings_Fund_-_case_management.pdf
- 47. Foley TJ, Vale L. What role for learning health systems in quality improvement within healthcare providers? Learn Health Syst 2017;1:e10025. 10.1002/lrh2.10025
- 48. Lewy H, Barkan R, Sela T. Personalized health systems—past, present, and future of research development and implementation in real-life environment. Front Med 2019;6:149. 10.3389/fmed.2019.00149
- 49. Isler O, Isler B, Kopsacheilis O, et al. Limits of the social-benefit motive among high-risk patients: a field experiment on influenza vaccination behaviour. BMC Public Health 2020;20:240. 10.1186/s12889-020-8246-3