AMIA Annual Symposium Proceedings. 2018 Dec 5;2018:1076–1083.

Interactive Cost-benefit Analysis: Providing Real-World Financial Context to Predictive Analytics

Mark G Weiner 1, Wasiq Sheikh 1, Harold P Lehmann 2
PMCID: PMC6371360  PMID: 30815149

Abstract

Objective: Clinical implementation of predictive analytics that assess the risk of high-cost outcomes is presumed to save money because such analytics help focus interventions designed to avert those outcomes on the subset of patients most likely to benefit from the intervention. This premise may not always be true. A cost-benefit analysis is necessary to show whether a strategy of applying the predictive algorithm is truly favorable to alternative strategies.

Methods: We designed and implemented an interactive web-based cost-benefit calculator that enables specification of accuracy parameters for the predictive model and of other clinical and financial factors related to the occurrence of an undesirable outcome. We use the web tool, populated with real-world data, to illustrate a cost-benefit analysis of a strategy that applies predictive analytics to select a cohort of high-risk patients to receive interventions to avert readmissions for congestive heart failure (CHF).

Results: Application of predictive analytics in clinical care may not always be a cost-saving strategy compared with intervening on all patients. Improving the accuracy of a predictive model may lower costs, but other factors such as the prevalence and cost of the outcome, and the cost and effectiveness of the intervention designed to avert the outcome may be more influential in determining the favored strategy.

Conclusion: An interactive cost-benefit analysis provides insights regarding the financial implications of a clinical strategy that implements predictive analytics.

Introduction

Despite ongoing advances in clinical care, congestive heart failure (CHF) continues to be a significant source of morbidity, mortality and high-cost health utilization. It is among the most common indications for hospitalization among older adults, and is a significant source of readmissions. In an effort to reduce costs, Medicare in 2012 developed the Hospital Readmissions Reduction Program (HRRP),1 which created penalties for readmissions within 30 days, motivating intensive investigations into reducing readmissions, especially for heart failure, given their high frequency. The approach has been twofold: 1) to develop interventions that can reduce the likelihood of readmission, and 2) to develop predictive algorithms that identify patients who are at the highest risk for readmission. Clinical and informatics investigators have worked for at least 30 years2 to identify predictors of readmission for CHF. These algorithms leverage many sources of administrative and clinical data including demographics, diagnoses, laboratory studies, vital signs and cardiac parameters. Analytical approaches have evolved from linear equations to machine learning methods.3 Regardless of the method by which they are derived, the common thread underlying all of these risk models is the generation of some threshold above which a patient is considered high risk and in need of an intervention, and below which the patient is considered low risk and should not receive the intervention.

The premise of all of this research is that a better predictive algorithm for CHF readmission will be cost saving by helping to focus resources on patients who are most likely to benefit from an intervention designed to avert a readmission, and not to waste resources on interventions for patients who are not likely to be readmitted and therefore do not need them. To ensure these predictive algorithms will achieve the cost savings promised in many publications, it is important to model the real-world financial context in which these algorithms will be applied. The accuracy of the predictive algorithm is only one factor. A full cost-benefit analysis requires modeling of several additional factors: 1) the prevalence of the undesirable outcome, 2) the cost of the undesirable outcome, 3) the cost of the planned intervention, and 4) the effectiveness of the intervention at averting the undesirable outcome. The purpose of this paper (and web site https://lksom.temple.edu/informatics/costbene.html [tested on current Chrome, Safari and Edge browsers]) is to provide institutional decision makers with an easily-accessible tool to conduct an interactive cost-benefit analysis where users enter parameters for the cost-benefit variables based on literature and locally-derived data, and explore the financial changes associated with alteration of these parameters.

Methods

The goal of the web-based cost-benefit analysis tool is to enable the user to visualize changes in costs associated with changes to basic parameters of the model. The cost-benefit model is agnostic to the specific methods used to generate the predictive model of risk of the outcome. What is important to the cost-benefit analysis is the predictive model’s accuracy as assessed by its sensitivity and specificity.

The fundamental variables (outlined in square brackets) that are relevant to the cost model are as follows:

  1. The [at-risk population size] (e.g. how many patients had an index CHF admission, and could have a readmission?)

  2. The [prevalence] of the “undesired outcome” (e.g. what proportion of the patients with an index admission for CHF are readmitted within 30 days?)

  3. The [sensitivity] of the predictive algorithm for generating a positive result in a population that has the undesired outcome

  4. The [specificity] of the predictive algorithm for generating a negative result in a population without the undesired outcome

  5. The [cost of the undesired outcome] (e.g. If a patient is readmitted for CHF, what is the cost of the readmission?)

  6. The [cost of implementing the intervention] designed to avert the undesired outcome (e.g. What is the per-patient cost of an intervention that may prevent the readmission?)

  7. The [effectiveness] of the intervention designed to avert the undesired outcome (e.g. If a patient receives the intervention, how much less likely is that patient to be readmitted?)

Bayes’ theorem is applied to the first 4 of the above variables to find the sizes of the populations that are expected to test positive and negative according to a predictive algorithm and, within these groups, the numbers of true positives, true negatives, false positives and false negatives, according to the following equations:

number True Positive [TP] = [sensitivity] × [prevalence] × [at-risk population size]

number False Positive [FP] = (1 - [specificity]) × (1- [prevalence]) × [at-risk population size]

number True negative [TN] = [specificity] × (1- [prevalence]) × [at-risk population size]

number False Negative [FN] = (1- [sensitivity]) × [prevalence] × [at-risk population size]

[tot pos] = [TP] + [FP] (Total number of people who will test positive according to the predictive algorithm)

[tot neg] = [TN] + [FN] (Total number of people who will test negative according to the predictive algorithm)
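The four equations above translate directly into code. The published tool itself is written in JavaScript; the following is a minimal Python sketch, with the function and parameter names being assumptions chosen here for illustration:

```python
def confusion_counts(prevalence, sensitivity, specificity, population):
    """Expected confusion-matrix cell counts for a predictive algorithm
    applied to an at-risk population, per Bayes' theorem."""
    tp = sensitivity * prevalence * population            # true positives
    fn = (1 - sensitivity) * prevalence * population      # false negatives
    tn = specificity * (1 - prevalence) * population      # true negatives
    fp = (1 - specificity) * (1 - prevalence) * population  # false positives
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn,
            "tot_pos": tp + fp, "tot_neg": tn + fn}
```

With the paper's base-case parameters (prevalence 0.2, sensitivity 0.8, specificity 0.8, population 1000), this reproduces the 2×2 table discussed in the Results: 160 true positives, 160 false positives, 640 true negatives and 40 false negatives.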

The cost model then presumes the intervention is applied according to one of 3 core strategies:

  • The “base case,” where neither testing nor intervention is applied, and the number and cost of the outcomes are based solely on the size of the at-risk population, the established prevalence of the undesired outcome, and the cost of the undesired outcome.
    • Total cost = [at-risk population size] × [prevalence] × [cost of undesired outcome]
  • The “test and treat positive” strategy is the strategy all published work on predictive algorithms presumes to be the most financially favorable. In this case, the total cost of the strategy is the total cost of the intervention applied only to those who test positive (TP+FP) plus the total costs of undesired outcomes occurring despite the intervention.
    • Total Cost = Total Cost of Intervention + Total Cost of undesired outcome
    • Total Cost of intervention = [tot pos] × [cost of implementing the intervention]
    • Total Cost of undesired outcome = ([TP] × [1 - effectiveness] + [FN]) × [cost of undesired outcome]
      • Total Cost of undesired outcome is the product of the per-patient cost of the undesired outcome with the number of people with the undesired outcome which is the sum of the people who test positive according to the predictive model (TP), receive the intervention, and have the undesired outcome anyway (1-effectiveness), plus the number of false negatives (people who test negative according to the predictive model and have the undesired outcome without having the opportunity to receive the intervention).
  • The “treat all” strategy, where all patients receive the intervention regardless of the result of a predictive algorithm. As above, the total cost is the cost of the intervention which is applied to all at-risk patients (rather than just the subgroup who tests positive according to the predictive algorithm) plus the total costs of the undesired outcome despite the intervention.
    • Total Cost = Total Cost of Intervention + Total Cost of undesired outcome
    • Total Cost of intervention = [at-risk population size] × [cost of implementing the intervention]
    • Total Cost of undesired outcome = ([prevalence] × [1 - effectiveness] × [at-risk population size]) × [cost of undesired outcome]
      • Total Cost of undesired outcome is the product of the per-patient cost of the undesired outcome with the number of people with the undesired outcome which, in this strategy, is the total number of people who receive the intervention who would have had the undesired outcome (prevalence × at-risk population size), and have the undesired outcome anyway (1-effectiveness).
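The cost formulas for the three strategies can likewise be sketched in Python (the tool itself is JavaScript; the names below are assumptions for illustration):

```python
def strategy_costs(prevalence, sensitivity, specificity, population,
                   cost_outcome, cost_intervention, effectiveness):
    """Total cost of the three core strategies described above."""
    tp = sensitivity * prevalence * population
    fp = (1 - specificity) * (1 - prevalence) * population
    fn = (1 - sensitivity) * prevalence * population

    # "Base case": no testing, no intervention.
    base = population * prevalence * cost_outcome

    # "Treat all": everyone gets the intervention; outcomes occur at
    # prevalence reduced by the intervention's effectiveness.
    treat_all = (population * cost_intervention
                 + prevalence * (1 - effectiveness) * population * cost_outcome)

    # "Test and treat positive": only those testing positive are treated;
    # outcomes occur in treated TPs despite the intervention, plus all FNs.
    test_and_treat = ((tp + fp) * cost_intervention
                      + (tp * (1 - effectiveness) + fn) * cost_outcome)

    return {"base": base, "treat_all": treat_all,
            "test_and_treat": test_and_treat}
```

With the paper's base-case parameters this yields the totals worked through in the Results: $4,000,000 for the base case, $1,550,000 for "treat all", and $1,680,000 for "test and treat positive".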

Users can then manipulate sliders to change the base-parameter assumptions of the model. As the user moves the sliders, the costs associated with the model update instantly, reflecting changes in the costs of the 3 strategies according to the above formulas.

Different values of the accuracy parameters may favor different strategies. The accuracy values are used to generate an ROC curve with typical axes of true positive rate (sensitivity) and false positive rate (1 - specificity). The ROC curve has 3 points – (0,0), (1-specificity, sensitivity), and (1,1). An estimate of the Area Under the Curve (AUC) is calculated as the sum of trapezoidal areas generated by the graph, and is the same as the “c-statistic” that is the reported discriminatory value in published reports of predictive algorithms.4 As the user changes the sensitivity and specificity parameters of the predictive model being tested, the financial values change. While most predictive algorithms report the AUC/c-statistic value, that single value is really a composite function of both the sensitivity and specificity. Therefore, the web tool allows the user to check a box to hold the AUC constant, in which case, as the sensitivity is changed, the specificity is updated automatically to maintain a constant AUC, changing the overall financial values, and sometimes changing the favored strategy. This feature demonstrates that a single AUC value may be associated with favoring either the “treat all” or the “test and treat positive” strategy, depending on the values of the component sensitivity and specificity.
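For a single operating point, the trapezoidal AUC described above reduces algebraically to (sensitivity + specificity) / 2, which implies that holding the AUC constant while moving the sensitivity slider amounts to solving specificity = 2·AUC - sensitivity. A short sketch (an assumption about the checkbox's behavior, not the tool's actual source):

```python
def auc_single_point(sensitivity, specificity):
    """Trapezoidal area under the ROC curve through the three points
    (0,0), (1-specificity, sensitivity), and (1,1)."""
    x1, y1 = 1 - specificity, sensitivity
    # trapezoid over [0, x1] plus trapezoid over [x1, 1]
    return 0.5 * x1 * y1 + 0.5 * (y1 + 1) * (1 - x1)

def specificity_for_auc(auc, sensitivity):
    """Specificity needed to keep the single-point AUC fixed as the
    sensitivity slider moves (hypothetical 'hold AUC constant' logic)."""
    return 2 * auc - sensitivity
```

For example, sensitivity 0.8 and specificity 0.8 give AUC 0.8; raising sensitivity to 0.9 while holding AUC at 0.8 forces specificity down to 0.7, which changes the financial totals even though the reported c-statistic is unchanged.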

Sensitivity analyses enable the user to vary parameters over wide ranges, looking for thresholds where the favored strategy will change. The web tool displays the graphical results of 1-, 2- and 3-way sensitivity analyses. The 1-way analysis is a 2D line graph with the total cost on the y-axis and the selected parameter to vary on the x-axis. A green line is displayed to show the change in cost per the change in the selected variable for the “treat all” strategy; a red line, for the “test and treat positive” strategy. When the green line (“treat all”) is below the red line (“test and treat”), the “treat all” strategy is favored, because lower cost is preferred. The 2-way sensitivity analysis shows the total cost on the z-axis; the variation of 2 variables on the x- and y-axes defines a pair of intersecting planes. Similar to the 1-way analysis, when the green plane is below the blue/red plane, the “treat all” strategy is favored. The 3-way sensitivity analysis foregoes the representation of the cost on the x-, y- or z-axes, and instead has a set of 11 parallel planes reflecting different values of the user-defined z parameter, and each plane has two colors reflecting the favored strategy at the corresponding values of the user-defined x and y parameters. The larger the blue area of a plane is, the wider the array of parameters that favor a “treat all” strategy.

The site was built exclusively with freely-available tools, including JavaScript, HTML5, and Plotly.js.

Results

In the driving example for this paper, CHF readmissions, some of the factors corresponding to the above clinical and financial parameters are empirically derivable based on real-world data: 1) the background rate of readmission after an index admission for CHF5,6, and 2) the cost of a readmission for CHF.7 Consistent with prior literature on mechanisms to avert readmissions, the cost of the intervention may vary depending on its intensity, from low-cost post-discharge phone calls, to higher cost telemedicine or home nurse interventions.8 The effectiveness of many interventions has been quantified in numerous studies, and can be applied in our cost-benefit model.9 Lastly, many studies have shown the accuracy of the different predictive models for CHF readmission, which can be applied within our web-based cost-benefit model.10-14

The base model of the web tool is populated with plausible initial values for the clinical scenario of CHF. Consistent with published reports, the prevalence of the “undesired outcome” of CHF readmissions within 30 days is 0.2.5,6 The cost of the “undesired outcome” of readmission is set initially at $20,000. The cost per person of an intervention to avert the readmission is set at $750, and the effectiveness of the intervention is set at 0.8, meaning that in a cohort receiving the intervention, the rate of the undesired outcome will drop by 80%. The test characteristics of the predictive algorithm are set with a sensitivity of 0.8 and a specificity of 0.8.

The 2×2 table next to the cost parameters in Figure 1 demonstrates the Bayesian analysis associated with the base case parameters. In a population of 1000 patients admitted with congestive heart failure, and a prevalence of readmission set at 0.2, there will be 200 patients with readmissions, and 800 patients without readmissions. The sensitivity of 0.8 means that of the 200 patients with readmissions, 0.8 × 200 = 160 will have a positive test according to the predictive algorithm (True Positives). The specificity of 0.8 means that of the 800 patients who will not be readmitted, 800 × 0.8 = 640 of them will have a negative test (True Negatives). The number of false positives (FP) is then the number who will not be readmitted (800) minus the number of true negatives (640). FP = 800 - 640 = 160, meaning 160 patients will test positive with the predictive algorithm but will not be readmitted, and therefore would not benefit from an intervention designed to avert an admission. The number of False Negatives is the number who will be readmitted (200) minus the true positives (160). FN = 200 - 160 = 40, meaning 40 patients will test negative according to the predictive model, and therefore will be readmitted without the opportunity to be offered the intervention designed to avert the readmission.

Figure 1.


A sample screenshot of the Cost-Benefit Analyzer web page (https://lksom.temple.edu/informatics/costbene.html). The upper left panel shows the sliders for the following parameters: prevalence of undesired outcome, sensitivity and specificity of the predictive model, at-risk population size, cost of undesired outcome, cost of intervention, and effectiveness of the intervention. The upper right shows the population-based 2×2 table that changes dynamically with the sliders and a report on the likelihood ratios of positive and negative tests and the AUC. The middle panel shows the financial results for each strategy (Intervene on no one; Intervene on everyone; Test and treat those positive), showing for each: the number of patients receiving the intervention, the total cost of that intervention, the number of undesired outcomes, the total cost of those undesired outcomes, and the sum of the two costs. The test-and-treat strategy adds the number of undesired outcomes due to false negatives. The right-hand side provides choices for the x-, y-, and z-axes. The bottom panels display the 1-way, 2-way, and 3-way sensitivity analyses.

The financial values in the charts below the parameter specifications arise from the user-entered data and the formulas outlined in the methods section. In the “treat no one” strategy, all 200 of the patients will be readmitted according to the base readmission prevalence of 0.2. Each of the 200 readmissions will cost $20,000 leading to a total cost for the strategy of 200 × $20,000 = $4,000,000.

In the “treat all” strategy, all 1000 patients who had an initial admission for congestive heart failure will receive the intervention, which costs $750 each, so the cost of the intervention alone is 1000 × $750 = $750,000. The value of the “treat all” strategy is that, because the intervention is 0.8 effective, only 200 × (1 - 0.8) = 40 people will be readmitted instead of 200. Again, since each admission costs $20,000, the total cost of the readmissions in this strategy is $20,000 × 40 = $800,000. Therefore the total cost of this strategy is the total cost of the intervention ($750,000) plus the total cost of the readmissions ($800,000) that occur despite the intervention, for a total of $1,550,000.

In the “test and treat positive” strategy, the intervention is not applied to all 1000 patients. It is only applied to the cohort that tested positive according to the predictive algorithm. As seen in the 2×2 table, generated through the Bayesian analysis, 320 patients will test positive, so 320 patients will receive the intervention that costs $750 each, leading to a total cost of $240,000 — less than the cost of intervening on all 1000 patients, and the source of the premise that the predictive algorithm should save money. However, other costs need to be addressed. Of the 320 people who receive an intervention, given the same test characteristics of the predictive algorithm, the Bayesian analysis suggests that only 160 of them were expected to be readmitted. The intervention will reduce this expected number of readmissions by 0.8, leaving only 160 × (1 - 0.8) = 32 patients readmitted — far fewer than the 200 readmissions of the “treat no one” strategy. However, there are still 680 patients who tested negative in the predictive algorithm. Again, according to the Bayesian analysis, 40 of them are going to be readmitted, and since they did not receive the intervention, there is no reduction in that value. Therefore, there will be 32 patients who tested positive with the predictive algorithm who will be readmitted, and 40 patients who tested negative according to the predictive algorithm who will be readmitted, for a total of 72 readmissions. Each of the 72 readmissions costs $20,000, for a total cost of $1,440,000. Therefore, the total cost of this strategy is the $240,000 for the interventions plus the $1,440,000 for the cost of readmissions despite the interventions, for a total cost of $1,680,000.

The cost-benefit analysis arising from these parameters shows clearly that the “treat all” strategy at a cost of $1,550,000 is favored over the “test and treat positive” strategy which costs more, at $1,680,000. This finding occurs despite a predictive algorithm that is on par with some of the best published algorithms, with an AUC = 0.8. This result would suggest that in a clinical scenario defined by the described parameters, it is more financially favorable simply to apply the readmission-averting intervention to all patients initially admitted with CHF, without using the predictive algorithm.

Given the apparent conclusion that “treat everyone” is the preferred strategy, should one conclude that the work on improving the predictive algorithm will be unhelpful? How much more accurate does the algorithm need to be to make the “test and treat positive” strategy financially favorable? One can determine this threshold by moving the sensitivity and specificity sliders, or by examining the 2-way sensitivity analysis with sensitivity and specificity as axes. As sensitivity and specificity approach 90%, the “test and treat positive” strategy becomes favorable. However, despite a great deal of research and development on predictive algorithms to assess risk of CHF readmissions, that degree of high performance has yet to be achieved.

Perhaps some of the other model parameters were incorrect, leading to an incorrect conclusion that the strategy applying the predictive algorithm was less favorable than a “treat all” strategy. Perhaps the assumed cost of readmission, $20,000, was too high or too low. Again, by moving the slider for the cost of the readmission, one can immediately see the impact of changing this parameter on the overall costs of each strategy. As the cost of readmission increases, the cost of both strategies increases, but the gap between “treat all” and “test and treat positive” becomes wider, suggesting the predictive-algorithm strategy is even less favorable. However, as the cost of the readmission goes down, the costs of both strategies decrease and the gap becomes smaller, with the “test and treat positive” strategy becoming favorable as the cost of readmission drops below $16,000. While the average cost of a readmission may be between $15,000 and $20,000, those values reflect the provider cost, and not necessarily the charge to the payor. What about the other parameters? Is the $750 intervention too expensive? What if you try a less expensive intervention? Certainly, that will save money, but will the less expensive intervention have a lower effectiveness? Again, the web tool allows exploration of these changes. What if the prevalence of the readmission changes? If the cost of an intervention is constant, but the effectiveness of the intervention at averting an undesired outcome improves, does that add to or detract from the favorability of applying the predictive analytics toward a “test and treat positive” strategy? Using the web-based cost-benefit analyzer, one can see that, as the effectiveness of the intervention increases, the favorability of the “test and treat positive” strategy declines. Similarly, and somewhat surprisingly, the value of predictive analytics supporting a “test and treat positive” strategy increases as the effectiveness of the intervention decreases. Table 1 shows a selection of plausible values for the clinical and cost parameters related to congestive heart failure readmission. Note that the favored strategy indicated by the asterisk in the rightmost columns is sometimes “Treat all,” even with increases in the sensitivity and specificity of the predictive algorithm. As prevalence decreases and cost of the readmission goes down, the “test and treat positive” strategy becomes more favorable, even without changes in the accuracy of the predictive algorithm. As the cost of the readmission increases, the “treat all” strategy tends to become more favorable.
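The $16,000 readmission-cost threshold can be checked with a minimal 1-way sensitivity sweep. Algebraically, with the base parameters the two strategies cost $750,000 + 40·C and $240,000 + 72·C, which cross at C = $15,937.50; the sketch below (assumed names, base parameters from the paper) finds the first $500 step at or above that crossover:

```python
def breakeven_cost_of_outcome():
    """Sweep the per-readmission cost to find where 'treat all' first
    becomes favored over 'test and treat positive' (base parameters)."""
    pop, prev, sens, spec = 1000, 0.2, 0.8, 0.8
    cost_int, eff = 750, 0.8
    tp = sens * prev * pop                # 160 true positives
    fp = (1 - spec) * (1 - prev) * pop    # 160 false positives
    fn = (1 - sens) * prev * pop          # 40 false negatives
    for cost_outcome in range(10_000, 25_001, 500):
        treat_all = pop * cost_int + prev * (1 - eff) * pop * cost_outcome
        test_treat = (tp + fp) * cost_int + (tp * (1 - eff) + fn) * cost_outcome
        if test_treat >= treat_all:
            return cost_outcome  # first cost at which 'treat all' is favored
    return None
```

The sweep returns $16,000, consistent with the threshold observed by moving the slider.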

Table 1.

Tabular output of the change in cost of the “treat all” and “test and treat positive” strategies associated with changes in relevant clinical and cost parameters related to congestive heart failure. The model assumes a constant at-risk population size of 1000. The favored strategy (represented with a ‘*’ in one of the two rightmost columns) may change as a function of changes to parameters other than the accuracy of the predictive model. The cost of the “treat none” strategy is excluded for simplicity, but is generally higher than the other two strategies except in the setting of implausibly expensive and ineffective interventions.


Discussion

Cost-benefit analysis has been applied in non-clinical domains since the 1930s15 and has recognized value in healthcare in terms of optimizing therapeutic decision making16 and assessing the value of new technologies including EHRs.17 However, it has not yet been applied toward understanding the financial implications of applying predictive analytics algorithms to stratify patients into cohorts who are likely to benefit from clinical interventions. Use of our interactive modeling and visualization tool for a cost–benefit analysis suggests that under many plausible clinical circumstances, a strategy of applying a state-of-the-art predictive algorithm for CHF readmission to select a subset of patients needing intervention may cost more overall than a strategy of intervening on all patients discharged with CHF. The cost–benefit model makes some simplifying assumptions such as the cost of an undesired outcome being constant regardless of whether an outcome occurs in a patient labeled by predictive algorithm as high risk or low risk. It is possible that the cost of the outcome in a low risk patient would be less than that of a higher risk patient. The model also presumes that the application of the intervention, especially to people who did not need it, does no harm. Future versions of this tool will incorporate these possibilities.

Given the local institutional variation in influential parameters such as the underlying cost and prevalence of undesired outcomes like CHF readmissions, we suggest that decision makers use this cost–benefit tool in this and in other clinical contexts to help inform the decision whether to adopt the various predictive-modeling tools (or, for that matter, novel tests and biomarkers) for local use. Much of the necessary information can be derived from the institutional EHR. Applying locally-derived data in the model may show that a predictive algorithm that works well at one institution may not work well at another institution, not because the algorithm is differently predictive across institutions, but because other key features of the outcome such as the baseline prevalence and response to the intervention may differ. As we have done in this paper, we suggest that informatics researchers who develop and publish predictive models also use the cost–benefit tool to report on the expected financial impact of implementation of the tool under a variety of plausible circumstances based on locally-derived clinical and cost parameters and those published in the literature.

Bibliography

  • 1. Joynt KE, Jha AK. A Path Forward on Medicare Readmissions. N Engl J Med. 2013;368(13):1175–1177. doi:10.1056/NEJMp1300122.
  • 2. Vinson JM, Rich MW, Sperry JC, Shah AS, McNamara T. Early Readmission of Elderly Patients With Congestive Heart Failure. J Am Geriatr Soc. 1990;38(12):1290–1295. doi:10.1111/j.1532-5415.1990.tb03450.x.
  • 3. Mortazavi BJ, Downing NS, Bucholz EM, et al. Analysis of Machine Learning Techniques for Heart Failure Readmissions. Circ Cardiovasc Qual Outcomes. 2016. doi:10.1161/CIRCOUTCOMES.116.003039.
  • 4. Romero-Brufau S, Huddleston JM, Escobar GJ, Liebow M. Why the C-statistic is not informative to evaluate early warning scores and what metrics to use. Crit Care. 2015;19(1). doi:10.1186/s13054-015-0999-1.
  • 5. Gupta A, Allen LA, Bhatt DL, et al. Association of the Hospital Readmissions Reduction Program Implementation With Readmission and Mortality Outcomes in Heart Failure. JAMA Cardiol. 2018;3(1):44. doi:10.1001/jamacardio.2017.4265.
  • 6. Bergethon KE, Ju C, DeVore AD, et al. Trends in 30-Day Readmission Rates for Patients Hospitalized With Heart Failure: Findings From the Get With The Guidelines-Heart Failure Registry. Circ Heart Fail. 2016;9(6). doi:10.1161/CIRCHEARTFAILURE.115.002594.
  • 7. Kilgore M, Patel HK, Kielhorn A, Maya JF, Sharma P. Economic burden of hospitalizations of Medicare beneficiaries with heart failure. Risk Manag Healthc Policy. 2017;10:63–70. doi:10.2147/RMHP.S130341.
  • 8. Ziaeian B, Fonarow GC. The Prevention of Hospital Readmissions in Heart Failure. Prog Cardiovasc Dis. 2016;58(4):379–385. doi:10.1016/j.pcad.2015.09.004.
  • 9. Klersy C, De Silvestri A, Gabutti G, Regoli F, Auricchio A. A Meta-Analysis of Remote Monitoring of Heart Failure Patients. J Am Coll Cardiol. 2009;54(18):1683–1694. doi:10.1016/j.jacc.2009.08.017.
  • 10. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: A systematic review. JAMA. 2011;306(15):1688–1698. doi:10.1001/jama.2011.1515.
  • 11. Frizzell JD, Liang L, Schulte PJ, et al. Prediction of 30-Day All-Cause Readmissions in Patients Hospitalized for Heart Failure. JAMA Cardiol. 2017;2(2):204. doi:10.1001/jamacardio.2016.3956.
  • 12. Amarasingham R, Moore BJ, Tabak YP, et al. An Automated Model to Identify Heart Failure Patients at Risk for 30-Day Readmission or Death Using Electronic Medical Record Data. Med Care. 2010;48(11):981–988. doi:10.1097/MLR.0b013e3181ef60d9.
  • 13. Krumholz HM, Parent EM, Tu N, et al. Readmission After Hospitalization for Congestive Heart Failure Among Medicare Beneficiaries. Arch Intern Med. 1997;157(1):99. doi:10.1001/archinte.1997.00440220103013.
  • 14. Fleming LM, Gavin M, Piatkowski G, Chang JD, Mukamal KJ. Derivation and Validation of a 30-Day Heart Failure Readmission Model. Am J Cardiol. 2014;114:1379–1382. doi:10.1016/j.amjcard.2014.07.071.
  • 15. Pearce DW. The Origins of Cost-Benefit Analysis. In: Cost-Benefit Analysis. London: Macmillan Education UK; 1983. pp. 14–24. doi:10.1007/978-1-349-17196-5_2.
  • 16. Pauker SG, Kassirer JP. Therapeutic Decision Making: A Cost-Benefit Analysis. N Engl J Med. 1975;293(5):229–234. doi:10.1056/NEJM197507312930505.
  • 17. Wang SJ, Middleton B, Prosser LA, et al. A cost-benefit analysis of electronic medical records in primary care. Am J Med. 2003;114(5):397–403. doi:10.1016/S0002-9343(03)00057-3.

