Author manuscript; available in PMC: 2022 Jan 20.
Published in final edited form as: Am Heart J. 2018 Dec 8;209:116–125. doi: 10.1016/j.ahj.2018.12.003

How can clinical researchers quantify the value of their proposed comparative research?

Anirban Basu 1,2,*, David L Veenstra 1, Josh Carlson 1, Wei-Jhih Wang 1, Kelley Branch 3, Jeffrey Probstfield 3
PMCID: PMC8776318  NIHMSID: NIHMS1604689  PMID: 30638543

Abstract

A research funder faces the challenge of selecting a small set of studies to fund from a larger pool of proposals, even after proposals achieve the benchmarks of scientific rigor and integrity. Clinical researchers can better quantify the value of their proposed study to facilitate this prioritization process. Value of information (VOI) analysis can help in this quantification and inform the funder about the population- and individual patient-level impact of a comparative research proposal. In this paper, we introduce the overarching framework of the value of information to a clinical research audience, identify the steps required to calculate VOI for a proposal, and highlight software that can be used to readily compute these estimates based on information available in a research protocol.

Introduction

Clinical researchers design research studies to answer specific and clinically meaningful scientific questions to improve patient lives. Resulting positive studies, or those that confirm a priori hypotheses, provide value by informing the use of effective and beneficial healthcare interventions. Negative studies also provide value, but by avoiding interventions with unproven effectiveness or for which harms outweigh benefits. Given the limitations in grant funding for research, there is an unmet need to robustly and simply quantify the importance and value of competing clinical research studies.

A research funder faces the challenge of selecting a small set of studies to fund from a larger pool of proposals, even after proposals achieve the benchmarks of scientific rigor and integrity. Committing to fund one study means that funders cannot use the same funds to fund another equally meritorious proposal. Therefore, quantifying the value of a proposed clinical study can be of great importance to the funder of the study.

Indeed, the demand for evidence of a favorable return on investment (ROI) in clinical research is becoming the norm due to the growing political pressure to defend public spending on research [i]. For example, the Affordable Care Act specifically directs the Patient-Centered Outcomes Research Institute (PCORI) to:

“… establish and update a research project agenda …taking into consideration the types of research that might address each priority and the relative value (determined based on the cost of conducting research compared to the potential usefulness of the information produced by research) associated with the different types of research, and such other factors as the Institute determines appropriate.” (PPACA Sec6301/1181SSA, Page 66)

A framework for quantifying the value of a proposed research study can begin with the consideration of three factors or mechanisms that are essential in improving individual and population health:

  • 1) Evidence estimation: the proposed research will generate evidence to estimate a population-level parameter more precisely than it is currently known,

  • 2) Impact on clinical practice: new evidence will influence clinical practice, and

  • 3) Impact on health: changes in clinical practice will lead to improved health outcomes for the target population.

In this paper, we provide a conceptual foundation for clinical researchers about how to quantify these three factors in order to quantify the value of a comparative research proposal.

Evidence estimation

Although the goal of a clinical study is to find a population-level answer to a scientific question (which can be about a treatment effect, disease risk, or other important information), a study can only provide an answer from the sample chosen for the study. Consequently, inference about the population-level answer is laced with uncertainty because of the potential sampling variability of the study. That is, an identical study conducted in an identical setting with a different sample from the same target population could generate a different sample value. In statistical jargon, the research question can be thought of as a statistical parameter whose population value is known only imprecisely, based on prior research.

For example, a few small studies may indicate that empagliflozin, an inhibitor of sodium-glucose cotransporter 2, when used in addition to standard care, may reduce cardiovascular morbidity and mortality (i.e., the hazard ratio for empagliflozin is less than 1) in patients with type 2 diabetes at high cardiovascular risk. Only a large trial was able to confirm this assertion by estimating the hazard ratio with sufficient precision [ii]. Thus, additional research on the question can improve the current estimates of the population answer and is akin to obtaining a more precise estimate of the parameter. Researchers assume that a well-designed study will lead to a more precise estimate of that parameter. This is a reasonable assumption and is true whether the study is ‘positive’ or ‘negative.’

Impact on clinical practice

An ultimate goal of clinical studies is that patients and their physicians behave differently after knowing the more precise answer to the particular question (e.g., write a different prescription, be more adherent to treatment, or establish a new protocol for clinical care). Empirical studies of healthcare technology diffusion convincingly show that this does not happen automatically: adoption of research results in clinical practice is variable and depends on a complex set of factors, including the availability of other treatments, reimbursement policies, and budgetary and delivery challenges [iii]. However, precise evidence often establishes the necessary conditions to call for changes in clinical treatments.

Impact on health

Appropriate changes in treatments in response to more precise information are assumed to ultimately lead to improved quality of life and/or longer life for patients. While this is most often the case, the long-term impacts of treatments are rarely quantified within a clinical trial and often require linking surrogate measures to comprehensive outcomes via simulation approaches.

Value of Information Analysis

How can these factors be explicitly evaluated, and how can their ultimate impact on the expected or anticipated value of clinical research studies be quantified? Value of information (VOI) analysis, which assists with the prioritization of healthcare research, may offer a way forward. VOI applies methods from economic theory and decision analysis to estimate the clinical and economic returns to society from performing a clinical study. VOI analyses can form quantitative estimates of the potential impact of a research study and ultimately inform the ROI of research by comparing the impact of the research to the costs of conducting the study.

VOI analyses have been widely applied to assess the value of research across a variety of diseases. A recent systematic review identified 86 peer-reviewed and published VOI applications, of which 13 suggested no further research was necessary, 66 recommended further research, and 7 gave no recommendation [iv].

To understand VOI, one must distinguish between quantifying the impact of a research study already conducted with its results known (i.e., a retroactive assessment) and the potential impact of a research study that is proposed (i.e., an assessment of anticipated impacts). A retroactive assessment often takes the results of a study and puts them in a simulation model to forecast the impact of the results on the target population. For example, a study published in 2014 in the Annals of Internal Medicine estimated that the net realized economic return of the Women’s Health Initiative (WHI) estrogen plus progestin trial was $137 billion, a 1000:1 return on the investment made by NHLBI [v]. Estimating this ROI was possible because the results of the WHI study were already known. However, what if we were to ask what the returns from the WHI study would be before it was conducted? We must account for all potential results that could come out of the WHI study, carry out analyses similar to the Annals paper for each potential result, and then average the impacts over those results. This ex-ante approach is essentially a VOI analysis. The formality of the VOI approach lies in quantifying the uncertainty in the potential study results using the mathematical tool for uncertainty – a probability distribution.

A VOI analysis can also be envisioned as a comparison between conducting a research study versus not conducting the research study (Figure 1). Conducting the research would mean that its results may influence clinical practice and, through practice, population health. Not conducting the research would mean that the status quo in clinical practice persists. This is illustrated in a stylized Figure 1 that considers the value of conducting a research study to find out whether a treatment is superior to the standard of care. Note that the decision tree illustrated in Figure 1 is intentionally asymmetric in design. This is because, in one instance, decision makers make final treatment choices/adoption decisions without knowing the truth about the effectiveness of treatment, while in the other, decision makers make these decisions only after knowing the truth from research. The key insight here is that without the study, a decision about whether to offer the treatment to patients would be made under the uncertainty of the prevailing evidence. So, there is a chance that this decision could be wrong and hence reduce the expected outcomes. With the research study (assuming it is a perfect study that resolves all uncertainty, as described below), this decision uncertainty is removed. Treatment is offered only when it is known with certainty that it is superior to standard of care. The difference between the expected outcomes under these two scenarios produces the expected value of this research study.

Figure 1:


A stylized figure to illustrate that VOI for a research study is a comparative analysis of conducting versus not conducting the study

Let us illustrate these methods with a stylized example of secondary prevention of cardiovascular events. We will discuss later the web-based tools available to clinical researchers to estimate VOI for their studies.

A Value of Information Example

Take the example of a research study on the effects of low- versus high-dose aspirin on secondary prevention of cardiovascular outcomes for patients after a heart attack. This comparison was the basis for a large PCORI-funded clinical trial called ADAPTABLE [vi]. For this illustration, we will keep the details from that trial to a minimum, including the specific doses, and highlight the VOI calculation phases.

The clinical researcher proposing this study believes that there is value in finding which dosage provides the best balance of efficacy and complication rates. Prior evidence pointed toward high-dose aspirin being slightly more effective than low-dose treatment in reducing ischemic events in the setting of secondary prevention. However, there was too much uncertainty around these effect sizes to determine the optimal dosage of aspirin, especially when the risks of gastrointestinal bleeding are considered [vii,viii,ix]. In clinical practice, 80% of eligible patients received low-dose aspirin [x].

The researcher wants to resolve this uncertainty and establish superiority, non-inferiority, or equivalence between the two doses. Regardless of the trial’s design purpose, its impact would be driven by a) how clinical practice would respond or change after the conduct of the trial and b) how this change, or lack thereof, in clinical practice would influence long-term outcomes such as quality of life and survival of patients.

To estimate the VOI of a trial, one can construct a decision model that accounts for the population studied and the clinical outcomes of the trial (see Figure 1). If the trial employs surrogate endpoints, then the model must also account for the epidemiological links between those endpoints and comprehensive endpoints such as life expectancy. For the aspirin dosing trial, the likely endpoint is major adverse cardiac events (MACE), and we could create a model for high-risk patients who have a history of myocardial infarction or documented atherosclerotic cardiovascular disease. Each parameter in the model (such as the risk of MACE) has a probability distribution to reflect the corresponding risks and uncertainties. Some of the parameters would be specific to the alternate aspirin dosage. When these parameter values are propagated through the model, they produce the probability distribution of outcomes, such as life expectancy under each dosage arm. A full analysis of this problem, along with a complete VOI analysis, can be found in Basu and Meltzer (2018) [xi].
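To make the idea of propagating parameter uncertainty concrete, here is a toy sketch in Python. It is not the calibrated model of Basu and Meltzer (2018); the Beta and lognormal distributions, the constant-mortality survival model, and every number below are illustrative assumptions only.

```python
import random

random.seed(1)

def life_expectancy(annual_mortality, horizon=40):
    # Expected years lived over a fixed horizon under a constant annual risk
    alive, total = 1.0, 0.0
    for _ in range(horizon):
        total += alive
        alive *= 1 - annual_mortality
    return total

sims = 5000
le_low, le_high = [], []
for _ in range(sims):
    base_risk = random.betavariate(20, 480)       # uncertain annual risk (~4% on average)
    hr_high = random.lognormvariate(-0.02, 0.05)  # uncertain hazard ratio, high vs low dose
    le_low.append(life_expectancy(base_risk))
    le_high.append(life_expectancy(base_risk * hr_high))

# Each arm now carries a distribution of life expectancy, not a single number
print(round(sum(le_low) / sims, 1), round(sum(le_high) / sims, 1))
```

Propagating each sampled parameter set through the model yields a distribution of life expectancy per arm, which is exactly the kind of input a VOI calculation requires.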

The Concept of Expected Value of ‘Perfect’ Information: An Ideal Clinical Trial of Infinite Size

In this next section, we illustrate the VOI calculation using a stylized value and distribution of life-expectancy outcomes under each arm of the aspirin trial. We start by considering “possibilities” that reflect the true outcomes plausible under each treatment choice. Only one of these outcomes will be realized when treatment is received. In reality, many different levels of outcomes could be plausible, and this uncertainty is often reflected using a continuous probability distribution. However, in our stylized example, let there be three possibilities for outcomes from this trial as shown in Table 1. The probability with which each possible outcome can occur is also shown in Table 1. A dose is considered superior/optimal if it generates higher life expectancy than its alternative.

Table 1:

Potential results on the effect of aspirin dose on life expectancy after research is conducted with an infinite population sample.

Possibility #*   Probability of these Outcomes   Average Life Expectancy with Low-Dose Aspirin   Average Life Expectancy with High-Dose Aspirin   Optimal Choice of Dose
1                25%                             14.9 years                                      14.5 years                                       Low dose
2                50%                             14.7 years                                      15.0 years                                       High dose
3                25%                             14.6 years                                      14.9 years                                       High dose

* Possibilities reflect the true outcomes that are plausible under alternative treatment choices.

The first thing to notice about the outcomes in Table 1 is that there is no uncertainty associated with these numbers, such as standard errors. The first step of VOI analysis is calculating the Expected Value of Perfect Information (EVPI), where ‘perfect’ corresponds to the notion that the research will be conducted on an “infinite sample” that would provide the true average life expectancies under each treatment arm with absolute precision. The expected value of a research study with an infinite sample forms the upper bound of the VOI calculations. While achieving this value is not possible with any realistic research trial, this upper bound is important, as discussed later.

To understand how EVPI is calculated, we compare the average outcomes if the status quo persists, i.e., 80% of the target population gets low-dose aspirin and 20% gets high-dose, versus the average outcome if our choice of aspirin dose for 100% of the population aligns with the one that will produce the best outcomes. Note that in the status quo, 80% of physicians choose low-dose even though high-dose has a higher prior probability of being superior. This may be driven by physicians’ risk aversion: they need to see stronger evidence of the superiority of high-dose in order to change practice.

There are a few things to notice in this stylized example in Table 1, which will help in the calculation of the average outcomes under each alternative scenario:

  • 1) There is a 25% chance that using low-dose aspirin will produce better outcomes than using high-dose aspirin (possibility #1). The chance of high-dose aspirin producing better outcomes is 75% (possibilities #2 and #3). Nevertheless, there is genuine clinical equipoise between the two doses within the clinical community, which warrants a future study [xii].

  • 2) When low-dose aspirin produces better results, the expected outcome from low-dose is 14.9 years.

  • 3) Similarly, when low-dose aspirin produces better results, the expected outcome from high-dose is 14.5 years.

  • 4) When high-dose aspirin produces better results (i.e., 75% of the time), the expected outcome from low-dose is 14.7*(0.50/0.75) + 14.6*(0.25/0.75) = 14.67 years (using rows 2 and 3 of Table 1).

  • 5) Similarly, when high-dose aspirin produces better results, the expected outcome from high-dose is 15.0*(0.50/0.75) + 14.9*(0.25/0.75) = 14.97 years (using rows 2 and 3 of Table 1).
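Points 2 through 5 can be checked mechanically; the short script below recomputes the conditional expectations directly from the Table 1 rows.

```python
# Conditional expected life expectancies from Table 1 (stylized values).
# Rows: (probability, LE with low dose, LE with high dose); low dose is
# optimal only in row 1.
rows = [(0.25, 14.9, 14.5),
        (0.50, 14.7, 15.0),
        (0.25, 14.6, 14.9)]

p_low_best = rows[0][0]                  # 25%
p_high_best = rows[1][0] + rows[2][0]    # 75%

# When low dose is best (row 1 only), expectations are just that row's values
e_low_given_low_best = rows[0][1]        # 14.9 years
e_high_given_low_best = rows[0][2]       # 14.5 years

# When high dose is best (rows 2 and 3), reweight by conditional probabilities
e_low_given_high_best = (rows[1][0] * rows[1][1] + rows[2][0] * rows[2][1]) / p_high_best
e_high_given_high_best = (rows[1][0] * rows[1][2] + rows[2][0] * rows[2][2]) / p_high_best

print(round(e_low_given_high_best, 2))   # 14.67
print(round(e_high_given_high_best, 2))  # 14.97
```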

Therefore, calculation of average population outcomes would depend on the probabilities and the final payoffs of four states: high dose better and high dose given; low dose better and high dose given; high dose better and low dose given; low dose better and low dose given. These states are illustrated in Figure 2.

Figure 2:


Stylized VOI calculation for Low versus High dose Aspirin Study.

Let us consider the status quo, where decisions about treatment are made without any new research. Here, the probabilities of the four states, in line with the four branches shown in Figure 2, are:

  • Low dose given and low dose is superior: 80% * 25% = 20%

  • Low dose given and low dose is inferior: 80% * 75% = 60%

  • High dose given and low dose is superior: 20% * 25% = 5%

  • High dose given and low dose is inferior: 20% * 75% = 15%

The life expectancies for each branch are shown in Figure 2 and come from Table 1 or the calculations above. The average outcome under the status quo would be

0.20*14.9 + 0.60*14.67 + 0.05*14.5 + 0.15*14.97 = 14.75 years

Consider that with a perfect research study, we can assess which dose will produce the better outcome. If the first possibility is true, then 100% of the patients should receive the low-dose aspirin, generating an average life expectancy per person in the population of 14.9 years. If either possibility 2 or 3 is true, then 100% of the patients should receive the high-dose aspirin, generating average life expectancy per person in the population of 14.97 years. Again, since we know the probability for each of these possibilities, we can say that, with perfect research, the expected life expectancy per person would be 0.25*14.9 + 0.75*14.97 = 14.95 years (Figure 2).

The expected value of perfect information (EVPI), 14.95 – 14.75 = 0.20 life-years per patient, reflects the difference in expected life expectancy between the status quo and treatment use after the perfect trial is conducted. The difference arises due to the change in the use rate of low- or high-dose aspirin (Figure 2).
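The EVPI arithmetic above can be reproduced in a few lines, using the probabilities and conditional expectations from the stylized example:

```python
# EVPI for the stylized aspirin example: status quo vs. perfect information.
p_low_best, p_high_best = 0.25, 0.75
share_low, share_high = 0.80, 0.20   # current practice: 80% low dose, 20% high dose

# Conditional expected life expectancies (years), as derived from Table 1
e_low_when_low_best, e_high_when_low_best = 14.9, 14.5
e_low_when_high_best, e_high_when_high_best = 14.67, 14.97

# Status quo: the dose is chosen without knowing which is truly superior
e_status_quo = (share_low * p_low_best * e_low_when_low_best +
                share_low * p_high_best * e_low_when_high_best +
                share_high * p_low_best * e_high_when_low_best +
                share_high * p_high_best * e_high_when_high_best)

# Perfect information: everyone receives whichever dose is truly superior
e_perfect = p_low_best * e_low_when_low_best + p_high_best * e_high_when_high_best

evpi = e_perfect - e_status_quo
print(round(e_status_quo, 2), round(e_perfect, 2), round(evpi, 2))  # 14.75 14.95 0.2
```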

The underlying assumption here is that with precise information adoption of the right dose will be immediate and perfect. Although this is never achieved in practice, this assumption produces an upper bound or the maximum benefit of the value of research.

To obtain the population value of this “perfect” research, one can multiply this per-patient estimate by the size of the incident target population per year (N) and the number of years (T) for which this comparative question remains relevant.

Population EVPI = Per-Patient EVPI × N × T

For example, the population EVPI for the stylized aspirin study would be about $210 billion (Figure 3).
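In code, the population EVPI is a single multiplication. Figure 3’s inputs are not reproduced here, so N, T, and the dollar value per life-year below are illustrative assumptions chosen only to show how a figure on the order of $210 billion can arise.

```python
# Population EVPI = per-patient EVPI * N * T, optionally monetized.
per_patient_evpi_ly = 0.20      # life-years per patient (from the stylized example)
n_per_year = 1_050_000          # assumed incident target population per year
t_years = 10                    # assumed years the comparative question stays relevant
value_per_life_year = 100_000   # assumed dollar value of one life-year

population_evpi_ly = per_patient_evpi_ly * n_per_year * t_years
population_evpi_usd = population_evpi_ly * value_per_life_year
print(round(population_evpi_ly))   # about 2.1 million life-years
print(round(population_evpi_usd))  # about $210 billion under these assumptions
```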

Figure 3:


Population EVPI Calculations

Although such a monetized value estimate seems astounding for such a simple question, one must remember four features of EVPI when interpreting its results:

  1. The EVPI is an upper bound of the expected value of research. That is, it reflects the value of perfect research data where all uncertainties about outcomes are eliminated. Such perfect research studies are impossible to fund, design and perform. However, EVPI estimates are important because if this value of a perfect study is close to or lower than the costs of conducting the perfect study, then there is no reason to perform the study. In essence, EVPI provides a necessary but not sufficient condition to proceed with the planning of a research study.

  2. While we have measured outcomes in terms of life expectancy in the example above, other outcomes can be used instead. Researchers have used quality-adjusted life years (QALYs) and net health benefits (QALYs – healthcare costs) to calculate EVPI. The calculated value for EVPI would depend on which objective function of population health the researcher or funder considers most important.

  3. Economists often discount the value generated in future years compared to those generated in a current year. Therefore, calculation of population EVPI should reflect the net present value of the stream of population impact generated by the research study over time.

  4. Lastly, a large value such as $210B does not mean that the study provides a better ROI than other possible studies: even though such a value will likely exceed the research investment costs of a single study, the returns from the full portfolio of possible studies must be considered. Also, before considering comparative returns, we need to be more specific in our approach and calculate returns from actual, rather than idealized, studies.
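Point 3’s discounting can be sketched by valuing each future year’s cohort at its net present value; the discount rate and population figures below are illustrative assumptions, not values from the paper.

```python
# Net present value (NPV) of the population EVPI, discounting each future
# year's cohort of patients. N, T, and the 3% rate are illustrative assumptions.
per_patient_evpi_ly = 0.20     # life-years per patient, from the stylized example
n_per_year = 1_050_000         # assumed incident patients per year
t_years = 10                   # assumed years of relevance
discount_rate = 0.03           # assumed annual discount rate

npv_ly = sum(per_patient_evpi_ly * n_per_year / (1 + discount_rate) ** t
             for t in range(t_years))
print(round(npv_ly))  # below the undiscounted total of 2.1 million life-years
```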

The Concept of Expected Value of ‘Sample’ Information: Actual Clinical Trials

When the clinical importance and the EVPI are sufficient to warrant conducting a research study, the next question is whether the expected value of a real, less-than-perfect study is sufficient to justify an investment of research funds. A less-than-perfect but achievable study does not imply a flawed research design; rather, it reflects the practical limitations of clinical research that strives to answer the research question with precision and at a reasonable cost. Unlike the perfect research of Table 1, the impact of treatment after the research study retains uncertainty, although the overall uncertainty should decrease after the trial to make the research worthwhile. Table A1 presents a more realistic version of the potential results after a research study has been completed.

The primary question is, ‘Can the research study reduce uncertainty to the extent that it influences clinical practice and clinical outcomes?’ Naturally, a proposed study with a smaller sample size will produce a limited reduction in uncertainty and have a lower chance of altering clinical practice than a study with a larger sample size. Conversely, a large study of a small benefit may reduce uncertainty but have little chance of altering clinical practice. Following the three principles of the VOI framework, we calculate the expected value of a real research study, with a realistic or finite sample size, by quantifying 1) reductions in uncertainty, 2) the corresponding changes in treatment decisions, and 3) their potential impact on patient health. This expected value is known as the Expected Value of Sample Information (EVSI). EVSI is the appropriate metric to use when comparing two independent studies that are vying for the same pot of research money.
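A minimal EVSI sketch follows, assuming a normal prior on the treatment effect and a normally distributed trial estimate; these conjugate assumptions and all parameter values are illustrative (the paper’s Appendix works through the stylized aspirin example instead).

```python
import math
import random

random.seed(7)

# Monte Carlo EVSI sketch under a normal prior / normal likelihood.
# delta = true life-expectancy gain (years) of high- vs low-dose aspirin.
mu0, sd0 = 0.2, 0.3   # prior; implies P(delta > 0) of roughly 75%, echoing the example
sigma = 5.0           # assumed SD of individual life-expectancy outcomes
n_per_arm = 1000
se = sigma * math.sqrt(2 / n_per_arm)   # SE of the trial's estimate of delta
w = sd0**2 / (sd0**2 + se**2)           # conjugate posterior weight on the trial data

sims = 50_000
gain = 0.0
for _ in range(sims):
    delta = random.gauss(mu0, sd0)        # a plausible truth
    estimate = random.gauss(delta, se)    # a simulated trial result
    post_mean = w * estimate + (1 - w) * mu0
    gain += max(post_mean, 0.0)           # value of choosing the best arm after the trial

value_with_trial = gain / sims
value_without_trial = max(mu0, 0.0)       # best arm chosen on current evidence alone
evsi_per_patient = value_with_trial - value_without_trial
print(round(evsi_per_patient, 3))
```

The simulation averages over all plausible trial results, weighting each by how likely it is under the prior, which is exactly the ex-ante averaging described earlier.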

A more detailed description of the intuition behind the calculation of EVSI as applied to the stylized example discussed above can be found in the accompanying Appendix.

Sample size determination

An important use of the EVSI calculation is determining the optimal sample size for a research study. Traditionally, the sample size for a study is determined using power calculations to balance Type 1 (false positive) and Type 2 (false negative) errors. Alternatively, EVSI estimates can be obtained for different sample sizes of the proposed study. Typically, EVSI increases with sample size and asymptotes to EVPI. EVSI takes a decision-theoretic approach and explicitly assigns a loss function or impact estimate for the Type 1 and Type 2 errors. Thus, the decision about the optimal tradeoff between the two errors is driven by the losses that these errors impose on the population. An advantage of the EVSI approach is that it can directly inform whether the additional increase in sample size and lower risk of Type 2 error, which comes with an incremental cost, is worth the investment. For example, comparing the profile of EVSI as a function of sample size with the rising costs of sampling provides a benefit-cost calculus for determining the optimal sample size of a proposed research study. Therefore, the EVSI approach to optimal sample size determination can complement traditional approaches to support the research design proposed.
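Under the same normal-normal assumptions used above, EVSI has a closed form as a function of sample size and can be netted against sampling costs to suggest an optimal n; every parameter here is an illustrative assumption, not a value from the paper.

```python
import math

# Closed-form normal-normal EVSI as a function of per-arm sample size, netted
# against sampling costs. All parameters below are illustrative assumptions.
mu0, sd0, sigma = 0.2, 0.3, 5.0   # prior on the life-expectancy gain; outcome SD
pop = 100_000                     # assumed patients affected by the decision
value_per_life_year = 100_000     # assumed dollar value of one life-year
cost_per_subject = 5_000          # assumed marginal cost per enrolled subject

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def evsi_per_patient(n_per_arm):
    se2 = 2 * sigma**2 / n_per_arm                # variance of the trial's estimate
    s = sd0**2 / math.sqrt(sd0**2 + se2)          # preposterior SD of the posterior mean
    z = mu0 / s
    # E[max(X, 0)] - max(mu0, 0) for X ~ N(mu0, s): gain from deciding after the trial
    return mu0 * norm_cdf(z) + s * norm_pdf(z) - max(mu0, 0.0)

def expected_net_benefit(n_per_arm):
    return (evsi_per_patient(n_per_arm) * pop * value_per_life_year
            - 2 * n_per_arm * cost_per_subject)   # both arms incur enrollment costs

best_n = max(range(100, 20001, 100), key=expected_net_benefit)
print(best_n, round(expected_net_benefit(best_n)))
```

Because EVSI rises with diminishing returns while sampling costs rise linearly, the net benefit peaks at an interior sample size rather than at the largest affordable trial.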

Adoption of Evidence – the second effect in the VOI framework

A key factor determining the population impact of new evidence is how that evidence is implemented in practice. This is the second effect in our VOI framework, linking the generation of new evidence through research studies to the realization of population health impacts; we kept it implicit in the calculations shown above. There are two parts to this implementation process. The first is choosing a decision criterion that indicates what should be implemented in practice. This is a belief that the funder (or another decision maker) holds about the level of evidence needed to adopt the results of a trial. For example, the funder may believe that a typical physician would adopt treatment over control if the expected life expectancy from treatment is higher than control, irrespective of the uncertainty associated with that expectation. Alternatively, the funder may believe that the physician would adopt a new treatment only if the physician is at least 95% confident that the expected life expectancy from treatment is higher than control. Different decision criteria may be used under different scenarios. An analyst can specify alternative decision criteria to calculate alternative VOI estimates for the same research study. More discussion of the use of alternative decision criteria can be found in Bennette et al. (2016) [xiii] and Basu and Meltzer (2018) [xi].

The second part of implementation is that, given a decision criterion, many other factors drive the actual adoption or implementation of trial results in clinical practice. There is a substantial literature documenting the factors that affect adoption of a new technology or implementation of clinical guidelines in practice. In an ideal setting, we would have predictive algorithms to forecast the implementation of evidence-based care in practice conditional on the results of a study and other factors. Such predictions can be readily incorporated within a VOI analysis, where the strength of evidence from a trial would also predict the implementation of the trial results in practice. Typically, however, a VOI calculation assumes that adoption of trial results is immediate once a chosen decision criterion is met. This assumption produces an upper bound for both EVPI and EVSI metrics and is often sufficient for conveying the absolute and relative value of trials.

Challenges with VOI approach

Quantifying the value of reducing uncertainty, although useful, is challenging. An important stage in VOI is characterizing the current uncertainty, but this process can be difficult due to limited data. Mathematically, we often have to rely on probability distributions to reflect uncertainty in current evidence. Such distributional assumptions are typical of modeling exercises; for example, most traditional sample size calculations rely on estimates of uncertainty and Gaussian distributional assumptions. Moreover, ignoring uncertainty because of the difficulty of expressing it accurately inherently implies that there is no uncertainty in current evidence, which could be misleading. Often, robustness checks against alternative distributional assumptions are carried out to ensure that such assumptions are not driving idiosyncratic results.

Since one of the primary goals of VOI is to help in research prioritization, VOI estimates are useful if they can be expressed in the same metrics across different studies. In our stylized example, we used life expectancy as the primary metric to express VOI. When we are talking about value, it is important to express these estimates in terms of comprehensive outcomes such as life expectancy, quality-adjusted life years, or net health benefits. This creates a challenge for researchers wanting to study the effect of an intervention on a surrogate outcome, such as exercise capacity, who must calculate a VOI estimate for their study in comprehensive units. However, it is the researchers’ responsibility to demonstrate why their study is more valuable to fund than an alternative that attempts to establish a more direct effect on comprehensive outcomes. In many cases, epidemiological data can be used to extrapolate effects on surrogate outcomes to effects on comprehensive outcomes, and a good VOI analysis will account for the uncertainty in this extrapolation when calculating the value of the study.

Finally, challenges exist regarding the time and expertise required to carry out these analyses. Although placing VOI in a uniformly standard template can be difficult, we describe in the next section some of the latest software developments that mitigate some of these challenges.

Software for EVPI and EVSI calculations

Computational algorithms for VOI estimates are well documented in the literature (see the ISPOR VOI Task Force for a comprehensive review [xiv]). However, these algorithms often rely on a probabilistic simulation model having already been constructed to compare outcomes, which can then be used to generate estimates of VOI under different scenarios. For the trialist looking to obtain an estimate of VOI, this may be a significant challenge, as models are typically very specific and not easily manipulated by those untrained in modeling. To address this issue, we developed a web-based platform that takes inputs on effect sizes, population sizes, and other relevant parameters available to a trialist, runs a validated cardiovascular model, and produces estimates of EVPI and EVSI in life-years. This is known as the Value of Information for Cardiovascular Trials and Other Comparative Research (VICTOR) Platform. The platform is in a beta-testing phase and freely available to the public at https://sop.washington.edu/choice/research/research-projects/victor/

It is important to note that this software applies only to the specific content domain of cardiovascular outcomes in the US population. Experience with its use can guide the applicability and application of VOI in this domain in the future.

The site also hosts several resources on VOI, along with detailed guides for determining whether VICTOR applies to the user’s trial and how to use VICTOR. The tool is intended to introduce and increase the use of these powerful concepts in valuing research investments. It also allows manipulation of various parameters, such as an overall effect or sample size, to optimally plan clinical trials.

Conclusion

Given limits to research budgets worldwide, it is increasingly clear that researchers need to better quantify the value of their proposed study. In most proposals, researchers describe why their proposed study could have a significant impact on clinical practice and patient health, but typically in broad, qualitative terms. Utilizing VOI analysis provides quantifiable estimates of trial impact that can positively inform the funder’s judgment in selecting which proposal to fund.

VOI’s uptake by institutions prioritizing and commissioning research has been limited, partly because of the need to develop a decision model and apply advanced statistical methodologies, and the resulting time required to carry out a VOI analysis.xv It is also possible that funders do not believe they have a prioritization problem, although recent interest in VOI methods by funders such as NIH and PCORI suggests otherwise. With advances in software and web-based tools like VICTOR, the hope is that these challenges will largely be mitigated. A recent study evaluated the feasibility and outcomes of incorporating VOI analysis into a stakeholder-driven research prioritization process for cancer genomics trials in a US-based setting. When presented with VOI information, over 50% of stakeholders modified their rankings of trial fundability, nearly 70% stated that the VOI data were useful, and all supported inclusion of VOI data in future prioritization processes.xvi Further work, funded by PCORI, developed an efficient and customized process to calculate the expected VOI of cancer clinical trials that is feasible for use in decision making and was found to be acceptable to investigators within the Southwest Oncology Group, a large clinical trial collaborative that recommends funding to the National Cancer Institute.xvii,xviii

In summary, research is of high value when it has the potential to influence practice and outcomes positively, and such assessments need to be included in major grant proposals. VOI analysis can help make these assessments. There are two reasons why presenting a VOI estimate with a proposal is worthwhile. First, VOI allows the researcher to highlight the nuances of the study's impact at both the population and the individual patient level. This is especially true when the researcher proposes a study with surrogate outcomes as endpoints and faces ambiguity from reviewers about the value of studying surrogate endpoints. Second, it gives the funder a common language for comparing alternative proposals that examine different populations, compare different interventions, and use different endpoints. Online tools such as the VICTOR platform will allow researchers to calculate VOI more easily, see the value of their research, and hone their approach to optimize VOI. Similarly, we hope that funders will welcome VOI estimates submitted with each proposal and consider these metrics as important inputs into their decisions about selecting and funding the highest-value proposals.

Acknowledgment

This research was supported by funding from the National Institutes of Health (R01HL126804, PI: Basu). We thank William Applegate, David Meltzer, and an anonymous reviewer for helpful suggestions.

APPENDIX

Calculation of EVSI is more complicated than that of EVPI. EVSI requires accounting for the sampling variability of a realistic, finite sample size to estimate the final distribution of results on which the expected values will be based. Here, it is convenient to invoke the concept of Bayesian updating. One starts with prior knowledge of the distribution of results; the trial with a finite sample size then produces a data point, such as mortality. Combining this data point with the prior knowledge gives a post-trial, or posterior, estimate of results. The posterior distribution of results then reflects the new possibilities shown in Table A1, and the expected value calculations follow the same principle as for EVPI but with these new possibilities. That is, EVSI is the difference between the weighted average of the optimal choice-specific life expectancy for each new possibility of results from the posterior distribution and the expected life expectancy under the status quo.

In Table A1, we begin with the same three possibilities for true results as in Table 1. Now, however, research cannot precisely reveal which of these possibilities will actually be realized when a treatment is used. Research with a finite sample will only reveal sample results with an associated sampling variability, as the sample is drawn from a population in which one of the three possibilities is true. Table A1 therefore shows the potential results from a finite-sample trial under each possibility. For each trial result, we can calculate a posterior mean following Gaussian conjugacy. A simple calculation for low-dose outcomes under possibility 1 is shown here:

  • Prior mean for low dose = μ_LO = 14.725 years

  • Prior variance for low dose = σ²_LO = 0.012

  • Sample mean for low dose = y_LO = 14.3

  • Sample variance of mean for low dose = s²_LO = 0.76

  • Posterior mean for low dose = μ_LO + (y_LO − μ_LO) × σ²_LO / (s²_LO + σ²_LO) = 14.72
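The Gaussian conjugate update in the bullets above can be reproduced in a few lines of Python. This is a minimal sketch: the function name `posterior_mean` and the variable names are illustrative, and the numerical inputs are the values listed above.

```python
# Gaussian conjugate update with a known sampling variance:
# the posterior mean shrinks the sample mean toward the prior mean
# in proportion to their relative precisions.
def posterior_mean(prior_mean, prior_var, sample_mean, sample_var):
    shrinkage = prior_var / (sample_var + prior_var)
    return prior_mean + (sample_mean - prior_mean) * shrinkage

# Low-dose arm under possibility 1: prior mean 14.725 (variance 0.012),
# trial sample mean 14.3 (variance of mean 0.76).
mu_post = posterior_mean(14.725, 0.012, 14.3, 0.76)
print(round(mu_post, 2))  # 14.72
```

Because the prior variance (0.012) is much smaller than the sampling variance (0.76), the posterior mean stays close to the prior; the same function applied to the high-dose arm, whose prior is more diffuse, moves further toward the trial result.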

The rest of the posterior means are shown in Table A1. These posterior means are then used as the new possibilities for determining the optimal choice. Under these trial results, high dose turns out to be the optimal treatment under each possibility. Therefore, the overall expected life expectancy realized after conducting the finite-sample research is the weighted average of the true mean outcomes for high dose: 0.25 × 14.5 + 0.50 × 15.0 + 0.25 × 14.9 = 14.85 years.

The expected value of sample information (EVSI) is the difference in expected life expectancy between this scenario and the status quo: 14.85 − 14.75 = 0.10 life-years per patient. This is, of course, less than the EVPI, but likely achievable with finite resources. As with the population EVPI, one can calculate a population EVSI using the same approach, which comes to 1.05 million life-years, or about $105 billion. This indicates that an investment in this research study is still worthwhile, even though the true gain in life-years is smaller. A more elaborate EVSI analysis for this question can be found in Basu and Meltzer (2018).xi
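The EVSI arithmetic above can be checked with a short script. This is a sketch using only numbers quoted in this appendix and attributed to Tables 1 and A1 of the paper; the variable names are illustrative.

```python
# Values from Table A1 (posterior means) and Table 1 (true high-dose
# outcomes 14.5 / 15.0 / 14.9 and the status-quo expectation 14.75).
probs          = [0.25, 0.50, 0.25]
post_mean_low  = [14.72, 14.70, 14.72]
post_mean_high = [14.79, 14.93, 14.78]
true_high      = [14.5, 15.0, 14.9]  # realized outcomes if high dose is chosen

# The trial makes high dose the optimal choice under every possibility,
# so the post-trial expectation weights the true high-dose outcomes.
assert all(h > l for h, l in zip(post_mean_high, post_mean_low))
expected_post_trial = sum(p * t for p, t in zip(probs, true_high))  # 14.85

# EVSI per patient: post-trial expectation minus status quo.
evsi_per_patient = expected_post_trial - 14.75
print(round(evsi_per_patient, 2))  # 0.1
```

Scaling the per-patient figure of 0.10 life-years by the relevant patient population yields the population EVSI reported in the text.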

Table A1:

Potential results on the effect of aspirin dose on life expectancy after research is conducted with a realistic or finite sample size.

Possibility # | Probability of such an outcome | Trial results: Mean (Variance of Mean) | Posterior Average Life Expectancy with Low Dose Aspirin* | Posterior Average Life Expectancy with High Dose Aspirin | Optimal Dose
1 | 25% | Low: 14.3 (0.76); High: 14.4 (0.28) | 14.72 | 14.79 | High
2 | 50% | Low: 14.1 (0.28); High: 15.3 (0.19) | 14.70 | 14.93 | High
3 | 25% | Low: 14.6 (1.20); High: 14.3 (0.28) | 14.72 | 14.78 | High

* From Table 1: prior mean for low dose = μ_LO = 14.725 years; prior variance for low dose = σ²_LO = 0.012; prior mean for high dose = μ_HI = 14.85 years; prior variance for high dose = σ²_HI = 0.0425. Posterior means are computed as Gaussian precision-weighted averages of the prior mean and the sample mean.

References

  • i. Akpan N. Do taxpayers get their money’s worth from the National Institutes of Health? https://www.pbs.org/newshour/science/taxpayers-get-moneys-worth-national-institutes-health , Accessed December 14, 2017.
  • ii. Zinman B, Wanner C, Lachin JM, et al. Empagliflozin, cardiovascular outcomes, and mortality in type 2 diabetes. New England Journal of Medicine 2015; 373: 2117–2128.
  • iii. Francke AL, Smit MC, de Veer AJ, Mistiaen P. Factors influencing the implementation of clinical guidelines for health care professionals: A systematic meta-review. BMC Medical Informatics and Decision Making 2008; 8: 38.
  • iv. Thorn J, Coast J, Andronis L. Interpretation of the expected value of perfect information and research recommendations: A systematic review and empirical investigation. Medical Decision Making 2016; 36: 285–295.
  • v. Roth JA, Etzioni R, Walters TM, et al. Economic returns from the Women’s Health Initiative estrogen plus progestin clinical trial. Annals of Internal Medicine 2014; 160: 594–602.
  • vi. http://theaspirinstudy.org/, Accessed April 24, 2018.
  • vii. Antiplatelet Trialists’ Collaboration. Collaborative overview of randomized trials of antiplatelet therapy - I: Prevention of death, myocardial infarction, and stroke by prolonged antiplatelet therapy in various categories of patients. BMJ 1994; 308: 81–106.
  • viii. Antithrombotic Trialists’ Collaboration. Collaborative meta-analysis of randomised trials of antiplatelet therapy for prevention of death, myocardial infarction, and stroke in high risk patients. BMJ 2002; 324: 71–86.
  • ix. Kong DF, Hasselblad V, Kandzari DE, Newby LK, Califf RM. Seeking the optimal aspirin dose in acute coronary syndromes. Am J Cardiol 2002; 90: 622–625.
  • x. Yu J, Mehran R, Dangas GD, et al. Safety and efficacy of high-dose versus low-dose aspirin after primary percutaneous coronary intervention in ST-segment elevation myocardial infarction. JACC: Cardiovascular Interventions 2012; 5(12): 1231–1238.
  • xi. Basu A, Meltzer D. Decision criterion and value of information analysis: optimal aspirin dosage for secondary prevention of cardiovascular events. Medical Decision Making 2018; 38(4): 427–438.
  • xii. Freedman B. Equipoise and the ethics of clinical research. New England Journal of Medicine 1987; 317(3): 141–145.
  • xiii. Bennette CS, Veenstra DL, Basu A, Ramsey SD, Carlson JJ. Development and evaluation of an approach to using value of information analyses for real-time prioritization decisions within SWOG, a large cancer clinical trials cooperative group. Medical Decision Making 2016; 36(5): 641–651.
  • xiv. ISPOR Task Force on Value of Information Analysis for Research Decisions: Emerging Good Practices. https://www.ispor.org/TaskForces/Value-of-Information-Analysis-for-Research-Decisions.asp, Accessed April 24, 2018.
  • xv. Claxton KP, Sculpher MJ. Using value of information analysis to prioritise health research: some lessons from recent UK experience. PharmacoEconomics 2006; 24(11): 1055–1068.
  • xvi. Carlson JJ, Thariani R, Roth J, Gralow J, Henry NL, Esmail L, Deverka P, Ramsey SD, Baker L, Veenstra DL. Value-of-information analysis within a stakeholder-driven research prioritization process in a US setting: an application in cancer genomics. Medical Decision Making 2013; 33(4): 463–471.
  • xvii. Bennette CS, Veenstra DL, Basu A, Baker LH, Ramsey SD, Carlson JJ. Development and evaluation of an approach to using value of information analyses for real-time prioritization decisions within SWOG, a large cancer clinical trials cooperative group. Medical Decision Making 2016; 36(5): 641–651.
  • xviii. Carlson JJ, Kim DD, Guzauskas GF, Bennette CS, Veenstra DL, Basu A, Hendrix N, Hershman DL, Baker L, Ramsey SD. Integrating value of research into NCI Clinical Trials Cooperative Group research review and prioritization: a pilot study. Cancer Medicine 2018; In Press.