Med Decis Making. 2013 Jul;33(5):671–678. doi: 10.1177/0272989X13487257

Evidence Synthesis for Decision Making 6

Embedding Evidence Synthesis in Probabilistic Cost-effectiveness Analysis

Sofia Dias, Alex J Sutton, Nicky J Welton, A E Ades

Abstract

When multiple parameters are estimated from the same synthesis model, it is likely that correlations will be induced between them. Network meta-analysis (mixed treatment comparisons) is one example where such correlations occur, along with meta-regression and syntheses involving multiple related outcomes. These correlations may affect the uncertainty in incremental net benefit when treatment options are compared in a probabilistic decision model, and it is therefore essential that methods are adopted that propagate the joint parameter uncertainty, including correlation structure, through the cost-effectiveness model. This tutorial paper sets out 4 generic approaches to evidence synthesis that are compatible with probabilistic cost-effectiveness analysis. The first is evidence synthesis by Bayesian posterior estimation and posterior sampling, where other parameters of the cost-effectiveness model can be incorporated into the same software platform. Bayesian Markov chain Monte Carlo simulation methods with WinBUGS software are the most popular choice for this option. A second possibility is to conduct evidence synthesis by Bayesian posterior estimation and then export the posterior samples to another package where other parameters are generated and the cost-effectiveness model is evaluated. Frequentist methods of parameter estimation followed by forward Monte Carlo simulation from the maximum likelihood estimates and their variance-covariance matrix represent a third approach. A fourth option is bootstrap resampling, a frequentist simulation approach to parameter uncertainty. This tutorial paper also provides guidance on how to identify situations in which no correlations exist and therefore simpler approaches can be adopted. Software suitable for transferring data between different packages is reviewed, along with software that provides user-friendly interfaces for integrated software platforms, offering investigators a flexible way of examining alternative scenarios.

Keywords: cost-effectiveness analysis, probabilistic sensitivity analysis, evidence synthesis, network meta-analysis


Probabilistic methods in decision analysis were introduced in the 1980s.1,2 Their defining feature is that they allow for a full expression of the uncertainty in model parameters. There are 2 main reasons for advocating probabilistic methods in decision making. The first is that they can provide a form of sensitivity analysis that allows investigators to easily see the joint impact of the uncertainty in multiple parameters on the expected costs and benefits and on decision uncertainty. For this reason, use of these methods is often called probabilistic sensitivity analysis. A second reason is that, faced with uncertainty in the vector/matrix of model parameters θ, decision makers should choose the decision option, D, that delivers the highest expected net benefit. In other words, the decision maker selects decision D*, such that

$$D^{*} = \operatorname*{Max}_{D}\; \mathrm{E}_{\theta}\!\left[\mathrm{NB}(D,\theta)\right].$$

This “expectation” requires an integration of the net benefit function, NB(D,θ) over the joint distribution of parameters θ. There are a wide range of methods for achieving this integration, and the appropriate choice of method depends on the algebraic structure of the net benefit function.
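As a minimal sketch of this Monte Carlo integration in R, the following code approximates the expected net benefit of two options by the simulation mean and picks the maximizer. All inputs (response probabilities, costs, QALY gain per responder, and the willingness-to-pay threshold lambda) are illustrative values, not from the paper:

```r
## Monte Carlo evaluation of expected net benefit for two options.
## All parameter values below are illustrative.
set.seed(1)
n_sim  <- 10000
lambda <- 20000                                   # willingness to pay per QALY (assumed)

## Joint parameter draws: response probabilities sampled on the logit scale
logit_p1 <- rnorm(n_sim, mean = -0.5, sd = 0.2)   # log-odds, option 1
logit_p2 <- rnorm(n_sim, mean =  0.0, sd = 0.2)   # log-odds, option 2
p1 <- plogis(logit_p1)
p2 <- plogis(logit_p2)

## Net benefit NB(D, theta) = lambda * QALYs - cost, per simulation
nb1 <- lambda * (0.6 * p1) - 1000                 # option 1: cheaper, less effective
nb2 <- lambda * (0.6 * p2) - 3000                 # option 2: dearer, more effective

## Expected net benefit approximated by the simulation mean
enb <- c(option1 = mean(nb1), option2 = mean(nb2))
enb
which.max(enb)   # D*: the option with the highest expected net benefit
```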

It must be emphasized that the expected net benefit is not the same as the net benefit at the expected value of the parameters, except in the cases where net benefit is linear in all its parameters, and there are no correlations between parameters. This is relatively rare, for several reasons. First, most evidence synthesis is performed on log or logit scales, while the parameters of cost-effectiveness models tend to be probabilities on the natural scale. The transformation is a nonlinear one. Second, many cost-effectiveness analyses (CEAs) include Markov models. Here, the net benefit functions include terms in powers of transition probabilities, again introducing nonlinearity. Third, modern methods of evidence synthesis such as network meta-analysis, also known as mixed treatment comparisons, generate estimates of several treatment efficacy parameters from a common data set, in most cases inducing correlations between parameters. These correlations may have little or no bearing on the expected net benefit of each intervention option, but they will directly affect the uncertainty in incremental net benefits between interventions. Other synthesis techniques that estimate more than 1 parameter from a single data set also induce parameter correlation structures. Perhaps the most important are meta-regression,3,4 multiple outcome synthesis,5–9 and bivariate meta-analysis.10
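The first source of nonlinearity can be seen in a two-line R illustration with assumed values: back-transforming the mean of logit-scale draws does not give the mean of the back-transformed draws.

```r
## E[g(theta)] differs from g(E[theta]) for a nonlinear transform:
## illustrative log-odds draws back-transformed to probabilities.
set.seed(2)
d <- rnorm(1e5, mean = -1, sd = 1)   # synthesis output on the logit scale
mean(plogis(d))                      # E[expit(d)], about 0.31
plogis(mean(d))                      # expit(E[d]), about 0.27: not the same
```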

It is therefore essential that software solutions are adopted which ensure that the complex uncertainty structure in parameter estimates is faithfully propagated through the decision model.11 The main purpose of this paper is to provide guidance on the computational approaches that will deliver probabilistic cost-effectiveness analysis in any situation.

Not only is Monte Carlo (MC) simulation from the joint parameter distribution the simplest way to evaluate the expected net benefit, but for any form of model it also delivers other crucial tools of probabilistic CEA, such as plots of the cost-effectiveness plane, cost-effectiveness acceptability curves, and estimates of the probability that a decision is cost-effective.12,13 MC simulation is also the easiest fully general approach to analysis of the expected value of information,14–16 unless the net benefit can be assumed to be normally distributed.17 Probabilistic methods have been recommended in a range of leading textbooks and tutorial papers and are the preferred option for submissions to reimbursement agencies such as the National Institute for Health and Clinical Excellence (NICE) in the UK.13 We therefore recommend MC simulation-based approaches for all analyses, even for strictly linear models.
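As an illustration, the sketch below computes a cost-effectiveness acceptability curve from hypothetical MC draws of incremental effects and costs; all distributions are assumed for illustration only.

```r
## Cost-effectiveness acceptability curve from Monte Carlo output:
## for each willingness-to-pay value, the proportion of simulations in
## which the new treatment has the higher net benefit.
set.seed(3)
delta_e <- rnorm(10000, 0.05, 0.03)   # incremental QALYs (assumed)
delta_c <- rnorm(10000, 800, 300)     # incremental cost (assumed)
lambdas <- seq(0, 50000, by = 1000)
ceac <- sapply(lambdas, function(l) mean(l * delta_e - delta_c > 0))
plot(lambdas, ceac, type = "l",
     xlab = "Willingness to pay", ylab = "Probability cost-effective")
```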

We address the question of which computational approaches correctly preserve the properties of the evidence synthesis within a probabilistic CEA. No advice is given on the relative merits of cohort models compared with individual patient simulation or on how to choose the best model. This paper is restricted to providing guidance on how to implement the model of choice.

We set out 4 main generic options. These should be considered the “default” approaches as they will correctly propagate the uncertainty structure in the evidence synthesis in any situation. These include 2 uses of Bayesian Markov chain Monte Carlo (MCMC) methods and 2 frequentist methods, either (1) sampling from a multivariate distribution of the estimates and their variance-covariance (VCV) matrices or (2) bootstrapping. Some guidance is provided on how to identify situations in which there are no correlations and where simpler methods, and a wider range of software, can be adopted. This article provides a brief summary of software tools that can be used to help interface between different software, and it reviews some recent developments in user-friendly “front ends” to assist the integrated use of multiple software platforms. These offer ways in which investigators can conduct scenario analyses, not just with individual parameters but also with different data sets or different synthesis models, and quickly see the impact on cost-effectiveness results.

Methods to Incorporate Synthesis Results in Probabilistic Cost-effectiveness Analysis

There are several ways in which the results of the evidence synthesis can be incorporated into the probabilistic CEA. Table 1 summarizes the methods and the restrictions on their use.

Table 1.

Summary of Methods and Their Properties and Restrictions

| Estimation | Output to CEA Software | Restrictions |
| --- | --- | --- |
| Bayesian MCMC | None: CEA within MCMC software | None |
| Bayesian MCMC | MCMC chains exported | None |
| Bayesian MCMC | Posterior means, variances, correlations | None, but assumes multivariate normality of the posterior distribution |
| Bayesian MCMC | Posterior means and variances | Only suitable if no correlation between parameters^a |
| Estimation by non-Bayesian (frequentist) methods | Parameter estimates and variance-covariance matrix | None, but assumes multivariate normality of treatment effect estimates |
| Estimation by non-Bayesian (frequentist) methods | Parameter estimates and their variances | Only suitable if no correlation between parameters^a |
| Estimation by non-Bayesian (frequentist) methods | Bootstrap resampling | None, but special methods are necessary for sparse data |

Note: CEA = cost-effectiveness analysis; MCMC = Markov chain Monte Carlo.

a. Users should ensure that the data structure and analysis methods do not imply correlations between parameters before using these methods.

Bayesian Posterior Simulation: One-Stage Approach

When estimation of the synthesis parameters is via sampling from a Bayesian posterior distribution of the relevant parameters, this can be integrated with the CEA as a single process within a single programming package, in what has been referred to as “comprehensive decision analysis.”18–20

Bayesian MCMC simulation,21 using WinBUGS,22 OpenBUGS,23 or other MCMC packages, provides the obvious example. The advantage of this approach is that it not only estimates a Bayesian posterior distribution but also is simulation-based, so that its outputs are perfectly compatible with the MC sampling approach that has become the standard modeling method in so many areas of science. Samples from the joint posterior distribution can be put directly through the decision analysis, so that net benefit and other outputs can be evaluated for each set of parameter samples, without requiring any assumption about its distributional form. Distributions of additional parameters and costs can be readily incorporated.
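The following R sketch illustrates the one-stage idea using R2WinBUGS to drive WinBUGS (a local WinBUGS installation is assumed). The toy two-trial synthesis, baseline model, and net-benefit coefficients are all hypothetical; the key point is that the net-benefit nodes sit inside the same model as the synthesis, so every MCMC iteration carries the joint parameter uncertainty straight into the CEA.

```r
## One-stage approach: net benefit as monitored nodes in the synthesis model.
library(R2WinBUGS)   # requires a local WinBUGS installation

model_string <- "
model {
  for (i in 1:ns) {                    # one row per trial, two arms
    r1[i] ~ dbin(p1[i], n1[i])
    logit(p1[i]) <- mu[i]
    r2[i] ~ dbin(p2[i], n2[i])
    logit(p2[i]) <- mu[i] + d
    mu[i] ~ dnorm(0, 0.01)             # trial-specific baselines
  }
  d ~ dnorm(0, 0.01)                   # pooled log-odds ratio
  m0 ~ dnorm(-1, 1)                    # baseline model (assumed)
  logit(pC) <- m0
  logit(pT) <- m0 + d
  nbC <- 20000 * 0.6 * pC - 1000       # net benefit inside the model (toy)
  nbT <- 20000 * 0.6 * pT - 3000
  inb <- nbT - nbC                     # incremental net benefit
}"
writeLines(model_string, "nb_model.txt")

data <- list(ns = 2, r1 = c(15, 20), n1 = c(100, 120),
             r2 = c(25, 31), n2 = c(100, 118))
fit <- bugs(data, inits = NULL, parameters.to.save = c("d", "inb"),
            model.file = "nb_model.txt", n.chains = 2, n.iter = 20000)
print(fit)
```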

Development of MCMC algorithms and sampling schemes is a specialized area of research. Although users need not have a detailed knowledge of the precise working of MCMC software, a good understanding of the fundamentals of Bayesian data analysis is essential. For completeness it is worth mentioning that a broad range of non-MCMC simulation-based Bayesian updating schemes have also been proposed, including the sample importance resampling algorithm,24 Bayesian melding,25,26 and Bayesian Monte Carlo.27 All these have the same properties as Bayesian MCMC in that they all feature both Bayesian estimation and sampling from joint posterior distributions. The latter 2 were specifically designed for evidence synthesis.

Bayesian Posterior Simulation: Two-Stage Approach

If investigators have preferred software for CEA, either general statistical packages such as R, STATA, or SAS, or spreadsheet or decision tree packages such as EXCEL or TreeAGE, a further option is to take the posterior samples from the Bayesian MCMC, or other posterior sampling scheme, and use them as input to the CEA package. This has the same technical properties as the Bayesian 1-stage approach since the full posterior distribution is preserved. From WinBUGS, the CODA output, which lists all values generated from the full posterior distribution, can be exported into a spreadsheet-based program such as EXCEL, using BUS (BUGS Utility for Spreadsheets).28 When the CODA output is used, it is important that the correlations in the parameter estimates are preserved. This is done by ensuring that all parameter values are sampled from the same MCMC iteration. If the CODA output is stored as separate columns for each parameter with iteration values along the rows, this would correspond to sampling all the parameter values in 1 row each time. The CODA output can also be read into the freely available statistical software R29 for convergence diagnostics, further analysis, and plotting, using add-on packages such as BOA (Bayesian Output Analysis Program)30 or CODA (Convergence Diagnostics and Output Analysis).31
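A minimal sketch of the import step, assuming WinBUGS's default CODA file names and two hypothetical monitored nodes m0 (baseline log-odds) and d (log-odds ratio): evaluating the CEA one stored iteration (one row) at a time preserves the joint posterior correlation.

```r
## Two-stage approach: import the WinBUGS CODA output with the coda package,
## then evaluate the CEA once per stored iteration. The net-benefit formula
## is illustrative; file names are the WinBUGS defaults.
library(coda)
chain <- read.coda("coda1.txt", "codaIndex.txt")  # one chain's samples
sims  <- as.matrix(chain)                          # iterations x parameters

lambda <- 20000
inb <- apply(sims, 1, function(theta) {
  ## theta is one joint posterior draw; names as monitored in WinBUGS
  pC <- plogis(theta["m0"])
  pT <- plogis(theta["m0"] + theta["d"])
  (lambda * 0.6 * pT - 3000) - (lambda * 0.6 * pC - 1000)
})
mean(inb)
quantile(inb, c(0.025, 0.975))   # credible interval for incremental NB
```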

A potential advantage of a 2-stage approach arises in cases where there is substantial autocorrelation between successive MCMC samples. This can arise in many situations but usually depends on the statistical model, the way it is parameterized, and sparseness of the data. The effect of high levels of autocorrelation is to increase the degree of Monte Carlo error, with the result that it may require hundreds of thousands, rather than tens of thousands, of simulations before stable estimates are obtained. A common practice in decision modeling has been to “thin” the posterior sampling. For example, rather than store every posterior sample from the MCMC process, one might store every tenth or every twentieth. This will usually be enough to reduce autocorrelation substantially, so that the decision model can be run with, say, 25,000 samples from a thinned chain rather than with the 500,000 original samples. This is particularly relevant for computationally expensive models.
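With the coda package (the mcmc object chain is assumed to have been read in as above), thinning and a check on the Monte Carlo error can be handled as follows:

```r
## Thinning an autocorrelated chain before the decision model: keep every
## 20th draw and check how many effectively independent samples remain.
library(coda)
thinned <- window(chain, thin = 20)   # every 20th posterior sample
effectiveSize(thinned)                # effective sample size per parameter
autocorr.plot(thinned)                # residual autocorrelation diagnostics
```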

Frequentist Estimation with Monte Carlo Sampling

If evidence synthesis can be performed using frequentist software (which may use a variety of methods of estimation, including methods of moments, iterative weighted least-squares, or (restricted) maximum likelihood [(RE)ML]), a 2-stage approach is also possible. The first step is estimation, which produces parameter estimates and their VCV matrix. In the second step, these are used to populate a multivariate normal distribution that can be used for forward MC sampling (in the same or in a different package) along with the other CEA parameters.
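A sketch of the second step in R, with illustrative estimates and VCV matrix for two relative effects that, say, share a common comparator arm and are therefore correlated:

```r
## Frequentist two-stage approach: forward Monte Carlo from the ML estimates
## and their variance-covariance matrix, assuming approximate multivariate
## normality. Estimates and VCV below are illustrative.
library(MASS)
est <- c(dAB = -0.35, dAC = -0.60)      # log-odds ratios vs treatment A
vcv <- matrix(c(0.040, 0.015,
                0.015, 0.055), 2, 2)    # positive covariance: shared A arms
set.seed(4)
draws <- mvrnorm(n = 10000, mu = est, Sigma = vcv)
cor(draws)   # the correlation structure is propagated into the CEA sampling
```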

A wide range of frequentist approaches to pair-wise meta-analysis exist (see below). However, of particular significance in CEA are the recently published modules in STATA and R that are capable of estimating more complex models. mvmeta 9 is a STATA routine that fits the same kinds of network meta-analysis and indirect comparison models that are described in the Bayesian literature32,33 as well as fitting multiple outcome models. A similar module has been written in R,34 although in its current form this is a network extension of “bivariate” meta-analysis10 in which the true absolute (baseline) effects and relative treatment effects in each trial are drawn from a bivariate normal distribution. This is a slightly different model from the unrelated baselines models recommended in the decision-making literature.32,33 However, it is perfectly feasible to program unrelated baseline network meta-analysis software in R, SAS, or any other platform.

In many cases, the use of frequentist estimates and their VCV matrix with random-effects (RE) models is likely to produce parameter distributions with slightly less uncertainty, because, unlike Bayesian methods, they do not take the uncertainty in variance parameters into account. The extent of the difference is unlikely to be critical, although investigators should always check that posterior distributions of variance parameters are sensible. Similarly, the existing frequentist approaches are all based on normal approximations to likelihoods for count data. This can lead to difficulties with sparse data, and especially with zero cells.32,35

Frequentist Estimation with Bootstrapping

The final option of estimation followed by bootstrapping36 has been used from time to time in CEA.37 In its original form, bootstrapping is a technique in which one generates a series of “new” data sets by repeatedly resampling with replacement from the original data, each time producing a new set of parameter estimates. This stream of estimates can then be treated in the same way as samples from a Bayesian posterior distribution. However, this procedure is not always straightforward, particularly with small sample sizes and zero cells.
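A minimal nonparametric bootstrap sketch in R, resampling trials with replacement and refitting with metafor's rma(); the trial-level log-odds ratios and variances are illustrative:

```r
## Nonparametric bootstrap of a meta-analysis: resample studies with
## replacement and refit, giving a stream of pooled estimates that can be
## used like posterior samples in the probabilistic CEA.
library(metafor)
yi <- c(-0.2, -0.5, -0.1, -0.4, -0.3)   # trial log-odds ratios (assumed)
vi <- c(0.05, 0.08, 0.04, 0.10, 0.06)   # their variances (assumed)
set.seed(5)
boot_d <- replicate(2000, {
  idx <- sample(seq_along(yi), replace = TRUE)   # resample trials
  coef(rma(yi[idx], vi[idx], method = "DL"))     # refit, keep pooled effect
})
quantile(boot_d, c(0.025, 0.5, 0.975))  # bootstrap distribution of d
```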

Nonetheless, a very wide range of variant bootstrap procedures are available that can mitigate these and other problems. In the parametric bootstrap, for example, a model is fitted to the data by maximum likelihood and is then used to generate a series of data sets with the same size and structure as the original. The analysis procedure is applied to each of these data sets to generate a stream of parameter values. Data analysis based on resampling is a rich area with an extensive literature. Readers are referred to other texts for further information.36,38,39

Simpler Approaches and When to Use Them

When single efficacy parameters are of interest, such as in simple pair-wise meta-analysis, and each parameter in the CEA is informed by a distinct and independent source, simpler approaches may be adopted. There are no correlations between parameters and therefore no VCV matrix. The computing task in this case is exactly the same as in the 2-stage frequentist approach described above, but because the synthesis task is substantially less complex, a far wider range of software is available to carry it out. An estimate of the relative treatment effect and its variance can be found by either using specific meta-analysis software or implementing meta-analysis routines in standard statistical software packages. The effect measure and its variance can, in a second stage, be used to populate a distribution for the appropriate parameter in any software suitable for probabilistic decision analysis.
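For instance, a simple two-stage analysis might look like the following R sketch, with illustrative 2x2 trial counts: metafor pools the log odds ratio, and the pooled estimate and its standard error then populate a normal distribution inside the probabilistic CEA.

```r
## Simple case, no correlations: pool a single relative effect, then draw it
## from a normal distribution in the second (CEA) stage.
library(metafor)
dat <- escalc(measure = "OR",                     # log odds ratio per trial
              ai = c(12, 8, 30), n1i = c(100, 60, 240),
              ci = c(20, 14, 44), n2i = c(100, 62, 236))
fit <- rma(yi, vi, data = dat, method = "REML")   # random-effects pooling
d_draws <- rnorm(10000, mean = coef(fit), sd = fit$se)  # stage 2: MC input
```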

A systematic and comprehensive review of all software options capable of evidence synthesis is beyond the scope of this paper, but noteworthy options are described below and more detailed reviews and comparisons are available elsewhere.40–43

Numerous stand-alone meta-analysis packages have been developed over the years, but the following probably include the most extensive and up-to-date feature sets:

  • Comprehensive meta-analysis (commercial)44

  • Meta-Analyst (free)43

  • MIX,45 which is an add-on to EXCEL (commercial and free versions available)

  • RevMan,46 which is the official software of the Cochrane Collaboration (free)

  • EXCEL (commercial), with which simple meta-analysis can be carried out with a small amount of programming

In addition, although it would be possible to program most standard meta-analysis models in any reasonably powerful general statistics package, probably the most extensive freely available software routines that allow meta-analysis to be conducted and numerous graphical outputs to be produced are available for STATA47 and for R, such as the meta 48 and rmeta 49 packages.

Indirect comparisons represent another evidence structure that does not induce correlations between efficacy parameters.33,50 Here again, separate syntheses can be carried out for trials comparing treatments A and B and trials comparing treatments A and C. Then the estimates obtained and their variances can be entered into the simulation package used for CEA, and the relative cost-effectiveness of A, B, and C can be readily determined because covariances are not involved. Although this is an acceptable approach in principle, the Bayesian MCMC approach to indirect comparisons32,33 may be preferable in cases in which 1 or more of the pair-wise comparisons is represented by a very small number of trials. This is because MCMC has the flexibility to allow “shared variance” RE models,32 whereas with existing frequentist methods it may be necessary to have some estimates from random and others from fixed-effects models, which not only is a less natural solution but also runs counter to the mathematical relationships between variances that must hold in models in which all relative treatment effects are internally consistent.51 There is, however, nothing to prevent shared variance models being programmed in frequentist packages.
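A sketch with illustrative numbers: because the AB and AC syntheses are independent, the indirect BC contrast can be simulated without any covariance term, and its variance is simply the sum of the two pooled variances.

```r
## Indirect comparison without correlations: independent pairwise syntheses
## of AB and AC trials, combined by subtraction. Values are illustrative.
dAB <- -0.35; vAB <- 0.04   # pooled log-odds ratio B vs A, and its variance
dAC <- -0.60; vAC <- 0.06   # pooled log-odds ratio C vs A, and its variance
set.seed(6)
dAB_s <- rnorm(10000, dAB, sqrt(vAB))
dAC_s <- rnorm(10000, dAC, sqrt(vAC))
dBC_s <- dAC_s - dAB_s      # indirect estimate of C vs B
mean(dBC_s)
var(dBC_s)                  # approximately vAB + vAC: no covariance needed
```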

It is worth noting for completeness that certain extensions of pair-wise or network meta-analysis will induce correlations, and in these cases it would be prudent to use 1 of the 4 fully general methods above. One of these is meta-regression52 in which the size of the relative effect depends on a covariate.4,53 This is essentially a model with terms for intercept and slope where these parameters will be correlated unless the model is parameterized to make them orthogonal. Another extension concerns multiple outcomes. Very frequently, different trials report different outcomes, or different combinations of outcomes, at different times or combinations of times, in different ways or combinations of ways. Elsewhere54,55 we have advocated the use of methods that effectively model the relationships between the outcomes, in order to strengthen inference on the treatment effects.5,56–58 This invariably induces correlations between outcome parameters and between outcome and treatment effect estimates. Similarly, synthesis methods based on the bivariate meta-analysis model10 inevitably generate correlations between treatment and control success rates.
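The intercept-slope correlation, and its removal by centering the covariate, can be seen in a small simulated metafor example (all data illustrative):

```r
## Meta-regression induces intercept-slope correlation unless the covariate
## is centred (an orthogonal parameterization).
library(metafor)
set.seed(7)
x  <- c(40, 45, 55, 60, 70)                # trial-level covariate (assumed)
yi <- -0.2 - 0.02 * x + rnorm(5, 0, 0.1)   # simulated relative effects
vi <- rep(0.02, 5)
fit_raw <- rma(yi, vi, mods = ~ x)
cov2cor(vcov(fit_raw))                     # strong intercept-slope correlation
fit_ctr <- rma(yi, vi, mods = ~ I(x - mean(x)))
cov2cor(vcov(fit_ctr))                     # centring makes them near-orthogonal
```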

The correlations induced by meta-regression or by multiple outcome synthesis will usually have less impact on incremental net benefit than the between-treatment effect correlations induced by network meta-analysis or by the bivariate model for meta-analysis. They will, however, affect joint parameter uncertainty in other, and possibly complex, ways, and it is prudent to use 1 of the 4 generic approaches to propagating the joint parameter uncertainty into the CEA model.

Use of Multiple Software Platforms

In recent years, interfaces have become available that let different software applications communicate with each other. These facilities allow for the integration of the components of a CEA that may have been conducted in different packages. The motivations and advantages of an integrated approach across software applications are potentially multifaceted. First, such an approach allows multidisciplinary teams who have different software skills and preferences to produce an integrated analysis. For example, statisticians may wish to use general statistical software, whereas decision modelers may wish to use EXCEL or specific decision modeling software. This approach also allows the best software for each component of the analysis to be used, thereby producing an “optimal” mix. For example, if a network meta-analysis is required, WinBUGS may be the best software to use, but it has limited graphical capabilities. Therefore, it may be desirable to present the results of the synthesis in a package with advanced graphical capability such as R. Furthermore, the original data set may have been prepared in spreadsheet software such as EXCEL. Although the use of multiple pieces of software to conduct different components of the analysis has been common historically, such analyses have rarely been integrated. Once a model has been set up within an integrated system, it is, of course, particularly easy to update.

This approach is also useful when updating a model that has already been constructed in a particular software package: if, for example, the CEA is already set up in EXCEL but a new evidence synthesis needs to be carried out in R or WinBUGS, the packages can be made to communicate directly, facilitating the analysis and any future updates.

Communication Between Software Packages

The simplest form of communication between software packages is to allow the transfer of data between them. To facilitate communication, transparency, and future data updates, it is good practice to keep all data collected for the analysis, including all annotations and details of any corrections, in a single file, for example, an EXCEL workbook with multiple worksheets. If the analysis is to be carried out in WinBUGS, data columns can be copied directly from spreadsheet software into WinBUGS and pasted by selecting Paste Special from the WinBUGS Edit menu and choosing the Plain text option. Alternatively, XL2BUGS 59 is an EXCEL add-in that converts EXCEL data into WinBUGS vector format, and BAUW 60 converts data in text format into WinBUGS vector or matrix format.

If data are stored in R, R2WinBUGS 61 can be used to convert R objects into WinBUGS list data using the bugs.data function.
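For example (the data values are illustrative and the output file name arbitrary):

```r
## Converting R objects into WinBUGS list-format data with
## R2WinBUGS::bugs.data; the resulting text file can then be loaded into,
## or pasted into, WinBUGS directly.
library(R2WinBUGS)
r <- c(15, 20); n <- c(100, 120); ns <- 2
bugs.data(list(r = r, n = n, ns = ns), data.file = "winbugs_data.txt")
readLines("winbugs_data.txt")   # inspect the generated list-format file
```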

See http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/remote14.shtml for details on software capable of communicating with WinBUGS.

Integrated Use of Software Platforms

Integrated platforms reduce the need to copy data and intermediate results from one screen/system to another and thereby reduce the risk of transcription errors. Further advantages of integrating the analysis (which also exist if the Bayesian 1-stage approach is conducted, since in that approach the analysis is integrated by definition) include facilitating the modification and updating of any aspect of the analysis, conducting sensitivity analyses, and, more generally, promoting transparency. For example, if a new trial is reported that is to be added to the evidence synthesis, then in an integrated approach the CEA would automatically be updated. This goes some way to ensuring that the appropriate uncertainty is propagated through to the decision model. Different software tools can be used to integrate data input, analysis, and the display of results across multiple packages into a single step. If part of this integrated approach is the inclusion of a user-friendly interface, this can also make the exploration of the synthesis and CEA accessible to nontechnical experts, including clinical experts and even decision makers themselves, allowing them to interrogate the analysis. To this end, a Transparent Interactive Decision Interrogator (TIDI),62 which integrated syntheses conducted in WinBUGS with graphical displays and the decision model conducted in R and a “point and click” interface in EXCEL, was developed for a recent Single Technology Appraisal at NICE. This pilot “proof of concept” initiative allowed members of the appraisal committee to request reruns of the CEA using alternative parameter distributions and synthesis models in real time in the committee meetings.62

Several (freely available) code routines have been developed for commonly used packages in health technology assessment that allow them to communicate with other packages, and these can be used in the creation of integrated analyses. For example, RExcel,63 an add-on to EXCEL, provides communication between EXCEL and R, and R2WinBUGS is one of several packages that allow the user to control WinBUGS through R. Thus, if both of these linking packages are used in combination, then WinBUGS can be controlled through EXCEL (via R), and a Visual Basic interface can be written in EXCEL to facilitate this (which is the software setup used in the TIDI project described above). Similar control of WinBUGS through STATA64 and several other packages is also possible, as is the embedding of OpenBUGS in R through rbugs 65 and the linking of many other packages to each other.

Discussion

A number of previous authors have discussed the role of parameter uncertainty in CEA and the need for methods that appropriately propagate parameter uncertainty through the model to the decision.12,16,18–20 This tutorial paper reviews the implications for choice of computational approaches in the context of evidence synthesis. Our general conclusion is that where the synthesis involves a network meta-analysis, or other methods that induce parameter correlations, Bayesian MCMC methods of synthesis are likely to be the most convenient because the full joint posterior uncertainty in parameters can be easily propagated through the decision model in a single step. However, frequentist solutions are also available.

We recommend that where multiple software platforms are used, for example, to store data, to carry out the synthesis, and to run CEAs, an integrated approach be taken in which the different platforms communicate with each other. This will avoid transcription errors and allow for easy and immediate updating of results. User-friendly front ends on integrated platforms give decision makers the ability to interrogate models more easily.

Acknowledgments

The authors thank Jenny Dunn at NICE DSU and Rachael Fleurence, Jeroen Jansen, Alec Miners, Jaime Peters, Mike Spencer, and the team at NICE, led by Gabriel Rogers, for reviewing earlier versions of this paper.

Footnotes

The authors are from the School of Social and Community Medicine, University of Bristol, Bristol, UK (SD, NJW, AEA), and the Department of Health Sciences, University of Leicester, Leicester, UK (AJS). This paper was based on Technical Support Document No 6, available from http://www.nicedsu.org.uk/, which was prepared with funding from the National Institute for Health and Clinical Excellence (NICE) through its Decision Support Unit. The views expressed in this document, and any errors or omissions, are those of the authors only. Alex Sutton has received financial reimbursement for working as an advisor on the development of the Comprehensive Meta-Analysis software package.

References

  • 1. Critchfield GC, Willard KE. Probabilistic analysis of decision trees using Monte Carlo simulation. Med Decis Making. 1986;6:85–92
  • 2. Doubilet P, Begg CB, Weinstein MC, Braun P, McNeil BJ. Probabilistic sensitivity analysis using Monte Carlo simulation: a practical approach. Med Decis Making. 1985;5:157–77
  • 3. Thompson SG, Higgins JPT. How should meta-regression analyses be undertaken and interpreted? Stat Med. 2002;21:1559–74
  • 4. Dias S, Sutton AJ, Welton NJ, Ades AE. Evidence synthesis for decision making 3: heterogeneity—subgroups, meta-regression, bias, and bias-adjustment. Med Decis Making. 2013;33(5):618–40
  • 5. Lu G, Ades AE, Sutton AJ, Cooper NJ, Briggs AH, Caldwell DM. Meta-analysis of mixed treatment comparisons at multiple follow-up times. Stat Med. 2007;26(20):3681–99
  • 6. Nam I-S, Mengersen K, Garthwaite P. Multivariate meta-analysis. Stat Med. 2003;22:2309–33
  • 7. Riley RD, Abrams KR, Lambert PC, Sutton AJ, Thompson JR. An evaluation of bivariate random-effects meta-analysis for the joint synthesis of two correlated outcomes. Stat Med. 2007;26:78–97
  • 8. Riley RD, Thompson JR, Abrams KR. An alternative model for bivariate random-effects meta-analysis when the within-study correlations are unknown. Biostatistics. 2008;9:172–86
  • 9. White IR. Multivariate random-effects meta-regression: updates to mvmeta. Stata J. 2011;11:255–70
  • 10. van Houwelingen HC, Zwinderman KH, Stijnen T. A bivariate approach to meta-analysis. Stat Med. 1993;12:2273–84
  • 11. Ades AE, Claxton K, Sculpher M. Evidence synthesis, parameter correlation and probabilistic sensitivity analysis. Health Econ. 2005;14:373–81
  • 12. Claxton K, Sculpher M, McCabe C, et al. Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra. Health Econ. 2005;14:339–47
  • 13. National Institute for Health and Clinical Excellence. Guide to the Methods of Technology Appraisal. 2008. Available from: URL: http://www.nice.org.uk/media/B52/A7/TAMethodsGuideUpdatedJune2008.pdf
  • 14. Thompson KM, Evans JS. The value of improved national exposure information for perchloroethylene (Perc): a case study for dry cleaners. Risk Anal. 1997;17:253–71
  • 15. Claxton K, Posnett J. An economic approach to clinical trial design and research priority-setting. Health Econ. 1996;5:513–24
  • 16. Felli JC, Hazen GB. Sensitivity analysis and the expected value of perfect information. Med Decis Making. 1998;18:95–109
  • 17. Schlaifer R. Probability and Statistics for Business Decisions. New York: McGraw-Hill; 1958
  • 18. Cooper NJ, Sutton AJ, Abrams KR, Turner D, Wailoo A. Comprehensive decision analytical modelling in economic evaluation: a Bayesian approach. Health Econ. 2003;13:203–26
  • 19. Parmigiani G, Samsa GA, Ancukiewicz M, Lipscomb J, Hasselblad V, Matchar D. Assessing uncertainty in cost-effectiveness analyses: application to a complex decision model. Med Decis Making. 1997;17:390–401
  • 20. Spiegelhalter DJ, Myles JP, Jones DR, Abrams KR. Bayesian methods in health technology assessment: a review. Health Technol Assess. 2000;4(38):1–130
  • 21. Gilks WR, Richardson S, Spiegelhalter DJ. Markov Chain Monte Carlo in Practice. London: Chapman & Hall/CRC; 1996
  • 22. Lunn DJ, Thomas A, Best N, Spiegelhalter D. WinBUGS—a Bayesian modelling framework: concepts, structure, and extensibility. Stat Comput. 2000;10:325–37
  • 23. Lunn D, Spiegelhalter D, Thomas A, Best N. The BUGS project: evolution, critique and future directions. Stat Med. 2009;28:3049–67
  • 24. Rubin DB. Using the SIR algorithm to simulate posterior distributions. In: Bernardo JM, DeGroot MH, Lindley DV, Smith AFM, eds. Bayesian Statistics 3. Oxford, UK: Clarendon Press; 1988. p. 395–402
  • 25. Raftery AE, Givens GH, Zeh JE. Inference from a deterministic population dynamics model for bowhead whales (with discussion). J Am Stat Assoc. 1995;90:402–30
  • 26. Poole D, Raftery AE. Inference for deterministic simulation models: the Bayesian melding approach. J Am Stat Assoc. 2000;95:1244–55
  • 27. Brand KP, Small MJ. Updating uncertainty in an integrated risk assessment: conceptual framework and methods. Risk Anal. 1995;15:719–31
  • 28. Hahn G. BUS: BUGS Utility for Spreadsheets. 2001. [Version 1.0.1]. Available from: URL: http://faculty.salisbury.edu/~edhahn/bus.htm
  • 29. R Development Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2010. Available from: URL: http://www.R-project.org
  • 30. Smith BJ. The boa package. 2005. [Version 1.1.5]. Available from: URL: http://www.public-health.uiowa.edu/boa/
  • 31. Plummer M, Best N, Cowles K, Vines K. CODA: convergence diagnosis and output analysis for MCMC. R News. 2006;6:7–11
  • 32. Dias S, Sutton AJ, Ades AE, Welton NJ. Evidence synthesis for decision making 2: a generalized linear modeling framework for pairwise and network meta-analysis of randomized controlled trials. Med Decis Making. 2013;33(5):607–17
  • 33. Dias S, Welton NJ, Sutton AJ, Ades AE. NICE DSU Technical Support Document 2: a generalised linear modelling framework for pair-wise and network meta-analysis of randomised controlled trials. 2011. Available from: URL: http://www.nicedsu.org.uk
  • 34. Viechtbauer W. Conducting meta-analyses in R with the metafor package. J Stat Softw. 2010;36(3)
  • 35. Sweeting MJ, Sutton AJ, Lambert PC. What to add to nothing? Use and avoidance of continuity corrections in meta-analysis of sparse data. Stat Med. 2004;23:1351–75
  • 36. Efron B, Tibshirani RJ. An Introduction to the Bootstrap. New York: Chapman & Hall; 1993
  • 37. Lord J, Asante MA. Estimating uncertainty ranges for costs by the bootstrap procedure combined with probabilistic sensitivity analysis. Health Econ. 1999;8:323–33
  • 38. Davison AC, Hinkley DV. Bootstrap Methods and Their Application. Cambridge, UK: Cambridge University Press; 1997
  • 39. Lunneborg CE. Data Analysis by Resampling: Concepts and Applications. Pacific Grove, CA: Duxbury Press; 2000
  • 40. Bax L, Yu L-M, Ikeda N, Moons KGM. A systematic comparison of software dedicated to meta-analysis of causal studies. BMC Med Res Methodol. 2007;7:40
  • 41. Sterne JAC, Bradburn MJ, Egger M. Meta-analysis in Stata. In: Egger M, Davey Smith G, Altman DG, eds. Systematic Reviews in Health Care: Meta-analysis in Context. London: BMJ Books; 2001. p. 347–69
  • 42. Sutton AJ, Lambert PC, Hellmich MAG, Abrams KR, Jones DR. Meta-analysis in practice: a critical review of available software. In: Stangl DK, Berry DA, eds. Meta-analysis in Medicine and Health Policy. New York: Marcel Dekker; 1998. p. 315–39
  • 43. Wallace BC, Schmid CH, Lau J, Trikalinos TA. Meta-Analyst: software for meta-analysis of binary, continuous and diagnostic data. BMC Med Res Methodol. 2009;9:80
  • 44. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Comprehensive Meta-Analysis. Englewood, NJ: Biostat; 2005. [Version 2]
  • 45. Bax L, Yu L-M, Ikeda N, Tsuruta H, Moons KGM. Development and validation of MIX: comprehensive free software for meta-analysis of causal research data. BMC Med Res Methodol. 2006;6:50
  • 46. The Nordic Cochrane Centre. Review Manager (RevMan) [Version 5.0]. Copenhagen: The Cochrane Collaboration; 2008. Available from: URL: http://ims.cochrane.org/revman
  • 47. Sterne JAC. Meta-analysis in Stata: An Updated Collection from the Stata Journal. College Station, TX: Stata Press; 2009
  • 48. Schwarzer G. meta: meta-analysis with R. 2010. [Version 1.6-1]. Available from: URL: http://cran.r-project.org/web/packages/meta/
  • 49. Lumley T. rmeta: meta-analysis. 2009. Available from: URL: http://cran.r-project.org/web/packages/rmeta/
  • 50. Glenny AM, Altman DG, Song F, et al. Indirect comparisons of competing interventions. Health Technol Assess. 2005;9(26):1–134
  • 51. Lu G, Ades AE. Modelling between-trial variance structure in mixed treatment comparisons. Biostatistics. 2009;10:792–805
  • 52. Thompson SG, Sharp SJ. Explaining heterogeneity in meta-analysis: a comparison of methods. Stat Med. 1999;18:2693–708
  • 53. Dias S, Sutton AJ, Welton NJ, Ades AE. NICE DSU Technical Support Document 3: heterogeneity: subgroups, meta-regression, bias and bias-adjustment. 2011. Available from: URL: http://www.nicedsu.org.uk
  • 54. Dias S, Welton NJ, Sutton AJ, Ades AE. NICE DSU Technical Support Document 5: evidence synthesis in the baseline natural history model. 2011. Available from: URL: http://www.nicedsu.org.uk
  • 55. Dias S, Welton NJ, Sutton AJ, Ades AE. Evidence synthesis for decision making 5: the baseline natural history model. Med Decis Making. 2013;33(5):657–70
  • 56. Welton NJ, Cooper NJ, Ades AE, Lu G, Sutton AJ. Mixed treatment comparison with multiple outcomes reported inconsistently across trials: evaluation of antivirals for treatment of influenza A and B. Stat Med. 2008;27:5620–39
  • 57. Welton NJ, Willis SR, Ades AE. Synthesis of survival and disease progression outcomes for health technology assessment of cancer therapies. Res Synth Methods. 2010;1:239–57
  • 58. Ades AE, Mavranezouli I, Dias S, Welton NJ, Whittington C, Kendall T. Network meta-analysis with competing risk outcomes. Value Health. 2010;13(8):976–83
  • 59. Misra S. XL2BUGS: Excel add-in to convert data for WinBUGS. 2011. Available from: URL: http://www.simon.rochester.edu/fac/misra/software.htm
  • 60. Zhang Z, Wang L. Use BAUW to convert data. Available from: URL: http://www.psychstat.org/us/article.php/52.htm
  • 61. Sturtz S, Ligges U, Gelman A. R2WinBUGS: a package for running WinBUGS from R. J Stat Softw. 2005;12(3):1–16. Available from: URL: http://www.jstatsoft.org/v12/i03
  • 62. Bujkiewicz S, Jones HE, Lai MCW, et al. Development of a Transparent Interactive Decision Interrogator to facilitate the decision making process in health care. Value Health. 2011;14:768–76
  • 63. Heiberger RM, Neuwirth E. R Through Excel: A Spreadsheet Interface for Statistics, Data Analysis, and Graphics. New York: Springer; 2009
  • 64. Thompson J, Palmer T, Moreno S. Bayesian analysis in Stata with WinBUGS. Stata J. 2006;6:530–49
  • 65. Yan J, Prates M. Package rbugs: Fusing R and OpenBUGS. 2011. [Version 0.4-9]. Available from: URL: http://cran.r-project.org/web/packages/rbugs/index.html
