Health Technol Assess. 2021 Jun;25(37):1–124. doi: 10.3310/hta25370

Developing a reference protocol for structured expert elicitation in health-care decision-making: a mixed-methods study.

Laura Bojke, Marta Soares, Karl Claxton, Abigail Colson, Aimée Fox, Christopher Jackson, Dina Jankovic, Alec Morton, Linda Sharples, Andrea Taylor
PMCID: PMC8215568  PMID: 34105510

Abstract

BACKGROUND

Many decisions in health care aim to maximise health, requiring judgements about interventions that may offer greater health benefits but incur additional costs (the cost-effectiveness framework). The evidence used to establish cost-effectiveness is typically uncertain, and it is important that this uncertainty is characterised. In situations in which evidence is uncertain, the experience of experts is essential. Structured expert elicitation is the process by which the beliefs of experts are formally collected in a quantitative manner. The elicitation methodology currently used in health-care decision-making is heterogeneous. A number of guidelines are available for structured expert elicitation; however, it is not clear whether any of these are appropriate for health-care decision-making.
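The point about characterising uncertainty can be made concrete with a small Monte Carlo sketch: when the inputs to a cost-effectiveness comparison are uncertain, the decision quantity (incremental net benefit) is a distribution, and the probability that it is positive summarises decision uncertainty. All numbers below are illustrative assumptions, not figures from the report.

```python
import random

random.seed(1)
LAMBDA = 20_000  # willingness-to-pay per QALY (illustrative threshold)

def incremental_net_benefit():
    """One draw of incremental net benefit under uncertain inputs."""
    d_qaly = random.gauss(0.10, 0.05)   # uncertain incremental QALYs
    d_cost = random.gauss(1_500, 400)   # uncertain incremental cost (GBP)
    return LAMBDA * d_qaly - d_cost

draws = [incremental_net_benefit() for _ in range(10_000)]
# Fraction of draws in which the intervention is cost-effective
p_cost_effective = sum(x > 0 for x in draws) / len(draws)
```

In a real appraisal the input distributions would come from evidence synthesis or, where evidence is absent, from elicited expert beliefs; this is where structured expert elicitation enters the framework.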

OBJECTIVES

The overall aim was to establish a protocol for structured expert elicitation to inform health-care decision-making. The objectives are to (1) provide clarity on methods for collecting and using experts' judgements, (2) consider when alternative methodology may be required in particular contexts, (3) establish preferred approaches for elicitation on a range of parameters, (4) determine which elicitation methods allow experts to express uncertainty and (5) determine the usefulness of the reference protocol developed.

METHODS

A mixed-methods approach was used: systematic review, targeted searches, experimental work and narrative synthesis. A review of the existing guidelines for structured expert elicitation was conducted. This identified the approaches used in existing guidelines (the 'choices') and determined whether dominant approaches exist. Targeted review searches were conducted for selection of experts, level of elicitation, fitting and aggregation, assessing accuracy of judgements, and heuristics and biases. To sift through the available choices, a set of principles that underpin the use of structured expert elicitation in health-care decision-making was defined using evidence generated from the targeted searches, experimental evidence on the quantities to elicit and consideration of constraints in health-care decision-making. These principles, including fitness for purpose and reflecting individual expert uncertainty, were applied to the set of choices to establish a reference protocol. An applied evaluation of the developed reference protocol was also undertaken.
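The "fitting" step mentioned above typically means turning an expert's stated quantiles into a parametric distribution. A minimal stdlib-only sketch, assuming an expert judges a relative risk to have median 0.80 and 95th percentile 1.20 (both hypothetical numbers, and a lognormal form chosen purely for illustration):

```python
import math
from statistics import NormalDist

median, q95 = 0.80, 1.20          # expert's elicited quantiles (hypothetical)

# Lognormal fit: the log of the quantity is normal, so two quantiles
# determine the two parameters in closed form.
mu = math.log(median)                                  # exp(mu) is the median
sigma = (math.log(q95) - mu) / NormalDist().inv_cdf(0.95)

fitted = NormalDist(mu, sigma)
# Recover any quantile on the natural scale, e.g. to feed a decision model
q05 = math.exp(fitted.inv_cdf(0.05))
```

With three or more elicited quantiles the fit is usually done by least squares instead, and the discrepancy between stated and fitted quantiles can be shown back to the expert for feedback.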

RESULTS

For many elements of structured expert elicitation, there was a lack of consistency across the existing guidelines. In almost all choices, there was a lack of empirical evidence supporting recommendations, and in some circumstances the principles are unable to provide sufficient justification for discounting particular choices. It is possible to define reference methods for health technology assessment. These include a focus on gathering experts with substantive skills, eliciting observable quantities and individual elicitation of beliefs. Additional considerations are required for decision-makers outside health technology assessment, for example at a local level, or for early technologies. Access to experts may be limited and in some circumstances group discussion may be needed to generate a distribution.
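Where beliefs are elicited individually, the separate distributions still have to be combined for the decision model. One common mathematical aggregation approach (named here as an illustration, not as the report's mandated choice) is the linear opinion pool, which averages the experts' distributions; the experts and weights below are hypothetical.

```python
from statistics import NormalDist

# Two experts' individually elicited beliefs about a probability parameter
experts = [NormalDist(0.25, 0.05), NormalDist(0.35, 0.10)]
weights = [0.5, 0.5]   # equal weights; performance-based weights also exist

def pooled_cdf(x):
    """Linear opinion pool: weighted average of the experts' CDFs."""
    return sum(w * e.cdf(x) for w, e in zip(weights, experts))
```

Behavioural alternatives, such as the group discussion noted above for settings with few experts, instead ask the group to agree a single distribution rather than combining them mathematically.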

LIMITATIONS

The major limitation of the work conducted here lies not in the methods employed but in the paucity of evidence in the wider literature on how appropriate particular methodological choices are.

CONCLUSIONS

The reference protocol is flexible in many choices. This may be a useful characteristic, as it is possible to apply this reference protocol across different settings. Further applied studies, which use the choices specified in this reference protocol, are required.

FUNDING

This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 25, No. 37. See the NIHR Journals Library website for further project information. This work was also funded by the Medical Research Council (reference MR/N028511/1).

Plain language summary

BACKGROUND

Decisions in health care aim to maximise health, requiring judgements about treatments. The evidence used to make these judgements is typically uncertain. In these situations, the experience of experts is essential. Structured expert elicitation collects beliefs from experts. There are different guidelines available for structured expert elicitation; however, it is not clear if any of these can be used in health-care decision-making, for example in considering if a treatment should be made available in the NHS. This project aimed to develop guidance for structured expert elicitation to inform health-care decision-making.

METHODS

Reviews and experimental techniques were used to gather a list of methods to conduct structured expert elicitation. The suitability of these choices in health-care decision-making was then determined by comparing these with a set of standards that support the use of structured expert elicitation in health-care decision-making.

RESULTS

Different guidelines prefer different approaches to conduct structured expert elicitation. There is a lack of evidence available to determine which of these methods is most appropriate across the whole of health-care decision-making. It is possible to define reference protocol methods that could be used in a particular type of health-care decision-making: health technology assessment. These include gathering experts with knowledge of the clinical area, asking experts about things that they observe in clinical practice and asking experts individually for their beliefs. For decision-makers working outside health technology assessment, for example at a local level, or for treatments that are not yet available to patients, these choices may not be appropriate.

CONCLUSIONS

The flexibility of this guidance is a useful feature. It is possible for different decision-makers in health care to interpret the reference protocol for their own circumstances.



References

  1. Bryan S, Williams I, McIver S. Seeing the NICE side of cost-effectiveness analysis: a qualitative investigation of the use of CEA in NICE technology appraisals. Health Econ 2007;16:179–93. https://doi.org/10.1002/hec.1133 doi: 10.1002/hec.1133. [DOI] [PubMed]
  2. Bothwell LE, Greene JA, Podolsky SH, Jones DS. Assessing the gold standard – lessons from the history of RCTs. N Engl J Med 2016;374:2175–81. https://doi.org/10.1056/NEJMms1604593 doi: 10.1056/NEJMms1604593. [DOI] [PubMed]
  3. Chavez-MacGregor M, Giordano SH. Randomized clinical trials and observational studies: is there a battle? J Clin Oncol 2016;34:772–3. https://doi.org/10.1200/jco.2015.64.7487 doi: 10.1200/jco.2015.64.7487. [DOI] [PubMed]
  4. Rothwell PM. External validity of randomised controlled trials: ‘to whom do the results of this trial apply?’. Lancet 2005;365:82–93. https://doi.org/10.1016/S0140-6736(04)17670-8 doi: 10.1016/S0140-6736(04)17670-8. [DOI] [PubMed]
  5. Frieden TR. Evidence for health decision making - beyond randomized, controlled trials. N Engl J Med 2017;377:465–75. https://doi.org/10.1056/NEJMra1614394 doi: 10.1056/NEJMra1614394. [DOI] [PubMed]
  6. Hora SC. Aleatory and epistemic uncertainty in probability elicitation with an example from hazardous waste management. Reliab Eng Syst Safety 1996;54:217–23. https://doi.org/10.1016/S0951-8320(96)00077-4 doi: 10.1016/S0951-8320(96)00077-4. [DOI]
  7. O’Hagan A, Buck C, Daneshkhah A, Eiser J, Garthwaite P, Jenkinson D, et al. Uncertain Judgements: Eliciting Experts’ Probabilities. Chichester: John Wiley & Sons; 2006. https://doi.org/10.1002/0470033312 doi: 10.1002/0470033312. [DOI]
  8. Griffin SC, Claxton KP, Palmer SJ, Sculpher MJ. Dangerous omissions: the consequences of ignoring decision uncertainty. Health Econ 2011;20:212–24. https://doi.org/10.1002/hec.1586 doi: 10.1002/hec.1586. [DOI] [PubMed]
  9. Babuscia A, Cheung K-M. An approach to perform expert elicitation for engineering design risk analysis: methodology and experimental results. J R Statist Soc 2014;177:475–97. https://doi.org/10.1111/rssa.12028 doi: 10.1111/rssa.12028. [DOI]
  10. Ayyub BM. Elicitation of Expert Opinions for Uncertainty and Risks. Boca Raton, FL: CRC Press; 2001. https://doi.org/10.1201/9781420040906 doi: 10.1201/9781420040906. [DOI]
  11. Peel A, Jenks M, Choudhury M, Lovett R, Rejon-Parrilla JC, Sims A, Craig J. Use of expert judgement across NICE guidance-making programmes: a review of current processes and suitability of existing tools to support the use of expert elicitation. Appl Health Econ Health Policy 2018;16:819–36. https://doi.org/10.1007/s40258-018-0415-5 doi: 10.1007/s40258-018-0415-5. [DOI] [PMC free article] [PubMed]
  12. Soares MO, Bojke L. Expert Elicitation to Inform health Technology Assessment. In Price CC, editor. International Series in Operations Research and Management Science. UK: Springer; 2018. pp. 479–94. https://doi.org/10.1007/978-3-319-65052-4_18 doi: 10.1007/978-3-319-65052-4_18. [DOI]
  13. Cooke RM. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford: Oxford University Press; 1991.
  14. O’Hagan T, Oakley J. The Sheffield Elicitation Framework (SHELF). 2008. URL: www.tonyohagan.co.uk/shelf/ (accessed 18 November 2019).
  15. Colson AR, Cooke RM. Expert elicitation: using the classical model to validate experts’ judgments. Rev Environ Econ Policy 2018;12:113–32. https://doi.org/10.1093/reep/rex022 doi: 10.1093/reep/rex022. [DOI]
  16. European Food Safety Authority. Guidance on expert knowledge elicitation in food and feed safety risk assessment. EFSA J 2014;12. https://doi.org/10.2903/j.efsa.2014.3734 doi: 10.2903/j.efsa.2014.3734. [DOI]
  17. Environmental Protection Agency (EPA). Expert Elicitation Task Force White Paper. Washington, DC: EPA; 2009.
  18. Kaplan S. ‘Expert information’ versus ‘expert opinion.’ Another approach to the problem of eliciting/combining/using expert knowledge in PRA. Reliab Eng Syst Safety 1992;35:61–72. https://doi.org/10.1016/0951-8320(92)90023-e doi: 10.1016/0951-8320(92)90023-e. [DOI]
  19. Lindley DV, Tversky A, Brown RV. On the reconciliation of probability assessments. J R Stat Soc Series A 1979;142:146–62. https://doi.org/10.2307/2345078 doi: 10.2307/2345078. [DOI]
  20. Garthwaite PH, Kadane JB, O’Hagan A. Statistical methods for eliciting probability distributions. J Am Stat Assoc 2005;100:680–701. https://doi.org/10.1198/016214505000000105 doi: 10.1198/016214505000000105. [DOI]
  21. Cooke RM, Goossens LHJ. Procedures guide for structured expert judgement in accident consequence modelling. Radiat Prot Dosimetry 2000;90:303–9. https://doi.org/10.1093/oxfordjournals.rpd.a033152 doi: 10.1093/oxfordjournals.rpd.a033152. [DOI]
  22. Choy SL, O’Leary R, Mengersen K. Elicitation by design in ecology: using expert opinion to inform priors for Bayesian statistical models. Ecology 2009;90:265–77. https://doi.org/10.1890/07-1886.1 doi: 10.1890/07-1886.1. [DOI] [PubMed]
  23. Walls L, Quigley J. Building prior distributions to support Bayesian reliability growth modelling using expert judgement. Reliab Eng Syst Safety 2001;74:117–28. https://doi.org/10.1016/s0951-8320(01)00069-2 doi: 10.1016/s0951-8320(01)00069-2. [DOI]
  24. Budnitz RJ, Apostolakis G, Boore DM, Cluff LS, Coppersmith KJ, Cornell CA, et al. Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts. Livermore, CA: United States Nuclear Regulatory Commission; 1997. https://doi.org/10.2172/479072 doi: 10.2172/479072. [DOI]
  25. Meyer MA, Booker JM. Eliciting and Analyzing Expert Judgment: A Practical Guide. Philadelphia, PA: Society for Industrial and Applied Mathematics; 2001. https://doi.org/10.1137/1.9780898718485 doi: 10.1137/1.9780898718485. [DOI]
  26. Kotra J, Lee M, Eisenberg N, DeWispelare A. Branch Technical Position on the Use of Expert Elicitation in the High-Level Radioactive Waste Program. Division of Waste Management, Office of Nuclear Material Safety and Safeguards. Livermore, CA: United States Nuclear Regulatory Commission; 1996. https://doi.org/10.2172/414310 doi: 10.2172/414310. [DOI]
  27. Keeney R, von Winterfeldt D. Eliciting probabilities from experts in complex technical problems. IEEE Trans Eng Manag 1991;38. https://doi.org/10.1109/17.83752 doi: 10.1109/17.83752. [DOI]
  28. Tredger ERW, Lo JTH, Haria S, Lau HHK, Bonello N, Hlavka B, et al. Bias, guess and expert judgement in actuarial work. Br Actuar J 2016;21:545–78. https://doi.org/10.1017/s1357321716000155 doi: 10.1017/s1357321716000155. [DOI]
  29. Knol AB, Slottje P, van der Sluijs JP, Lebret E. The use of expert elicitation in environmental health impact assessment: a seven step procedure. Environ Health 2010;9:19. https://doi.org/10.1186/1476-069X-9-19 doi: 10.1186/1476-069X-9-19. [DOI] [PMC free article] [PubMed]
  30. Gosling JP. SHELF: The Sheffield Elicitation Framework. In Price CC, editor. International Series in Operations Research and Management Science. UK: Springer; 2018. pp. 61–93. https://doi.org/10.1007/978-3-319-65052-4_4 doi: 10.1007/978-3-319-65052-4_4. [DOI]
  31. Ashcroft M, Austin R, Barnes K, MacDonald D, Makin S, Morgan S, et al. Expert judgement. Br Actuar J 2016;21:314–63. https://doi.org/10.1017/S1357321715000239 doi: 10.1017/S1357321715000239. [DOI]
  32. Hemming V, Burgman MA, Hanea AM, McBride MF, Wintle BC. A practical guide to structured expert elicitation using the IDEA protocol. Methods Ecol Evol 2018;9:169–80. https://doi.org/10.1111/2041-210x.12857 doi: 10.1111/2041-210x.12857. [DOI]
  33. Gilovich T, Griffin DW, Kahneman D. Heuristics and Biases: the Psychology of Intuitive Judgment. Cambridge: Cambridge University Press; 2013.
  34. Kahneman D, Slovic P, Tversky A. Judgment Under Uncertainty: Heuristics and Biases. Cambridge; New York, NY: Cambridge University Press; 1982. https://doi.org/10.1017/CBO9780511809477 doi: 10.1017/CBO9780511809477. [DOI]
  35. Morgan MG. Use (and abuse) of expert elicitation in support of decision making for public policy. Proc Natl Acad Sci USA 2014;111:7176–84. https://doi.org/10.1073/pnas.1319946111 doi: 10.1073/pnas.1319946111. [DOI] [PMC free article] [PubMed]
  36. NHS. Guide to the Healthcare System in England Including the Statement of NHS Accountability. London: NHS; 2013.
  37. Kershaw A. NHS Vale of York CCG Referral Support Service Useful Information. 2017. URL: www.valeofyorkccg.nhs.uk/rss/index.php?id=contact-information (accessed 18 November 2019).
  38. Kay A. The abolition of the GP fundholding scheme: a lesson in evidence-based policy making. Br J Gen Pract 2002;52:141–4. [PMC free article] [PubMed]
  39. Great Britain. Care Act 2014. London: The Stationery Office; 2014.
  40. Lafond S, Charlesworth A, Roberts A. A Year of Plenty? An Analysis of NHS Finances and Consultant Productivity. London: The Health Foundation; 2017.
  41. The King’s Fund. Has the Government Delivered a New Era for Public Health? London: The King’s Fund; 2015. URL: www.kingsfund.org.uk/projects/verdict/has-government-delivered-new-era-public-health (accessed 28 March 2019).
  42. NHS. Interim Commissioning Policy: Individual Funding Requests. London: NHS Commissioning Board; 2013.
  43. Ham C, Glenn R. Reasonable Rationing: International Experience of Priority Setting in Health Care (State of Health). Milton Keynes: Open University Press; 2003.
  44. Grigore B, Peters J, Hyde C, Stein K. Methods to elicit probability distributions from experts: a systematic review of reported practice in health technology assessment. PharmacoEconomics 2013;31:991–1003. https://doi.org/10.1007/s40273-013-0092-z doi: 10.1007/s40273-013-0092-z. [DOI] [PubMed]
  45. National Institute for Health and Care Excellence (NICE). Guide to the Process of Technology Appraisal. London: NICE; 2014. [PubMed]
  46. Bennett P, Hare A, Townshend J. Assessing the risk of vCJD transmission via surgery: models for uncertainty and complexity. J Oper Res Soc 2005;56:202–13. https://doi.org/10.1057/palgrave.jors.2601899 doi: 10.1057/palgrave.jors.2601899. [DOI]
  47. Colson AR, Megiddo I, Alvarez-Uria G, Gandra S, Bedford T, Morton A, et al. Quantifying uncertainty about future antimicrobial resistance: comparing structured expert judgment and statistical forecasting methods. PLOS ONE 2019;14:e0219190. https://doi.org/10.1371/journal.pone.0219190 doi: 10.1371/journal.pone.0219190. [DOI] [PMC free article] [PubMed]
  48. Dallow N, Best N, H Montague T. Better decision making in drug development through adoption of formal prior elicitation. Pharm Stat 2018;17:301–16. https://doi.org/10.1002/pst.1854 doi: 10.1002/pst.1854. [DOI] [PubMed]
  49. Walley RJ, Smith CL, Gale JD, Woodward P. Advantages of a wholly Bayesian approach to assessing efficacy in early drug development: a case study. Pharm Stat 2015;14:205–15. https://doi.org/10.1002/pst.1675 doi: 10.1002/pst.1675. [DOI] [PubMed]
  50. Soares MO, Sharples L, Morton A, Claxton K, Bojke L. Experiences of structured elicitation for model based cost-effectiveness analyses. Value Health 2018;21:715–23. https://doi.org/10.1016/j.jval.2018.01.019 doi: 10.1016/j.jval.2018.01.019. [DOI] [PMC free article] [PubMed]
  51. Drummond M. Methods for the Economic Evaluation of Health Care Programmes. 4th edn. Oxford: Oxford University Press; 2015.
  52. Leal J, Wordsworth S, Legood R, Blair E. Eliciting expert opinion for economic models: an applied example. Value Health 2007;10:195–203. https://doi.org/10.1111/j.1524-4733.2007.00169.x doi: 10.1111/j.1524-4733.2007.00169.x. [DOI] [PubMed]
  53. Soares MO, Bojke L, Dumville J, Iglesias C, Cullum N, Claxton K. Methods to elicit experts’ beliefs over uncertain quantities: application to a cost effectiveness transition model of negative pressure wound therapy for severe pressure ulceration. Stat Med 2011;30:2363–80. https://doi.org/10.1002/sim.4288 doi: 10.1002/sim.4288. [DOI] [PubMed]
  54. Haakma W, Steuten LM, Bojke L, IJzerman MJ. Belief elicitation to populate health economic models of medical diagnostic devices in development. Appl Health Econ Health Policy 2014;12:327–34. https://doi.org/10.1007/s40258-014-0092-y doi: 10.1007/s40258-014-0092-y. [DOI] [PubMed]
  55. Bojke L, Claxton K, Bravo-Vergel Y, Sculpher M, Palmer S, Abrams K. Eliciting distributions to populate decision analytic models. Value Health 2010;13:557–64. https://doi.org/10.1111/j.1524-4733.2010.00709.x doi: 10.1111/j.1524-4733.2010.00709.x. [DOI] [PubMed]
  56. McKenna C, McDaid C, Suekarran S, Hawkins N, Claxton K, Light K, et al. Enhanced external counterpulsation for the treatment of stable angina and heart failure: a systematic review and economic analysis. Health Technol Assess 2009;13(24). https://doi.org/10.3310/hta13240 doi: 10.3310/hta13240. [DOI] [PubMed]
  57. Sperber D, Mortimer D, Lorgelly P, Berlowitz D. An expert on every street corner? Methods for eliciting distributions in geographically dispersed opinion pools. Value Health 2013;16:434–7. https://doi.org/10.1016/j.jval.2012.10.011 doi: 10.1016/j.jval.2012.10.011. [DOI] [PubMed]
  58. Fischer K, Lewandowski D, Janssen MP. Estimating unknown parameters in haemophilia using expert judgement elicitation. Haemophilia 2013;19:e282–8. https://doi.org/10.1111/hae.12166 doi: 10.1111/hae.12166. [DOI] [PubMed]
  59. Garthwaite PH, Chilcott JB, Jenkinson DJ, Tappenden P. Use of expert knowledge in evaluating costs and benefits of alternative service provisions: a case study. Int J Technol Assess Health Care 2008;24:350–7. https://doi.org/10.1017/S026646230808046X doi: 10.1017/S026646230808046X. [DOI] [PubMed]
  60. Grigore B, Peters J, Hyde C, Stein K. A comparison of two methods for expert elicitation in health technology assessments. BMC Med Res Methodol 2016;16:85. https://doi.org/10.1186/s12874-016-0186-3 doi: 10.1186/s12874-016-0186-3. [DOI] [PMC free article] [PubMed]
  61. Meads C, Auguste P, Davenport C, Małysiak S, Sundar S, Kowalska M, et al. Positron emission tomography/computerised tomography imaging in detecting and managing recurrent cervical cancer: systematic review of evidence, elicitation of subjective probabilities and economic modeling. Health Technol Assess 2013;17(12). https://doi.org/10.3310/hta17120 doi: 10.3310/hta17120. [DOI] [PMC free article] [PubMed]
  62. Wilson EC, Stanley G, Mirza Z. The long-term cost to the UK NHS and social services of different durations of IV thiamine (vitamin B1) for chronic alcohol misusers with symptoms of Wernicke’s encephalopathy presenting at the emergency department. Appl Health Econ Health Policy 2016;14:205–15. https://doi.org/10.1007/s40258-015-0214-1 doi: 10.1007/s40258-015-0214-1. [DOI] [PMC free article] [PubMed]
  63. Brodtkorb T-H. Cost-Effectiveness Analysis of Health Technologies When Evidence is Scarce. Linköping: Linköping University; 2010.
  64. Cao Q, Postmus D, Hillege HL, Buskens E. Probability elicitation to inform early health economic evaluations of new medical technologies: a case study in heart failure disease management. Value Health 2013;16:529–35. https://doi.org/10.1016/j.jval.2013.02.008 doi: 10.1016/j.jval.2013.02.008. [DOI] [PubMed]
  65. Speight PM, Palmer S, Moles DR, Downer MC, Smith DH, Henriksson M, Augustovski F. The cost-effectiveness of screening for oral cancer in primary care. Health Technol Assess 2006;10(14). https://doi.org/10.3310/hta10140 doi: 10.3310/hta10140. [DOI] [PubMed]
  66. Poncet A, Gencer B, Blondon M, Gex-Fabry M, Combescure C, Shah D, et al. Electrocardiographic screening for prolonged QT interval to reduce sudden cardiac death in psychiatric patients: a cost-effectiveness analysis. PLOS ONE 2015;10:e0127213. https://doi.org/10.1371/journal.pone.0127213 doi: 10.1371/journal.pone.0127213. [DOI] [PMC free article] [PubMed]
  67. Stevenson MD, Oakley JE, Lloyd Jones M, Brennan A, Compston JE, McCloskey EV, Selby PL. The cost-effectiveness of an RCT to establish whether 5 or 10 years of bisphosphonate treatment is the better duration for women with a prior fracture. Med Decis Making 2009;29:678–89. https://doi.org/10.1177/0272989X09336077 doi: 10.1177/0272989X09336077. [DOI] [PubMed]
  68. Meeyai A, Praditsitthikorn N, Kotirum S, Kulpeng W, Putthasri W, Cooper BS, Teerawattananon Y. Seasonal influenza vaccination for children in Thailand: a cost-effectiveness analysis. PLOS Med 2015;12:e1001829. https://doi.org/10.1371/journal.pmed.1001829 doi: 10.1371/journal.pmed.1001829. [DOI] [PMC free article] [PubMed]
  69. Colbourn T, Asseburg C, Bojke L, Philips Z, Claxton K, Ades AE, Gilbert RE. Prenatal screening and treatment strategies to prevent group B streptococcal and other bacterial infections in early infancy: cost-effectiveness and expected value of information analyses. Health Technol Assess 2007;11(29). https://doi.org/10.3310/hta11290 doi: 10.3310/hta11290. [DOI] [PubMed]
  70. Girling AJ, Freeman G, Gordon JP, Poole-Wilson P, Scott DA, Lilford RJ. Modeling payback from research into the efficacy of left-ventricular assist devices as destination therapy. Int J Technol Assess Health Care 2007;23:269–77. https://doi.org/10.1017/S0266462307070365 doi: 10.1017/S0266462307070365. [DOI] [PubMed]
  71. Stevenson MD, Oakley JE, Chick SE, Chalkidou K. The cost-effectiveness of surgical instrument management policies to reduce the risk of vCJD transmission to humans. J Oper Res Soc 2009;60:506–18. https://doi.org/10.1057/palgrave.jors.2602580 doi: 10.1057/palgrave.jors.2602580. [DOI]
  72. De Persis C, Wilson S. Using the Analytic Hierarchy Process in the Assessment of the Probability for an Explosion to Occur During the Atmospheric Re-Entry. 67th International Astronautical Congress, abstract no. 1021. Guadalajara, Mexico; 26–30 September 2016.
  73. Iglesias CP, Thompson A, Rogowski WH, Payne K. Reporting guidelines for the use of expert judgement in model-based economic evaluations. PharmacoEconomics 2016;34:1161–72. https://doi.org/10.1007/s40273-016-0425-9 doi: 10.1007/s40273-016-0425-9. [DOI] [PubMed]
  74. Colson AR, Cooke RM. Cross validation for the classical model of structured expert judgment. Reliab Eng Syst Safety 2017;163:109–20. https://doi.org/10.1016/j.ress.2017.02.003 doi: 10.1016/j.ress.2017.02.003. [DOI]
  75. Eggstaff JW, Mazzuchi TA, Sarkani S. The effect of the number of seed variables on the performance of Cooke’s classical model. Reliab Eng Syst Safety 2014;121:72–82. https://doi.org/10.1016/j.ress.2013.07.015 doi: 10.1016/j.ress.2013.07.015. [DOI]
  76. Clemen RT. Comment on Cooke’s classical method. Reliab Eng Syst Safety 2008;93:760–5. https://doi.org/10.1016/j.ress.2008.02.003 doi: 10.1016/j.ress.2008.02.003. [DOI]
  77. Montibeller G, von Winterfeldt D. Cognitive and motivational biases in decision and risk analysis. Risk Anal 2015;35:1230–51. https://doi.org/10.1111/risa.12360 doi: 10.1111/risa.12360. [DOI] [PubMed]
  78. Philips Z, Ginnelly L, Sculpher M, Claxton K, Golder S, Riemsma R, et al. Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technol Assess 2004;8(36). https://doi.org/10.3310/hta8360 doi: 10.3310/hta8360. [DOI] [PubMed]
  79. Budescu DV, Chen E. Identifying expertise to extract the wisdom of crowds. 2015;61:267–80. https://doi.org/10.1287/mnsc.2014.1909 doi: 10.1287/mnsc.2014.1909. [DOI]
  80. Boring R, Gertman D, Joe J, Marble J, Galyean W, Blackwood L, et al. Simplified Expert Elicitation Guideline For Risk Assessment Of Operating Events. Livermore, CA: U.S. Nuclear Regulatory Commission; 2005.
  81. Bolger F, Rowe G. The aggregation of expert judgment: do good things come to those who weight? Risk Anal 2015;35:5–11. https://doi.org/10.1111/risa.12272 doi: 10.1111/risa.12272. [DOI] [PubMed]
  82. Burgman MA, McBride M, Ashton R, Speirs-Bridge A, Flander L, Wintle B, et al. Expert status and performance. PLOS ONE 2011;6:e22998. https://doi.org/10.1371/journal.pone.0022998 doi: 10.1371/journal.pone.0022998. [DOI] [PMC free article] [PubMed]
  83. Claxton K, Sculpher M, McCabe C, Briggs A, Akehurst R, Buxton M, et al. Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra. Health Econ 2005;14:339–47. https://doi.org/10.1002/hec.985 doi: 10.1002/hec.985. [DOI] [PubMed]
  84. Akins RB, Tolson H, Cole BR. Stability of response characteristics of a Delphi panel: application of bootstrap data expansion. BMC Med Res Methodol 2005;5:37. https://doi.org/10.1186/1471-2288-5-37 doi: 10.1186/1471-2288-5-37. [DOI] [PMC free article] [PubMed]
  85. Gosling JP. Methods for Eliciting Expert Opinion to Inform Health Technology Assessment. 2014. URL: www.semanticscholar.org/paper/Methods-for-eliciting-expert-opinion-to-inform-Gosling/38eba762cdaf5d6dae2fee2063bf776d5facec5b (accessed 1 November 2019).
  86. Clemen RT, Winkler RL. Combining probability distributions from experts in risk analysis. Risk Anal 1999;19:187–203. https://doi.org/10.1023/A:1006917509560 doi: 10.1023/A:1006917509560. [DOI] [PubMed]
  87. O’Hagan T, Oakley JE. SHELF: the Sheffield Elicitation Framework (Version 3.0). Sheffield: School of Mathematics and Statistics, University of Sheffield; 2016.
  88. Fogel L. Human Information Processing: Upper Saddle River, NJ: Prentice-Hall; 1967.
  89. Seaver D. Assessment of Group Preferences and Group Uncertainty for Decision-Making. California, USA: Social Science Research Institute; 1976.
  90. Staël von Holstein C-AS. Two techniques for assessment of subjective probability distributions – an experimental study. Acta Psychologica 1971;35:478–94. https://doi.org/10.1016/0001-6918(71)90005-9 doi: 10.1016/0001-6918(71)90005-9. [DOI]
  91. Winkler RL. The assessment of prior distributions in Bayesian analysis. J Am Stat Assoc 1967;62:776. https://doi.org/10.2307/2283671 doi: 10.2307/2283671. [DOI]
  92. Thall PF, Ursino M, Baudouin V, Alberti C, Zohar S. Bayesian treatment comparison using parametric mixture priors computed from elicited histograms. Stat Methods Med Res 2019;28:404–18. https://doi.org/10.1177/0962280217726803 doi: 10.1177/0962280217726803. [DOI] [PMC free article] [PubMed]
  93. Bornkamp B, Ickstadt K. A note on B-splines for semiparametric elicitation. Am Stat 2009;63:373–7. https://doi.org/10.1198/tast.2009.08191 doi: 10.1198/tast.2009.08191. [DOI]
  94. Gosling JP. On the Elicitation of Continuous, Symmetric, Unimodal Distributions. 2008. URL: https://arxiv.org/abs/0805.2044 (accessed 1 November 2019).
  95. Oakley JE, O’Hagan A. Uncertainty in prior elicitations: a nonparametric approach. Biometrika 2007;94:427–41. https://doi.org/10.1093/biomet/asm031 doi: 10.1093/biomet/asm031. [DOI]
  96. Gosling JP, Oakley JE, O’Hagan A. Nonparametric elicitation for heavy-tailed prior distributions. Bayesian Anal 2007;2:693–718. https://doi.org/10.1214/07-ba228 doi: 10.1214/07-ba228. [DOI]
  97. Moala FA, O’Hagan A. Elicitation of multivariate prior distributions: a nonparametric Bayesian approach. J Stat Plan Inference 2010;140:1635–55. https://doi.org/10.1016/j.jspi.2010.01.004 doi: 10.1016/j.jspi.2010.01.004. [DOI]
  98. Daneshkhah A, Hosseinian-Far A, Sedighi T, Farsi M. Prior Elicitation and Evaluation of Imprecise Judgements for Bayesian Analysis of System Reliability. In Hosseinian-Far A, Ramachandran M, Sarwar D, editors. Strategic Engineering for Cloud Computing and Big Data Analytics. New York, NY: Springer; 2017. pp. 63–79. https://doi.org/10.1007/978-3-319-52491-7_4 doi: 10.1007/978-3-319-52491-7_4. [DOI]
  99. Morris P. Decision analysis expert use. Management Sci 1974;20:1233–41. https://doi.org/10.1287/mnsc.20.9.1233 doi: 10.1287/mnsc.20.9.1233. [DOI]
  100. Morris PA. Combining expert judgments: a Bayesian approach. Management Sci 1977;23. https://doi.org/10.1287/mnsc.23.7.679 doi: 10.1287/mnsc.23.7.679. [DOI]
  101. Jacobs RA. Methods for combining experts’ probability assessments. Neural Comput 1995;7:867–88. https://doi.org/10.1162/neco.1995.7.5.867 doi: 10.1162/neco.1995.7.5.867. [DOI] [PubMed]
  102. Lipscomb J, Parmigiani G, Hasselblad V. Combining expert judgment by hierarchical modeling: an application to physician staffing. Management Sci 1998;44:149–61. https://doi.org/10.1287/mnsc.44.2.149 doi: 10.1287/mnsc.44.2.149. [DOI]
  103. Albert I, Donnet S, Guihenneuc-Jouyaux C, Lowchoy S, L. Mengersen K, Rousseau J. Combining expert opinions in prior elicitation. Bayesian Anal 2012;7:503–32. https://doi.org/10.1214/12-BA717 doi: 10.1214/12-BA717. [DOI]
  104. West M, Crosse J. Modelling probabilistic agent opinion. J R Stat Soc 1992;24:285–99. https://doi.org/10.2307/2345964
  105. Gelfand AE, Mallick BK, Dey DK. Modeling expert opinion arising as a partial probabilistic specification. J Am Stat Assoc 1995;90:598–604. https://doi.org/10.1080/01621459.1995.10476552
  106. Lichtendahl KC, Grushka-Cockayne Y, Winkler RL. Is it better to average probabilities or quantiles? Management Sci 2013;59:1594–611. https://doi.org/10.1287/mnsc.1120.1667
  107. Bamber JL, Aspinall WP, Cooke RM. A commentary on ‘how to interpret expert judgment assessments of twenty-first century sea-level rise’ by Hylke de Vries and Roderik SW van de Wal. Clim Change 2016;137:321–8. https://doi.org/10.1007/s10584-016-1672-7
  108. French S. Group Consensus Probability Distributions: A Critical Survey. In Bernardo JM, editor. Bayesian Statistics 2. Amsterdam: North-Holland; 1985. pp. 183–97.
  109. Hammitt JK, Zhang Y. Combining experts’ judgments: comparison of algorithmic methods using synthetic data. Risk Anal 2013;33:109–20. https://doi.org/10.1111/j.1539-6924.2012.01833.x
  110. Aspinall WP, Cooke RM. Quantifying Scientific Uncertainty from Expert Judgement Elicitation. In Rougier J, Sparks S, Hill LJ, editors. Risk and Uncertainty Assessment for Natural Hazards. Cambridge: Cambridge University Press; 2013. pp. 64–99. https://doi.org/10.1017/CBO9781139047562.005
  111. Cooke RM, ElSaadany S, Huang X. On the performance of social network and likelihood-based expert weighting schemes. Reliab Eng Syst Safety 2008;93:745–56. https://doi.org/10.1016/j.ress.2007.03.017
  112. Ranjan R, Gneiting T. Combining probability forecasts. J R Stat Soc Series B Stat Methodol 2010;72:71–91. https://doi.org/10.1111/j.1467-9868.2009.00726.x
  113. Rufo MJ, Martin J, Perez CJ. Log-linear pool to combine prior distributions: a suggestion for a calibration-based approach. Bayesian Anal 2012;7:411–38. https://doi.org/10.1214/12-BA714
  114. Hora SC, Kardeş E. Calibration, sharpness and the weighting of experts in a linear opinion pool. Ann Oper Res 2015;229:429–50. https://doi.org/10.1007/s10479-015-1846-0
  115. Winkler RL, Murphy AH. ‘Good’ probability assessors. J Appl Meteorol 1968;7:751–8. https://doi.org/10.1175/1520-0450(1968)007<0751:PA>2.0.CO;2
  116. Quigley J, Colson A, Aspinall W, Cooke RM. Elicitation in the Classical Model. In Price CC, editor. International Series in Operations Research and Management Science. UK: Springer; 2018. pp. 15–36. https://doi.org/10.1007/978-3-319-65052-4_2
  117. Wittmann ME, Cooke RM, Rothlisberger JD, Lodge DM. Using structured expert judgment to assess invasive species prevention: Asian carp and the Mississippi-Great Lakes hydrologic connection. Environ Sci Technol 2014;48:2150–6. https://doi.org/10.1021/es4043098
  118. Cooke RM, Goossens LLHJ. TU Delft expert judgment data base. Reliab Eng Syst Safety 2008;93:657–74. https://doi.org/10.1016/j.ress.2007.03.005
  119. Cooke RM. Validating Expert Judgment with the Classical Model. In Martini C, Boumans M, editors. Experts and Consensus in Social Science. New York, NY: Springer; 2014. pp. 191–212. https://doi.org/10.1007/978-3-319-08551-7_10
  120. Mellers B, Stone E, Murray T, Minster A, Rohrbaugh N, Bishop M, et al. Identifying and cultivating superforecasters as a method of improving probabilistic predictions. Perspect Psychol Sci 2015;10:267–81. https://doi.org/10.1177/1745691615577794
  121. Hanea AM, McBride MF, Burgman MA, Wintle BC. The value of performance weights and discussion in aggregated expert judgments. Risk Anal 2018;38:1781–94. https://doi.org/10.1111/risa.12992
  122. Morgenstern O, Von Neumann J. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press; 1953.
  123. Kahneman D. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux; 2011.
  124. Reyna VF, Nelson WL, Han PK, Dieckmann NF. How numeracy influences risk comprehension and medical decision making. Psychol Bull 2009;135:943–73. https://doi.org/10.1037/a0017327
  125. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974;185:1124–31. https://doi.org/10.1126/science.185.4157.1124
  126. Gigerenzer G, Selten R. Bounded Rationality: The Adaptive Toolbox. Cambridge, MA: MIT Press; 2002. https://doi.org/10.7551/mitpress/1654.001.0001
  127. Kynn M. The ‘heuristics and biases’ bias in expert elicitation. J R Stat Soc Series A 2008;171:239–64.
  128. Bojke L, Grigore B, Jankovic D, Peters J, Soares M, Stein K. Informing reimbursement decisions using cost-effectiveness modelling: a guide to the process of generating elicited priors to capture model uncertainties. PharmacoEconomics 2017;35:867–77. https://doi.org/10.1007/s40273-017-0525-1
  129. Bazerman MH, Moore DA. Judgment in Managerial Decision Making. UK: John Wiley & Sons; 2008.
  130. McBride MF, Fidler F, Burgman MA. Evaluating the accuracy and calibration of expert predictions under uncertainty: predicting the outcomes of ecological research. Divers Distrib 2012;18:782–94. https://doi.org/10.1111/j.1472-4642.2012.00884.x
  131. Tversky A, Kahneman D. Availability: a heuristic for judging frequency and probability. Cogn Psychol 1973;5:207–32. https://doi.org/10.1016/0010-0285(73)90033-9
  132. Slovic P, Fischhoff B, Lichtenstein S. Perceived risk: psychological factors and social implications. Proc R Soc Lond A 1981;376:17–34. https://doi.org/10.1098/rspa.1981.0073
  133. Mehle T, Gettys CF, Manning C, Baca S, Fisher S. The availability explanation of excessive plausibility assessments. Acta Psychologica 1981;49:127–40. https://doi.org/10.1016/0001-6918(81)90024-X
  134. Soll JB, Klayman J. Overconfidence in interval estimates. J Exp Psychol Learn Mem Cogn 2004;30:299. https://doi.org/10.1037/0278-7393.30.2.299
  135. McKenzie CR, Liersch MJ, Yaniv I. Overconfidence in interval estimates: what does expertise buy you? Organ Behav Hum Decis Process 2008;107:179–91. https://doi.org/10.1016/j.obhdp.2008.02.007
  136. Larrick RP. Debiasing. In Harvey N, editor. Blackwell Handbook of Judgment and Decision Making. UK: Wiley-Blackwell; 2004. pp. 316–38. https://doi.org/10.1002/9780470752937.ch16
  137. Soll J, Milkman K, Payne J. A User’s Guide to Debiasing. In Keren G, Wu G, editors. The Wiley Blackwell Handbook of Judgment and Decision Making II. UK: John Wiley & Sons; 2015. pp. 924–51. https://doi.org/10.1002/9781118468333.ch33
  138. Clemen RT, Lichtendahl KC. Debiasing Expert Overconfidence: A Bayesian Calibration Model. Sixth International Conference on Probabilistic Safety Assessment and Management (PSAM6), abstract no. 1369. San Juan, Puerto Rico; 23–28 June 2002.
  139. Cooke RM. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford: Oxford University Press; 1991.
  140. Lin S-W, Bier VM. A study of expert overconfidence. Reliab Eng Syst Safety 2008;93:711–21. https://doi.org/10.1016/j.ress.2007.03.014
  141. Bolger F, Rowe G. There is data, and then there is data: only experimental evidence will determine the utility of differential weighting of expert judgment. Risk Anal 2015;35:21–6. https://doi.org/10.1111/risa.12345
  142. Haran U, Ritov I, Mellers BA. The role of actively open-minded thinking in information acquisition, accuracy, and calibration. Judgm Decis Mak 2013;8:188. https://doi.org/10.1037/t41728-000
  143. Plous S. A comparison of strategies for reducing interval overconfidence in group judgments. J Appl Psychol 1995;80:443. https://doi.org/10.1037/0021-9010.80.4.443
  144. Haran U, Moore DA, Morewedge CK. A simple remedy for overprecision in judgment. Judgm Decis Mak 2010;5:467–76. https://doi.org/10.1037/e615882011-200
  145. Speirs-Bridge A, Fidler F, McBride M, Flander L, Cumming G, Burgman M. Reducing overconfidence in the interval judgments of experts. Risk Anal 2010;30:512–23. https://doi.org/10.1111/j.1539-6924.2009.01337.x
  146. Teigen KH, Jørgensen M. When 90% confidence intervals are 50% certain: on the credibility of credible intervals. Appl Cogn Psychol 2005;19:455–75. https://doi.org/10.1002/acp.1085
  147. Winman A, Hansson P, Juslin P. Subjective probability intervals: how to reduce overconfidence by interval evaluation. J Exp Psychol Learn Mem Cogn 2004;30:1167–75. https://doi.org/10.1037/0278-7393.30.6.1167
  148. Ferretti V, Guney S, Montibeller G, von Winterfeldt D. Testing Best Practices to Reduce the Overconfidence Bias in Multi-Criteria Decision Analysis. 2016 49th Hawaii International Conference on System Sciences (HICSS), abstract no. 1381; pp. 1547–55. https://doi.org/10.1109/HICSS.2016.195
  149. Murphy AH, Winkler RL. Probability forecasts: a survey of national weather service forecasters. Bull Am Meteorol Soc 1974;55:1449–52. https://doi.org/10.1175/1520-0477(1974)055<1449:PFASON>2.0.CO;2
  150. Martin TG, Burgman MA, Fidler F, Kuhnert PM, Low-Choy S, McBride M, Mengersen K. Eliciting expert knowledge in conservation science. Conserv Biol 2012;26:29–38. https://doi.org/10.1111/j.1523-1739.2011.01806.x
  151. Prava VR, Clemen RT, Hobbs BF, Kenney MA. Partition dependence and carryover biases in subjective probability assessment surveys for continuous variables: model-based estimation and correction. Decis Anal 2016;13:51–67. https://doi.org/10.1287/deca.2015.0323
  152. Block RA, Harper DR. Overconfidence in estimation: testing the anchoring-and-adjustment hypothesis. Organ Behav Hum Decis Process 1991;49:188–207. https://doi.org/10.1016/0749-5978(91)90048-X
  153. Schall DL, Doll D, Mohnen A. Caution! Warnings as a useless countermeasure to reduce overconfidence? An experimental evaluation in light of enhanced and dynamic warning designs. J Behav Decis Mak 2017;30:347–58. https://doi.org/10.1002/bdm.1946
  154. Arkes HR. Costs and benefits of judgment errors: implications for debiasing. Psychol Bull 1991;110:486. https://doi.org/10.1037/0033-2909.110.3.486
  155. Welsh MB, Begg SH, Bratvold RB. Efficacy of Bias Awareness in Debiasing Oil and Gas Judgments. Proceedings of the Annual Meeting of the Cognitive Science Society 2007;29:1647–52.
  156. Morewedge CK, Yoon H, Scopelliti I, Symborski CW, Korris JH, Kassam KS. Debiasing decisions: improved decision making with a single training intervention. Policy Insights Behav Brain Sci 2015;2:129–40. https://doi.org/10.1177/2372732215600886
  157. Snyder M, Swann WB. Hypothesis-testing processes in social interaction. J Pers Soc Psychol 1978;36:1202. https://doi.org/10.1037/0022-3514.36.11.1202
  158. Nisbett RE, Ross L. Human Inference: Strategies and Shortcomings of Social Judgment. Upper Saddle River, NJ: Prentice Hall; 1980.
  159. Downs JS, Shafir E. Why some are perceived as more confident and more insecure, more reckless and more cautious, more trusting and more suspicious, than others: enriched and impoverished options in social judgment. Psychon Bull Rev 1999;6:598–610. https://doi.org/10.3758/BF03212968
  160. Abbas AE, Budescu DV, Yu H-T, Haggerty R. A comparison of two probability encoding methods: fixed probability vs. fixed variable values. Decis Anal 2008;5:190–202. https://doi.org/10.1287/deca.1080.0126
  161. Nemet GF, Anadon LD, Verdolini E. Quantifying the effects of expert selection and elicitation design on experts’ confidence in their judgments about future energy technologies. Risk Anal 2017;37:315–30. https://doi.org/10.1111/risa.12604
  162. Briggs A, Claxton K, Sculpher M. Decision Modelling for Health Economic Evaluation. Oxford: Oxford University Press; 2006.
  163. Brennan A, Chick SE, Davies R. A taxonomy of model structures for economic evaluation of health technologies. Health Econ 2006;15:1295–310. https://doi.org/10.1002/hec.1148
  164. Cao Q, Buskens E, Feenstra T, Jaarsma T, Hillege H, Postmus D. Continuous-time semi-Markov models in health economic decision making: an illustrative example in heart failure disease management. Med Decis Making 2016;36:59–71. https://doi.org/10.1177/0272989X15593080
  165. Karnon J, Stahl J, Brennan A, Caro JJ, Mar J, Möller J, ISPOR-SMDM Modeling Good Research Practices Task Force. Modeling using discrete event simulation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force – 4. Value Health 2012;15:821–7. https://doi.org/10.1016/j.jval.2012.04.013
  166. Davis S, Stevenson M, Tappenden P, Wailoo A. NICE DSU Technical Support Document 15: Cost-Effectiveness Modelling Using Patient-Level Simulation. London: National Institute for Health and Care Excellence; 2014.
  167. Collett D. Modelling Survival Data in Medical Research. Boca Raton, FL: CRC Press; 2015.
  168. Putter H, Fiocco M, Geskus RB. Tutorial in biostatistics: competing risks and multi-state models. Stat Med 2007;26:2389–430. https://doi.org/10.1002/sim.2712
  169. Welton NJ, Ades AE. Estimation of Markov chain transition probabilities and rates from fully and partially observed data: uncertainty propagation, evidence synthesis, and model calibration. Med Decis Making 2005;25:633–45. https://doi.org/10.1177/0272989X05282637
  170. Sharples LD, Taylor GI, Faddy M. A piecewise-homogeneous Markov chain process of lung transplantation. J Epidemiol Biostat 2001;6:349–55. https://doi.org/10.1080/13595220152601828
  171. Brard C, Le Teuff G, Le Deley MC, Hampson LV. Bayesian survival analysis in clinical trials: what methods are used in practice? Clin Trials 2017;14:78–87. https://doi.org/10.1177/1740774516673362
  172. Miksad RA, Gönen M, Lynch TJ, Roberts TG. Interpreting trial results in light of conflicting evidence: a Bayesian analysis of adjuvant chemotherapy for non-small-cell lung cancer. J Clin Oncol 2009;27:2245–52. https://doi.org/10.1200/JCO.2008.16.2586
  173. Johnson SR, Tomlinson GA, Hawker GA, Granton JT, Feldman BM. Methods to elicit beliefs for Bayesian priors: a systematic review. J Clin Epidemiol 2010;63:355–69. https://doi.org/10.1016/j.jclinepi.2009.06.003
  174. Hutton JL, Owens RG. Bayesian sample size calculations and prior beliefs about child sexual abuse. J R Stat Soc Series D 1993;42:399–404. https://doi.org/10.2307/2348473
  175. Johnson NP, Fisher RA, Braunholtz DA, Gillett WR, Lilford RJ. Survey of Australasian clinicians’ prior beliefs concerning lipiodol flushing as a treatment for infertility: a Bayesian study. Aust N Z J Obstet Gynaecol 2006;46:298–304. https://doi.org/10.1111/j.1479-828X.2006.00596.x
  176. Lilford R. Formal measurement of clinical uncertainty: prelude to a trial in perinatal medicine. The Fetal Compromise Group. BMJ 1994;308:111–12. https://doi.org/10.1136/bmj.308.6921.111
  177. Wilson ECF, Usher-Smith JA, Emery J, Corrie PG, Walter FM. Expert elicitation of multinomial probabilities for decision-analytic modeling: an application to rates of disease progression in undiagnosed and untreated melanoma. Value Health 2018;21:669–76. https://doi.org/10.1016/j.jval.2017.10.009
  178. Vargas C, Bilbeny N, Balmaceda C, Rodríguez MF, Zitko P, Rojas R, et al. Costs and consequences of chronic pain due to musculoskeletal disorders from a health system perspective in Chile. Pain Rep 2018;3:e656. https://doi.org/10.1097/PR9.0000000000000656
  179. Ren S, Oakley JE. Assurance calculations for planning clinical trials with time-to-event outcomes. Stat Med 2014;33:31–45. https://doi.org/10.1002/sim.5916
  180. Chaloner K, Rhame FS. Quantifying and documenting prior beliefs in clinical trials. Stat Med 2001;20:581–600. https://doi.org/10.1002/sim.694
  181. Chaloner K, Church T, Louis TA, Matts JP. Graphical elicitation of a prior distribution for a clinical trial. J R Stat Soc Series D 1993;42:341–53. https://doi.org/10.2307/2348469
  182. Freedman LS, Spiegelhalter DJ. The assessment of the subjective opinion and its use in relation to stopping rules for clinical trials. J R Stat Soc Series D 1983;32:153–60. https://doi.org/10.2307/2987606
  183. Spiegelhalter DJ, Freedman LS, Parmar MK. Applying Bayesian ideas in drug development and clinical trials. Stat Med 1993;12:1501–11. https://doi.org/10.1002/sim.4780121516
  184. Parmar MK, Spiegelhalter DJ, Freedman LS. The CHART trials: Bayesian design and monitoring in practice. CHART Steering Committee. Stat Med 1994;13:1297–312. https://doi.org/10.1002/sim.4780131304
  185. Parmar MK, Griffiths GO, Spiegelhalter DJ, Souhami RL, Altman DG, van der Scheuren E, CHART steering committee. Monitoring of large randomised clinical trials: a new approach with Bayesian methods. Lancet 2001;358:375–81. https://doi.org/10.1016/S0140-6736(01)05558-1
  186. White IR, Pocock SJ, Wang D. Eliciting and using expert opinions about influence of patient characteristics on treatment effects: a Bayesian analysis of the CHARM trials. Stat Med 2005;24:3805–21. https://doi.org/10.1002/sim.2420
  187. Singpurwalla ND. An interactive PC-based procedure for reliability assessment incorporating expert opinion and survival data. J Am Stat Assoc 1988;83:43–51. https://doi.org/10.2307/2288917
  188. Claxton KP, Sculpher MJ. Using value of information analysis to prioritise health research: some lessons from recent UK experience. PharmacoEconomics 2006;24:1055–68. https://doi.org/10.2165/00019053-200624110-00003
  189. Wang H, Dash D, Druzdzel MJ. A method for evaluating elicitation schemes for probabilistic models. IEEE Trans Syst Man Cybern B Cybern 2002;32:38–43. https://doi.org/10.1109/3477.979958
  190. Chang W, Cheng J, Allaire LL, Xie Y, McPherson J. Shiny: Web Application Framework for R. R package version 1.4.0. URL: https://CRAN.R-project.org/package=shiny
  191. Cokely ET, Galesic M, Schulz E, Ghazal S, Garcia-Retamero R. Measuring risk literacy: the Berlin Numeracy Test. Judgm Decis Mak 2012;7:25–47. https://doi.org/10.1037/t45862-000
  192. Scott SG, Bruce RA. Decision-making style: the development and assessment of a new measure. Educ Psychol Meas 1995;55:818–31. https://doi.org/10.1177/0013164495055005017
  193. Rowe G, Wright G. The Delphi technique as a forecasting tool: issues and analysis. Int J Forecast 1999;15:353–75. https://doi.org/10.1016/S0169-2070(99)00018-7
  194. Tetlock P, Gardner D. Superforecasting: The Art and Science of Prediction. London: Random House; 2016.
  195. Cooke R. Elicitation in the Classical Model. In Dias C, Morton A, Quigley J, editors. Elicitation. The Science and Art of Structuring Judgement. New York, NY: Springer; 2017.
  196. Spiegelhalter DJ, Myles JP, Jones DR, Abrams KR. Bayesian methods in health technology assessment: a review. Health Technol Assess 2000;4(38). https://doi.org/10.3310/hta4380
  197. Harnan SE, Tappenden P, Essat M, Gomersall T, Minton J, Wong R, et al. Measurement of exhaled nitric oxide concentration in asthma: a systematic review and economic evaluation of NIOX MINO, NIOX VERO and NObreath. Health Technol Assess 2015;19(82). https://doi.org/10.3310/hta19820
  198. Soares M, Claxton K, Sculpher M. Health Opportunity Costs in the NHS: Assessing the Implications of Uncertainty Using Elicitation Methods with Experts. Sheffield and York: Universities of Sheffield and York; 2017. https://doi.org/10.1177/0272989X20916450
