Eur J Health Econ. 2019 Apr 20;20(6):891–918. doi: 10.1007/s10198-019-01052-3

Multi-criteria decision analysis for health technology assessment: addressing methodological challenges to improve the state of the art

Mónica D Oliveira 1, Inês Mataloto 1, Panos Kanavos 2
PMCID: PMC6652169  PMID: 31006056

Abstract

Background

Multi-criteria decision analysis (MCDA) concepts, models and tools have been used increasingly in health technology assessment (HTA), with several studies pointing out practical and theoretical issues related to their use. This study provides a critical review of published studies on MCDA in the context of HTA by assessing their methodological quality and summarising methodological challenges.

Methods

A systematic review was conducted to identify studies discussing, developing or reviewing the use of MCDA in HTA using aggregation approaches. Studies were classified according to publication time and type, country of study, technology type and study type. The PROACTIVE-S approach was constructed and used to analyse methodological quality. Challenges and limitations reported in eligible studies were collected and summarised; this was followed by a critical discussion on research requirements to address the identified challenges.

Results

129 journal articles were eligible for review, 56% of which were published in 2015–2017; 42% focused on pharmaceuticals; 36%, 26% and 18% reported model applications, analyses of MCDA implementation issues, and proposed frameworks, respectively. Poor compliance with good methodological practice (< 25% of studies complying) was found regarding behavioural analyses, discussion of model assumptions and uncertainties, modelling of value functions, and dealing with judgment inconsistencies. The five most frequently reported challenges related to evidence and data synthesis; value system differences and participant selection issues; participant difficulties; balancing methodological complexity against resources; and criteria and attribute modelling. A critical discussion on ways to address these challenges ensues.

Discussion

Results highlight the need for advancement in robust methodologies, procedures and tools to improve methodological quality of MCDA in HTA studies. Research pathways include developing new model features, good practice guidelines, technologies to enable participation and behavioural research.

Keywords: Multi-criteria decision analysis, Health technology assessment, Systematic review, Methodological quality, Methodological challenges, MCDA modelling

Background

In a context of population ageing, epidemiological change, technological advances, rising patient expectations and budget constraints, health systems face considerable challenges to improve access to innovation, enhance the rationality of decision-making processes, and improve their efficiency and effectiveness. In this context, health technology assessment (HTA) plays a critical role by bringing together evidence to help healthcare decision-makers understand the relative value of health technologies [1, 2].

As a multidisciplinary field involving theoretical and practice-oriented research to assess the direct and indirect consequences of health technology use [3], HTA is currently challenged in various ways. First, despite the increased use of HTA in many jurisdictions [4], a number of new health technologies—for instance biomedical technologies—are increasingly approved and adopted based on limited evidence on safety and effectiveness, with assessment under real-world conditions being rare and technologies being used for little or no additional health gain [5]. Second, effective use of HTA requires the involvement of health stakeholders and the implementation of HTA findings, which is far from happening on a routine basis [6]. Third, HTA needs to resolve issues related to the deployment of existing evaluation methods and processes (with some methodological issues, such as the extent to which cost-effectiveness analysis is appropriate to evaluate all types of health technologies, remaining unresolved) [6, 7] and to address the lack of good-quality evidence for many evaluation contexts and technologies [8]. Fourth, for HTA to have an impact there is a need to link and align decision processes at distinct health system levels, as decisions at these levels inter-relate [8]. Nevertheless, health technology decision-making by HTA agencies, hospitals and other organisations often remains unconnected. Finally, HTA needs to go beyond the evaluation of pharmaceuticals, with the literature acknowledging that other technologies (such as medical devices and health information systems), or, indeed, the broader space of health care interventions, pose additional challenges from a methodological and practical perspective [9].

At the core of HTA is the task of measuring additional value, aligned with the spirit that ‘you manage what you value’ [10] and the promotion of value for money in health systems [2]. Most literature in the health field has focused on traditional techniques based on the measurement of value as captured by comparative effectiveness, with effectiveness being centred on health outcomes or on health utilities. Emerging literature, however, has been exploring alternative and more comprehensive ways to measure value. In line with views that other dimensions (for instance equity and innovation) are relevant for decision-making on health technology adoption, with a sense of inevitability in considering criteria other than clinical- and cost-effectiveness [11], and with evidence suggesting that HTA agencies in practice consider other aspects in adoption, reimbursement and pricing decisions [12, 13], several studies have been exploring the use of multi-criteria decision analysis (MCDA) concepts in HTA.

Framed within decision analysis, MCDA operates within a paradigm of rationality, as defined by Simon [14], offering “a philosophy, articulated by a set of logical axioms and a methodology and collection of systematic procedures, based on those axioms” [15]. As a sound approach with theoretical foundations, MCDA can be seen as “an extension of decision theory that covers any decision with multiple objectives, a methodology for appraising alternatives on individual, often conflicting criteria, and combining them into one overall appraisal” [16]. As a field it operates as a toolbox offering a wide range of concepts, models and tools and a clear framework for thinking about resource allocation decisions and a common language [17].

The potential of MCDA in the health field has been discussed widely; such discussion has led to two taskforce reports from the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) [18, 19] and to several literature reviews [20–23]. The usefulness of MCDA in HTA has been supported in a number of other studies [11, 24]. Clear arguments for its use have been its alignment with value-based health care [25]; its encompassing nature and ability to account for health stakeholder preferences and values [26]; its transparent, synthesised reporting format [27]; its contribution in helping decision-makers to understand technology value and data gaps [21] and differences between evidence and value judgments [19]; its easily understandable outputs [24]; and the underlying link with the accountability for reasonableness principle [28]. MCDA has been described as a commonly used approach for priority-setting [29], and a number of organisations, including the European Medicines Agency and the Institute for Quality and Efficiency in Health Care (IQWiG) in Germany, have shown interest in and explored the use of MCDA methods in drug regulatory science and HTA, respectively [30].

Although MCDA provides theoretically sound methods to balance costs, benefits and risks, and multicriteria models have been seen as intuitive by evaluators, several studies [19, 30–33] have pointed to a number of shortcomings. First, publications under the ‘MCDA in HTA’ umbrella have sometimes used methods without a sound basis or made inadequate use of existing methods [32, 33]. Second, studies have recognised the need to develop methods, knowledge and guidelines in the area, for instance, to address the use of inappropriate weighting procedures (not accounting for attribute ranges [33] is the most common mistake reported in the decision analysis literature [34]), the lack of testing for the adequacy of additive model structures [32], and the need for methodological and practical guidelines to assist MCDA users [19]. Third, most articles in the literature have reported pilot and exploratory studies of MCDA in HTA, with few studies reporting successful implementations of MCDA models and results in the context of HTA, and with some studies reporting cognitive difficulties for participants in the process of model-building [19].
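To make the range issue concrete, the additive value model underlying most of the applications reviewed below can be written as follows (a standard textbook formulation, not a formula taken from any particular study):

```latex
V(a) = \sum_{i=1}^{n} w_i \, v_i(a), \qquad
\sum_{i=1}^{n} w_i = 1, \qquad
v_i(\mathrm{worst}_i) = 0, \quad v_i(\mathrm{best}_i) = 100 .
```

Because each partial value function v_i is anchored on the worst and best performances considered on criterion i, the weight w_i is a scaling constant that converts a swing over that range into overall value; halving the range on a criterion should, other things equal, roughly halve its weight. Weights elicited as abstract ‘importance’ ratings, with no reference to these ranges, therefore have no defined meaning within the model—which is precisely the mistake flagged above.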

Despite these shortcomings, there is no comprehensive analysis of the extent to which methodological issues related to the application of MCDA in the context of HTA affect the credibility and policy-usefulness of the published literature, nor of the range of challenges and limitations that need to be addressed by MCDA in this context. In light of this, the aim of the study is fourfold: first, to provide a critical review of published studies in the field; second, by applying a framework, to analyse the quality of MCDA studies in the context of HTA from a methodological perspective (as distinct from a policy perspective that could have been adopted as an alternative); third, to summarise challenges and limitations reported in relevant studies; and, fourth, to reflect on how MCDA applied in the context of HTA can overcome these challenges and limitations. The study contributes to the literature in four ways: first, it provides a critical appraisal of studies applying MCDA in the context of HTA, their scope and trends; second, it defines and applies an approach to assess the methodological quality of MCDA model applications; third, it identifies the modelling steps in which improvements are needed; and, fourth, it reports on a number of methodological challenges and limitations that need to be addressed and discusses how future studies can overcome them.

The study is organized as follows: the “Methods” section outlines the review protocol and the methods used in the analysis of eligible studies, discusses the methodological quality framework and the process followed to collect and summarise challenges and limitations. The next two sections report the results, and discuss the results and reflect upon how MCDA in HTA can address the identified challenges and limitations. Finally, the last section concludes.

Methods

Review protocol and studies’ analyses

We conducted a systematic search on 18 September 2017 across the following databases: PubMed, EBSCO, Web of Science, ScienceDirect and SAGE. A search protocol was developed and applied to the title and abstract fields, with a keyword combination recognising the range of terminological variations regarding MCDA and HTA (for instance, similar designations such as multi- vs. multiple, criteria vs. attribute, decision analysis vs. decision aiding vs. decision theory, and HTA vs. benefit-risk assessment). The search protocol, including all combinations used, is shown in Appendix A. The literature search was restricted to journal articles and book chapters written in English, with no time constraints applied. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [35] guidelines were taken into consideration in the development of the study.

Duplicates were removed from the collected studies, and titles and abstracts were screened by two reviewers (MO and IM) applying the following predefined inclusion criterion: studies had to discuss, develop or review the use of multi-criteria analysis (focusing on aggregation approaches only, i.e. those following the strategy of first modelling components of value explicitly and then aggregating these components) for the evaluation of health technologies administered by the health sector. The review took a broad perspective on MCDA, as a strict view would require considering only MCDA studies respecting decision theory principles. Studies explicitly structuring criteria were included if they were presented as a step towards MCDA model development. Similarly, studies using non-aggregation approaches (e.g. discrete choice experiments) were included only if they provided data to be used as an input to multicriteria aggregation models. Finally, other systematic literature reviews identified as part of the search strategy were included in the eligible studies.

The following exclusion criteria were applied: studies focusing on technologies not strictly classified as health technologies or not administered by the health sector, such as water cleaning technologies, medical waste, solid waste, environmental health, and pure or general risk assessment; studies in which multiple criteria were not directly applied to an HTA context, including safety and efficacy assessments used for purposes other than marketing authorisation; retrospective evaluation studies (strictly inferring criteria from previous decisions); decision quality and behavioural decision analysis studies; clinical reasoning and descriptive clinical decision-making studies; studies with a minor MCDA component, namely those combining MCDA with other modelling approaches and those discussing several evaluation approaches with little MCDA explanation or discussion, or doing little more than mentioning MCDA as one evaluation technique among others; studies recommending the use of MCDA without a detailed discussion of its rationale; MCDA patient preference assessment studies not designed to directly compare health technologies (e.g. those building quality-of-life utility scores); studies in languages other than English; and conference proceedings not adhering to the implemented protocol.

The full text of articles considered eligible was obtained from public sources or requested from the authors when not otherwise available. Articles for which the full text could not be obtained were removed. Supplementary articles and book chapters that were not found through the protocol but were identified during the initial review of studies, deemed to be within the protocol scope and published before the end of 2017, were added.

Analysis of eligible studies

The studies included for systematic review were classified with regard to time of publication, type of journal, country of study, health technology focus and type of study. Since the number of studies covering the scope of this review has increased considerably since 2008 and we aimed to capture recent trends (while avoiding periods with small numbers of studies, year-on-year fluctuations and non-comparable periods), 3-year periods from 2009 onwards were considered, resulting in four time windows: up to 2008, 2009–2011, 2012–2014 and 2015–2017.

Regarding the health technology focus, the following categories were considered: pharmaceuticals, vaccines, medical devices, nanotechnologies, general health technologies (e.g. medical devices or medical equipment), and health interventions (e.g. assessing tobacco control vs. promoting physical activity in the workplace vs. prescribing to control blood pressure). Studies not focusing on a specific type of health technology were classified as general health technologies; and studies centred on health interventions, strategies or programmes to promote individual or community health benefits, health equity or healthy behaviours were classified as health interventions.

Publications were also classified according to the type of journal in which they were published: clinical, non-clinical (but health-related), operational research/management science, or interdisciplinary.

Regarding the country of study, studies were classified according to the institutional location of the first author.

Regarding the type of study, studies were categorised according to their main focus into the following categories: methodological or conceptual frameworks, analyses of issues, systematic literature reviews, structuring criteria studies, modelling approach studies, and model applications. Frameworks were defined as studies suggesting the use of MCDA methods and tools for HTA and defining guidelines or procedures for their use; analyses of issues were studies calling attention to and discussing issues related to the development and use of MCDA in HTA; structuring criteria studies were those analysing the evaluation dimensions to be used within the scope of MCDA modelling; modelling approach studies were those developing MCDA approaches to address HTA issues; and, finally, model applications were studies reporting MCDA evaluation models to compare health technologies in practice. Within each type of study, the focus of the reported research was analysed.

Framework for analysing methodological quality

As no established approach for assessing the methodological quality of MCDA studies has been reported [36], this study developed the PROACTIVE-S approach for this purpose, as an enhancement of the PROACTIVE (Problem, Reframe, Objective, Alternatives, Consequences and Chances, Trade-Offs, Integrate, Value, and Explore and Evaluate) approach [37], which was itself inspired by the PrOACT-URL approach (PROblem, Objectives, Alternatives, Consequences, Trade-offs, Uncertainty, Risk attitudes, and Linked decisions) [38]. PROACTIVE specifies each of these components as a modelling step in which particular tools may be used, and builds upon the eight elements for sound decision processes defined in the Smart Choices PrOACT-URL approach [38]. Both PROACTIVE and PrOACT-URL are explicit processes that require a clear and deep understanding by decision-makers before they commit to a decision [38] and are aligned with a value-focused thinking perspective, which is specifically useful to: (a) guide strategic thinking, (b) facilitate collective decision-making, (c) uncover hidden objectives, (d) direct the collection of information and (e) improve communication [34]. In comparison to PrOACT-URL, PROACTIVE makes more explicit the role of evidence, values, uncertainty and the integration of these components [37], which are deemed particularly relevant in the context of HTA.

To produce an approach that can be used to assess the methodological quality of MCDA studies against good practice considerations, PROACTIVE-S was developed (see Table 1) by adapting and enhancing PROACTIVE in two ways:

  1. adjusting some steps and specifying each step as a set of sub-steps that detail good practice considerations based on the multi-criteria decision theory, value measurement and value-focused thinking literature [16, 34, 39–41] and on studies reflecting good practice regarding the use of MCDA in health [18, 19, 42–44];

  2. adding a “social step” (S) to ensure rigor, reliability and potential replicability of MCDA in HTA studies and understand participants’ attitudes and consensus regarding the constructed models. Adding this step is aligned with the view that MCDA modelling inherently follows a socio-technical approach that “combines technical elements of MCDA with social aspects of decision conferencing, resulting in a tested approach to working with key players that creates shared understanding of the issues, a sense of common purpose and commitment to the way forward” [45] and builds upon the socio-technical design principles proposed by Cherns [46]. Accordingly, social processes—that can encompass face-to-face, non-face-to-face processes or a combination of both—need to be properly designed and tested within MCDA for HTA.

Table 1.

Defining the PROACTIVE-S approach to analyse the methodological quality of evaluation models reported in the “MCDA in HTA” literature.

Source: the authors

PROACTIVE-S step Step scope Sub-step good practice considerations—the extent to which the study… Sub-step abbreviation
Problem Define the problem …describes the evaluation context, the decision goal and reflects upon the type of evaluation problem (decision problematique) [40] P1
Reframe Reframe relevant multiple perspectives …considers/discusses the perspectives of relevant stakeholders and key players, clarifies the perspective of the problem owner [203] and discusses whose views should be considered in model building [43] R1
Objective Focus on the objectives …focuses on the objectives to be achieved [34] (rather than on focusing upon indicators and criteria) O1
Alternatives Consider all relevant alternatives …defines and discusses the relevant health technologies to be evaluated and linked decisions [38] A1
Consequences and chances Model consequences, uncertainty and lack of data …assesses relevant consequences in adequate attributes that comply with required properties (measurable, operational, understandable) [34, 47] and organises options’ consequences into a performance matrix [18] C1
…discusses data sources and issues [42], as well as consequences’ uncertainty [41] C2
Trade-offs Understand value trade-offs …discusses trade-offs among competing objectives or criteria [34] T1
Integrate Integrate the evidence and values …distinguishes between evidence, options’ performance and value information in model building [18] I1
Value Build a value model and maximize value …discusses model respect for exhaustiveness, non-redundancy and non-overlap properties in additive models or other relevant properties for other models [34] V1
…discusses preference independence conditions and presents the underlying model structure (e.g. additive model formula) [39, 41] V2
…uses methods for model building that comply with multiattribute decision theory [16, 39] V3
…defines mechanisms to detect and correct inconsistencies [19, 48] V4
…uses procedures to model value functions [34, 41] V5
…uses weighting procedures that utilize weighting references [34], explaining the rationale for choosing those references [40, 48] V6
Explore and Evaluate Explore assumptions and evaluate uncertainty …makes explicit the model assumptions and the relevant uncertainties for the evaluation context (e.g. imprecise consequences, variable consequences, quality of evidence, structural uncertainty and judgmental uncertainty) [19] E1
…tests the consequences of model assumptions and uncertainty (e.g. sensitivity, robustness and/or scenario analyses) [45] E2
…discusses model validation and requisiteness [45, 49] and questions model results [41] E3
…uses computer technology to display results and motivate discussion [50, 51] E4
Social Build and implement a socio-technical design …the study is replicable by making explicit: model building participants, participatory scope and format, participatory timeline, and protocols of questioning [48, 52] S1
…takes into consideration behavioural aspects, such as cognitive burden and potential biases [53–55] S2
…promotes participants’ reflection and iterativeness in model development while promoting consensus [45] and/or reflects on ways to combine participants’ assessments [52] S3

The following steps from PROACTIVE are adjusted and divided into several sub-steps, and the “S” extra step is added:

  • Consequences and chances need to consider not only aspects related to the proper construction of attributes [34, 47] that serve to characterise technology performance [18], but also to discuss data sources, issues [42] and uncertainty [41].

  • Value measurement requires that the properties needed for using an additive model (or other model structures) are sustained and reflected upon [34, 39, 41], that methods complying with multi-attribute decision theory are adopted [16, 39], that mechanisms to detect and correct inconsistencies are utilized [19, 48], that procedures to model value functions are used [34, 41], and that weighting procedures making use of weighting references are used [34], with the rationale for choosing those references explained [40, 48].

  • Explore and evaluate requires reflecting upon model assumptions and uncertainties [19] and testing the consequences of assumptions and uncertainties [45] (see the sensitivity analysis sketch after this list), discussing model validation and requisiteness issues [45, 49], questioning model results [41], and using decision support technology (e.g. IT) to display results and motivate discussion [50, 51].

  • The added social component (S) requires that the social process is described in detail [48, 52] (for instance for replicability and to enable result interpretation), takes into consideration behavioural aspects [53–55], promotes participants’ reflection and consensus [45] and/or reflects on how to combine participant assessments [52].
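As an illustration of the ‘explore and evaluate’ step, the sketch below runs a one-way sensitivity analysis on the weights of a two-criterion additive model; the scores, criterion names and weights are purely illustrative and are not drawn from any reviewed study.

```python
import numpy as np

# Illustrative scores on a 0-100 value scale for two technologies
# on two criteria: (clinical benefit, affordability).
scores = np.array([
    [80.0, 30.0],   # technology X
    [55.0, 70.0],   # technology Y
])

# Vary the weight on clinical benefit from 0 to 1 and recompute overall value.
for w_benefit in np.linspace(0.0, 1.0, 11):
    w = np.array([w_benefit, 1.0 - w_benefit])
    vx, vy = scores @ w
    print(f"w_benefit={w_benefit:.1f}  V(X)={vx:5.1f}  V(Y)={vy:5.1f}  "
          f"preferred={'X' if vx > vy else 'Y'}")
```

In this toy example the preferred technology switches once the weight on clinical benefit exceeds 40/65 ≈ 0.62; surfacing such crossover thresholds is exactly what testing the consequences of assumptions and uncertainty (sub-step E2) asks for.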

Protocol for identifying limitations and challenges

To collect information on limitations and challenges concerning the use of MCDA models in the context of HTA, articles were searched for the words “limitat*”, “challenge*”, “barrier*”, “difficult*”, “pitfall*”, “disadvantage*”, “accept*”, “implement*”, and “concern*”. Only issues related to the use of MCDA in HTA were collected (for instance, general HTA concerns not specifically related to MCDA were not considered), and these were subsequently clustered with similar or related limitations and challenges. The 12 most frequently cited clusters of limitations and challenges were summarised and the studies expressing these concerns were identified.
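For illustration, this kind of stemmed wildcard search is straightforward to reproduce in code; in the sketch below, the `articles` folder of plain-text full texts is a hypothetical stand-in for however the article files are actually stored.

```python
import re
from pathlib import Path

# Word stems from the protocol; each trailing "*" becomes the regex suffix \w*.
STEMS = ["limitat", "challenge", "barrier", "difficult", "pitfall",
         "disadvantage", "accept", "implement", "concern"]
PATTERN = re.compile(r"\b(" + "|".join(STEMS) + r")\w*", re.IGNORECASE)

def find_stem_hits(text):
    """Return every word in `text` matching one of the protocol stems."""
    return [m.group(0) for m in PATTERN.finditer(text)]

# Hypothetical usage over a folder of full texts saved as .txt files.
for article in Path("articles").glob("*.txt"):
    hits = find_stem_hits(article.read_text(encoding="utf-8"))
    if hits:
        print(article.name, sorted({h.lower() for h in hits}))
```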

Results

Protocol results

The search protocol yielded a total of 763 studies, of which 403 remained after the removal of duplicates. Screening at title and abstract level eliminated 283 studies, leaving 120 studies to assess at full-text level. Among these, three full texts could not be obtained (from Thai and Polish journals), three studies were considered out of scope on grounds of content, two studies were conference proceedings and one article had been retracted. A further 18 studies deemed relevant (identified during the initial review of studies and falling within the protocol scope) were added, resulting in a final sample of 129 studies included in the systematic review. The results of the literature selection process are presented as a PRISMA flowchart in Fig. 1.

Fig. 1.

Fig. 1

PRISMA flowchart describing study selection

Study characteristics

Period of publication: The 129 studies meeting the inclusion criteria were published between 1990 and 2017, with an upward publication trend being observed (Fig. 2a). Only 5% of the studies (seven studies) were published up to 2008, whereas the period between 2015 and 2017 accounted for 56% of the study sample (72). In the interim, 15 and 35 studies were published in the 2009–2011 and 2012–2014 periods, respectively.

Fig. 2.

Fig. 2

a Number of article publications over time; b number of publications by type; c number of publications according to health technology focus.

Source: the authors from the literature

Type of publication: The studies were published in 59 different journals that cover a wide range of perspectives. Value in Health published the largest number of studies (17), followed by the International Journal of Technology Assessment in Health Care (9) and Pharmacoeconomics (8). Sixty-eight percent of all studies were published in health (non-clinical), 26% in clinical, 5% in operational research/management science and 1% in interdisciplinary journals.

Study country: First author institutions spanned 27 countries, with the most frequent locations being the UK (30 studies), followed by the Netherlands (17), Canada (17), the US (16), Germany (8), Italy (7) and Hungary (4). Twenty other countries accounted for three or fewer studies each.

Health technology focus: Based on Fig. 2b, 44 studies (42%) focused on pharmaceuticals (the majority analysing pharmaceuticals in general rather than pharmaceuticals for a specific therapeutic indication), with 29 studies investigating general health technologies, 25 studying health interventions, 16 studying medical devices (most comparing different devices), 3 researching vaccines (all exploring relevant criteria to assess vaccines) and 1 considering a nanotechnology.

Type of study: According to their main focus, 46 studies (36%) developed models to evaluate health technologies in practice; 33 (26%) analysed MCDA implementation issues; 23 (18%) proposed frameworks to support MCDA in HTA implementation; 17 (13%) structured evaluation criteria; 7 (5%) explored modelling approaches; and 3 (2%) provided systematic literature reviews (Fig. 2c). The content of each of these study groups is discussed below.

(a) Model application studies

The 46 model application studies evaluated pharmaceuticals (24 studies), medical devices (12), health interventions (9) and general health technologies (1). Pharmaceuticals constituted the subject matter of investigations in the following disease areas: rare diseases (none of which contained orphan cancer indications) [56–60], cancer [61–63], depression [64, 65], cerebrovascular diseases [66, 67], pain relief [68, 69], age-related macular degeneration [70], overactive bladder [71], idiopathic short stature [72], Turner syndrome [73], psoriasis [74], hypercholesterolemia [75], chronic obstructive pulmonary disease (COPD) [76] and relapsing–remitting multiple sclerosis (RRMS) [77]. Two studies developed models to compare pharmaceuticals targeting several diseases [27, 78].

Medical device studies included imaging, surgical and screening approaches, notably CT, MRI and ultrasound devices [79], MRI equipment [80], imaging techniques and software platforms for cerebrovascular diseases [81], the photoacoustic mammoscope technique for breast cancer diagnosis [82], surgical approaches for cam femoroacetabular impingement [83], a non-fusion surgery approach for adolescent idiopathic scoliosis [84], surgical robotic innovations [85, 86], a reusable pedicle screw instrument kit for lumbar arthrodesis [87], a pulmonary heart sensor [88], drug-eluting beads for transcatheter arterial chemoembolization [89] and a screening test for cervical cancer [90].

Evaluated health interventions included public health programmes [91], primary care programmes [92, 93], community care programmes [94], screening strategies [95], mental health services [96], smoking cessation interventions [97] and types of medical care to be covered [98, 99]. One study evaluated both pharmaceuticals and surgical technologies for priority setting purposes [100].

Most applications in this group (34 studies, 74%) aimed to select the most valuable technology, although other purposes have been reported: ranking technologies (6), allocating available resources to technologies (5), and assigning to reimbursement categories (1). With regards to social processes, 39 studies (85%) reported the use of participative methods, 32 (70%) adopted face-to-face approaches for model-building, including decision conferences and workshops, whilst 7 (15%) used web-based formats; 10 (22%) used questionnaires/surveys, 2 (4%) each used interviews and Delphi processes; 7 (15%) studies developed models based upon authors’ opinion or did not detail if and how participatory processes took place; 18 (39%) studies dealt with aggregation of individual answers within modelling.

The most frequently used procedures for weighting criteria were the analytical hierarchy process (AHP) (12), quantitative swing weighting (9), point-scaling (8), 100-point allocation (4), and qualitative swing weighting with the measuring attractiveness by a categorical-based evaluation technique (MACBETH) (3); other procedures included the simple multi-attribute rating technique (SMART), SMART/SMARTS (SMART with Swings)/SMARTER (SMART Extended to Ranking), Borda points, equal weighting and weighting calibration according to fatal events. Studies reported building value scales with point systems (11 studies), direct rating (7), MACBETH (3), AHP (2), and selecting functions (3), including one selecting linear value functions; 18 studies either did not provide information about value scoring issues or implicitly opted for not modelling value scales. Only one study reported a non-additive model, having used a multiplicative model [96]. Five studies reported that results from MCDA modelling had practical consequences for decision-making.
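For concreteness, quantitative swing weighting—the second most used procedure above—derives weights from the decision-maker's rating of the worst-to-best ‘swing’ on each criterion, with the most valued swing anchored at 100. A minimal sketch with illustrative numbers (not taken from any reviewed study):

```python
# Elicited swing ratings: the most valued swing (effectiveness) is anchored
# at 100, and the other swings are rated relative to it.
swing_ratings = {"effectiveness": 100, "safety": 60, "cost": 40}

# Normalise the ratings so the weights sum to 1.
total = sum(swing_ratings.values())
weights = {criterion: rating / total for criterion, rating in swing_ratings.items()}
print(weights)  # {'effectiveness': 0.5, 'safety': 0.3, 'cost': 0.2}
```

Because the ratings refer to swings over the actual attribute ranges, the resulting weights carry the range-dependent meaning required by the additive model discussed in the Background.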

(b) Analysis studies

The 33 studies in this category discussed a range of MCDA issues related to HTA adoption, notably (a) raising methodological issues, (b) analysing the relevance of MCDA in HTA, (c) providing a critique on the use of MCDA in HTA and (d) discussing aspects related to its practical use in the HTA context.

With regard to methodological issues on the use of MCDA in HTA, studies addressed a range of issues: first, they provided an overview of MCDA methods and their potential for supporting transparent and consistent health care decision-making [18]; second, they analysed the most common types of MCDA models applied in HTA and identified practical issues in their use [24]; third, they discussed requirements and defined steps for a Universal Methodology for Benefit-Risk Assessment (UMBRA) [101]; fourth, they compared MCDA with other methods for considering equity–efficiency trade-offs in assessing breast cancer control options [102], for evaluating medical devices and imaging technologies [103] and for comparing patient preferences [104]; fifth, they discussed MCDA as a method to integrate economic evidence into clinical practice guidelines [33]; sixth, they described (structured) evidence-to-decision frameworks as an alternative or complement to MCDA models [105]; and, finally, they reported 16 best practice principles for MCDA practice in HTA, with emphasis on facilitated group modelling [44].

Regarding the relevance of using MCDA in HTA, studies, first, discussed the use of MCDA methods to overcome barriers in priority setting, particularly by accounting for the views of healthcare stakeholders [106]; second, they recommended MCDA for dealing with stroke interventions requirements [107]; third, they discussed MCDA usefulness in the context of personalized healthcare by dealing with nuanced and context-specific information that decision-makers would typically require [108]; fourth, they suggested MCDA as a comprehensive tool for dealing with distinct criteria in priority setting or for designing healthcare coverage packages [109]; fifth, they suggested MCDA to value pharmaceuticals [110], arguing for its specific suitability in the rare diseases context [111, 112]; sixth, they discussed the potential role of MCDA to implement a hedonic-pricing approach by bringing together multiple perspectives on the benefits (and potential harms) of medical technologies [113]; seventh, they suggested MCDA to operationalise value-based pricing and aggregate elements of value that are not well represented by weighted quality adjusted life years in a pragmatic way [25]; eighth, they discussed the relevance of including evidence on patient preferences in MCDA [114]; ninth, they suggested MCDA as a tool to include all the relevant criteria that impact on decision-making within transparent processes in Canada [115] and analysed the benefits and challenges regarding its use in that context [116]; tenth, they suggested MCDA to explicitly model non-economic criteria in pricing and reimbursement decisions in Central and Eastern Europe [117]; and, finally, they suggested MCDA as a methodological approach to increase efficiency, rationality and legitimacy in resource allocation decisions [118].

A sizable group of studies provided a critical appraisal of the use of MCDA in HTA. Studies in this group, first, argued that MCDA can make decision-making too complex or too mechanistic, removing the element of deliberation [119]; second, they showed that MCDA, similarly to other economic evaluation methods, fails to incorporate opportunity costs [120]; third, they warned of methodological flaws in current applications [121]; fourth, they warned of the risk of MCDA adding complication, since its influence on decision-makers and stakeholders was described as unclear in pharmaceutical pricing and reimbursement contexts [122]; fifth, they suggested the treacle test (can a winning intervention be completely ineffective?) and the smallpox test (can a winning intervention be for a disease that no one suffers from?) to raise questions about the adequacy of evaluation model structures reported in the field [32]; and, sixth, they raised issues about the validity and reliability of MCDA for evaluating pharmaceuticals and provided suggestions for improving methodological quality [31].

Finally, with regard to the practical use of MCDA in HTA, studies drew positive conclusions from first experiences of applying MCDA to prioritise health interventions [123]; discussed the implementation of HTA in Latin American countries, concluding that although MCDA has been applied in only a few cases, most health stakeholders declared a preference for its use [124]; discussed its limited use in Hungary, but also the relevance of developing it there [125]; and reported that stakeholders in a case study favoured structured approaches to integrating patient views [126].

(c) Framework studies

Twenty-three studies explored the use of MCDA methods and tools for HTA and defined related guidelines or procedures for multiple decision-making and evaluation contexts. A first group of framework studies focused on the use of MCDA in HTA for specific health technologies: to assess new medical technologies [26, 127]; and to evaluate pharmaceuticals, focusing on new pharmaceuticals [30, 128], social values [129], (older) well-established pharmaceuticals whose benefit–risk balance may have changed [130] and drug or dose comparisons [131].

A second group of framework studies was developed for specific purposes, exploring one of the following areas: to apply MCDA in clinical decision-making when clinical consensus regarding clinical criteria and weights is required [132]; to inform radiology guideline development [133] and disease management programs [134]; to select an optimal nanomaterial for a given medical application [135]; to inform drug ranking for formulary listing in low-income countries [136]; to propose MCDA in HTA for middle-income countries [137] and to critically reflect upon that proposal [138]; to inform a wide range of decisions, e.g. approval, guidelines and pricing/reimbursement [43]; to evaluate statins for primary prevention of cardiovascular disease [42]; and to select criteria to be used in HTA [139, 140].

A third group of framework studies focused on principles, methods and processes, aiming to (a) integrate MCDA and accountability for reasonableness principles to support HTA agencies and promote legitimacy in reimbursement recommendations [28]; (b) account for good practice considerations to implement MCDA in healthcare (not specific to HTA) [19]; (c) support deliberative processes behind decision-making by dealing with data analysis, synthesis and validation by experts in general [141], and for rare diseases [142] in particular; and (d) prioritize health technologies based on value for money concepts and under a limited budget [143].

(d) Structuring criteria studies

Studies structuring criteria aimed at informing criteria selection in the following health technology contexts: diagnostic imaging evaluation [144]; vaccine evaluation [145–147]; evaluation of off-patent pharmaceuticals in emerging markets [148]; evaluation of pharmaceuticals at the development stage [149]; pharmaceutical pricing and reimbursement [150]; orphan drug reimbursement [151]; disinvestment and resource allocation processes [152]; hospital investment [153]; hospital decision-making [154]; value definitions [155]; priority setting [156, 157]; criteria beyond cost-effectiveness analysis [158]; defining equity [159]; and physiotherapy implementation [160].

Most structuring criteria studies (12) conducted literature reviews to inform criteria selection, while some of them combined reviews with surveys, interviews or workshops. Six studies used specific tools to structure or rank the criteria, namely direct scoring [148], the AHP [154], a discrete choice experiment [156], a design comparing technologies in clinical scenarios [144], predefined scales [158] and ELECTRE III [with ELECTRE standing for ELimination Et Choix Traduisant la REalité (ELimination and Choice Expressing REality)] [150].

Other reviewed studies with a main focus other than structuring criteria also devoted substantial work to structuring criteria in the following contexts: assessment of medical innovation [139], drugs [129] and new medicines [30], for setting up local HTA evaluation processes [140], for evaluating disease management programs [134], and rare disease interventions across disease areas [142].

(e) Modelling approach studies

Chen [161] developed an approach to deal with imprecise judgments. Broekhuizen and colleagues [162, 163] and Wen et al. [164] researched how to model uncertainty in patient preferences and/or clinical outcomes using MCDA combined with Monte Carlo simulation. Three studies made use of the stochastic multi-criteria acceptability analysis (SMAA) approach to explore what can be concluded when limited or no preference information exists and a data-driven approach is used: one explored mixed treatment comparison for evidence synthesis within SMAA [165]; a second proposed a stochastic multi-criteria discriminatory method based on SMAA to describe the likelihood of one treatment performing better than alternatives [166]; and a third presented the net monetary benefit framework as a special case of SMAA [167].
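The core SMAA computation is easy to sketch: sample weights from the region consistent with whatever preference information exists (here, none, so the whole simplex), compute overall values per draw, and tally how often each alternative attains each rank. A minimal sketch, with an illustrative value matrix not taken from any of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Partial values on [0, 1]: rows = technologies, columns = criteria (illustrative).
V = np.array([
    [0.9, 0.2, 0.6],
    [0.5, 0.8, 0.4],
    [0.3, 0.6, 0.9],
])
n_alt, n_crit = V.shape
n_iter = 100_000

# With no preference information, sample weights uniformly from the simplex
# (w_i >= 0, sum w_i = 1) by normalising i.i.d. exponentials.
w = rng.exponential(size=(n_iter, n_crit))
w /= w.sum(axis=1, keepdims=True)

totals = w @ V.T                                    # overall value per draw
ranks = (-totals).argsort(axis=1).argsort(axis=1)   # 0 = best rank per draw

# Rank acceptability indices: acceptability[r, a] is the share of weight
# draws in which alternative a attains rank r.
acceptability = np.stack([(ranks == r).mean(axis=0) for r in range(n_alt)])
print(acceptability)
```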

(f) Literature review studies

There are three studies in this category. The first reviewed the approaches adopted in 40 MCDA studies, analysing the objective of each study and lessons learned [21]. The second assessed 22 MCDA studies analysing costs and benefits at different stages of medical innovation, reviewing the type of policy application, the methodology, and the criteria used and their respective definitions [22]. The third reviewed ten MCDA studies that involved patients in model building [36].

Methodological quality of model applications

The application of the PROACTIVE-S approach to assess the extent to which the 46 model application studies followed good methodological practice is shown in Table 2, where studies are classified as fully, partially or not complying with the PROACTIVE-S sub-steps (as defined in Table 1); Fig. 3 summarises results from the analyses.

Table 2.

Adherence of 46 model application studies to the PROACTIVE-S framework and “the extent to which the study follows good practice considerations”


Fig. 3.

Fig. 3

Percentage of studies fully or at least partly following good practice considerations in the PROACTIVE-S sub-steps. Note: Abbreviations in use are defined in the last column of Table 1.

Source: the authors from the literature

There are three broad areas of interest based on the data reported in Table 2 and the summary data in Fig. 3. First, the lowest levels of adherence to PROACTIVE-S’s good methodological practice considerations (≤ 50% of studies fully or partly complying) are found in the value measurement sub-steps and in behavioural issues, namely: (fully) using methods for model building that comply with multi-attribute decision theory (V3) (26% of studies); defining mechanisms to detect and correct inconsistencies (V4) (25% of studies); using procedures to model value functions (V5) (24% of studies); using weighting procedures that utilize weighting references and explain their rationale (V6) (35% of studies); and taking into consideration behavioural aspects (S2) (13% of studies).

Second, low levels of adherence (fully compliant at ≤ 50%, and fully or partly compliant at > 50%) are found in the following sub-steps: (fully) focusing on the objectives to be achieved (O1) (41% of studies); assessing relevant consequences in adequate attributes that comply with required properties and organising options’ consequences into a performance matrix (C1) (39% of studies); discussing data sources and issues and consequences’ uncertainty (C2) (41% of studies); discussing model respect for exhaustiveness, non-redundancy and non-overlap properties in additive models or relevant properties for other models (V1) (28% of studies); discussing preference independence conditions and presenting the underlying model structure (V2) (30% of studies); making explicit the model assumptions and the relevant uncertainties for the evaluation context (E1) (22% of studies); testing the consequences of model assumptions and uncertainty (E2) (46% of studies); and enabling replicability by making explicit model building participants, participatory scope and format, participatory timeline, and protocols of questioning (S1) (50% of studies).

Third, intermediate levels of adherence—translating into studies fully compliant with good methodological practice at > 50% but at a large distance from 100%—were found for the following sub-steps: studies fully defining and discussing the relevant health technologies to be evaluated and linked decisions (A1) (57% of studies); discussing model validation and requisiteness and questioning model results (E3) (54% of studies); and promoting participants’ reflection and iterativeness in model development while also promoting consensus and/or reflecting on ways to combine participants’ assessments (S3) (57% of studies).

Finally, higher but not full levels of adherence to good methodological practice were found in the following areas: studies fully discussing the perspectives of relevant stakeholders and key players, clarifying the perspective of the problem owner and discussing whose views should be considered in model building (R1) (70% of studies); distinguishing between evidence, options’ performance and value information (I1) (76% of studies); using computer technology to display results and motivate discussion (E4) (74% of studies).

Reported limitations and challenges

The most common limitations and challenges reported in the 129 reviewed studies are presented in Table 3. These were clustered by relatedness, leading to 12 concerns raised by at least four studies each. These concerns relate to: (a) evidence and data processing within model building and use (47 studies, 36%); (b) differences in value systems and the influence of participants’ composition and numbers on evaluation (46 studies, 36%); (c) participants’ difficulties in understanding model building tasks and results (33 studies, 26%); (d) model developers having to trade off methodological complexity against the time and cost resources available for model development (21 studies, 16%); (e) the selection of criteria and the construction of attributes in model structuring (20 studies, 15.5%); (f) the modelling of uncertainty (19 studies, 15%); (g) addressing model additivity issues (17 studies, 13%); (h) the selection of methods (17 studies, 13%); (i) promoting consensus and dealing with the aggregation of participants’ answers (12 studies, 9%); (j) attempting to create universal/general evaluation models (9 studies, 7%); (k) fostering MCDA training and expertise (7 studies, 5%); and (l) generating model scores with a meaningful interpretation (4 studies, 3%).

Table 3.

Identification of challenges and limitations in published MCDA studies (aggregated into clusters; the descending order reflects the number of studies raising a particular challenge or limitation).

Source: The authors from the literature

Cluster # Summary (number of studies) Clustered reported limitations and challenges Articles expressing limitations and challenges
#1 Evidence and Data related Difficulties (TECHNICAL) (47 studies) Multiple difficulties exist regarding the use of evidence and data in evaluation processes: information from multiple studies may be complex, not fully comparable (e.g. data often derived from separate trials differing in populations, treatment durations and calculated in different units), hard to capture by checklists or scales, have questionable quality, and participants may be overwhelmed by data (for instance numerous aggregations of data from non-standardised and often non-computerized databases). There may be lack of evidence and lack of good quality data for criteria deemed as relevant by evaluators, and there are challenges to synthesize relevant information. Participants may have a sense of information loss along evaluation processes. There is a lack of consensus about quantities such as ‘quality of life’ or ‘economic value’ of a healthy individual, which translate into evaluation difficulties. It is difficult to acquire and interpret data across heterogeneous health technologies [18, 19, 21, 24–28, 31, 42, 60, 63, 69, 74, 80, 82–84, 87–89, 91, 94, 97, 98, 102, 103, 112, 113, 126, 130–132, 135, 137, 140, 142, 143, 151–153, 155, 158–160, 164, 165]
#2 Value System Differences and Participant Selection issues (SOCIAL) (46 studies) There are variations in experts’ and stakeholders’ views and in the value systems of countries/regions/health systems. Value systems can vary over time and in response to new evidence. In multiple contexts evaluations rely on the views of members of a small panel/committee, with resulting evaluations possibly being influenced by participants’ characteristics and not being representative. In contexts where representativeness is important, it is not clear whose views should be considered. There are limits to involving a large number of participants in a face-to-face setting. MCDA studies have involved much smaller numbers of participants than large patient preference studies [19, 21, 24, 26, 27, 31, 33, 36, 57, 59, 60, 62, 63, 65, 66, 72, 73, 75, 78, 80, 82, 84, 97–99, 101–103, 106, 109, 111, 113, 116, 121, 123, 126, 129, 131, 132, 139, 142, 144, 147, 148, 153, 162]
#3 Participant Difficulties in Evaluation Processes (SOCIAL) (33 studies) Participants face difficulties in interpreting data or in understanding evaluation processes; they also face cognitive difficulties in providing judgments (for instance using swing weighting, comparing mild and serious events, understanding orders of magnitude, interpreting weighting coefficients). Evaluation judgments may be frame-dependent (e.g. influenced by the method in use), with multiple heuristics and biases applying; for instance, judgments may be prone to strategic behaviour and vested interests and may be critically influenced by some participants; languages and translations may influence evaluations; participants typically have distinct levels of understanding; and weighting is influenced by ranges, with participants shying away from extremes [18, 19, 21, 24, 26, 28, 30, 31, 33, 36, 43, 57–60, 68, 72, 73, 75–78, 96, 98, 99, 104, 106, 116, 144, 149–151, 155]
#4 Balancing Methodological Complexity and Resources (TECHNICAL) (21 studies) There is methodological complexity in using MCDA in HTA and a trade-off between methodological complexity and MCDA resources (including costs and time for model development). Standards and requirements for MCDA modelling may limit flexibility, adaptability, and timeliness. Many MCDA models are simplistic, with the choice of simple, intuitive, easy to use techniques, even if there is a compromise on rigor; often only partial information is requested from experts because of cost and time. Analysts are faced with the trade-off between ensuring an exhaustive set of criteria and the time and cognitive effort associated with using more criteria. Time is needed for evaluators to get acquainted with MCDA processes, and there may be participant fatigue. A significant amount of work is involved in reporting a MCDA model to evaluate health technologies [19, 24, 27, 28, 30, 33, 57, 59, 60, 69, 74, 97, 99, 116, 119, 127, 129, 140, 149, 151, 162]
#5 Criteria Selection and Attribute building Difficulties (TECHNICAL) (20 studies) The definition of evaluation criteria and attributes is a long, difficult and subjective process, further complicated by the variability of HTA terminology adopted in the hospital context. Some criteria, such as equity, are difficult to operationalize and the use of attributes is open to subjective interpretation. There is a lack of guidelines on the number of criteria, and the structure of the evaluation model can become too extensive if all criteria need to be taken into account, leading to cognitive burden and time-consuming procedures. Several attributes can be chosen for a criterion and there are difficulties in defining references for those attributes. Work is needed to advise on how to estimate baseline effects. Not all aspects to be accounted for are quantifiable and it may not be possible to incorporate them into an MCDA model (e.g. context issues related to system capacity and appropriate use of an intervention) [21, 26, 30, 59, 61, 63, 73, 78, 82, 89, 123, 129, 130, 132, 134, 140, 149, 151, 157, 165]
#6 Uncertainty Modelling Needs (TECHNICAL) (19 studies) Health technology assessment entails uncertainty from multiple sources, related to scoring and weighting methods, criteria choice and with attributes in use (such as some based on point systems), as well as with evaluation judgments. Evaluators may not be able to give exact preference information on weights. Several modelling options, such as the choice of time horizon for costs and benefits, can influence evaluation [18, 19, 21, 24, 25, 33, 68, 75, 80, 93, 98, 113, 121, 122, 126, 143, 163, 164, 166]
#7 Model Additivity Issues (TECHNICAL) (17 studies) There are multiple issues related to the use of an additive model—for instance, it cannot be used to deal with thresholds (e.g. a threshold of incidence of adverse events above which the drug is considered unacceptable), as one cannot use trade-offs with it. There is potential for criteria to be neither exhaustive nor mutually exclusive, there may be overlaps and double counting (e.g. some degree of overlap is inherent in many of the endpoints), and one needs to deal with interdependencies. More complex methods are known, but their adoption may imply that the elicitation questions become too hard for evaluators [19, 21, 30, 32, 43, 61, 64, 77, 82, 93, 116, 121, 123, 133, 145, 151, 162]
#8 Methods’ Selection Issues (TECHNICAL) (17 studies) There is no consensus about the best framework or the best weighting method, and methods are not standardised, which raises validity issues. The selection of method may introduce bias, and inadequate weighting practices are recognised in the literature. There is generally no gold standard against which to compare results, and results have not been replicated by independent third parties. There is a sense of arbitrariness in the implementation of an MCDA approach. Acknowledging the existence of different schools of thought, the choice of an appropriate weighting technique is essential, as is clarity on the circumstances under which a specific technique should be used [18, 26, 33, 43, 58, 68, 69, 98, 121, 122, 126, 128, 134, 146, 148]
#9 Consensus Promotion and Aggregation of Participant Answers (SOCIAL) (12 studies) Despite the importance of promoting consensus, consensus agreement varies across studies and needs to be both accommodated and reported. There are also issues on how to properly combine individual judgments and to assess consensus [18, 24, 26, 30, 99, 113, 116, 121, 131, 132, 142, 153]
#10 Introduce Flexibility Features for Universal/General Evaluation Models (TECHNICAL) (9 studies) Despite the ambition of building universal (or general) models to compare distinct technologies across diseases and therapeutic areas, there are conceptual and methodological difficulties in developing such models. Different models tend to be built for different contexts [24, 36, 59, 101, 103, 106, 121, 144, 155]
#11 MCDA Training and Expertise Needs (TECHNICAL) (7 studies) There is a lack of familiarity with MCDA techniques and, consequently, there is a need for training staff for MCDA implementation, as well as a need for participant training (e.g. patient training). Training requires time and resources [36, 43, 56, 84, 97, 101, 111, 116, 123, 136, 137]
#12 Model Scores Meaningfulness Issues (TECHNICAL) (4 studies) There are difficulties in interpreting model outputs and in understanding the meaning of these outputs, which need to be tested and validated. Scores are relative and produced in an interval scale, and thus limit the usefulness of a cost-value ratio; they also do not provide information about the absolute effectiveness, utilities, or absolute costs in monetary units [21, 36, 75, 93]

Discussion

The objective of this paper was to review the existing literature on MCDA applications in HTA using aggregation approaches, to identify the scope of research published to date, to develop an understanding of the challenges and limitations reported in published research, and to assess the methodological quality of studies whose main focus was to develop multidimensional models to evaluate health technologies. Several messages can be taken from this review, including an understanding of how the “MCDA in HTA” literature needs to evolve to address the limitations and challenges identified in the 129 studies.

Key messages

The systematic analysis of MCDA studies in the context of HTA has yielded seven key messages related to trends, the direction of published research to date and a range of methodological issues that need to be addressed in future research. These messages are discussed below.

Regarding trends in the application of MCDA in HTA studies, one observes strong growth in published research, with most studies developed in a small number of countries (UK, Netherlands, Canada, US, Germany and Italy) and disseminated through health (mainly non-clinical) journals. Model application studies are far from covering all types of health technologies, decisions and decision contexts: evaluation models have been built mostly for pharmaceuticals and for general health technologies and interventions (although general technologies or interventions are restricted to one area, notably public health); most studies aimed to inform the selection of the best technology (with some looking into priority setting and resource allocation).

The majority of studies have developed experimental models in applied settings; the focus has been on discussing adoption, development and implementation issues and on developing frameworks for a wide range of contexts, thus showing the emerging and exploratory nature of research in the area. Studies discussing MCDA aspects have raised methodological issues and alerted to concerns about the methodological robustness of MCDA in the context of HTA, but have also underlined the high potential of, and positive experience with, MCDA exploration and adoption. Framework studies outlined MCDA processes for multiple technologies and evaluation purposes, and set out principles, methods and processes for MCDA that address specific aspects and contexts.

The majority of studies made use of participatory approaches that relied on workshop sessions, but other non-face-to-face and web-based processes have also been explored, demonstrating an interest in overcoming both the limitations of relying on a small number of individuals who meet face-to-face and the associated cost and time constraints.

Concerning methods in use, in light of the multicriteria value measurement literature, the fact that most studies did not model value functions and that AHP and point scaling were the two most commonly selected weighting procedures raises methodological issues (discussed below in the relevant section). Although issues concerning methodological quality have been raised in other HTA areas [168], our study findings provide evidence that more research, methodological quality improvements and more models developed in realistic settings are required. This is compatible with what others have argued in the literature [19, 32, 43, 121].

Regarding the analysis of the modelling application studies’ methodological quality in light of the PROACTIVE-S approach, results suggest that action is needed to improve the methodological quality of MCDA in HTA studies. More than 50% of the studies had a number of shortcomings: they did not follow good practice regarding value measurement sub-steps, did not focus on objectives (not adopting a value-focused thinking perspective [34]), did not adopt best-practice tools in building attributes to analyse health technology impact, did not address uncertainty in technology impact, did not model assumptions explicitly, did not detail social processes, and did not reflect upon behavioural issues. In general, the lowest methodological quality was found in the value measurement sub-steps: most studies expressed few concerns with model properties, rarely dealt with preference independence or with the underlying evaluation model structure, used methods that do not respect the theoretical foundations of multi-attribute value theory (mostly due to the use of the AHP technique, which is prone to rank reversal [169, 170] and can violate a condition of order preservation [171], and due to the use of point systems that do not consider weighting references across criteria), did not address judgmental inconsistencies in model building, did not use proper procedures to measure partial value, and did not use adequate weighting procedures.

While authors may have conducted some methodological analyses without reporting them in writing, the overall results suggest that many model applications may not have been properly built and validated and, as a result, may not have led to appropriate recommendations for decision-making. Detailed guidelines should be developed so that sound procedures and tools are adopted in model building. To ensure adequate methodological quality, published studies should include detailed methodological information so that they can be thoroughly analysed—for instance, studies should detail the social processes in use, for accuracy and for replicability, which may not be compatible with the article size limitations defined by health journals. Some evidence points towards methodological issues not being identified during the article reviewing process: for instance, all studies presenting the formulation of an additive model should have made explicit the weighting references in use. These references are needed to understand model results in light of the use of interval scales, and weighting coefficients can only be interpreted together with the references in use, as changes in references require weights to be recalculated. One study repeatedly described the use of a ‘linear addictive equation’ (sic) [127]. Several studies have made use of similar modelling approaches, advocating methods that do not necessarily follow good practice guidance.
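To make the point about weighting references concrete, consider the standard additive value model in generic multi-attribute value theory notation (an illustration, not a formulation taken from any reviewed study):

$$V(a) = \sum_{i=1}^{n} w_i\, v_i(a), \qquad \sum_{i=1}^{n} w_i = 1, \quad w_i > 0,$$

where each partial value function $v_i$ is an interval scale anchored at two reference levels, e.g. $v_i(\text{lower reference}) = 0$ and $v_i(\text{upper reference}) = 100$. Each weight $w_i$ measures the value of the swing between those two references, so the weights have no meaning independently of them: if, say, the upper reference of a survival criterion is moved from 12 to 24 months, the swing being weighted changes and the $w_i$ must be re-elicited or recalculated for the overall scores $V(a)$ to remain interpretable.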

Taking stock of the challenges and limitations identified in the reviewed studies, several challenges were identified by a very large number of studies. More than a quarter of the studies (a) raised questions regarding the use of health technology evidence and data in the evaluation process, (b) made explicit concerns regarding differences and variations in value systems across health stakeholders and contexts (expressing concerns about models reflecting the views of a small number of individuals and being influenced by the choice of individuals, such as committee members), and (c) discussed participants’ difficulties in model development and/or use. Given the scale of such concerns, it is imperative that these aspects be addressed in future research. A discussion of how the “MCDA for HTA” literature can be developed to address each cluster of challenges and limitations follows, while the key issues are reported in Table 3.

Addressing the challenges and advancing the “MCDA for HTA” debate

Challenge 1: evidence and data-related difficulties

Forty-seven studies reported difficulties regarding the use of evidence and data in evaluation processes, particularly in synthesising information, dealing with data non-comparability issues and handling large volumes of data (see the complete description of cluster 1 in Table 3). While these issues also apply to HTA in general, research in the “MCDA in HTA” context can help by providing guidelines to address them: synthesis formats can be developed to capture what is regarded as key data (in line with the evaluation objectives to be achieved with health technologies); guidance can explain how to address variability in technology impact assessment (e.g. by considering impact intervals and performing sensitivity analysis); and model features can be designed that accommodate data incomparability and lack of data, among other issues. Regarding this last point, flexible and non-closed models can be explored in which attributes consider not only quantitative aspects but also qualitative assessments from evaluators (for instance, committee members). Such models have been explored in other contexts, e.g. faculty evaluation [172]: there, a critical feature for model adoption was that evaluators could complement quantitative metrics (such as the number of high-quality publications and the number of citations of faculty members) with qualitative aspects (such as prizes) within a qualitative, non-closed but formal assessment, adjusting evaluation scores accordingly. That study context led to the combination of quantitative with qualitative assessments in a multiplicative model structure. Furthermore, there is scope for developing guidelines that clarify and characterise different types of uncertainties in data, and that explain clearly which procedures to undertake for each type of uncertainty, for instance departing from the work by Stewart and Durbach [173].
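A minimal sketch of this multiplicative idea follows (Python; the attribute anchors, qualitative labels and factor values are hypothetical—in applications such as [172] these elements are elicited with stakeholders rather than assumed):

    # Minimal sketch (hypothetical anchors, labels and factor values):
    # combining a metric-based partial value with a bounded qualitative
    # adjustment in a multiplicative structure.

    QUALITATIVE_FACTORS = {                  # evaluator's qualitative appreciation
        "much better than the metrics suggest": 1.2,
        "in line with the metrics": 1.0,
        "worse than the metrics suggest": 0.8,
    }

    def quantitative_value(performance, worst, best):
        """Linear partial value on a 0-100 interval scale anchored at two references."""
        return 100.0 * (performance - worst) / (best - worst)

    def adjusted_value(performance, worst, best, qualitative_label):
        """Multiply the metric-based value by the evaluator's qualitative factor."""
        return quantitative_value(performance, worst, best) * QUALITATIVE_FACTORS[qualitative_label]

    # A technology scoring 6 on an attribute anchored at 2 (worst) and 10 (best),
    # judged by the committee as "much better than the metrics suggest":
    print(adjusted_value(6, 2, 10, "much better than the metrics suggest"))  # 60.0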

Challenge 2: value systems’ differences and participants’ selection issues

Forty-six studies were concerned with the variation of value systems across experts, stakeholders and health systems and over time, as well as with MCDA models in HTA relying on the views of a small number of participants who may not represent all relevant views. While it needs to be acknowledged that value systems change over time (being also influenced by new evidence), which implies that evaluation models need to be reassessed and updated occasionally, a wide range of concepts and tools can be developed to ensure that models have the potential to reflect the perspectives of a more diverse and larger number of health stakeholders. To address this challenge, “socio-technical processes” can be designed and tested to involve a larger and more representative number of HTA stakeholders in the evaluation of health technologies; this, for instance, would avoid having to rely solely on the perspectives of a small number of evaluation committee members. Such a path has already started to be explored in other health contexts. For example, within the scope of building a population health index (based on a multi-criteria model structure) to evaluate population health across European regions, a socio-technical approach was adopted that combined non-face-to-face web-Delphi processes—to collect the views of a large number of European experts and stakeholders—with face-to-face decision conferencing with a strategic group to build the multi-criteria model (as informed by evidence and by the views collected in the web-Delphi rounds) [174, 175]. These processes can be further developed and adapted to collect health stakeholder views to inform the building of MCDA in HTA models, while considering consensus and other issues (explored further under challenge 9). Stakeholder theory and the engagement literature (discussed in [176]) can help to clarify which stakeholders to involve, with which type of involvement (which may include informing, consulting and co-deciding) and in which format. Additionally, social research studies have explored statistical concepts to inform how many participants per stakeholder group are required for representativeness [177].

If properly designed, developed and enhanced by technology, web-based processes can facilitate the collection of information from a larger number of participants at relatively low cost [178]; there may also be room to develop structured techniques to involve larger groups of people in face-to-face settings [178]. Approaches can be further developed to test whether value systems tend to change over time, following the research idea explored by Lienert et al. [179] in their study of the stability of preferences over time in wastewater infrastructure decision-making. A few studies in the review have re-tested the preferences of those participating in MCDA modelling with this aim (e.g. [56]).

Challenge 3: participant difficulties in evaluation processes

Thirty-three studies mentioned several types of participant difficulties in interpreting data, understanding evaluation processes and providing judgments; additionally, they raised related behavioural issues and biases affecting the development of MCDA models in the context of HTA. MCDA can assist by providing friendly protocols to be tested in empirical applications. Studies can incorporate behavioural research features so as to test preferred modes of questioning (taking into account behavioural issues reported in MCDA development). Some methods have been shown to be cognitively friendly in empirical settings [39]. For instance, several studies in health have used the MACBETH approach [180–182], which provides an interactive questioning procedure based on qualitative judgements, asking a decision-maker or a group only for qualitative preference judgements between two elements at a time; this addresses the cognitive uneasiness experienced by evaluators when trying to express their preference judgements numerically [54]. AHP also asks for qualitative judgments (on a ratio scale) and was used in a large number of reviewed model applications, with studies providing positive feedback regarding its friendliness for participants. While several articles reported participant difficulties in providing (quantitative) swing-weighting judgments, MACBETH enables the use of qualitative swing weighting and has been used with positive feedback in several of the reviewed model applications [61, 85, 94]. Other user-friendly and methodologically sound protocols may also be explored.

Behavioural research, informed by the behavioural literature in general [53] and, specifically, by the behavioural literature for MCDA contexts [55, 183], can be developed in MCDA for HTA, for example to compare participant preferences for modes of questioning, visual displays and methods. Bias-elimination procedures that have been used in other contexts [184] can be adapted, and it is important for facilitators to use several questioning protocols in the phase of model testing and validation: to illustrate, MACBETH qualitative swing weighting and quantitative swing weighting can be used interchangeably to explain and discuss the meaning of weighting coefficients with participants.

Training and guidelines for facilitation, making use of a wide range of existing resources [185–187], can be developed to assist those developing MCDA for HTA applications—facilitation skills can help in managing participants and in improving communication in workshop settings. Other training issues are discussed under challenge 11, below.

Challenge 4: balancing methodological complexity and resources

Twenty-one studies reported concerns related to the methodological complexity of using MCDA in HTA and the need to balance methodological complexity with cost, time and cognition in model development. A first issue is that the HTA community is not fully acquainted with MCDA concepts, methods and tools, as in most cases HTA education and training programmes do not cover MCDA, or cover it only superficially. If MCDA is to progress convincingly in HTA, these programmes will need to enhance their curricula by including MCDA topics, more MCDA courses will need to be offered, and HTA experts wishing to apply MCDA will need to collaborate closely with MCDA experts.

A second issue concerns the extent to which pragmatism is acceptable in model development, as simplification can lead to models that do not respect basic MCDA properties. Some of the reviewed studies accept model simplifications [19, 63, 74], explicitly opting to build simple attributes and to use weighting protocols that do not comply with multi-attribute decision theory. Many model simplifications may be inappropriate or invalid (as has been shown, for instance, in the case of river rehabilitation [188]); to that end, the literature and guidelines should indicate which simplifications are acceptable.

A third issue is that methodological complexity can be addressed with a stronger focus on the design of the socio-technical process [45, 46]: balancing evidence and participatory processes; balancing larger non-face-to-face interaction, to collect the views of a larger number of individuals, with smaller face-to-face participatory processes; and preparing a wide range of materials to assist participants in building technology evaluation models [189].

Finally, concerning the cost and time required for model development, there is scope for developing frameworks to produce reusable or easily adaptable models for several decision problematiques [40] and for generating templates, so that evaluators can follow good practice while balancing time and effort. Different HTA problematiques require distinct types of modelling approaches to be made available to model developers, e.g. modelling for choosing the best health technology (such as the choice of the best pharmaceutical [61]), modelling for ranking health technologies (such as ordering intervention strategies [95]), modelling for classifying technologies (as in deciding which pharmaceutical falls within a reimbursement category [57]), modelling for allocating resources (such as allocating a commissioning budget or nursing time to health programmes [92, 190]), and modelling for optimising health care processes, which are also classified as health technologies [5] (as in simultaneously defining hospital and long-term service locations, facility sizes and referral networks [191–193] in line with health system objectives).
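To illustrate the resource-allocation problematique, a minimal sketch follows (Python; programme names, value scores, costs and the budget are hypothetical, and the sketch assumes programme value scores are additive and comparable across programmes, which in practice must be ensured by the model design):

    # Minimal sketch (hypothetical data): choosing a portfolio of programmes
    # under a budget, maximising total multicriteria value. Exhaustive search
    # is adequate for a toy example of this size.
    from itertools import combinations

    programmes = {  # name: (value score on a 0-100 scale, cost in monetary units)
        "screening":      (70, 40),
        "vaccination":    (90, 55),
        "telemonitoring": (45, 20),
        "rehabilitation": (60, 35),
    }
    BUDGET = 80

    best_set, best_value = (), 0
    for r in range(1, len(programmes) + 1):
        for subset in combinations(programmes, r):
            cost = sum(programmes[p][1] for p in subset)
            value = sum(programmes[p][0] for p in subset)
            if cost <= BUDGET and value > best_value:
                best_set, best_value = subset, value

    print(best_set, best_value)  # ('vaccination', 'telemonitoring') 135

Real applications such as STAR [92] embed this kind of optimisation within a facilitated socio-technical process rather than treating it as a purely computational exercise.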

Challenge 5: criteria selection and attribute construction difficulties

Twenty studies mentioned multiple issues and a lack of guidance and support for the definition of evaluation criteria and the construction of attributes. Guidelines specifically assisting model structuring need to be developed, so as to avoid issues that should not arise if MCDA is properly used. Indeed, structuring is a key step in MCDA model development, and the results from applying the PROACTIVE-S approach suggest that researchers have not always adopted best practices or dedicated full attention to model structuring when developing model applications. If not all relevant criteria are considered, or if attributes are inadequately designed, the succeeding steps in model development may not be successful [34] and claims of subjective interpretation of attributes may arise. A wide range of tools from the problem structuring methods literature can assist problem structuring for MCDA and has generally remained unexplored in “MCDA for HTA” [194]. Clear examples of how to build and/or model attributes may be developed, following [47, 195], with special attention to qualitative and multi-dimensional attributes, which may be critical in cases of lack of data, data incomparability and preference dependence between criteria. New model structures should be researched so as to incorporate qualitative aspects within evaluations and to deal explicitly with lacking or poor-quality data (discussed in challenge 1). Existing literature explaining the pros and cons of choosing distinct reference levels within attributes—either local or global, absolute or relative levels [40, 48]—should be made clearly available to the “MCDA in HTA” research community. If a model becomes too large, hierarchical modelling techniques are also advised [196].
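As a stylised example of a constructed qualitative attribute (Python; the levels and partial values below are hypothetical—in a real application the levels would be defined with stakeholders and the values elicited, e.g. with a qualitative judgement protocol, rather than assumed):

    # Minimal sketch (hypothetical levels and values): a constructed qualitative
    # attribute for "quality of supporting evidence", with ordered levels mapped
    # to partial values on a 0-100 scale anchored at the worst and best levels.
    EVIDENCE_QUALITY = {                      # level: partial value
        "expert opinion only":            0,
        "observational studies":         35,
        "single RCT":                    70,
        "multiple RCTs / meta-analysis": 100,
    }

    def partial_value(level):
        return EVIDENCE_QUALITY[level]

    print(partial_value("single RCT"))  # 70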

Challenge 6: uncertainty modelling needs

Nineteen studies raised issues related to model choice, methods in use, imprecision or variability in technology impact, and participant judgments, which translate into different types of uncertainty. Although multiple studies have developed methods to deal with uncertainty and have clearly described its different types [19], within the decision-analysis spirit of ‘divide and conquer’ [197] there is scope for developing clear procedures on how to deal with each type of uncertainty, so that participants and evaluators are better equipped to choose which modelling pathway to follow in the presence of each uncertainty source.
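One possible way to operationalise this for imprecise weights is probabilistic sensitivity analysis in the spirit of stochastic multicriteria acceptability analysis (SMAA); the sketch below (Python, hypothetical criteria, weights and partial values) draws weights around the elicited values and records how often each technology ranks first:

    # Minimal sketch (hypothetical data): probabilistic sensitivity analysis on
    # weights. Weights are drawn from a Dirichlet distribution centred on the
    # elicited values; we record first-rank frequencies for each technology.
    import numpy as np

    rng = np.random.default_rng(0)
    elicited_weights = np.array([0.5, 0.3, 0.2])   # efficacy, safety, cost
    scores = np.array([[80, 60, 40],               # technology A partial values
                       [60, 80, 70]])              # technology B partial values

    wins = np.zeros(len(scores))
    for _ in range(10_000):
        w = rng.dirichlet(elicited_weights * 50)   # concentration controls imprecision
        wins[np.argmax(scores @ w)] += 1

    print(wins / wins.sum())   # first-rank acceptability for A and B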

Challenge 7: model additivity issues

Seventeen studies raised issues related to the appropriateness of using an additive model, namely how to deal with thresholds (related to the compensation of performance across evaluation criteria), the exhaustiveness of evaluation criteria, double counting (for instance related to the use of several endpoints), preference independence, and the potential cognitive burden of more complex methods. Clearer guidelines suggesting tests, protocols and tools may need to be developed in this area. Several modelling options may be explored to deal with non-compensability of technology performance across distinct evaluation criteria; for instance, additive evaluation models can be combined with system rules in which minimal thresholds need to be attained for the technology to be considered for evaluation (use of thresholds as in Bana e Costa et al. [196]). Concerning preference dependence, there is a need to develop tests with friendly protocols of questioning, not only for identifying such issues (as in Oliveira et al.), but also to suggest how to make use of distinct model structures in a user-friendly way. The literature already advises on how to restructure models so as to respect additivity, for instance by building constructed attributes that integrate preference-dependent dimensions [48]; and some studies in health settings have already developed (and applied) user-friendly protocols of questioning to identify preference-dependence cases and have shown how multilinear [195] and Choquet-integral-based models [198] can be built. Some studies in real settings have also explained the rationale for using multiplicative models [172]. Several of these studies have used the qualitative MACBETH protocol of questioning, which has been shown to be user-friendly [54, 181]. The structuring methods recalled in challenge 5 provide tools to avoid double counting and to ensure exhaustiveness.
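A minimal sketch of the threshold idea follows (Python; criteria, weights and the threshold value are hypothetical—the point is that the screening rule sits outside the additive aggregation, so no trade-off can compensate a failing safety level):

    # Minimal sketch (hypothetical data): an additive value model combined with
    # a non-compensatory screening rule, as suggested above. Technologies failing
    # a minimal threshold are excluded before additive aggregation is applied.

    WEIGHTS = {"efficacy": 0.5, "safety": 0.3, "convenience": 0.2}
    SAFETY_THRESHOLD = 40   # partial value below which a technology is unacceptable

    def overall_value(partial_values):
        """Return the additive value, or None if the screening rule excludes the option."""
        if partial_values["safety"] < SAFETY_THRESHOLD:
            return None                       # threshold rule: no trade-off allowed
        return sum(WEIGHTS[c] * v for c, v in partial_values.items())

    print(overall_value({"efficacy": 90, "safety": 30, "convenience": 80}))  # None
    print(overall_value({"efficacy": 70, "safety": 60, "convenience": 50}))  # 63.0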

Challenge 8: method selection issues

Seventeen studies made explicit the desire for a ‘best method and best framework’ and raised validity and replicability issues. While a single prevailing weighting method or approach in MCDA is not to be expected, there should be clarity about which methods have sound theoretical foundations and about the limits of pragmatic MCDA (discussed in challenge 4). Those developing MCDA for HTA should consider using several protocols during model development and validation (discussed in challenge 3), so as to ensure that evaluations do not rely on a single method and that participants develop a better understanding of the evaluation model and results. Behavioural research (discussed in challenge 2) may be carried out to test whether participants prefer to express judgments in specific formats and under distinct methods, and to gauge the best forms of communicating model outputs. Further procedures for model testing and validation can be developed, for example involving experimental designs that compare model evaluations with real decisions within an ex-post evaluation frame. Replication of studies, analysis of preference stability and model retests (addressed in challenge 2) can also be used.

Challenge 9: consensus promotion and aggregation of participants’ answers

Twelve studies have explicitly recognised the importance of promoting consensus. It has been observed that consensus levels vary across studies and that clarity is needed about how to combine individual judgments. Following the view that a health technology evaluation model should be requisite (based on Phillips [49]: ‘sufficient in form and content to resolve the issues at hand’), the socio-technical design of the model-building process needs to incorporate concepts and tools from group decision-making—for instance voting systems, group decision support systems, group facilitation, and group thinking modelling (multiple issues covered in Kilgour and Eden [199])—to promote collaboration, convergence and alignment in model building [45]. The combination of individual judgments is relevant both in face-to-face contexts, in which participants express preferences that need to be combined and visualised by the group, and in non-face-to-face contexts, which require the aggregation of individual answers. The choice of either format, or of their combination within a collaborative value modelling framework (such as that proposed in [200]), may depend on time, cost and participant availability. To that end, there is a need for tools and guidance on how to proceed in such contexts and on how to summarise and aggregate individual judgments or scores. Again, behavioural research can help answer which settings are more effective in promoting consensus.
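As a simple illustration of the aggregation issue (Python, hypothetical judgments; the choice between aggregation rules, or whether to aggregate at all rather than seek consensus, is precisely the open design question discussed above):

    # Minimal sketch (hypothetical data): two simple ways of aggregating
    # individual weight judgments, with renormalisation so aggregated weights
    # still sum to one, plus a crude dispersion signal of (lack of) consensus.
    import numpy as np

    individual_weights = np.array([   # one row per participant, columns are criteria
        [0.5, 0.3, 0.2],
        [0.4, 0.4, 0.2],
        [0.6, 0.2, 0.2],
    ])

    arithmetic = individual_weights.mean(axis=0)
    arithmetic /= arithmetic.sum()

    geometric = np.exp(np.log(individual_weights).mean(axis=0))
    geometric /= geometric.sum()

    spread = individual_weights.std(axis=0)   # higher spread = weaker consensus

    print(arithmetic, geometric, spread)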

Challenge 10: introduce flexibility features for universal/general evaluation models

Nine studies have raised questions about whether it is possible to build general models that can be used to compare distinct health technologies across diseases or therapeutic areas. Despite the multitude of issues related to the evaluation of distinct technologies—for instance, in comparing endpoints across diseases—it is open to “MCDA in HTA” research to introduce flexibility features in evaluation models and thereby promote their use across contexts. Such features include exploring: (a) the use of equivalence attributes (following the concept of strategic equivalence as defined in Keeney and Raiffa [16]), so that an attribute can be defined differently for different diseases while remaining comparable across them; (b) the use of absolute references within each attribute that can be translated to distinct contexts (as discussed in challenge 5); (c) the use of qualitative assessments to complement quantitative assessments (as discussed in challenges 1 and 5); and (d) the use of weighting intervals that enable the adjustment of weights to the context. The studies analysed within the review have most commonly used simple additive models, but some of these suggestions have been explored in other contexts, such as in Bana e Costa and Oliveira [172] in the context of faculty evaluation (notably, the following features were explored: qualitative assessments by evaluators, and interval weighting combined with optimisation so that each faculty member is evaluated with the combination of weighting coefficients that maximises their value score).
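A minimal sketch of feature (d) follows (Python, with hypothetical intervals and partial values, using scipy’s linear programming routine; this transposes to technologies the interval-weighting-plus-optimisation idea reported for faculty evaluation in [172]):

    # Minimal sketch (hypothetical data): interval weighting combined with
    # optimisation. For each technology, find the weights within the agreed
    # intervals (summing to one) that maximise its own overall score.
    from scipy.optimize import linprog
    import numpy as np

    lower = np.array([0.3, 0.2, 0.1])      # agreed weight intervals per criterion
    upper = np.array([0.6, 0.5, 0.3])
    partial_values = {"tech A": np.array([80, 60, 40]),
                      "tech B": np.array([55, 75, 90])}

    for name, s in partial_values.items():
        res = linprog(c=-s,                              # maximise s . w
                      A_eq=[[1, 1, 1]], b_eq=[1],        # weights sum to one
                      bounds=list(zip(lower, upper)))
        print(name, "best-case score:", -res.fun, "with weights", res.x)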

Challenge 11: MCDA training and expertise needs

Seven studies have raised explicit concerns regarding the lack of familiarity with MCDA techniques and the need for training researchers and participants. As discussed above, training and education in HTA rarely cover MCDA topics, and user-friendly materials—such as videos—explaining the scope, features and applicability of MCDA in HTA are scarce. Successful and unsuccessful cases of application and of real implementation should be communicated clearly. Additionally, MCDA in the context of HTA should develop specifically designed decision support tools [201] to enable the proper development of health technology evaluation models. Research can also explore the connection between MCDA and other evaluation techniques (for instance, Postmus et al. [202] have explored the extent to which net monetary benefit is a special case of SMAA).

Challenge 12: model scores and meaningfulness issues

Four studies have discussed issues related to the interpretation of model outputs and the meaning of model scores. These aspects relate to the use of interval scales, as multicriteria models based upon simple additive aggregation produce value scores for health technologies that need to be anchored in two reference levels—for instance 100 and 0, corresponding to the best and worst plausible performances, respectively. These references are critical not only for weighting but also for the interpretation of value scores (for instance, what does a value of zero mean?). The choice of reference levels also relates to issues discussed in challenge 5 (e.g. the use of global or local attribute scales), and several paths may be explored to enable a meaningful interpretation of value scores. First, there are modelling features that can be used in some contexts—such as the use of absolute, intrinsic and meaningful references within attributes—that promote the understanding and interpretation of model scores. Second, when two technologies are compared, following the logic of the incremental cost-effectiveness ratio (ICER), it is always possible to set the comparator at zero (as placebo is in economic evaluation). On this basis, analyses can be performed regarding added cost and added value on a common scale.
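As a stylised illustration of this second path (hypothetical numbers, not taken from any reviewed study): if technology $B$ achieves an overall value score $V_B = 70$ against a comparator $A$ with $V_A = 50$, at an added cost of €4,000, then, because value is measured on an interval scale, only the increment is meaningful:

$$\frac{\Delta C}{\Delta V} = \frac{C_B - C_A}{V_B - V_A} = \frac{4000}{70 - 50} = 200 \text{ € per value point},$$

mirroring the ICER logic while avoiding the absolute cost-value ratio $C/V$, which an interval value scale renders meaningless.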

Figure 4 displays a visual representation linking the areas in which model application studies deviated most from good methodological practice with the eight most important methodological challenges expressed in studies; it also displays suggested research topics, which may address improvements in methodological practice or respond to reported limitations and challenges more explicitly and directly. The figure shows the connections between methodological challenges and deviations from good methodological practice, and indicates that the suggested research topics have the potential both to contribute to methodological robustness and to help researchers working in the area.

Fig. 4

Interconnectedness between the MCDA modelling steps with the largest deviations from good methodological practice (top) and the eight most reported limitations and challenges in MCDA in HTA studies (bottom). Lines in the middle depict interrelations between those deviations and limitations/challenges, with the topics near the lines (in capital letters) synthesising areas relevant to developing the state of the art in MCDA for HTA (topics discussed throughout the “Discussion” section).

Source: the authors, based on the literature

Study limitations

The study is not without limitations and challenges. First, given the multiple designations and the variety of nomenclature adopted in MCDA studies relevant to HTA, the choice of search terms may have affected some of the results, and potentially relevant studies may not have been included; additionally, restricting the analysis to journal articles and book chapters published in English may also have had some impact on results. However, our search strategy was comprehensive enough to ensure that the likelihood of omitting an eligible study was small. Even so, considering the recent upward trend in the publication of relevant studies over the past 5 years, it is likely that an increasing number of new studies will be published in the near future, which we cannot control for.

Second, there seem to be differing understandings of what MCDA actually is among scientists working in this field. For instance, some studies included in the sample collected information from participants through surveys—without further interacting with participants or testing and validating models—while describing themselves as developing MCDA evaluation models. A strict view of what MCDA is could mean that these studies should be excluded. However, as these studies are insightful in many other respects—such as how to involve participants in the evaluation of health technologies and in which areas it is relevant to explore MCDA in HTA—we decided to include them. Furthermore, this decision is also coherent with the objective of analysing the methodological quality of studies in the area.

Conclusion

This study shows that the application of MCDA in HTA is a growing field, with an increasing number of studies exploring its use in multiple contexts and from distinct perspectives, embedding its concepts and methods within technology policy- and decision-making processes, and showcasing its usefulness. The results show a number of limitations and challenges to address, a need for research and guidelines to promote quality and scientific rigour in the use of MCDA in HTA, and scope for advancing robust methodologies, processes and tools to assist modellers.

Several research paths have been identified within the scope of this study as potentially addressing the identified methodological challenges. Such paths include developing specific modelling approaches to account for distinct HTA decision contexts, such as informing adoption, reimbursement and pricing decisions. As in HTA more broadly, training and education tools need to be developed and made available. To address the concerns made explicit by researchers regarding the use of evidence and data within multicriteria modelling, new studies need to explore standardised ways of synthesising quantitative evidence and data, as well as of capturing the quality of evidence in a structured format. Such synthesis formats should also be aligned with the objectives to be attained in the evaluation context. Additionally, studies in the area need to balance social with technical aspects in model development, and those interested in applying MCDA in the HTA context should learn from best practice and from the experience of those developing models in practical settings. Collaborative research involving multiple health stakeholders is needed, and new technologies with the potential to involve a larger number of perspectives, and to collect their views at lower cost, should be carefully designed and tested.

Acknowledgements

Partial financial support for Mónica Oliveira’s work in this study was provided by a grant from the European Commission, DG Research under the auspices of the IMPACT-HTA H2020 project (grant agreement number: 779312). The views represented in the paper do not necessarily reflect the views of the European Commission. We thank Carlos Bana e Costa, Ana Vieira and Aris Angelis for insightful discussions in earlier drafts of the paper; we are grateful to two anonymous referees for comments and suggestions which helped us improve the paper. All outstanding errors are our own.

Appendix A

Adopted search protocol for the systematic review of MCDA studies in HTA, with the following rule being adopted: [(“A” or “B” or “C” or “D”) and (“E” or “F” or “G” or “H”)]

A: “MCDA” OR “Multicriteria Decision Analysis” OR “Multi-criteria Decision Analysis” OR “Multi Criteria Decision Analysis” OR “Multiple Criteria Decision Analysis” OR “Multiple-Criteria Decision Analysis”

B: “Multicriteria Analysis” OR “Multi-criteria Analysis” OR “Multiple-criteria”

C: “MAVT” OR “MAUT” OR “Multiattribute Decision Theory” OR “Multi-attribute Decision Theory” OR “Multiattribute Utility Theory” OR “Multi-attribute Utility Theory” OR “Multiattribute Utility” OR “Multi-attribute Utility”

D: “Multicriteria Decision Aiding” OR “Multiple-criteria Decision-Making” OR “Multiple criteria Decision-Making” OR “Multicriteria Decision-making” OR “Multiple-characteristics decision-making” OR “MCDM”

E: “Multicriteria Resource Allocation” OR “Multiple Criteria Resource Allocation” OR “Portfolio Decision Analysis” OR ((“Multicriteria Optimization” OR “Multiple Criteria Optimization”) AND “Resource Allocation”)

F: “HTA” OR “Health Technology Assessment” OR “Health Technology Appraisal” OR (“Health” AND “Technology” AND “Evaluation”) OR (“Health” AND “Technologies” AND “Evaluation”) OR (“Benefit-risk Assessment” AND “Health”) OR (“Value-based Assessment” AND “Health”) OR (“Economic evaluation” AND “Health”)

G: (((“Medical” OR “Clinical” OR “Hospital” OR “Health”) AND (“Devices” OR “Equipment” OR “Technology”)) OR “Drugs” OR “Pharmaceutical” OR “Medicine” OR “Screening” OR “Surgical”) AND (“Benefit-risk” OR “Appraisal” OR “Valuation” OR “Assessment” OR “Value Measurement” OR “Value-based Assessment”)

H: (“Treatment” OR “Therapy” OR “Interventions” OR “Intervention”) AND (“Health” OR “Clinical” OR “Medical”) AND (“Benefit-risk” OR “Appraisal” OR “Valuation” OR “Assessment” OR “Value Measurement” OR “Value-based Assessment”)

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Velasco-Garrido M., Busse R.: Health Technology Assessment: An Introduction to Objectives, Role of Evidence, and Structure in Europe, in Policy Brief, World Health Organization and European Observatory on Health Systems and Policies, Editors. European Observatory on Health Systems and Policies: Copenhagen (2005)
  • 2.Sorenson C., Drummond M., Kanavos P.: Ensuring value for money in health care: the role of health technology assessment in the European Union. World Health Organization and European Observatory on Health Care Systems (2008)
  • 3.HTAi-INAHTA (2017) HTAglossary.net. http://htaglossary.net/Health+Technology+Assessment+%28HTA%29&highlight=health%20technology. cited 12th May 2017
  • 4.Drummond M, Tarricone R, Torbica A. Assessing the added value of health technologies: reconciling different perspectives. Value Health. 2013;16(S1):7–13. doi: 10.1016/j.jval.2012.10.007. [DOI] [PubMed] [Google Scholar]
  • 5.OECD . New health technologies. Paris: OECD Publishing; 2017. [Google Scholar]
  • 6.Sorenson C, et al. How Can the Impact of Health Technology Assessments be Enhanced?, in Health Systems and Policy Analysis Briefings, WHO Regional Office for Europe and European Observatory on Health Systems and Policies, Editor. World Health Organization and European Observatory on Health Systems and Policies (2008)
  • 7.Ciani O, et al. Linking the regulatory and reimbursement processes for medical devices: the need for integrated assessments. Health Econ. 2017;26(Suppl 1):13–29. doi: 10.1002/hec.3479. [DOI] [PubMed] [Google Scholar]
  • 8.Sampietro-Colom L, Martin J. Hospital-based health technology assessment: the next frontier. In: Sampietro-Colom L, Martin J, editors. Hospital-Based Health Technology Assessment: The Next Frontier for Health Technology Assessment. Switzerland: Springer; 2016. [Google Scholar]
  • 9.Tarricone R, Torbica A, Drummond M. Challenges in the assessment of medical devices: the MedtecHTA project. Health Econ. 2017;26(Suppl 1):5–12. doi: 10.1002/hec.3469. [DOI] [PubMed] [Google Scholar]
  • 10.Hubbard DW. How to Measure Anything: Finding the Value of Intangibles in Business. New Jersey: Wiley; 2014. p. 432. [Google Scholar]
  • 11.Devlin NJ, Sussex J. Incorporating Multiple Criteria in HTA: Methods and Processes. London: Office of Health Economics; 2011. p. 60. [Google Scholar]
  • 12.Angelis A, Lange A, Kanavos P. Using health technology assessment to assess the value of new medicines: results of a systematic review and expert consultation across eight European countries. Eur J Health Econ. 2018;19:123–152. doi: 10.1007/s10198-017-0871-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Nicod E. Why do health technology assessment coverage recommendations for the same drugs differ across settings? Applying a mixed methods framework to systematically compare orphan drug decisions in four European countries. Eur J Health Econ. 2017;18(6):715–730. doi: 10.1007/s10198-016-0823-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Simon HA. Models of Man: Social and Rational. New York: Wiley; 1957. [Google Scholar]
  • 15.Keeney RL. Decision analysis: an overview. Oper Res. 1982;30(5):803–838. doi: 10.1287/opre.30.5.803. [DOI] [PubMed] [Google Scholar]
  • 16.Keeney R, Raiffa H. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge: Wiley; 1976. [Google Scholar]
  • 17.Howard RA. Speaking of decisions: precise decision language. Decis Anal. 2004;1(2):71–78. [Google Scholar]
  • 18.Thokala P, et al. Multiple criteria decision analysis for health care decision making-an introduction: report 1 of the ISPOR MCDA emerging good practices task force. Value Health. 2016;19(1):1–13. doi: 10.1016/j.jval.2015.12.003. [DOI] [PubMed] [Google Scholar]
  • 19.Marsh K, et al. Multiple criteria decision analysis for health care decision making-emerging good practices: report 2 of the ISPOR MCDA emerging good practices task force. Value Health. 2016;19(2):125–137. doi: 10.1016/j.jval.2015.12.016. [DOI] [PubMed] [Google Scholar]
  • 20.Diaby V, Campbell K, Goeree R. Multi-criteria decision analysis (MCDA) in health care: a bibliometric analysis. Oper Res Health Care. 2013;2(1–2):20–24. [Google Scholar]
  • 21.Marsh K, et al. Assessing the value of healthcare interventions using multi-criteria decision analysis: a review of the literature. PharmacoEconomics. 2014;32(4):345–365. doi: 10.1007/s40273-014-0135-0. [DOI] [PubMed] [Google Scholar]
  • 22.Wahlster P, et al. Balancing costs and benefits at different stages of medical innovation: a systematic review of Multi-criteria decision analysis (MCDA) BMC Health Serv Res. 2015;15:262. doi: 10.1186/s12913-015-0930-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Adunlin G, Diaby V, Xiao H. Application of multicriteria decision analysis in health care: a systematic review and bibliometric analysis. Health Expect. 2015;18(6):1894–1905. doi: 10.1111/hex.12287. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Thokala P, Duenas A. Multiple criteria decision analysis for health technology assessment. Value Health. 2012;15(8):1172–1181. doi: 10.1016/j.jval.2012.06.015. [DOI] [PubMed] [Google Scholar]
  • 25.Sussex J, Towse A, Devlin N. Operationalizing value-based pricing of medicines a taxonomy of approaches. PharmacoEconomics. 2013;31(1):1–10. doi: 10.1007/s40273-012-0001-x. [DOI] [PubMed] [Google Scholar]
  • 26.Angelis A, Kanavos P. Value-based assessment of new medical technologies: towards a robust methodological framework for the application of multiple criteria decision analysis in the context of health technology assessment. PharmacoEconomics. 2016;34(5):435–446. doi: 10.1007/s40273-015-0370-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Goetghebeur MM, et al. Bridging health technology assessment (HTA) and efficient health care decision making with multicriteria decision analysis (MCDA): applying the EVIDEM framework to medicines appraisal. Med Decis Mak. 2012;32(2):376. doi: 10.1177/0272989X11416870. [DOI] [PubMed] [Google Scholar]
  • 28.Baltussen R, et al. Value assessment frameworks for HTA agencies: the organization of evidence-informed deliberative processes. Value Health. 2017;20:256–260. doi: 10.1016/j.jval.2016.11.019. [DOI] [PubMed] [Google Scholar]
  • 29.Wiseman V, et al. Using economic evidence to set healthcare priorities in low-income and lower-middle-income countries: a systematic review of methodological frameworks. Health Econ. 2016;25(S1):140–161. doi: 10.1002/hec.3299. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Angelis A, Kanavos P. Multiple criteria decision analysis (MCDA) for evaluating new medicines in health technology assessment and beyond: the advance value framework. Soc Sci Med. 2017;188:137–156. doi: 10.1016/j.socscimed.2017.06.024. [DOI] [PubMed] [Google Scholar]
  • 31.Garcia-Hernandez A. A note on the validity and reliability of multi-criteria decision analysis for the benefit-risk assessment of medicines. Drug Saf. 2015;38(11):1049–1057. doi: 10.1007/s40264-015-0344-2. [DOI] [PubMed] [Google Scholar]
  • 32.Morton A. Treacle and smallpox: two tests for multicriteria decision analysis models in health technology assessment. Value Health. 2017;20:512–515. doi: 10.1016/j.jval.2016.10.005. [DOI] [PubMed] [Google Scholar]
  • 33.Antioch KM, et al. International lessons in new methods for grading and integrating cost effectiveness evidence into clinical practice guidelines. Cost Eff Resour Alloc. 2017;15:1–15. doi: 10.1186/s12962-017-0063-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Keeney RL. Value-focused Thinking: A Path to Creative Decision making. Harvard: Harvard University Press; 1992. [Google Scholar]
  • 35.PRISMA. Preferred reporting items for systematic reviews and meta-analyses (PRISMA) (2017). http://www.prisma-statement.org/. Cited 15 October 2017
  • 36.Marsh K, et al. Amplifying each patient’s voice: a systematic review of multi-criteria decision analyses involving patients. Appl Health Econ Health Pol. 2017;15(2):155–162. doi: 10.1007/s40258-016-0299-1. [DOI] [PubMed] [Google Scholar]
  • 37.Hunink MGM, et al. Decision Making in Health and Medicine: Integrating Evidence and Values. Cambridge: Cambridge University Press; 2014. p. 446. [Google Scholar]
  • 38.Hammond JS, Keeney RL, Raiffa H. Smart Choices: A Practical Guide to Making Better Decisions. Boston: Harvard Business School Press; 1999. [Google Scholar]
  • 39.Von Winterfeldt D, Edwards W. Decision Analysis and Behavioral Research. Cambridge: University Press; 1986. [Google Scholar]
  • 40.Belton V, Stewart TJ. Multiple Criteria Decision Analysis: An Integrated Approach. US: Springer; 2002. [Google Scholar]
  • 41.Bana-e-Costa CA, Mateus R. Common critical mistakes in evaluations with multiple criteria. In: Santos A, Cardadeiro E, Verga Matos P, editors. Estudos de Homenagem ao Professor José Amado da Silva. Lisboa: Sílabas & Desafios; 2017. pp. 277–316. [Google Scholar]
  • 42.Tervonen T, et al. Applying multiple criteria decision analysis to comparative benefit-risk assessment: choosing among statins in primary prevention. Med Decis Mak. 2015;35(7):859–871. doi: 10.1177/0272989X15587005. [DOI] [PubMed] [Google Scholar]
  • 43.Muhlbacher AC, Kaczynski A. Making good decisions in healthcare with multi-criteria decision analysis: the use, current research and future development of MCDA. Appl Health Econ Health Pol. 2016;14(1):29–40. doi: 10.1007/s40258-015-0203-4. [DOI] [PubMed] [Google Scholar]
  • 44.Phillips L. Best practice for MCDA in healthcare. In: Marsh K, editor. Multi-Criteria Decision Analysis to Support Healthcare Decisions. Cham: Springer; 2017. pp. 305–323. [Google Scholar]
  • 45.Phillips LD, Bana-e-Costa CA. Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res. 2007;154(1):51–68. [Google Scholar]
  • 46.Cherns A. Principles of socio-technical design. Human Relat. 1976;29:783–792. [Google Scholar]
  • 47.Bana-e-Costa C.A., Beinat E.: Model-structuring in public decision-aiding. Operational Research working papers (LSEOR 05.79). Operational Research Group, Department of Management, London School of Economics and Political Science, London. ISBN 0753017016 (2005)
  • 48.Bana-e-Costa CA, et al. Development of reusable bid evaluation models for the Portuguese Electric Transmission Company. Dec Anal. 2008;5(1):22–42. [Google Scholar]
  • 49.Phillips L. A theory of requisite decision models. Acta Physiol (Oxf) 1984;56(1–3):29–48. [Google Scholar]
  • 50.Hughes D, et al. Recommendations for benefit–risk assessment methodologies and visual representations. Pharmacoepidemiol Drug Saf. 2016;25(3):251–262. doi: 10.1002/pds.3958. [DOI] [PubMed] [Google Scholar]
  • 51.Phillips L. Best practice for MCDA in healthcare. In: Marsh K, editor. Multi-Criteria Decision Analysis to Support Healthcare Decisions. Cham: Springer International Publishing; 2007. pp. 311–329. [Google Scholar]
  • 52.Bana-e-Costa CA, et al. A socio-technical approach for group decision support in public strategic planning: the Pernambuco PPA case. Group Decis Negot. 2014;23(1):5–29. [Google Scholar]
  • 53.Baron J. Thinking and Deciding. 4. New York: Cambridge University Press; 2008. [Google Scholar]
  • 54.Fasolo B, Bana-e-Costa CA. Tailoring value elicitation to decision makers’ numeracy and fluency: expressing value judgments in numbers or words. OMEGA. 2014;44:83–90. [Google Scholar]
  • 55.Montibeller G, von Winterfeldt D. Cognitive and motivational biases in decision and risk analysis. Risk Anal. 2015;35(7):1230–1251. doi: 10.1111/risa.12360. [DOI] [PubMed] [Google Scholar]
  • 56.Gilabert-Perramon A, et al. Drug evaluation and decision making in Catalonia: development and validation of a methodological framework based on multi-criteria decision analysis (MCDA) for orphan drugs. Int J Technol Assess Health Care. 2017;33(1):111–120. doi: 10.1017/S0266462317000149. [DOI] [PubMed] [Google Scholar]
  • 57.Iskrov G, Miteva-Katrandzhieva T, Stefanov R. Multi-criteria decision analysis for assessment and appraisal of orphan drugs. Front Public Health. 2016;4:214. doi: 10.3389/fpubh.2016.00214. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Kolasa K, et al. Potential impact of the implementation of multiple-criteria decision analysis (MCDA) on the Polish pricing and reimbursement process of orphan drugs. Orphanet J Rare Dis. 2016;11:1–12. doi: 10.1186/s13023-016-0388-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Schey C, et al. Multi-criteria decision analysis (MCDA): testing a proposed MCDA framework for orphan drugs. Orphanet J Rare Dis. 2017;12:10. doi: 10.1186/s13023-016-0555-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Sussex J, et al. A pilot study of multicriteria decision analysis for valuing orphan medicines. Value Health. 2013;16(8):1163–1169. doi: 10.1016/j.jval.2013.10.002. [DOI] [PubMed] [Google Scholar]
  • 61.Angelis A, et al. Multiple criteria decision analysis in the context of health technology assessment: a simulation exercise on metastatic colorectal cancer with multiple stakeholders in the English setting. BMC Med Inform Decis Mak. 2017;17(1):149. doi: 10.1186/s12911-017-0524-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Kwon SH, et al. Eliciting societal preferences of reimbursement decision criteria for anti cancer drugs in South Korea. Expert Rev Pharmacoecon Outcomes Res. 2017;17(4):411–419. doi: 10.1080/14737167.2017.1277144. [DOI] [PubMed] [Google Scholar]
  • 63.Wagner M, et al. Appraising the holistic value of Lenvatinib for radio-iodine refractory differentiated thyroid cancer: a multi-country study applying pragmatic MCDA. BMC Cancer. 2017;17(1):272. doi: 10.1186/s12885-017-3258-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Danner M, et al. Integrating patients’ views into health technology assessment: analytic hierarchy process (AHP) as a method to elicit patient preferences. Int J Technol Assess Health Care. 2011;27(4):369–375. doi: 10.1017/S0266462311000523. [DOI] [PubMed] [Google Scholar]
  • 65.Hummel MJM, et al. Using the analytic hierarchy process to elicit patient preferences: prioritizing multiple outcome measures of antidepressant drug treatment. Patient: Patient-Cent Outcomes Res. 2012;5(4):225–237. doi: 10.1007/BF03262495. [DOI] [PubMed] [Google Scholar]
  • 66.Hsu JC, et al. Net Clinical benefit of oral anticoagulants: a multiple criteria decision analysis. PLoS One. 2015;10(4):1–15. doi: 10.1371/journal.pone.0124806. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Tervonen T, et al. Comparison of oral anticoagulants for stroke prevention in nonvalvular atrial fibrillation: a multicriteria decision analysis. Value Health. 2017;20(10):1394–1402. doi: 10.1016/j.jval.2017.06.006. [DOI] [PubMed] [Google Scholar]
  • 68.Moore A, et al. Use of multicriteria decision analysis for assessing the benefit and risk of over-the-counter analgesics. J Pharm Pharmacol. 2017;69(10):1364–1373. doi: 10.1111/jphp.12770. [DOI] [PubMed] [Google Scholar]
  • 69.Tony M, et al. Bridging health technology assessment (HTA) with multicriteria decision analyses (MCDA): field testing of the EVIDEM framework for coverage decisions by a public payer in Canada. BMC Health Serv Res. 2011;11:329. doi: 10.1186/1472-6963-11-329. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Danner M, et al. How well can Analytic Hierarchy Process be used to elicit individual preferences? Insights from a survey in patients suffering from age-related macular degeneration. Patient: Patient-Cent Outcomes Res. 2016;9(5):481–492. doi: 10.1007/s40271-016-0179-7. [DOI] [PubMed] [Google Scholar]
  • 71.de Greef-van der Sandt I, et al. A quantitative benefit-risk assessment approach to improve decision making in drug development: Application of a multicriteria decision analysis model in the development of combination therapy for overactive bladder. Clin Pharmacol Ther. 2016;99(4):442. doi: 10.1002/cpt.271. [DOI] [PubMed] [Google Scholar]
  • 72.Felli JC, Noel RA, Cavazzoni PA. A multiattribute model for evaluating the benefit-risk profiles of treatment alternatives. Med Decis Mak. 2009;29(1):104. doi: 10.1177/0272989X08323299. [DOI] [PubMed] [Google Scholar]
  • 73.Goetghebeur MM, et al. Combining multicriteria decision analysis, ethics and health technology assessment: applying the EVIDEM decisionmaking framework to growth hormone for Turner syndrome patients. Cost Eff Resour Alloc. 2010;8:4–18. doi: 10.1186/1478-7547-8-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Goetghebeur MM, et al. Pragmatic multicriteria decision analysis (MCDA) combined with advanced pharmacoepidemiology for benefit-risk assessments of medicines adapted to the real-life constraints of regulators: development and case study. Ther Innov Regul Sci. 2016;50(5):620–631. doi: 10.1177/2168479016642812. [DOI] [PubMed] [Google Scholar]
  • 75.Hoshikawa K, Ono S. Discrepancies between multicriteria decision analysis-based ranking and intuitive ranking for pharmaceutical benefit-risk profiles in a hypothetical setting. J Clin Pharm Ther. 2017;42(1):80–86. doi: 10.1111/jcpt.12486. [DOI] [PubMed] [Google Scholar]
  • 76.Marsh K, et al. Evaluation of COPD treatments: a multicriteria decision analysis of aclidinium and tiotropium in the United States. Value Health. 2017;20(1):132–140. doi: 10.1016/j.jval.2016.08.724. [DOI] [PubMed] [Google Scholar]
  • 77.Nixon R, et al. A case study using the PrOACT-URL and BRAT frameworks for structured benefit risk assessment. Biom J. 2016;58(1):8. doi: 10.1002/bimj.201300248. [DOI] [PubMed] [Google Scholar]
  • 78.Jaramillo HEC, Goetghebeur M, Moreno-Mattar O. Testing multi-criteria decision analysis for more transparent resource-allocation decision making in Colombia. Int J Technol Assess Health Care. 2016;32(4):307. doi: 10.1017/S0266462316000350. [DOI] [PubMed] [Google Scholar]
  • 79.Kulak O, Goren HG, Supciller AA. A new multi criteria decision making approach for medical imaging systems considering risk factors. Appl Soft Comput. 2015;35:931–941. [Google Scholar]
  • 80.Ivlev I, Vacek J, Kneppo P. Multi-criteria decision analysis for supporting the selection of medical devices under uncertainty. Eur J Oper Res. 2015;247(1):216–228. [Google Scholar]
  • 81.Churilov L, et al. Multiattribute selection of acute stroke imaging software platform for extending the time for thrombolysis in emergency neurological deficits (EXTEND) clinical trial. Int J Stroke. 2013;8(3):204–210. doi: 10.1111/j.1747-4949.2012.00787.x. [DOI] [PubMed] [Google Scholar]
  • 82.Hilgerink MP, et al. Assessment of the added value of the Twente Photoacoustic Mammoscope in breast cancer diagnosis. Med Devices (Auckl) 2011;4:107–115. doi: 10.2147/MDER.S20169. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83.Diaz-Ledezma C, Parvizi J. Surgical approaches for cam femoroacetabular impingement: the use of multicriteria decision analysis. Clin Orthop Relat Res. 2013;471(8):2509–2516. doi: 10.1007/s11999-013-2934-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Hummel JM, et al. Predicting the health economic performance of new non-fusion surgery in adolescent idiopathic scoliosis. J Orthop Res. 2012;30(9):1453–1458. doi: 10.1002/jor.22104. [DOI] [PubMed] [Google Scholar]
  • 85.Hummel JM, et al. et al. Supporting the project portfolio selection decision of research and development investments by means of multi-criteria resource allocation modelling. In: Marsh K, et al.et al., editors. Multi-Criteria Decision Analysis to Support Healthcare Decisions. Cham: Springer; 2017. pp. 89–103. [Google Scholar]
86. Ritrovato M, et al. Decision-oriented health technology assessment: one step forward in supporting the decision-making process in hospitals. Value Health. 2015;18(4):505–511. doi: 10.1016/j.jval.2015.02.002.
87. Ottardi C, et al. A comparative analysis of a disposable and a reusable pedicle screw instrument kit for lumbar arthrodesis: integrating HTA and MCDA. Health Econ Rev. 2017;7(1):17. doi: 10.1186/s13561-017-0153-7.
88. Wahlster P, et al. Exploring the perspectives and preferences for HTA across German healthcare stakeholders using a multi-criteria assessment of a pulmonary heart sensor as a case study. Health Res Policy Syst. 2015;13:24. doi: 10.1186/s12961-015-0011-1.
89. Martelli N, et al. Combining multi-criteria decision analysis and mini-health technology assessment: a funding decision-support tool for medical devices in a university hospital setting. J Biomed Inform. 2016;59:201–208. doi: 10.1016/j.jbi.2015.12.002.
90. Miot J, et al. Field testing of a multicriteria decision analysis (MCDA) framework for coverage of a screening test for cervical cancer in South Africa. Cost Eff Resour Alloc. 2012;10(1):2. doi: 10.1186/1478-7547-10-2.
91. Airoldi M, Morton A. Portfolio decision analysis for population health. In: Salo A, Keisler J, Morton A, editors. Portfolio Decision Analysis: Improved Methods for Resource Allocation. New York: Springer; 2011. pp. 359–381.
92. Airoldi M, et al. STAR–people-powered prioritization: a 21st-century solution to allocation headaches. Med Decis Mak. 2014;34(8):965–975. doi: 10.1177/0272989X14546376.
93. Wilson EC, Rees J, Fordham RJ. Developing a prioritisation framework in an English Primary Care Trust. Cost Eff Resour Alloc. 2006;4:3. doi: 10.1186/1478-7547-4-3.
94. Oliveira MD, et al. Prioritizing health care interventions: a multicriteria resource allocation model to inform the choice of community care programmes. In: Tànfani E, Testi A, editors. Advanced Decision Making Methods Applied to Health Care. Milan: Springer; 2012. pp. 141–154.
95. Le Gales C, Moatti JP. Searching for consensus through multi-criteria decision analysis: assessment of screening strategies for hemoglobinopathies in southeastern France. Int J Technol Assess Health Care. 1990;6(3):430–449. doi: 10.1017/s0266462300001033.
96. Peacock SJ, et al. Priority setting in health care using multi-attribute utility theory and programme budgeting and marginal analysis (PBMA). Soc Sci Med. 2007;64(4):897–910. doi: 10.1016/j.socscimed.2006.09.029.
97. Reddy B, Thokala P, Duenas A, et al. MCDA for resource allocation at a local level: an application in the UK. In: Marsh K, et al., editors. Multi-Criteria Decision Analysis to Support Healthcare Decisions. Cham: Springer; 2017. pp. 175–198.
98. Bots PWG, Hulshof JAM. Designing multi-criteria decision analysis processes for priority setting in health policy. J Multi Criteria Decis Anal. 2000;9(1–3):56–75.
99. Youngkong S, et al. Multicriteria decision analysis for including health interventions in the universal health coverage benefit package in Thailand. Value Health. 2012;15(6):961–970. doi: 10.1016/j.jval.2012.06.006.
100. Mobinizadeh M, et al. A model for priority setting of health technology assessment: the experience of AHP-TOPSIS combination approach. DARU. 2016;24:1–12. doi: 10.1186/s40199-016-0148-7.
101. Walker S, et al. A universal framework for the benefit-risk assessment of medicines: is this the way forward? Ther Innov Regul Sci. 2015;49(1):17–25. doi: 10.1177/2168479014547421.
102. Baeten SA, et al. Incorporating equity-efficiency interactions in cost-effectiveness analysis–three approaches applied to breast cancer control. Value Health. 2010;13(5):573–579. doi: 10.1111/j.1524-4733.2010.00718.x.
103. Agapova M, et al. Applying quantitative benefit-risk analysis to aid regulatory decision making in diagnostic imaging: methods, challenges, and opportunities. Acad Radiol. 2014;21(9):1138. doi: 10.1016/j.acra.2014.05.006.
104. Tervonen T, et al. MCDA swing weighting and discrete choice experiments for elicitation of patient benefit-risk preferences: a critical assessment. Pharmacoepidemiol Drug Saf. 2017;26(12):1483–1491. doi: 10.1002/pds.4255.
105. Alonso-Coello P, et al. GRADE evidence to decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices 1: introduction. Br Med J. 2016;353:i2016. doi: 10.1136/bmj.i2016.
106. Peacock S, et al. Overcoming barriers to priority setting using interdisciplinary methods. Health Policy. 2009;92(2–3):124–132. doi: 10.1016/j.healthpol.2009.02.006.
107. Craig LE, et al. Approaches to economic evaluations of stroke rehabilitation. Int J Stroke. 2014;9(1):88–100. doi: 10.1111/ijs.12041.
108. Carrera P, Ijzerman MJ. Are current ICER thresholds outdated? Valuing medicines in the era of personalized healthcare. Expert Rev Pharmacoecon Outcomes Res. 2016;16(4):435–437. doi: 10.1080/14737167.2016.1180980.
109. Paolucci F, et al. Decision making and priority setting: the evolving path towards universal health coverage. Appl Health Econ Health Policy. 2017;15(6):697–706. doi: 10.1007/s40258-017-0349-3.
110. Simoens S. How to assess the value of medicines? Front Pharmacol. 2010;1:115. doi: 10.3389/fphar.2010.00115.
111. Simoens S. Health technologies for rare diseases: does conventional HTA still apply? Expert Rev Pharmacoecon Outcomes Res. 2014;14(3):315. doi: 10.1586/14737167.2014.906903.
112. Annemans L, et al. Recommendations from the European Working Group for Value Assessment and Funding Processes in Rare Diseases (ORPH-VAL). Orphanet J Rare Dis. 2017;12(1):50. doi: 10.1186/s13023-017-0601-9.
113. Rotter JS, Foerster D, Bridges JF. The changing role of economic evaluation in valuing medical technologies. Expert Rev Pharmacoecon Outcomes Res. 2012;12(6):711–723. doi: 10.1586/erp.12.73.
114. Mühlbacher AC, Johnson FR. Giving patients a meaningful voice in European health technology assessments: the role of health preference research. Patient: Patient Cent Outcomes Res. 2017;10(4):527–530. doi: 10.1007/s40271-017-0249-5.
115. Xie F, et al. Using health technology assessment to support evidence-based decision-making in Canada: an academic perspective. Expert Rev Pharmacoecon Outcomes Res. 2011;11(5):513–521. doi: 10.1586/erp.11.60.
116. Diaby V, et al. Multi-criteria decision analysis for health technology assessment in Canada: insights from an expert panel discussion. Expert Rev Pharmacoecon Outcomes Res. 2015;15(1):13. doi: 10.1586/14737167.2015.965155.
117. Kolasa K, Kalo Z, Zah V. The use of non-economic criteria in pricing and reimbursement decisions in Central and Eastern Europe: issues, trends and recommendations. Expert Rev Pharmacoecon Outcomes Res. 2016;16(4):483. doi: 10.1080/14737167.2016.1215917.
118. Angelis A, Kanavos P, Montibeller G. Resource allocation and priority setting in health care: a multi-criteria decision analysis problem of value? Glob Policy. 2017;8:76–83.
119. Towse A. Net clinical benefit: the art and science of jointly estimating benefits and risks of medical treatment. Value Health. 2010;13(Suppl 1):S30–S32. doi: 10.1111/j.1524-4733.2010.00753.x.
120. Sculpher M, Claxton K, Pearson SD. Developing a value framework: the need to reflect the opportunity costs of funding decisions. Value Health. 2017;20:234–239. doi: 10.1016/j.jval.2016.11.021.
121. Marsh K, et al. The use of MCDA in HTA: great potential, but more effort needed. Value Health. 2017;21(4):394–397. doi: 10.1016/j.jval.2017.10.001.
122. Garattini L, Padula A. Multiple criteria decision analysis in health technology assessment for drugs: just another illusion? Appl Health Econ Health Policy. 2017;16(1):1–4. doi: 10.1007/s40258-017-0345-7.
123. Baltussen R, et al. Multi-criteria decision analysis to prioritize health interventions: capitalizing on first experiences. Health Policy. 2010;96(3):262–264. doi: 10.1016/j.healthpol.2010.01.009.
124. Rosselli D, et al. HTA implementation in Latin American countries: comparison of current and preferred status. Value Health Reg Issues. 2017;14:20–27. doi: 10.1016/j.vhri.2017.02.004.
125. Endrei D, Molics B, Ágoston I. Multicriteria decision analysis in the reimbursement of new medical technologies: real-world experiences from Hungary. Value Health. 2014;17(4):487–489. doi: 10.1016/j.jval.2014.01.011.
126. Levitan B, Phillips LD, Walker S. Structured approaches to benefit-risk assessment: a case study and the patient perspective. Ther Innov Regul Sci. 2014;48(5):564–573. doi: 10.1177/2168479014536500.
127. Radaelli G, et al. Implementation of EUnetHTA Core Model® in Lombardia: the VTS framework. Int J Technol Assess Health Care. 2014;30(1):105–112. doi: 10.1017/S0266462313000639.
128. Liberti L, McAuslane JN, Walker S. Standardizing the benefit-risk assessment of new medicines. Pharm Med N Z. 2011;25(3):139–146.
129. Dionne F, et al. Developing a multi-criteria approach for drug reimbursement decision making: an initial step forward. J Popul Ther Clin Pharmacol. 2015;22(1):e68.
130. Hallgreen CE, et al. Benefit-risk assessment in a post-market setting: a case study integrating real-life experience into benefit-risk methodology. Pharmacoepidemiol Drug Saf. 2014;23(9):974–983. doi: 10.1002/pds.3676.
131. Sarac SB, et al. A comprehensive approach to benefit-risk assessment in drug development. Basic Clin Pharmacol Toxicol. 2012;111(1):65–72. doi: 10.1111/j.1742-7843.2012.00871.x.
132. Migliore A, et al. Is it the time to rethink clinical decision-making strategies? From a single clinical outcome evaluation to a clinical multi-criteria decision assessment (CMDA). Med Hypotheses. 2015;85:433–440. doi: 10.1016/j.mehy.2015.06.024.
133. Agapova M, et al. A proposed approach for quantitative benefit-risk assessment in diagnostic radiology guideline development: the American College of Radiology Appropriateness Criteria example. J Eval Clin Pract. 2017;23(1):128–138. doi: 10.1111/jep.12635.
134. Tsiachristas A, et al. Broader economic evaluation of disease management programs using multi-criteria decision analysis. Int J Technol Assess Health Care. 2013;29(3):301–308. doi: 10.1017/S0266462313000202.
135. Linkov I, Satterstrom FK, Corey LM. Nanotoxicology and nanomedicine: making hard decisions. Nanomed Nanotechnol Biol Med. 2008;4(2):167. doi: 10.1016/j.nano.2008.01.001.
136. Diaby V, Laurier C, Lachaine J. A proposed framework for formulary listing in low-income countries. Pharm Med N Z. 2011;25(2):71–82. doi: 10.2165/11595220-000000000-00000.
137. Danko D. Health technology assessment in middle-income countries: recommendations for a balanced assessment system. J Mark Access Health Policy. 2014. doi: 10.3402/jmahp.v2.23181.
138. Danko D, Molnar MP. Balanced assessment systems revisited. J Mark Access Health Policy. 2017;5(1):1355190. doi: 10.1080/20016689.2017.1355190.
139. Husereau D, Boucher M, Noorani H. Priority setting for health technology assessment at CADTH. Int J Technol Assess Health Care. 2010;26(3):341. doi: 10.1017/S0266462310000383.
140. Poulin P, et al. Multi-criteria development and incorporation into decision tools for health technology adoption. J Health Organ Manag. 2013;2:246. doi: 10.1108/14777261311321806.
141. Goetghebeur MM, et al. Evidence and Value: Impact on DEcisionMaking–the EVIDEM framework and potential applications. BMC Health Serv Res. 2008;8:270. doi: 10.1186/1472-6963-8-270.
142. Wagner M, et al. Can the EVIDEM framework tackle issues raised by evaluating treatments for rare diseases: analysis of issues and policies, and context-specific adaptation. Pharmacoeconomics. 2016;34(3):285–301. doi: 10.1007/s40273-015-0340-5.
143. Golan O, Hansen P. Which health technologies should be funded? A prioritization framework based explicitly on value for money. Isr J Health Policy Res. 2012;1(1):44. doi: 10.1186/2045-4015-1-44.
144. Agapova M, et al. Toward a framework for benefit-risk assessment in diagnostic imaging: identifying scenario-specific criteria. Acad Radiol. 2017;24(5):538–549. doi: 10.1016/j.acra.2017.02.002.
145. Barocchi MA, Black S, Rappuoli R. Multicriteria decision analysis and core values for enhancing vaccine-related decision-making. Sci Transl Med. 2016. doi: 10.1126/scitranslmed.aaf0756.
146. Timmis JK, Rigat F, Rappuoli R. Core values for vaccine evaluation. Vaccine. 2017;35(Suppl 1):A57–A62. doi: 10.1016/j.vaccine.2016.11.034.
147. Kaslow DC, et al. The role of vaccines and vaccine decision-making to achieve the goals of the Grand Convergence in public health. Vaccine. 2017;35(Suppl 1):A10–A15. doi: 10.1016/j.vaccine.2016.10.088.
148. Brixner D, et al. Considering multicriteria decision analysis (MCDA) simple scoring as an evidence-based HTA methodology for evaluating off-patent pharmaceuticals (OPPs) in emerging markets. Value Health Reg Issues. 2017;13:1–6. doi: 10.1016/j.vhri.2017.02.001.
149. Dunlop WCN, et al. BEACON: a summary framework to overcome potential reimbursement hurdles. Pharmacoeconomics. 2016;34(10):1051–1065. doi: 10.1007/s40273-016-0427-7.
150. Vogler S, Zimmermann N, Habimana K. Stakeholder preferences about policy objectives and measures of pharmaceutical pricing and reimbursement. Health Policy Technol. 2016;5:213–225.
151. Zelei T, et al. Systematic review on the evaluation criteria of orphan medicines in Central and Eastern European countries. Orphanet J Rare Dis. 2016;11(1):72. doi: 10.1186/s13023-016-0455-6.
152. Polisena J, et al. Case studies that illustrate disinvestment and resource allocation decision-making processes in health care: a systematic review. Int J Technol Assess Health Care. 2013;29(2):174. doi: 10.1017/S0266462313000068.
153. Wernz C, Zhang H, Phusavat K. International study of technology investment decisions at hospitals. Ind Manag Data Syst. 2014;114(4):568–582.
154. Gurtner S. Making the right decisions about new technologies: a perspective on criteria and preferences in hospitals. Health Care Manag Rev. 2014;39(3):245–254. doi: 10.1097/HMR.0b013e3182993b91.
155. Antoñanzas F, Terkola R, Postma M. The value of medicines: a crucial but vague concept. Pharmacoeconomics. 2016;34(12):1227–1239. doi: 10.1007/s40273-016-0434-8.
156. Jehu-Appiah C, et al. Balancing equity and efficiency in health priorities in Ghana: the use of multicriteria decision analysis. Value Health. 2008;11:1081–1087. doi: 10.1111/j.1524-4733.2008.00392.x.
157. Cromwell I, Peacock SJ, Mitton C. ‘Real-world’ health care priority setting using explicit decision criteria: a systematic review of the literature. BMC Health Serv Res. 2015;15:164. doi: 10.1186/s12913-015-0814-3.
158. Schmitz S, et al. Identifying and revealing the importance of decision-making criteria for health technology assessment: a retrospective analysis of reimbursement recommendations in Ireland. Pharmacoeconomics. 2016;34(9):925–937. doi: 10.1007/s40273-016-0406-z.
159. Johri M, Norheim OF. Can cost-effectiveness analysis integrate concerns for equity? Systematic review. Int J Technol Assess Health Care. 2012;28(2):125–132. doi: 10.1017/S0266462312000050.
160. Dionne F, et al. The challenge of obtaining information necessary for multi-criteria decision analysis implementation: the case of physiotherapy services in Canada. Cost Eff Resour Alloc. 2013;11(1):11–26. doi: 10.1186/1478-7547-11-11.
161. Chen TY. A signed-distance-based approach to importance assessment and multi-criteria group decision analysis based on interval type-2 fuzzy set. Knowl Inf Syst. 2013;35(1):193–231.
162. Broekhuizen H, et al. Estimating the value of medical treatments to patients using probabilistic multi criteria decision analysis. BMC Med Inform Decis Mak. 2015;15(1):102. doi: 10.1186/s12911-015-0225-8.
163. Broekhuizen H, et al. Weighing clinical evidence using patient preferences: an application of probabilistic multi-criteria decision analysis. Pharmacoeconomics. 2017;35(3):259–269. doi: 10.1007/s40273-016-0467-z.
164. Wen S, Zhang L, Yang B. Two approaches to incorporate clinical data uncertainty into multiple criteria decision analysis for benefit-risk assessment of medicinal products. Value Health. 2014;17(5):619–628. doi: 10.1016/j.jval.2014.04.008.
165. van Valkenhoef G, et al. Multicriteria benefit-risk assessment using network meta-analysis. J Clin Epidemiol. 2012;65(4):394–403. doi: 10.1016/j.jclinepi.2011.09.005.
166. Wang Y, Mai Y, He W. A quantitative approach for benefit-risk assessment using stochastic multi-criteria discriminatory method. Stat Biopharm Res. 2016;8(4):373–378.
167. Postmus D, et al. A multi-criteria decision analysis perspective on the health economic evaluation of medical interventions. Eur J Health Econ. 2014;15(7):709–716. doi: 10.1007/s10198-013-0517-9.
168. Hutter M, Rodríguez-Ibeas R, Antonanzas F. Methodological reviews of economic evaluations in health care: what do they target? Eur J Health Econ. 2014;15(8):829–840. doi: 10.1007/s10198-013-0527-7.
169. Belton V, Gear T. On a shortcoming of Saaty’s method of analytic hierarchies. Omega. 1983;11(3):228–230.
170. Dyer JS. Remarks on the analytic hierarchy process. Manag Sci. 1990;36(3):249–258.
171. Bana-e-Costa CA, Vansnick JC. A critical analysis of the eigenvalue method used to derive priorities in AHP. Eur J Oper Res. 2008;187(3):1422–1428.
172. Bana-e-Costa CA, Oliveira MD. A multicriteria decision analysis model for faculty evaluation. Omega. 2012;40:424–436.
173. Stewart TJ, Durbach I. Dealing with uncertainties in MCDA. In: Greco S, Ehrgott M, Figueira J, editors. Multiple Criteria Decision Analysis. New York: Springer; 2016. pp. 467–496.
174. Bana-e-Costa CA, et al. Using the MACBETH socio-technical approach to build the EURO-HEALTHY PHI. In: Santana P, et al., editors. Promoting Population Health and Equity in Europe: From Evidence to Policy. Coimbra: University of Coimbra; 2017. pp. 71–77.
175. Santana P, et al. Atlas of Population Health in European Union Regions. Coimbra: Imprensa da Universidade de Coimbra; 2017.
176. Gooyert VD, et al. Reviewing the role of stakeholders in operational research: a stakeholder theory perspective. Eur J Oper Res. 2017;262(2):402–410.
177. de Vaus D. Surveys in Social Research. 6th ed. London: Routledge; 2014.
178. Lewis LF. Group support systems: overview and guided tour. In: Kilgour DM, Eden C, editors. Handbook of Group Decision and Negotiation. Netherlands: Springer; 2010. pp. 249–268.
179. Lienert J, Duygan M, Zheng J. Preference stability over time with multiple elicitation methods to support wastewater infrastructure decision-making. Eur J Oper Res. 2016;253(3):746–760.
180. Bana-e-Costa CA, Vansnick J-C. MACBETH—an interactive path towards the construction of cardinal value functions. Int Trans Oper Res. 1994;1(4):489–500.
181. Bana-e-Costa CA, De Corte J-M, Vansnick J-C. MACBETH. Int J Inform Technol Decis Mak. 2012;11(2):359–387.
182. Oliveira MD, et al. Valuing health states: is the MACBETH approach useful for valuing EQ-5D-3L health states? Health Qual Life Outcomes. 2018;16:235. doi: 10.1186/s12955-018-1056-y.
183. Morton A, Fasolo B. Behavioural decision theory for multi-criteria decision analysis: a guided tour. J Oper Res Soc. 2009;60:268–275.
184. Shephard GG, Kirkwood CW. Managing the judgmental probability elicitation process: a case study of analyst/manager interaction. IEEE Trans Eng Manag. 1994;41(4):414–425.
185. Schein EH. Process Consultation Revisited: Building the Helping Relationship. Reading: Addison-Wesley; 1999.
186. Schuman S. The IAF Handbook of Group Facilitation: Best Practices from the Leading Organization in Facilitation. San Francisco: Jossey-Bass; 2005.
187. Phillips LD. Decision conferencing. In: Edwards W, Miles RF, von Winterfeldt D, editors. Advances in Decision Analysis: From Foundations to Applications. Cambridge: Cambridge University Press; 2007. pp. 375–399.
188. Langhans SD, Lienert J. Four common simplifications of multi-criteria decision analysis do not hold for river rehabilitation. PLoS ONE. 2016;11(3):e0150695. doi: 10.1371/journal.pone.0150695.
189. IST. Deliverable 6.2 of WP6: Development of the multicriteria model to evaluate Population Health. Lisbon: Instituto Superior Técnico; 2017.
190. Oliveira MD, et al. Prioritizing health care interventions: a multicriteria resource allocation model to inform the choice of community care programmes. In: Tànfani E, Testi A, editors. Advanced Decision Making Methods Applied to Health Care. Milan: Springer; 2012. pp. 141–154.
191. Oliveira MD, Bevan G. Modelling the redistribution of hospital supply to achieve equity taking account of patient’s behaviour. Health Care Manag Sci. 2006;9(1):19–30. doi: 10.1007/s10729-006-6277-7.
192. Mestre A, Oliveira MD, Barbosa-Póvoa A. Location-allocation approaches for hospital network planning under uncertainty. Eur J Oper Res. 2015;240:791–806.
193. Cardoso T, et al. Moving towards an equitable long-term care network: a multi-objective and multi-period planning approach. Omega. 2016;58:69–85.
194. Marttunen M, Lienert J, Belton V. Structuring problems for multi-criteria decision analysis in practice: a literature review of method combinations. Eur J Oper Res. 2017;263(1):1–17.
195. Rodrigues TR, et al. Modelling multicriteria value interactions with reasoning maps. Eur J Oper Res. 2017;3(1):1054–1071.
196. Bana-e-Costa CA, Carnero MC, Oliveira MD. A multi-criteria model for auditing a predictive maintenance programme. Eur J Oper Res. 2012;217(2):381–393.
197. Raiffa H. Decision Analysis: Introductory Lectures on Choices Under Uncertainty. New York: Random House; 1968.
198. Oliveira MD, Lopes DF, Bana-e-Costa CA. Improving occupational health and safety risk evaluation through decision analysis. Int Trans Oper Res. 2018;25(1):375–403.
199. Kilgour DM, Eden C. Handbook of Group Decision and Negotiation: Advances in Group Decision and Negotiation. Netherlands: Springer; 2010.
200. Vieira A, Oliveira MD, Bana-e-Costa C. Enhancing knowledge construction processes within multicriteria decision analysis: the collaborative value modelling framework. Omega. 2019.
201. Bana-e-Costa CA. Readings in Multiple Criteria Decision Aid. Heidelberg: Springer; 1990.
202. Postmus D, et al. A multi-criteria decision analysis perspective on the health economic evaluation of medical interventions. Eur J Health Econ. 2014;15:709–716. doi: 10.1007/s10198-013-0517-9.
203. French S, Maule J, Papamichail N. Decision Behaviour, Analysis and Support. Cambridge: Cambridge University Press; 2009.
