Eur J Health Econ. 2019 Apr 20;20(6):891–918. doi: 10.1007/s10198-019-01052-3

Table 3.

Identification of challenges and limitations in published MCDA studies (aggregated into clusters; the descending order reflects the number of studies raising a particular challenge or limitation).

Source: compiled by the authors from the literature

Columns: Cluster # and summary (number of studies); clustered reported limitations and challenges; articles expressing limitations and challenges (reference numbers in square brackets).
#1 Evidence and Data-related Difficulties (TECHNICAL) (47 studies). Multiple difficulties exist regarding the use of evidence and data in evaluation processes: information from multiple studies may be complex, not fully comparable (e.g. data often derived from separate trials that differ in populations and treatment durations and are calculated in different units), hard to capture with checklists or scales, or of questionable quality, and participants may be overwhelmed by data (for instance, numerous aggregations of data from non-standardised and often non-computerised databases). There may be a lack of evidence and of good-quality data for criteria deemed relevant by evaluators, and synthesising the relevant information is challenging. Participants may have a sense of information loss along evaluation processes. There is a lack of consensus about quantities such as the ‘quality of life’ or ‘economic value’ of a healthy individual, which translates into evaluation difficulties. It is difficult to acquire and interpret data across heterogeneous health technologies [18, 19, 21, 24–28, 31, 42, 60, 63, 69, 74, 80, 82–84, 87–89, 91, 94, 97, 98, 102, 103, 112, 113, 126, 130–132, 135, 137, 140, 142, 143, 151–153, 155, 158–160, 164, 165]
#2 Value System Differences and Participant Selection Issues (SOCIAL) (46 studies). There are variations in experts’ and stakeholders’ views and in the value systems of countries/regions/health systems. Value systems can vary over time and in response to new evidence. In multiple contexts evaluations rely on the views of members of a small panel/committee, so the resulting evaluations may be influenced by participants’ characteristics and may not be representative. In contexts where representativeness is important, it is not clear whose views should be considered. There are limits to involving a large number of participants in a face-to-face setting, and MCDA studies have involved much smaller numbers of participants than large patient-preference studies [19, 21, 24, 26, 27, 31, 33, 36, 57, 59, 60, 62, 63, 65, 66, 72, 73, 75, 78, 80, 82, 84, 97–99, 101–103, 106, 109, 111, 113, 116, 121, 123, 126, 129, 131, 132, 139, 142, 144, 147, 148, 153, 162]
#3 Participant Difficulties in Evaluation Processes (SOCIAL) (33 studies). Participants face difficulties in interpreting data or in understanding evaluation processes; they also face cognitive difficulties in providing judgments (for instance using swing weighting, comparing mild and serious events, understanding orders of magnitude, or interpreting weighting coefficients). Evaluation judgments may be frame-dependent (e.g. influenced by the method in use), with multiple heuristics and biases at play. Judgments may also be prone to strategic behaviour and vested interests, and may be critically influenced by a few participants; languages and translations may influence evaluations; participants typically have distinct levels of understanding; and weighting is influenced by the ranges used, with participants shying away from extremes [18, 19, 21, 24, 26, 28, 30, 31, 33, 36, 43, 57–60, 68, 72, 73, 75–78, 96, 98, 99, 104, 106, 116, 144, 149–151, 155]
#4 Balancing Methodological Complexity and Resources (TECHNICAL) (21 studies). There is methodological complexity in using MCDA in HTA and a trade-off between methodological complexity and MCDA resources (including costs and time for model development). Standards and requirements for MCDA modelling may limit flexibility, adaptability, and timeliness. Many MCDA models are simplistic, favouring simple, intuitive, easy-to-use techniques even at the expense of rigour; often only partial information is requested from experts because of cost and time constraints. Analysts face a trade-off between ensuring an exhaustive set of criteria and the time and cognitive effort associated with using more criteria. Time is needed for evaluators to get acquainted with MCDA processes, and there may be participant fatigue. A significant amount of work is involved in reporting an MCDA model to evaluate health technologies [19, 24, 27, 28, 30, 33, 57, 59, 60, 69, 74, 97, 99, 116, 119, 127, 129, 140, 149, 151, 162]
#5 Criteria Selection and Attribute-building Difficulties (TECHNICAL) (20 studies). The definition of evaluation criteria and attributes is a long, difficult and subjective process, further complicated by the variability of HTA terminology adopted in the hospital context. Some criteria, such as equity, are difficult to operationalise, and the use of attributes is open to subjective interpretation. There is a lack of guidance on the number of criteria; the evaluation model can become too extensive if all criteria are taken into account, leading to cognitive burden and time-consuming procedures. Several attributes can be chosen for a criterion, and there are difficulties in defining references for those attributes. Work is needed to advise on how to estimate baseline effects. Not all aspects to be accounted for are quantifiable, and it may not be possible to incorporate them into an MCDA model (e.g. context issues related to system capacity and the appropriate use of an intervention) [21, 26, 30, 59, 61, 63, 73, 78, 82, 89, 123, 129, 130, 132, 134, 140, 149, 151, 157, 165]
#6 Uncertainty Modelling Needs (TECHNICAL) (19 studies). Health technology assessment entails uncertainty from multiple sources, related to scoring and weighting methods, criteria choice and the attributes in use (such as some based on point systems), as well as to the evaluation judgments themselves. Evaluators may not be able to give exact preference information on weights. Several modelling options, such as the choice of time horizon for costs and benefits, can influence the evaluation [18, 19, 21, 24, 25, 33, 68, 75, 80, 93, 98, 113, 121, 122, 126, 143, 163, 164, 166]
#7 Model Additivity Issues (TECHNICAL) (17 studies). There are multiple issues related to the use of an additive model (a minimal sketch of the additive form is given below the table). For instance, it cannot accommodate thresholds (e.g. a threshold incidence of adverse events above which the drug is considered unacceptable), because such requirements cannot be traded off against other criteria. Criteria may be neither exhaustive nor mutually exclusive, there may be overlaps and double counting (e.g. some degree of overlap is inherent in many of the endpoints), and interdependencies need to be dealt with. More complex methods are known, but their adoption may make the elicitation questions too hard for evaluators [19, 21, 30, 32, 43, 61, 64, 77, 82, 93, 116, 121, 123, 133, 145, 151, 162]
#8 Methods’ Selection Issues (TECHNICAL) (17 studies). There is no consensus about the best framework or the best weighting method, and methods are not standardised, which raises validity issues. The selection of method may introduce bias, and inadequate weighting practices are recognised in the literature. There is generally no gold standard against which to compare results, and results have not been replicated by independent third parties. There is a sense of arbitrariness in the implementation of an MCDA approach. Given the existence of different schools of thought, identifying the appropriate weighting technique, and the circumstances under which a specific technique should be used, is essential [18, 26, 33, 43, 58, 68, 69, 98, 121, 122, 126, 128, 134, 146, 148]
#9 Consensus Promotion and Aggregation of Participant Answers (SOCIAL) (12 studies). Despite the importance of promoting consensus, consensus agreement varies across studies and needs to be both accommodated and reported. There are also issues about how to properly combine individual judgments and how to assess consensus [18, 24, 26, 30, 99, 113, 116, 121, 131, 132, 142, 153]
#10 Introduce Flexibility Features for Universal/General Evaluation Models (TECHNICAL) (9 studies). Despite the ambition of building universal (or general) models to compare distinct technologies across diseases and therapeutic areas, there are conceptual and methodological difficulties in developing such models. Different models tend to be built for different contexts [24, 36, 59, 101, 103, 106, 121, 144, 155]
#11 MCDA Training and Expertise Needs (TECHNICAL) (7 studies). There is a lack of familiarity with MCDA techniques and, consequently, a need to train staff for MCDA implementation, as well as a need for participant training (e.g. patient training). Training requires time and resources [36, 43, 56, 84, 97, 101, 111, 116, 123, 136, 137]
#12 Model Scores Meaningfulness Issues (TECHNICAL) (4 studies). There are difficulties in interpreting model outputs and in understanding their meaning, which need to be tested and validated. Scores are relative and produced on an interval scale, which limits the usefulness of a cost-value ratio; they also do not provide information about absolute effectiveness, utilities, or absolute costs in monetary units [21, 36, 75, 93]
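For readers less familiar with the additive model referred to in clusters #7 and #12, the sketch below states its standard weighted-sum form; the notation (V, w_i, v_i) is illustrative and is not drawn from the reviewed studies.

```latex
% Minimal sketch of the standard additive value model (illustrative notation).
\[
  V(a) \;=\; \sum_{i=1}^{n} w_i\, v_i(a),
  \qquad \sum_{i=1}^{n} w_i = 1, \quad w_i \ge 0,
\]
% where v_i(a) is the partial value score of technology a on criterion i
% (an interval scale, e.g. 0--100) and w_i is the weight of criterion i.
```

Because this form is fully compensatory, a poor score on one criterion can always be offset by good scores on others, so a hard threshold (such as an unacceptable incidence of adverse events) cannot be encoded through the weights; and because each v_i is a relative, interval-scale score, the aggregate V(a) carries no absolute meaning in clinical or monetary units.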