Table 1.
| Terms | Explanation | Method of evaluation/estimation |
| --- | --- | --- |
| Transitivity assumption | The assumption of transitivity implies that the interventions and populations in the included studies are comparable with respect to characteristics that may affect the relative effects. | The evaluation rests primarily on clinical understanding of the disease, the competing interventions and the outcomes of interest. Once the data have been collected, transitivity can also be assessed statistically by comparing the distributions of potential effect modifiers across comparisons, provided enough studies (eg, at least five) are available for each comparison. |
| Consistency assumption | The assumption of consistency implies that the direct and indirect evidence are in statistical agreement for every pairwise comparison in the network. When transitivity is likely to hold, this is expected to be expressed in the data as consistency. However, the absence of statistical inconsistency is not evidence for the plausibility of the transitivity assumption. | Several methods have been suggested for the evaluation of consistency; these infer the presence or absence of statistical inconsistency using statistical tests (eg, z-test, χ2 test). For a review of the available approaches, see Donegan, 2013. The absence of statistically significant disagreement between direct and indirect estimates is not evidence for the plausibility of consistency, since these tests are often underpowered. |
| Hierarchical model approach | This model relates the relative effects observed in the studies to the respective ‘true’ underlying effects and combines the available direct and indirect evidence for every comparison via the ‘consistency equations’ (illustrated after the table). | – |
| Multivariate meta-analysis model approach | This model considers the different observed direct comparisons as different outcomes and imposes the consistency assumption by assuming a common reference arm in all studies, which might be ‘missing at random’ in some of them. | – |
| Ranking probability | The probability that a treatment is ranked at a specific position (first, second, third, etc) in comparison with the other treatments in the network. | The ranking probability is estimated as the proportion of simulations in which a treatment achieves a specific rank (ie, first, second, third, etc). It can be obtained either within a Bayesian framework (using MCMC simulations) or within a frequentist framework (using resampling methods); a worked sketch of these calculations follows the table. |
| p (best) | The probability that a treatment is the best (eg, the most effective or safest) in comparison with every other treatment in the network. | p (best) is estimated as the proportion of simulations in which a treatment is ranked first. |
| Mean/median rank | The mean/median of the distribution of the ranking probabilities (over all possible ranks) for a treatment. Lower values correspond to better treatments. | Calculated from the estimated ranking probabilities. |
| SUCRA | The surface under the cumulative ranking curve (SUCRA) expresses the effectiveness/safety of a treatment as a percentage of that of an ‘ideal’ treatment that would always be ranked first without uncertainty. | Calculated from the estimated ranking probabilities (see the sketch after the table). |
| Contribution of a study to the direct estimate | The contribution of a study to the direct estimate is the percentage of information that a specific study provides to the estimation of a direct relative effect in a standard pairwise meta-analysis. | The contribution is obtained by re-expressing the studies’ weights (eg, the inverse-variance weights) as percentages (see the formula after the table). |
| Contribution of a direct comparison to the network estimates | The contribution of a direct comparison to a network estimate is the percentage of information that a specific direct (summary) relative effect with available data in the network provides to the estimation of the relative effects in the network meta-analysis. | The network estimates are a weighted average of the available direct estimates in the network. Re-expressing these weights as percentages gives the contribution of each direct comparison to every network estimate. The contribution of a study to the network estimates is then obtained by combining the study-specific contributions (to the direct estimates) with the comparison-specific contributions (to the network estimates). |
| Loop-specific approach for inconsistency | The loop-specific approach is a ‘local’ approach that estimates inconsistency separately in every closed loop of the network. | Inconsistency is estimated as the difference between the direct and indirect estimates for one of the comparisons in the loop. A z-test is then used to assess the statistical significance of this difference (see the expression after the table). |
| Design-by-treatment model | The design-by-treatment model is an ‘inconsistency model’ for network meta-analysis that relaxes the consistency assumption and tests for the presence of inconsistency in the entire network jointly (ie, a ‘global’ test). | This model accounts for two types of inconsistency: ‘loop’ inconsistency and ‘design’ inconsistency. Loop inconsistency is the disagreement between different sources of evidence (eg, direct and indirect), while design inconsistency is the disagreement between studies with different designs (eg, two-arm vs three-arm trials). A χ2 test is used to assess the statistical significance of all possible inconsistencies in the network. |
MCMC, Markov chain Monte Carlo.
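To make the ‘consistency equations’ referred to in the table concrete, consider a simple network with treatments A, B and C. The sketch below is for exposition only; the notation $d_{XY}$ for the relative effect of Y versus X is ours, not taken from the table. Under consistency, the comparison of B and C is determined by the two comparisons involving the common reference A, and the indirect estimate and its variance follow from the two (independent) direct estimates:

$$ d_{BC} = d_{AC} - d_{AB}, \qquad \hat d^{\,\mathrm{ind}}_{BC} = \hat d^{\,\mathrm{dir}}_{AC} - \hat d^{\,\mathrm{dir}}_{AB}, \qquad \widehat{\operatorname{var}}\!\left(\hat d^{\,\mathrm{ind}}_{BC}\right) = \widehat{\operatorname{var}}\!\left(\hat d^{\,\mathrm{dir}}_{AC}\right) + \widehat{\operatorname{var}}\!\left(\hat d^{\,\mathrm{dir}}_{AB}\right). $$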
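The ranking quantities in the table (ranking probabilities, p (best), mean/median rank and SUCRA) can all be derived from the same set of simulated relative effects. The code below is a minimal sketch, not the article's implementation: the array name `effects`, its orientation (simulation cycles in rows, treatments in columns) and the assumption that larger values indicate a better treatment are illustrative choices of ours.

```python
import numpy as np

def rank_summaries(effects: np.ndarray):
    """Rank-based summaries from simulated relative effects.

    `effects` is an (n_cycles x n_treatments) array of simulated effects versus a
    common reference (eg, MCMC draws or resampled frequentist estimates), where
    larger values are assumed to indicate a better treatment.
    """
    n_cycles, n_treatments = effects.shape
    # Rank the treatments within each simulation cycle (rank 1 = best).
    ranks = (-effects).argsort(axis=1).argsort(axis=1) + 1

    # Ranking probabilities: proportion of cycles in which treatment k gets rank j.
    rank_probs = np.stack(
        [(ranks == j).mean(axis=0) for j in range(1, n_treatments + 1)], axis=1
    )

    p_best = rank_probs[:, 0]            # probability of being ranked first
    mean_rank = ranks.mean(axis=0)       # lower values = better treatments
    median_rank = np.median(ranks, axis=0)

    # SUCRA_k = sum_{j=1}^{a-1} cum_{kj} / (a - 1), with cum_{kj} = P(rank of k <= j).
    cum = rank_probs.cumsum(axis=1)
    sucra = cum[:, :-1].sum(axis=1) / (n_treatments - 1)
    return rank_probs, p_best, mean_rank, median_rank, sucra
```

For example, feeding in 10 000 posterior draws for four treatments returns a 4×4 matrix of ranking probabilities together with the four summary vectors; a treatment ranked first in every cycle would obtain a SUCRA of 1 (100%), and a treatment ranked last in every cycle would obtain 0.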
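For the contribution of a study to a direct estimate, the re-expression of weights as percentages mentioned in the table amounts to the following (a sketch for a fixed-effect inverse-variance analysis; under a random-effects model the weights would instead be $w_i = 1/(v_i + \tau^2)$):

$$ w_i = \frac{1}{v_i}, \qquad \text{contribution}_i = \frac{w_i}{\sum_{j} w_j} \times 100\%, $$

where $v_i$ is the variance of the effect estimate in study $i$.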
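Similarly, the loop-specific approach described in the table contrasts the direct and indirect estimates of the same comparison within a closed loop; assuming the two sources are independent, the inconsistency factor and its z-test take the form

$$ \mathrm{IF} = \hat d^{\,\mathrm{dir}} - \hat d^{\,\mathrm{ind}}, \qquad z = \frac{\mathrm{IF}}{\sqrt{\widehat{\operatorname{var}}\!\left(\hat d^{\,\mathrm{dir}}\right) + \widehat{\operatorname{var}}\!\left(\hat d^{\,\mathrm{ind}}\right)}}, $$

with $|z| > 1.96$ indicating statistically significant inconsistency at the 5% level.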