5. Additional methods for future updates.
Issue | Method |
Types of interventions | We will consider widening the range of interventions examined in future reviews to include concepts such as 'Motivation to Change'. |
Measures of treatment effect |
Continuous data: We will summarise change‐from‐baseline ('change score') data alongside endpoint data where these are available. Change‐from‐baseline data may be preferred to endpoint data if their distribution is less skewed, but both types may be included together in meta‐analysis when using the MD (Higgins 2011a, p 270). Where the data are insufficient for meta‐analysis, we will report the results of the trial investigators' own statistical analyses comparing treatment and control conditions, using change scores. |
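The MD referred to above is computed from per‐group summary statistics. A minimal sketch of the standard arithmetic (the function name and example values are illustrative, not from the protocol):

```python
import math

def mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Mean difference between two independent groups and its standard
    error, given per-group means, SDs and sample sizes."""
    md = m1 - m2
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return md, se

# Illustrative endpoint data: mean 10 (SD 2, n 25) vs mean 8 (SD 2, n 25)
md, se = mean_difference(10, 2, 25, 8, 2, 25)  # md = 2.0
```

The same function applies to change‐score data, provided the SD entered is the SD of the change scores rather than of the endpoints.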
Unit of analysis issues |
Cluster‐randomised trials: Where trials use clustered randomisation, study investigators may present their results after appropriately controlling for clustering effects (e.g. robust standard errors or hierarchical linear models). If it is unclear whether a cluster‐randomised trial has used appropriate controls for clustering, we will contact the study investigators for further information. If appropriate controls were not used, we will request individual participant data and re‐analyse these using multilevel models that control for clustering. Following this, we will conduct a meta‐analysis of effect sizes and standard errors in RevMan 5 (Review Manager 2014), using the generic inverse variance method (Higgins 2011a). If appropriate controls were not used and individual participant data are not available, we will seek statistical guidance from the Cochrane Methods Group and external experts as to which method to apply to the published results in an attempt to control for clustering. If there is insufficient information to control for clustering, we will enter the outcome data into RevMan 5 (Review Manager 2014), using the individual as the unit of analysis, and then conduct a sensitivity analysis to assess the potential biasing effects of inadequately controlled cluster‐randomised trials (Donner 2001). |
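One conventional approximation for entering an unadjusted cluster trial into a generic inverse variance analysis is to inflate its standard error by the square root of the design effect, DE = 1 + (m − 1) × ICC, where m is the average cluster size. A sketch, assuming an externally sourced ICC (the values below are purely illustrative):

```python
import math

def adjust_for_clustering(se, avg_cluster_size, icc):
    """Inflate an unadjusted standard error by the design effect
    DE = 1 + (m - 1) * ICC, where m is the average cluster size."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return se * math.sqrt(design_effect)

# Illustrative values: SE 0.10, 20 participants per cluster, assumed ICC 0.05
se_adj = adjust_for_clustering(0.10, 20, 0.05)  # design effect = 1.95
```

The adjusted standard error, together with the unchanged effect estimate, can then be entered into RevMan's generic inverse variance method.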
Dealing with missing data | The standard deviations of the outcome measures should be reported for each group in each trial. If these are not given, we will calculate these, where possible, from standard errors, confidence intervals, t‐values, F values or P values using the method described in the Cochrane Handbook for Systematic Reviews of Interventions, section 7.7.3.3 (Higgins 2011a). If these data are not available, we will impute standard deviations using relevant data (for example, standard deviations or correlation coefficients) from other, similar studies (Follman 1992), but only if, after seeking statistical advice, to do so is deemed practical and appropriate. Assessment will be made of the extent to which the results of the review could be altered by the missing data by, for example, a sensitivity analysis based on consideration of 'best‐case' and 'worst‐case' scenarios (Gamble 2005). Here, the 'best‐case' scenario is where all participants with missing outcomes in the experimental condition had good outcomes, and all those with missing outcomes in the control condition had poor outcomes; the 'worst‐case' scenario is the converse (Higgins 2011a, section 16.2.2). We will report data separately from studies where more than 50% of participants in any group were lost to follow‐up. Where meta‐analysis is undertaken, we will assess the impact of including studies with attrition rates greater than 50% through a sensitivity analysis. If inclusion of data from this group results in a substantive change in the estimate of effect of the primary outcomes, we will not add the data from these studies to trials with less attrition and will present them separately. Any imputation of data will be informed, where possible, by the reasons for attrition where these are available. 
We will interpret the results of any analysis based in part on imputed data with recognition that the effects of that imputation (and the assumptions on which it is based) can have considerable influence when samples are small. |
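The conversions described above follow simple algebra (Handbook section 7.7.3.3). A sketch of two common cases, with illustrative numbers:

```python
import math

def sd_from_se(se, n):
    """Per-group SD recovered from a standard error of the group mean."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    """Per-group SD recovered from a 95% confidence interval around a
    group mean (large-sample z approximation; a t quantile is more
    appropriate for small n)."""
    se = (upper - lower) / (2 * z)
    return se * math.sqrt(n)

sd_a = sd_from_se(se=2.0, n=25)                  # 10.0
sd_b = sd_from_ci(lower=6.0, upper=10.0, n=25)
```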
Assessment of reporting biases | We will draw funnel plots (effect size versus standard error) to assess small‐study effects when more than 10 studies contribute to an analysis. Asymmetry of the plots may indicate publication bias, although it may also reflect a true relationship between trial size and effect size. If such a relationship is identified, we will further examine the clinical diversity of the studies as a possible explanation (Egger 1997; Jakobsen 2014; Lieb 2016). |
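Funnel‐plot asymmetry is often quantified with Egger's regression (Egger 1997): each study's standardised effect (effect/SE) is regressed on its precision (1/SE), and an intercept far from zero suggests small‐study effects. A minimal sketch using synthetic, perfectly symmetric data (so the intercept is essentially zero):

```python
import numpy as np

def egger_intercept(effects, ses):
    """Intercept of Egger's regression: (effect / SE) ~ (1 / SE).
    A non-zero intercept indicates funnel-plot asymmetry."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses          # standardised effects
    precision = 1.0 / ses
    slope, intercept = np.polyfit(precision, z, 1)
    return intercept

# Identical true effect at every precision level -> no asymmetry
intercept = egger_intercept([0.5] * 5, [0.1, 0.2, 0.3, 0.4, 0.5])
```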
Data synthesis | For homogeneous interventions, we will group outcome measures by length of follow‐up and use the weighted average of the results of all the available studies to provide an estimate of the effect of specific psychological interventions for people with antisocial personality disorder. We will use regression techniques to investigate the effects of differences in study characteristics on the estimate of the treatment effects, and will seek statistical advice before attempting meta‐regression. If meta‐regression is performed, we will use a random‐effects model, as specified in the protocol. Where studies provide endpoint data, change data, or both for continuous outcomes, we will perform meta‐analyses that combine both data types using the methods described by Da Costa 2013. We will consider pooling outcomes reported at different time points where this does not obscure the clinical significance of the outcome being assessed. To address the issue of multiplicity, future reviews should consider the following:
|
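The 'weighted average of the results of all the available studies' is the inverse‐variance pooled estimate. A fixed‐effect sketch is below; a random‐effects model would additionally estimate the between‐study variance, for example by the DerSimonian–Laird method (the example values are illustrative):

```python
import math

def pool_inverse_variance(effects, ses):
    """Fixed-effect inverse-variance pooling: each study is weighted by
    1/SE^2, and the pooled SE is sqrt(1 / sum of weights)."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Two equally precise studies with effects 1.0 and 3.0
est, pooled_se = pool_inverse_variance([1.0, 3.0], [1.0, 1.0])  # est = 2.0
```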
Subgroup analysis and investigation of heterogeneity | We will undertake subgroup analysis to examine the effect on primary outcomes of:
|
Sensitivity analysis | We will undertake sensitivity analyses to investigate the robustness of the overall findings in relation to certain study characteristics. A priori sensitivity analyses are planned for:
|
MD = mean difference