2020 Sep 3;2020(9):CD007667. doi: 10.1002/14651858.CD007667.pub3

5. Additional methods for future updates.

Types of outcome measures

We may reconsider the primary and secondary outcomes in future reviews, to include pre‐clinical markers such as 'facial emotional recognition' or additional features of AsPD as listed in DSM‐5, ICD‐10, or future iterations of these guidelines (e.g. ICD‐11).
Unit of analysis issues

Cluster‐randomised trials
Where trials use clustered randomisation, study investigators may present their results after appropriately adjusting for clustering effects (e.g. robust standard errors or hierarchical linear models). Where it is unclear whether this was done, we will contact the study investigators for further information. If appropriate adjustments were not used, we will request individual participant data and re‐analyse them using multilevel models that control for clustering. Following this, we will carry out meta‐analysis in Review Manager 5 (RevMan5; Review Manager 2014), using the generic inverse variance method (Higgins 2011a). If individual participant data are not available, we will follow the method described by Donner 2001, imputing an intra‐cluster correlation coefficient and adjusting the sample sizes accordingly. If there is insufficient information to adjust for clustering, we will enter outcome data into RevMan5 using the individual as the unit of analysis, and then use sensitivity analysis to assess the potential biasing effects of inadequately adjusted clustered trials.
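The Donner 2001 adjustment mentioned above deflates a cluster‐randomised trial's sample size by the design effect, DE = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the (imputed) intra‐cluster correlation coefficient. A minimal sketch, with an illustrative function name and an assumed ICC of 0.05 (neither is specified in the review):

```python
def effective_sample_size(n_participants, avg_cluster_size, icc):
    """Deflate a trial's sample size by the design effect
    DE = 1 + (m - 1) * ICC, per the Donner 2001 approach."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return n_participants / design_effect

# Example: 200 participants in clusters averaging 10 people,
# with an assumed (imputed) ICC of 0.05:
n_eff = effective_sample_size(200, 10, 0.05)
# DE = 1 + 9 * 0.05 = 1.45, so n_eff = 200 / 1.45 ≈ 137.9
```

The adjusted (effective) sample size, rather than the raw number randomised, would then be entered into the meta‐analysis.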
Cross‐over trials

If we are able to conduct a meta‐analysis combining the results of cross‐over trials, we will use the inverse variance methods recommended by Elbourne 2002, data permitting.
Dealing with missing data

Missing dichotomous data
We will report missing data and dropouts for each included study, together with the number of participants included in the final analysis as a proportion of all participants in each study. We will provide reasons for the missing data in the narrative summary where these are available.
Missing standard deviations
The standard deviations of the outcome measures should be reported for each group in each trial. If these are not given, we will calculate them, where possible, from standard errors, confidence intervals, t‐values, F values or P values using the method described in the Cochrane Handbook for Systematic Reviews of Interventions, section 7.7.3.3 (Higgins 2011a). If these data are not available, we will impute standard deviations using relevant data (for example, standard deviations or correlation coefficients) from other, similar studies (Follman 1992), but only if, after seeking statistical advice, doing so is deemed practical and appropriate. Given that trials in this area are often conducted with small samples, any imputations (and the assumptions behind them) are likely to have an important impact. We will therefore follow, where possible, the method suggested by Higgins 2008 for weighting studies with imputed data.
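The two most common of the Handbook conversions above are algebraically simple: SD = SE × √n, and, for a 95% confidence interval around a mean, SE = (upper − lower) / (2 × 1.96). A minimal sketch (function names are illustrative; for small samples the Handbook recommends a t‐distribution multiplier rather than 1.96):

```python
import math

def sd_from_se(se, n):
    """Standard deviation recovered from a standard error
    and the group sample size: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    """Standard deviation recovered from a 95% confidence
    interval for a mean (normal approximation)."""
    se = (upper - lower) / (2 * z)
    return se * math.sqrt(n)

sd_from_se(0.5, 100)      # -> 5.0
sd_from_ci(1.0, 3.0, 64)  # SE ≈ 0.510, SD ≈ 4.08
```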
Loss to follow up
We will report separately all data from studies where more than 50% of participants in any group were lost to follow‐up, and will exclude these from any meta‐analyses. The impact of including studies with high attrition rates (25% to 50%) will be subjected to sensitivity analysis. If inclusion of data from this group results in a substantive change in the estimate of effect of the primary outcomes, we will not add data from these studies to trials with less attrition, but will present them separately. We will assess the extent to which the results of the review could be altered by the missing data by conducting a sensitivity analysis based on consideration of 'best‐case' and 'worst‐case' scenarios (Gamble 2005). Here, the 'best‐case' scenario is that where all participants with missing outcomes in the experimental condition had good outcomes, and all those with missing outcomes in the control condition had poor outcomes; the 'worst‐case' scenario is the converse (Higgins 2011a, section 16.2.2). For example, in studies with less than a 50% dropout rate, we will consider people leaving early to have had the negative outcome, except for adverse effects such as death.
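The best‐/worst‐case recomputation described above can be sketched as follows. This is an illustration only, assuming a dichotomous outcome where the counted event is the good outcome; the function name and numbers are hypothetical:

```python
def scenario_counts(events, analysed, randomised, missing_are_events):
    """Recompute event counts for one arm, assuming every participant
    lost to follow-up either had (True) or did not have (False) the
    event of interest."""
    missing = randomised - analysed
    adjusted_events = events + (missing if missing_are_events else 0)
    return adjusted_events, randomised

# 'Best case': missing participants in the experimental arm are
# assumed to have had good outcomes, missing controls poor outcomes.
exp_events, exp_n = scenario_counts(30, 40, 50, missing_are_events=True)
ctl_events, ctl_n = scenario_counts(25, 45, 50, missing_are_events=False)
# experimental arm: 40/50 events; control arm: 25/50 events
```

Running the meta‐analysis under both scenarios and comparing the pooled estimates shows how sensitive the conclusions are to the missing data.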
Assessment of heterogeneity
We will assess the extent of between‐trial differences and the consistency of results of any meta‐analysis in three ways: first, by visual inspection of the forest plots; second, by performing the Chi2 test of heterogeneity (where a significance level less than 0.10 will be interpreted as evidence of heterogeneity); and third, by examining the I2 statistic (Higgins 2011a, section 9.5.2). The I2 statistic describes approximately the proportion of variation in point estimates due to heterogeneity rather than sampling error. We will consider I2 values less than 30% as indicating low heterogeneity, values in the range 31% to 69% as indicating moderate heterogeneity, and values greater than 70% as indicating high heterogeneity. We will attempt to identify any significant determinants of heterogeneity categorised as moderate or high.
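The I2 statistic is derived from Cochran's Q: I2 = max(0, (Q − df) / Q) × 100%, where Q is the inverse‐variance‐weighted sum of squared deviations from the pooled estimate. A minimal sketch (the function name is illustrative):

```python
def heterogeneity_stats(effects, variances):
    """Cochran's Q and the I2 statistic for a set of study
    effect estimates and their within-study variances."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity_stats([0.2, 0.5, 0.8], [0.04, 0.04, 0.04])
# Q = 4.5 on 2 df, I2 ≈ 55.6% (moderate under the thresholds above)
```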
Assessment of reporting biases

We will draw funnel plots (effect size versus standard error) to assess publication bias, if we find sufficient studies. Asymmetry of the plots may indicate publication bias, although it may also represent a true relationship between trial size and effect size. If such a relationship is identified, we will further examine the clinical diversity of the studies as a possible explanation (Egger 1997; Jakobsen 2014; Lieb 2016). If insufficient data are available to employ statistical techniques, we will use descriptive methods (such as the time elapsed between study completion and publication) to assess potential reporting bias.
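One statistical technique for funnel‐plot asymmetry, associated with the Egger 1997 citation above, regresses the standardised effect (effect/SE) on precision (1/SE); an intercept far from zero suggests asymmetry. A minimal sketch that returns only the intercept, without the significance test a full analysis would need (the function name is illustrative):

```python
import statistics

def egger_intercept(effects, ses):
    """Ordinary least-squares intercept of the Egger regression:
    standardised effect (effect / SE) on precision (1 / SE)."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1 / s for s in ses]
    mx, my = statistics.fmean(x), statistics.fmean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx

# Perfectly symmetric data (same effect at every precision)
# yields an intercept of zero:
egger_intercept([0.5, 0.5, 0.5], [0.1, 0.2, 0.4])  # -> 0.0
```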
Data synthesis

We will conduct meta‐analyses to combine comparable outcome measures across studies. In carrying out meta‐analysis, the weight given to each study is the inverse of its variance, so that the more precise estimates (from larger studies with more events) are given more weight.
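The inverse‐variance weighting described above can be sketched in a few lines for the fixed‐effect case (the function name is illustrative):

```python
import math

def pooled_estimate(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its
    standard error: each study is weighted by 1 / variance."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, se

pooled_estimate([1.0, 2.0], [1.0, 1.0])
# -> (1.5, 0.7071...): equal variances give a simple average,
#    with a smaller standard error than either study alone
```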
Where studies provide both endpoint and change data for continuous outcomes, we will perform a meta‐analysis that combines both types of data using the methods described by Da Costa 2013.
We will undertake a quantitative synthesis of the data using both fixed and random effects models. Random‐effects models will be used because studies may include somewhat different treatments or populations. Outcome measures will be grouped by length of follow‐up.
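Random‐effects pooling adds a between‐study variance (tau2) to each study's own variance before weighting. The review does not name an estimator; a common choice is DerSimonian–Laird, sketched here under that assumption:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate using the DerSimonian-Laird
    estimate of the between-study variance tau^2."""
    w = [1 / v for v in variances]
    sw = sum(w)
    pooled_fe = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - pooled_fe) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # Re-weight each study by 1 / (within-study variance + tau^2)
    w_re = [1 / (v + tau2) for v in variances]
    pooled_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled_re, tau2
```

When the studies are homogeneous (Q ≤ df), tau2 is zero and the result coincides with the fixed‐effect estimate; heterogeneity inflates tau2 and pulls the weights toward equality.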
In addition, the weighted average of the results of all the available studies will be used to provide an estimate of the effect of antiepileptic drugs on aggression and impulsiveness. Where appropriate, and if a sufficient number of studies are found, we will use regression techniques to investigate the effects of differences in study characteristics on the estimate of the treatment effects. Statistical advice will be sought before attempting meta‐regression. If meta‐regression is performed, it will be executed using a random‐effects model.
We will consider pooling outcomes reported at different time points where this does not obscure the clinical significance of the outcome being assessed.
To address the issue of multiplicity, future reviews should consider the following:
  • adjusting P values and CIs of outcomes using the method described by Jakobsen 2014;

  • adopting a hierarchy of outcome measures to select only one outcome per domain;

  • using the approaches outlined in point 5 of Table 3.2.c in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2019).

Subgroup analysis and investigation of heterogeneity

We will undertake a subgroup analysis to examine the effect on primary outcomes of:
  • comorbid diagnosis (e.g. other personality disorder, substance misuse disorder);

  • setting (inpatient; custodial; outpatient/community);

  • class of drug; and

  • inclusion of participants aged < 18 years.

Sensitivity analysis

We will undertake sensitivity analyses to investigate the robustness of the overall findings in relation to certain study characteristics. A priori sensitivity analyses are planned, data permitting, for:
  • concealment of allocation;

  • blinding of outcome assessors;

  • extent of dropouts; and

  • the potential biasing effects of inadequately adjusted clustered trials.