Cochrane Database Syst Rev. 2017 Jan 10;2017(1):CD010971. doi: 10.1002/14651858.CD010971.pub2
Methods planned for data analysis and reasons for non‐use
Cross‐over trials
For cross‐over trials with random allocation to period and an appropriate washout period, we will include the relevant effect estimate within the meta‐analysis, using the generic inverse variance method in Review Manager 2014.
No cross‐over psychosocial intervention trials were included.
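For illustration only (the numbers below are invented and not drawn from the review), the generic inverse variance method weights each trial's effect estimate by the reciprocal of its variance. A minimal fixed‐effect sketch in Python:

```python
import math

# Illustrative effect estimates (e.g. mean differences) and standard
# errors; these numbers are invented, not taken from any included trial.
effects = [0.40, 0.25, 0.55]
std_errors = [0.20, 0.15, 0.30]

# Generic inverse variance pooling (fixed-effect): each estimate is
# weighted by the reciprocal of its variance, 1 / SE^2.
weights = [1.0 / se ** 2 for se in std_errors]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled estimate {pooled:.3f}, "
      f"95% CI {pooled - 1.96 * pooled_se:.3f} to {pooled + 1.96 * pooled_se:.3f}")
```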
Cluster‐RCTs
Cluster‐RCTs randomise groups of people rather than individuals. For each cluster‐RCT, we will first determine whether the data incorporate sufficient controls for clustering (such as robust standard errors or hierarchical linear models). If the data do not, we will attempt to obtain an appropriate estimate of the intracluster correlation coefficient. If we cannot find an estimate in the trial report, we will request one from the trial authors. If the authors do not provide an estimate, we will, where possible, obtain one from a similar study and conduct a sensitivity analysis to determine whether the results are robust when different values are imputed. We will do this according to the procedures described in Higgins 2011c.
No cluster‐RCTs of psychosocial interventions were located.
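As an illustrative sketch of the Handbook approach described above (the function and numbers are invented for this example), an unadjusted standard error from a cluster‐RCT can be inflated by the square root of the design effect, 1 + (M − 1) × ICC, where M is the average cluster size:

```python
import math

def adjust_for_clustering(effect, se, avg_cluster_size, icc):
    """Inflate an unadjusted standard error from a cluster-RCT by the
    square root of the design effect, 1 + (M - 1) * ICC."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return effect, se * math.sqrt(design_effect)

# Invented numbers: an unadjusted estimate with average cluster size 20
# and an ICC of 0.05 imputed from a similar study.
effect, adjusted_se = adjust_for_clustering(effect=0.30, se=0.10,
                                            avg_cluster_size=20, icc=0.05)
print(f"Effect {effect:.2f}, clustering-adjusted SE {adjusted_se:.3f}")
# The adjusted estimate could then enter the generic inverse variance
# meta-analysis; repeating this with other ICC values gives the planned
# sensitivity analysis.
```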
Trials with multiple intervention groups
Trials with multiple intervention groups are a common scenario. To avoid unit‐of‐analysis errors in the meta‐analysis, we will use the following approach for any study that could contribute multiple comparisons.
  1. Intervention arms will be analysed together only if they are clinically similar. In this situation the control group will not be split; instead the intervention groups will be combined to allow a single, pair‐wise comparison in the meta‐analysis (an illustrative sketch of this combination step is given below). If the interventions are similar enough to sit in a single meta‐analysis but cannot be combined, the control group will be split so that its participants are not counted twice.

  2. If the interventions are not similar, the data will be used in separate meta‐analyses.

No trials with multiple intervention groups were located.
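The combination step referred to in point 1 can be sketched as follows (an illustrative example only, using the Cochrane Handbook formulae for pooling the sample size, mean and standard deviation of two arms; all numbers are invented):

```python
import math

def combine_arms(n1, mean1, sd1, n2, mean2, sd2):
    """Combine two clinically similar intervention arms into one group,
    pooling N, mean and SD (Cochrane Handbook formulae)."""
    n = n1 + n2
    mean = (n1 * mean1 + n2 * mean2) / n
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2
                  + (n1 * n2 / n) * (mean1 - mean2) ** 2) / (n - 1)
    return n, mean, math.sqrt(pooled_var)

# Invented data for two similar arms of a hypothetical three-arm trial.
n, mean, sd = combine_arms(n1=30, mean1=12.0, sd1=4.0,
                           n2=25, mean2=10.5, sd2=5.0)
print(f"Combined arm: n={n}, mean={mean:.2f}, SD={sd:.2f}")
```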
Publication bias
If we identify sufficient trials (at least 10), we will use outcome data to produce a funnel plot to investigate the likelihood of overt publication bias (Sutton 2000). Asymmetry of the funnel plot may indicate publication bias, although we will also explore other explanations for asymmetry, such as poor methodological quality or heterogeneity. In addition, we will look for publication bias by comparing the results of published and unpublished data.
We did not identify 10 or more trials within any particular type of psychosocial intervention.
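For illustration of the planned check (with invented data), a funnel plot displays each trial's effect estimate against its standard error, so that small, imprecise trials scatter at the bottom and asymmetry may suggest publication bias:

```python
import matplotlib.pyplot as plt

# Invented effect estimates and standard errors for ten hypothetical trials.
effects = [0.10, 0.25, 0.30, 0.35, 0.42, 0.45, 0.50, 0.55, 0.62, 0.80]
std_errors = [0.05, 0.08, 0.12, 0.15, 0.18, 0.20, 0.22, 0.25, 0.28, 0.32]

plt.scatter(effects, std_errors)
plt.gca().invert_yaxis()  # more precise studies plot towards the top
plt.axvline(sum(effects) / len(effects), linestyle="--")  # crude summary line
plt.xlabel("Effect estimate")
plt.ylabel("Standard error")
plt.title("Funnel plot (illustrative data only)")
plt.show()
```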
Sensitivity analyses
Where data allow, we will perform sensitivity analyses to assess the robustness of conclusions in relation to two aspects of study design, listed below.
  1. The effect of inadequate allocation concealment, by removing studies judged to be at high or unclear risk of bias for this domain.

  2. The effect of inadequate blinding of outcome assessors to treatment allocation, by removing studies judged to be at high or unclear risk of bias for this domain.

We were unable to perform the planned sensitivity analysis for concealment of treatment allocation, as there were too few studies at low risk of bias (only one study was judged to be at low risk in this domain).
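Had enough low‐risk studies been available, a sensitivity analysis of this kind would simply re‐pool the estimate after excluding studies at high or unclear risk of bias; a minimal sketch with invented data:

```python
# Invented studies: (effect estimate, standard error, risk-of-bias
# judgement for the domain of interest).
studies = [
    (0.40, 0.20, "low"),
    (0.25, 0.15, "unclear"),
    (0.55, 0.30, "high"),
    (0.35, 0.18, "low"),
]

def pool(subset):
    """Fixed-effect generic inverse variance pooled estimate."""
    weights = [1.0 / se ** 2 for _, se, _ in subset]
    effects = [effect for effect, _, _ in subset]
    return sum(w * y for w, y in zip(weights, effects)) / sum(weights)

all_studies = pool(studies)
low_risk_only = pool([s for s in studies if s[2] == "low"])
print(f"All studies: {all_studies:.3f}; low risk only: {low_risk_only:.3f}")
# A marked difference between the two pooled estimates would suggest the
# conclusions are sensitive to the risk of bias in that domain.
```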