Analysis

Measures of treatment effect
Dichotomous data
For dichotomous data, we would have presented the results as summary odds ratios (OR) with 95% confidence intervals (CI).
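The planned OR-with-95%-CI presentation can be illustrated with a short sketch using the standard log-OR normal approximation; the function name and the example counts are hypothetical, not taken from any included trial:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Summary odds ratio with a 95% CI from a 2x2 table.

    a, b: events / non-events in the intervention group
    c, d: events / non-events in the control group
    Uses the normal approximation on the log-OR scale.
    """
    or_ = (a * d) / (b * c)
    log_or = math.log(or_)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(log_or - z * se_log_or)
    upper = math.exp(log_or + z * se_log_or)
    return or_, lower, upper

# Hypothetical example: 12/50 events vs 6/50 events
print(odds_ratio_ci(12, 38, 6, 44))
```

With zero cells, a continuity correction would be needed before this approximation applies.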
Continuous data
The standardised mean difference (SMD) would have been used to combine trials that measured the same outcome but used different scales. All outcomes would have been presented with 95% CIs. If a trial had provided multiple interchangeable measures of the same construct at the same time point, we would have calculated the mean SMD across these outcomes and the mean of their estimated variances. Where trials had reported the same outcomes using continuous and dichotomous measures, we would have re‐expressed ORs as SMDs, thereby allowing dichotomous and continuous data to be pooled together, as described in Chapter 6 of the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2021).
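The two calculations described here can be sketched as follows. The pooled-SD form of the SMD (Cohen's d) and the ln(OR) × √3/π re‐expression (based on the logistic distribution's standard deviation of π/√3) are standard methods; the function names are illustrative only:

```python
import math

def pooled_smd(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised mean difference (Cohen's d) using the pooled SD,
    so trials measuring the same outcome on different scales can be
    combined on one dimensionless scale."""
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                   / (n1 + n2 - 2))
    return (mean1 - mean2) / sp

def or_to_smd(odds_ratio):
    """Re-express an odds ratio as an SMD via ln(OR) * sqrt(3) / pi,
    allowing dichotomous results to be pooled with continuous ones."""
    return math.log(odds_ratio) * math.sqrt(3) / math.pi
```

A small-sample correction (Hedges' g) would normally be applied before pooling; it is omitted here for brevity.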
Ordinal data
Ordinal data measured on shorter scales would have been analysed as dichotomous data by combining categories, and the intervention effect would have been expressed as an OR.
Unit of analysis issues
Cluster‐randomised trials
We anticipated that trials using clustered randomisation would have controlled for clustering effects. In case of doubt, we would have contacted the first authors to ask for individual participant data in order to calculate an estimate of the intracluster correlation coefficient (ICC). Had this not been possible, we would have obtained external estimates of the ICC from a similar trial or from a study of a similar population, as described in Chapter 6 of the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2021). Once the ICC was established, we would have used it to reanalyse the trial data. If ICCs from other sources had been used, we would have reported this and conducted sensitivity analyses to investigate the effect of variation in the ICC.
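The usual way an ICC feeds into such a reanalysis is through the design effect, 1 + (m − 1) × ICC, which deflates the trial's effective sample size. A minimal sketch (hypothetical function name and numbers):

```python
def effective_sample_size(n, mean_cluster_size, icc):
    """Deflate a cluster-randomised trial's sample size by the
    design effect 1 + (m - 1) * ICC, so the trial can be combined
    with individually randomised trials without unit-of-analysis
    error."""
    design_effect = 1 + (mean_cluster_size - 1) * icc
    return n / design_effect

# Hypothetical: 200 participants in clusters of ~11, ICC = 0.05
print(effective_sample_size(200, 11, 0.05))
```

Event counts in dichotomous outcomes would be deflated by the same factor, so the event rate is preserved.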
Cross‐over trials
Cross‐over trials would have been analysed using combined data from all study periods, or using first‐period data if combined data were not available.
Trials with more than two treatment arms
Had more than one of the interventions been a music intervention, and had there been sufficient information in the trial to assess the similarity of the interventions, we would have combined similar music interventions to allow a single pair‐wise comparison.
Dealing with missing data
We would have explored the impact of including studies with high levels of missing data by performing sensitivity analyses based on best‐case and worst‐case scenarios. The potential impact of missing data on the findings of the review would have been addressed in the 'Discussion' section of the review.
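For a dichotomous outcome, the best‐case/worst‐case approach brackets each group's event rate under the two extreme assumptions about missing participants. A minimal sketch with a hypothetical function name:

```python
def scenario_event_rates(events, n_analysed, n_missing):
    """Best-case / worst-case bounds on a group's event rate when
    some participants have missing outcomes: first assume none of
    the missing participants had the event, then assume all did."""
    total = n_analysed + n_missing
    best = events / total                  # no missing participant had the event
    worst = (events + n_missing) / total   # every missing participant had the event
    return best, worst

# Hypothetical: 10 events among 40 analysed, 10 participants missing
print(scenario_event_rates(10, 40, 10))
```

If the review's conclusions held under both extremes, the missing data would be judged unlikely to have biased the result.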
Assessment of heterogeneity
Had there been significant heterogeneity, we would have investigated it by conducting subgroup analyses based on the participants' clinical characteristics and the interventions used in the included studies (see 'Subgroup analyses' below).
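Statistical heterogeneity in Cochrane reviews is conventionally quantified with Cochran's Q and the I² statistic; a minimal sketch using inverse‐variance weights (function name hypothetical):

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic from per-study effect
    estimates and their variances, using inverse-variance weights
    around the fixed-effect pooled estimate."""
    w = [1 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # I^2: proportion of total variability due to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

I² values above roughly 50% are often taken as a trigger for the subgroup analyses described above.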
Assessment of reporting bias
Had sufficient study data been available for individual outcomes, we would have drawn and inspected funnel plots for evidence of reporting or publication bias. We would have assessed funnel plot asymmetry visually and statistically using the Begg and Mazumdar rank correlation test (Begg 1994) and the Egger regression test (Egger 1997); these tests are recommended only when 10 or more studies are available. Had asymmetry been suggested by visual assessment or detected by either test, we would have performed exploratory analyses to investigate whether it reflected publication bias or a true relationship between trial size and effect size.
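The Egger test regresses each study's standardised effect (effect/SE) on its precision (1/SE); an intercept far from zero suggests funnel plot asymmetry. A minimal sketch of the intercept calculation (function name hypothetical; a full test would also report its standard error and p value):

```python
def egger_intercept(effects, ses):
    """Intercept of the Egger regression of standardised effect
    (effect / SE) on precision (1 / SE), via ordinary least squares.
    An intercept far from zero suggests funnel plot asymmetry."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1 / s for s in ses]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx
```

For a symmetric funnel (effect unrelated to precision) the intercept is near zero; small-study effects push it away from zero.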
Subgroup analyses
We would have conducted the following subgroup analyses.
Sensitivity analysis
We would have conducted a sensitivity analysis excluding trials using inadequate methods of blinding personnel.