2015 Jul 22;2015(7):CD007007. doi: 10.1002/14651858.CD007007.pub3
Analysis Method
Measures of treatment effect
Where measurements were comparable and on the same scale, we intended to combine them to obtain mean differences. Where different scales measured the same clinical outcome (e.g. depression, quality of life), we planned to standardise the mean differences so that results could be combined across scales. To date, the studies included in the review and its update have provided insufficient data to undertake these analyses. These methods will be retained for subsequent updates.
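The distinction between a raw mean difference and a standardised mean difference can be sketched as follows. This is an illustrative example only, not code from the review: it uses Cohen's d (the mean difference divided by the pooled standard deviation), and the function names are hypothetical.

```python
import math

def mean_difference(mean_t, mean_c):
    """Raw mean difference, usable when outcomes are on the same scale."""
    return mean_t - mean_c

def standardised_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: the mean difference divided by the pooled standard
    deviation, so that results from different scales measuring the same
    clinical outcome can be combined."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd
```

With equal group sizes and standard deviations, the standardised value is simply the mean difference expressed in standard-deviation units.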
Unit of analysis issues
To date, one included study was a cluster‐RCT, and it appropriately accounted for clustering. For future updates, where studies have not appropriately accounted for clustering, we will re‐analyse the data using the methods recommended by Donner 1980.
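One common way to re-analyse a cluster-RCT that ignored clustering is to shrink its effective sample size by the design effect, which requires an intracluster correlation coefficient (ICC). The sketch below illustrates that standard adjustment; it is not the review's own code, and the ICC would have to come from the trial report or an external source.

```python
def design_effect(avg_cluster_size, icc):
    """Design effect for a cluster-randomised trial:
    1 + (average cluster size - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc

def effective_sample_size(n, avg_cluster_size, icc):
    """Divide the nominal sample size by the design effect, so a cluster
    trial contributes appropriately when pooled with individually
    randomised trials."""
    return n / design_effect(avg_cluster_size, icc)
```

For example, with an average cluster size of 11 and an ICC of 0.1, the design effect is 2, halving the trial's effective sample size.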
Dealing with missing data
Rates of missing data on the primary outcome have so far not required us to undertake best‐case and worst‐case scenario analyses to estimate the effect of missing data on the results of pooled studies. Such analyses would enable us to ascertain whether observed effect sizes increased or decreased as a function of the extent of attrition in the two arms of a trial. These methods will be retained for subsequent updates.
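For a dichotomous outcome, a best-case/worst-case analysis can be sketched as below. This is an illustrative example, not the review's code: it assumes risk ratios as the effect measure, and the best case counts all missing intervention-arm participants as having the outcome while all missing control-arm participants do not (the worst case reverses this).

```python
def best_worst_case(events_t, total_t, missing_t,
                    events_c, total_c, missing_c):
    """Return (best_case_rr, worst_case_rr) risk ratios under the two
    extreme imputations of the missing participants. 'total' is the
    number observed in each arm; 'missing' is the number lost."""
    def risk_ratio(e_t, n_t, e_c, n_c):
        return (e_t / n_t) / (e_c / n_c)

    # Best case: all missing in the intervention arm had the outcome,
    # none of the missing in the control arm did.
    best = risk_ratio(events_t + missing_t, total_t + missing_t,
                      events_c, total_c + missing_c)
    # Worst case: the reverse imputation.
    worst = risk_ratio(events_t, total_t + missing_t,
                       events_c + missing_c, total_c + missing_c)
    return best, worst
```

Comparing the two ratios shows how far attrition alone could move the pooled estimate.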
Assessment of reporting biases
We planned to draw funnel plots (estimated treatment effects against their standard errors) to investigate any relationship between effect size and study precision, which is closely related to sample size (Egger 1997). Such a relationship could be due to publication or related biases, or to systematic differences between small and large studies. Funnel plots were not drawn because there were too few included studies (at least 10 are recommended) to identify asymmetry due to publication bias.
Subgroup analyses
We planned to conduct subgroup analyses by type of healthcare setting (which was done) and by type of screening intervention (based on types of tools and questions), which could be done in a future update with more studies. We also stated in the protocol that we would undertake a subgroup analysis comparing screening-only interventions with screening embedded in a larger multi‐component intervention. However, our altered criterion for assessing the inclusion of interventions/comparisons, which explicitly excludes interventions extending beyond an immediate response and referral phase following screening, has meant that this subgroup analysis has not been relevant to date.
Sensitivity analysis
Our original protocol stated our intention to use sensitivity analysis to address study quality and differential dropout, which has been undertaken in this review. However, we have not used sensitivity analysis for intention‐to‐treat issues or duration of follow‐up, as neither has applied to date. These methods will be retained for subsequent updates.