Summary of findings for the main comparison: printed educational material versus no intervention.
Patient or population: healthcare professionals (physicians in 9/10 studies)
Settings: multiple settings
Intervention: printed educational material
Comparison: no intervention

| Outcomes* | Standardised median effect size | No of participants (studies) | Quality of the evidence (GRADE) |
| --- | --- | --- | --- |
| Categorical measure of professional practice (absolute risk difference across various outcomes; mean follow-up: 6 months) | 0.02 higher (range 0.00 to +0.11) | 294,937 (7 studies) | ⊕⊕⊝⊝ low 1, 2, 3 |
| Continuous measure of professional practice (standardised mean difference across various outcomes; mean follow-up: 9 months) | 0.13 higher (range −0.16 to +0.36) | 297 (3 studies) | ⊕⊝⊝⊝ very low 3, 4, 5 |
* Where studies reported more than one measure of an endpoint, we abstracted the primary measure (as defined by the study authors) or the median measure. For categorical measures, we calculated the odds ratio between the intervention of interest and the control intervention. For continuous endpoints, we calculated the standardised mean difference by dividing the difference in mean scores between the intervention and comparison groups in each study by the pooled standard deviation of the two groups.
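As a sketch, the two effect measures defined in the footnote above can be computed as follows. The numbers in the usage comments are hypothetical, not taken from the included studies, and the pooled standard deviation is assumed to be the usual two-group pooling weighted by degrees of freedom (the footnote does not specify the exact pooling formula).

```python
import math

def odds_ratio(events_int, n_int, events_ctl, n_ctl):
    """Odds ratio between intervention and control groups.

    Odds = events / non-events within each group.
    """
    odds_int = events_int / (n_int - events_int)
    odds_ctl = events_ctl / (n_ctl - events_ctl)
    return odds_int / odds_ctl

def standardised_mean_difference(mean_int, sd_int, n_int,
                                 mean_ctl, sd_ctl, n_ctl):
    """Difference in mean scores divided by the pooled standard deviation.

    Pooling formula is an assumption (degrees-of-freedom weighted),
    not stated explicitly in the review.
    """
    pooled_sd = math.sqrt(((n_int - 1) * sd_int ** 2 + (n_ctl - 1) * sd_ctl ** 2)
                          / (n_int + n_ctl - 2))
    return (mean_int - mean_ctl) / pooled_sd

# Hypothetical example: 20/100 events vs. 10/100 events -> OR = 2.25
print(odds_ratio(20, 100, 10, 100))
# Hypothetical example: means 75 vs. 70, both SD 10, n 50 each -> SMD = 0.5
print(standardised_mean_difference(75, 10, 50, 70, 10, 50))
```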
GRADE Working Group grades of evidence
High quality: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate quality: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low quality: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low quality: We are very uncertain about the estimate.
1 Unclear sequence generation.
2 Unclear addressing of incomplete outcome data.
3 Imprecision of the observed effect: the analyses used do not allow confidence intervals to be computed, so the precision of the estimate cannot be formally evaluated. However, most of the median effect sizes of the individual included studies were imprecise.
4 Inadequate allocation concealment.
5 Inconsistency: one study measured a deterioration in outcomes whereas the other two showed improvements.