Table 2.

| AMSTAR criteria | Beasley 1991[7] | Khan 2000[26] | Fergusson 2005[8] | Gunnell 2005[11] | Dubicka 2006[24] | Hammad 2006[25] | Apter 2006[20] | Bridge 2007[18] | Tauscher-Wisniewski 2007[19] | Beasley 2007[21] | Stone 2009[23] | Carpenter 2011[22] | Proportion with item present | Kappa^a |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Was an ‘a priori’ design provided? | Yes | No | Yes | No | Yes | No | Yes | Yes | Yes | No | Yes | Yes | 8/12 | 0.47 |
| 2. Was there duplicate study selection and data extraction? | Yes | No | Yes | No | No | Yes | Yes | No | No | Yes | Yes | Yes | 7/12 | 0.68 |
| 3. Was a comprehensive literature search performed? | Yes | No | Yes | No | Yes | No | No | Yes | No | No | No | No | 4/12 | 0.63 |
| 4. Was the status of publication used as an inclusion criterion? | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | 8/12 | 0.33 |
| 5. Was a list of studies provided? | No | No | No | No | No | Yes | No | No | No | No | No | Yes | 2/12 | 0.75 |
| 6. Were the characteristics of the included studies provided? | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | 11/12 | 0.63 |
| 7. Was the scientific quality of the included studies assessed and documented? | No | No | No | No | Yes | Yes | No | Yes | No | No | No | No | 3/12 | 0.47 |
| 8. Was the scientific quality of the included studies used appropriately in formulating conclusions? | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | No | 10/12 | 0.56 |
| 9. Were the methods used to combine the findings of studies appropriate? | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | 11/12 | 0.25 |
| 10. Was the likelihood of publication bias assessed? | No | No | No | No | No | No | No | Yes | No | No | No | No | 1/12 | 0.63 |
| 11. Was the conflict of interest stated? | No | No | Yes | Yes | Yes | No | No | Yes | No | No | Yes | Yes | 6/12 | 0.67 |
| Total score (out of 11) | 7 | 3 | 7 | 5 | 8 | 7 | 6 | 9 | 2 | 3 | 7 | 7 | --- | 0.86^b |
| Overall methodological quality (L = low, M = moderate, H = high) | M | L | M | M | M | M | M | H | L | L | M | M | --- | --- |
^a Kappa values for inter-rater reliability of the two independent coders who assessed each of the 11 items for the 12 reviews.
^b Intraclass correlation coefficient (ICC) assessing the inter-rater reliability of the two independent raters of the total quality score for the 12 reviews.