Table 3. Risk of bias ratings of the included studies, by first author and year.
| Items | Awan (2002) | Beshara (2016) | Bonnechère (2014) | Cai (2019) | Chan (2010) | Chen (2020) | Cools (2014) | Correll (2018) | Çubukçu (2020) | Cuesta-Vargas (2016) | Da Cunha Neto (2018) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Were patients stable in the time between repeated measurements on the construct to be measured? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG | A |
| 2. Was the time interval between the repeated measurements appropriate? | A | VG | VG | VG | A | A | A | A | VG | VG | A |
| 3. Were the measurement conditions similar for the repeated measurements, except for the condition being evaluated as a source of variation? | VG | VG | VG | VG | A | VG | VG | VG | VG | VG | A |
| 4. Did the professional(s) administer the measurement without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | VG | VG | A | VG | VG | VG | VG | A | A | A |
| 5. Did the professional(s) assign scores or determine values without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | VG | VG | A | VG | VG | VG | VG | A | A | A |
| 6. Were there any other important flaws in the design or statistical methods of the study? | D | D | VG | A | I | A | VG | VG | VG | VG | A |
| 7. For continuous scores: was an intraclass correlation coefficient (ICC) calculated? | A | VG | A | VG | A | VG | VG | VG | VG | VG | A |
| 8. For ordinal scores: was a (weighted) kappa calculated? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| 9. For dichotomous/nominal scores: was kappa calculated for each category against the other categories combined? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Overall Score | D | D | A | A | I | A | A | A | A | A | A |
| Items | De Baets (2020) | de Winter (2004) | Dougherty (2015) | Hawi (2014) | Huber (2015) | Hwang (2017) | Kolber (2011) | Kolber (2012) | Lim (2015) | Mejia-Hernandez (2018) | Milgrom (2016) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Were patients stable in the time between repeated measurements on the construct to be measured? | VG | VG | A | VG | VG | VG | VG | VG | VG | VG | VG |
| 2. Was the time interval between the repeated measurements appropriate? | A | A | VG | A | A | A | A | VG | VG | A | A |
| 3. Were the measurement conditions similar for the repeated measurements, except for the condition being evaluated as a source of variation? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG |
| 4. Did the professional(s) administer the measurement without knowledge of scores or values of other repeated measurement(s) in the same patients? | A | A | VG | A | VG | A | VG | VG | VG | VG | A |
| 5. Did the professional(s) assign scores or determine values without knowledge of scores or values of other repeated measurement(s) in the same patients? | A | A | VG | A | VG | A | VG | VG | VG | VG | A |
| 6. Were there any other important flaws in the design or statistical methods of the study? | A | VG | VG | D | A | D | VG | VG | VG | VG | D |
| 7. For continuous scores: was an intraclass correlation coefficient (ICC) calculated? | VG | A | A | VG | VG | A | VG | VG | VG | A | A |
| 8. For ordinal scores: was a (weighted) kappa calculated? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| 9. For dichotomous/nominal scores: was kappa calculated for each category against the other categories combined? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Overall Score | A | A | A | D | A | D | A | VG | VG | A | D |
| Items | Mitchell (2014) | Picerno (2015) | Poser (2015) | Ramos (2019) | Rigoni (2019) | Schiefer (2015) | Scibek (2013) | Shin (2012) | Walker (2016) | Werner (2014) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1. Were patients stable in the time between repeated measurements on the construct to be measured? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG |
| 2. Was the time interval between the repeated measurements appropriate? | A | A | VG | VG | A | A | A | A | A | A |
| 3. Were the measurement conditions similar for the repeated measurements, except for the condition being evaluated as a source of variation? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG |
| 4. Did the professional(s) administer the measurement without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | A | A | A | VG | VG | A | VG | VG | VG |
| 5. Did the professional(s) assign scores or determine values without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | A | A | A | VG | VG | A | VG | VG | VG |
| 6. Were there any other important flaws in the design or statistical methods of the study? | VG | VG | A | VG | VG | A | A | VG | A | A |
| 7. For continuous scores: was an intraclass correlation coefficient (ICC) calculated? | VG | VG | VG | A | VG | VG | VG | VG | VG | VG |
| 8. For ordinal scores: was a (weighted) kappa calculated? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| 9. For dichotomous/nominal scores: was kappa calculated for each category against the other categories combined? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Overall Score | A | A | A | A | A | A | A | A | A | A |
Abbreviations: VG: very good; A: adequate; D: doubtful; I: inadequate; N/A: not applicable.
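Items 7 and 8 concern the statistics used to quantify reliability: an intraclass correlation coefficient (ICC) for continuous scores and a (weighted) kappa for ordinal scores. As a minimal sketch only, not code from any of the reviewed studies, the Python example below illustrates one common way these statistics can be computed: an ICC(2,1) (two-way random effects, absolute agreement, single measurement) derived from the standard ANOVA decomposition, and scikit-learn's `cohen_kappa_score` for a linearly weighted kappa. All input data are hypothetical.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score


def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    x has shape (n_subjects, k_measurements); each cell is one score.
    """
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between-subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between-sessions/raters
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )


# Hypothetical test-retest data: shoulder range of motion (degrees),
# five subjects measured in two sessions.
rom = np.array([
    [170.0, 168.0],
    [155.0, 158.0],
    [142.0, 140.0],
    [165.0, 167.0],
    [150.0, 149.0],
])
print(f"ICC(2,1) = {icc_2_1(rom):.3f}")

# Hypothetical ordinal gradings (0-2) from two raters; linear weights
# penalize disagreements in proportion to their distance on the scale.
rater_a = [0, 1, 2, 1, 0, 2, 1]
rater_b = [0, 1, 1, 1, 0, 2, 2]
print(f"weighted kappa = {cohen_kappa_score(rater_a, rater_b, weights='linear'):.3f}")
```

Note that the appropriate ICC model depends on the study design (e.g., ICC(3,1) for a fixed set of raters), so this particular formula should not be assumed for every study rated in the table.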