Table 3. Quality assessment of the included studies evaluating strain elastography and shear wave elastography.
| Item | Gaspari 2009 [21] | Iagnocco 2010 [55] | Cannao 2014 [30] | Suehiro 2014 [56] | Dasgeb 2015 [17] | Suehiro 2015 [34] | Botar-Jid 2016 [33] | Santiago 2016 [19] | Hou 2015 [38] | Liu 2015 [57] | Yun Lee 2015 [35] |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Was interrater reliability of the index test reported? | N | Y | Y | N | N | N | N | N | Y | Y | N |
| 2. Were raters blinded to the findings of other raters? | N/A | Y | Y | N/A | N/A | N/A | N/A | N/A | Y | Y | N/A |
| 3. Was intra-rater reliability of the index test reported? | N | N | N | N | N | N | N | N | N | Y | N |
| 4. Were raters blinded to their prior findings? | N/A | Y | N/A | N/A | N/A | N/A | N/A | N/A | N/A | Y | N/A |
| 5. Was test-retest reliability of the index test reported? | N | Y | N | N | N | N | N | Y | N/A | N | Y |
| 6. Was the stability of the variable being measured taken into account when determining the time interval between repeat measures? | N/A | Y | N/A | N/A | N/A | N/A | N/A | Y | N/A | N/A | ? |
| 7. Were raters blinded to additional cues that could bias results? | N/A | Y | N | N/A | N/A | N/A | N/A | Y | N | Y | N |
| 8. Was the sample size included in the analysis adequate? | N/A | Y | N | N/A | N/A | N/A | N/A | N | Y | Y | N |
| 9. For continuous scores: was an intraclass correlation coefficient (ICC) calculated? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | Y | Y | Y | N |
| 10. For dichotomous/nominal/ordinal scores: was a kappa calculated? | N/A | N | Y | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |

Gaspari 2009, Iagnocco 2010, Cannao 2014, Suehiro 2014, Dasgeb 2015, Suehiro 2015, Botar-Jid 2016, and Santiago 2016 evaluated strain elastography; Hou 2015, Liu 2015, and Yun Lee 2015 evaluated shear wave elastography.

Y = yes; N = no; ? = unclear; N/A = not applicable.