Table 1.
Criterion | Definition | Standard |
---|---|---|
Reliability | The degree to which a score is free from measurement error. | ICC and kappa for inter-rater, intra-rater, and test-retest reliability: excellent (≥0.75), adequate (0.40–0.74), or poor (<0.40). (25) (34) |
Validity | The extent to which an instrument measures what it purports to measure. | Construct/convergent and concurrent validity correlations: excellent (≥0.60), adequate (0.31–0.59), poor (≤0.30) (25); ROC analysis (AUC): excellent (≥0.90), adequate (0.70–0.89), poor (<0.70) (34) |
Respondent Burden | The ease with which a patient can complete the measure. | Excellent (brief, ≤15 min, with high acceptability); adequate (either longer, but appropriately so, or some reported problems with acceptability); poor (both length and acceptability are problematic) (25) |
Administrative Burden | The ease with which scores can be calculated and understood. | Excellent (scoring by hand; resulting metric relevant and interpretable for researchers, clinicians, and clients); adequate (computer scoring, lack of detail in scoring criteria, or more obscure interpretation); poor (costly and/or complex scoring and/or interpretation) (25) |
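The reliability and validity bands above can be expressed as simple threshold functions. The sketch below is illustrative only (function names are our own, and it assumes the adequate reliability band runs from 0.40 up to, but not including, 0.75):

```python
def rate_reliability(coefficient: float) -> str:
    """Classify an ICC or kappa coefficient using the table's
    reliability bands: excellent >= 0.75, adequate 0.40-0.74,
    poor < 0.40 (assumed band boundaries)."""
    if coefficient >= 0.75:
        return "excellent"
    if coefficient >= 0.40:
        return "adequate"
    return "poor"

def rate_auc(auc: float) -> str:
    """Classify a ROC area under the curve using the table's
    validity bands: excellent >= 0.90, adequate 0.70-0.89,
    poor < 0.70."""
    if auc >= 0.90:
        return "excellent"
    if auc >= 0.70:
        return "adequate"
    return "poor"
```

For example, a test-retest ICC of 0.82 would be rated "excellent", while an AUC of 0.72 would be rated "adequate" under these criteria.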