Table 2.
Test | Purpose | %
---|---|---
1. Cronbach’s alpha | Internal consistency | 56.4%
2. Student t-test/ANOVA | Responsiveness | 38.2%
3. Criterion validity | Concurrent and predictive validity | 32.7%
4. Construct validity | Convergent and divergent validity, or ability to differentiate between groups | 32.7%
5. Pearson r | Intra-rater reliability | 20.0%
6. Intra-class correlation coefficient (ICC) | Intra-rater reliability | 16.4%
7. Cut point | Sensitivity and specificity | 16.4%
8. Effect size | Responsiveness | 14.5%
9. Pearson r | Test–retest reliability | 12.7%
10. Pearson r | Internal consistency | 10.9%
11. Area under receiver operating characteristic (ROC) curve | Responsiveness | 9.1%
12. Kappa | Test–retest reliability | 7.3%
13. Face validity | | 7.3%
14. Kappa | Internal consistency | 5.5%
15. Wilcoxon rank test | Responsiveness | 3.6%
16. Spearman correlation coefficient | Inter-rater reliability | 1.8%
17. Spearman correlation coefficient | Test–retest reliability | 1.8%
18. Percent agreement | Inter-rater reliability | 1.8%
19. Percent agreement | Test–retest reliability | 1.8%
20. ICC | Inter-rater reliability | 1.8%
21. ICC | Internal consistency | 1.8%
22. Pearson r | Inter-rater reliability | 0.0%
23. Spearman correlation coefficient | Intra-rater reliability | 0.0%
24. Percent agreement | Intra-rater reliability | 0.0%
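As a concrete illustration of the most frequently reported statistic in the table, Cronbach’s alpha for internal consistency can be computed from an items-as-columns score matrix. This is a minimal sketch with made-up data, not taken from the studies summarized above; the function name and toy scores are assumptions for illustration only.

```python
# Minimal sketch: Cronbach's alpha from a respondents x items score matrix.
# Data below are illustrative, not drawn from the reviewed studies.
def cronbach_alpha(scores):
    """scores: list of respondent rows, each a list of k item scores."""
    k = len(scores[0])

    def variance(values):
        # Sample variance (n - 1 denominator).
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    # Variance of each item (column) and of the total score per respondent.
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(data), 3))  # → 0.94
```

Values near 1 indicate that the items measure a common construct; conventional reporting thresholds (e.g. alpha ≥ 0.70) vary by field.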