2020 Jun 23;34:66. doi: 10.34171/mjiri.34.66

Table 1. Criteria for evaluating instruments (14) .

| Criteria | Description | Types | Method |
|---|---|---|---|
| Reliability | The degree to which an instrument is consistent or free from random error | • Test–retest reliability • Internal consistency • Comparison with proxy responses | • Test–retest reliability (ICC¹ and κ) • Internal consistency (coefficient α) • Proxy responses (ICC) |
| Validity | The degree to which an instrument measures what it intends to measure | • Factorial structure • Convergent correlations • Discriminant groups | • Factorial structure (exploratory or confirmatory factor analysis, Rasch analysis) • Discriminant groups (differences by means or %) |
| Responsiveness | The ability of an instrument to measure important changes following intervention(s) | - | Clinical criteria for change |
| Item/instrument bias | Assesses in practical terms whether individual questions or summary scores are biased for individuals with SCI² | - | - |
| Measurement model | Examines whether there are problems with floor effects (lowest level of ability) or ceiling effects (highest level of ability) | - | The instrument has scales or measures where 20% of persons with SCI are grouped at the ends of the scoring range; the score distribution (mean and standard deviation) can also demonstrate this |

¹ Intraclass correlation coefficient

² Spinal cord injury
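The internal-consistency coefficient α named in the reliability row can be computed directly from an item-score matrix. As a minimal sketch (not from the source article; the function name and sample data are illustrative), Cronbach's α is k/(k−1) times one minus the ratio of summed item variances to the variance of total scores:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's coefficient alpha for a (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 3 items on a 1-5 scale
scores = np.array([[2, 3, 3],
                   [4, 4, 5],
                   [1, 2, 1],
                   [3, 3, 4],
                   [5, 5, 5]], dtype=float)
alpha = cronbach_alpha(scores)  # high alpha: items rise and fall together
```

Values near 1 indicate that the items covary strongly (high internal consistency); when every respondent answers each item identically, α equals exactly 1.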