2016 Sep 13;10:1767–1776. doi: 10.2147/PPA.S108489

Table 1.

Summary of tests and criteria

Property: Item performance
Definition/test: Data quality; assessed by completeness of data and score distributions:
- Frequency and percentage of missing data per item
- Floor/ceiling effects
- Response option frequencies
Criteria for acceptability:
- Items missing in more than 10% of responses
- Ceiling effect: % of responses in the highest response category > (100/the number of response options on an item)%, eg, 20% for a five-response ordinal scale
- Floor effect: % of responses in the lowest response category > (100/the number of response options on an item)%, eg, 20% for a five-response ordinal scale
- Utilization of all response options

Property: Reliability
Definition/test: Internal consistency reliability: the extent to which items in a domain measure the same concept; assessed by Cronbach's α and by inter-item correlations (Spearman's or Pearson's coefficient)
Criteria for acceptability:
- Cronbach's α ≥ 0.70 was considered to indicate internal consistency reliability
- Inter-item correlations (Pearson's or Spearman's coefficient): item pairs with r > 0.80 should be highlighted as potentially redundant

Property: Validity (hypothesis testing)
Definition/test: Evidence that the questionnaire measures a single concept, that items can be combined to form a summary score, and that domains measure distinct but related concepts:
- Concurrent validity: hypotheses based on criterion measure(s)
- Known-groups differences: the ability of a scale to differentiate among known groups
Criteria for acceptability:
- Concurrent validity: Spearman's coefficient values of <0.3, 0.31–0.59, and >0.6 were considered to indicate low, moderate, and high levels of correlation, respectively, with acceptable convergence at ≥0.4
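As a minimal sketch of the item-performance checks, the missing-data and floor/ceiling criteria above can be computed directly from item responses. The data here are hypothetical (one 5-option ordinal item); the 10% missing-data flag and the 100/number-of-options threshold follow the table.

```python
import numpy as np

# Hypothetical responses to one 5-option ordinal item (1 = lowest, 5 = highest);
# np.nan marks a missing response.
responses = np.array([5, 5, 4, 5, 3, 5, np.nan, 5, 2, 5], dtype=float)
n_options = 5

# Completeness: flag the item if more than 10% of responses are missing.
n_missing = int(np.isnan(responses).sum())
pct_missing = 100 * n_missing / responses.size

# Floor/ceiling effects among the answered responses.
answered = responses[~np.isnan(responses)]
pct_floor = 100 * np.mean(answered == 1)            # lowest response category
pct_ceiling = 100 * np.mean(answered == n_options)  # highest response category
threshold = 100 / n_options                         # 20% for a 5-option item

print(f"missing: {pct_missing:.0f}% (flag if > 10%)")
print(f"ceiling effect: {pct_ceiling:.0f}% (threshold {threshold:.0f}%)")
print(f"floor effect: {pct_floor:.0f}% (threshold {threshold:.0f}%)")
```

With this sample the ceiling percentage far exceeds the 20% threshold, so the item would be flagged for a ceiling effect.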
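The reliability criteria can likewise be checked with a short computation: Cronbach's α from the standard item-variance formula, and a Pearson inter-item correlation matrix scanned for pairs above the r > 0.80 redundancy cut-off. The score matrix is invented for illustration only.

```python
import numpy as np

# Hypothetical item scores for one domain: rows = respondents, columns = items.
X = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [1, 2, 1, 2],
], dtype=float)

k = X.shape[1]
item_vars = X.var(axis=0, ddof=1)          # per-item sample variances
total_var = X.sum(axis=1).var(ddof=1)      # variance of the summed domain score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Pearson inter-item correlations; pairs with r > 0.80 may be redundant.
r = np.corrcoef(X, rowvar=False)
iu = np.triu_indices(k, 1)
redundant = [(i, j, r[i, j]) for i, j in zip(*iu) if r[i, j] > 0.80]

print(f"Cronbach's alpha = {alpha:.2f}")   # >= 0.70 suggests internal consistency
for i, j, rij in redundant:
    print(f"items {i} and {j}: r = {rij:.2f} (potentially redundant)")
```

Highly inter-correlated items push α up, which is why the table pairs the α ≥ 0.70 criterion with a redundancy check rather than relying on α alone.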
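For concurrent validity, Spearman's coefficient is the Pearson correlation of rank-transformed data; the sketch below implements that with average ranks for ties and applies the table's ≥ 0.4 convergence criterion. The scale scores and criterion measure are hypothetical.

```python
import numpy as np

def avg_ranks(a):
    # 1-based ranks, with tied values receiving the average of their positions.
    order = np.argsort(a)
    ranks = np.empty(len(a), dtype=float)
    ranks[order] = np.arange(1, len(a) + 1)
    for v in np.unique(a):
        tied = (a == v)
        ranks[tied] = ranks[tied].mean()
    return ranks

def spearman(x, y):
    # Spearman's coefficient = Pearson correlation of the ranks.
    rx = avg_ranks(np.asarray(x, dtype=float))
    ry = avg_ranks(np.asarray(y, dtype=float))
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical questionnaire domain scores and an external criterion measure.
scale_scores = [10, 14, 12, 18, 16, 20, 11, 15]
criterion = [3, 4, 3, 5, 4, 5, 2, 4]

rho = spearman(scale_scores, criterion)
print(f"Spearman's rho = {rho:.2f}")       # <0.3 low, 0.31-0.59 moderate, >0.6 high
print("acceptable convergence (>= 0.4):", rho >= 0.4)
```

In practice `scipy.stats.spearmanr` gives the same coefficient; the hand-rolled version is shown only to make the rank-then-correlate definition explicit.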