Author manuscript; available in PMC: 2009 Mar 2.
Published in final edited form as: Am J Speech Lang Pathol. 2006 Nov;15(4):307–320. doi: 10.1044/1058-0360(2006/030)

TABLE 2.

Definitions of characteristics of the evaluated instruments.

Characteristic: Definitions/considerations for evaluation

Construct measured: What does the instrument measure, and how is that construct defined?

Population: For whom was the instrument intended (the target population)? Who made up the tested sample (i.e., who was actually tested, including identifying details such as medical diagnosis and communication disorder)? How large was the tested sample?

Reliability: Does the instrument give a consistent answer across two test administrations (test–retest reliability)? What is the instrument's internal consistency, that is, the extent to which items in a scale measure aspects of the same characteristic, as indexed by correlations among all items, Cronbach's coefficient alpha, or item-to-total correlations?

Validity: Does the instrument measure what it purports to measure? Includes face validity (do consumers of the instrument help verify the importance of items?); content validity (do experts in the field help verify the theoretical domain sampled by the test?); concurrent/convergent validity (how closely does an individual's test score correlate with his or her score on a criterion variable measured at about the same time?); divergent validity (are scores on the reviewed instrument appropriately unrelated to scores from instruments measuring different constructs?); predictive validity (how closely does an individual's test score predict future performance on a criterion measure?); and construct validity (how well does the instrument measure an abstract or theoretical concept?).

Frequency of instrument use: How many times has the instrument been used in the peer-reviewed research literature?
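The two reliability indices named in the table, internal consistency (Cronbach's coefficient alpha) and test–retest reliability (a correlation between two administrations), can be computed from raw item scores. The following pure-Python sketch illustrates both; the rating data (4 items, 6 respondents, and the retest totals) are entirely hypothetical numbers invented for the example, not data from the article.

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for a scale.

    `items` is a list of item-score lists, one inner list per item,
    aligned across respondents: items[i][j] = respondent j's score on item i.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

def pearson_r(x, y):
    """Pearson correlation; used here as a test-retest reliability index."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings: 4 items x 6 respondents (rows are items).
items = [
    [4, 2, 5, 1, 3, 4],
    [5, 2, 5, 2, 3, 4],
    [4, 3, 5, 1, 4, 4],
    [5, 2, 4, 2, 3, 5],
]
alpha = cronbach_alpha(items)            # ≈ 0.96 for this data

time1 = [sum(s) for s in zip(*items)]    # total scores at first testing
time2 = [17, 10, 18, 7, 13, 16]          # hypothetical retest totals
r = pearson_r(time1, time2)

print(f"Cronbach's alpha = {alpha:.2f}, test-retest r = {r:.2f}")
```

Values of alpha near 1 indicate that the items behave as measures of a single underlying characteristic; a high test–retest correlation indicates that the instrument gives a consistent answer across the two test times.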