Table 2. Glossary of psychometric and statistical terms
| Term | Definition |
|---|---|
| Construct validity | The extent to which a measurement corresponds to theoretical concepts (constructs) concerning the phenomenon under study [16]. |
| Convergent validity | The degree to which a measure is correlated with other measures to which it is theoretically predicted to correlate. In contrast, discriminant validity describes the degree to which the measure is not similar to (diverges from) other measures to which it theoretically should not be similar. Convergent validity and discriminant validity are variants of construct validity [16]. |
| Correlation coefficient | An index that quantifies the linear relationship between a pair of variables (range = −1 to 1), with the sign indicating the direction of the relationship and the numerical magnitude its strength. Values of −1 or 1 indicate that the sample values fall on a straight line, whereas a value of zero indicates the lack of any linear relationship between the two variables [17]. |
| Criterion validity | The extent to which the measurement correlates with an external criterion of the phenomenon under study [16]. |
| Cronbach’s α | The estimate of the correlation between the total score across a series of items from a rating scale and the total score that would have been obtained had a comparable series of items been employed [16]. Cronbach’s α is an index of internal consistency of a psychological test ranging from 0 to 1. (Guidelines for interpretation: <0.60, unacceptable; 0.60-0.65, undesirable; 0.65-0.70, minimally acceptable; 0.70-0.80, respectable; 0.80-0.90, very good; and >0.90, consider shortening the scale by reducing the number of items [18].) |
| Factor analysis | A set of statistical methods (e.g., maximum likelihood estimation) for analyzing the correlations among several variables in order to estimate the number of fundamental dimensions that underlie the observed data and to describe and measure those dimensions [16]. These underlying, unobservable, latent variables are usually known as the common factors [17]. In exploratory factor analysis, no hypothesis about the number and kind of common factors is specified before the analysis; in confirmatory factor analysis, the number of common factors is specified in advance. |
| Floor or ceiling effect | The number of respondents who achieved the lowest or highest possible score [10, 11]. |
| Goodness of fit | The degree of agreement between an empirically observed distribution and a mathematical or theoretical distribution [16]. |
| Internal consistency | The extent to which items in a (sub)scale are intercorrelated, thus measuring the same construct [10]. |
| Intraclass correlation | The proportion of variance of an observation due to between-subject variability in the “true” scores of a measuring instrument [17]. |
| Test-retest reliability | An index of score consistency over a brief period of time (typically several weeks), usually estimated as the correlation coefficient between scores from two administrations of the same test separated by a set interval [17]. |
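Two of the quantities defined above, Cronbach's α and the test-retest correlation, are simple enough to compute directly. The sketch below illustrates both on a small hypothetical data set (the score matrix and the simulated retest offsets are invented for illustration only); α is computed from its standard formula, α = k/(k−1) · (1 − Σ item variances / variance of total score).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 6 respondents x 4 Likert-type items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [3, 2, 3, 3],
])
alpha = cronbach_alpha(scores)

# Test-retest reliability: Pearson correlation of total scores from two
# administrations (the second is simulated here with small arbitrary shifts).
test1 = scores.sum(axis=1)
test2 = test1 + np.array([1, -1, 0, 1, 0, -1])
r = np.corrcoef(test1, test2)[0, 1]
```

For this toy matrix α ≈ 0.91, which by the guidelines in the table would count as "very good", and the simulated retest correlation is close to 1, indicating highly stable scores.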