Decision tree for measuring socioeconomic conditions.
ᵃDiemer et al. (2013) provided an excellent set of pragmatic considerations for measuring many of these variables. Galobardes et al. (2006a, 2006b), Krieger et al. (1997), and Shavers (2007) described the theoretical strengths and limitations of income, wealth, education, and other socioeconomic conditions.
ᵇFor example, Wright (1997) and Wright and Perrone (1977).
ᶜSee Haug (1977) for very serious concerns about the validity of existing prestige measures.
ᵈSee Coleman (1988) for a theoretical discussion of social capital. Tulin et al. (2018) provided one example of measuring social capital.
ᵉNote that procedures for selecting indicators for formative models are largely undeveloped (West & Grimm, 2014).
Diamantopoulos
and Winklhofer (2001) provided a set of recommendations for
indicator selection. Their recommendation to use multiple-indicators
multiple-causes (MIMIC) models for path estimation should be ignored,
however, because MIMIC models are irrelevant to formative models (Lee et al.,
2013; Muthén, 1989). Theory on formative models has advanced to the point of identifying when to use them and how to estimate them, but not how to decide which indicators to use for them. One approach to selecting indicators begins with recognizing that a formatively measured variable is essentially a variable optimized to predict a set of outcomes: because the formatively measured variable begins as the shared variance of the outcomes, its indicators’ weights reflect only the unique variance each indicator contributes to that shared variance. Hence, the weights, and thus the formative variable they define, are optimized to predict the outcomes. On this view, one would choose indicators that are relevant to socioeconomic status (SES) and that are uniquely related to the outcomes. Income and education may therefore be relevant for some outcomes, whereas occupation and wealth may be relevant for others. A major issue with this approach is that the chosen indicators need not represent SES completely; they are only the set of variables that most fully accounts for SES’s relation to an outcome. Using only predictive indicators to represent SES in a formative model could therefore omit variables important for a complete representation of SES. A better approach might be to start with a set of indicators judged to represent the breadth of SES. When entered into the model, the indicators from this broader set that do not uniquely predict the outcomes will receive low weights and may need to be dropped to obtain satisfactory model fit (see the sketch below). To my knowledge, no guidelines exist for managing this tension between model fit and content validity. (Note that this logic follows that developed by Diamantopoulos & Winklhofer, 2001, for the selection and retention of indicators.)
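To make this concrete, here is a minimal sketch of such a formative specification written in the syntax of the R package lavaan, which the Supplemental Material also uses. The indicator names (income, educ, occup, wealth), the outcome names (y1, y2), and the data frame dat are hypothetical placeholders rather than variables from any particular data set; the model only illustrates the logic described above and is not offered as a recommended measurement model for SES.

    library(lavaan)

    # Minimal formative (composite) specification of SES.
    # All variable names are hypothetical placeholders.
    formative_model <- '
      # "<~" defines SES as a weighted composite of its indicators
      SES <~ income + educ + occup + wealth

      # the composite needs to predict at least two outcomes
      # for its weights to be identified
      y1 ~ SES
      y2 ~ SES

      # residual covariance between the outcomes, if theory warrants it
      y1 ~~ y2
    '

    fit_formative <- sem(formative_model, data = dat)

    # Inspect the composite weights: indicators that contribute little
    # unique variance to the outcomes receive weights near zero and are
    # the candidates for removal discussed above.
    summary(fit_formative, standardized = TRUE)

Dropping a low-weight indicator and refitting would improve parsimony but narrow the representation of SES, which is the fit-versus-content-validity tension noted above.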
ᶠNote that variables that are reflectively measured (e.g.,
identity, subjective SES) should be modeled as reflective indicators of
SES. Bollen and
Bauldry (2011) and Bainter and Bollen (2014)
provided examples of how to fit formative models. van Bork et al. (in
Asendorpf et
al., 2016, Figure 1, bottom half, p. 308) demonstrated how to test
whether formatively measured variables affect outcomes over and above
their indicators. I provide an example of these two steps in the Supplemental Material available online, using the lavaan package in R (Rosseel, 2012).
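The sketch below is not the procedure from those sources; it shows one related comparison, reusing fit_formative, dat, and the placeholder variable names from the earlier sketch, that can serve as a rough check before consulting them: testing whether routing the indicators’ effects through a single SES composite fits as well as letting the indicators predict the outcomes directly.

    # Comparison model: the indicators predict the outcomes directly,
    # with no composite constraining their effects.
    direct_model <- '
      y1 ~ income + educ + occup + wealth
      y2 ~ income + educ + occup + wealth
      y1 ~~ y2
    '

    fit_direct <- sem(direct_model, data = dat)

    # The composite model forces the indicator-outcome effects to run
    # through SES, so it is nested in this direct-effects model; a
    # chi-square difference test asks whether that constraint is tenable.
    anova(fit_formative, fit_direct)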