Planning Phase

Point 1: Develop a clear theoretical model for the instrument based on the available literature.
Briefly discuss the alternative, theory-informed (CFA) factor models the instrument could manifest. If factors within a CFA model are conceptually related, there is also an argument for an ESEM model with a similar structure. Should a global factor be expected, a bifactor CFA should be described and an associated bifactor ESEM model mentioned. (Report: Yes; section: Literature review)
For the validation of psychometric instruments, present clear hypotheses about the (CFA) factor structure of the instrument, together with the alternative hypothesis that ESEM models should, theoretically, provide a better representation of the data. For other studies, the relationships between exogenous and endogenous factors should be clearly articulated. (Report: Yes; section: Literature review)
Point 2: Plan for the most appropriate sample size.

Determine and plan for the most appropriate sample size for the study. This can be done in many ways [c.f. (70, 72, 88)]; Monte Carlo simulations are preferred [c.f. (73)]. (Report: No)
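Where a Monte Carlo approach is taken, a minimal sketch of the underlying logic in R with lavaan is shown below; the population loadings, the candidate sample size, and the number of replications are illustrative assumptions, not recommendations.

```r
# Minimal Monte Carlo sketch: generate data from an assumed population model
# at a candidate N and check how often the fitted model behaves acceptably.
library(lavaan)
set.seed(1234)

pop_model <- 'f1 =~ 0.7*y1 + 0.7*y2 + 0.6*y3 + 0.6*y4'  # assumed population loadings
fit_model <- 'f1 =~ y1 + y2 + y3 + y4'

n_reps <- 500   # illustrative number of replications
n_cand <- 300   # candidate sample size under evaluation
converged <- logical(n_reps)
for (i in seq_len(n_reps)) {
  d <- simulateData(pop_model, sample.nobs = n_cand)
  fit <- cfa(fit_model, data = d)
  converged[i] <- lavInspect(fit, "converged")
}
mean(converged)  # proportion of converged solutions at this sample size
```

The same loop can be extended to track parameter bias or the power to detect a focal loading, rather than convergence alone.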
Data Preparation Phase

Point 3: Data cleaning, screening, and preparation.
Screen the data for potential issues (e.g., outliers) and prepare it for further analysis. Data quality checks should also be performed [c.f. (64)]. (Report: No)
Decide upon an appropriate missing-values strategy (e.g., multiple imputation, FIML, sensitivity analysis). (Report: Yes; section: Methods: statistical analysis)
Point 4: Determine the most appropriate software, estimation method, rotation, and procedure for the analysis.
Decide upon and report the software packages (and version numbers) that will be used for the analysis. ESEM is fully integrated in Mplus, but is currently only partially supported in R. (Report: Yes; section: Methods: statistical analysis)
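If R is used, the exact versions can be captured directly from the session so they can be quoted verbatim in the Methods section; lavaan and semTools are named here only as examples of packages with (partial) ESEM and reliability support.

```r
# Capture the software environment for reporting.
R.version.string
packageVersion("lavaan")    # example ESEM-capable SEM package
packageVersion("semTools")  # example package for reliability estimates
```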
If the data follow a multivariate normal distribution, employ the maximum likelihood (ML) estimator in Mplus. If the data are not normally distributed, either transform the data or use a more robust estimation method in Mplus (e.g., MLR, WLSMV). The MLR estimator is appropriate for all models with continuous indicators; for models comprised of ordinal indicators, WLSMV should be used. (Report: Yes; section: Methods: statistical analysis)
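The same estimator labels are used by lavaan, so a hedged R equivalent of this decision could look as follows; `model`, `df`, and the item names are placeholders.

```r
# MLR for continuous (possibly non-normal) indicators; WLSMV for ordinal items.
fit_mlr   <- cfa(model, data = df, estimator = "MLR")
fit_wlsmv <- cfa(model, data = df, estimator = "WLSMV",
                 ordered = c("item1", "item2", "item3"))  # declare ordinal items
```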
Decide upon the most appropriate rotation method (Geomin, target, or target orthogonal): Geomin rotations (with an epsilon value of 0.5) suit more exploratory approaches, target rotations suit confirmatory approaches, and (target) orthogonal rotations are used for bifactor ESEM modeling. (Report: Yes; section: Methods: statistical analysis)
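In lavaan, the rotation is set through the `rotation` and `rotation.args` options when an ESEM is specified with efa() blocks (see the specification sketch under Point 7); `esem_model` and `df` are placeholders, and the exact option names should be checked against ?lavOptions for the installed version.

```r
# Geomin rotation with epsilon = .5 for a more exploratory ESEM.
esem_geomin <- sem(esem_model, data = df, estimator = "MLR",
                   rotation = "geomin",
                   rotation.args = list(geomin.epsilon = 0.5))

# Orthogonal rotation, e.g., as a step toward bifactor-style ESEM solutions.
esem_ortho <- sem(esem_model, data = df, estimator = "MLR",
                  rotation = "geomin",
                  rotation.args = list(orthogonal = TRUE))
```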
Describe the analysis procedure to be employed.
Data Analysis and Reporting Phase

Point 5: Determine appropriate goodness-of-fit indices and indicators of measurement quality.
Decide upon which goodness-of-fit indices are most appropriate for the analyses (e.g., TLI/CFI, RMSEA, RMSEA confidence intervals). Report each index as well as the cut-off criteria to be considered. Multiple indicators should be mentioned (c.f. Table 2); CFI/TLI/RMSEA should always be employed as the primary criteria. (Report: Yes; section: Methods: statistical analysis)
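In lavaan these indices (including the RMSEA confidence interval) can be extracted in a single call; `fit` stands for any of the models estimated under Points 6 and 7.

```r
# Extract the pre-specified fit indices from a fitted model.
fitMeasures(fit, c("chisq", "df", "cfi", "tli",
                   "rmsea", "rmsea.ci.lower", "rmsea.ci.upper", "srmr"))
```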
Decide upon a priori indicators of measurement quality to be considered for the study (e.g., standardized λ > 0.40; item uniqueness > 0.1 but < 0.9; cross-loading tolerance levels; overall R²). The results should, however, be interpreted in the context of the study and what they might mean or indicate, without being rigid about minor deviations from the chosen guidelines. (Report: Yes; section: Methods: statistical analysis)
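One way to apply such criteria is to screen the standardized solution for loadings below the chosen threshold; the 0.40 cut-off below is simply the example value given above.

```r
# Flag items whose standardized loadings fall below the a priori threshold.
std      <- standardizedSolution(fit)
loadings <- std[std$op == "=~", c("lhs", "rhs", "est.std")]
loadings[abs(loadings$est.std) < 0.40, ]  # items needing closer inspection
```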
Point 6: Estimate and report the model fit indicators for competing CFA models.
Multiple measurement models need to be estimated and their model fit statistics reported. Only models with theoretical justification should be estimated. The following models could be estimated: (1) a unidimensional model, (2) correlated first-order factor models, (3) a second-order or 'hierarchical' factor model, and (4) bifactor models. (Report: Yes; section: Results: competing measurement models)
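As an illustration, the competing models might be specified as follows for a hypothetical eight-item, two-facet instrument; the item-to-factor mapping is an assumption to be replaced by your own theory-based structure.

```r
# Competing CFA measurement models (hypothetical 8-item, two-facet scale).
m_unidim <- 'g  =~ y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8'
m_corr   <- 'f1 =~ y1 + y2 + y3 + y4
             f2 =~ y5 + y6 + y7 + y8'
m_bifac  <- 'g  =~ y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8
             s1 =~ y1 + y2 + y3 + y4
             s2 =~ y5 + y6 + y7 + y8'

fit_unidim <- cfa(m_unidim, data = df, estimator = "MLR")
fit_corr   <- cfa(m_corr,   data = df, estimator = "MLR")
fit_bifac  <- cfa(m_bifac,  data = df, estimator = "MLR",
                  orthogonal = TRUE, std.lv = TRUE)  # uncorrelated G and S factors
```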
Report any modifications made to enhance model fit. (Report: Yes; section: Results: competing measurement models)
Tabulate all goodness-of-fit indices for all CFA models in a single table, indicating which models meet the pre-defined criteria (refer to Point 5), so that the comparison is reader-friendly. (Report: Yes; section: Results: competing measurement models)
Point 7: Estimate and report the model fit indicators for competing ESEM models.
Multiple ESEM measurement models should be estimated and their model fit statistics reported. In principle, the ESEM alternatives to the traditional CFA models estimated in Point 6 should be reported. The following ESEM models could be estimated: (1) correlated first-order factor ESEM models, (2) a second-order or 'hierarchical' factor ESEM model, (3) bifactor ESEM models, and (4) ESEM-within-CFA models for use in structural models with other factors. (Report: Yes; section: Results: competing measurement models)
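A correlated first-order ESEM counterpart to the CFA above can be written in lavaan's efa() block syntax (support for this syntax is partial and flagged as experimental in some versions); the model and item names continue the hypothetical example from Point 6.

```r
# Correlated two-factor ESEM: both factors belong to one efa() block and load
# on all items, with the rotation determining the final pattern.
m_esem <- '
  efa("block1")*f1 +
  efa("block1")*f2 =~ y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8
'
fit_esem <- sem(m_esem, data = df, estimator = "MLR", rotation = "geomin")
```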
Tabulate all goodness-of-fit indices for all ESEM models in the same table as the CFA models, indicating which models meet the pre-defined criteria mentioned in Point 5. (Report: Yes; section: Results: competing measurement models)
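The combined table can be assembled directly from the fitted objects; the model list below simply reuses the objects created in the sketches for Points 6 and 7.

```r
# One reader-friendly table of fit indices across CFA and ESEM models.
models <- list(CFA_unidimensional = fit_unidim,
               CFA_correlated     = fit_corr,
               CFA_bifactor       = fit_bifac,
               ESEM_correlated    = fit_esem)
idx <- c("chisq", "df", "cfi", "tli", "rmsea",
         "rmsea.ci.lower", "rmsea.ci.upper", "srmr")
round(t(sapply(models, fitMeasures, fit.measures = idx)), 3)
```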
Point 8: Compare CFA and ESEM models to determine the best fitting model for the data.
CFA and ESEM models need to be compared against one another using the goodness-of-fit and measurement quality criteria mentioned in Point 5. Models that show comparatively better model fit should be retained for further analysis. It is, however, important to note that model fit should not be the only consideration; the parameter estimates should also be closely inspected and considered. (Report: Yes; section: Results: competing measurement models)
To retain ESEM models for further analysis, the following conditions need to be met (Report: Yes for each condition; section: Results: competing measurement models):

(a) The ESEM model should ideally show better data-model fit than the corresponding CFA model (with the same number of similarly defined factors). If the factor correlations of the ESEM model are smaller than those of the CFA model, the ESEM model should be retained even if it fits only as well as the CFA model.

(b) For correlated factors models, the ESEM model should show reduced factor correlations.

(c) The ESEM model should show only small to medium cross-loadings. Should larger cross-loadings exist, a theoretical explanation should be presented for them; perhaps there are 'wording' effects or some other logic that researchers can use to explain them.

(d) The estimated latent factors within the ESEM model should be well defined.

(e) Should there be multiple medium to large cross-loadings in the ESEM model, this could indicate support for the presence of a larger global factor, and therefore a bifactor ESEM model could be explored.

(f) Additional factors to consider for bifactor models: this model should ideally show better data-model fit than the corresponding CFA and ESEM models, there should be a well-defined G-factor (on which all items load significantly), and reasonably well-defined S-factors (cross-loadings and non-significant loadings are permitted). For bifactor models, model fit should not be the only indicator informing a decision to retain; researchers should also inspect parameter estimates before making final decisions.
Point 9: Report factorial correlations.
For the final retained measurement model(s), the factor correlations should be reported. This cannot be done for bifactor models. Smaller factor correlations mean better discrimination between factors. The model with the smallest factor correlations is usually retained; however, decisions should be made in the context of the other considerations mentioned earlier (model fit, measurement quality, and parameter estimates). (Report: Yes; section: Results: factorial correlations)
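In lavaan the latent correlations can be read directly from the fitted objects, which also makes the CFA-versus-ESEM comparison described in Point 8 straightforward; the object names again refer to the earlier sketches.

```r
# Latent factor correlations for the retained model(s).
lavInspect(fit_corr, "cor.lv")  # CFA factor correlations
lavInspect(fit_esem, "cor.lv")  # ESEM factor correlations (typically smaller)
```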
Point 10: Report and compare item-level parameters and reliability.
Item-level parameters and indicators of measurement quality (standardized factor loadings, standard errors, item-level residual variances), as well as levels of reliability (composite reliability or omega), should be tabulated and reported. (Report: Yes; section: Results: item level parameters)
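A hedged sketch of how these quantities could be pulled from a lavaan model is shown below; compRelSEM() is the composite-reliability function in recent semTools releases (older releases expose reliability() instead).

```r
# Standardized loadings, SEs, and residual variances, plus composite reliability.
std <- standardizedSolution(fit_corr)
std[std$op %in% c("=~", "~~"), c("lhs", "op", "rhs", "est.std", "se")]
semTools::compRelSEM(fit_corr)  # composite reliability / omega per factor
```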
Note that if a bifactor CFA model is retained, editors or reviewers may request additional information: the explained common variance (ECV), the H index, factor determinacy, item-level ECV, the percentage of uncontaminated correlations (PUC), and average relative parameter bias could also be reported as additional indicators of reliability and measurement quality [for a tutorial, c.f. (84)]. (Report: Yes; section: Results: item level parameters)
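ECV and PUC can also be computed by hand from the standardized bifactor loadings; the sketch below follows the standard definitions and assumes the item-to-specific-factor assignment of the hypothetical m_bifac model (four items per specific factor).

```r
# Explained common variance (ECV) and percentage of uncontaminated correlations
# (PUC) for the bifactor solution, computed from standardized loadings.
std   <- standardizedSolution(fit_bifac)
lam   <- std[std$op == "=~", ]
lam_g <- lam$est.std[lam$lhs == "g"]   # general-factor loadings
lam_s <- lam$est.std[lam$lhs != "g"]   # specific-factor loadings
ECV   <- sum(lam_g^2) / (sum(lam_g^2) + sum(lam_s^2))

p           <- length(lam_g)  # number of items
group_sizes <- c(4, 4)        # items per specific factor, as in m_bifac
PUC <- (choose(p, 2) - sum(choose(group_sizes, 2))) / choose(p, 2)
c(ECV = ECV, PUC = PUC)
```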
Decide upon appropriate indicators of reliability for both the CFA and ESEM models, such as composite reliability [ρ > 0.80; (85)] or McDonald's omega [ω > 0.70; (86)].
Report the level of reliability for each (sub)scale of the instrument. (Report: Yes; section: Results: item level parameters)
Further or Additional Analysis

Point 11: For further or additional analyses, the best fitting ESEM model is respecified as a CFA model through the ESEM-within-CFA estimation procedure. This affords the opportunity to use the ESEM-within-CFA model in more complex estimation procedures such as invariance testing, multi-group analysis, latent growth models, structural models, and the like. (Report: Yes; section: Results)
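A hedged sketch of the ESEM-within-CFA respecification, continuing the two-factor example: every loading is freely estimated but starts at its rotated ESEM value, factor variances are standardized, and the referent item of each factor (y1 for f1, y5 for f2) has its loading on the other factor fixed at the ESEM estimate to identify the model. All numeric values shown are placeholders for the estimates taken from fit_esem.

```r
# ESEM-within-CFA: start all loadings at the rotated ESEM estimates; fix the
# referent items' cross-loadings (y1 on f2, y5 on f1) at their ESEM values;
# std.lv = TRUE fixes the factor variances to 1 for identification.
m_ewc <- '
  f1 =~ start(.72)*y1 + start(.68)*y2 + start(.65)*y3 + start(.61)*y4 +
        0.08*y5 + start(.10)*y6 + start(.05)*y7 + start(.12)*y8
  f2 =~ 0.06*y1 + start(.09)*y2 + start(.11)*y3 + start(.07)*y4 +
        start(.70)*y5 + start(.66)*y6 + start(.63)*y7 + start(.60)*y8
'
fit_ewc <- sem(m_ewc, data = df, estimator = "MLR", std.lv = TRUE)
```

The resulting fit_ewc object reproduces the ESEM measurement structure in CFA form and can then be carried into invariance tests, multi-group analyses, or larger structural models.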