Published in final edited form as: J Nurs Meas. 2022 Jun 20;31(2):259–272. doi: 10.1891/JNM-2021-0007

Construct Validity Analysis of the Genetics and Genomics in Nursing Practice Survey: Overcoming Challenges in Variable Response Instruments

Alexandra Plavskin 1, William E Samuels 2, Kathleen A Calzone 3
PMCID: PMC9763545  NIHMSID: NIHMS1826365  PMID: 35725026

Abstract

Background and Purpose:

We evaluated the construct validity of the Genetics and Genomics in Nursing Practice Survey by investigating its factorial structure while attempting to account for the varied response structures of the items.

Methods:

Exploratory factor analyses provided insights into item loadings. Confirmatory factor analyses and a version of common methods bias analyses evaluated construct validity while considering the instrument’s structural limitations. Structural equation models provided information regarding model fit.

Results:

The 7-factor model fit these data slightly better (AIC ≈ 169,405; RMSEA = .052) than did the 5-factor model (AIC ≈ 183,599; RMSEA = .063). Neither the CFI nor the TLI met commonly accepted thresholds for adequate model fit.

Conclusion:

The response format of the GGNPS created challenges in evaluating the instrument for construct validity. Nonetheless, these results support the theory-based construct validity of the GGNPS.

Keywords: genetics, genomics, instrument development, validity testing


The Genetics and Genomics in Nursing Practice Survey (GGNPS) is designed to evaluate registered nurses’ (RNs’) competency/knowledge, attitudes/receptivity, confidence, and decision/adoption regarding the integration of genetics and genomics into their practice. It also evaluates the effect of social systems in the healthcare setting on all of these domains. The GGNPS was developed from Jenkins and colleagues’ (2010) well-tested instrument assessing the adoption of genetics/genomics by family physicians and was revised to reflect nursing practice.

BACKGROUND AND CONCEPTUAL FRAMEWORK

Calzone and colleagues (2012) pilot tested the GGNPS for usability, then retested it with a larger sample of 239 RNs employed by the National Institutes of Health. They then revised it based on feedback from participants and on reviews by content experts; revisions included the removal of questions that content experts considered unclear and the addition, with permission, of two items about knowledge of the genetics of common diseases from the Genetic Variation Knowledge Assessment Index (GKAI; Bonham et al., 2014). The GGNPS has also been tested for reliability with a subsample of participants from the post-educational intervention group of the Method for Introducing a New Competency into Nursing Practice (MINC) study (Calzone et al., 2016). RNs in two hospitals completed the GGNPS twice following the conclusion of a year-long educational intervention. The reliability assessment was conducted at the end of the MINC study because, had it been conducted beforehand, participants’ lack of knowledge, difficulty answering questions, and potential guessing could have affected the reliability results. Calzone and colleagues reported that the mean agreement across all items in the GGNPS was “moderately” strong (mean Cohen’s κ ≈ .4), using Landis and Koch’s (1977) criteria.

Plavskin and colleagues (2019) evaluated the GGNPS for evidence of face-, content-, and construct-related aspects of validity. Face validity was evaluated by seven nurses chosen for their diverse clinical and educational backgrounds, representing perspectives from various parts of the nursing workforce. By intent, none of these reviewers had specific education or expertise in genetics/genomics. They evaluated the GGNPS for ease of understanding and applicability to nursing practice. Plavskin and colleagues reported that the threshold for acceptable support for face-related validity of the GGNPS was met: two reviewers reported that the instrument was very clear and easy to understand, and the remaining five reported that it was somewhat clear and easy to understand. None reported that it was unclear.

Content validity was evaluated by eight experts in nursing and genetics/genomics (Plavskin et al., 2019). Expertise was defined as both current employment and scholarship in genetics/genomics in nursing. A comprehensive content validity index (CVI) was calculated by taking the mean of the content validity ratios across all items. The overall CVI was 0.81, exceeding 0.74, the value Wilson et al. (2012) suggested as the required threshold for content validity among seven content experts.
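For reference, Lawshe’s content validity ratio for an item is CVR = (n_e − N/2)/(N/2), where N is the number of experts and n_e is the number rating the item as relevant; the CVI is then the mean CVR across items. A minimal sketch of this calculation in R, using hypothetical ratings rather than the study’s data:

```r
# Hypothetical per-item counts of experts (out of 8) rating an item
# relevant; illustrative values only, not the study's actual ratings.
n_experts  <- 8
n_relevant <- c(8, 7, 8, 6, 7)

# Lawshe's content validity ratio for each item
cvr <- (n_relevant - n_experts / 2) / (n_experts / 2)

# The comprehensive CVI is the mean CVR across all items
cvi <- mean(cvr)
cvi  # 0.80 here; values above Wilson et al.'s threshold support content validity
```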

Plavskin and colleagues also conducted a series of structural equation models (SEMs) to evaluate relationships between factors. Those analyses indicated that the model did not fit the data well, suggesting that further work was warranted.

Neither Plavskin and colleagues’ (2019) initial validity analysis nor the current study used the version of the GGNPS that includes revisions made after reliability testing, because reliability testing was conducted after baseline data were collected in the MINC study (Calzone et al., 2016). Reliability testing had to follow the MINC study’s educational intervention to prevent knowledge gaps from affecting the results: had baseline data been used, participants’ lack of knowledge about genetics/genomics, and any resulting guessing, would have adversely affected the reliability estimates.

To account for the revisions made following reliability testing, items in Part 2, Question 2 (coded here as P2_Q2) and in Part 4, Question 3 (P4_Q3) were removed. We did not attempt to model items whose response options had changed (e.g., from a 7-point Likert scale to one with fewer options), to avoid making incorrect assumptions during calculations and increasing measurement error.

In summary, the GGNPS has undergone multiple revisions. The version used for this study is an eight-part instrument that includes demographic information; it is 13 pages long and takes approximately 15–20 minutes to complete. It includes multiple choice, Likert scale, and dichotomous (yes/no) questions. We used the version of the GGNPS that was current at the time of the study and that provided the most robust, nationwide data set available.

Conceptual Framework

We based our investigation of nurses’ uses of genetics/genomics in their clinical practices on Rogers’ (2003) Diffusion of Innovations (DOI) model, which describes the process by which innovations are disseminated through social systems. According to the model, members of a social system initially learn about an innovation through early adopters within that system, form a favorable or unfavorable attitude about it through direct and indirect experiences, and then decide whether to adopt it themselves. We do not directly consider the further steps of implementation or confirmation in Rogers’ model since those two aspects relate to longer-term behaviors outside the range of what we can measure here.

Rogers (2003) contends that certain personality traits, including acceptance of uncertainty and risk, are more prevalent among early adopters. As a result, this study considered both attitudes/receptivity to innovations and the effect of confidence on the ultimate outcome of decision/adoption. While Rogers did not include confidence in his model, Calzone and colleagues (2012) defended adding confidence as a factor to the GGNPS because their research indicated that confidence plays a key role in the adoption of genetics/genomics among nurses. Finally, the domains of decision and adoption were considered as a single construct because, according to Rogers, decision is an internal thought process that is difficult to observe, and therefore difficult to measure separately from adoption. One’s decision is also closely related to the ultimate choice to adopt or reject an innovation.

We were also guided by Rogers’ (2003) conceptualization of three types of knowledge: awareness, how-to, and principles knowledge. Awareness knowledge is initial awareness of an innovation; it is an early form of knowledge that may encourage or discourage individuals or groups from gaining additional insights into the innovation. How-to knowledge is acquired as one learns how to use the innovation. Finally, principles knowledge embodies a more comprehensive understanding of how an innovation functions.

METHODS

Institutional review board approval for this study was obtained from the primary investigator’s institution (protocol number 2016-0067). Exemption from full review was granted because data were collected without identifying information, ensuring confidentiality.

The sample consisted of 7,798 RNs from 17 states representing all regions of the United States. All 23 institutions from which the respondents were sampled were Magnet® hospitals, selected because they are considered leaders in quality of nursing care. All hospitals (21 intervention, 2 control) were participating in a 1-year longitudinal genomic implementation study targeting nurses that included baseline and 12-month online surveys; the GGNPS was distributed to the nurses electronically using hospital-specific mechanisms (Calzone et al., 2014).

We investigated the latent structure of the GGNPS through a series of exploratory and confirmatory factor analyses (EFAs and CFAs) followed by additional latent variable analyses. The data were randomly split in half, with the EFAs run on one half and the other analyses (CFAs, SEMs, etc.) run on the other half. Splitting the data prevented us from training the models on one sample and then conducting validity tests on models derived from those same data.
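A minimal sketch of this split-half procedure in R (the data frame name ggnps_data and the seed are hypothetical):

```r
# A reproducible random half-split of the data; "ggnps_data" is a
# hypothetical name for a data frame with one row per respondent.
set.seed(2016)  # any fixed seed makes the split reproducible
n <- nrow(ggnps_data)
efa_rows <- sample.int(n, size = floor(n / 2))
efa_half <- ggnps_data[efa_rows, ]   # used for the exploratory factor analyses
cfa_half <- ggnps_data[-efa_rows, ]  # held out for the CFAs and SEMs
```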

We began with the EFAs to gain insights about the structure of the instrument. For the EFAs, we established, a priori, .30 as the minimum loading for considering whether to assign an item to a factor. Our final decisions were governed by theory-driven review of geomin rotations (an oblique rotation), given the robustness of that technique to moderately complicated item loadings such as these (Asparouhov & Muthén, 2009).

The response format of the items varied, and we believed that this affected the relationships between them. We therefore investigated ways of addressing the effects of these varying response formats through CFAs that emulated common methods bias analysis. Afterwards, we tested relationships between the latent factors (in light of the differing response formats) through structural equation models (SEMs). These SEMs tested not only the relationships between the proposed factors (based on Rogers’ DOI theory) but also attempted to account for the varying response formats of the items.

Given that Plavskin and colleagues (2019) did not find that the factor models they tested converged on stable sets of parameters, we began our investigation with EFAs to gain a clearer picture of the relationships between the instrument’s items. We conducted a 5-factor EFA based on work by Calzone and colleagues (2012, 2014). We also investigated an alternative model using a 7-factor EFA that paralleled Rogers’ conceptualization of different types of knowledge.
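As a sketch of how such EFAs might be run with the psych package listed among the tools below (geominQ is psych’s oblique geomin rotation and also requires the GPArotation package; efa_half refers to the exploratory half defined above):

```r
library(psych)  # fa(); the geominQ rotation also requires GPArotation

# Oblique geomin rotations for the two candidate factor structures.
efa_5 <- fa(efa_half, nfactors = 5, rotate = "geominQ", fm = "minres")
efa_7 <- fa(efa_half, nfactors = 7, rotate = "geominQ", fm = "minres")

# Display loadings, suppressing those below the a priori .30 threshold.
print(efa_5$loadings, cutoff = 0.30)
print(efa_7$loadings, cutoff = 0.30)
```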

Following the EFAs, we conducted CFAs that variously modeled the variances of the items. These investigations included analyses similar to those used when assessing common methods bias (Widaman, 1985; Williams et al., 1989), but instead of investigating the effect of the method of administration, we looked at items with similar response options (various numbers of Likert-scale levels, dichotomous, etc.). These analyses therefore resembled common methods bias analyses wherein the “methods” were groups of items with similar response formats.
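In lavaan, such constraints can be imposed by giving residual variances within a question group a shared label. The factor and item names below are hypothetical stand-ins for GGNPS items, so this is a sketch of the pattern rather than the study’s actual model specification:

```r
library(lavaan)

model_grouped <- '
  # Two illustrative factors from the 5-factor Rogerian structure
  confidence =~ p3_q1a + p3_q1b + p3_q1c
  attitudes  =~ p2_q3a + p2_q3b + p2_q3c

  # Shared labels (v1, v2) force equal residual variances within each
  # question group, regardless of which factor the items load on.
  p3_q1a ~~ v1*p3_q1a
  p3_q1b ~~ v1*p3_q1b
  p3_q1c ~~ v1*p3_q1c
  p2_q3a ~~ v2*p2_q3a
  p2_q3b ~~ v2*p2_q3b
  p2_q3c ~~ v2*p2_q3c
'
fit_grouped <- cfa(model_grouped, data = cfa_half)
```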

Lastly, we conducted a series of SEMs to evaluate the relationships between factors. Data were analyzed using R version 3.6.3 (R Core Team, 2020). R packages used for various aspects of these analyses included car (version 3.0.8; Fox & Weisberg, 2019), DiagrammeR (version 1.0.6.1; Iannone, 2020), EFAutilities (version 2.0.0; Zhang et al., 2019), knitr (version 1.28; Xie, 2020), lavaan (version 0.6-6; Rosseel, 2012), psych (version 1.9.12.31; Revelle, 2019), and semPlot (version 1.1.2; Epskamp, 2019).

RESULTS

Exploratory Factor Analyses

We first conducted a 5-factor EFA on a randomly chosen half of the data, guided by Rogers’ (2003) DOI model. The 5 factors included confidence, attitudes/receptivity, knowledge/competency, social systems, and decision/adoption (Figure 1). Supplement 1 presents the loadings of the items on the various factors in this 5-factor model, along with the provisional names of each factor as they appear to emulate Rogers’ domains. Upon analysis of the loadings, we noted that some items, such as those on the confidence factor, loaded as predicted, while others loaded unexpectedly. A number of items that Calzone and colleagues (2012) originally labeled as attitudes/receptivity split into two factors. Attitude/receptivity and social system items loaded on the same factor. By contrast, items originally predicted to form an additional knowledge/competency factor loaded onto three separate factors.

Figure 1. 5-Factor Rogerian modeling.

Rogers (2003) noted three different types of knowledge: awareness knowledge, how-to knowledge, and principles knowledge. Analysis of the EFA loadings and Rogers’ work led us to consider a 7-factor model that included 3 distinct knowledge/competency factors (Figure 2). These additional EFAs were conducted on the same split half of the data as the 5-factor EFAs. In the 7-factor EFA, confidence items loaded as predicted by Calzone and colleagues (2012); these loadings are presented in the table in Supplement 2. Knowledge items again separated into three different factors, with some knowledge/competency items also loading on the same factor as decision/adoption items. In the 7-factor model, social system items split between two factors, as did attitude/receptivity items (Supplement 2).

Figure 2. 7-Factor Rogerian modeling.

Confirmatory Factor Analyses & Common “Methods” Bias Analyses

The factor structures provided by both the 5- and 7-factor solutions suggest that the GGNPS items can be meaningfully arranged around Rogers’ innovation framework. Plavskin and colleagues (2019) argued that the instrument’s latent constructs (the construct-related evidence of its validity) may be partly occluded by the different formats of the items. We formally investigated this possibility through analyses inspired by common methods bias analyses (Podsakoff et al., 2003). These and all subsequent analyses reported here were conducted on the split half of the data that was not used for the EFAs described above. We were interested in the possibility that “(a) there is systematic error variance in a set of measurements [here, response formats], and (b) that systematic error variance may be shared with [items of similar response formats] on another variable, (c) resulting in an inflated estimate of the relationship” for items of similar response formats (Brannick et al., 2010, p. 410).

We thus compared a null model assuming no underlying factor structure with a series of models that assigned items to factors based on either the 5- or 7-factor Rogerian models described in the EFA section above and detailed in Supplements 1 and 2. To explore the effects of similar response formats, we then constrained the item-level variances to varying degrees. Together, these five models investigated the effect of response formats on the clarity of the two hypothesized factor structures through increasingly strict tests of how specific any such effect is to the response formats: whether it is primarily a matter of dichotomous versus Likert-scaled items, or whether the different Likert scales strongly affect participants’ responses. These nested models can be compared with chi-square difference tests, as sketched below.
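In lavaan, nested models like these can be compared with anova(), which performs the likelihood ratio (χ² difference) test reported in Tables 1 and 2. A minimal sketch using hypothetical model objects (model_free is an unconstrained counterpart to the model_grouped syntax sketched earlier):

```r
# Fit the unconstrained and question-group-constrained models on the
# confirmatory half of the data (the model syntax objects are hypothetical).
fit_free    <- cfa(model_free,    data = cfa_half)
fit_grouped <- cfa(model_grouped, data = cfa_half)

# anova() on nested lavaan fits returns the chi-square difference,
# the change in degrees of freedom, and the associated p-value.
anova(fit_free, fit_grouped)
```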

Table 1 presents the χ² statistics for the fits of these models to the data. The Δχ² column presents the change in model fit after applying the given constraint; the Δdf column provides the degrees of freedom for testing the significance of that change (Δχ²), and the p-value is the result of the significance test (i.e., the probability of finding such a difference by chance alone). The 5-factor model that does not constrain item variances fits the data better than the null model: a low bar to overcome, but a necessary one to justify further analyses.

TABLE 1.

Model Fits for 5-Factor CFAs Testing Effect of Similar Item Response Patterns

Model                                                               χ²        Δχ²         df     Δdf   p
Null model (i.e., no estimated parameters)                          46576.0               2016   0
5-Factor without constrained item variances                         18891.3   26818.9^a   1883   70    <.0001
5-Factor with item variances constrained within question groups     14017.5   4873.8^b    1899   16    <.0001
5-Factor with item variances constrained within each Likert format  19242.3   −5224.8^b   2425   526   <.0001
5-Factor with item variances constrained across all Likert items    21156.4   −1914.1^b   2422   3     ~1

^a The χ² difference is computed against the null model (row 1).

^b The χ² difference is computed against the 5-factor model without item variance constraints (row 2).

The Δχ² and Δdf values in rows three through five of Table 1 present the changes in model fit relative to the model in row 2 (the 5-factor model without constrained item variances). The model in the third row, in which items from the same question group were constrained to equal variances, fit the data significantly better than did the model (in row 2) that did not constrain those items’ variances. This suggests a clearer fit of the factor structure proposed in the 5-factor model after considering some of the covariances of items that are responded to similarly.

However, further constraining the item variances based on response format did not continue to improve the fit. The model in row 4 further constrained the item variances so that all, e.g., 4-level Likert responses shared the same variance despite being parts of different question groups. This model’s χ² value was indeed larger than the χ² value for the unconstrained model (in row 2); the difference between these χ² values was significant (p << .001), as would be any χ² difference greater than approximately 580 (the .05 critical value when Δdf = 526). The model in the final row constrains all items with Likert-style responses to equal variance, regardless of how many levels those Likert-scale responses had. This model tests whether there is something different about Likert-style responses per se. It fit more poorly, however, than any of the other models (except the null), including the unconstrained model in row 2.

The pattern of results for item constraints was the same for the 7-factor model as it was for the 5-factor model (Table 2). Constraining the variances of items within the same question group to equality—regardless of whether they loaded on the same factor—improved the model fit. Further constraining all Likert-style items did not improve the fit. Given these results, we constrained the variances of items within a given question group to equality in our subsequent analyses of the latent structure of the GGNPS. Our analyses therefore represent our best attempt to investigate a latent structure that lies in part “behind” the influence of the different response formats of the instrument’s items.

TABLE 2.

Model Fits for 7-Factor CFAs Testing Effect of Similar Item Response Patterns

Model                                                               χ²        Δχ²         df     Δdf   p
Null model (i.e., no estimated parameters)                          46434.2               1953   0
7-Factor without constrained item variances                         12197.1   33716.8^a   1816   75    <.0001
7-Factor with item variances constrained within question groups     9535.7    2661.4^b    1831   15    <.0001
7-Factor with item variances constrained within each Likert format  13412.0   −3876.3^b   2349   518   <.0001
7-Factor with item variances constrained across all Likert items    14500.1   −1088.1^b   2346   3     ~1

^a The χ² difference is computed against the null model (row 1).

^b The χ² difference is computed against the 7-factor model without item variance constraints (row 2).

Structural Equation Models

Both the 5- and 7-factor models had similar fit parameters, suggesting a consistent latent structure and increasing the likelihood that the models accurately describe the data. The 7-factor model showed a better fit, as indicated by its lower value on the Akaike Information Criterion (AIC; see Table 3). The AIC evaluates model fit but includes a penalty for additional parameters to prevent overfitting (Posada & Buckley, 2004). The 7-factor model also showed a better fit via the RMSEA, meeting the common RMSEA criterion of <.06 for a good model fit. Neither the 5- nor the 7-factor model met the commonly suggested TLI > .90 threshold for an acceptable fit (Xia & Yang, 2019; Bentler & Bonett, 1980), nor did either model meet the common CFI > .95 fit threshold (Xia & Yang, 2019; Table 3).

TABLE 3.

Model Fit Indices for the 5- and 7-Factor Models of the GGNPS

Fit index   5-Factor model   7-Factor model
RMSEA       0.063            0.052
AIC         183599.022       169405.220
CFI         0.728            0.830
TLI         0.711            0.819
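The indices in Table 3 can be pulled from fitted lavaan models with fitMeasures(); a minimal sketch, where fit_5 and fit_7 are hypothetical names for the final 5- and 7-factor models:

```r
# Extract the four fit indices reported in Table 3 for each model.
fitMeasures(fit_5, c("rmsea", "aic", "cfi", "tli"))
fitMeasures(fit_7, c("rmsea", "aic", "cfi", "tli"))
```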

DISCUSSION

Understanding and measuring nurses’ knowledge, attitudes, and use of genetics/genomics is a key research outcome because genetics and genomics are components of all clinical roles and specialties of nursing practice. The GGNPS is therefore a needed tool as clinical agencies seek to evaluate the increasingly important clinical competency of genetics/genomics. The response format of the GGNPS, however, creates several challenges when evaluating the instrument for construct validity.

We sought to expand on previous work and further evaluate construct-related validity of this instrument using statistical methods to address challenges presented by the response format. The common methods bias analyses demonstrated that model fit improved the most when variance was constrained among items in the same question group—even when these items loaded on different theoretically-relevant factors.

Although the 5- and 7-factor models performed similarly, the 7-factor model demonstrated a better fit. Items within the 7-factor EFA model also loaded onto factors in ways we believe are more representative of Rogers’ (2003) model: the knowledge items in particular loaded in patterns similar to Rogers’ descriptions of three types of knowledge, and the confidence items all loaded on one factor in the 7-factor model, as Rogers’ model would predict. However, not all of the attitude/receptivity, decision/adoption, and social system items loaded on their hypothesized domains in either the 5- or 7-factor models. We interpret this to mean that the variables were more interrelated than originally predicted.

Our conclusion gains further support from the models’ fit indices: the GGNPS meets commonly accepted thresholds for construct validity, as evidenced by the RMSEA and the differences in model AICs. While the threshold for model fit was not met using the CFI and TLI, these traditional thresholds are more difficult to attain due to the structure and format of the instrument and to the decreased variance among items. In fact, Hu and Bentler, statisticians whose 1998 article has been very influential on the use of the CFI and TLI, cautioned against using cutoff values without further consideration of the data and concomitant analyses; in addition, their work focused on continuous data (Xia & Yang, 2019). Because the GGNPS contains dichotomous, multiple choice, and Likert scale items, we based our assessment of construct validity on multiple factors, including the consistency of modeling results between the 5- and 7-factor models, the RMSEA value, and the parallels between the data models and Rogers’ DOI model.

This study had several limitations, including the use of a version of the GGNPS that did not include revisions made following reliability testing. However, we plan to use the revised version of the GGNPS to collect additional data and to further investigate construct-related validity. Additionally, items with consistently poor loadings will be considered for revision. Because this study is a secondary analysis of existing data, we were also limited by our inability to modify the response format of items and directly investigate whether doing so would affect factor loadings and SEM results.

Relevance to Nursing Practice, Education or Research

Survey instruments used in nursing and the health sciences often include categorical, ordinal, and dichotomous items to capture complex information, which often requires a varied response format. By contrast, many statistical tools used to evaluate instruments are most effective with continuous numerical data. This article may serve as a resource for nurse researchers seeking insight into the construct validity of instruments with varied response formats.

Supplementary Material


Acknowledgments.

This research was supported in part by the Intramural Research Program of the NIH, Center for Cancer Research, Genetics Branch.

Funding.

The author(s) received no specific grant or financial support for the research, authorship, and/or publication of this article.

Footnotes

Disclosure. The authors have no relevant financial interest or affiliations with any commercial interests related to the subjects discussed within this article.

Contributor Information

Alexandra Plavskin, Hunter College, 425 East 25th St, New York, NY 10010.

William E. Samuels, Hunter College, 425 East 25th St, New York, NY 10010.

Kathleen A. Calzone, Center for Cancer Research, Genetics Branch, National Cancer Institute, 37 Convent Drive, Building 37, RM 6002C, Bethesda, MD 20892.

REFERENCES

1. Asparouhov T, & Muthén B (2009). Exploratory structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 16(3), 397–438.
2. Bentler PM, & Bonett DG (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88(3), 588–606.
3. Bonham VL, Sellers SL, & Woolford S (2014). Physicians’ knowledge, beliefs, and use of race and human genetic variation: New measures and insights. BMC Health Services Research, 14(1), 456.
4. Brannick MT, Chan D, Conway JM, Lance CE, & Spector PE (2010). What is method variance and how can we cope with it? A panel discussion. Organizational Research Methods, 13(3), 407–420.
5. Calzone KA, Culp S, Jenkins J, Caskey S, Edwards PB, Fuchs MA, & Badzek L (2016). Test–retest reliability of the genetics and genomics in nursing practice survey instrument. Journal of Nursing Measurement, 24(1), 54–68. https://doi.org/10.1891/1061-3749.24.1.54
6. Calzone KA, Jenkins J, Culp S, Caskey S, & Badzek L (2014). Introducing a new competency into nursing practice. Journal of Nursing Regulation, 5(1), 40–47.
7. Calzone KA, Jenkins J, Yates J, Cusack G, Wallen GR, Liewehr DJ, & McBride C (2012). Survey of nursing integration of genomics into nursing practice. Journal of Nursing Scholarship, 44(4), 428–436.
8. Epskamp S (2019). semPlot: Path diagrams and visual analysis of various SEM packages’ output. R package version 1.1.2. https://CRAN.R-project.org/package=semPlot
9. Fox J, & Weisberg S (2019). An R companion to applied regression (3rd ed.). Sage. https://socialsciences.mcmaster.ca/jfox/Books/Companion/
10. Hu L, & Bentler PM (1998). Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychological Methods, 3(4), 424–453.
11. Iannone R (2020). DiagrammeR: Graph/network visualization. R package version 1.0.6.1. https://CRAN.R-project.org/package=DiagrammeR
12. Jenkins J, Woolford S, Stevens N, Kahn N, & McBride CM (2010). Family physicians’ likely adoption of genomic-related innovations. Case Studies in Business, Industry and Government Statistics, 3(2), 70–78.
13. Landis JR, & Koch GG (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.
14. Plavskin A, Samuels WE, & Calzone KA (2019). Validity evaluation of the genetics and genomics in nursing practice survey. Nursing Open, 6(4), 1404–1413.
15. Posada D, & Buckley TR (2004). Model selection and model averaging in phylogenetics: Advantages of Akaike information criterion and Bayesian approaches over likelihood ratio tests. Systematic Biology, 53(5), 793–808.
16. Podsakoff PM, MacKenzie SB, Lee J-Y, & Podsakoff NP (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903.
17. R Core Team. (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/
18. Revelle W (2019). psych: Procedures for personality and psychological research. Northwestern University. R package version 1.9.12.31. https://CRAN.R-project.org/package=psych
19. Rogers EM (2003). Diffusion of innovations (5th ed.). Free Press.
20. Rosseel Y (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. http://www.jstatsoft.org/v48/i02/
21. Widaman KF (1985). Hierarchically nested covariance structure models for multitrait-multimethod data. Applied Psychological Measurement, 9(1), 1–26. https://doi.org/10.1177/014662168500900101
22. Williams LJ, Cote JA, & Buckley MR (1989). Lack of method variance in self-reported affect and perceptions at work: Reality or artifact? Journal of Applied Psychology, 74(3), 462–468.
23. Wilson FR, Pan W, & Schumsky DA (2012). Recalculation of the critical values for Lawshe’s content validity ratio. Measurement and Evaluation in Counseling and Development, 45(3), 197–210.
24. Xia Y, & Yang Y (2019). RMSEA, CFI, and TLI in structural equation modeling with ordered categorical data: The story they tell depends on the estimation methods. Behavior Research Methods, 51, 409–428.
25. Xie Y (2020). knitr: A general-purpose package for dynamic report generation in R. R package version 1.28.
26. Zhang G, Jiang G, Hattori M, & Trichtinger L (2019). EFAutilities: Utility functions for exploratory factor analysis. R package version 2.0.0. https://CRAN.R-project.org/package=EFAutilities
