Educational and Psychological Measurement, 2021 Jan 5;81(5):872–903. doi: 10.1177/0013164420982057

On the Detection of the Correct Number of Factors in Two-Facet Models by Means of Parallel Analysis

André Beauducel, Norbert Hilger
PMCID: PMC8377342  PMID: 34565810

Abstract

Methods for optimal factor rotation of two-facet loading matrices have recently been proposed. However, the problem of the correct number of factors to retain for rotation of two-facet loading matrices has rarely been addressed in the context of exploratory factor analysis. Most previous studies were based on the observation that two-facet loading matrices may be rank deficient when the salient loadings of each factor have the same sign. It was shown here that full-rank two-facet loading matrices are, in principle, possible, when some factors have positive and negative salient loadings. Accordingly, the current simulation study on the number of factors to extract for two-facet models was based on rank-deficient and full-rank two-facet population models. The number of factors to extract was estimated from traditional parallel analysis based on the mean of the unreduced eigenvalues as well as from nine other rather traditional versions of parallel analysis (based on the 95th percentile of eigenvalues, based on reduced eigenvalues, based on eigenvalue differences). Parallel analysis based on the mean eigenvalues of the correlation matrix with the squared multiple correlations of each variable with the remaining variables inserted in the main diagonal had the highest detection rates for most of the two-facet factor models. Recommendations for the identification of the correct number of factors are based on the simulation results, on the results of an empirical example data set, and on the conditions for approximately rank-deficient and full-rank two-facet models.

Keywords: facet model, exploratory factor analysis, parallel analysis


Facet models have been discussed since L. Guttman (1954), who demonstrated their utility for intelligence research (L. Guttman & Levy, 1991; Schlesinger & Guttman, 1969). Other models of intelligence structure were also facet models (Guilford, 1967, 1975, 1988; Jäger, 1982, 1984; Jäger et al., 1997). Thus, facet models have been shown to be useful for intelligence research because they allow for a flexible representation of overlapping variances (Sternberg, 1981). Even though facet models were first developed for models of intelligence structure, there have been attempts to use facet models for research on personality (Beauducel et al., 2005; Leue & Beauducel, in press) and in organizational psychology (Cronshaw & Jethmalani, 2005; Wheeler, 1993). Moreover, the similarity of two-facet models with multitrait–multimethod (MTMM) models (Campbell & Fiske, 1959) has also been noted (Süß & Beauducel, 2005). To sum up, facet models have been proposed in several fields of psychology (R. Guttman & Greenbaum, 1998).

According to L. Guttman (1954), a facet is a set that is a component of a Cartesian product (Shye, 1998). For example, the Cartesian product of facet A with the elements (1, 2, 3) and facet B with the elements (a, b) is A × B = C with the elements (1a, 2a, 3a, 1b, 2b, 3b). When applied to factor analysis, the elements of the facets are factors, so that a two-facet loading pattern comprises two sets of factors with each variable loading on one factor of each facet. An example of a two-facet model is Jäger’s (1982, 1984) Berlin intelligence structure (BIS) model comprising an operation facet with four factors (Speed, Memory, Creativity, Reasoning), a content facet with three factors (Figural, Verbal, and Numerical Intelligence), and a general intelligence “g” factor (see Figure 1). The Cartesian product of the operation and content facet yields the “cells” (or “structupels”) of the BIS. Each performance task can be classified into one “cell” of the BIS. Accordingly, each task is expected to load on one operation factor and on one content factor. As a general intelligence factor is assumed, positive intercorrelations of the primary factors are expected. The example demonstrates that the loading patterns resulting from facet models are more complex than conventional simple structure loading patterns, where each variable has a substantial loading on only one factor. Probably, exploratory factor analysis (EFA) with subsequent factor rotation has rarely been used for the investigation of facet models because of the complexity of their loading patterns. Facet models have more often been investigated by means of confirmatory factor analyses or parcel-based EFA (Bucik & Neubauer, 1996; Jäger, 1984; Süß & Beauducel, 2015). In order to close this gap, Beauducel and Kersting (2020) investigated by means of a simulation study how well two-facet models can be identified by means of EFA with subsequent factor rotation.
They investigated a large number of rotation methods and found that two-facet factor loading patterns can successfully be identified by means of EFA with subsequent Tandem II rotation (Comrey, 1967) for orthogonal loading patterns, and by means of a Promax-based version of Tandem II rotation or Geomin rotation (Yates, 1987) with an epsilon of .01 for oblique loading patterns.

Figure 1. Example for a two-facet model: The Berlin intelligence structure model.

Although Beauducel and Kersting (2020) showed that two-facet models can be found by means of EFA with appropriate factor rotation, they rotated the correct number of factors and did not investigate methods for the identification of the correct number of factors in two-facet models. The aim of the present study was therefore to investigate methods for the identification of the correct number of factors in two-facet models. However, the identification of the correct number of factors is a more complex issue for facet models than for conventional simple structure models because facet models may —or may not—be rank deficient. Deficient rank means that the number of independent columns of the unrotated factor loading matrix is q−1 although the corresponding two-facet loading matrix has q interpretable columns (factors). Beauducel and Kersting (2020) referred to studies showing the deficient rank of the loading matrices from MTMM models or from facet models (Eid, 2000; Eid et al., 2003; Grayson & Marsh, 1994). In order to set up an unrotated loading matrix for subsequent rotation toward a rank-deficient two-facet model, they proposed to extract two loading matrices by means of EFA: One with q−1 factors and one with q factors. Then, they performed orthogonal Procrustes/target rotation (Schönemann, 1966) of the unrotated loading matrix with q−1 factors toward the unrotated loading matrix with q factors. This yields a rank-deficient unrotated loading matrix that can be rotated in order to find a rank-deficient two-facet loading matrix with two salient loadings of each variable (Beauducel & Kersting, 2020). However, adding a further factor (by means of orthogonal target rotation, as described above) will only result in q factors when methods for the number of factors to extract indicate that q−1 factors should be retained for rotation. It will be shown below that two-facet models are rank deficient only when all salient loadings have the same magnitude and sign. 
When the sign and magnitude of the salient loadings are unknown (as is typical for EFA), the rank of two-facet loading matrices is also unknown. This is an additional challenge for the identification of the number of factors to extract. In a completely exploratory setting, it is unknown whether a rank-deficient two-facet loading matrix, a full-rank two-facet loading matrix, a full-rank conventional simple structure loading matrix, or any other loading matrix might be expected. However, it follows from the setup of the unrotated loading matrix that is needed to find a rank-deficient two-facet loading matrix that conventional EFA with the extraction and rotation of full-rank loading matrices is biased against the identification of rank-deficient two-facet loading matrices. Accordingly, an exact identification of the number of factors to extract is needed as a basis for the aforementioned procedure that allows one to explore rank-deficient two-facet loading matrices by means of EFA with subsequent factor rotation.

A method that has been shown to be useful for the identification of the number of factors to extract is Horn’s (1965) parallel analysis (PA). PA is based on the comparison of the eigenvalues of an empirical correlation matrix with the eigenvalues of a correlation matrix based on random numbers with the same number of observed variables and the same number of cases as in the empirical data set. Traditional PA and several variations of PA have been investigated in several different contexts (Achim, 2017; Auerswald & Moshagen, 2019; Beauducel, 2001; Buja & Eyuboglu, 1992; Crawford et al., 2010; Cho et al., 2009; Green et al., 2012; Green et al., 2016; Lim & Jahng, 2019; Zwick & Velicer, 1986) and PA can still be regarded as one of the most promising methods when compared with other methods. As PA is one of the best methods for the identification of the number of factors to extract, the focus of the present study was on relevant PA versions. A further reason for the focus on PA was that it is based on eigenvalues. Even when the relationship between eigenvalues and the rank of matrices is complex, eigenvectors with distinct eigenvalues are linearly independent (Magnus & Neudecker, 2007). This implies that the rank of a loading matrix is at least as large as the number of distinct nonzero eigenvalues. Therefore, the inspection of the eigenvalues, as it is performed with PA, is helpful in order to find the (approximate) rank of a loading matrix (see next section). Finally, an advantage of the focus on PA is that a large number of studies on PA is available. For example, Cho et al. (2009) have shown that the use of Pearson correlations for the random data can be justified even when the empirical data are categorical. However, only continuous data will be considered here because the identification of two-facet models is already quite a challenge due to the possible rank deficiency of the loading matrices.
As several PA versions have been proposed, Table 1 contains an overview of the PA versions investigated in the present study. Some more detailed descriptions of relevant PA versions are given in the following.

Table 1.

Versions of Parallel Analysis Considered in the Present Study.

Label Description Authors
mPCA Based on the mean eigenvalues of the unreduced correlation matrix Horn (1965) a
95thPCA Based on the 95th percentile of the unreduced correlation matrix Glorfeld (1995)
m1PCA Based on mean eigenvalues of the unreduced correlation matrix when the first eigenvalue is eliminated Turner (1998)
95th1PCA Based on the 95th percentile of eigenvalues of the unreduced correlation matrix when the first eigenvalue is eliminated Glorfeld (1995), Turner (1998)
mFA Based on the mean eigenvalues of the correlation matrix with squared multiple correlations in the main diagonal Humphreys & Ilgen (1969)
95thFA Based on the 95th percentile of eigenvalues of the correlation matrix with squared multiple correlations in the main diagonal Humphreys & Ilgen (1969); Glorfeld (1995)
m1FA Based on the mean eigenvalues of the correlation matrix with squared multiple correlations in the main diagonal when the first eigenvalue is eliminated Humphreys & Ilgen (1969); Turner (1998)
95th1FA Based on the 95th percentile of the correlation matrix with squared multiple correlations in the main diagonal when the first eigenvalue is eliminated Humphreys & Ilgen (1969); Glorfeld (1995); Turner (1998)
slope-PCA Based on the mean of the slopes of three subsequent eigenvalues of the unreduced correlation matrix Gorsuch & Nelson (1981)
slope-FA Based on the mean of the slopes of three subsequent eigenvalues of the correlation matrix with squared multiple correlations in the main diagonal Humphreys & Ilgen (1969); Gorsuch & Nelson (1981)
a As Horn (1965) initially proposed parallel analysis, this author might be listed in all lines.

It has been proposed to insert the squared multiple correlation of a variable with all other variables as a communality estimate for PA (Humphreys & Ilgen, 1969). Buja and Eyuboglu (1992) found problems with this PA version, whereas Crawford et al. (2010) found that this PA version does not perform consistently worse than PA based on the eigenvalues of the unreduced correlation matrix. In the following, PA based on the squared multiple correlation inserted into the main diagonal of the correlation matrix as a communality estimate will be referred to as PA based on factor analysis (FA), whereas PA based on the unreduced correlation matrix will be referred to as PA based on principal components analysis (PCA; see Table 1). Another aspect that remains to be investigated is whether PA with facet data should be based on the mean or on the 95th percentile (Glorfeld, 1995) of the random eigenvalues. When mean-based PA is performed, empirical factors are extracted when their eigenvalue is greater than the mean of the simulated random eigenvalues. When 95th percentile PA is performed, only empirical factors with an eigenvalue greater than the 95th percentile of the simulated eigenvalues are retained. Crawford et al. (2010) found that the optimal PA method depends on the properties of the population data. They report specific results for each PA method, but they did not investigate faceted models. It can therefore not be known a priori whether PA based on PCA or PA based on FA, combined with mean eigenvalues or with the 95th percentile of the eigenvalues, is more appropriate for identifying the relevant number of factors in data based on two-facet models. It should also be noted that some problems with PA may occur when factors or components are correlated, which typically implies that the first eigenvalue is very large compared with the other eigenvalues (Beauducel, 2001; Turner, 1998).
The elimination of the first eigenvalue from the empirical eigenvalues has been proposed as a method to overcome these problems (Turner, 1998) and will also be considered here (see Table 1).
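As a concrete sketch, the mean-based and 95th-percentile decision rules for the PCA- and FA-based eigenvalues can be written in a few lines of NumPy. This is only an illustration under the normal-data assumption, not the SPSS implementation used in the study, and the function names (`smc_reduced`, `noise_eigenvalues`, `n_factors`) are our own:

```python
import numpy as np

def smc_reduced(R):
    """Insert squared multiple correlations, SMC_i = 1 - 1/(R^-1)_ii,
    into the main diagonal of R (Humphreys & Ilgen, 1969)."""
    Rr = R.copy()
    np.fill_diagonal(Rr, 1.0 - 1.0 / np.diag(np.linalg.inv(R)))
    return Rr

def noise_eigenvalues(n, p, n_runs=100, reduced=False, seed=0):
    """Eigenvalues of correlation matrices of pure-noise data (n cases, p variables)."""
    rng = np.random.default_rng(seed)
    eigs = np.empty((n_runs, p))
    for run in range(n_runs):
        R = np.corrcoef(rng.standard_normal((n, p)), rowvar=False)
        if reduced:                       # FA variant: SMCs in the main diagonal
            R = smc_reduced(R)
        eigs[run] = np.sort(np.linalg.eigvalsh(R))[::-1]
    return eigs

def n_factors(empirical_eigs, noise_eigs, criterion="mean"):
    """Retain factors while the empirical eigenvalue exceeds the mean
    (mPCA/mFA) or the 95th percentile (95thPCA/95thFA) of the noise eigenvalues."""
    if criterion == "mean":
        threshold = noise_eigs.mean(axis=0)
    else:
        threshold = np.percentile(noise_eigs, 95, axis=0)
    keep = empirical_eigs > threshold
    return int(np.argmax(~keep)) if not keep.all() else len(keep)
```

For the m1PCA/m1FA and 95th1PCA/95th1FA variants described above, the comparison would simply start at the second empirical eigenvalue.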

Moreover, a method for the identification of the relevant number of factors can also be based on the relative size of the eigenvalues within one empirical data set. Zoski and Jurs (1993, 1996), Gorsuch and Nelson (1981), and Gorsuch (1983) proposed more objective versions of Cattell’s (1966) scree test. According to Gorsuch (1983), the series of mean slopes across three subsequent eigenvalues of the empirical data set is computed with the conventional objective scree test. Although this method could be promising, it was decided to remain close to the framework of PA: the mean slopes of successive empirical eigenvalues were compared with the mean slopes of the corresponding noise eigenvalues, and components or factors were extracted when the mean slope of the empirical eigenvalues was larger than the mean slope of the corresponding noise eigenvalues. Let γ_i be the ith empirical eigenvalue; the slope of the empirical eigenvalues is

Δγ_i = (γ_{i−1} − γ_{i+1}) / 2, for i = 2, ..., p−1. (1)

Let θ_i be the ith noise eigenvalue; the noise-eigenvalue slope is computed as

Δθ_i = (θ_{i−1} − θ_{i+1}) / 2, for i = 2, ..., p−1. (2)

Factors or components are retained for rotation when Δγ_i is greater than the mean of Δθ_i from 100 runs with noise eigenvalues. Since the empirical slope will only be as flat as the mean noise slope when there are no substantial empirical eigenvalues, the slope at factor q + 1 will still be larger than the mean noise slope when there are q substantial eigenvalues. The slope-based PA can only be performed for the eigenvalues 2 to p−1, which is a minor limitation in the present context because two-facet models will always comprise more than one factor or component. Moreover, the slope-based PA as well as the other PA versions can be computed for the unreduced eigenvalues from PCA as well as for the reduced eigenvalues from FA. As in Crawford et al. (2010), the reduced FA eigenvalues were based on communalities estimated from the squared multiple correlation between a variable and all remaining variables in a matrix (Humphreys & Ilgen, 1969). As the eigenvalues of the reduced correlation matrix can be negative, the additional condition was introduced that the reduced eigenvalue slope was only based on positive empirical eigenvalues.
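Equations (1) and (2) amount to a centered difference of the eigenvalue series. A minimal sketch (the function name `eigen_slopes` is ours; the eigenvalues below are illustrative, not taken from the study):

```python
import numpy as np

def eigen_slopes(eigs):
    """Slopes of Equations (1)/(2): (e_{i-1} - e_{i+1}) / 2 for i = 2, ..., p-1."""
    e = np.asarray(eigs, dtype=float)
    return (e[:-2] - e[2:]) / 2.0

# An eigenvalue series with two substantial components (illustrative values)
gamma = np.array([3.2, 2.8, 0.9, 0.4, 0.3, 0.2, 0.1, 0.1])
print(eigen_slopes(gamma))   # slopes at positions i = 2, ..., 7
# A factor is retained when its slope exceeds the mean slope of the
# corresponding noise eigenvalues across the noise runs (not shown here).
```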

When the PA versions described above are combined, this leads to the following 10 PA versions (see Table 1): Mean PCA eigenvalue (mPCA), 95th percentile of PCA eigenvalues (95thPCA), mean PCA eigenvalue without the first empirical eigenvalue (m1PCA), 95th percentile of PCA eigenvalues when the first empirical component is eliminated (95th1PCA), mean FA eigenvalue (mFA), 95th percentile of FA eigenvalues (95thFA), mean FA eigenvalue when the first empirical factor is eliminated (m1FA), 95th percentile of FA eigenvalues when the first empirical factor is eliminated (95th1FA), mean slope of successive PCA eigenvalues (slope-PCA), and mean slope of successive FA eigenvalues (slope-FA). Although further PA refinements (e.g., Achim, 2017; Dobriban & Owen, 2019; Green et al., 2012) and analyses of PA with binary and ordered polytomous items have meanwhile been published (Timmerman & Lorenzo-Seva, 2011; Weng & Cheng, 2005), the present selection already covers a relevant number of PA versions. As noted by Lim and Jahng (2019), the traditional PA has not been systematically outperformed by the newer PA versions, and the new PA versions need to be investigated in several contexts. Since the detection of q−1 or q factors of two-facet models is a rather new context, even for traditional PA, the focus of the present study is not on the newest PA versions but on PA versions that have already been widely investigated. Even when the investigation of additional new PA versions may be interesting, the performance of more traditional PA versions is regarded as an important first step. Therefore, the selection of PA versions of the present study is close to the PA versions investigated by Crawford et al. (2010), with the only exception that a PA version based on eigenvalue differences that is close to the objective scree test (Gorsuch, 1983) was also included. Moreover, only PA versions based on normally distributed variables were investigated.

First, ideal two-facet loading patterns are described and examples for rank-deficient and full-rank two-facet models are presented as a basis for the population models of the simulation study. Second, it is shown that two-facet loading matrices can also be approximately rank deficient and that the eigenvalues may indicate the approximate rank of the two-facet loading matrices. Finally, the specifications of the simulation study for two-facet models and the corresponding results are reported.

The Ideal Two-Facet Loading Patterns

An ideal two-facet loading pattern Λf can be defined as

Λ_f = [Λ_A | Λ_B], with Λ_A = 0.5^{1/2} (I_{qA} ⊗ 1_{pA}) and Λ_B = 0.5^{1/2} (1_{pB} ⊗ I_{qB}). (3)

The overall number of variables is p = pA·qA = pB·qB and the overall number of factors is q = qA + qB; I_{qA} is a qA × qA identity matrix, 1_{pA} is a pA × 1 unit vector, I_{qB} is a qB × qB identity matrix, 1_{pB} is a pB × 1 unit vector, and “⊗” denotes the Kronecker product. Since each variable has two nonzero elements, the maximal size of the nonzero elements is 0.5^{1/2} when all nonzero loadings are at maximum. For pA = pB = 2, p = 4, qA = qB = 2, and q = 4, Equation (3) yields

Λ_f = 0.5^{1/2} [(I_2 ⊗ 1_2) | (1_2 ⊗ I_2)] = 0.5^{1/2} ×
[ 1 0 | 1 0 ]
[ 1 0 | 0 1 ]
[ 0 1 | 1 0 ]
[ 0 1 | 0 1 ],  (4)

which is the smallest version of a two-facet model. However, in ideal two-facet models all facets contain the same number of factors, and all factors are represented by a minimum of four nonzero variables (Fabrigar et al., 1999), so that all variables have substantial loadings on two factors and all factors have the same good chance to occur in EFA. Accordingly, for pA = pB = 4, p = 8, qA = qB = 2, and q = 4, Equation (3) yields

Λ_f = 0.5^{1/2} [(I_2 ⊗ 1_4) | (1_4 ⊗ I_2)] = 0.5^{1/2} ×
[ 1 0 | 1 0 ]
[ 1 0 | 0 1 ]
[ 1 0 | 1 0 ]
[ 1 0 | 0 1 ]
[ 0 1 | 1 0 ]
[ 0 1 | 0 1 ]
[ 0 1 | 1 0 ]
[ 0 1 | 0 1 ].  (5)

There is a linear dependency of Columns 1 and 2 with Columns 3 and 4 so that the rank is q−1 as can be seen from the corresponding row echelon form (see the appendix). Two-facet models with an asymmetric number of variables per factor are also possible. For example, for pA = 2, qA = 4, pB = 4, qB = 2, resulting in p = 8 and q = 6, Equation (3) yields

Λ_f = 0.5^{1/2} [(I_4 ⊗ 1_2) | (1_4 ⊗ I_2)] = 0.5^{1/2} ×
[ 1 0 0 0 | 1 0 ]
[ 1 0 0 0 | 0 1 ]
[ 0 1 0 0 | 1 0 ]
[ 0 1 0 0 | 0 1 ]
[ 0 0 1 0 | 1 0 ]
[ 0 0 1 0 | 0 1 ]
[ 0 0 0 1 | 1 0 ]
[ 0 0 0 1 | 0 1 ].  (6)

In the example given in Equation (6), the chances for the identification of the facet-B factors 5 and 6 are larger than the chances for the identification of the facet-A factors 1 to 4. As can be seen in Equations (3) to (6), each variable has a substantial loading on two factors in a two-facet loading pattern. Of course, there is a linear dependency of Columns 1 to 4 with Columns 5 and 6. Accordingly, again, the rank of Λf is q−1 (see the appendix). Although the exact rank deficiency of the loading matrices in Equations (5) and (6) also depends on the equal size of the salient loadings, it follows from these examples that loading matrices with a slight variation of the relative size of salient loadings can be approximately rank deficient, which may have consequences for the number of substantial eigenvalues of the correlation matrix reproduced from the loading matrix. In order to give an account of the relationship between rank, variation of salient loading size, and the eigenvalues of the reproduced correlation matrix, the salient loadings of Equation (5) were multiplied by 0.5^{1/2} in order to provide a start loading matrix Λf1 with salient loadings of .50 and rank(Λf1) = q−1. Then, in 20 steps, an increasing amount of random noise was added to the salient loadings so that the variability of the salient loadings increased. For the resulting loading matrices Λf2 to Λf20, the rank is q. Equation (7) presents the start loading matrix Λf1 and the final loading matrix Λf20 resulting from this procedure:

Λf1 =
[ .50 .00 .50 .00 ]
[ .50 .00 .00 .50 ]
[ .50 .00 .50 .00 ]
[ .50 .00 .00 .50 ]
[ .00 .50 .50 .00 ]
[ .00 .50 .00 .50 ]
[ .00 .50 .50 .00 ]
[ .00 .50 .00 .50 ], ..., Λf20 =
[ .58 .00 .41 .00 ]
[ .43 .00 .00 .45 ]
[ .41 .00 .53 .00 ]
[ .55 .00 .00 .51 ]
[ .00 .39 .49 .00 ]
[ .00 .44 .00 .56 ]
[ .00 .50 .40 .00 ]
[ .00 .62 .00 .58 ].  (7)

Although the rank of Λf2 to Λf20 is q, the number of substantial eigenvalues of the correlation matrices reproduced from these loading matrices is q−1 (see Figure 2A).
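The construction of Λf1 to Λf20 can be imitated numerically. In the following sketch (the noise level and seed are arbitrary choices of ours), the noisy loading matrix has full rank q, but the correlation matrix reproduced with orthogonal factors still shows only q−1 substantial eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-deficient start matrix: the pattern of Equation (5) with salient loadings of .50
L1 = 0.5 * np.hstack([np.kron(np.eye(2), np.ones((4, 1))),   # facet A: I_2 (x) 1_4
                      np.kron(np.ones((4, 1)), np.eye(2))])  # facet B: 1_4 (x) I_2
# Random noise on the salient loadings only: the rank becomes q = 4,
# but the matrix stays approximately rank deficient
L20 = L1 + (L1 != 0) * rng.normal(0.0, 0.05, size=L1.shape)

def substantial_eigenvalues(L):
    """Number of eigenvalues > 1 of the correlation matrix reproduced with
    orthogonal factors, R = L L' + Psi; the uniquenesses cluster around .5 here."""
    R = L @ L.T + np.diag(1.0 - (L ** 2).sum(axis=1))
    return int((np.linalg.eigvalsh(R) > 1.0).sum())

print(np.linalg.matrix_rank(L1), substantial_eigenvalues(L1))    # rank q - 1, q - 1 substantial eigenvalues
print(np.linalg.matrix_rank(L20), substantial_eigenvalues(L20))  # rank q, still q - 1 substantial eigenvalues
```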

Figure 2. (A) Approximately rank-deficient loading matrices: Eigenvalues of the factors 1 to 8 for Λf1 to Λf20. (B) Full-rank loading matrices: Eigenvalues of the factors 1 to 8 for Λf21 to Λf40.

As can be seen from this example, several two-facet loading matrices would be rank deficient if the salient loadings were of equal size, and they have full column rank only because of the variable size of the salient loadings. Such loading matrices are regarded as approximately rank deficient. Beauducel and Kersting (2020) based their rotational procedure on approximately rank-deficient loading matrices that might be typical for several two-facet models. Moreover, rank(Λf) = q−1 and a number of q−1 substantial eigenvalues of the reproduced correlation matrix do not hold for two-facet loading matrices based on equal absolute salient loadings with variable sign. For example, rank(Λf) = q for

Λ_f = 0.5^{1/2} [(I_2 ⊗ 1_4) | ((1, −1, 1, −1)′ ⊗ I_2)] = 0.5^{1/2} ×
[ 1 0 |  1  0 ]
[ 1 0 |  0  1 ]
[ 1 0 | −1  0 ]
[ 1 0 |  0 −1 ]
[ 0 1 |  1  0 ]
[ 0 1 |  0  1 ]
[ 0 1 | −1  0 ]
[ 0 1 |  0 −1 ],  (8)

as can be seen from the row echelon form (see the appendix). In order to explore the effect of the salient loading sign together with the effect of salient loading variability, the eigenvalues of the correlation matrices reproduced from the following 20 loading matrices with variable loading sign and rank = q were computed (Equation 9). The variability of the absolute salient loading size was the same for Λf21 to Λf40 as for Λf1 to Λf20:

Λf21 =
[ .50 .00  .50  .00 ]
[ .50 .00  .00  .50 ]
[ .50 .00 −.50  .00 ]
[ .50 .00  .00 −.50 ]
[ .00 .50  .50  .00 ]
[ .00 .50  .00  .50 ]
[ .00 .50 −.50  .00 ]
[ .00 .50  .00 −.50 ], ..., Λf40 =
[ .58 .00  .41  .00 ]
[ .43 .00  .00  .45 ]
[ .41 .00 −.47  .00 ]
[ .55 .00  .00 −.49 ]
[ .00 .39  .51  .00 ]
[ .00 .44  .00  .44 ]
[ .00 .50 −.40  .00 ]
[ .00 .62  .00 −.58 ].  (9)

The correlation matrices reproduced from these loading matrices have q substantial eigenvalues (Figure 2B), which indicates that the loading matrices Λf21 to Λf40 are not approximately rank deficient. There are several negative loadings in Λf21 to Λf40, but even with rather few negative loadings, it is possible to get rank(Λf) = q, as for example, for

Λ_f = 0.5^{1/2} ×
[ 1 0 | −1 0 ]
[ 1 0 |  0 1 ]
[ 1 0 |  1 0 ]
[ 1 0 |  0 1 ]
[ 0 1 |  1 0 ]
[ 0 1 |  0 1 ]
[ 0 1 |  1 0 ]
[ 0 1 |  0 1 ].  (10)

The examples show that full-rank two-facet loading matrices may be approximately rank deficient when there are only salient loadings with the same sign (Figure 2A) and that they are not approximately rank deficient when some factors have positive as well as negative loadings (Figure 2B). Negative salient loadings, as they occur in Equations (8) to (10), are not unusual in factor analyses of real data.
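The sign effect can be verified numerically. In the following sketch, the placement of the negative signs in the facet-B columns is an illustrative choice of ours; with identical absolute loadings, the same-sign pattern yields q−1 substantial eigenvalues, whereas the mixed-sign pattern yields q:

```python
import numpy as np

# Same-sign pattern of Equation (5) with salient loadings of .50: rank q - 1
L_same = 0.5 * np.hstack([np.kron(np.eye(2), np.ones((4, 1))),
                          np.kron(np.ones((4, 1)), np.eye(2))])
# Identical absolute loadings, but alternating signs in the facet-B columns: rank q
signs = np.kron(np.array([1.0, -1.0, 1.0, -1.0]), np.ones(2)).reshape(-1, 1)
L_mixed = L_same.copy()
L_mixed[:, 2:] *= signs

for L in (L_same, L_mixed):
    R = L @ L.T + np.diag(1.0 - (L ** 2).sum(axis=1))  # reproduced correlation matrix
    n_substantial = int((np.linalg.eigvalsh(R) > 1.0).sum())
    print(np.linalg.matrix_rank(L), n_substantial)
```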

The linear dependency between two columns j and k of Λf results in nonzero covariances. As any two-facet model implies that each observed variable i = 1, ..., p has at least two salient loadings, it follows that

Σ_{j=1}^{qA} Σ_{k=1}^{qB} Σ_{i=1}^{p} |λ_{ij}| |λ_{ik}| > 0, (11)

for any two-facet loading matrix, where “|·|” denotes the absolute value. However, if there is some variability of salient loading size and if one or more salient loadings of Λf are negative while all other salient loadings are positive, the condition of Equation (11) still holds, although one cannot exclude that

Σ_{j=1}^{qA} Σ_{k=1}^{qB} |Σ_{i=1}^{p} λ_{ij} λ_{ik}| = 0, (12)

because some λ_{ij}λ_{ik} may be positive and one λ_{ij}λ_{ik} may be negative so that the sum across all p elements may be zero. This implies that the covariance between all loading columns of Λf can be zero. It follows that two-facet loading matrices are not necessarily rank deficient, so that the rank of these matrices can be q or q−1. Accordingly, it might be reasonable to compute rotated solutions for one factor more than proposed by PA. Note that the rank uncertainty of two-facet loading matrices demonstrated here is not related to sampling error.
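Conditions (11) and (12) can be checked for a small mixed-sign pattern; the sign placement is again an illustrative choice of ours:

```python
import numpy as np

s = 0.5 ** 0.5
# Mixed-sign two-facet pattern (alternating signs in the facet-B part)
L = s * np.hstack([np.kron(np.eye(2), np.ones((4, 1))),
                   np.kron(np.array([[1.0], [-1.0], [1.0], [-1.0]]), np.eye(2))])
# Equation (11): the sum of |lambda_ij||lambda_ik| across facet-A column j and
# facet-B column k is positive, because every variable loads on one factor per facet
abs_sum = sum(np.abs(L[:, j]) @ np.abs(L[:, k]) for j in (0, 1) for k in (2, 3))
# Equation (12): the column cross-products can nevertheless cancel to zero
cov_sum = sum(abs(L[:, j] @ L[:, k]) for j in (0, 1) for k in (2, 3))
print(abs_sum > 0, cov_sum)
```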

Simulation Study

Method

A simulation study based on n = 400 and 1,000 cases, nine rank-deficient population two-facet factor models, and nine full-rank population two-facet models with q = 4 to 12 factors in two facets with salient loadings l = .40, .50, and .60 was performed in order to investigate the detection of the correct number of population factors (q−1 or q, depending on the rank of the population matrix) by means of the 10 aforementioned PA versions (mPCA, 95thPCA, m1PCA, 95th1PCA, mFA, 95thFA, m1FA, 95th1FA, slope-PCA, slope-FA). All nonsalient population loadings were zero. Two levels of intercorrelation (ρ = .00, .30) between all factors were specified for l = .40, .50, and .60. A higher intercorrelation (ρ = .60) between all factors could only be specified for l = .40 and .50 because the communalities for l = .60 and ρ = .60 would have exceeded one. An example for the specification of salient loadings and factor intercorrelations is given in Figure 3. Further details on how the sample correlations were computed from the population model parameters are given in the syntax example (Supplement, Part 1). As recommended from previous research (Fabrigar et al., 1999; Floyd & Widaman, 1995), each population factor was measured by at least four variables with salient loadings. This results in the following numbers of factors in the two facets and numbers of variables (p): 2 + 2 (p = 8), 2 + 3 (p = 18), 3 + 3 (p = 18), 3 + 4 (p = 24), 4 + 4 (p = 16), 4 + 5 (p = 20), 5 + 5 (p = 25), 5 + 6 (p = 30), and 6 + 6 (p = 36). The distribution of salient loadings on the factors was identical for the full-rank models and for the rank-deficient models. Therefore, all loading matrices are presented for the rank-deficient two-facet uncorrelated factor models with l = .50 in the Supplement (Part 2). Nine two-facet models × 2 model types (rank deficient vs. full rank) × 3 loading sizes (l = .40, .50, .60) × 2 factor intercorrelations (ρ = .00, .30) × 2 sample sizes (n = 400, 1,000) result in 216 conditions. For ρ = .60, nine two-facet models × 2 model types × 2 loading sizes (l = .40, .50) × 2 sample sizes result in 72 additional conditions. For each condition, 1,000 samples were drawn from the population. The dependent variable was the percentage of q−1 (for rank-deficient models) or q (for full-rank models) population factors detected by means of the PA versions. The percentages of over- and underfactorization were also computed (see Supplement, Part 3, Tables S1-S16). The simulation study was performed with IBM SPSS Version 26. An example of the syntax can be found in the Supplement (Part 1).
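The sampling step can be sketched as follows; this is a minimal NumPy illustration under the multivariate-normal assumption, not the SPSS syntax of the study, and the function names (`population_sigma`, `sample_correlation`) are ours:

```python
import numpy as np

def population_sigma(L, rho):
    """Population correlation matrix Sigma = L Phi L' + Psi, with a uniform
    factor intercorrelation rho and uniquenesses chosen so that diag(Sigma) = 1."""
    q = L.shape[1]
    Phi = np.full((q, q), rho)
    np.fill_diagonal(Phi, 1.0)
    common = L @ Phi @ L.T
    return common + np.diag(1.0 - np.diag(common))

def sample_correlation(Sigma, n, seed=None):
    """Sample correlation matrix of n multivariate-normal cases with covariance Sigma."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, Sigma.shape[0])) @ np.linalg.cholesky(Sigma).T
    return np.corrcoef(X, rowvar=False)

# Example: the 2 + 2 model (q = 4, p = 8) with salient loadings l = .50 and rho = .30
L = 0.5 * np.hstack([np.kron(np.eye(2), np.ones((4, 1))),
                     np.kron(np.ones((4, 1)), np.eye(2))])
R = sample_correlation(population_sigma(L, 0.30), n=400, seed=1)
```

The eigenvalues of such a sample correlation matrix (or of its SMC-reduced counterpart) would then be compared with the corresponding noise eigenvalues of each PA version.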

Figure 3. Conceptual graphic of the population two-facet correlated factor model with q = 4 and p = 8; simple arrows represent salient loadings, double arrows represent factor intercorrelations.

Results

The results for the rank-deficient population models are given in Tables 2 to 4. The percentage of samples in which q−1 eigenvalues based on two-facet factor models with salient population loadings of l = .40 were larger than the respective noise eigenvalues is presented in Table 2. The traditional PA (mPCA) performs well for the orthogonal models with n = 400 and n = 1,000. Alternative PA versions like 95thPCA, 95thFA, m1PCA, m1FA, 95th1PCA, and 95th1FA did not lead to substantial improvements of detection rates. Since the detection rates were high for these PA versions, the percentage of over- and underfactorizations was very low (Supplement, Table S1). However, mFA and 95thFA resulted in acceptable detection rates for n = 400 and perfect detection rates for n = 1,000. This performance ranking of PA versions is not altered when uncorrelated factor models based on l = .50 (Table 3) and l = .60 (Table 4) are considered. To sum up, when rank-deficient two-facet uncorrelated factor models are to be expected, mPCA, 95thPCA, m1PCA, 95th1PCA, mFA, 95thFA, and 95th1FA can be recommended.

Table 2.

Percent of Correct Detection of q−1 Factors for Rank-Deficient Two-Facet Models for l = .40.

ρ = .00
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 100 99 34 76 100 100 0 99 43 96
5 100 100 99 100 98 100 94 100 33 38
6 100 100 99 100 97 100 91 99 28 29
7 100 100 100 100 95 99 92 99 23 26
400 8 96 86 93 99 98 100 87 99 18 62
9 93 80 96 99 94 99 87 98 16 21
10 98 94 99 100 91 99 83 97 14 15
11 97 93 99 100 83 97 78 95 13 14
12 100 98 100 100 79 95 74 94 10 10
4 100 100 81 97 100 100 0 1 45 97
5 100 100 100 100 100 100 100 100 37 40
6 100 100 100 100 100 100 100 100 29 54
7 100 100 100 100 100 100 100 100 28 31
1,000 8 100 100 100 100 100 100 0 100 21 79
9 100 100 100 100 100 100 100 100 19 90
10 100 100 100 100 100 100 100 100 14 60
11 100 100 100 100 100 100 100 100 16 17
12 100 100 100 100 100 100 100 100 12 13
ρ = .30
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 5 1 18 68 100 99 0 100 60 99
5 50 28 99 100 100 100 100 100 48 53
6 28 11 98 100 99 100 99 100 40 43
7 17 6 100 100 99 100 99 100 37 39
400 8 0 0 74 95 97 88 94 96 20 39
9 0 0 87 95 94 87 93 92 18 20
10 0 0 95 98 96 95 96 95 14 15
11 0 0 97 93 95 93 96 92 14 13
12 0 0 98 96 96 97 98 94 10 11
4 6 1 49 88 100 100 0 0 60 100
5 97 92 100 100 100 100 100 100 51 58
6 95 86 100 100 100 100 100 100 41 54
7 90 78 100 100 100 100 100 100 40 44
1,000 8 0 0 100 100 100 100 8 100 23 56
9 0 0 100 100 100 100 100 100 26 70
10 1 0 100 100 100 100 100 100 16 36
11 1 0 100 100 100 100 100 100 18 20
12 15 5 100 100 100 100 100 100 15 16
ρ = .60
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 0 0 1 36 67 19 0 67 79 97
5 0 0 98 95 91 62 94 68 67 72
6 0 0 92 93 83 37 89 46 45 46
7 0 0 91 64 59 21 55 17 44 45
400 8 0 0 13 27 6 0 21 0 4 4
9 0 0 33 9 4 0 5 0 2 3
10 0 0 42 2 3 0 1 0 2 2
11 0 0 21 0 1 0 0 0 1 1
12 0 0 7 0 0 0 0 0 0 0
4 0 0 2 37 99 90 0 2 81 100
5 0 0 100 100 100 100 100 100 72 77
6 0 0 100 100 100 100 100 100 49 51
7 0 0 100 100 100 100 100 100 53 58
1,000 8 0 0 70 97 96 55 62 94 10 14
9 0 0 93 99 96 72 99 91 12 17
10 0 0 100 100 99 94 100 97 5 4
11 0 0 100 99 99 90 99 90 6 7
12 0 0 100 99 99 96 99 93 3 3

Note. mPCA = mean PCA eigenvalue; 95thPCA = 95th percentile of PCA eigenvalues; m1PCA = mean PCA eigenvalue without first empirical eigenvalue; 95th1PCA = 95th percentile of PCA eigenvalues without first empirical component; mFA = mean FA eigenvalue; 95thFA = 95th percentile of FA eigenvalues; m1FA = mean FA eigenvalue without first empirical factor; 95th1FA = 95th percentile of FA eigenvalues without first empirical factor; slope-PCA = mean slope of three successive PCA eigenvalues; slope-FA = mean slope of three successive FA eigenvalues.

Table 4.

Percent of Correct Detection of q−1 Factors for Rank-Deficient Two-Facet Models for l = .60.

ρ = .00
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 100 100 100 100 100 100 100 100 100 100
5 100 100 100 100 100 100 100 100 99 100
6 100 100 100 100 100 100 100 100 97 98
7 100 100 100 100 100 100 100 100 96 99
400 8 100 100 100 100 100 100 100 100 85 89
9 100 100 100 100 100 100 100 100 92 98
10 100 100 100 100 100 100 100 100 86 89
11 100 100 100 100 100 100 100 100 90 92
12 100 100 100 100 100 100 100 100 85 88
4 100 100 100 100 100 100 100 100 100 100
5 100 100 100 100 100 100 100 100 100 100
6 100 100 100 100 100 100 100 100 98 99
7 100 100 100 100 100 100 100 100 98 99
1,000 8 100 100 100 100 100 100 100 100 89 92
9 100 100 100 100 100 100 100 100 94 98
10 100 100 100 100 100 100 100 100 91 95
11 100 100 100 100 100 100 100 100 93 99
12 100 100 100 100 100 100 100 100 88 93
ρ = .30
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 18 7 100 100 100 100 100 100 100 100
5 100 99 100 100 100 100 100 100 100 100
6 100 99 100 100 100 100 100 100 97 96
7 98 96 100 100 100 100 100 100 99 99
400 8 0 0 100 100 100 100 100 100 56 52
9 0 0 100 100 100 100 100 100 83 81
10 19 8 100 100 100 100 100 100 64 61
11 16 7 100 100 100 100 100 100 83 82
12 61 45 100 100 100 100 100 100 74 71
4 35 17 100 100 100 100 100 100 100 100
5 100 100 100 100 100 100 100 100 100 100
6 100 100 100 100 100 100 100 100 97 97
7 100 100 100 100 100 100 100 100 100 100
1,000 8 1 0 100 100 100 100 100 100 58 56
9 2 1 100 100 100 100 100 100 94 93
10 95 90 100 100 100 100 100 100 69 68
11 97 93 100 100 100 100 100 100 93 93
12 100 100 100 100 100 100 100 100 80 79

Note. mPCA = mean PCA eigenvalue; 95thPCA = 95th percentile of PCA eigenvalues; m1PCA = mean PCA eigenvalue without first empirical eigenvalue; 95th1PCA = 95th percentile of PCA eigenvalues without first empirical component; mFA = mean FA eigenvalue; 95thFA = 95th percentile of FA eigenvalues; m1FA = mean FA eigenvalue without first empirical factor; 95th1FA = 95th percentile of FA eigenvalues without first empirical factor; slope-PCA = mean slope of three successive PCA eigenvalues; slope-FA = mean slope of three successive FA eigenvalues.

Table 3.

Percent of Correct Detection of q−1 Factors for Rank-Deficient Two-Facet Models for l = .50.

ρ = .00
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 100 100 100 100 100 100 58 100 75 100
5 100 100 100 100 100 100 100 100 63 70
6 100 100 100 100 100 100 100 100 56 64
7 100 100 100 100 100 100 100 100 52 58
400 8 100 100 100 100 100 100 100 100 42 85
9 100 100 100 100 100 100 100 100 39 84
10 100 100 100 100 100 100 100 100 38 42
11 100 100 100 100 100 100 100 100 32 33
12 100 100 100 100 100 100 100 100 30 32
4 100 100 100 100 100 100 0 99 74 100
5 100 100 100 100 100 100 100 100 68 75
6 100 100 100 100 100 100 100 100 59 98
7 100 100 100 100 100 100 100 100 53 63
1,000 8 100 100 100 100 100 100 100 100 49 88
9 100 100 100 100 100 100 100 100 46 97
10 100 100 100 100 100 100 100 100 42 90
11 100 100 100 100 100 100 100 100 39 91
12 100 100 100 100 100 100 100 100 34 38
ρ = .30
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 9 3 100 100 100 100 100 100 96 100
5 93 84 100 100 100 100 100 100 94 96
6 86 73 100 100 100 100 100 100 84 88
7 76 58 100 100 100 100 100 100 86 89
400 8 0 0 100 100 100 100 100 100 43 49
9 0 0 100 100 100 100 100 100 59 69
10 1 0 100 100 100 100 100 100 43 45
11 1 0 100 100 100 100 100 100 51 53
12 3 1 100 100 100 100 100 100 44 45
4 16 5 100 100 100 100 100 100 98 100
5 100 100 100 100 100 100 100 100 96 97
6 100 100 100 100 100 100 100 100 87 94
7 100 100 100 100 100 100 100 100 89 92
1,000 8 0 0 100 100 100 100 100 100 52 57
9 0 0 100 100 100 100 100 100 73 87
10 42 23 100 100 100 100 100 100 53 62
11 43 26 100 100 100 100 100 100 66 81
12 97 94 100 100 100 100 100 100 52 57
ρ = .60
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 0 0 100 100 100 100 100 100 100 100
5 0 0 100 100 100 100 100 100 100 100
6 0 0 100 100 100 100 100 100 64 60
7 0 0 100 100 100 100 100 100 86 84
400 8 0 0 100 89 100 94 98 58 2 2
9 0 0 100 95 100 98 99 71 5 4
10 0 0 100 91 100 98 98 64 2 2
11 0 0 100 91 100 98 95 56 3 3
12 0 0 100 92 100 98 92 51 1 1
4 0 0 100 100 100 100 100 100 100 100
5 0 0 100 100 100 100 100 100 100 100
6 0 0 100 100 100 100 100 100 68 67
7 0 0 100 100 100 100 100 100 95 94
1,000 8 0 0 100 100 100 100 100 100 3 2
9 0 0 100 100 100 100 100 100 28 24
10 0 0 100 100 100 100 100 100 2 2
11 0 0 100 100 100 100 100 100 18 16
12 0 0 100 100 100 100 100 100 3 3

Note. mPCA = mean PCA eigenvalue; 95thPCA = 95th percentile of PCA eigenvalue; m1PCA = mean PCA eigenvalue without first empirical eigenvalue; 95th1PCA = 95th percentile of PCA eigenvalue without first empirical component; mFA = mean FA eigenvalue; 95thFA = 95th percentile of FA eigenvalues; m1FA = mean FA eigenvalue without first empirical factor; 95th1FA = 95th percentile of FA eigenvalues without first empirical factor; slope-PCA = mean slope of three successive PCA eigenvalues; slope-FA = mean slope of three successive FA eigenvalues.

In contrast, mPCA does not work consistently well for rank-deficient two-facet correlated factor models based on l = .40 and ρ = .30, and it does not detect q−1 factors for l = .40 and ρ = .60. The percentages of over- and underfactorization for l = .40 and ρ = .60 reveal that mPCA tends to underfactorization for correlated factor models (Supplement, Table S3). Relevant improvements of the detection rates were obtained for m1PCA and especially for 95th1PCA (see Table 2). For l = .40 and ρ = .30, high detection rates were found for mFA, 95thFA, m1FA, and 95th1FA. For l = .40, ρ = .60, and n = 400, no PA version had consistently high detection rates; all PA versions tend to underfactorization in this condition (Supplement, Table S3; available online). For l = .40, ρ = .60, and n = 1,000, mFA had consistently high detection rates, which were higher across the models than those of any other PA version. Moreover, perfect detection rates were found for m1PCA, 95th1PCA, mFA, 95thFA, m1FA, and 95th1FA for rank-deficient two-facet correlated factor models with l = .50 and l = .60 and ρ = .00 and ρ = .30 (Tables 3 and 4). As l = .40 is the most critical loading condition and as mFA resulted in the highest overall detection rates across sample sizes and factor intercorrelations in this as well as the other conditions, mFA can be recommended when two-facet correlated factor models tend to be rank deficient. It should, however, be noted that for factor intercorrelations of about .60, a large sample size (n = 1,000) or large salient loadings (≥ .50) are necessary in order to obtain high detection rates with mFA.
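The FA-based versions (mFA, 95thFA, and their first-eliminated variants) operate on eigenvalues of the reduced correlation matrix, that is, the correlation matrix with the squared multiple correlation (SMC) of each variable with the remaining variables inserted in the main diagonal. A minimal sketch of this reduction, assuming an invertible correlation matrix R (the function name is ours):

```python
import numpy as np

def reduced_eigenvalues(R):
    """Eigenvalues (largest first) of the correlation matrix with
    SMCs in the main diagonal, as used by the FA-based PA versions.
    SMC_i = 1 - 1 / (R^-1)_ii, assuming R is invertible."""
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    R_reduced = np.array(R, dtype=float, copy=True)
    np.fill_diagonal(R_reduced, smc)
    return np.sort(np.linalg.eigvalsh(R_reduced))[::-1]
```

For mFA, these reduced eigenvalues of the sample are then compared with the mean reduced eigenvalues of simulated random normal data, using the same counting rule as the PCA-based versions.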

The results for the full-rank two-facet population models are given in Tables 5 to 7. Table 5 presents the percentage of samples in which q eigenvalues based on factor models with salient population loadings of l = .40 were larger than the respective noise eigenvalues. Overall, the detection rates of the q population factors were considerably lower for the full-rank population models. Only mFA had acceptable detection rates for uncorrelated and correlated factor models up to q = 6 for ρ = .00 and ρ = .30; for q > 6, mFA tends to underfactorization (Supplement, Tables S9 and S10). For ρ = .60 and l = .40, no method had acceptable detection rates for a relevant subset of models (Table 5). All PA versions except the slope-based versions (which had overall low detection rates) tend to underfactorization in this condition (Supplement, Table S11). Detection rates increased for l = .50, with mFA and 95thFA resulting in acceptable to perfect detection rates for factor models up to q = 7 for n = 1,000, ρ = .00, and ρ = .30 (Table 6). Again, all PA versions that were not based on slopes tend to underfactorization for models with large q (Supplement, Table S12). Moreover, slope-FA had acceptable detection rates for full-rank population models based on uncorrelated factors with n = 1,000 for q = 4 to 6 and q = 8 to 11. For l = .50, ρ = .60, n = 1,000, and q = 4 to 6, mFA and 95thFA had nearly perfect detection rates (Table 6). For l = .60 and full-rank two-facet factor models based on n = 400, the detection rates for mFA and 95thFA were nearly perfect for q = 4 to 10 (Table 7). For l = .60 and n = 1,000, only mFA and 95thFA had perfect detection rates for all factor models. The detection rates for slope-FA were nearly perfect for l = .60, n = 1,000, and uncorrelated factors, and acceptable for q = 4 to 6 and q = 8 to 11. No PA version had consistently high detection rates across all conditions for full-rank two-facet models with q > 6, and all PA versions that were not based on slopes tend to underfactorization. However, compared with the other PA versions, mFA had the overall highest detection rates across conditions for the full-rank models. Although the overall detection rates were lower for the full-rank models with q > 6 than for the rank-deficient models, the relative performance of the PA versions was similar for the full-rank and the rank-deficient models.

Table 5.

Percent of Correct Detection of q Factors for Full-Rank Two-Facet Models for l = .40.

ρ = .00
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 100 100 19 43 100 100 0 0 31 68
5 62 42 74 93 97 99 87 98 31 33
6 42 24 65 80 93 93 82 93 26 28
7 0 0 17 7 56 32 60 40 24 25
400 8 0 0 22 6 15 3 9 2 23 48
9 0 0 11 2 18 4 29 10 23 57
10 0 0 7 1 28 10 35 15 19 21
11 0 0 3 1 29 10 35 14 16 19
12 0 0 1 0 31 13 33 15 18 17
4 100 100 30 50 100 100 0 0 31 72
5 91 82 81 94 100 100 100 100 32 44
6 63 46 87 98 100 100 100 100 26 88
7 0 0 12 5 89 72 93 81 25 26
1,000 8 0 0 3 1 6 1 34 14 26 21
9 0 0 1 0 9 3 1 8 22 55
10 0 0 0 0 18 7 26 12 19 87
11 0 0 0 0 8 2 12 3 18 36
12 0 0 0 0 12 4 15 5 16 16
ρ = .30
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 87 65 3 20 100 97 0 0 8 76
5 1 0 84 79 90 72 88 81 16 37
6 1 0 68 65 81 61 81 68 11 31
7 0 0 11 3 19 4 23 6 21 30
400 8 0 0 25 7 4 0 0 1 22 30
9 0 0 13 2 6 0 10 1 19 37
10 0 0 8 1 10 1 10 1 16 21
11 0 0 4 0 10 1 7 1 17 22
12 0 0 2 0 9 1 4 0 14 17
4 100 98 1 10 100 100 0 0 1 70
5 0 0 99 99 100 99 100 100 8 41
6 0 0 96 91 99 93 99 97 4 75
7 0 0 12 4 33 14 51 25 13 30
1,000 8 0 0 8 1 0 0 24 2 8 5
9 0 0 1 0 1 0 0 0 6 34
10 0 0 0 0 6 1 6 1 5 74
11 0 0 0 0 2 0 2 0 7 28
12 0 0 0 0 2 0 1 0 7 21
ρ = .60
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 9 0 0 3 67 25 0 3 0 62
5 0 0 46 27 24 5 30 7 4 37
6 0 0 34 30 26 7 33 9 2 27
7 0 0 6 1 2 0 2 0 9 27
400 8 0 0 3 8 1 0 0 0 13 5
9 0 0 8 1 0 0 0 0 10 3
10 0 0 9 0 0 0 0 0 6 3
11 0 0 2 0 0 0 0 0 5 2
12 0 0 0 0 0 0 0 0 4 1
4 29 4 0 0 97 83 0 0 0 67
5 0 0 84 73 66 36 82 56 0 40
6 0 0 70 61 53 26 71 42 0 33
7 0 0 6 1 2 0 3 0 5 31
1,000 8 0 0 19 10 0 0 10 0 6 3
9 0 0 13 2 0 0 1 0 2 26
10 0 0 6 0 1 0 1 0 3 23
11 0 0 1 0 0 0 0 0 2 20
12 0 0 0 0 0 0 0 0 6 16

Note. mPCA = mean PCA eigenvalue; 95thPCA = 95th percentile of PCA eigenvalue; m1PCA = mean PCA eigenvalue without first empirical eigenvalue; 95th1PCA = 95th percentile of PCA eigenvalue without first empirical component; mFA = mean FA eigenvalue; 95thFA = 95th percentile of FA eigenvalues; m1FA = mean FA eigenvalue without first empirical factor; 95th1FA = 95th percentile of FA eigenvalues without first empirical factor; slope-PCA = mean slope of three successive PCA eigenvalues; slope-FA = mean slope of three successive FA eigenvalues.

Table 7.

Percent of Correct Detection of q Factors for Full-Rank Two-Facet Models for l = .60.

ρ = .00
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 100 100 98 100 100 100 0 16 86 88
5 98 96 100 100 100 100 100 100 98 99
6 79 66 100 100 100 100 100 100 96 97
7 0 0 5 2 100 100 100 100 95 98
400 8 0 0 0 0 100 100 100 100 94 98
9 0 0 0 0 100 99 100 95 92 99
10 0 0 0 0 100 98 99 94 92 98
11 0 0 0 0 94 77 81 54 91 94
12 0 0 0 0 89 70 72 44 88 90
4 100 100 99 99 100 100 0 8 87 90
5 100 100 100 100 100 100 100 100 99 100
6 99 96 100 100 100 100 100 100 98 100
7 0 0 3 3 100 100 100 100 97 100
1,000 8 0 0 0 0 100 100 98 100 96 98
9 0 0 0 0 100 100 100 100 95 100
10 0 0 0 0 100 100 100 100 94 100
11 0 0 0 0 100 100 100 100 94 100
12 0 0 0 0 100 100 100 100 89 98
ρ = .30
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 100 100 100 100 100 100 94 100 0 95
5 0 0 100 100 100 100 100 100 24 100
6 0 0 98 95 100 100 100 100 0 99
7 0 0 0 0 100 100 99 91 27 91
400 8 0 0 0 0 100 100 82 39 1 94
9 0 0 0 0 100 98 20 5 0 96
10 0 0 0 0 98 90 9 2 2 96
11 0 0 0 0 68 36 0 0 0 97
12 0 0 0 0 38 12 0 0 18 88
4 100 100 100 100 100 100 100 100 0 95
5 0 0 100 100 100 100 100 100 23 100
6 0 0 100 100 100 100 100 100 0 99
7 0 0 0 0 100 100 100 100 25 98
1,000 8 0 0 0 0 100 100 100 95 0 95
9 0 0 0 0 100 100 64 30 0 99
10 0 0 0 0 100 100 58 32 0 98
11 0 0 0 0 100 100 8 2 0 100
12 0 0 0 0 100 100 6 1 13 96

Note. mPCA = mean PCA eigenvalue; 95thPCA = 95th percentile of PCA eigenvalue; m1PCA = mean PCA eigenvalue without first empirical eigenvalue; 95th1PCA = 95th percentile of PCA eigenvalue without first empirical component; mFA = mean FA eigenvalue; 95thFA = 95th percentile of FA eigenvalues; m1FA = mean FA eigenvalue without first empirical factor; 95th1FA = 95th percentile of FA eigenvalues without first empirical factor; slope-PCA = mean slope of three successive PCA eigenvalues; slope-FA = mean slope of three successive FA eigenvalues.

Table 6.

Percent of Correct Detection of q Factors for Full-Rank Two-Facet Models for l = .50.

ρ = .00
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 100 100 89 95 100 100 0 0 58 85
5 89 76 100 100 100 100 100 100 57 66
6 59 42 99 97 100 100 100 100 52 63
7 0 0 6 2 98 91 97 90 46 53
400 8 0 0 0 0 58 28 0 35 46 89
9 0 0 0 0 48 23 53 26 44 94
10 0 0 0 0 52 28 51 27 37 55
11 0 0 0 0 26 11 23 9 34 39
12 0 0 0 0 25 9 19 6 33 37
4 100 100 92 95 100 100 0 0 59 87
5 100 99 100 100 100 100 100 100 63 95
6 88 78 100 100 100 100 100 100 54 99
7 0 0 5 3 100 100 100 100 50 73
1,000 8 0 0 0 0 84 63 1 1 51 94
9 0 0 0 0 69 46 64 51 46 97
10 0 0 0 0 79 60 77 58 42 99
11 0 0 0 0 46 25 42 20 39 98
12 0 0 0 0 50 31 44 27 34 54
ρ = .30
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 99 97 99 100 100 100 0 0 1 95
5 1 0 99 97 100 100 100 100 13 72
6 0 0 91 77 100 99 100 98 4 68
7 0 0 1 0 61 29 45 17 20 58
400 8 0 0 0 0 23 4 10 1 11 69
9 0 0 0 0 17 3 4 1 5 80
10 0 0 0 0 21 6 3 0 5 59
11 0 0 0 0 6 2 1 0 4 52
12 0 0 0 0 3 1 0 0 11 44
4 100 100 100 100 100 100 0 0 0 95
5 0 0 100 100 100 100 100 100 10 90
6 0 0 100 99 100 100 100 100 0 98
7 0 0 2 1 99 96 99 91 15 62
1,000 8 0 0 0 0 38 10 0 4 1 75
9 0 0 0 0 25 8 5 1 0 82
10 0 0 0 0 41 20 7 2 1 96
11 0 0 0 0 12 3 0 0 0 95
12 0 0 0 0 11 2 0 0 4 54
ρ = .60
n q mPCA 95thPCA m1PCA 95th1PCA mFA 95thFA m1FA 95th1FA slope-PCA slope-FA
4 28 3 79 99 99 99 0 0 0 87
5 0 0 67 38 88 65 73 41 0 67
6 0 0 53 25 85 57 60 27 0 58
7 0 0 0 0 1 0 0 0 8 39
400 8 0 0 1 0 4 0 0 0 1 29
9 0 0 0 0 2 0 0 0 0 27
10 0 0 0 0 2 0 0 0 5 29
11 0 0 0 0 0 0 0 0 1 23
12 0 0 0 0 0 0 0 0 9 13
4 61 29 100 100 100 100 0 0 0 89
5 0 0 99 95 100 100 100 99 0 79
6 0 0 86 67 100 99 98 91 0 71
7 0 0 0 0 26 6 4 0 6 47
1,000 8 0 0 0 0 8 1 0 0 0 43
9 0 0 0 0 5 0 0 0 0 61
10 0 0 0 0 10 2 0 0 0 64
11 0 0 0 0 1 0 0 0 0 54
12 0 0 0 0 0 0 0 0 7 30

Note. mPCA = mean PCA eigenvalue; 95thPCA = 95th percentile of PCA eigenvalue; m1PCA = mean PCA eigenvalue without first empirical eigenvalue; 95th1PCA = 95th percentile of PCA eigenvalue without first empirical component; mFA = mean FA eigenvalue; 95thFA = 95th percentile of FA eigenvalues; m1FA = mean FA eigenvalue without first empirical factor; 95th1FA = 95th percentile of FA eigenvalues without first empirical factor; slope-PCA = mean slope of three successive PCA eigenvalues; slope-FA = mean slope of three successive FA eigenvalues.

Empirical Example

The purpose of the empirical example was to investigate the relevant PA versions in a data set for which a two-facet factor structure can be expected from previous research. The BIS model is a two-facet model comprising four operation factors and three content factors (see Figure 1) that has been replicated in several empirical data sets (e.g., Beauducel & Kersting, 2002; Bucik & Neubauer, 1996; Jäger et al., 1997; Süß & Beauducel, 2015; Süß et al., 2002). A two-facet loading pattern with positive loadings corresponding to the BIS model was also reported by Beauducel and Kersting (2020), who analyzed data based on a test constructed for the measurement of the BIS model (Jäger et al., 1997). It was therefore expected that the data presented by Beauducel and Kersting (2020) correspond to the BIS model. Accordingly, the intercorrelation matrix of 12 intelligence task aggregates provided in the Supplement of Beauducel and Kersting (2020) was used. The sample of 393 participants that completed the BIS test is described in Beauducel and Kersting (2020). One task aggregate was formed for each of the 4 × 3 combinations (= 12 cells) of operation and content factors. Beauducel and Kersting (2020) retained six factors, added a seventh loading column for factor rotation, and reported a seven-factor two-facet loading matrix corresponding to the BIS model. It follows from this result that PA should indicate that six factors are to be extracted from this data set. Only PA versions that were relatively successful in the simulation study were considered for the empirical example (Table 8). The most successful PA version, mFA, indicated that six factors should be extracted. Since Beauducel and Kersting (2020) have shown that six factors are the basis for the (rank-deficient) loading matrix of the BIS model, this result is compatible with expectations from previous research.

Table 8.

Empirical Eigenvalues Based on the Correlation Matrix of 12 Aggregates of Intelligence Tasks in Beauducel and Kersting (2020, Supplement, Table S2) and Corresponding Noise Eigenvalues of Relevant PA Versions.

Number of factors
Eigenvalues of correlation matrix: 1 2 3 4 5 6 7 8 9 10 11 12 emp. eigenvalues > mean noise eigenvalues
Empirical eigenvalues 4.66 1.40 1.20 1.00 .73 .69 .51 .45 .41 .37 .31 .28 mPCA: 3
Empirical eigenvalues, first eliminated 1.74 1.59 1.42 1.12 1.04 .90 .85 .82 .77 .72 .64 m1PCA: 6
Mean noise eigenvalues 1.29 1.21 1.15 1.10 1.06 1.02 .97 .93 .89 .84 .80 .74
95th percentile noise eigenvalues 1.35 1.26 1.20 1.14 1.09 1.05 1.01 .96 .92 .88 .83 .79 95thPCA: 2
95th1PCA: 5
Number of factors
Eigenvalues of reduced correlation matrix: 1 2 3 4 5 6 7 8 9 10 11 12
Empirical eigenvalues 4.12 .83 .66 .46 .15 .06 −.07 −.10 −.15 −.17 −.21 −.27 mFA: 6
Empirical eigenvalues, first eliminated .81 .65 .46 .15 .06 .00 −.07 −.11 −.14 −.18 −.21 m1FA: 6
Mean noise eigenvalues .32 .24 .18 .13 .08 .04 −.01 −.05 −.09 −.13 −.17 −.23
95th percentile noise eigenvalues .39 .29 .23 .17 .11 .07 .03 −.02 −.06 −.10 −.14 −.19 95thFA: 5
95th1FA: 5

Note. PA = parallel analysis; mPCA = mean PCA eigenvalue; 95thPCA = 95th percentile of PCA eigenvalue; m1PCA = mean PCA eigenvalue without first empirical eigenvalue; 95th1PCA = 95th percentile of PCA eigenvalue without first empirical component; mFA = mean FA eigenvalue; 95thFA = 95th percentile of FA eigenvalues; m1FA = mean FA eigenvalue without first empirical factor; 95th1FA = 95th percentile of FA eigenvalues without first empirical factor.
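The decisions in Table 8 follow from a simple counting rule: the number of leading empirical eigenvalues that exceed the corresponding mean noise eigenvalues, stopping at the first eigenvalue that does not. The mPCA and mFA decisions can be reproduced from the tabled values (the variable and function names below are ours):

```python
# Eigenvalues from Table 8 (BIS data set of Beauducel & Kersting, 2020)
emp_pca   = [4.66, 1.40, 1.20, 1.00, .73, .69, .51, .45, .41, .37, .31, .28]
noise_pca = [1.29, 1.21, 1.15, 1.10, 1.06, 1.02, .97, .93, .89, .84, .80, .74]
emp_fa    = [4.12, .83, .66, .46, .15, .06, -.07, -.10, -.15, -.17, -.21, -.27]
noise_fa  = [.32, .24, .18, .13, .08, .04, -.01, -.05, -.09, -.13, -.17, -.23]

def n_factors(empirical, noise_means):
    """Count leading empirical eigenvalues exceeding the mean noise
    eigenvalues; stop at the first eigenvalue that does not exceed."""
    k = 0
    for e, m in zip(empirical, noise_means):
        if e > m:
            k += 1
        else:
            break
    return k

print(n_factors(emp_pca, noise_pca))  # 3: the mPCA decision in Table 8
print(n_factors(emp_fa, noise_fa))    # 6: the mFA decision in Table 8
```

The first-eliminated versions (m1PCA, m1FA) apply the same rule after dropping the first empirical and noise eigenvalues and then add one to the count.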

Discussion

This research was motivated by the finding that two-facet factor models can be identified by means of EFA when appropriate methods of exploratory factor rotation are used (Beauducel & Kersting, 2020). This is interesting because previous attempts to develop facet models were based on other methods, for example, EFA with subsequent Procrustes rotation (Guilford, 1967), EFA based on parceling (Jäger, 1982), smallest space analysis (L. Guttman & Levy, 1991), or confirmatory factor analysis (Bucik & Neubauer, 1996; Süß & Beauducel, 2015). The possibility of using EFA with exploratory factor rotation for the development of two-facet models and instruments may facilitate the exploration of faceted structures and may therefore allow for more conventional forms of model development and test construction in this area.

However, it was shown here that ideal two-facet loading matrices based on equal salient loading size can, but need not, be rank deficient. It was also shown that even full-rank two-facet loading matrices can be approximately rank deficient, with q−1 substantial eigenvalues of the reproduced correlation matrix, when the salient loadings on each factor have the same sign. In contrast, full-rank two-facet loading matrices with positive as well as negative salient loadings on some factors are typically not approximately rank deficient, that is, they result in q substantial eigenvalues. It follows that the exact rank and the approximate rank of two-facet loading matrices are not known a priori. Accordingly, the present simulation study investigated how well the correct number of factors can be detected by rather traditional PA versions when approximately rank-deficient and full-rank two-facet population loading matrices are given.
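The rank argument can be illustrated with a minimal 2 × 2 facet design, which is smaller than the designs studied here; the layout and loading size are illustrative assumptions, not the study's population matrices.

```python
import numpy as np

l = 0.4  # illustrative salient loading size
# 2 operation × 2 content facets; each variable (row) loads on one
# factor of each facet. Columns: [O1, O2, C1, C2].
same_sign = np.array([
    [l, 0, l, 0],   # cell (O1, C1)
    [l, 0, 0, l],   # cell (O1, C2)
    [0, l, l, 0],   # cell (O2, C1)
    [0, l, 0, l],   # cell (O2, C2)
])
# With all salient loadings positive, the rows satisfy
# r1 - r2 - r3 + r4 = 0, so the matrix has rank q - 1 = 3.
print(np.linalg.matrix_rank(same_sign))

mixed_sign = same_sign.copy()
mixed_sign[3, 3] = -l  # one negative salient loading on C2
# A negative salient loading breaks the linear dependency: rank 4.
print(np.linalg.matrix_rank(mixed_sign))
```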

We investigated the detection rate of the correct number of factors in nine rank-deficient two-facet population models (rank: q−1) and nine full-rank two-facet population models (rank: q) by means of traditional PA (based on the mean of the unreduced eigenvalues; mPCA) as well as nine alternative PA versions. The dependent variable was the percentage of correctly detected q−1 factors for the rank-deficient population loading matrices and the percentage of correctly detected q factors for the full-rank population loading matrices.

It turns out that mPCA can only be recommended for orthogonal rank-deficient two-facet models and that mFA performed best across all conditions of the rank-deficient two-facet models. For correlated rank-deficient two-facet models, mPCA tends to underfactorization, which might partially be compensated for by the elimination of the first factor (m1PCA). However, mFA had higher detection rates than the other PA versions for the rank-deficient facet models. As it is not necessarily known in advance whether a two-facet model is based on uncorrelated or correlated factors, mFA can be recommended whenever a rank-deficient two-facet model might be expected. Overall, the detection rates for the full-rank two-facet models were considerably lower than for the rank-deficient two-facet models. The detection rates were especially low for full-rank two-facet correlated factor models, where all methods tend to underfactorization. Overall, mFA had the highest detection rate for full-rank two-facet models. For salient loadings of .50 and greater and interfactor correlations of .30 or less, mFA had acceptable detection rates. Thus, small salient loadings in combination with high factor intercorrelations may cause problems for mFA and for all other PA versions investigated here; the PA versions tend to underfactorization under these conditions. Other methods should be investigated in order to provide further recommendations for full-rank two-facet correlated factor models.

An empirical data set based on a test for the BIS model, a two-facet intelligence model that has been replicated several times (Bucik & Neubauer, 1996; Jäger et al., 1997; Süß & Beauducel, 2015), was used in order to compare the PA versions with the highest detection rates. The most suitable PA version from the simulation study, mFA, indicated that six factors should be extracted. This is compatible with the BIS model because a seventh loading column should be added in order to find an approximately rank-deficient two-facet model (Beauducel & Kersting, 2020). Overall, the results of the simulation study as well as the results from the empirical data set underline that EFA based on PA is in principle suitable for the analysis of two-facet models but that, as in Beauducel and Kersting (2020), rather large sample sizes and rather large salient loadings are necessary.

Some recommendations for EFA of two-facet models follow from the present research. As it is a priori unknown whether an expected two-facet population loading matrix tends to be rank deficient or of full rank, and whether it is based on uncorrelated or correlated factors, a researcher has to perform EFA for at least two different numbers of factors and for the orthogonal and oblique rotations proposed in Beauducel and Kersting (2020). First, researchers may recode items in order to have rather clear expectations regarding the sign of salient loadings. This may enhance the probability of a conventional, approximately rank-deficient two-facet model. Second, mFA with subsequent orthogonal rotation should be performed with one factor more than indicated (see Beauducel & Kersting, 2020). If the resulting salient loadings of each factor have the same sign and a similar magnitude, an orthogonal, approximately rank-deficient two-facet loading matrix has probably been found. Inspection of cross-loadings may help to decide whether subsequent oblique rotation should be performed. However, if factors with positive and negative salient loadings or a considerable variation of the salient loadings occur, researchers should not add a further factor. These recommendations, together with a sample size of at least 400 cases, may help to successfully explore two-facet models by means of EFA.
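The sign inspection in the second step can be sketched as a simple check on the rotated loading matrix. The helper below and its salience cutoff of .30 are hypothetical illustrations, not part of the proposed procedure.

```python
import numpy as np

def same_sign_salient(loadings, cutoff=0.30):
    """For each factor (column), check whether all salient loadings
    (|loading| >= cutoff) share the same sign. If every factor passes,
    the pattern is consistent with a conventional, approximately
    rank-deficient two-facet solution. The cutoff is an assumption."""
    result = []
    for col in np.asarray(loadings).T:
        salient = col[np.abs(col) >= cutoff]
        result.append(bool(len(salient) > 0
                           and (np.all(salient > 0) or np.all(salient < 0))))
    return result
```

A factor for which this check fails (mixed signs among salient loadings) would suggest a full-rank pattern, in which case no further factor should be added.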

When a two-facet loading pattern has successfully been identified through mFA and appropriate subsequent rotation, it may be helpful to consider whether the loading pattern represents a multitrait-multimethod (MTMM) model. This would be the case when the factors of one facet represent more formal aspects of the measurements. For example, when the data are based on questionnaire items, there might be a facet comprising factors for acquiescence or socially desirable responding. Another question is whether the factors of the two facets are reasonably well represented by measured variables or whether the factors of one facet are more completely represented than the factors of the other facet. Finally, it could also be that the two-facet loading pattern indicates that some combination of a factor of one facet with a factor of the other facet is not represented by measured variables at all. In order to detect possible underrepresentation of measured variables, it might be helpful to construct a mapping sentence (R. Guttman & Greenbaum, 1998) from the factors and to develop hypotheses on the universe of measurements.

It should be acknowledged as a limitation of the present simulation study that some of the newest PA versions, provided by Achim (2017), Green et al. (2012), and Dobriban and Owen (2019), were not investigated. Further studies on a large number of additional methods for the detection of the number of factors to retain for rotation should be performed in order to reach more complete knowledge of this issue for two-facet models. This would be of special importance for full-rank two-facet models based on correlated factors because the current study shows that none of the rather traditional PA-based methods investigated here had acceptable detection rates for the number of factors of these models. A further limitation of the present study is that only two-facet models, and no three- or four-facet models, were investigated. It should, however, be noted that the problem of unknown rank deficiency is even more pronounced for models with three or more facets. For example, three-facet models could be of full rank or could have a rank of q−1 or even q−2. As the complexities resulting from the investigation of rank-deficient, approximately rank-deficient, and full-rank two-facet models are already considerable, a stepwise exploration of even more complex models may only be successful when more complete knowledge on optimal procedures for EFA of two-facet models is available.

Supplemental Material

Supplemental material, sj-pdf-1-epm-10.1177_0013164420982057, for "On the Detection of the Correct Number of Factors in Two-Facet Models by Means of Parallel Analysis" by André Beauducel and Norbert Hilger in Educational and Psychological Measurement.

Appendix

Results of transformation to the reduced row echelon form according to the Linear Algebra Toolkit (v1.25, Bogacki, 2000-2019).

For loading matrix in Equation (5), Rank = 3:

[ 1 0 0 1 ]
[ 0 1 0 1 ]
[ 0 0 1 1 ]
[ 0 0 0 0 ]
[ 0 0 0 0 ]
[ 0 0 0 0 ]
[ 0 0 0 0 ]
[ 0 0 0 0 ]

For loading matrix in Equation (6), Rank = 5:

[100001000000000000100001000000000011001000111000]

For loading matrices in Equations (7) and (8), Rank = 4:

[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
[ 0 0 0 0 ]
[ 0 0 0 0 ]
[ 0 0 0 0 ]
[ 0 0 0 0 ]

Footnotes

Declaration of Conflicting Interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors received no financial support for the research, authorship, and/or publication of this article.

ORCID iD: André Beauducel https://orcid.org/0000-0002-9197-653X

Supplemental Material: Supplemental material for this article is available online.

References

  1. Achim A. (2017). Testing the number of required dimensions in exploratory factor analysis. Quantitative Methods for Psychology, 13(1), 64-74. 10.20982/tqmp.13.1.p064
  2. Auerswald M., Moshagen M. (2019). How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychological Methods, 24(4), 468-491. 10.1037/met0000200
  3. Beauducel A. (2001). Problems with parallel analysis in data sets with oblique simple structure. Methods of Psychological Research Online, 6(2), 141-157. https://www.dgps.de/fachgruppen/methoden/mpr-online/issue14/art2/article.html
  4. Beauducel A., Kersting M. (2002). Fluid and crystallized intelligence and the Berlin model of intelligence structure (BIS). European Journal of Psychological Assessment, 18(2), 97-112. 10.1027//1015-5759.18.2.97
  5. Beauducel A., Kersting M. (2020). Identification of facet models by means of factor rotation: A simulation study and data analysis of a test for the Berlin model of intelligence structure. Educational and Psychological Measurement, 80(5), 995-1019. 10.1177/0013164420909162
  6. Beauducel A., Kersting M., Liepmann D. (2005). A multitrait-multimethod model for the measurement of sensitivity to reward and sensitivity to punishment. Journal of Individual Differences, 26(4), 168-175. 10.1027/1614-0001.26.4.168
  7. Bogacki P. (2000-2019). Linear algebra toolkit v1.25. https://www.math.odu.edu/~bogacki/cgi-bin/lat.cgi
  8. Bucik V., Neubauer A. (1996). Bi-modality in the Berlin model of intelligence structure (BIS): A replication study. Personality and Individual Differences, 21(6), 987-1005. 10.1016/S0191-8869(96)00129-8
  9. Buja A., Eyuboglu N. (1992). Remarks on parallel analysis. Multivariate Behavioral Research, 27(4), 509-540. 10.1207/s15327906mbr2704_2
  10. Campbell D. T., Fiske D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81-105. 10.1037/h0046016
  11. Cattell R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1(2), 245-276. 10.1207/s15327906mbr0102_10
  12. Cho S.-J., Li F., Bandalos D. (2009). Accuracy of the parallel analysis procedure with polychoric correlations. Educational and Psychological Measurement, 69(5), 748-759. 10.1177/0013164409332229
  13. Comrey A. L. (1967). Tandem criteria for analytic rotation in factor analysis. Psychometrika, 32(2), 277-295. 10.1007/BF02289422
  14. Crawford A., Green A. B., Levy R., Lo W.-J., Scott L., Svetina D., Thompson M. S. (2010). Evaluation of parallel analysis methods for determining the number of factors. Educational and Psychological Measurement, 70(6), 885-901. 10.1177/0013164410379332
  15. Cronshaw S. F., Jethmalani S. (2005). The structure of workplace adaptive skill in a career inexperienced group. Journal of Vocational Behavior, 66(1), 45-65. 10.1016/j.jvb.2003.11.004
  16. Dobriban E., Owen A. B. (2019). Deterministic parallel analysis: An improved method for selecting factors and principal components. Journal of the Royal Statistical Society Series B, 81(1), 163-183. 10.1111/rssb.12301
  17. Eid M. (2000). A multitrait-multimethod model with minimal assumptions. Psychometrika, 65(2), 241-261. 10.1007/BF02294377
  18. Eid M., Lischetzke T., Nussbeck F. W., Trierweiler L. I. (2003). Separating trait effects from trait-specific method effects in multitrait-multimethod models: A multiple indicator CTC(M–1) model. Psychological Methods, 8(1), 38-60. 10.1037/1082-989X.8.1.38
  19. Fabrigar L. R., Wegener D. T., MacCallum R. C., Strahan E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272-299. 10.1037/1082-989X.4.3.272
  20. Floyd F. J., Widaman K. F. (1995). Factor analysis in the development and refinement of clinical assessment instruments. Psychological Assessment, 7(3), 286-299. 10.1037/1040-3590.7.3.286
  21. Glorfeld L. W. (1995). An improvement on Horn’s parallel analysis methodology for selecting the correct number of factors to retain. Educational and Psychological Measurement, 55(3), 377-393. 10.1177/0013164495055003002
  22. Gorsuch R. L. (1983). Factor analysis (2nd ed.). Lawrence Erlbaum.
  23. Gorsuch R. L., Nelson J. (1981). CNG scree test: An objective procedure for determining the number of factors [Paper presentation]. Annual Meeting of the Society for Multivariate Experimental Psychology.
  24. Grayson D., Marsh H. W. (1994). Identification with deficient rank loading matrices in confirmatory factor analysis: Multitrait-multimethod models. Psychometrika, 59(1), 121-134. 10.1007/BF02294271
  25. Green S. B., Levy R., Thompson M. S., Lu M., Lo W. J. (2012). A proposed solution to the problem with using completely random data to assess the number of factors with parallel analysis. Educational and Psychological Measurement, 72(3), 357-374. 10.1177/0013164411422252
  26. Green S. B., Redell N., Thompson M. S., Levy R. (2016). Accuracy of revised and traditional parallel analyses for assessing dimensionality with binary data. Educational and Psychological Measurement, 76(1), 5-21. 10.1177/0013164415581898
  27. Guilford J. P. (1967). The nature of human intelligence. McGraw-Hill.
  28. Guilford J. P. (1975). Factors and factors of personality. Psychological Bulletin, 82(5), 802-814. 10.1037/h0077101
  29. Guilford J. P. (1988). Some changes in the structure-of-intellect model. Educational and Psychological Measurement, 48(1), 1-4. 10.1177/001316448804800102
  30. Guttman L. (1954). An outline of some new methodology for social research. Public Opinion Quarterly, 18(4), 395-404. 10.1086/266532
  31. Guttman L., Levy S. (1991). Two structural laws for intelligence tests. Intelligence, 15(1), 79-103. 10.1016/0160-2896(91)90023-7
  32. Guttman R., Greenbaum C. W. (1998). Facet theory: Its development and current status. European Psychologist, 3(1), 13-36. 10.1027/1016-9040.3.1.13
  33. Horn J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179-185. 10.1007/BF02289447
  34. Humphreys L. G., Ilgen D. R. (1969). Note on a criterion for the number of common factors. Educational and Psychological Measurement, 29(3), 571-578. 10.1177/001316446902900303
  35. Jäger A. O. (1982). Mehrmodale Klassifikation von Intelligenzleistungen: Experimentell kontrollierte Weiterentwicklung eines deskriptiven Intelligenzstrukturmodells [Multimodal classification of intelligence performance: Experimentally controlled development of a descriptive model of intelligence structure]. Diagnostica, 23, 195-225.
  36. Jäger A. O. (1984). Intelligenzstrukturforschung: Konkurrierende Modelle, neue Entwicklungen, Perspektiven [Structural research on intelligence: Competing models, new developments, perspectives]. Psychologische Rundschau, 35, 21-35.
  37. Jäger A. O., Süß H. M., Beauducel A. (1997). Berliner Intelligenzstruktur-Test. BIS-Test, Form 4 [Test for the Berlin Model of Intelligence Structure, Form 4]. Hogrefe.
  38. Leue A., Beauducel A. (in press). A facet theory approach for the psychometric measurement of conflict monitoring. Personality and Individual Differences. 10.1016/j.paid.2020.110479
  39. Lim S., Jahng S. (2019). Determining the number of factors using parallel analysis and its recent variants. Psychological Methods, 24(4), 452-467. 10.1037/met0000230
  40. Magnus J. R., Neudecker H. (2007). Matrix differential calculus with applications in statistics and econometrics (3rd ed.). Wiley.
  41. Schlesinger I. M., Guttman L. (1969). Smallest space analysis of intelligence and achievement tests. Psychological Bulletin, 71(2), 95-100. 10.1037/h0026868
  42. Schönemann P. H. (1966). A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31, 1-10. 10.1007/BF02289451
  43. Shye S. (1998). Modern facet theory: Content design and measurement in behavioral research. European Journal of Psychological Assessment, 14(2), 160-171. 10.1027/1015-5759.14.2.160
  44. Sternberg R. J. (1981). The evolution of theories of intelligence. Intelligence, 5(3), 209-230. 10.1016/S0160-2896(81)80009-8
  45. Süß H.-M., Beauducel A. (2005). Faceted models of intelligence. In Wilhelm O., Engle R. (Eds.), Understanding and measuring intelligence (pp. 313-332). Sage.
  46. Süß H.-M., Beauducel A. (2015). Modeling the construct validity of the Berlin intelligence structure model (BIS). Estudos de Psicologia/Psychological Studies, 32(1), 13-25. 10.1590/0103-166X2015000100002
  47. Süß H. M., Oberauer K., Wittmann W. W., Wilhelm O., Schulze R. (2002). Working-memory capacity explains reasoning ability – and a little bit more. Intelligence, 30(3), 261-288. 10.1016/S0160-2896(01)00100-3
  48. Timmerman M. E., Lorenzo-Seva U. (2011). Dimensionality assessment of ordered polytomous items with parallel analysis. Psychological Methods, 16(2), 209-220. 10.1037/a0023353
  49. Turner N. E. (1998). The effect of common variance and structure pattern on random data eigenvalues: Implications for the accuracy of parallel analysis. Educational and Psychological Measurement, 58(4), 541-568. 10.1177/0013164498058004001
  50. Weng L.-J., Cheng C.-P. (2005). Parallel analysis with unidimensional binary data. Educational and Psychological Measurement, 65(5), 697-716. 10.1177/0013164404273941
  51. Wheeler J. J. (1993). A facet model for the organisational decision making orientation of midlevel managers. Journal of Industrial Psychology, 19(3), 18-22. 10.4102/sajip.v19i3.562
  52. Yates A. (1987). Multivariate exploratory data analysis: A perspective on exploratory factor analysis. State University of New York Press.
  53. Zoski K. W., Jurs S. (1993). Using multiple regression to determine the number of factors to retain in factor analysis. Journal of Multiple Linear Regression Viewpoints, 20, 5-9.
  54. Zoski K. W., Jurs S. (1996). An objective counterpart to the visual scree test for factor analysis: The standard error scree. Educational and Psychological Measurement, 56(3), 443-451. 10.1177/0013164496056003006
  55. Zwick W. R., Velicer W. F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99(3), 432-442. 10.1037/0033-2909.99.3.432


Supplementary Materials

sj-pdf-1-epm-10.1177_0013164420982057 – Supplemental material for On the Detection of the Correct Number of Factors in Two-Facet Models by Means of Parallel Analysis

