Author manuscript; available in PMC: 2020 Nov 26.
Published in final edited form as: J Youth Adolesc. 2016 Jun 11;45(10):2094–2107. doi: 10.1007/s10964-016-0495-1

Agreement in Youth-Parent Perceptions of Parenting Behaviors: A Case for Testing Measurement Invariance in Reporter Discrepancy Research

Justin D Russell 1, Rebecca A Graham 2, Erin L Neill 3, Carl F Weems 3
PMCID: PMC7690213  NIHMSID: NIHMS794935  PMID: 27289553

Abstract

While conventional wisdom suggests that parents and their adolescent offspring will often disagree, the nature of discrepancies in informant reports of parenting behaviors is still unclear. This paper suggests testing measurement invariance in an effort to clarify if discrepancies in informant scores reflect true differences in perspectives on the same construct, or if the instrument is simply not measuring the same construct across parents and youth. The study provides an example by examining invariance and discrepancy across child, adolescent, and parent reports on the Alabama Parenting Questionnaire. The sample for this study was 255 youth (51.4% male) aged 6 to 17 years (Mage = 12.3 years) and an accompanying parent. A five-factor model of the measure was found to provide approximately equivalent measurement across four participant groups (children under 12 years, adolescents aged 12 to 18 years, and parents of each group, respectively). Latent mean levels of reported parenting constructs varied greatly across informants. Age moderated the association between reports of two subscales, Parental Involvement and Positive Parenting, such that adolescents were more consistent with parents. The findings highlight the utility of testing measurement invariance across informants prior to evaluating differences in their reports, and demonstrate the benefits of considering invariance in the larger conversation over informant discrepancies.

Keywords: informant discrepancy, parenting practices, measurement invariance, Alabama Parenting Questionnaire, agreement, alignment method

Introduction

A wealth of research indicates that parenting practices affect many areas of child development, and continue to influence children’s well-being across the lifespan (Belsky & de Haan, 2011; Bornstein, 2015; Morris, Cui, & Steinberg, 2013). Aspects of parenting and the parent-child relationship are associated with a wide range of positive and negative outcomes for children and teens (Aquilino & Supple, 2001). Recognition of the magnitude of parental influence on child development has resulted in numerous attempts to quantitatively assess parenting (Robinson, Mandleco, Olsen, & Hart, 2011; Shelton, Frick, & Wootton, 1996; Strayhorn & Weidman, 1988). Generally, the use of multi-informant data collection is considered best practice as it provides information from multiple perspectives across different situations (Hughes & Gullone, 2010; Richters, 1992; Silverman & Saavedra, 2004). Moreover, a vast body of research across constructs such as anxiety (Weems, Taylor, Marks, & Varela, 2010), interparental conflict (Davies, Martin, & Cicchetti, 2012), and conduct problems (Frick, Cornell, Barry, Bodin, & Dane, 2003) has shown that collecting both the parent and child’s report of parenting behaviors provides more insight into family and individual functioning than either report alone (see De Los Reyes, Thomas, Goodman, & Kundey, 2013).

While conventional wisdom suggests that parents and their adolescent offspring will always disagree, the nature of discrepancies in informant reports of parenting behaviors is not well understood. Research using multi-informant assessment regularly encounters a long-standing issue, namely disagreement among reporters. Generally, parents and children show poor agreement in their assessment of one another’s behavior (Achenbach, McConaughy, & Howell, 1987; De Los Reyes et al., 2012; De Los Reyes, Goodman, Kliewer, & Reid-Quinones, 2010; De Los Reyes & Kazdin, 2004, 2005). The discrepancy between reports has been the focus of decades of research and debate, with various theorists attributing the differences to combinations of methodological, contextual, or cognitive factors. We contribute to the discussion by describing the need to demonstrate measurement invariance across populations of study and provide an applied example across parent and youth reports on the Alabama Parenting Questionnaire (Frick, 1991; Shelton et al., 1996).

Discrepancies across Informant Reports

Informant discrepancy is commonly defined as the level of disagreement (or lack of overlap) between two people rating the same phenomenon. This inconsistency complicates integration of informants’ reports and may lead to inconsistencies in research findings or clinical interpretations (e.g., Casey & Berman, 1985; De Los Reyes & Kazdin, 2009). While informant discrepancy is found across many domains of psychology research, it has been a particular focus in developmental and family psychology, where investigators have generally found low levels of agreement between parent and child reports (De Los Reyes & Kazdin, 2004). In a landmark meta-analysis of 119 studies utilizing multiple informant assessment of child behaviors, Achenbach, McConaughy, and Howell (1987) reported a mean correspondence between parent and child reporters of r = .25. Recent work by De Los Reyes et al. (2015) updated these findings by compiling the results of all studies and meta-analyses from 1989 to 2014 that compared parent and child reports on internalizing and externalizing difficulties in youth. The trend originally described by Achenbach et al. has continued, in that parents and children generally show low-to-moderate agreement on child internalizing (r = .25) and externalizing (r = .30) problems.

As with other constructs, parents and children often disagree in their reports of parenting behaviors (De Los Reyes et al., 2010). In a study of 559 early adolescents and their caregivers, Guion and colleagues (2009) found that children reported a comparatively lower frequency of nurturing parent behavior, while parents reported a greater frequency of inconsistent and harsh discipline. In contrast, other studies have found evidence suggesting that parents tend to perceive themselves to be more supportive and accepting than do their children (Gaylord, Kitzmann, & Coleman, 2003; Tein, Roosa, & Michaels, 1994). While the nature of the discrepancy remains unknown, parents and children may be interpreting their interactions differently, or perhaps may not be defining parenting behaviors in the same manner (Tein et al., 1994).

Adolescence may be associated with changes in the discrepancy between parent and child informants (Tein et al., 1994). In their meta-analysis, Achenbach and colleagues (1987) found that interrater agreement was significantly greater for children 6 – 11 years of age than for adolescents (12 or older). However, in terms of parenting-type behaviors, Stuart and Jose (2012) found that discrepancies between parents’ and adolescents’ ratings of family dynamics increased over time. Specifically, progression through adolescence was associated with a widening gap between reports of family conflict. In comparison to younger adolescents and their parents, older teens reported substantially greater household conflict, while parents of older adolescents reported less. The authors further identified a consistent gap across adolescence in parent and teen reports of positivity in the family. In many ways, these findings are to be expected. Adolescence is a developmental period during which the parent-child relationship undergoes significant transformation, resulting in a shift of power and a more egalitarian parent-child relationship (De Goede, Branje, & Meeus, 2009). Adolescents strive for autonomy and individuation and spend progressively more time with peers and less time with family (Collins & Steinberg, 2008; Larson, Richards, Moneta, Holmbeck, & Duckett, 1996). A moderate level of parent-child conflict during this developmental period is generally considered normative, temporary, and functional in facilitating youth individuation as a separate entity from the family unit (Smetana, 1988). Such findings suggest the need to examine whether adolescence moderates the nature of parent-child discrepancies. Specifically, are perceptions of parenting different in children 6 – 11 years of age compared to adolescents (12 or older), and again compared to parents themselves?

Operationalizing informant discrepancies as the arithmetic difference between two informants’ scores is problematic (Edwards, 2002; Laird & De Los Reyes, 2013; Laird & Weems, 2011). De Los Reyes et al. (2013) published a comprehensive review of commonly used methods for treating multiple informant data and describe what they term a “grand discrepancy” in the methods researchers use to analyze cross-informant data, and in the assumptions made about the nature of the differences in responding across informants. For example, analyses involving multiple-informant data sets commonly use one of several procedures to synthesize responses into a unified variable. These may include the use of combinatorial rules for collapsing scores from several reporters into a single variable (e.g., use of the highest/lowest score), structural equation models where responses from different informants load on to a unitary latent variable, or, more commonly in controlled trials research, a priori selection of a single “superior” informant (out of many possible) whose report will singularly assess treatment efficacy.

While these methods are substantially different on the surface, at their core each requires the researcher to assume that informant discrepancies are ignorable artifacts of measurement (i.e., error). However, per De Los Reyes (2011), “…we should only accept the idea that informant discrepancies do not contain useful information when data exist to support this idea,” (p. 8). Indeed, researchers should identify the degree to which differences across informant responses reflect random variation in measurement, or rather reflect meaningful variation in the construct being measured. However, in practice, we find that the analyses necessary to resolve this dilemma are rarely carried out. In this article, we attempt to address the implications of this statement by testing measurement invariance across parents and their children in perceptions of parenting and examining if there are differences in invariance across children and adolescents.

The Case for Measurement Invariance

Differences between informant reports may be informative, but mainly to the extent that the investigator can empirically demonstrate that the assessment instrument has measured equivalent constructs in each individual. That is, do discrepancies in informant scores reflect true differences between informants, or rather, are these discrepancies a consequence of the measurement instrument measuring fundamentally different constructs? In their Operations Triad Model, De Los Reyes et al. (2013) describe the decision rules researchers must make in the course of multiple-informant research to determine the nature of the differences in informant reports. The current article adds an additional methodological component to this framework by demonstrating the core analyses necessary to inform such decisions. That is, we believe resolving questions about discrepancies in informant reports on a psychological measure requires analysis of that instrument’s measurement invariance across informant groups, such as parents and children. Horn and McArdle (1992) define the question of measurement invariance as, “whether or not, under different conditions of observing and studying phenomena, measurement operations yield measures of the same attribute,” (p. 117). While this statement has broad applicability across scientific measurement, in the current context, our “different conditions” are the informant populations completing a self-report instrument (our “measurement operation”), with our goal the determination of whether this instrument provides equivalent, “measures of the same attribute” or measurement invariance. Critically, in the absence of measurement invariance across groups, the basis for making cross-group comparisons is questionable, as the instrument may not be measuring the same construct in the two groups.

Ultimately, failing to test the assumption of equivalent measurement leads to ambiguous interpretation of results. Differences in sample means (e.g., parents versus youth) might represent true differences across populations, or may simply reflect differences in the manner of construct measurement across groups. This last point is of critical importance in the debate over the informative value of informant discrepancies. When measurement invariance is established, discrepancies in construct means cannot be discarded as a statistical ‘nuisance’, as recommended by Roberts and Caspi (2001), because these values represent true differences across groups. However, in the absence of measurement invariance, informant discrepancies have no meaning, and thus lack the informative value suggested by De Los Reyes and colleagues (De Los Reyes, Henry, Tolan, & Wakschlag, 2009).

Measurement invariance refers to consistency in measurement model parameters, that is, the loadings, intercepts, and residual variances of the items that define the latent construct assessed. It is most commonly evaluated by testing the comparative fit of a series of nested models that incrementally constrain sets of model parameters to equality across groups: (1) a configural invariance model, where an identical model form is applied to the data from each group, with all model parameters free to vary; (2) a metric invariance model that constrains factor loadings across groups; and (3) a scalar invariance model that further constrains item intercepts across groups. Metric or scalar invariance is confirmed when comparison with the less constrained parent model (configural or metric, respectively) reveals that the additional constraints do not cause a significant reduction in fit to the data. Full measurement invariance is said to occur when configural, metric, and scalar invariance hold (Brown, 2015), though some researchers assert that true measurement invariance further requires evaluating the fit of a model with residual variances constrained. In this article, we do not test this assumption and instead refer readers to Little (2013), who provides a highly accessible account of why equality of residual variances may not be theoretically justified, nor meaningful in measurement.
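In code, the model-comparison step described above reduces to a likelihood-ratio (chi-square difference) test between nested models. A minimal sketch in Python follows; the fit statistics in the usage example are hypothetical, and in practice SEM software such as Mplus reports these tests directly:

```python
from scipy.stats import chi2

def chi_sq_difference(chisq_constrained, df_constrained, chisq_free, df_free):
    """Chi-square difference test for two nested CFA models.

    The more constrained model (e.g., metric) is compared against its less
    constrained parent (e.g., configural); a significant result indicates
    that the added equality constraints significantly worsened fit.
    """
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    p_value = chi2.sf(delta_chisq, delta_df)  # upper-tail probability
    return delta_chisq, delta_df, p_value

# Hypothetical metric (constrained) vs. configural (free) fit statistics:
d_chisq, d_df, p = chi_sq_difference(120.0, 55, 110.0, 50)
```

If p exceeds the chosen alpha, the constrained model fits no worse than its parent, and the corresponding level of invariance is retained.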

In practice, researchers regularly fail to find full measurement invariance (Vandenberg & Lance, 2000). Indeed, it seems unreasonable to expect a measurement instrument to perform equivalently across all possible populations. However, Byrne, Shavelson, and Muthén (1989) have demonstrated that, in most cases, partial or approximate measurement invariance may be acceptable. That is, latent means may be compared across groups in models with parameters that cause tests for scalar or metric invariance to fail. These “offending” parameters are permitted to vary across groups in a partial invariance model. Identification of full or partial measurement invariance then permits analyses comparing latent mean differences across groups, which is generally thought to provide a more lucid understanding of between-group variation (Hancock, 1997; Thompson & Green, 2013; Green & Thompson, 2012). However, we are aware of very few publications that report testing measurement invariance across parent and child reports. Janssens and colleagues (2015) tested measurement invariance of a five-factor model of parent control behaviors underlying the Parental Behavior Scale (PBS; Van Leeuwen et al., 2013) across mothers, fathers, and their teens. The authors identified partial measurement invariance, suggesting that measurement was largely equivalent across informants. Dirks and colleagues (2014) found support for partial measurement invariance of the Screen for Child Anxiety Related Emotional Disorders across caregiver-child dyads. Their finding that the measurement of anxiety constructs was not fully equivalent across parent and child reports may be representative of a larger, unidentified issue in clinical multiple-informant assessment. Moreover, we refer readers to this publication as a template for analyses of measurement invariance with categorical item measures (our analyses use continuous items), which is, in our opinion, a neglected topic in the literature.

The Current Study

The current study examines the discrepancy between parent and offspring reports on parenting practices, using groups of youth over 12 years of age and under 12 years of age, as well as the parents of these respective groups. Initially, however, we test for measurement invariance, or the consistency of the five-factor model of the Alabama Parenting Questionnaire between parent, teen, and child reporters. First, we use structural equation modeling analyses to conduct hypothesis-based testing of the five-factor model fit to a sample comprised of both parents and children. Second, we evaluate measurement invariance across four groups of participants that, for brevity, we refer to as: children (under 12 years), teens (12 to 18 years), parents-children (parents of the child group), and parents-teens (parents of the teen group). Third, where full or partial measurement invariance is identified, we further evaluate the invariance of structural model components across groups. That is, we test for differences in the associations between factors of the Alabama Parenting Questionnaire across groups (i.e., the factor variance-covariance matrix). Fourth, and again assuming measurement invariance, we compare the measure’s constructs across types of reporters by analyzing differences in latent means. In conducting these analyses, we provide an applied example of the importance of considering measurement invariance in multiple informant research, while further expanding research into the psychometric foundation of the Alabama Parenting Questionnaire.

Drawing from the work of Dirks and colleagues (2014) who found support for partial measurement invariance across parent and child reports in an anxiety measure, we similarly hypothesize that the Alabama Parenting Questionnaire will demonstrate approximate (though not full) measurement invariance across informants. This hypothesis is further encouraged by the rarity with which researchers have found support for full measurement invariance across three or more groups (Asparouhov & Muthén, 2014). Moreover, given the theoretical changes in the manifestation of parenting constructs across development, as well as reporter, we further anticipate that factor variances and covariances will be largely inconsistent across reporters. Finally, and along the same reasoning, we hypothesize that latent factor means of all factors will show substantial variation across groups. Based on the work of Gaylord and colleagues (2003; Tein et al., 1994) we expect parents to reflect more positively on their behavior than children or teens.

Method

Participants

The sample for this study was composed of 255 youth (51.4% male) aged 6 to 17 years (Mage = 12.3 years) and an accompanying parent from 191 families (92.6% maternal parent). The ethnicity of the sample was 43.3% Caucasian, 37.8% African-American, 6.7% Hispanic, and 0.8% Asian, with 11.4% of other ethnic backgrounds. Family income was distributed as follows: less than $20,000 (36.9%), $20,000 - $49,999 (35.7%), and over $50,000 (26.3%); 1.2% did not report their family income.

Measure

Parenting behaviors were assessed using the parent and child versions of the Alabama Parenting Questionnaire (Frick, 1991; Shelton et al., 1996), a 42-item self-report measure that assesses parenting practices from child and parent reports across five domains: Positive Parenting, Parental Involvement, Poor Monitoring, Inconsistent Discipline, and Corporal Punishment, with the former two constructs positively worded, and the latter three negatively worded. Items are linguistically equivalent across parent and child forms, in that only the object of each sentence is changed to reference the child or the parents, respectively. Essau, Sasagawa, and Frick (2006) conducted a factor analysis of scores from a sample of 1,219 German school-children to identify a five factor model largely consistent with the originally hypothesized scale structure.

A number of previous studies have evaluated aspects of the reliability and validity of both the parent and child versions of the Alabama Parenting Questionnaire. For example, Shelton et al. (1996) reported that scales from the measure were generally uncorrelated with measures of a socially desirable response set for both the child report and parent report forms (r’s ranging from −0.01 to 0.23 across dimensions). Dadds, Maujean, and Fraser (2003) found good levels of test-retest reliability in an Australian community sample of 4 to 9 year old children using the parent report form across a 2-week period (r’s .84-.90). In the current study, internal consistency of subscale scores was quantified using hierarchical omega (Kelley & Pornprasertmanit, 2016), and is presented in Table 1.
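For readers unfamiliar with omega-type reliability coefficients, a minimal sketch of the computation from a fitted one-factor model is shown below. This is McDonald's omega total, a simpler relative of the hierarchical omega reported in Table 1 (which additionally separates a general factor and is bootstrapped); the loadings and residual variances here are hypothetical:

```python
def omega_total(loadings, residual_variances):
    """Omega total for a unidimensional scale:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual variances)."""
    common = sum(loadings) ** 2          # variance due to the common factor
    unique = sum(residual_variances)     # variance due to item uniquenesses
    return common / (common + unique)

# Four hypothetical items, each loading .70 with residual variance .51:
omega = omega_total([0.7, 0.7, 0.7, 0.7], [0.51, 0.51, 0.51, 0.51])
```

The confidence intervals in Table 1 come from recomputing the coefficient over 1,000 bootstrapped samples, as noted in the table footnote.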

Table 1.

Internal Consistency and Observed Score Means across Informants for Alabama Parenting Questionnaire Subscales

                                                Children (n=125)   Teens (n=130)   Parents (n=255)
Subscale                   ω (95% C.I.)a        M       SE         M       SE      M       SE
Positive Parenting         .83 (.79 - .85)      23.47   4.36       21.53   5.08    25.52   3.66
Parental Involvement       .81 (.78 - .84)      35.33   7.16       33.24   7.50    38.90   5.77
Inconsistent Discipline    .67 (.62 - .72)      13.60   4.38       14.40   4.06    14.55   4.16
Poor Monitoring            .80 (.76 - .83)      17.74   5.47       22.83   6.76    16.26   5.83
Corporal Punishment        .79 (.74 - .84)      6.01    3.17       4.14    1.90    5.06    2.36

a Computed from 1000 bootstrapped samples.

Note. ω = hierarchical omega coefficient.

Procedure

Data from this study were collected as part of a larger project collecting information about parents and children residing in a large metropolitan area in the Gulf South region (Scott & Weems, 2010). A university institutional review board approved all study protocols and procedures prior to data collection. Upon arrival, parents and children were greeted by laboratory staff and provided with an overview of the nature of the investigation. Informed consent was obtained from the parent and informed assent was obtained from the child. Parent and child completed a battery of questionnaires in separate, quiet rooms. When necessary, participants were assisted with reading comprehension by trained research assistants or graduate students (i.e., young participants were read the assessment battery by research assistants who closely monitored the child’s comprehension of the questions). At the conclusion of the overarching study, participants were debriefed (i.e., offered the opportunity to ask questions, provided information about local mental health resources) and provided with US$30 as compensation for their time.

Statistical Analyses

Analyses were undertaken to assess the measurement invariance of the five-factor measurement model of the Alabama Parenting Questionnaire across youth development. Given the volume of literature demonstrating that parents and their children tend to show limited agreement when assessing the same construct, we also chose to investigate the consistency of measurement across parent and child informants. Furthermore, we tested whether child age might act as a moderator of measurement invariance across reporters. Conducting these analyses therefore required us to evaluate measurement invariance across groups of children, teens, parents of children, and parents of teens.

When a measurement model retains the same structure or form (i.e., paths between items and factors) across groups, the scale it underlies is said to be configurally invariant. Configural invariance across parent and child participants, for example, indicates that item responses from each informant are governed by the same general constructs. That is, measurement models of the scale specific to each group share a common form; however, all other parameters (e.g., loadings, intercepts) are free to vary. Operationally, the assumption of configural invariance may be tested by conducting a multi-group confirmatory factor analysis (MGCFA) model with no parameters constrained. When this model is an acceptable fit to the data, the assumption is satisfied (Brown, 2015).

Configural invariance of the five-factor measurement model (Essau et al., 2006) was evaluated using Mplus v7.3 (Muthén & Muthén, 2012) with full-information maximum-likelihood estimation. Scaling for the indicators was set by fixing the variance of each latent variable to one. Criteria for acceptable model fit were specified as follows: comparative fit index (CFI) ≥ 0.90, root-mean-square error of approximation (RMSEA) ≤ .08, and standardized root-mean-square residual (SRMR) ≤ .10 (Cheung & Rensvold, 2002; Hu & Bentler, 1999; Kline, 2013). In keeping with common reporting standards for structural equation model analyses, the chi-square fit statistic is reported below, though it did not inform our decisions about model fit. Briefly, the chi-square test is highly sensitive to both sample size and model complexity, and on this basis is prone to rejecting adequately fitting models. Increasingly, researchers are turning to more practical fit indices (e.g., RMSEA, CFI) that are more robust to the peculiarities of individual models (Brown, 2015; Jöreskog, 1969; West, Taylor, & Wu, 2012). Therefore, fit decisions were based on the combined information provided by the CFI, RMSEA, and SRMR, thus offsetting the limitations of any individual statistic.
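The fit-criteria rule above can be expressed as a simple check. The thresholds are those stated in the text; the index values in the usage line are hypothetical:

```python
def meets_fit_criteria(cfi, rmsea, srmr):
    """Check each fit index against the study's thresholds:
    CFI >= .90, RMSEA <= .08, SRMR <= .10."""
    return {
        "CFI": cfi >= 0.90,
        "RMSEA": rmsea <= 0.08,
        "SRMR": srmr <= 0.10,
    }

# Hypothetical model: good RMSEA and SRMR, but CFI below threshold.
checks = meets_fit_criteria(cfi=0.88, rmsea=0.06, srmr=0.07)
```

Returning per-index results (rather than a single boolean) mirrors the study's practice of weighing the combined information from all three indices rather than letting one statistic decide.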

Full measurement invariance further requires that both metric and scalar invariance are achieved. When measurement invariance is known, researchers may more confidently assume that differences in latent variable means across groups represent true differences across populations, rather than inconsistencies in measurement. In practice, however, full measurement invariance is an uncommon finding. Rather, researchers are typically satisfied with models that show approximate equivalence (i.e., some loadings or intercepts free to vary; Byrne et al., 1989) or partial measurement invariance across groups. The frequency of this result has caused some investigators to question whether the requirements for metric invariance may be too stringent (Asparouhov & Muthén, 2014; Muthén & Asparouhov, 2014; van de Schoot et al., 2013), particularly when testing measurement invariance across more than two groups. Moreover, identification of partial measurement invariance commonly requires progressive manual testing of model changes suggested by modification indices, a process that becomes analytically cumbersome as the number of groups increases beyond two. The number of possible appropriately fitting partial invariance models grows with item count and sample size, and no guarantee is made that the selected model will represent the most parsimonious or theoretically sound solution (Asparouhov & Muthén, 2014).

Asparouhov and Muthén (2014; Muthén & Asparouhov, 2014) have recently presented a novel technique that uses an iterative algorithm to identify the most parsimonious partial invariance model. The alignment optimization method does not constrain parameters across groups, but instead attempts to identify the permutation of constraints that will allow for maximum invariance across groups, while minimizing loss of fit to the data. Similar estimation processes that attempt to maximize model simplicity and minimize factor overlap are commonly used in exploratory factor analysis (Asparouhov & Muthén, 2014). Though the evaluative process is markedly different from traditional model-comparison methods of invariance testing, the alignment method continues to provide estimates of intercepts, loadings, means, and variances across groups. Moreover, both methods share a common starting point, in that a user-specified configural model is tested across groups, with factor means and variances fixed at zero and one, respectively. In the alignment method, an unknown target model is assumed that holds partial measurement invariance across groups, while freeing the fewest parameters possible. The optimization algorithm initially uses a series of pairwise comparisons to determine the magnitude of parameter differences between groups. Model parameters are progressively allowed to vary until estimation converges on a partial invariance solution that demonstrates equivalent fit to the original configural model, while maximizing consistency across groups. As in traditional invariance testing, the assumption of measurement invariance may be qualitatively assessed according to the total number of non-invariant parameters.
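The pairwise-comparison idea at the heart of the alignment algorithm can be loosely illustrated as follows. This is a toy sketch of the flagging step only, not the actual optimization; the parameter names, group labels, estimates, and tolerance are all hypothetical:

```python
from itertools import combinations

def flag_noninvariant(params, tolerance=0.2):
    """Flag parameters whose group-specific estimates differ, for any pair
    of groups, by more than a tolerance. `params` maps parameter name ->
    {group: estimate}. Returns the names of flagged (non-invariant) parameters."""
    flagged = []
    for name, by_group in params.items():
        max_diff = max(abs(a - b) for a, b in combinations(by_group.values(), 2))
        if max_diff > tolerance:
            flagged.append(name)
    return flagged

# Hypothetical item intercepts estimated in three informant groups:
params = {
    "APQ-2 intercept": {"child": 1.00, "teen": 1.05, "parent": 1.10},
    "APQ-5 intercept": {"child": 1.00, "teen": 1.60, "parent": 1.10},
}
flagged = flag_noninvariant(params)
```

In the real algorithm, a loss function (not a fixed tolerance) trades off the number of freed parameters against fit to the configural model; the count of flagged parameters then serves as the qualitative index of invariance described above.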

Due to its benefits when testing for measurement invariance across several groups, we chose to use the alignment method to identify an approximate measurement invariance model. Approximate (alternatively, partial) measurement invariance is a prerequisite to testing for differences in the structural components of a model. Structural invariance testing has no bearing on the validity of measurements across groups, but can inform research by illuminating differences in the latent factors and their associations. Differences in factor variances and covariances were tested by examining whether constraining these parameters in the identified approximate invariance model resulted in poorer fit to the data. Finally, we used latent means difference testing (i.e., structured means modeling; see Hancock, 1997) to evaluate differences in latent variables. This approach is comparable to group-wise comparisons of observed variables, such as t-tests or analysis of variance, but may generally be superior: by operating in the structural equation modeling framework, a latent means comparison allows the researcher to parcel out measurement error, yielding a purer, more precise contrast of constructs across groups (Green & Thompson, 2012; Thompson & Green, 2013).
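Once a latent mean difference and its standard error are estimated, the contrast amounts to a Wald z-test of the difference against zero. A minimal sketch follows, with a hypothetical estimate and standard error (in practice, SEM software such as Mplus reports these contrasts directly):

```python
from math import erfc, sqrt

def latent_mean_contrast(mean_diff, se_diff):
    """Wald z-test of a latent mean difference against zero.
    Returns the z statistic and its two-sided p-value."""
    z = mean_diff / se_diff
    # Two-sided p from the standard normal: p = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    p = erfc(abs(z) / sqrt(2))
    return z, p

# Hypothetical latent mean difference of 1.96 with SE = 1.0:
z, p = latent_mean_contrast(1.96, 1.0)
```

Because the latent mean is free of item-level measurement error, the standard error of the difference is typically smaller than for the corresponding observed-score contrast, which is the source of the precision advantage noted above.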

Results

Descriptive statistics for observed scores on all subscales are presented in Table 1. Analyses began by conducting confirmatory factor analysis of the five-factor model described by Essau et al. (2006). In a preliminary step, we estimated the five-factor model separately in each group. Estimation encountered a non-positive definite covariance matrix between factors in child and parent reporters, and produced a non-significant negative residual variance estimate for item 38 in older youth. Models demonstrating such issues are referred to as “Heywood cases” (Chen, Bollen, Paxton, Curran, & Kirby, 2001; Wothke, 1993), where, due to the eccentricities of the model (e.g., improper specification), the estimation process will either fail to converge on a single solution, or do so in a manner that produces unreliable results. Generally, such models require re-specification. As suggested by Muthén (2005; Hox, 2010), the residual variance of item 38 was treated by fixing this parameter to zero. A revised model reached convergence in all groups without issue. A configural invariance model (i.e., identical model across groups) was tested next. All factor loadings were significant at the p < .01 level, with z-scores ranging from 2.88 to 18.34 (Table 2). Results from fit indices, χ2 (2204) = 3686.702, p < .001, CFI = 0.713, RMSEA = .076 (95% C.I.: .072, .080), SRMR = .092, partially supported the model’s fit to the data, in that while RMSEA and SRMR values suggested adequate fit, the CFI value did not. Based on these findings, we proceeded under the assumption of configural invariance.
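The screening step for Heywood cases described above can be sketched as a check for negative residual variance estimates; the item labels and estimate values below are hypothetical:

```python
def heywood_items(residual_variances):
    """Return labels of items whose estimated residual variance is negative,
    a hallmark of a Heywood case requiring model re-specification
    (e.g., fixing the offending parameter to zero)."""
    return [item for item, var in residual_variances.items() if var < 0]

# Hypothetical residual variance estimates from one group's CFA:
estimates = {"APQ-33": 0.41, "APQ-35": 0.30, "APQ-38": -0.02}
offending = heywood_items(estimates)
```

Each flagged item would then be handled as in the text, by fixing its residual variance to zero and re-estimating the model.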

Table 2.

Factor Structure and Measurement (Non-) Invariance of a Five-Factor Model of the Alabama Parenting Questionnaire across Informants

Item # | Item Content | b (SE) | Loading Invariance (Child, Teen, Parent-Child, Parent-Teen) | Intercept Invariance (Child, Teen, Parent-Child, Parent-Teen)
Corporal Punishment

APQ-33 Your parents spank you with their hand when you have done something wrong. 1.00 (0.00)
APQ-35 Your parents slap you when you have done something wrong. 1.02 (0.13)
APQ-38 Your parents hit you with a belt, switch, or object when you have done something wrong. 1.86 (0.20)

Inconsistent Discipline

APQ-3 Your parents threaten to punish you and then do not do it. 1.00 (0.00)
APQ-8 You talk your parents out of punishing you after you have done something wrong. 1.43 (0.19)
APQ-12 Your parents give up trying to get you to obey them because it's too much trouble. 0.88 (0.16)
APQ-22 Your parents let you out of a punishment early (like lift restrictions earlier than they originally said). 1.08 (0.13)
APQ-25 Your parents do not punish you when you have done something wrong. 0.79 (0.17)
APQ-31 The punishment your parents give depends on their mood. 0.89 (0.17)

Positive Parenting

APQ-2 Your parents tell you that you are doing a good job. 1.00 (0.00)
APQ-5 Your parents reward or give something extra to you for behaving well. 0.96 (0.09)
APQ-13 Your parents compliment you when you have done something well. 0.89 (0.06)
APQ-16 Your parents praise you for behaving well. 1.23 (0.08)
APQ-18 Your parents hug or kiss you when you have done something well. 1.28 (0.09)
APQ-27 Your parents tell you that they like it when you help out around the house. 1.00 (0.08)

Poor Monitoring

APQ-6 You fail to leave a note or let your parents know where you are going. 1.00 (0.00)
APQ-10 You stay out in the evening past the time you are supposed to be home. 0.99 (0.08)
APQ-17 Your parents do not know the friends you are with. 0.86 (0.10)
APQ-19 You go out without a set time to be home. 1.01 (0.11)
APQ-21 You go out after dark without an adult with you. 1.15 (0.12)
APQ-24 Your parents get so busy that they forget where you are and what you are doing. 0.61 (0.07)
APQ-28 You stay out later than you are supposed to and your parents don't know it. 0.79 (0.09)
APQ-29 Your parents leave the house and don't tell you where they are going. 0.53 (0.10)
APQ-30 You come home from school more than an hour past the time your parents expect you to be home. 0.73 (0.09)
APQ-32 You are at home without an adult being with you. 0.73 (0.11)

Parental Involvement

APQ-1 You have a friendly talk with your mom. 1.00 (0.00)
APQ-4 Your mom/dad helps with some of your special activities (such as sports, boy/girl scouts, church youth groups). 1.07 (0.11)
APQ-7 You play games or do other fun things with your mom/dad. 1.16 (0.09)
APQ-9 Your mom/dad asks you about your day in school. 0.92 (0.10)
APQ-11 Your mom/dad helps you with your homework. 1.09 (0.12)
APQ-14 Your mom/dad asks you what your plans are for the coming day. 1.00 (0.11)
APQ-15 Your mom/dad drives you to a special activity. 0.98 (0.11)
APQ-20 Your mom/dad talks to you about your friends. 1.14 (0.11)
APQ-26 Your mom/dad goes to a meeting at school, like a PTA meeting or parent/teacher conference. 1.09 (0.11)

✔ Model parameter is invariant across informants. ✖ Model parameter is non-invariant across informants.

Note. APQ = Alabama Parenting Questionnaire.

The alignment method was used to evaluate the measurement invariance of the six-factor measurement model. In accordance with our earlier results, the residual variance of item 38 was fixed at zero. This revised configural invariance model was specified as the starting model for estimation, which identified a partial invariance model exhibiting a negligible difference in fit. This model constrains all factor loadings to invariance, while allowing the intercepts for seven items to vary in one or more groups (Table 2). While this model is negligibly different in fit from the configural baseline, it demonstrated equivalence across only two of three reporters. Intercepts for these items were generally equivalent for two, but not three, groups, with the exception of item 20, “Your mom/dad talks to you about your friends” (Parental Involvement), for which no groups held a common intercept. Non-invariant parameters were distributed across latent factors, and no clear cause for their non-equivalence could be identified.

Next, we tested the structural invariance of the factor model. Specifically, we tested for differences in factor variances and covariances across reporters by comparing the fit of models constraining these parameters across groups to a baseline model that did not. Factor variances or covariances were deemed consistent across groups when constraining them did not reduce fit to the data. First, the fit of the approximate invariance model identified above was compared to a model that also constrained factor variances across groups. Constraining variances significantly worsened fit to the data, Δχ2 (15) = 63.568, p < .001, suggesting that factor variances might differ across informants. A revised model was estimated that allowed the variance of the Corporal Punishment factor to differ across groups. This model was negligibly different in fit from the unconstrained baseline, Δχ2 (12) = 18.753, p = .095, confirming that the remaining factor variances are generally the same in each informant group. Adding constraints that forced covariances between factors to be equal (except those involving Corporal Punishment) did not reduce fit to the data, Δχ2 (18) = 16.657, p = .547, implying that covariances do not significantly vary across the four types of informants. Factor correlations and variances for the full sample are presented in Table 3, with invariant parameters indicated.
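The chi-square difference (likelihood-ratio) tests above can be checked directly from the reported Δχ² and Δdf values. A sketch using scipy (this simple difference test assumes standard maximum-likelihood χ² values for nested models; robust or scaled χ² values require a corrected difference test):

```python
from scipy.stats import chi2

def chi2_diff_test(delta_chi2, delta_df):
    """p-value for a chi-square difference test of nested models."""
    return chi2.sf(delta_chi2, delta_df)

# Values reported in the text:
p_var_all = chi2_diff_test(63.568, 15)      # constraining all factor variances
p_var_partial = chi2_diff_test(18.753, 12)  # freeing the Corporal Punishment variance
p_cov = chi2_diff_test(16.657, 18)          # constraining factor covariances
print(p_var_all, p_var_partial, p_cov)      # < .001, ~.095, ~.547
```

A significant p-value means the added constraints worsened fit, so the constrained parameters likely differ across groups; a non-significant result supports treating them as equal.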

Table 3.

Factor Correlations and Variances in the Alabama Parenting Questionnaire

Factor 1. 2. 3. 4. 5.
1. Positive Parenting 0.43*
2. Parental Involvement 0.88* 0.36*
3. Inconsistent Discipline 0.03 0.13 0.32*
4. Poor Monitoring −0.33* −0.27* 0.56* 0.49*
5. Corporal Punishment −0.01 −0.07 0.02 0.10 0.32*

Note. Variances presented along the diagonal. Bolded parameter values are invariant across informants.

*

p < .001.

Having shown that the Alabama Parenting Questionnaire meets the assumption of partial measurement invariance across informants, we were free to test whether youth age moderated agreement. For these analyses, reports were separated by parent or youth source, and mean scores were computed for each subscale. Separate analyses were conducted for each construct, with youth report as the dependent variable predicted by parent report, youth age, and an age-by-parent-report interaction term. The interaction term was significant in the analyses of Parental Involvement and Positive Parenting, signifying that age moderated parent-youth agreement on these constructs (Table 4). For both constructs, the interaction was decomposed by testing the conditional association between reports for youth at ages one standard deviation above (15.50 years) and below (9.05 years) the mean (12.28 years). Figures 1 and 2 illustrate the effect of age on the association between parent and child reports of Parental Involvement and Positive Parenting, respectively.
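The simple-slopes decomposition can be sketched from the Table 4 coefficients. This sketch assumes youth age entered the model mean-centered, which the text does not state explicitly; the ±1 SD ages (15.50 and 9.05) imply an age SD of roughly 3.22 years.

```python
# Simple-slopes decomposition of the age-by-parent-report interaction,
# using the Parental Involvement coefficients from Table 4.
# Assumption: youth age was mean-centered in the regression model.
b_parent = 0.48        # parent-report main effect
b_interaction = 0.05   # parent-report x age interaction
age_mean, age_sd = 12.28, 3.22  # +/-1 SD ~= 15.50 / 9.05 in the text (rounding)

def conditional_slope(age):
    # Slope of youth report on parent report at a given age.
    return b_parent + b_interaction * (age - age_mean)

slope_older = conditional_slope(age_mean + age_sd)    # stronger agreement for teens
slope_younger = conditional_slope(age_mean - age_sd)  # weaker agreement for children
print(slope_younger, slope_older)
```

The positive interaction coefficient means the parent-youth association strengthens by about 0.05 per year of youth age, which is what Figures 1 and 2 depict.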

Table 4.

Regression Analyses Testing Age as a Moderator of the Agreement between Parent and Youth Reports of Parental Involvement and Positive Parenting

b (S.E.) | t | Semipartial
Parental Involvement (Youth Report)
 Youth Age −0.08 (0.14) 0.56 −.03
 Parental Involvement (Parent Report) 0.48 (0.08) 6.29** .38
 Parental Involvement (Parent Report) by Youth Age 0.05 (0.02) 2.18* .13


Positive Parenting (Youth Report)
 Youth Age −0.11 (0.09) 1.22 −.07
 Positive Parenting (Parent Report) 0.48 (0.08) 5.89** .35
 Positive Parenting (Parent Report) by Youth Age 0.05 (0.03) 2.09* .12
*

p < .05,

**

p < .01.

Figure 1.

Figure 1

Youth age as a moderator of agreement between parent and youth reports on Alabama Parenting Questionnaire sub-scale Parental Involvement.

Figure 2.

Figure 2

Youth age as a moderator of agreement between parent and youth reports on Alabama Parenting Questionnaire sub-scale Positive Parenting.

We further tested for differences in measured constructs across reporters by contrasting their factor means. This is functionally equivalent to simple means difference testing with observed scores (e.g., ANOVA), and may be interpreted similarly. To enhance the interpretability of our results, the latent means of a reference group (children) were fixed at zero. As a result, values for all other informants reflect their distance from the child group (zero), and may be evaluated using test statistics. All constructs differed in some fashion across groups (mean differences are shown in Table 5), and the differences described here were significant at the p < .05 level. The most notable discrepancies were found in reports of Positive Parenting, where parents of children reported the highest levels, followed by parents of teens, while children and then teens reported experiencing lower levels in comparison. Parents of both children and teens reported more Parental Involvement than either child or teen informants, though this difference was only significant for teens. Conversely, teens and children both reported significantly higher levels of Poor Monitoring than did their respective parents. Finally, teens and their parents reported significantly lower levels of Corporal Punishment than did children or their parents.
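As a rough illustration (not an analysis the authors report), a latent mean difference from Table 5 can be expressed in factor standard-deviation units using the full-sample variances in Table 3, yielding a Cohen's-d-like effect size. Group-specific variances would be more appropriate where variances differed across groups (e.g., Corporal Punishment).

```python
import math

# Full-sample factor variances from Table 3 (diagonal).
factor_var = {
    "Positive Parenting": 0.43,
    "Parental Involvement": 0.36,
    "Inconsistent Discipline": 0.32,
    "Poor Monitoring": 0.49,
    "Corporal Punishment": 0.32,
}

def standardized_diff(latent_mean_diff, factor):
    # Express a latent mean difference (relative to the child reference
    # group, fixed at 0) in factor SD units -- a rough effect-size analogue.
    return latent_mean_diff / math.sqrt(factor_var[factor])

# Teens' Positive Parenting latent mean of -0.55 (Table 5) in SD units:
d_teen_pp = standardized_diff(-0.55, "Positive Parenting")
print(round(d_teen_pp, 2))  # about -0.84, a large difference in conventional terms
```

This illustrates why the Positive Parenting discrepancies are described as the most notable: the teen-child gap alone approaches one factor standard deviation.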

Discussion

At its core, the debate over the meaning researchers give to differences across informants’ reports of the same phenomenon comes down to disagreement over whether those discrepancies are a function of the informants or of the instruments used to measure them. Psychometricians have framed this issue as one of measurement invariance: determining whether an instrument performs equivalently across one or more contexts or (in the current case) types of informants. The current study describes how measurement invariance analyses may enhance understanding of the nature of informant discrepancy, by providing evidence that discrepancies in reports represent different perspectives on the same construct rather than measurement differences such that the instrument is measuring fundamentally different constructs. The modified five-factor model of the Alabama Parenting Questionnaire demonstrated approximate measurement invariance across informant populations. Specifically, loadings were invariant across reporters, though several item intercepts were not. Using the alignment method proposed by Asparouhov and Muthén (2014), we were able to identify a partially invariant measurement model, allowing us to make meaningful comparisons between informants by testing differences in their latent means, as well as to test for moderation of agreement by developmental context. Results from these analyses suggest that, when appropriately compared, parents, teens, and children report markedly different views of parenting practices.

Broadly, we believe our findings contribute to knowledge on two fronts. First, our results expand the psychometric research underlying the Alabama Parenting Questionnaire by evaluating its invariance across three types of reporters. The instrument is an increasingly popular choice for researchers who wish to investigate parenting practices using multi-informant methodology; however, we are aware of no prior research examining the consistency of its measurement across informants. Second, we present an applied example of the importance of testing measurement invariance across informants, particularly parents and children. Our results build on the previous work of Dirks et al. (2014) and Janssens et al. (2015) by explicating that conceptual similarity across informants or shared item content across forms is not enough. Prior to comparing mean levels of any construct, researchers should take care to ensure that the same constructs have been captured in all groups by testing the invariance of their measurements. When the equivalence of measurements is unknown, findings from cross-group comparisons are inherently weaker, as results can be attributable either to true population differences or to simple measurement differences. In the current study, by identifying approximate measurement invariance across informants, we confirm that differences in observed subscale means across informants are indeed representative of true differences among parents, teens, and children. Based on this result, comparisons between parent and youth reports on observed Alabama Parenting Questionnaire subscale scores may be appropriate, as our findings demonstrate such a contrast would be made between largely equivalent constructs.

Identification of an approximate invariance model allowed us to compare latent means of Alabama Parenting Questionnaire factors across informants. Generally, we found that parents, teens, and children differ substantially in their reports of parenting practices. Overall, parents appeared to judge their parenting practices more positively, while teens were most negative in their assessments, confirming one of our hypotheses. This finding aligns with research suggesting that youth become more critical of parenting practices and family dynamics during adolescence (Butner et al., 2009; Stuart & Jose, 2012), when normative conflict between parent and teen begins to occur as part of the process of individuation (Smetana, 1988). Parents, however, may be more likely to judge their practices positively due to their position of power in relation to youth. Theoretically, children and teens may tend to focus on those aspects of parenting that, while appropriate, are less enjoyable for youth (such as parental monitoring or discipline). In addition, when examining covariance among latent variables, we found that the variance of (and therefore covariance with) Corporal Punishment was inconsistent across groups. We attribute this to likely differences in the application and importance of corporal punishment as children move through development (Gershoff, 2002).

With the assumption of approximate measurement invariance satisfied, we were free to test whether developmental context moderated the consistency of parent and youth reports. Results demonstrated that age did indeed moderate agreement on two subscales, Parental Involvement and Positive Parenting. Interestingly, and contrary to our hypotheses, for both constructs the effect was such that agreement was higher among adolescents than younger children. We theorize that cognitive development may play a role here, as teens might be developing greater insight into the motivations underlying their parents’ behavior (e.g., the desire to be a positive role model or to be involved in the child’s life).

Because full measurement invariance of the instrument was not observed, we recommend researchers take care to interpret results from some items with caution. In particular, we note that the intercept for item 20 (“Your mom/dad talks to you about your friends”) was non-invariant across all reporters. It may be that parents, teens, and children understand this item differently. Parents may be having different conversations with teens versus children about friends and relationships, and may perceive those conversations differently than youth. Thus, conceptualization of this item may be influenced by the evolving nature of parental involvement across development.

While the current study adds to the debate over the nature of informant discrepancies, the current research is merely a beginning. There is a need to expand tests of measurement invariance both to additional measures and to additional criteria (e.g., gender, SES). Moreover, there is a need to demonstrate measurement invariance when informant discrepancies are used as predictors. For example, a growing body of research uses differences in informant reports to predict mental health issues such as disruptive behavior (De Los Reyes et al., 2009) and social anxiety (De Los Reyes, Bunnell, & Beidel, 2013), as well as global severity of mental health concerns (De Los Reyes, Alfano, Lau, Augenstein, & Borelli, 2016). By definition, invariant measures provide a more accurate picture of true differences across informants, and as a result, these differences should be more reliable predictors of a hypothetical outcome. Future work might compare the predictive validity of informant differences drawn from invariant versus non-invariant measures. We posit that such research will further demonstrate the importance of analyses testing whether instruments provide equivalent measurement across contexts.

Conclusions

De Los Reyes et al. (2013) provide an overview of the ongoing discussion in the literature regarding the analytical treatment of informant discrepancies, their potential predictive utility, and even their relevance. For example, some scholars operate as though discrepancies in measurement are the result of random error and can be safely ignored (Roberts & Caspi, 2001). Others have taken the opposite approach, noting, for example, the predictive value of differences between parent and child reports for future development (Mash & Hunsley, 2005). Critically, both approaches have typically been based on an untested assumption about the nature of these differences. That is, investigators often do not test whether discrepancies truly represent differences in perspectives across informants, or are instead a function of the instrument measuring fundamentally different constructs. We suggest that confidence can be gained by testing measurement invariance to determine whether the constructs measured in each group are truly comparable.

Table 5.

Latent Mean Differences on Alabama Parenting Questionnaire Subscales across Informants

Informant | Corporal Punishment | Inconsistent Discipline | Positive Parenting | Poor Monitoring | Parental Involvement
Children | 0.00a | 0.00a | 0.00 | 0.00a | 0.00a
Teens | −0.71b | 0.30a,b | −0.55 | 0.70 | −0.56
Parents - Children | −0.02a | 0.29a,b | 0.85 | −0.63 | 0.25a,b
Parents - Teens | −0.53b | 0.43b | 0.47 | −0.08a | 0.38b

Note. Means in the same row that do not share subscripts differ at p < .05.

a Reference group.

Acknowledgments

Funding

The current research was supported by grants from the National Institute of Mental Health (MH067572) and Institute of Mental Hygiene, as well as an institutional research grant awarded to CFW.

Biographies

Justin D. Russell is a doctoral student in the Department of Psychology at Iowa State University. His research uses a multi-method approach to illuminate the interweaving pathways between early childhood trauma, socioemotional development, and the ontogeny of severe emotional and behavioral disorders. He is particularly interested in applications of this research to inform treatment of youth in juvenile justice and residential treatment contexts.

Rebecca A. Graham is a research associate at Louisiana State University Health Sciences Center. She received her doctorate in applied developmental psychology from the University of New Orleans in 2015. Dr. Graham’s research has examined the interrelated nature of parent and child disorders, particularly anxiety and traumatic stress. She is further interested in elucidating the mechanisms underlying familial effects of psychopathology, and using this knowledge to inform treatment with youth and their families.

Erin L. Neill is a doctoral student in the Department of Human and Family Studies at Iowa State University. She received her MSW from the University of Pennsylvania in 2009. Her research examines the mechanisms underlying treatment for post-traumatic stress disorder in youth, particularly in terms of the factors moderating its efficacy. More broadly, she is interested in delineating the pathways of disordered emotional development among children.

Carl F. Weems is professor and chair of the Department of Human and Family Studies at Iowa State University. He received his doctorate in lifespan developmental psychology in 1999 from Florida International University. Dr. Weems research has broadly focused on developmental models of emotion development and dysregulation. He is particularly interested in advancing knowledge related to empirical measurement of youth psychology, as well as applied research related to intervention and prevention efforts with psychopathology.

Footnotes

Submitted for publication in Journal of Youth and Adolescence, and inclusion in the upcoming special issue, “Discrepancies in Adolescent-Parent Perceptions of the Family and Adolescent Adjustment.”

Authors’ Contributions

JDR wrote the paper with assistance from RAG and CFW, who developed an early draft. RAG and CFW jointly developed the initial idea for this manuscript, which JDR then expanded. JDR conducted the statistical analyses. ELN provided critical revisions. CFW conducted the study, consulted on analyses, and provided revisions. All authors approve the submission of this manuscript.

Conflicts of Interest

The authors report no conflicts of interest.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent

Informed consent was obtained from participating parents, while informed assent was obtained from youth.

References

  1. Achenbach TM, McConaughy SH, Howell CT. Child/adolescent behavioral and emotional problems: implications of cross-informant correlations for situational specificity. Psychological Bulletin. 1987;101(2):213–232. doi: 10.1037/0033-2909.101.2.213. [DOI] [PubMed] [Google Scholar]
  2. Aquilino WS, Supple AJ. Long-term effects of parenting practices during adolescence on well-being outcomes in young adulthood. Journal of Family Issues. 2001;22(3):289–308. doi: 10.1177/019251301022003002. [DOI] [Google Scholar]
  3. Asparouhov T, Muthén BO. Multiple-group factor analysis alignment. Structural Equation Modeling: A Multidisciplinary Journal. 2014;21(4):495–508. doi: 10.1080/10705511.2014.919210. [DOI] [Google Scholar]
  4. Belsky J, de Haan M. Annual Research Review: Parenting and children’s brain development: the end of the beginning. Journal of Child Psychology and Psychiatry, and Allied Disciplines. 2011;52(4):409–428. doi: 10.1111/j.1469-7610.2010.02281.x. [DOI] [PubMed] [Google Scholar]
  5. Bornstein MH. In: Handbook of child psychology and developmental science: Ecological settings and processes. 7th Bornstein MH, Leventhal T, editors. Vol. 4. Wiley; Hoboken, NJ: 2015. [Google Scholar]
  6. Brown TA. Confirmatory factor analysis for applied research. 2nd Guilford Press; New York: 2015. [Google Scholar]
  7. Butner J, Berg CA, Osborn P, Butler JM, Godri C, Fortenberry KT, Wiebe DJ. Parent-adolescent discrepancies in adolescents’ competence and the balance of adolescent autonomy and adolescent and parent well-being in the context of Type 1 diabetes. Developmental Psychology. 2009;45(3):835–849. doi: 10.1037/a0015363. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Byrne BM, Shavelson RJ, Muthén BO. Testing for the equivalence of factor covariance and mean structures: The issue of partial measurement invariance. Psychological Bulletin. 1989;105(3):456–466. doi: 10.1037/0033-2909.105.3.456. [DOI] [Google Scholar]
  9. Casey RJ, Berman JS. The outcome of psychotherapy with children. Psychological Bulletin. 1985;98(2):388–400. doi: 10.1037/0033-2909.98.2.388. [DOI] [PubMed] [Google Scholar]
  10. Chen F, Bollen KA, Paxton P, Curran PJ, Kirby JB. Improper solutions in structural equation models causes, consequences, and strategies. Sociological Methods & Research. 2001;29(4):468–508. doi: 10.1177/0049124101029004003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Cheung GW, Rensvold RB. Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling. 2002;9(2):233–255. doi: 10.1207/S15328007SEM0902_5. [DOI] [Google Scholar]
  12. Collins WA, Steinberg L. Adolescent development in interpersonal context. In: Damon W, Lerner RM, editors. Child and adolescent development: An advanced course. Wiley; Hoboken, NJ: 2008. pp. 551–592. [Google Scholar]
  13. Dadds M, Maujean A, Fraser J. Parenting and conduct problems in children: Australian data and psychometric properties of the Alabama Parenting Questionnaire. Australian Psychologist. 2003;38(3):238–241. doi: 10.1080/00050060310001707267. [DOI] [Google Scholar]
  14. Davies PT, Martin MJ, Cicchetti D. Delineating the sequelae of destructive and constructive interparental conflict for children within an evolutionary framework. Developmental Psychology. 2012;48(4):939–955. doi: 10.1037/a0025899. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. De Goede IHA, Branje SJT, Meeus WHJ. Developmental changes in adolescents’ perceptions of relationships with their parents. Journal of Youth and Adolescence. 2009;38(1):75–88. doi: 10.1007/s10964-008-9286-7. [DOI] [PubMed] [Google Scholar]
  16. De Los Reyes A. Introduction to the special section: More than measurement error: Discovering meaning behind informant discrepancies in clinical assessments of children and adolescents. Journal of Clinical Child & Adolescent Psychology. 2011;40(1):1–9. doi: 10.1080/15374416.2011.533405. [DOI] [PubMed] [Google Scholar]
  17. De Los Reyes A, Alfano CA, Lau S, Augenstein TM, Borelli JL. Can we use convergence between caregiver reports of adolescent mental health to index severity of adolescent mental health concerns? Journal of Child and Family Studies. 2016;25(1):109–123. doi: 10.1007/s10826-015-0216-5. [DOI] [Google Scholar]
  18. De Los Reyes A, Augenstein TM, Wang M, Thomas SA, Drabick DAG, Burgers DE, Rabinowitz J. The validity of the multi-informant approach to assessing child and adolescent mental health. Psychological Bulletin. 2015;141(4):858–900. doi: 10.1037/a0038498. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. De Los Reyes A, Bunnell BE, Beidel DC. Informant discrepancies in adult social anxiety disorder assessments: Links with contextual variations in observed behavior. Journal of Abnormal Psychology. 2013;122(2):376–386. doi: 10.1037/a0031150. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. De Los Reyes A, Ehrlich KB, Swan AJ, Luo TJ, Wie MV, Pabón SC. An experimental test of whether informants can report about child and family behavior based on settings of behavioral expression. Journal of Child and Family Studies. 2012;22(2):177–191. doi: 10.1007/s10826-012-9567-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. De Los Reyes A, Goodman KL, Kliewer W, Reid-Quinones K. The longitudinal consistency of mother–child reporting discrepancies of parental monitoring and their ability to predict child delinquent behaviors two years later. Journal of Youth and Adolescence. 2010;39(12):1417–1430. doi: 10.1007/s10964-009-9496-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. De Los Reyes A, Henry DB, Tolan PH, Wakschlag LS. Linking informant discrepancies to observed variations in young children’s disruptive behavior. Journal of Abnormal Child Psychology. 2009;37(5):637–652. doi: 10.1007/s10802-009-9307-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. De Los Reyes A, Kazdin AE. Measuring informant discrepancies in clinical child research. Psychological Assessment. 2004;16(3):330–334. doi: 10.1037/1040-3590.16.3.330. [DOI] [PubMed] [Google Scholar]
  24. De Los Reyes A, Kazdin AE. Informant discrepancies in the assessment of childhood psychopathology: a critical review, theoretical framework, and recommendations for further study. Psychological Bulletin. 2005;131(4):483–509. doi: 10.1037/0033-2909.131.4.483. [DOI] [PubMed] [Google Scholar]
  25. De Los Reyes A, Kazdin AE. Identifying evidence-based interventions for children and adolescents using the range of possible changes model: A meta-analytic illustration. Behavior Modification. 2009;33(5):583–617. doi: 10.1177/0145445509343203. [DOI] [PubMed] [Google Scholar]
  26. De Los Reyes A, Thomas SA, Goodman KL, Kundey SMA. Principles underlying the use of multiple informants’ reports. Annual Review of Clinical Psychology. 2013;9:123–149. doi: 10.1146/annurev-clinpsy-050212-185617. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Dirks MA, Weersing VR, Warnick E, Gonzalez A, Alton M, Dauser C, Woolston J. Parent and youth report of youth anxiety: evidence for measurement invariance. Journal of Child Psychology and Psychiatry. 2014;55(3):284–291. doi: 10.1111/jcpp.12159. [DOI] [PubMed] [Google Scholar]
  28. Edwards JR. Alternatives to difference scores: Polynomial regression and response surface methodology. In: Drasgow F, Schmitt, editors. Measuring and analyzing behavior in organizations: Advances in measurement and data analysis. Jossey-Bass; San Francisco, CA: 2002. pp. 350–400. [Google Scholar]
  29. Essau CA, Sasagawa S, Frick PJ. Psychometric properties of the Alabama parenting questionnaire. Journal of Child and Family Studies. 2006;15(5):595–614. doi: 10.1007/s10826-006-9036-y. [DOI] [Google Scholar]
  30. Frick PJ. Alabama Parenting Questionnaire. University of Alabama; 1991. Retrieved from http://sites01.lsu.edu/faculty/pfricklab/apq/ [Google Scholar]
  31. Frick PJ, Cornell AH, Barry CT, Bodin SD, Dane HE. Callous-unemotional traits and conduct problems in the prediction of conduct problem severity, aggression, and self-report of delinquency. Journal of Abnormal Child Psychology. 2003;31(4):457–470. doi: 10.1007/s10648-005-5728-9. [DOI] [PubMed] [Google Scholar]
  32. Gaylord N, Kitzmann K, Coleman J. Parents’ and children’s perceptions of parental behavior: Associations with children’s psychosocial adjustment in the classroom. Parenting Science and Practice. 2003;3(1):23–47. doi: 10.1207/S15327922PAR0301_02. [DOI] [Google Scholar]
  33. Gershoff ET. Corporal punishment by parents and associated child behaviors and experiences: a meta-analytic and theoretical review. Psychological Bulletin. 2002;128(4):539–579. doi: 10.1037/0033-2909.128.4.539. [DOI] [PubMed] [Google Scholar]
  34. Green SB, Thompson MS. A flexible structural equation modeling approach for analyzing means. In: Hoyle RH, editor. Handbook of Structural Equation Modeling. Guilford Press; New York, NY: 2012. pp. 393–416. [Google Scholar]
  35. Guion K, Mrug S, Windle M. Predictive value of informant discrepancies in reports of parenting: relations to early adolescents’ adjustment. Journal of Abnormal Child Psychology. 2009;37(1):17–30. doi: 10.1007/s10802-008-9253-5. [DOI] [PubMed] [Google Scholar]
  36. Hancock GR. Structural equation modeling methods of hypothesis testing of latent variable means. Measurement and Evaluation in Counseling and Development. 1997;30(2):91–105. [Google Scholar]
  37. Horn JL, McArdle JJ. A practical and theoretical guide to measurement invariance in aging research. Experimental Aging Research. 1992;18(3):117–144. doi: 10.1080/03610739208253916. [DOI] [PubMed] [Google Scholar]
  38. Hox JJ. Multilevel analysis: Techniques and applications. 2nd Routledge; New York: 2010. [Google Scholar]
  39. Hughes EK, Gullone E. Discrepancies between adolescent, mother, and father reports of adolescent internalizing symptom levels and their association with parent symptoms. Journal of Clinical Psychology. 2010;66(9):978–995. doi: 10.1002/jclp.20695. [DOI] [PubMed] [Google Scholar]
  40. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal. 1999;6(1):1–55. doi: 10.1080/10705519909540118. [DOI] [Google Scholar]
  41. Janssens A, Goossens L, Van Den Noortgate W, Colpin H, Verschueren K, Van Leeuwen K. Parents’ and adolescents’ perspectives on parenting: Evaluating conceptual structure, measurement invariance, and criterion validity. Assessment. 2015;22(4):473–489. doi: 10.1177/1073191114550477. [DOI] [PubMed] [Google Scholar]
  42. Jöreskog KG. A general approach to confirmatory maximum likelihood factor analysis. Psychometrika. 1969;34(2):183–202. doi: 10.1007/BF02289343.
  43. Kelley K, Pornprasertmanit S. Confidence intervals for population reliability coefficients: Evaluation of methods, recommendations, and software for composite measures. Psychological Methods. 2016;21(1):69–92. doi: 10.1037/a0040086.
  44. Kline R. Exploratory and confirmatory factor analysis. In: Petscher Y, Schatschneider C, Compton DL, editors. Applied quantitative analysis in the social sciences. Routledge; New York, NY: 2013. pp. 171–207.
  45. Laird RD, De Los Reyes A. Testing informant discrepancies as predictors of early adolescent psychopathology: Why difference scores cannot tell you what you want to know and how polynomial regression may. Journal of Abnormal Child Psychology. 2013;41(1):1–14. doi: 10.1007/s10802-012-9659-y.
  46. Laird RD, Weems CF. The equivalence of regression models using difference scores and models using separate scores for each informant: Implications for the study of informant discrepancies. Psychological Assessment. 2011;23(2):388–397. doi: 10.1037/a0021926.
  47. Larson RW, Richards MH, Moneta G, Holmbeck G, Duckett E. Changes in adolescents’ daily interactions with their families from ages 10 to 18: Disengagement and transformation. Developmental Psychology. 1996;32(4):744–754. doi: 10.1037/0012-1649.32.4.744.
  48. Little TD. Longitudinal structural equation modeling. Guilford Press; New York, NY: 2013.
  49. Mash EJ, Hunsley J. Evidence-based assessment of child and adolescent disorders: Issues and challenges. Journal of Clinical Child & Adolescent Psychology. 2005;34(3):362–379. doi: 10.1207/s15374424jccp3403_1.
  50. Morris AS, Cui L, Steinberg L. Parenting research and themes: What we have learned and where to go next. In: Larzelere RE, Morris AS, Harrist AW, editors. Authoritative parenting: Synthesizing nurturance and discipline for optimal child development. American Psychological Association; Washington, DC: 2013. pp. 35–58.
  51. Muthén BO. Negative residual variance. Mplus Discussion-Structural Equation Modeling. 2005 Jan 20; Retrieved from http://www.statmodel.com/discussion/messages/11/555.html?1358188287.
  52. Muthén BO, Asparouhov T. IRT studies of many groups: the alignment method. Frontiers in Psychology. 2014;5:978. doi: 10.3389/fpsyg.2014.00978.
  53. Muthén LK, Muthén BO. Mplus user’s guide. 7th ed. Muthén & Muthén; Los Angeles, CA: 2012.
  54. Richters JE. Depressed mothers as informants about their children: a critical review of the evidence for distortion. Psychological Bulletin. 1992;112(3):485–499. doi: 10.1037/0033-2909.112.3.485.
  55. Roberts BW, Caspi A. Personality development and the person-situation debate: It’s déjà vu all over again. Psychological Inquiry. 2001;12(2):104–109. doi: 10.1207/S15327965PLI1202_04.
  56. Robinson CC, Mandleco B, Olsen SF, Hart CH. Authoritative, authoritarian, and permissive parenting practices: Development of a new measure. Psychological Reports. 1995;77(3):819–830. doi: 10.2466/pr0.1995.77.3.819.
  57. Scott BG, Weems CF. Patterns of actual and perceived control: are control profiles differentially related to internalizing and externalizing problems in youth? Anxiety, Stress, & Coping. 2010;23(5):515–528. doi: 10.1080/10615801003611479.
  58. Shelton K, Frick P, Wootton J. Assessment of parenting practices in families of elementary school-age children. Journal of Clinical Child Psychology. 1996;25(3):317–329. doi: 10.1207/s15374424jccp2503_8.
  59. Silverman WK, Saavedra LM. Assessment and diagnosis in evidence-based practice. In: Barrett PM, Ollendick TH, editors. Handbook of interventions that work with children and adolescents: Prevention and treatment. Wiley; West Sussex, UK: 2004. pp. 49–70.
  60. Smetana JG. Concepts of self and social convention: Adolescents’ and parents’ reasoning about hypothetical and actual family conflicts. In: Gunnar MR, Collins WA, editors. Development during the transition to adolescence. Vol. 21. Lawrence Erlbaum Associates; Hillsdale, NJ: 1988. pp. 79–122.
  61. Strayhorn JM, Weidman CS. A parent practices scale and its relation to parent and child mental health. Journal of the American Academy of Child & Adolescent Psychiatry. 1988;27(5):613–618. doi: 10.1097/00004583-198809000-00016.
  62. Stuart J, Jose PE. The influence of discrepancies between adolescent and parent ratings of family dynamics on the well-being of adolescents. Journal of Family Psychology. 2012;26(6):858–868. doi: 10.1037/a0030056.
  63. Tein J, Roosa M, Michaels M. Agreement between parent and child reports on parental behaviors. Journal of Marriage and the Family. 1994;56(2):341–355. doi: 10.2307/353104.
  64. Thompson MS, Green SB. Evaluating between-group differences in latent variable means. In: Hancock GR, Mueller RO, editors. Structural equation modeling: A second course. 2nd ed. Information Age Publishing; Charlotte, NC: 2013. pp. 163–218.
  65. Vandenberg RJ, Lance CE. A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods. 2000;3(1):4–70. doi: 10.1177/109442810031002.
  66. van de Schoot R, Kluytmans A, Tummers L, Lugtig P, Hox J, Muthén BO. Facing off with Scylla and Charybdis: a comparison of scalar, partial, and the novel possibility of approximate measurement invariance. Frontiers in Psychology. 2013;4:770. doi: 10.3389/fpsyg.2013.00770.
  67. Van Leeuwen KG, Vermulst AA, Kroes G, De Meyer R, Nguyen L, Veerman JW. Verkorte Schaal voor Ouderlijk Gedrag (VSOG): Handleiding [Brief Scale of Parental Behavior: Manual]. Praktikon; Nijmegen, The Netherlands: 2013.
  68. Weems CF, Taylor LK, Marks AB, Varela RE. Anxiety sensitivity in childhood and adolescence: Parent reports and factors that influence associations with child reports. Cognitive Therapy and Research. 2010;34(4):303–315. doi: 10.1007/s10608-008-9222-x.
  69. West SG, Taylor AB, Wu W. Model fit and model selection in structural equation modeling. In: Hoyle RH, editor. Handbook of structural equation modeling. Guilford Press; New York, NY: 2012. pp. 209–231.
  70. Wothke W. Nonpositive definite matrices in structural modeling. In: Bollen KA, Long JS, editors. Testing structural equation models. SAGE; Newbury Park, CA: 1993. pp. 256–293.
