Journal of Pediatric Psychology. 2018 May 24;43(9):1059–1067. doi: 10.1093/jpepsy/jsy038

PROMIS Peer Relationships Short Form: How Well Does Self-Report Correlate With Data From Peers?

Katie A Devine 1, Victoria W Willard 2, Matthew C Hocking 3, Jerod L Stapleton 1, David Rotter 1, William M Bukowski 4, Robert B Noll 5
PMCID: PMC6147751  PMID: 29800306

Abstract

Objective

To examine the psychometric properties of the Patient-Reported Outcomes Measurement Information System (PROMIS®) peer relationships short form (PR-SF), including association with peer-reported friendships, likeability, and social reputation.

Method

203 children (mean age = 10.12 years, SD = 2.37, range = 6–14) in Grades 1–8 completed the 8-item PR-SF and friendship nominations, like ratings, and social reputation measures about their peers during 2 classroom visits approximately 4 months apart, as part of a larger study. A confirmatory factor analysis, followed by an exploratory factor analysis, was conducted to examine the factor structure of the PR-SF. Spearman correlations between the PR-SF and peer-reported outcomes evaluated construct validity.

Results

For the PR-SF, a 2-factor solution demonstrated better fit than a 1-factor solution. The 2 factors appear to assess friendship quality (3 items) and peer acceptance (5 items). Reliability was marginal for the friendship quality factor (.66) but adequate for the acceptance factor (.85); stability was .54 for the PR-SF over 4 months. The PR-SF (8 items) and acceptance factor (5 items) both had modest but significant correlations with measures of friendship (rs = .25–.27), likeability (rs = .21–.22), and social reputation (rs = .29–.44).

Conclusions

The PR-SF appears to be measuring two distinct aspects of social functioning. The 5-item peer acceptance scale is modestly associated with peer-reported friendship, likeability, and social reputation. Although not a replacement for peer-reported outcomes, the PR-SF is a promising patient-reported outcome for peer relationships in youth.

Keywords: children, peer relationships, PROMIS, psychometrics, social functioning


Developing and maintaining friendships, being liked by peers, and social behavior with peers are important developmental tasks of childhood (Rubin, Bukowski, & Bowker, 2015). Peer-reported friendships, acceptance, and leadership behaviors represent significant markers of the social adjustment of youth (Bukowski & Adams, 2005) and are predictive of several important positive long-term outcomes including psychological health and adult economic success (Coie, 1990; Parker & Asher, 1987). Conversely, problems with peers are predictive of psychological difficulties (Ladd & Troop-Gordon, 2003), risky sexual behavior (Lansford, Dodge, Fontaine, Bates, & Pettit, 2014), substance abuse, and suicidality (Prinstein, Boergers, Spirito, Little, & Grapentine, 2000).

Because peer reports of children’s social functioning are stable, reliable, and predict many important long-term outcomes (Bagwell, Newcomb, & Bukowski, 1998; Gest, Sesma, Masten, & Tellegen, 2006; Morison & Masten, 1991), gathering data from peers in the school setting is an extensively validated method for assessing three essential elements of social adjustment with peers: friendships, acceptance, and social reputation (i.e., how peers view one another in terms of social behavior such as leadership, victimization, etc.). Data are collected in schools by asking classmates to identify their best friends, rate how much they like each classmate (i.e., Is a child liked?), and describe the social behavior of peers (i.e., What are they like?). These school-based methods are resource-intensive, requiring considerable time and multiple levels of permission to access schools and interact with children, so researchers often administer proxy reports to parents, teachers, or children as more efficient alternatives. Existing research, however, suggests that data from those sources are only modestly correlated with data from peers and lack external validity (Crowe, Beauchamp, Catroppa, & Anderson, 2011; Noll & Bukowski, 2012). Another option is self-report data on social functioning, which has the potential to offer important insight into a child’s perceptions of their own likeability and social status. However, there is very limited research on how self-reports (i.e., patient-reported outcomes or PROs) of social functioning relate to actual data from peers (Salley et al., 2015).

The Patient-Reported Outcomes Measurement Information System (PROMIS®) was developed to systematically assess key domains of PROs, including mental, physical, and social health, with psychometrically rigorous measures (Cella et al., 2007; Reeve et al., 2007). This approach was initiated to give patients a voice with standardized measures of key outcomes across domains and diseases (Garcia et al., 2007). The pediatric PROMIS includes both self-report and parent proxy options (Irwin et al., 2010, 2012), and PROMIS researchers identified social health as a particularly critical domain.

The PROMIS peer relationships short form (PR-SF) was developed by DeWalt and colleagues in accordance with PROMIS methodology (DeWalt et al., 2013). Candidate items were generated based on literature review, expert input, focus groups, and cognitive interviews, and then tested in a large group of children (Irwin et al., 2010). Item response theory was used to create an item pool, which allows both the use of fixed-item short forms (four to eight items per domain) and a computerized adaptive version of the instrument that selects items based on an individual’s responses to previous items. The eight items of the PR-SF were selected based on psychometrics and consideration of their content as representative of aspects of social health (DeWalt et al., 2013). The peer relationships scale was initially designed to assess two aspects of social health: “social function,” or involvement in and satisfaction with participation in social activities, and “sociability,” or the ability to get along with others (DeWalt et al., 2013). Ultimately, the final PROMIS peer relationships item pool and fixed-item short forms were determined to involve a single factor that reportedly assesses children’s perceptions of the quality of their relationships with peers (DeWalt et al., 2013).

Research findings have generally suggested that the PR-SF demonstrates good feasibility, acceptability, and known-groups validity; that is, scores can differentiate between some disease groups (DeWalt et al., 2015), although this has not been consistently supported (Selewski et al., 2013). A recent study described the psychometric properties of several pediatric PROMIS short forms, including the PR-SF, and focused on static versus dynamic administration, test–retest reliability, and administration across disease groups (Varni et al., 2014). Another study administered the PROMIS and other well-known self-report measures of quality of life but only discussed similarities and differences in mean scores (Irwin et al., 2010). Missing from these reports, and from the literature more broadly, is an examination of construct validity (i.e., the extent to which the PR-SF measures social functioning). This lack of information regarding its correspondence to other established measurement tools, especially well-validated peer-reported measures, is a significant limitation of the measure in its current form.

If the PR-SF is associated with data from classmates, it would be a valuable, efficient, and practical measure of self-reported social functioning for children and adolescents that overcomes many of the logistical challenges of assessing social functioning via peer report. As such, the objective of the current article was to evaluate the psychometric properties of the PR-SF, including factor analysis, stability over time, and its associations with peer-reported measures of acceptance/friendships, received like ratings, and social reputation. It was hypothesized that (a) the eight-item PR-SF would yield a one-dimensional factor with acceptable internal consistency, (b) stability over time would be high, and (c) associations with peer-based measures would be modest but significant.

Method

Study Design and Procedures

This study was a secondary analysis of baseline data from a school-based intervention to improve social adjustment outcomes of childhood brain tumor survivors (Devine et al., 2016). Analyses of stability over time included data from the baseline (late fall/winter) and Time 2 (spring) assessments, which occurred about 3.9 months (SD = 0.7) after baseline. Only the comparison classrooms were used for stability analyses to avoid bias due to intervention effects. Classrooms were identified because one student was a brain tumor survivor; this child and their family gave permission to contact the child’s school at the onset of the study. All children within the classroom of the survivor were eligible to participate. Although the school principal and classroom teacher knew the specific reason for the study, no mention was made of the child surviving a brain tumor when consent was obtained or while the work was done in the classroom.

Institutional review board approval was obtained before all study procedures. All students who returned a signed parental consent form and provided assent were given netbook computers with privacy screens to complete the self-report and peer nomination measures at both time points.

Participants

In 13 classrooms from 13 different school districts within one geographic region, 232 of 269 parents (86.2%) gave consent for their child to participate. Of these 232 children, 219 (94%) gave assent to participate. Only one child declined; the remaining 12 children (5%) were absent on the baseline assessment day. For these analyses, we excluded the brain tumor survivors (n = 13, one per classroom) because they were a small subsample and evidence suggests they may have inaccurate self-perceptions of social functioning compared with typically developing peers (Salley et al., 2014).

The baseline sample consisted of 206 children: 48.5% male, 91.7% non-Hispanic White, 5.3% Black, and 2.9% other. Children were in Grades 1–8, with an average age of 10.12 years (SD = 2.37, range = 6.2–14.9). Three children left the classroom early for educational reasons and therefore did not complete the baseline survey, resulting in a final baseline sample of 203. For stability analyses, only children who completed both Time 1 and Time 2 were included (n = 180, 89% of those who completed Time 1 and were eligible for Time 2; one classroom was excluded from Time 2 owing to delays caused by the principal being on leave). For further details regarding recruitment and retention, please see the primary article (Devine et al., 2016).

Measures

Pediatric PROMIS Peer Relationships Short Form v1.0 (PR-SF)

Children reported on the quality of peer relationships during the past 7 days on eight items using a 5-point Likert-type scale that ranged from “never” to “almost always” (DeWalt et al., 2013). The PR-SF yields a total raw summary score that is converted to a standardized T-score (M = 50, SD = 10); higher scores indicate better peer relationships (see Table I for item content).
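For illustration, a minimal scoring sketch is shown below. It is an assumption based on the description above, not the official PROMIS scoring code: the raw summary score is taken to be the sum of the eight items, the item-mean helper reflects the 1–5 metric used for the raw descriptive statistics in Table II, and conversion of the raw score to the standardized T-score requires the published PROMIS table, which is not reproduced here.

```python
# Minimal sketch of PR-SF raw scoring (assumption: the eight items are coded
# 1 = "never" through 5 = "almost always" and summed). Converting the raw summary
# score to the standardized T-score (M = 50, SD = 10) requires the published PROMIS
# conversion table, which is not reproduced here.

def pr_sf_raw_score(responses):
    """Sum the eight PR-SF item responses; higher scores indicate better peer relationships."""
    if len(responses) != 8 or any(r not in range(1, 6) for r in responses):
        raise ValueError("Expected eight responses coded 1-5.")
    return sum(responses)

def pr_sf_item_mean(responses):
    """Mean item score on the 1-5 metric, matching the raw descriptives in Table II."""
    return pr_sf_raw_score(responses) / 8

print(pr_sf_raw_score([4, 4, 4, 3, 4, 4, 4, 5]), pr_sf_item_mean([4, 4, 4, 3, 4, 4, 4, 5]))  # 32 4.0
```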

Table I.

Geomin-Rotated Factor Loadings From EFA

Item                                                      Factor 1: “Friendship Quality”   Factor 2: “Peer Acceptance”
1. I felt accepted by other kids my age.                  0.277                            0.419*
2. I was able to count on my friends.                     0.586*                           −0.034
3. I was able to talk about everything with my friends.   0.633*                           0.144
4. I was good at making friends.                          0.137                            0.571*
5. My friends and I helped each other out.                0.429*                           0.310
6. Other kids wanted to be my friend.                     0.023                            0.753*
7. Other kids wanted to be with me.                       −0.170                           1.014*
8. Other kids wanted to talk to me.                       0.004                            0.780*

Note. The factors are oblique; the factor loadings are regression coefficients and therefore may be larger than 1 in magnitude. An asterisk indicates the factor on which the item was retained.

Peer Acceptance/Friendships

Each child’s acceptance by peers was calculated as the number of times the child was chosen as a friend by peers (i.e., friend nominations). Children could select an unlimited number of friends from their classroom (Bukowski & Hoza, 1989). Liking ratings were collected by asking each participating child to rate how much they like each of the participating classmates on a 5-point scale from “someone you do not like” to “someone you like a lot” (Asher, Singleton, Tinsley, & Hymel, 1979). The received liking score was the arithmetic mean of the ratings received by a child from peers (Bukowski & Hoza, 1989).
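To make the peer-report scoring concrete, the sketch below shows one way received friend nominations and received liking scores could be tallied. The data structures are hypothetical illustrations, not the study's survey software.

```python
# Minimal sketch of tallying peer-report scores (hypothetical data structures,
# not the study's survey software).
from collections import Counter
from statistics import mean

# Each participating child lists the classmates they chose as friends.
nominations = {
    "A": ["B", "C"],
    "B": ["A"],
    "C": ["A", "B"],
}
# Each child rates every participating classmate from 1 ("do not like") to 5 ("like a lot").
like_ratings = {
    "A": {"B": 5, "C": 4},
    "B": {"A": 4, "C": 3},
    "C": {"A": 5, "B": 4},
}

# Acceptance: number of times each child was chosen as a friend by peers.
received_nominations = Counter(friend for choices in nominations.values() for friend in choices)

# Received liking score: arithmetic mean of the ratings a child received from peers.
received_liking = {
    child: mean(rater_ratings[child]
                for rater, rater_ratings in like_ratings.items()
                if child in rater_ratings)
    for child in like_ratings
}

print(received_nominations["A"], round(received_liking["A"], 2))  # 2 nominations, mean rating 4.5
```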

Social Reputation

The Revised Class Play (RCP; Masten, Morison, & Pellegrini, 1985) was used to evaluate a child’s social reputation with peers. Children were asked to “cast” classmates into roles in a hypothetical play (e.g., “someone who has good ideas for things to do”). These roles correspond to specific indicators of social behavior and competence (Bukowski, Cillessen, & Velásquez, 2012). Children could nominate an unlimited number of peers for each item but were restricted to nominating only boys or only girls, both to prevent gender stereotyping (Zeller, Vannatta, Schafer, & Noll, 2003) and because of the primary aim of the parent study, which focused on the child who survived a brain tumor. For this measure only, each class was assigned the gender matching the child with the brain tumor at the time of data collection; seven classes completed the RCP nominating boys and six classes nominated girls. Zeller and colleagues (2003) provided strong evidence of the reliability and validity of this instrument in terms of the patterns of correlations with measures of peer acceptance within a large sample of children. The measure was adapted to remove items representing aggression and disruptiveness based on the aims of the parent study (Devine et al., 2016). A child’s scores for the remaining subscales, Popular-Leadership (10 items), Sensitive-Isolated (6 items), and Prosocial (5 items), were based on the number of times the child was nominated by peers for each item within the scale, adjusted for the number of same-sex and opposite-sex nominators in the classroom using a regression-based procedure (Velásquez, Bukowski, & Saldarriaga, 2013); this procedure retains the original metric of the score. The subscales demonstrated acceptable levels of internal consistency (Popular-Leadership: α = .95; Sensitive-Isolated: α = .89; Prosocial: α = .91). Previous longitudinal studies have shown the Popular-Leadership score to be predictive of later indices of competence and the Sensitive-Isolated score to be predictive of psychopathology and behavioral problems (Hymel, Rubin, Rowden, & LeMare, 1990; Morison & Masten, 1991; Rubin, 1993).
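As an illustration of the group-size adjustment, the sketch below shows one way a regression-based correction in the spirit of Velásquez et al. (2013) might be implemented: raw nomination counts are regressed on the number of nominators and re-expressed at the mean group size, retaining the original metric. This is an assumption for illustration; the published procedure may differ in its details.

```python
# Minimal sketch of a regression-based group-size adjustment for nomination counts,
# in the spirit of Velasquez et al. (2013); illustrative assumption only, which may
# differ from the published procedure in its details.
import numpy as np

def adjust_for_group_size(raw_counts, n_nominators):
    """Regress raw counts on number of nominators, then re-express each score as the
    value predicted at the mean group size plus the child's residual, so the score
    keeps its original metric."""
    raw_counts = np.asarray(raw_counts, dtype=float)
    n_nominators = np.asarray(n_nominators, dtype=float)
    slope, intercept = np.polyfit(n_nominators, raw_counts, 1)
    residuals = raw_counts - (intercept + slope * n_nominators)
    return intercept + slope * n_nominators.mean() + residuals

# Example: children drawn from classrooms with different numbers of same-sex nominators.
counts = [3, 5, 2, 8, 7]
nominators = [10, 10, 8, 15, 15]
print(np.round(adjust_for_group_size(counts, nominators), 2))
```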

Data Analyses

Missing data were examined. There were no missing data on the PR-SF. One child was not rated by peers on like ratings owing to a technical error in the program administering the survey and was excluded from analyses with these outcomes. Given that these data were collected in schools, we examined the proportion of variance in the PR-SF that was accounted for by school clustering by calculating the intraclass correlation (ICC). The ICC was 0.04; an ICC < .05 can be considered not substantial and unlikely to result in significant bias in model parameter estimates (Thomas & Heck, 2001). Given the small ICC, our focus on individual responses rather than school-level characteristics, and the small number of schools in our analysis, we conducted factor analyses at the student level rather than a more complex multilevel factor analysis.
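For readers unfamiliar with this index, the sketch below shows a standard one-way ANOVA estimator of ICC(1) applied to simulated classroom data. It is illustrative only, not the study's data or analysis code.

```python
# Minimal sketch of the intraclass correlation (ICC(1)) used to gauge classroom clustering,
# computed from a one-way random-effects ANOVA decomposition (simulated data).
import numpy as np

def icc1(groups):
    """groups: list of 1-D arrays, one per classroom, of PR-SF scores."""
    k = len(groups)
    n_per = np.array([len(g) for g in groups], dtype=float)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(n * (g.mean() - grand_mean) ** 2 for n, g in zip(n_per, groups))
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_per.sum() - k)
    n0 = (n_per.sum() - (n_per ** 2).sum() / n_per.sum()) / (k - 1)  # average group size
    return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)

rng = np.random.default_rng(0)
classrooms = [rng.normal(50, 10, size=15) for _ in range(13)]  # 13 simulated classrooms
print(round(icc1(classrooms), 3))  # near zero when classrooms do not differ systematically
```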

Confirmatory factor analysis (CFA) was conducted to evaluate the fit of the hypothesized one-factor model in the current sample using Mplus Version 7.31 (Muthén & Muthén, 1998-2015). We used the model chi-square, the comparative fit index (CFI), the Tucker–Lewis index (TLI), the root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR) as indices of model fit. We interpreted CFI and TLI > .90 as an indication of reasonable fit and CFI and TLI > .95 as a good fit (Hu & Bentler, 1999) and interpreted RMSEA and SRMR ≤ .05 as a close fit, between .05 and .08 as a reasonable fit, and ≥ .10 as a poor fit (Browne & Cudeck, 1993).
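As a reference for how these indices relate to the model chi-square, the sketch below computes CFI, TLI, and RMSEA from model and baseline (null-model) chi-square values using standard formulas. The baseline chi-square shown is a hypothetical placeholder, since it is not reported in the article, and some programs use N rather than N − 1 in the RMSEA denominator.

```python
# Minimal sketch of standard fit-index formulas, computed from model and baseline
# chi-square statistics. The baseline chi-square below is a hypothetical placeholder,
# not a value from this study's Mplus output.
def fit_indices(chi2_model, df_model, chi2_null, df_null, n):
    cfi = 1 - max(chi2_model - df_model, 0) / max(chi2_null - df_null, chi2_model - df_model, 0)
    tli = ((chi2_null / df_null) - (chi2_model / df_model)) / ((chi2_null / df_null) - 1)
    rmsea = (max(chi2_model - df_model, 0) / (df_model * (n - 1))) ** 0.5  # some programs use n
    return cfi, tli, rmsea

# Model chi-square from the one-factor CFA reported in the Results; baseline value is made up.
cfi, tli, rmsea = fit_indices(chi2_model=86.3, df_model=20, chi2_null=620.0, df_null=28, n=203)
print(round(cfi, 2), round(tli, 2), round(rmsea, 3))
```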

If the one-factor model demonstrated a poor fit, exploratory factor analysis (EFA) with oblique geomin rotation was planned to compare alternative factor structures. Oblique rotation was chosen to allow the factors to correlate. The Kaiser–Meyer–Olkin measure of sampling adequacy (KMO index) was used to evaluate the suitability of the data for factor analysis, with KMO > 0.5 generally considered suitable (Williams, Onsman, & Brown, 2010). As recommended by Fabrigar and colleagues (1999), we used multiple methods to determine the number of factors to be extracted, including Horn’s parallel analysis (Horn, 1965), examination of the scree plot, and consideration of fit indices such as the Akaike information criterion (AIC) and RMSEA. A maximum likelihood estimator was used. Loadings in the .40 range and above were considered substantial (Floyd & Widaman, 1995). Cronbach’s α was calculated to assess reliability; coefficient omega was also calculated because it relies on fewer and more realistic assumptions than alpha and has been recommended as a more robust indicator of reliability (Dunn, Baguley, & Brunsden, 2014).
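The sketch below illustrates Horn's parallel analysis with simulated data: observed eigenvalues are retained only when they exceed the corresponding eigenvalues obtained from random data of the same dimensions. It is an illustration of the method, not the study's analysis code.

```python
# Minimal sketch of Horn's parallel analysis: compare eigenvalues of the observed
# correlation matrix with those from random data of the same dimensions (simulated data).
import numpy as np

def parallel_analysis(data, n_sims=1000, percentile=95, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_eigs = np.empty((n_sims, p))
    for s in range(n_sims):
        sim = rng.standard_normal((n, p))
        random_eigs[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    threshold = np.percentile(random_eigs, percentile, axis=0)
    return observed, threshold, int((observed > threshold).sum())  # number of factors retained

# Simulated example with 203 "children" and 8 "items" sharing one common factor.
rng = np.random.default_rng(1)
items = rng.standard_normal((203, 1)) + 0.8 * rng.standard_normal((203, 8))
observed, threshold, n_factors = parallel_analysis(items)
print(n_factors)  # expected to suggest retaining one factor for these simulated items
```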

Pearson correlations assessed stability over time using all available data. Construct validity was evaluated by cross-sectional Spearman correlations with the number of friend nominations, average like rating, and peer social reputation at baseline. Spearman correlations were chosen because results from these measures are typically not normally distributed. The raw scale scores of the PR-SF were used instead of the expected a posteriori estimation of the latent score to be consistent with how the static PR-SF scale is scored according to the manual and for ease of use in practical applications.
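A minimal sketch of these correlational analyses, using simulated stand-ins for the study variables, is shown below (scipy's pearsonr and spearmanr; not the authors' code).

```python
# Minimal sketch of the correlational analyses: Pearson r for stability over time and
# Spearman rho for construct validity against a skewed peer-reported outcome
# (simulated data standing in for the study variables).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
prsf_t1 = rng.normal(3.9, 0.8, size=180)
prsf_t2 = 0.5 * prsf_t1 + rng.normal(2.0, 0.6, size=180)          # correlated follow-up scores
friend_nominations = rng.poisson(np.maximum(prsf_t1, 0.1))        # skewed count outcome

r, p_r = stats.pearsonr(prsf_t1, prsf_t2)                         # stability (Time 1 - Time 2)
rho, p_rho = stats.spearmanr(prsf_t1, friend_nominations)         # rank-based construct validity
print(round(r, 2), round(rho, 2))
```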

Results

Factor Structure

Results of the CFA indicated that the single-factor model had an inadequate fit, χ²(20) = 86.30, p < .001, CFI = 0.89, TLI = 0.85, RMSEA = 0.13, 90% CI [0.10, 0.16], and SRMR = 0.07. The KMO index was 0.84. Horn’s parallel analysis suggested retaining one factor, with eigenvalues for the sample correlation matrix of 3.95 and 1.12 for the first two factors and eigenvalues from the parallel analysis of 1.30 and 1.19. However, the scree plot and fit statistics, such as the AIC and RMSEA, suggested that the two-factor solution demonstrated a better fit. The two-factor solution, χ²(13) = 25.86, p = .02, AIC = 4368.87, CFI = 0.98, TLI = 0.96, RMSEA = 0.07, 90% CI [0.03, 0.11], and SRMR = .03, fit significantly better than the one-factor model, Δχ²(7) = 60.44, p < .001. The correlation between the two factors was 0.51.

Factor loadings are reported in Table I. Items 2, 3, and 5 loaded on Factor 1, which we labeled as “Friendship Quality.” This scale had fair to adequate reliability, Cronbach’s α = .66, omega = .74. Factor 2, which we labeled “Peer Acceptance,” included Items 1, 4, 6, 7, and 8. This scale had adequate reliability, Cronbach’s α = .85, omega = .83. Descriptive statistics for the PR-SF scored as a one-factor or two-factor scale are presented in Table II. Stability over approximately 4 months (between baseline and Time 2) was modest compared with peer ratings (Table III); the percent variance in Time 2 reports attributable to baseline reports ranged from 21% to 29% for the self-report measures and from 56% to 79% for peer reports.
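As a quick check, squaring the stability coefficients reported in Table III reproduces the shared-variance ranges quoted above (21%–29% for self-report, 56%–79% for peer report).

```python
# Worked check of the "percent variance attributable to baseline" figures:
# squaring the Table III stability correlations reproduces the ranges quoted in the text.
self_report = {"PR-SF": .54, "Friendship quality": .46, "Peer acceptance": .52}
peer_report = {"Friend nominations": .75, "Like rating": .85, "Popular-Leadership": .80,
               "Sensitive-Isolated": .89, "Prosocial": .78}

for label, rs in (("self-report", self_report), ("peer report", peer_report)):
    shared = [round(r ** 2, 2) for r in rs.values()]
    print(label, min(shared), "-", max(shared))
# self-report 0.21 - 0.29 ; peer report 0.56 - 0.79
```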

Table II.

Reliability and Descriptive Statistics for the Original PR-SF as Well as the Two Factors From Our EFA

Subscale                        Alpha   Omega   M       SD     Range
PR-SF (T-scores)                .85     .88     47.25   8.61   17.7–64.4
PR-SF (raw)                     .85     .88     3.93    0.78   1.00–5.00
Factor 1: Friendship Quality    .66     .74     4.10    0.79   1.00–5.00
Factor 2: Peer Acceptance       .85     .83     3.82    0.93   1.00–5.00

Note. Omega is a robust indicator of reliability that relies on fewer and more realistic assumptions than alpha (Dunn et al., 2014). PR-SF = PROMIS Peer Relationships Short Form; n = 203.

Table III.

Stability From Time 1 to Time 2 Using Pearson Correlations

Measure                        n     r     p
PR-SF                          180   .54   <.001
Factor 1: Friendship quality   180   .46   <.001
Factor 2: Peer acceptance      180   .52   <.001
No. of friend nominations      241   .75   <.001
Average like rating            240   .85   <.001
Popular-Leadership             133   .80   <.001
Sensitive-Isolated             132   .89   <.001
Prosocial                      132   .78   <.001

Note. PR-SF = PROMIS Peer Relationship Short Form. Sample size varies owing to type of report (i.e., self-report for the PR-SF, Factor 1, Factor 2, limited to those who completed both time points; or peer-report, limited to the number of children in each classroom at both time points for the number of friend nominations and average like rating, and to the number of boys or girls in the classroom for the social reputation scales).

Construct Validity

The original PR-SF (eight items) and our PR-SF Peer Acceptance factor (five items) demonstrated correlations of small to medium magnitude (Cohen, 1988) with the measure of acceptance/friendships (rs = .25 and .27), the received average liking score (rs = .21 and .22), and the social reputation measures (rs = .29–.44; Table IV). Notably, the PR-SF and the PR-SF Peer Acceptance factor were positively correlated with the positive social reputation measures (Popular-Leadership and Prosocial scales: rs = .29–.44) and negatively correlated with the negative social reputation measure (Sensitive-Isolated: rs = −.34 and −.32). The magnitude of correlations for the PR-SF total score and the PR-SF Peer Acceptance factor was nearly identical, differing by .02 at a maximum, and indicated that self-reported social functioning on the PR-SF accounted for between 8% and 19% of the variance in peer-reported social reputation. The correlations between the Friendship Quality factor and peer-reported outcomes were small and nonsignificant for number of friend nominations and average like ratings (r = .12 for both) and were small and significant for most social reputation outcomes (Popular-Leadership: r = .26; Prosocial: r = .17; Sensitive-Isolated: r = −.26), suggesting that self-reported friendship quality accounts for 1% to 7% of the variance in peer-reported social reputation.

Table IV.

Spearman’s Correlations Between PR-SF Factors and Peer-Reported Outcomes

Peer-reported outcome        n     PR-SF rs (p)     Factor 1: Friendship quality rs (p)   Factor 2: Peer acceptance rs (p)
No. of friend nominations    203   .25 (<.001)      .12 (.10)                             .27 (<.001)
Average like rating          202   .21 (.002)       .12 (.10)                             .22 (.001)
Popular-Leadership           115   .44 (<.001)      .26 (.004)                            .44 (<.001)
Sensitive-Isolated           114   −.34 (<.001)     −.26 (.006)                           −.32 (<.001)
Prosocial                    113   .29 (.002)       .17 (.08)                             .30 (.001)

Note. Sample sizes for the social reputation measures (Popular-Leadership, Sensitive-Isolated, and Prosocial) are limited to one gender in each classroom (seven boy classrooms, six girl classrooms) owing to the aims of the primary study.

Discussion

This study assessed several basic psychometric features of the previously developed static PR-SF (DeWalt et al., 2013). Results provide minimal support that the PR-SF is a single-factor scale of “quality of peer relationships” as previously published (DeWalt et al., 2013). Although results of Horn’s parallel analysis were consistent with a single factor, factor analysis on a typically developing cohort of children (ages 6–14) suggests that the measure would be better conceptualized as a two-factor scale. The first factor contains items related to friendship quality and correlates at a small magnitude with collected peer-reported outcomes. The second factor contains items related to perceptions of peer acceptance and demonstrates small to medium correlations with peer-reported number of friendships, average like ratings, and social reputation (i.e., popularity-leadership, sensitivity-isolation, and prosocial behaviors). These factors resemble the two subdomains initially proposed in the development of the PR-SF—social function (i.e., involvement in and satisfaction with social roles) and sociability (i.e., ability to get along with others)—though they were not supported in the initial factor analysis that included items related to relationships with adults (DeWalt et al., 2013). Our data suggest slightly different interpretations of those subdomains, with the first domain focusing on quality of relationships with friends and the second domain focusing more on acceptance in relationships with all peers. This distinction is important, as peer acceptance has historically been a better predictor of long-term outcomes than friendship quality (Gifford-Smith & Brownell, 2003).

Interestingly, the eight-item PR-SF and the five-item Peer Acceptance factor demonstrated similar psychometric properties. That is, they had comparable internal consistency reliability, and their correlations with peer-reported outcomes were of similar magnitude. This suggests that researchers and clinicians interested in patient-reported peer acceptance can use the five-item subscale instead of the eight-item PR-SF without compromising psychometric properties. For those interested in brief measures of peer functioning, the PROMIS developers have also suggested alternative four- and six-item versions of the PR-SF. Notably, both of these shortened versions contain items that loaded on both of the factors found in this study. Future work should compare the shortened subscale identified here with these other alternative short forms. In addition, future work should examine whether the three-item factor correlates with measures of friendship quality, as this was not assessed in the current study. Alternatively, researchers and clinicians interested in friendship quality may consider expanding these items to improve the reliability and validity of this subscale.

This study is the first to examine whether the PR-SF correlates with peer ratings of friendships, likeability, and social reputation. The correlations between the PR-SF and peer-reported outcomes were higher than the few published reports of correlations between other well-known self-report measures and peer-reported outcomes. There is one article examining these links for the Child Behavior Checklist (Noll & Bukowski, 2012), one for the Social Skills Rating System (Hoza et al., 2005), and none for the Social Functioning subscale of the Pediatric Quality of Life Inventory (PedsQL). Linking parent, teacher, and/or self reports to actual peer data remains a major challenge. The correlations found here suggest that although the PR-SF does not replace peer reports, it may be a more reasonable and efficient alternative than other well-known measures. Given the importance of peer relationships for later psychological outcomes, reliable and well-validated self-report measures of peer relationships are critical to research and clinical care, as obtaining such data directly from peers presents significant logistical challenges.

The stability of the PR-SF over a 4-month period was modest in comparison with peer-reported ratings. However, 2-week test–retest correlations were adequate in previous work (i.e., .76–.81; Varni et al., 2014). Given that the PR-SF measures current (i.e., within the past week) self-perceived social functioning, we would expect the test–retest correlations to be lower over a longer interval, during which perceived social functioning may change. This is in contrast to classroom-based peer nominations, which are known to be exceptionally stable (Bukowski et al., 2012). The modest correlations and striking differences in stability over time between peer reports and the self-reported PR-SF suggest that different sources of information capture different aspects of social functioning. Both provide valuable information, but one method may be preferable depending on the goals of the study and resources available. If interested in self-perceptions and associations with proximal variables, self-report would be preferable. If interested in peer perceptions or prediction of more distal variables (e.g., adult economic success), peer report would be preferable. Self-report requires fewer resources to implement, making it more feasible for studies to incorporate.

Despite the strengths of this work, we were limited by a relatively small sample size, particularly in comparison with the PROMIS validation studies, and a developmentally heterogeneous group of children ages 6–14 years. In addition, we recruited a relatively homogeneous group in terms of racial/ethnic composition and geographic location, which may not generalize to more diverse groups in other areas of the United States. Although the average PR-SF T-score in our sample (47.25) was slightly lower than in the original validation study (DeWalt et al., 2013), it was well within the expected range, suggesting that our sample was not substantially different from the validation samples. Given the small number of classrooms (n = 13) included in our sample and the small ICC, we did not account for the nonindependence of these data clustered within schools. Future studies conducted with a larger number of schools should use multilevel CFA to confirm the factor structure while accounting for clustered data. In addition, we focused on convergent validity (i.e., positive/negative correlations with peer outcomes), and future studies should include measures of divergent validity to continue to evaluate the construct validity of the PR-SF. It is notable that our sample was selected because there was a child who was a brain tumor survivor in each classroom. We did not assess the degree to which classmates were aware of the survivor’s prior diagnosis (all survivors were off treatment at the time of the study). Although simply having a child with a chronic illness in the classroom is unlikely to affect healthy peers’ social reputation or friendships, particularly as it relates to this study’s aim of examining associations between peer-reported measures and self-reported social functioning, it is possible that these classes differed from classrooms without a child with chronic illness. Future research should examine the potential effects of classroom-level characteristics such as the presence of a child with chronic illness or a school-wide character-building curriculum.

Future studies may wish to focus on youth known to experience social difficulties (e.g., youth with Attention-Deficit/Hyperactivity Disorder or children with neurofibromatosis Type 1) to determine the applicability of the PR-SF in these samples. Our sample size prevented exploration of differences between survivors of brain tumors and typically developing children, though substantial previous work (Schulte & Barrera, 2010) suggests that PR-SF scores for brain tumor survivors might differ from those of typically developing peers. We also recognize that EFA involves several decisions in the analytic procedures, including the number of factors to extract and the estimator and rotation to be used, and different decisions may produce different factor structures. Additional research using CFA is needed. In addition, future work should examine whether the PR-SF or the alternative scoring proposed in this work is sensitive to change over time and correlates with changes seen in peer-reported outcomes.

Overall, our results suggest that the PR-SF measures both perceived social acceptance and relationship quality and correlates in the expected direction with extensively validated peer-reported outcomes. Clearly, the PR-SF should not be considered equivalent to peer report, but it is a promising measure that can easily be administered to children and provides valuable information.

Acknowledgments

The authors would like to thank Olle Jane Z. Sahler, MD, Andrea Farkas Patenaude, PhD, Tristram H. Smith, PhD, Anne Lown, DrPH, and David N. Korones, MD, for their guidance in study design for the main study. They also thank Janet Gibbons, EdD, Jeanne Guastaferro, MA, Caroline Dormajian, PhD, Stephen Uebbing, EdD, Shauna Kaisen, Dalia Cong, Samantha Bieck, and Kate Mosten for their assistance in data collection and data management.

Funding

This work was funded by St. Baldrick’s Foundation. The first author was also supported by grants from the National Cancer Institute at the National Institutes of Health (K07CA174728 and P30CA072720). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Conflicts of interest: None declared.

References

1. Asher S. R., Singleton L. C., Tinsley B. R., Hymel S. (1979). A reliable sociometric measure for preschool children. Developmental Psychology, 15, 443–444.
2. Bagwell C. L., Newcomb A. F., Bukowski W. M. (1998). Preadolescent friendship and peer rejection as predictors of adult adjustment. Child Development, 69, 140–153.
3. Browne M., Cudeck R. (1993). Alternative ways of assessing model fit. In Bollen K. A., Long J. S. (Eds.), Testing structural equation models (pp. 136–162). Newbury Park, CA: Sage Publications.
4. Bukowski W. M., Adams R. (2005). Peer relationships and psychopathology: Markers, moderators, mediators, mechanisms, and meanings. Journal of Clinical Child and Adolescent Psychology, 34, 3–10.
5. Bukowski W. M., Cillessen A. H. N., Velásquez A. M. (2012). Peer ratings. In Laursen B., Little T. D., Card N. A. (Eds.), Handbook of developmental research methods (pp. 211–228). New York, NY: Guilford Press.
6. Bukowski W. M., Hoza B. (1989). Popularity and friendship: Issues in theory, measurement, and outcome. In Berndt T. J., Ladd G. W. (Eds.), Peer relationships in child development (pp. 15–45). New York, NY: John Wiley & Sons.
7. Cella D., Yount S., Rothrock N., Gershon R., Cook K., Reeve B., Ader D., Fries J. F., Bruce B., Rose M. (2007). The Patient-Reported Outcomes Measurement Information System (PROMIS): Progress of an NIH Roadmap cooperative group during its first two years. Medical Care, 45, S3–S11.
8. Cohen J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
9. Coie J. D. (1990). Toward a theory of peer rejection. In Asher S. R., Coie J. D. (Eds.), Peer rejection in childhood (pp. 365–401). Cambridge, England: Cambridge University Press.
10. Crowe L. M., Beauchamp M. H., Catroppa C., Anderson V. (2011). Social function assessment tools for children and adolescents: A systematic review from 1988 to 2010. Clinical Psychology Review, 31, 767–785.
11. Devine K. A., Bukowski W. M., Sahler O. J. Z., Ohman-Strickland P., Smith T. H., Lown E. A., Patenaude A. F., Korones D. N., Noll R. B. (2016). Social competence in childhood brain tumor survivors: Feasibility and preliminary outcomes of a peer-mediated intervention. Journal of Developmental & Behavioral Pediatrics, 37, 475–482.
12. DeWalt D. A., Gross H. E., Gipson D. S., Selewski D. T., DeWitt E. M., Dampier C. D., Hinds P. S., Huang I-C., Thissen D., Varni J. W. (2015). PROMIS® pediatric self-report scales distinguish subgroups of children within and across six common pediatric chronic health conditions. Quality of Life Research, 24, 2195–2208.
13. DeWalt D. A., Thissen D., Stucky B. D., Langer M. M., Morgan DeWitt E., Irwin D. E., Lai J. S., Yeatts K. B., Gross H. E., Taylor O., Varni J. W. (2013). PROMIS Pediatric Peer Relationships Scale: Development of a peer relationships item bank as part of social health measurement. Health Psychology, 32, 1093–1103.
14. Dunn T. J., Baguley T., Brunsden V. (2014). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology, 105, 399–412.
15. Fabrigar L. R., Wegener D. T., MacCallum R. C., Strahan E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272.
16. Floyd F. J., Widaman K. F. (1995). Factor analysis in the development and refinement of clinical assessment instruments. Psychological Assessment, 7, 286.
17. Garcia S. F., Cella D., Clauser S. B., Flynn K. E., Lad T., Lai J.-S., Reeve B. B., Smith A. W., Stone A. A., Weinfurt K. (2007). Standardizing patient-reported outcomes assessment in cancer clinical trials: A Patient-Reported Outcomes Measurement Information System initiative. Journal of Clinical Oncology, 25, 5106–5112.
18. Gest S. D., Sesma A. Jr, Masten A. S., Tellegen A. (2006). Childhood peer reputation as a predictor of competence and symptoms 10 years later. Journal of Abnormal Child Psychology, 34, 507–526.
19. Gifford-Smith M. E., Brownell C. A. (2003). Childhood peer relationships: Social acceptance, friendships, and peer networks. Journal of School Psychology, 41, 235–284.
20. Horn J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179–185.
21. Hoza B., Gerdes A. C., Mrug S., Hinshaw S. P., Bukowski W. M., Gold J. A., Arnold L. E., Abikoff H. B., Conners C. K., Elliott G. R., Greenhill L. L., Hechtman L., Jensen P. S., Kraemer H. C., March J. S., Newcorn J. H., Severe J. B., Swanson J. M., Vitiello B., Wells K. C., Wigal T. (2005). Peer-assessed outcomes in the multimodal treatment study of children with attention deficit hyperactivity disorder. Journal of Clinical Child and Adolescent Psychology, 34, 74–86.
22. Hu L., Bentler P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6, 1–55.
23. Hymel S., Rubin K. H., Rowden L., LeMare L. (1990). Children's peer relationships: Longitudinal prediction of internalizing and externalizing problems from middle childhood. Child Development, 61, 2004–2021.
24. Irwin D. E., Gross H. E., Stucky B. D., Thissen D., DeWitt E., Lai J., Amtmann D., Khastou L., Varni J. W., DeWalt D. A. (2012). Development of six PROMIS pediatrics proxy-report item banks. Health and Quality of Life Outcomes, 10, 22.
25. Irwin D. E., Stucky B. D., Thissen D., DeWitt E. M., Lai J. S., Yeatts K., Varni J. W., DeWalt D. A. (2010). Sampling plan and patient characteristics of the PROMIS pediatrics large-scale survey. Quality of Life Research, 19, 585–594.
26. Ladd G. W., Troop-Gordon W. (2003). The role of chronic peer difficulties in the development of children's psychological adjustment problems. Child Development, 74, 1344–1367.
27. Lansford J. E., Dodge K. A., Fontaine R. G., Bates J. E., Pettit G. S. (2014). Peer rejection, affiliation with deviant peers, delinquency, and risky sexual behavior. Journal of Youth and Adolescence, 43, 1742–1751.
28. Masten A. S., Morison P., Pellegrini D. S. (1985). A revised class play method of peer assessment. Developmental Psychology, 21, 523–533.
29. Morison P., Masten A. S. (1991). Peer reputation in middle childhood as a predictor of adaptation in adolescence: A seven-year follow-up. Child Development, 62, 991–1007.
30. Muthén L., Muthén B. (1998–2015). Mplus user's guide (7th ed.). Los Angeles, CA: Muthén & Muthén.
31. Noll R. B., Bukowski W. M. (2012). Commentary: Social competence in children with chronic illness: The devil is in the details. Journal of Pediatric Psychology, 37, 959–966.
32. Parker J. G., Asher S. R. (1987). Peer relations and later personal adjustment: Are low-accepted children at risk? Psychological Bulletin, 102, 357–389.
33. Prinstein M. J., Boergers J., Spirito A., Little T. D., Grapentine W. J. (2000). Peer functioning, family dysfunction, and psychological symptoms in a risk model for adolescent inpatients' suicidal ideation severity. Journal of Clinical Child Psychology, 29, 392–405.
34. Reeve B. B., Hays R. D., Bjorner J. B., Cook K. F., Crane P. K., Teresi J. A., Thissen D., Revicki D. A., Weiss D. J., Hambleton R. K., Liu H., Gershon R., Reise S. P., Lai J. S., Cella D.; PROMIS Cooperative Group. (2007). Psychometric evaluation and calibration of health-related quality of life item banks: Plans for the Patient-Reported Outcomes Measurement Information System (PROMIS). Medical Care, 45, S22–S31.
35. Rubin K. H. (1993). The Waterloo Longitudinal Project: Correlates and consequences of social withdrawal from childhood to adolescence. In Rubin K. H., Asendorpf J. B. (Eds.), Social withdrawal, inhibition, and shyness in childhood (pp. 291–314). Hillsdale, NJ: Erlbaum.
36. Rubin K. H., Bukowski W. M., Bowker J. C. (2015). Children in peer groups. In Bornstein M., Leventhal T. (Eds.), Handbook of child psychology and developmental science (7th ed., Vol. 4, pp. 175–222). New York, NY: Wiley.
37. Salley C. G., Gerhardt C. A., Fairclough D. L., Patenaude A. F., Kupst M. J., Barrera M., Vannatta K. (2014). Social self-perception among pediatric brain tumor survivors compared to peers. Journal of Developmental and Behavioral Pediatrics, 35, 427–434.
38. Salley C. G., Hewitt L. L., Patenaude A. F., Vasey M. W., Yeates K. O., Gerhardt C. A., Vannatta K. (2015). Temperament and social behavior in pediatric brain tumor survivors and comparison peers. Journal of Pediatric Psychology, 40, 297–308.
39. Schulte F., Barrera M. (2010). Social competence in childhood brain tumor survivors: A comprehensive review. Supportive Care in Cancer, 18, 1499–1513.
40. Selewski D. T., Collier D. N., MacHardy J., Gross H. E., Pickens E. M., Cooper A. W., Bullock S., Earls M. F., Pratt K. J., Scanlon K., McNeill J. D., Messer K. L., Lu Y., Thissen D., DeWalt D. A., Gipson D. S. (2013). Promising insights in the health related quality of life for children with severe obesity. Health and Quality of Life Outcomes, 11, 29.
41. Thomas S. L., Heck R. H. (2001). Analysis of large-scale secondary data in higher education research: Potential perils associated with complex sampling designs. Research in Higher Education, 42, 517–540.
42. Varni J. W., Magnus B., Stucky B. D., Liu Y., Quinn H., Thissen D., Gross H. E., Huang I-C., DeWalt D. A. (2014). Psychometric properties of the PROMIS® pediatric scales: Precision, stability, and comparison of different scoring and administration options. Quality of Life Research, 23, 1233–1243.
43. Velásquez A. M., Bukowski W. M., Saldarriaga L. M. (2013). Adjusting for group size effects in peer nomination data. Social Development, 22, 845–863.
44. Williams B., Onsman A., Brown T. (2010). Exploratory factor analysis: A five-step guide for novices. Australasian Journal of Paramedicine, 8, 1–13.
45. Zeller M., Vannatta K., Schafer J., Noll R. B. (2003). Behavioral reputation: A cross-age perspective. Developmental Psychology, 39, 129–139.
