Author manuscript; available in PMC: 2013 Jul 8.
Published in final edited form as: J Community Psychol. 2012 Apr 4;40(3):277–291. doi: 10.1002/jcop.20508

A New Measure for Assessing Youth Program Participation

Jennifer Sarah Tiffany 1, Deinera Exner-Cortens 1, John Eckenrode 1
PMCID: PMC3703671  NIHMSID: NIHMS441801  PMID: 23843679

Abstract

Participation in after-school programs is an important lever to improve adolescents’ health and well-being; however, well-defined measurement of the quality of participation in these programs is limited. The present study validated a newly designed measure of participation in a sample of urban youth enrolled in community-based after-school programs. Exploratory and confirmatory factor analyses were used to test the structure of the 20-item Tiffany-Eckenrode Program Participation Scale (TEPPS). Results suggest that the scale comprises four subscales (Personal Development, Voice/Influence, Safety/Support, and Community Engagement). The TEPPS was also correlated with several commonly used measures of program participation. Findings from this paper support the use of the newly designed scale as a valid and reliable measure of quality program participation by youth.

Keywords: participation, adolescent sexual health, HIV risk reduction, positive youth development, factor analysis


Adolescents and emerging adults face a wide array of health issues (e.g., obesity, unintended pregnancy, STIs/HIV, violence/bullying, drugs/alcohol), as well as opportunities for health promotion and achieving health equity. In the United States, participation in after-school programs may provide opportunities for meaningful choice and voice by promoting active engagement by youth in the systems and decisions that affect their lives, and hold potential as a means of improving young people’s health outcomes (Larson & Angus, 2011; Pittman, Irby, Tolman, Yohalem, & Ferber, 2003; Wallerstein, 2006). In this paper, we briefly review current approaches to measuring program participation and the conceptual frameworks that inform them, including recent efforts to better assess the quality and characteristics of participation. We then report on our work to develop a theoretically-grounded and reliable measure of youth participation in the context of a study of adolescent sexual health promotion.

Frameworks for Understanding Adolescent Program Participation

Organized activities for young people comprise a key setting for positive youth development efforts (Shinn & Yoshikawa, 2008). Settings that promote positive youth development assist young people to build the knowledge and behaviors needed for a healthy, engaged and productive adolescence through creating social contexts rich in the services, opportunities and supports that foster these capabilities (Pittman et al., 2003). The theoretical frameworks that provide the foundation for positive youth development include Bronfenbrenner’s ecology of human development, in which youth are seen as engaged in a dynamic interaction with their environments that may aid or stifle their development; community-level prevention approaches that seek to change environmental as well as individual risk factors; and resiliency research regarding the role of protective factors in ameliorating risk (Bronfenbrenner & Morris, 1998; Hawkins, Catalano, & Arthur, 2002; Werner & Smith, 1992). Building on these conceptual frameworks, positive youth development emphasizes changes in both individual and setting-level characteristics that will enhance young people’s resilience and healthy maturation. After-school programs represent one of those developmentally relevant settings. As such, measurement of the extent and characteristics of youth program participation can inform and guide these positive youth development efforts.

Related literatures serve to further identify specific aspects of program participation that may relate to healthy adolescent development. Empowerment approaches to health promotion describe “the quality and intensity of active involvement” and “involvement of participants as decision-makers and social change agents” as key elements in strategies linked to improved health outcomes for youth (Wallerstein, 2006, p. 12). For example, HIV prevention strategies that enhance participation and specifically address gender inequality may hold promise relative to more traditional approaches to health education (Wallerstein, 2006; see also Minkler & Wallerstein, 2003; O’Donoghue, Kirshner, & McLaughlin, 2006).

Such theoretical examinations of participation also highlight subjective elements such as a sense of agency, and contextual elements such as opportunities for problem-solving, both of which may be subject to measurement and intervention (Larson & Angus, 2011; Smith, Peck, Denault, Blazevski, & Akiva, 2010). In this context, problem-solving involves young people’s active participation in defining and addressing issues pertinent to their lives (Pittman et al., 2003). Prilleltensky, Nelson and Peirson (2001) theorize that political and psychological “power and control are…key instruments in the promotion of resilience” and thus contribute to positive health outcomes for children and youth (p. 151).

Increasingly, research has identified significant associations between positive youth development constructs and youth health promotion/risk reduction outcomes. Peterson and others have studied the contributions of psychological empowerment and sociopolitical control to individual risk reduction as well as to community attachment and civic participation (Peterson, Peterson, Agre, Christens, & Morton, 2011; Zimmerman, 1995). Recent examinations of Add Health data also suggest that developmental assets such as autonomy, empathy, and self-esteem are associated with sexual health during emerging adulthood (Galinsky & Sonenstein, 2011). Finally, a sense of agency appears to reduce the risk of unintended pregnancy, increase capacity to avoid risky situations, and contribute to positive sexual health outcomes among young women (Schalet, 2009). According to Larson and Angus (2011), agency involves “skills for steering towards achievement of meaningful and challenging real-world goals,” and relies on a complex set of capabilities to “predict and influence the environment” that can be deliberately nurtured and developed by youth programs (p. 290). This conceptual and empirical body of work underscores the value of drawing on several theoretical frameworks when developing measures of program participation among adolescents.

Measurement

In their recent review of research on youth participation in organized activities, Bohnert, Fredricks and Randall (2010) summarized current methods of measuring intensity, duration, breadth and engagement, and presented recommendations for best practices regarding measurement. According to their review, youth participation in prevention and youth development programs is currently measured primarily in terms of time (i.e., hours per week spent at a center or on an activity, duration of involvement with a program), with some studies also examining breadth of involvement (i.e., the number of different activities in which a youth took part) (see also Gardner, Roth, & Brooks-Gunn, 2008; Klein et al., 2006; Linver, Roth, & Brooks-Gunn, 2009; Randall & Bohnert, 2009). A number of studies also assess participation using an approach that combines measures of both intensity (hours per week) and breadth or diversity of activities in which youth are engaged (Bohnert et al., 2010). In sum, most commonly used measures of participation focus on quantity (i.e., time on task), while relatively few studies have developed measures that integrate intensity, duration and/or breadth, or report on correlations among these dimensions of participation (Bohnert et al., 2010). The dearth of well-developed measures addressing the quality of participation as experienced by youth reduces the ability of researchers, program providers, and policy-makers to assess the impacts of participation on health outcomes.

There is also increasing support for supplementing measures of quantity (e.g., number of hours, number of programs) with an examination of the quality of participation in these programs, as measured by participant engagement (Bohnert et al., 2010; Hirsch, Mekinda, & Stawicki, 2010; Weiss, Little, & Bouffard, 2005). Bohnert and colleagues (2010) affirm that “scholars argue that [engagement] is the missing link in organized activity research” (p. 593), and some studies find that engagement predicts the quality of program implementation as well as youth outcomes (Hirsch et al., 2010; Shernoff, 2010). Currently, measurement approaches to individual engagement generally rely on short instruments completed by facilitators or teachers regarding individual youth, including brief observational assessments (Bohnert et al., 2010; Mahoney, Lord, & Carryl, 2005; Mahoney, Parente, & Lord, 2007; McGuire & Gamble, 2006; Yohalem & Wilson-Ahlstrom, 2010). More innovative approaches may obtain individual-level responses from youths themselves. For example, Shernoff (2010) investigated the impact of specific characteristics of youths’ experiences using an experience sampling method, in which middle school students were randomly prompted to report on key characteristics of their current activity, and found that the quality of participation in program activities was significantly related to social competence and academic achievement.

Research informed by ecological-developmental theories also provides groundwork for a more nuanced approach to the measurement of quality of participation (Durlak, Mahoney, Bohnert, & Parente, 2010). Smith and colleagues (2010) build on Bronfenbrenner’s (1999) insights about the importance of understanding ecological context and concentrate on assessing point-of-service interactions and the characteristics of micro-settings (i.e., specific programs, rather than larger agencies). This framework enables researchers to explore the interactions between settings and the youth who participate in them, bringing contextual and developmental nuances to the fore (Riggs & Greenberg, 2004).

In contrast to a measurement framework that solely emphasizes intensity, duration and breadth of involvement, an ecological-developmental frame of reference also considers structural features of programs, described as program characteristics in Bohnert et al.’s (2010) review. From an ecological-developmental perspective, high quality youth participation results from structural elements like having supportive adult program staff/facilitators, learning opportunities, safety, and voice in program decision-making. To assess overall quality of participation, then, it is important to measure both individual- (e.g., intensity of participation) and program-level (e.g., presence of supportive adults) characteristics. The current measure attempts to integrate these facets of measurement, to provide a more comprehensive assessment of program participation.

The Present Study

The present study was initiated in 2005 as a community-based participatory research (CBPR) project involving a university, state government agencies, and community-based organizations receiving state support for providing after-school programs for adolescents in New York City. One of our primary aims was to develop a brief, plain-language, reliable scale of quality of program participation that could facilitate research in the field, as well as provide program staff with a practical tool to assess, evaluate and improve practices related to youth participation.

Our approach to developing a measure of participation emphasized the quality of the experiences and structural elements of the programs in which youth were engaged, rather than simply the amount of time spent in the program or the variety of the youths’ activities. The ecological-developmental literature supported the inclusion of items assessing social connectedness as well as items assessing linkages between program activities and other ecological contexts (e.g., peer groups, schools). This literature also supported our focus on assessing youth experiences within micro-settings, in this case the specific programs provided by agencies (Bronfenbrenner, 1999; Smith et al., 2010). We also developed items measuring characteristics of programs that foster positive youth development, such as young people’s sense of safety within the program and relationships with adult staff. Hart’s (1997) and Wallerstein’s (2006) insights regarding participation and youth empowerment guided the development of items assessing youth experiences of agency within programs. We also reviewed instruments such as the “Youth Engagement Tool/Youth Adult Leaders for Program Excellence” assessment guide (Camino, Zeldin, Mook, & O’Connor, 2004), which offer models for describing positive youth development practices using youth-friendly language.

The CBPR partnership conducted a pilot study, with data gathered in early 2006; a longitudinal study (2007–2009) which supplied the baseline data reported in this paper; and subsequent qualitative studies of the constructs included in the participation measure as well as projects aimed at translating newly developed instruments for use in routine evaluation and program improvement efforts. The action research methods used throughout the study were consistent with the emphasis on ecological context, active engagement, and equity/power sharing articulated in the theoretical frameworks on which we relied. The present paper reports on development of a new scale for measuring theoretically significant characteristics of program participation, the Tiffany-Eckenrode Program Participation Scale (TEPPS).

Methods

Preliminary Instrument Design

Pilot data gathered from 98 participants during 2006 suggested adequate reliability and validity of the newly developed TEPPS, and laid the foundation for an expansion of the scope of the study and of the CBPR partnership in 2007 (Tiffany et al., 2007a; Tiffany, Prado, Eckenrode, Birnel & Bat-Chava, 2009; Tiffany et al., 2007b). The overall goal of the larger study was to further validate the TEPPS and to assess the extent to which active program participation could serve as a lever to promote adolescent sexual health, HIV risk reduction, and social connectedness.

The initial version of the TEPPS included 21 items. Based on data gathered during the 2006 pilot study, we excluded three items that did not contribute substantively to the measure and, drawing on feedback from youth involved in the pilot study, added items addressing additional opportunities for individual development. The resulting 21-item TEPPS was administered to 331 adolescents (ages 12–18) in NYC during early 2008 as part of our longitudinal study, with follow-up assessments conducted six and twelve months later. We retained 91% of participants at six- and twelve-month follow-up. Baseline data gathered during early 2008 were analyzed to generate this report.

Participants

To be eligible to participate in the study, adolescents needed to be currently engaged in an after-school program at one of eight community-based agencies in New York City. The youth involved in the study were primarily participants in small group activities (e.g., academic enrichment programs, peer education efforts, and/or discussion groups addressing gender issues). Youth enrolled in some larger recreational programs offered by agencies were also recruited, enabling us to assess youth experiences in a range of programs representing the heterogeneity of after-school activities (Riggs & Greenberg, 2004). In most cases, the after-school program was small enough that all enrolled youth were included in the sample. In cases where surveying all youth in a program was not feasible (e.g., large sports or recreational programs), a convenience sample of youth was recruited into the survey. Sample demographics at baseline are shown in Table 1.

Table 1.

Demographics of Participants at Baseline (n=331)

Age, mean (SD) 15.25 (1.33)
Gender
Male 37.5% (n=124)
Female 61.9% (n=205)
Transgender male-to-female 0.6% (n=2)^
Race/Ethnicity
Black Hispanic 12.1% (n=40)
Black, non-Hispanic 42.3% (n=140)
Other race, Hispanic+ 3.0% (n=10)
Other race, non-Hispanic 7.3% (n=24)
More than one race, Hispanic 26.0% (n=86)
More than one race, non-Hispanic 9.1% (n=30)
Unknown 0.3% (n=1)
Sexual Orientation
Straight 75.8% (n=250)
Gay/lesbian 10.3% (n=34)
Bisexual 9.7% (n=32)
Questioning/Unsure 4.2% (n=14)

Percentages may not add to 100 due to rounding error.

^ Transgender (male to female) individuals were considered females in data analyses due to small cell size.

+ Other race categories include White, Asian, American Indian/Alaska Native, and Native Hawaiian or other Pacific Islander.

Procedures

Data for the present study were collected over a 16-week period beginning in January 2008. The instrument was administered as a computer-based survey in all programs, except one program which lacked access to a computer lab. Participants at this program completed a paper-and-pencil version. The survey included both open- and closed-ended items, and took approximately 25–45 minutes to complete. In addition to measures of participation, the survey instrument assessed sexual health promotion and HIV risk reduction practices, HIV-related knowledge and worry, utilization of health care services including HIV testing, family connectedness, school and community connectedness, ethnic identity, parenting style and parental monitoring, experiences of violence and abuse, perceived levels of distress, internal assets, and community resources. All study procedures and instruments were reviewed and approved by Cornell University’s Institutional Review Board for Human Participants.

Measures

Program participation

Quality of program participation

Characteristics related to the quality of program participation were measured using the TEPPS. Scale items are listed in Table 2. Responses to the participation items were measured on a 5-point Likert-type scale, ranging from “not at all true for me (1)” to “very true for me (5)”. Higher TEPPS scores indicate higher levels of program participation.

Table 2.

Descriptive Statistics for the 20-item TEPPS (n=273)

Mean (SD) Minimum Maximum
Adults in the program listen to what I have to say. 4.45 (0.79) 2 5
I help decide things like program activities or rules. 3.00 (1.28) 1 5
I feel I have a lot of voice/power to influence decisions about the program. 3.46 (1.16) 1 5
I am very involved in program activities. 4.22 (0.97) 1 5
The program’s activities are challenging and interesting. 3.91 (1.01) 1 5
I learn a lot from participating in the program. 4.36 (0.92) 1 5
I think that participating in the program will help me to get a job. 4.39 (0.93) 1 5
I think that participating in the program will help me to continue my education. 4.41 (0.95) 1 5
Adults at the program respect me. 4.60 (0.75) 1 5
Staff at the program pay attention to what’s going on in my life. 4.08 (1.01) 1 5
It was easy for me to get involved in the program. 4.40 (0.94) 1 5
I feel close to at least one staff member at the program. 4.26 (1.00) 1 5
There’s at least one staff member that I can go to for support or help with a problem. 4.47 (0.89) 1 5
I have friends who also take part in the program. 4.71 (0.72) 1 5
The program finds ways to involve my family. 3.49 (1.22) 1 5
The program and my school work together to offer activities and services. 3.02 (1.52) 1 5
The program has had a positive influence on how people in my community treat me. 3.50 (1.22) 1 5
The program has had a positive influence on how I treat people from my neighborhood. 3.75 (1.16) 1 5
I usually feel safe when I am involved in program activities. 4.60 (0.77) 1 5
I plan to work on community issues after I stop participating in the program. 3.46 (1.27) 1 5
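As described above, TEPPS items are answered on a 5-point Likert scale and summed into a total score, giving a possible range of 20 to 100. A minimal scoring sketch (our illustration, not the authors' code; the function name is an assumption):

```python
def score_tepps(responses):
    """Sum the 20 TEPPS Likert items.

    Each item is scored 1 ("not at all true for me") through
    5 ("very true for me"), so totals range from 20 to 100;
    higher totals indicate higher-quality program participation.
    """
    if len(responses) != 20:
        raise ValueError("the TEPPS has 20 items")
    if any(r not in (1, 2, 3, 4, 5) for r in responses):
        raise ValueError("each response must be an integer from 1 to 5")
    return sum(responses)

print(score_tepps([5] * 20))  # 100
print(score_tepps([1] * 20))  # 20
```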
Intensity

Participants were asked to report the number of hours per week they spent at the program on a 4-point Likert-type scale, ranging from “less than 1 hour per week (1)” to “10 hours or more (4)”.

Duration

Participants were asked how long they had been involved with the program on a 5-point Likert-type scale, ranging from “less than 1 month (1)” to “greater than 1 year (5)”.

Breadth

In this study, rather than looking at the number of different activities in which youth were involved, we used the number and characteristics of roles reported by youth as a measure of breadth of participation. Youth reported their roles within programs using a checklist. These roles were coded and analyzed in relation to the level of engagement and opportunities for influence on decision-making that they offered youth (for example, simply taking part in activities ranked lower in engagement than reviewing educational material, which in turn ranked lower than serving on a committee that interviewed and hired staff).
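This kind of ordinal coding of roles might be sketched as follows; the role names and ranks below are hypothetical stand-ins, since the study's actual codebook is not reproduced in the paper:

```python
# Hypothetical role-to-engagement ranks (illustrative assumptions, not
# the study's codebook): a higher rank reflects more engagement and
# more influence on program decision-making.
ROLE_RANK = {
    "took part in activities": 1,
    "reviewed educational material": 2,
    "served on a hiring committee": 3,
}

def code_breadth(roles_checked):
    """Summarize a youth's role checklist: the number of roles
    reported and the highest engagement rank among them."""
    ranks = [ROLE_RANK[role] for role in roles_checked]
    return {"n_roles": len(ranks), "max_engagement": max(ranks, default=0)}

print(code_breadth(["took part in activities", "served on a hiring committee"]))
# {'n_roles': 2, 'max_engagement': 3}
```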

Family connectedness

Family connectedness was measured using an 8-item scale developed for this study. This scale was based on the work of Whitlock (2004). Example items include “someone in my family tells me when I do a good job” and “I have a close relationship with at least one of my parents or guardians”. Responses to the family connectedness scale were measured on a 5-point Likert-type scale, ranging from “strongly disagree (1)” to “strongly agree (5)”. Higher scores indicate higher levels of connectedness; scores ranged from 8 to 40. Cronbach’s alpha for this sample was 0.90.

Demographics

Participants completed several demographic questions, including questions on race, ethnicity, age, gender and sexual orientation.

Missing Data

The rate of missing data on individual items in the TEPPS was relatively low, with percent missing ranging from 0–3%. However, in the baseline sample, 17.5% of participants (n=58) had missing data on at least one of the variables that comprised the participation scale. Thus, complete case analysis reduced the eligible sample to 273 participants.

In order to explore missingness, and its potential impact on results, individuals with any missing data were compared to individuals with no missing data. Participants with missing data spent slightly less time at the program each week (2.30 vs. 2.53 hours, p=.048) and had slightly higher scores on the family connectedness scale (32.4 vs. 30.0, p=.035). There were no significant differences between missing and non-missing individuals on age, gender, race, ethnicity, sexual orientation, or duration or breadth of program involvement.

Analysis

To determine the structure of the participation scale, we first used exploratory factor analysis (EFA). Since it is theoretically plausible that subscales within the TEPPS are correlated, we ran EFA using both orthogonal (Varimax) and oblique (Direct Oblimin) rotations (Pedhazur & Schmelkin, 1992). As the oblique rotation suggested a correlated factor structure, these results were retained and are presented here. Based on our eligible sample size (n=273), factor loadings above 0.326 were considered significant (Stevens, 2002, p. 394). To help determine the optimal number of factors to retain, we compared the results of Kaiser’s criterion, the scree plot, parallel analysis (PA) and Velicer’s minimum average partial (MAP) test (Hayton, Allen, & Scarpello, 2004; Nelson, 2005; O’Connor, 2000). We assessed reliability using Cronbach’s alpha, and assessed convergent validity using correlations. Results were evaluated at p < .05.
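Of the retention criteria named above, parallel analysis is the least self-explanatory: a factor is retained when its observed eigenvalue exceeds the corresponding eigenvalue expected from random data of the same dimensions. A simplified NumPy sketch of the idea (not the O'Connor (2000) implementation used in the study):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis, simplified: retain factor k if the k-th
    eigenvalue of the observed correlation matrix exceeds the mean k-th
    eigenvalue across correlation matrices of random normal data with
    the same shape (n respondents x p items)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        rand = rng.standard_normal((n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False)))[::-1]
    return int(np.sum(obs > sims.mean(axis=0)))

# Synthetic example: two correlated blocks of four items each, with the
# study's n of 273, should yield two retained factors.
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((2, 273, 1))
block1 = f1 + 0.5 * rng.standard_normal((273, 4))
block2 = f2 + 0.5 * rng.standard_normal((273, 4))
X = np.hstack([block1, block2])
print(parallel_analysis(X))  # 2
```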

Following EFA, we used confirmatory factor analysis (CFA) to test a 1-factor and 4-factor model of the participation scale. Because of the non-normality of several of the TEPPS variables, we estimated these models using maximum likelihood estimation with robust standard errors (MLR; Yuan & Bentler, 2000). Fit indices used included the Satorra-Bentler scaled chi-square (χ2; Satorra & Bentler, 2001), the normed chi-square test (χ2/df), the comparative fit index (CFI; Bentler, 1990), the Tucker-Lewis index (TLI; Tucker & Lewis, 1973), the standardized root-mean-square residual (SRMR; Jöreskog & Sörbom, 1981), and the root-mean-square error of approximation (RMSEA; Steiger, 1989). Acceptable model fit is indicated by CFI and TLI ≥ .95, SRMR ≤ .08, and RMSEA ≤ .06 (Hu & Bentler, 1998). While no firm cut-off for χ2/df exists, a value <2.0 is often used to indicate adequate model fit (Marsh, Balla, & McDonald, 1988). To compare the two models, we used Akaike’s information criteria (AIC); models with lower AIC values are preferred. All analyses except CFA were performed in SPSS V18. CFA was conducted in MPlus V6.
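The fit cutoffs listed above can be collected into a small helper for screening a reported solution (illustrative only; the cutoffs come from Hu & Bentler (1998) and the χ²/df < 2.0 convention, and the example values are the four-factor results reported in this paper):

```python
def fit_acceptable(chi2, df, cfi, tli, srmr, rmsea):
    """Check a CFA solution against the cutoffs used in this paper:
    chi2/df < 2.0, CFI and TLI >= .95, SRMR <= .08, RMSEA <= .06.
    Returns one True/False flag per index."""
    return {
        "normed_chi2": chi2 / df < 2.0,
        "cfi": cfi >= 0.95,
        "tli": tli >= 0.95,
        "srmr": srmr <= 0.08,
        "rmsea": rmsea <= 0.06,
    }

# Four-factor model from the Results section: the normed chi-square,
# SRMR, and RMSEA meet their cutoffs, while CFI and TLI fall just short.
checks = fit_acceptable(chi2=284.25, df=161, cfi=0.920, tli=0.905,
                        srmr=0.055, rmsea=0.048)
print(checks)
```

The mixed pattern of flags matches the paper's cautious wording that the four-factor model provides a "more adequate" fit rather than an unambiguously good one.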

Results

Exploratory Factor Analysis

As part of preliminary analyses conducted prior to EFA, correlations between the 21 items of the TEPPS were examined. Due to problems with collinearity, one variable (“There is someone at the program I can talk to”) was dropped from the scale. This deletion resulted in a final scale containing 20 items; descriptive statistics for these items are listed in Table 2.

To determine the number of factors to extract, four criteria were used. Velicer’s MAP procedure and the scree plot suggested we retain one factor; PA suggested we retain two factors; and Kaiser’s rule suggested we retain five factors. Since the MAP procedure may underfactor while the scree plot, PA and Kaiser’s rule may overfactor, and since Kaiser’s rule is considered the upper bound of the number of factors to be retained (Hayton et al., 2004), we also examined the pattern matrices for the one-, two-, three-, four- and five-factor solutions. Using knowledge from the factor retention criteria, as well as the results for the various solutions, we concluded that while a one-factor solution fits these data, a four-factor solution is also plausible. The one-factor solution explained 32.4% of the total variance, and the four-factor solution explained 53.0% of the total variance. In the one-factor solution, all factor loadings were significant except the loading for “I have friends who also take part in the program”; however, because of the theoretical importance of this item to our measure of participation, we retained this item in the final scale.1 The mean for the full scale was 80.54 (SD=11.32, range: 40–99).

Results from the four-factor pattern matrix are presented in Table 3. Based on item content, the four subscales of the TEPPS were labeled Personal Development (PD), Voice/Influence (VI), Safety/Support (SS), and Community Engagement (CE). The Personal Development subscale contained 7 items, and explained 32.4% of the total variance. Voice/Influence contained 4 items, and explained 7.5% of the variance; Safety/Support contained 4 items, and explained 6.9% of the variance; and Community Engagement contained 5 items, and explained 6.2% of the variance. Correlations between each of the four subscales are listed in Table 4.

Table 3.

Four-factor Solution for the TEPPS

Factor 1: PD Factor 2: VI Factor 3: SS Factor 4: CE
The program’s activities are challenging and interesting. .775
I think that participating in the program will help me to continue my education. .743
I learn a lot from participating in the program. .713
Staff at the program pay attention to what’s going on in my life. .589
I think that participating in the program will help me to get a job. .577
Adults at the program respect me. .522
Adults in the program listen to what I have to say. .497
I help decide things like program activities or rules. .737
I feel I have a lot of voice/power to influence decisions about the program. .668
It was easy for me to get involved in the program. .567
I am very involved in program activities. .357
I have friends who also take part in the program. .764
I usually feel safe when I am involved in program activities. .632
There’s at least one staff member that I can go to for support or help with a problem. .535
I feel close to at least one staff member at the program. .525
The program has had a positive influence on how people in my community treat me. .724
The program and my school work together to offer activities and services. .702
The program has had a positive influence on how I treat people from my neighborhood. .598
The program finds ways to involve my family. .467
I plan to work on community issues after I stop participating in the program. .303
Mean (SD) 30.07(4.49) 14.98(3.11) 17.76(2.68) 17.14(4.21)
Sub-scale minimum 12 6 8 5
Sub-scale maximum 35 20 20 25
Cronbach’s alpha .82 .66 .73 .68

PD: Personal Development; VI: Voice/Influence; SS: Safety/Support; CE: Community Engagement

Table 4.

Correlations between subscales of the TEPPS

Factor 1: PD Factor 2: VI Factor 3: SS Factor 4: CE
Factor 1: PD 1.00
Factor 2: VI .266* 1.00
Factor 3: SS .290* .185^ 1.00
Factor 4: CE .361* .143+ .193^ 1.00
* p < .001; ^ p < .01; + p < .05

Confirmatory Factor Analysis

In order to test whether the one- or four-factor solution best fit these data, CFA was performed. From the assessment of the various fit statistics, it appeared the four-factor model (χ2(161, N=331)=284.25, p<.001; χ2/df=1.77; CFI=.920; TLI=.905; RMSEA=.048; SRMR=.055; AIC=17035.50) provided a more adequate fit to these data than the one-factor model (χ2(167, N=331)=390.09, p<.001; χ2/df=2.34; CFI=.854; TLI=.834; RMSEA=.064; SRMR=.059; AIC=17164.96); however, this result needs to be replicated on additional samples before conclusions can be made regarding the optimal structure of this scale.

Reliability and Validity

Cronbach’s alpha for the one-factor solution was 0.87. For the four-factor solution, Cronbach’s alpha values were 0.82 (PD), 0.66 (VI), 0.73 (SS), and 0.68 (CE) (Table 3). These values all indicate adequate reliability.
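For reference, Cronbach's alpha can be computed directly from item-level scores as α = k/(k−1) · (1 − Σ item variances / variance of the total score). A generic NumPy sketch, not tied to the study data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Sanity check: four perfectly consistent (identical) items give alpha = 1.
identical = np.tile(np.array([[1], [3], [5], [2], [4]]), (1, 4))
print(round(cronbach_alpha(identical), 2))  # 1.0
```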

To test convergent validity, we ran correlations between the total scale and each of the four subscales with measures of intensity, duration, and breadth of program participation, as well as a scale measuring family connectedness. Family connectedness is positively correlated with youth development and adolescent sexual health outcome measures in other studies (Kirby & Miller, 2002).

As shown in Table 5, factor-based scores on the total participation scale were positively correlated with the number of hours per week the participant spent at the program (intensity) (r=.181, p < .01), the number of roles the participant had at the program (breadth) (r=.169, p < .01) and the participant’s level of family connectedness (r=.270, p < .001). Scores on the Personal Development, Safety/Support and Community Engagement subscales were also significantly correlated with intensity and family connectedness, but were not correlated with breadth. Scores on the Voice/Influence subscale were correlated with breadth (r=.311, p < .001), but were not correlated with any of the other measures. Finally, length of involvement (duration) was only significantly correlated with scores on the Safety/Support subscale (r=.125, p < .05). In sum, our measure of participation exhibited significant correlations with measures of intensity, breadth and family connectedness, but was not as highly correlated with duration (Table 5).

Table 5.

Correlations between the TEPPS and Related Constructs

Intensity Breadth Duration Family Connectedness
Full Scale .181^ .169^ .026 .270*
Factor 1: PD .178^ .094 −.031 .266*
Factor 2: VI −.006 .311* .011 .083
Factor 3: SS .254* .049 .125+ .205^
Factor 4: CE .150+ .096 .024 .258*
* p < .001; ^ p < .01; + p < .05

Discussion

This paper reports the development of a new 20-item, plain-language, theoretically-grounded, and reliable scale that can facilitate future research on program participation, as well as provide a practical tool for programs seeking to assess, evaluate and improve practices related to youth engagement. The TEPPS assesses the quality of youths’ participation experiences as well as structural elements of the programs in which youth are engaged, rather than simply the amount of time youth spend in activities or the variety of those activities. We found support in the ecological-developmental literature for focusing on youths’ experiences within the significant micro-settings provided by after-school programs. Both the ecological-developmental literature and positive youth development practices provide support for the conceptually distinct subscales included in the TEPPS (Personal Development, Voice/Influence, Safety/Support, and Community Engagement).

The 20-item TEPPS was significantly correlated with several commonly used measures of program participation (intensity and breadth), suggesting that the new scale possesses convergent validity, while also assessing additional dimensions related to high quality program participation. The approach taken to measuring breadth and intensity of participation in the present study differs from the research reviewed by Bohnert and colleagues (2010) in the following ways: First, we measured breadth of participation by examining the number and characteristics of roles reported by youth, rather than by measuring participation in different types of activities; these roles were coded and analyzed in relation to the level of participation and opportunities for influence on decision-making that they offered youth. Second, we measured intensity of participation in overall programs rather than in specific activities; some of the programs included a variety of activities, and some provided career ladders for youth that included opportunities for engagement in more complex activities and increasing levels of decision-making over time.

Three of the four TEPPS subscales were significantly correlated with intensity (Personal Development, Safety/Support, Community Engagement). However, only one subscale (Safety/Support) was significantly correlated with duration. This may be because duration is not as reflective of quality participation as measures of intensity and breadth. In our interactions with youth in our subsequent qualitative studies, we noted that some participants who were involved in programs for longer periods of time used the program strictly as a safe space; though physically present, these participants were not fully engaged in program activities. This qualitative observation is consistent with the significant correlation between the Safety/Support subscale and our measure of duration.

Voice/Influence was the only subscale correlated with breadth of program involvement, a measure that captures both the number of roles fulfilled by youth and the level of engagement those roles require. The correlation of the overall TEPPS measure and the Voice/Influence subscale with our breadth measure warrants further study, as this set of measures may be particularly useful for assessments focused on the degree to which programs offer opportunities for increasing youths’ levels of decision-making and responsibility for determining program activities. These dimensions of youth programming are of interest to practitioners and scholars working within both positive youth development and empowerment/participatory action research paradigms.

The 20-item TEPPS measure and subscales assessing Personal Development, Safety/Support, and Community Engagement were all positively and significantly correlated with our measure of family connectedness, confirming our initial hypothesis that these elements of youth experience were interrelated. Family connectedness is known to be a factor that contributes to adolescent sexual health and risk reduction (Kirby & Miller, 2002), but the relationship between family connectedness and highly engaged program participation remains understudied. The TEPPS will contribute to a more nuanced understanding of this potentially complex relationship, as well as relationships between participation and adolescent sexual health, another understudied association.

Limitations and Conclusion

While the TEPPS is a promising measure of the quality of youth program participation, some limitations of this study should be noted. First, the non-random selection of youth into our study may introduce a conservative bias into our data, as enrolled youth may in some cases have been more highly engaged than other program participants. However, our sampling method was justified by our interest in examining whether participation measures would work effectively and generate useable findings in a variety of program settings. Second, because we have so far only used the TEPPS in our longitudinal study, we were unable to conduct CFA on an alternate sample. In order to clarify the optimal structure of this scale, we will perform additional psychometric testing in the future.

The TEPPS approach to measurement sees high quality participation as contingent upon structural elements within programs that enable youth to access supportive adults, learning opportunities, safety, and voice in program decision-making, as well as upon individual characteristics of the youth themselves. As such, we believe the use of the TEPPS will enable researchers and practitioners to better understand experiences of adolescent program participation, which will allow improved assessment of the relationship of participation with outcomes of key interest, including sexual health and access to health-related resources. Further, the use of the TEPPS will aid in quality assurance efforts by program providers, by allowing a more thorough assessment of the quality of participation as experienced by youth.

Supplementary Material

TEPPS Documentation

Acknowledgments

The project described in this report was supported in part by Award Number R21NR009764 from the National Institute of Nursing Research and by USDA grant number NYC-323442-0219950. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Nursing Research, the National Institutes of Health, or the USDA.

Footnotes

1. The full solution for the one-factor scale can be obtained from the authors.

The authors have no conflicts of interest to report.

References

  1. Bentler PM. Comparative fit indexes in structural models. Psychological Bulletin. 1990;107:238–246. doi: 10.1037/0033-2909.107.2.238. [DOI] [PubMed] [Google Scholar]
  2. Bohnert AM, Fredricks J, Randall E. Capturing unique dimensions of youth organized activity involvement: Theoretical and methodological considerations. Review of Educational Research. 2010;80:576–610. [Google Scholar]
  3. Bronfenbrenner U. Environments in developmental perspective: Theoretical and operational models. In: Friedman SL, Wachs TD, editors. Measuring environment across the life span. Washington, DC: American Psychological Association; 1999. pp. 3–28. [Google Scholar]
  4. Bronfenbrenner U, Morris PA. The ecology of developmental process. In: Damon W, Lerner RM, editors. Handbook of child psychology: Theoretical models of human development. 5. New York: John Wiley; 1998. pp. 993–1028. [Google Scholar]
  5. Camino L, Zeldin S, Mook C, O’Connor C. Youth and adult leaders for program excellence. Ithaca, NY: Cornell University, ACT for Youth Center of Excellence; 2004. [Google Scholar]
  6. Durlak JA, Mahoney JL, Bohnert AM, Parente ME. Developing and improving after-school programs to enhance youth’s personal growth and adjustment: A special issue of AJCP. American Journal of Community Psychology. 2010;45:285–293. doi: 10.1007/s10464-010-9298-9. [DOI] [PubMed] [Google Scholar]
  7. Galinsky AM, Sonenstein FL. The association between developmental assets and sexual enjoyment among emerging adults. Journal of Adolescent Health. 2011;48:610–615. doi: 10.1016/j.jadohealth.2010.09.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Gardner M, Roth J, Brooks-Gunn J. Adolescents’ participation in organized activities and developmental success 2 and 8 years after high school: Do sponsorship, duration, and intensity matter? Developmental Psychology. 2008;44:814–830. doi: 10.1037/0012-1649.44.3.814. [DOI] [PubMed] [Google Scholar]
  9. Hart RA. Children’s participation: The theory and practice of involving young citizens in community development and environmental care. New York: Earthscan; 1997. [Google Scholar]
  10. Hawkins JD, Catalano RF, Arthur MW. Promoting science-based prevention in communities. Addictive Behaviors. 2002;27:951–976. doi: 10.1016/s0306-4603(02)00298-8. [DOI] [PubMed] [Google Scholar]
  11. Hayton JC, Allen DG, Scarpello V. Factor retention decisions in exploratory factor analysis: A tutorial on parallel analysis. Organizational Research Methods. 2004;7:191–205. [Google Scholar]
  12. Hirsch BJ, Mekinda MA, Stawicki J. More than attendance: The importance of after-school program quality. American Journal of Community Psychology. 2010;45:447–452. doi: 10.1007/s10464-010-9310-4. [DOI] [PubMed] [Google Scholar]
  13. Hu L, Bentler PM. Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychological Methods. 1998;3:424–453. [Google Scholar]
  14. Jöreskog KG, Sörbom D. LISREL V: Analysis of linear structural relationships by the method of maximum likelihood. Chicago, IL: National Education Resources; 1981. [Google Scholar]
  15. Kirby D, Miller BC. Interventions designed to promote parent-teen communication about sexuality. New Directions in Child and Adolescent Development. 2002;97:93–110. doi: 10.1002/cd.52. [DOI] [PubMed] [Google Scholar]
  16. Klein JD, Sabaratnam P, Matos MA, Smith SM, Kodjo C, Lewis K, Dandino C. Development and factor structure of a brief instrument to assess the impact of community programs on positive youth development: The Rochester Evaluation of Asset Development for Youth (READY) tool. Journal of Adolescent Health. 2006;39:252–260. doi: 10.1016/j.jadohealth.2005.12.004. [DOI] [PubMed] [Google Scholar]
  17. Larson RW, Angus RM. Adolescents’ development of skills for agency in youth programs: Learning to think strategically. Child Development. 2011;82:277–294. doi: 10.1111/j.1467-8624.2010.01555.x. [DOI] [PubMed] [Google Scholar]
  18. Linver MR, Roth JL, Brooks-Gunn J. Patterns of adolescents’ participation in organized activities: Are sports best when combined with other activities? Developmental Psychology. 2009;45:354–367. doi: 10.1037/a0014133. [DOI] [PubMed] [Google Scholar]
  19. Mahoney JL, Lord H, Carryl E. An ecological analysis of after-school program participation in the development of academic performance and motivational attributes for disadvantaged children. Child Development. 2005;76:811–825. doi: 10.1111/j.1467-8624.2005.00879.x. [DOI] [PubMed] [Google Scholar]
  20. Mahoney JL, Parente ME, Lord H. After-school program engagement: Links to child competence and program quality and content. Elementary School Journal. 2007;107:385–404. [Google Scholar]
  21. Marsh HW, Balla JR, McDonald RP. Goodness-of-fit indexes in confirmatory factor analysis: The effect of sample size. Psychological Bulletin. 1988;103:391–410. [Google Scholar]
  22. McGuire KJ, Gamble WC. Community service for youth: The value of psychological engagement over number of hours spent. Journal of Adolescence. 2006;29:289–298. doi: 10.1016/j.adolescence.2005.07.006. [DOI] [PubMed] [Google Scholar]
  23. Minkler M, Wallerstein N. Community based participatory research for health. San Francisco, CA: Jossey Bass; 2003. [Google Scholar]
  24. Nelson LR. Some observations on the scree test, and on coefficient alpha. Journal of Educational Research and Measurement. 2005;3:1–17. [Google Scholar]
  25. O’Connor BP. SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behavior Research Methods, Instruments, & Computers. 2000;32:396–402. doi: 10.3758/bf03200807. [DOI] [PubMed] [Google Scholar]
  26. O’Donoghue JL, Kirshner B, McLaughlin M. Youth participation: From myths to effective practice. The Prevention Researcher. 2006;13:3–6. [Google Scholar]
  27. Pedhazur EJ, Schmelkin LP. Measurement, design, and analysis: An integrated approach. Hillsdale, NJ: Erlbaum; 1991. [Google Scholar]
  28. Peterson NA, Peterson CH, Agre L, Christens BD, Morton CM. Measuring youth empowerment: Validation of a sociopolitical control scale for youth in an urban community context. Journal of Community Psychology. 2011;39:592–605. [Google Scholar]
  29. Pittman K, Irby M, Tolman J, Yohalem N, Ferber T. Preventing problems, promoting development, encouraging engagement: Competing priorities or inseparable goals? Washington, DC: The Forum for Youth Investment, Impact Strategies, Inc; 2003. [Google Scholar]
  30. Prilleltensky I, Nelson G, Peirson L. The role of power and control in children’s lives: An ecological analysis of pathways towards wellness, resilience, and problems. Journal of Community and Applied Social Psychology. 2001;11:143–158. [Google Scholar]
  31. Randall ET, Bohnert AM. Organized activity involvement, depressive symptoms, and social adjustment in adolescents: Ethnicity and socioeconomic status as moderators. Journal of Youth and Adolescence. 2009;38:1187–1198. doi: 10.1007/s10964-009-9417-9. [DOI] [PubMed] [Google Scholar]
  32. Riggs NR, Greenberg MT. After-school youth development programs: A developmental-ecological model of current research. Clinical Child and Family Psychology Review. 2004;7:177–190. doi: 10.1023/b:ccfp.0000045126.83678.75. [DOI] [PubMed] [Google Scholar]
  33. Satorra A, Bentler PM. A scaled difference chi-square test statistic for moment structure analysis. Psychometrika. 2001;66:507–514. doi: 10.1007/s11336-009-9135-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Schalet AT. Subjectivity, intimacy, and the empowerment paradigm of adolescent sexuality: The unexplored room. Feminist Studies. 2009;35:133–159. [Google Scholar]
  35. Shernoff DJ. Engagement in after-school programs as a predictor of social competence and academic performance. American Journal of Community Psychology. 2010;45:325–337. doi: 10.1007/s10464-010-9314-0. [DOI] [PubMed] [Google Scholar]
  36. Shinn M, Yoshikawa H. Toward positive youth development: Transforming schools and community programs. New York: Oxford University Press; 2008. [Google Scholar]
  37. Smith C, Peck SC, Denault A, Blazevski J, Akiva T. Quality at the point of service: Profiles of practice in after-school settings. American Journal of Community Psychology. 2010;45:358–369. doi: 10.1007/s10464-010-9315-z. [DOI] [PubMed] [Google Scholar]
  38. Steiger JH. EzPATH: A supplementary module for SYSTAT and SYGRAPH. Evanston, IL: SYSTAT; 1989. [Google Scholar]
  39. Stevens JP. Applied multivariate statistics for the social sciences. 4. Hillsdale, NJ: Erlbaum; 2002. [Google Scholar]
  40. Tiffany JS, Brea M, Eckenrode JJ, Friedrichs EK, Peters RM, Richards-Clarke C, Stewart D, Tallon T. Exploring participatory HIV prevention strategies for youth through a community-based participatory research partnership. Paper presented at the 135th American Public Health Association (APHA) Annual Meeting; November 3–7, 2007; Washington, DC. 2007a. Abstract #154104. [Google Scholar]
  41. Tiffany JS, Prado G, Eckenrode J, Birnel SV, Bat-Chava Y. Building an ecological-developmental model for adolescent HIV prevention: Program participation and family connectedness. Poster presentation during the 137th APHA Annual Meeting; November 7–11, 2009; Philadelphia, PA. 2009. Abstract #208762. [Google Scholar]
  42. Tiffany JS, Tallon T, Brea M, Eckenrode JJ, Friedrichs EK, Richards-Clarke C, Stewart D, Peters RM. HIV prevention and participatory program strategies for youth: Are voice, choice, decision-making, and opportunities for personal development linked to risk reduction practices?. Paper presented at the 135th APHA Annual Meeting; November 3–7, 2007; Washington, DC. 2007b. Abstract #155680. [Google Scholar]
  43. Tucker LR, Lewis C. A reliability coefficient for maximum likelihood factor analysis. Psychometrika. 1973;38:1–10. [Google Scholar]
  44. Wallerstein N. What is the evidence on effectiveness of empowerment to improve health? 2006. Retrieved from http://www.euro.who.int/__data/assets/pdf_file/0010/74656/E88086.pdf.
  45. Weiss HB, Little PMD, Bouffard SM. More than just being there: Balancing the participation equation. New Directions for Youth Development. 2005;105:15–31. doi: 10.1002/yd.105. [DOI] [PubMed] [Google Scholar]
  46. Werner E, Smith R. Overcoming the odds: High risk children from birth to adulthood. New York: Cornell University Press; 1992. [Google Scholar]
  47. Whitlock JL. Places to be and places to belong. Ithaca, NY: ACT for Youth Center of Excellence, Cornell University, Family Life Development Center; 2004. [Google Scholar]
  48. Yohalem N, Wilson-Ahlstrom A. Inside the black-box: Assessing and improving quality in youth programs. American Journal of Community Psychology. 2010;45:350–357. doi: 10.1007/s10464-010-9311-3. [DOI] [PubMed] [Google Scholar]
  49. Yuan K, Bentler PM. Three likelihood-based methods for mean and covariance structure analysis with nonnormal missing data. Sociological Methodology. 2000;30:165–200. [Google Scholar]
  50. Zimmerman MA. Psychological empowerment: Issues and illustrations. American Journal of Community Psychology. 1995;23:581–600. doi: 10.1007/BF02506983. [DOI] [PubMed] [Google Scholar]
