Abstract
In this manuscript, I present an observer-rated measure of child self-regulation, the Response to Challenge Scale (RCS). The RCS was designed to measure children’s cognitive, affective, and motor regulation in response to a physical challenge course. A total of 198 children (Kindergarten through fifth grade) were evaluated using the RCS. All children individually completed a challenge course on two separate occasions four months apart. As children completed the tasks, research-trained observers rated the degree to which they exhibited cognitive, affective, and motor regulation. In a fully-crossed research design, five raters on Occasion 1 and six raters on Occasion 2 rated all children. I examined the RCS within the Generalizability Theory (GT) framework, conducting two-facet person × rater × subscale (PRS) analyses to assess construct validity. Results demonstrated that raters are able to distinguish between children’s self-regulation in various domains, providing some validity evidence for the RCS and supporting the theory that self-regulation is a single construct that is evidenced in different domains (Baumeister, 1997).
Keywords: Self-regulation, executive function, observer ratings, Generalizability Theory
Introduction
Self-regulation may be one of the major human capacities that contribute to positive outcomes (Baumeister, 1997). Recently, Moffitt et al. (2011) reported results of a longitudinal study that documented the importance of self-control, which they defined as “an umbrella construct that bridges concepts and measurements from different disciplines (e.g., impulsivity, conscientiousness, self-regulation, delay of gratification, inattention-hyperactivity, executive function, willpower, intertemporal choice)” (p. 2693). In an observational study of 1,000 children recruited at birth and followed for 32 years, Moffitt et al. found that measures of self-control in early childhood predicted physical health, substance dependence, personal wealth, and criminal outcomes in adulthood. They concluded that success depends largely on self-control, making it an important target for measurement and intervention. Therefore, the development and study of interventions to improve children’s self-control, or self-regulation, and of appropriate instruments to assess these constructs are important priorities for research (e.g., Blair & Diamond, 2008; Moffitt et al., 2011).
Blair and Diamond (2008) described self-regulation as a core executive function that encompasses and extends beyond self-control: “Self-regulation refers to the primarily volitional cognitive and behavioral processes through which an individual maintains levels of emotional, motivational, and cognitive arousal that are conducive to positive adjustment and adaptation, as reflected in positive social relationships, productivity, achievement, and a positive sense of self” (p. 900). Baumeister (1997) defined self-regulation as referring to “the processes by which the self alters its own responses, including thoughts, emotions, and behaviors” (p. 146); and Kopp (1991) described self-regulation as a construct that includes “compliance, delay of gratification, control of impulses and affect, modulation of motor and verbal activities, and the ability to act in accordance with social norms in the absence of external monitors” (p. 38). Most research definitions of self-control and self-regulation refer to self-control or regulation in several domains: cognition or attention, behavior, emotion or affect, and motor or physical activity. In this study, I present a child assessment instrument designed to measure self-regulation, which I defined as the extent to which children exhibit cognitive control (including focus and attention), behavioral control, emotional or affective control, and motor control.
Measurement of Self-Regulatory Abilities
If self-regulation is to increasingly become a target for intervention as well as an important construct in the study of child development and outcomes, psychometrically robust measures of self-regulation are needed. In prior studies, researchers have used a range of informants and methods to measure individual differences in self-regulation. Self-regulation has been measured using self-report questionnaires (e.g., Pintrich & DeGroot, 1990; Magno, 2010; Martinez-Pons, 2000; Frlec & Vidmar, 2001; Kojima & Ikeda, 2000; Ito, Maruyama, & Yamazaki, 1999; Nakata & Shiomi, 1998) that ask participants to rate themselves on items addressing persistence, regulation of moods, goal-setting, self-evaluation, motivation, self-assertiveness, and self-inhibition. Parent and teacher reports also have been used to measure children’s self-regulatory abilities (e.g., Self-Control Rating Scale: Kendall & Wilcox, 1979), and other parent or teacher measures have been used to infer self-regulation based on subscales selected from broader assessment scales. For example, Garcia-Barrera, Kamphaus, and Bandalos (2011) studied executive function using the Problem Solving, Attentional Control, Behavioral Control, and Emotional Control scales from the Behavior Assessment Scale for Children (BASC). [As Blair and Diamond (2008) note, there is some overlap between self-regulation and executive function, but self-regulation differs in that it “addresses both suppressing disruptive emotions and encouraging the flourishing of positive emotions” (p. 901).]
Researchers also have measured individual differences in self-regulation by evaluating performance on tasks designed to measure specific constructs included under the umbrella of self-regulation. These tasks have included delay/response inhibition tasks (e.g., Silverman & Ragusa, 1990), delay of gratification tasks (e.g., Wulfert et al., 2002; Silverman & Gaines, 1996), attention tasks (e.g., Power, Chapieski, & McGrath, 1985), persistence tasks (e.g., Power et al., 1985), and vigilance tasks (e.g., Silverman & Gaines, 1996). These studies have demonstrated the value of measuring self-regulation in a situational context in which researchers present an individual with a challenge for which a successful response requires self-regulation in different domains. Observations of individuals responding to these challenging tasks have produced important data on differential responding that is attributed to individual differences in self-regulation.
Development of the Response to Challenge Scale (RCS)
In a study designed to measure the outcomes of a Tae Kwon Do intervention, Lakes and Hoyt (2004) developed the Response to Challenge Scale to measure self-regulation in children. They hypothesized that an enhanced capacity for self-regulation would be one of the outcomes of the intervention and, therefore, included it as a dependent variable of interest in the program evaluation. The RCS is similar to tools used in previous research in its presentation of a challenging task and assessment of task performance. RCS items assess constructs identified as important components of self-regulation in previous research, such as persistence, attention, emotional control, and motor control. The RCS was designed to be used by raters unfamiliar with the children they are rating, both to reduce rater bias and to avoid the inherent limitation of having raters nested within participants (such as when parents provide ratings of their children). Because this is a more complex rating task than simply recording response latencies, Lakes and Hoyt used multiple raters to improve the generalizability of the ratings. Most measures use one or two raters (e.g., teachers, parents, or experimenters), and the contribution of rater variance to overall score variance in many of those studies is unknown. By increasing the number of raters and using generalizability analyses to estimate rater bias, the RCS presents a more stringent form of measurement.
Rationale for the RCS
The context for the assessment of self-regulation using the RCS is based on the premise that an individual’s ability to respond to challenges can be measured in a real-life or experimental situation and that this response may provide an indication of future reactions to challenging situations. One factor that influences an individual’s ability to address a challenge successfully is self-regulation. Because the measure was designed to evaluate an intervention that used an exercise program (Tae Kwon Do training) to teach self-regulation, the RCS was implemented in an exercise context (i.e., performance on an obstacle course). Children were asked to complete a number of tasks ranging from relatively simple (jogging one lap around the room) to more difficult (e.g., crawling through a long, tight tunnel; jumping over a high target). The obstacle course included tasks of varying difficulty to ensure that all children would find some portions easy and some difficult. It contained a sequence of ten different tasks, some with multiple parts (e.g., a somersault repeated four times). Performance on the obstacle course required self-regulatory capacities in three dimensions: motor or physical control (e.g., coordination required to complete tasks), affective or emotional control (e.g., persistence in spite of the increasing difficulty or seeming impossibility of certain tasks), and cognitive control (e.g., attention and concentration required to observe and repeat all required segments of the obstacle course). In recognition that the latter could be confounded with short-term memory abilities, children were given prompts when needed to complete the obstacle course.
Components of Self-Regulation Measured
As the RCS was designed to measure responses to a particular task, the components selected for measurement were ones that the author theorized would be measurable in that context. These included cognitive control, emotional or affective control, and motor or physical control. Behavioral control, another important domain of self-regulation, was excluded because it was not expected to be easily measured in the given context for this private school sample, in which the vast majority of children were very cooperative; instead, Lakes and Hoyt (2004) included parent and teacher reports of behavioral control in their study. Thus, the RCS in its original form contained three subscales: cognitive self-regulation, affective self-regulation, and physical/motor self-regulation (see Appendix).
Cognitive Regulation
The cognitive subscale was designed to answer the following question: Does this child exhibit control over mental processes, including attention and concentration? Baumeister et al. (1998) stated, “Self-regulation… includes control over one’s mental processes, such as the ability to concentrate and to persist on tasks” (p. 118). The items chosen for the cognitive subscale reflect cognitive factors that are manifestations of self-regulation, including attention, concentration, and focus.
Affective Regulation
Baumeister et al. (1998) stated that self-regulation encompasses affect regulation, which they defined as “control over one’s emotional states and moods” (p. 118). The items selected for the affective subscale include persistence, emotional control, confidence, and motivation. In prior research, scientists have identified persistence (e.g., Pintrich & De Groot, 1990), control over emotions (e.g., Baumeister et al., 1998), and motivation (e.g., Martinez-Pons, 2000) as self-regulatory abilities. This subscale was constructed to answer the question: Does this child exhibit control over his or her affective states and moods?
Physical Regulation
Kopp’s (1991) review of the various manifestations of self-regulation included control over motor or physical activities. The physical subscale of the RCS consists of three items hypothesized to reflect individual ability to regulate motor movements. Items included skillfulness in completing physical tasks, athleticism, and coordination; the intent of this subscale was to answer the question: Does this child exhibit motor control?
Prior Analyses of the RCS
The focus of this manuscript is to describe the development of the RCS and report results from construct validity analyses. The RCS has been included in two prior published studies: an intervention evaluation study in which it proved to be a useful tool for measuring outcomes (Lakes & Hoyt, 2004) and a methodological primer on generalizability theory (GT), in which the authors presented examples from RCS data for illustrative purposes (Lakes & Hoyt, 2009). Lakes and Hoyt (2009) reported results from a two-facet generalizability analysis in which participants (persons) were crossed with raters and items (PRI), treating raters and items as the primary sources of error. With a fully-crossed research design and five raters, expected (PRI) generalizability coefficients (a g coefficient is interpreted like a reliability coefficient but takes multiple sources of error into account; Lakes & Hoyt, 2009) were g = .89 (Cognitive), g = .91 (Affective), and g = .89 (Physical/Motor). These results provided important information regarding various sources of error and provided estimates of increases and decreases in the dependability of the RCS given differing numbers of items and raters. Lakes and Hoyt (2009) also examined generalizability over occasions (i.e., test-retest reliability over four months) as well as items and raters in a three-facet (PRIO) analysis using control group participants only. Test-retest reliabilities using conventional reliability measurements were .64, .84, and .80 for the Cognitive, Affective, and Physical scales, respectively; the test-retest reliability g coefficients (which consider multiple sources of error simultaneously and are always lower than measures based on a single source of error) were .37, .71, and .58 for the Cognitive, Affective, and Physical subscales, respectively.
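The g coefficients above combine the person variance with rater- and item-related error terms. As an illustration of how such a coefficient behaves, the following sketch computes a relative g coefficient for a fully crossed person × rater × item (PRI) design; the variance components used in the example are hypothetical values chosen for illustration, not the RCS estimates:

```python
def g_coefficient(var_p, var_pr, var_pi, var_res, n_r, n_i):
    """Relative generalizability coefficient for a fully crossed
    person x rater x item (PRI) design, treating raters and items
    as random facets of error.

    var_p   -- person (universe score) variance
    var_pr  -- person x rater interaction variance
    var_pi  -- person x item interaction variance
    var_res -- residual (person x rater x item, error) variance
    n_r     -- number of raters in the D study
    n_i     -- number of items in the D study
    """
    # Person-by-facet error shrinks as more conditions of each
    # facet are sampled and averaged over.
    rel_error = var_pr / n_r + var_pi / n_i + var_res / (n_r * n_i)
    return var_p / (var_p + rel_error)

# Hypothetical variance components (NOT the RCS estimates):
g5 = g_coefficient(var_p=0.50, var_pr=0.10, var_pi=0.05,
                   var_res=0.20, n_r=5, n_i=6)
# Adding raters in a D study reduces rater-related error and raises g:
g8 = g_coefficient(var_p=0.50, var_pr=0.10, var_pi=0.05,
                   var_res=0.20, n_r=8, n_i=6)
```

This is the mechanism behind the D-study projections mentioned above: the same variance-component estimates are re-weighted under different hypothetical numbers of raters and items.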
Although the test-retest reliability coefficients for the Affective and Physical scales (.84 and .80, measured using standard methods) met general standards for adequate test-retest reliability, the coefficient for the Cognitive scale (.64) was lower. This could suggest that the cognitive domain is more difficult to rate in the context used, or it could simply reflect maturation on this factor over the course of the school year (pre-testing was conducted the first week of the academic year, and post-testing was conducted about four months later). These results were reported in the context of a primer on GT and its application to child psychology research and provided strong evidence of scale generalizability.
Moreover, a recent RCS study (Lakes, 2011) addressed convergent and discriminant validity. RCS measures of cognitive and affective self-regulation were positively and significantly correlated with an established test of executive function. The RCS Cognitive subscale was significantly and negatively correlated with teacher-rated inattention and hyperactivity, and the RCS Affective subscale was significantly and negatively correlated with parent ratings of conduct problems. Both the RCS Affective and Cognitive subscales were significantly and positively correlated with parent ratings of self-regulation. Together, these results provided strong evidence for the convergent and discriminant validity of the RCS.
Although the generalizability and the convergent and discriminant validity of the RCS have been reported previously, these manuscripts did not describe the development and content of the RCS. Thus, one goal of the present manuscript was to describe the development and content of the RCS. In addition, Lakes & Hoyt (2004) reported that a factor analysis of the RCS indicated that the cognitive and affective subscales of the RCS were strongly related; however, this factor analysis could not demonstrate whether or not raters could distinguish between children’s self-regulation in these domains. To answer this question, a generalizability analysis was necessary. Therefore, the second goal of this study was to analyze construct validity within the Generalizability Theory framework to determine whether or not raters were able to distinguish between self-regulation in different domains as measured by the RCS.
Hypothesis for the Present Study: Generalizability Analyses of Construct Validity
I conducted two separate two-facet generalizability analyses (PRS: Person, Rater, Subscale) to examine construct validity by assessing redundancy among the RCS subscales within a GT framework. The RCS was designed to measure self-regulation in three domains, but it may be difficult for raters to distinguish among cognitive, affective, and motor control when observing performance on a physically challenging task. The present study examined this issue by conducting two-facet (PRS) G analyses, treating subscales and raters as sources of error. The PS variance component in these analyses assesses the discriminant validity of the subscales as a proportion of the total variance. I hypothesized that raters would be able to distinguish between children’s self-regulation in different domains.
Method
Participants
I examined data from the original intervention study for which the RCS was developed (Lakes & Hoyt, 2004). The participants in this study were 198 students (grades K through 5) at a private elementary school in the Midwestern United States, of whom 83% were Caucasian, 8% were Asian-American, 2% were African-American, less than 1% were Native-American, and 2% were identified as having other racial/ethnic backgrounds (4% did not respond to this question). Approximately 73% were from families with incomes of more than $100,000 per year, and 51.5% (N = 102) were female. The breakdown by grade level was: Kindergarten (N = 32), 1st grade (N = 34), 2nd grade (N = 33), 3rd grade (N = 34), 4th grade (N = 43), and 5th grade (N = 27).
Procedures
Parents or guardians of all the students in the school (N = 208) received a mailing that included information about the research intervention and a consent form. One parent withheld consent, and three others gave restricted consent (omitting one or two of the measures). In all, 207 students began the study; two left due to relocation and subsequent transfer of schools, and seven were absent during either the pre- or post-test period and were therefore excluded, yielding a final N of 198 (4% attrition). Students were evaluated at two time points four months apart.
Instrument
As described above, the RCS is an observer-rated measure of children’s responses to a physical challenge and includes 16 bipolar adjectives (e.g., Vulnerable—Invincible) rated on 7-point scales. Students completed a challenging obstacle course and were rated by seven independent raters. Negatively worded items were reversed prior to aggregation, so that possible scores on all subscales ranged from 1 to 7, with higher scores indicating greater self-regulation.
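The scoring rule described here (reflect negatively worded items, then aggregate) can be sketched as follows. The item labels come from the scale in Appendix A, but the specific ratings in the example are hypothetical:

```python
# Items are rated on a 1-7 bipolar scale. Items whose left-hand anchor
# is the regulated pole (e.g., Attentive--Inattentive) are reflected so
# that 7 always indicates greater self-regulation.
def reverse(score, scale_min=1, scale_max=7):
    """Reflect a rating on a bounded bipolar scale."""
    return scale_max + scale_min - score

def subscale_score(ratings, reversed_items):
    """Mean of item ratings after reflecting reverse-keyed items.

    ratings        -- dict mapping item name to raw 1-7 rating
    reversed_items -- set of item names keyed in the negative direction
    """
    adjusted = [reverse(v) if k in reversed_items else v
                for k, v in ratings.items()]
    return sum(adjusted) / len(adjusted)

# Illustrative example with hypothetical ratings: "Attentive" is keyed
# 1 = attentive on the form, so a raw rating of 2 reflects to 6.
ratings = {"Attentive": 2, "Focused": 6, "Engaged": 5}
score = subscale_score(ratings, reversed_items={"Attentive"})
```

After this reflection step, subscale means necessarily fall in the 1–7 range, with higher scores indicating greater self-regulation, as described above.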
Rater characteristics and training
Raters were advanced undergraduate and early graduate students in psychology who were hired for the sole purpose of serving as raters during the pre- and post-test evaluations. Raters had no prior knowledge of or experience with the children and were blind to the experimental conditions of the study. Raters participated in 30 minutes of rater training, during which they received a guide defining terms used on the RCS and participated in a group discussion regarding the definitions of terms and how to rate each one. The training included a brief presentation from the principal investigator (Lakes), who emphasized the importance of consistency in ratings (i.e., if a rater tended to be stringent with one student, he or she was to retain the same level of stringency across all students). The investigator provided examples of strong and weak performances and explained how to rate those performances. After the 30-minute training, raters spent additional time reviewing the written guidelines.
Procedure
During each testing period, one homeroom class (on average there were two homeroom classes per grade level) reported to the school gymnasium for 50 minutes of evaluation using the RCS. Children completed the obstacle course individually and were evaluated one at a time. Raters were instructed to anchor their ratings on the first child of a given grade level so that they would rate all children of that grade category in comparison to age-level peers. To increase the level of difficulty for older children, the obstacle course was adapted for each grade level. Adaptations included increasing the number of tasks and other modifications such as raising targets to make them more difficult to reach.
Analyses
Analyses were conducted on fully-crossed subsamples. (I excluded raters who did not rate all participants and participants who were not rated by all raters.) In order to obtain fully-crossed subsamples, I tried to maximize nr (the number of raters in a G study) even when it meant some reduction in the number of participants (np). The fully-crossed subsample included 181 participants with five raters at Occasion 1 and 189 participants with six raters at Occasion 2. The number of raters included in the analysis is an important consideration when interpreting the R (rater) variance in the analyses. Main effects in these analyses are less reliably estimated (i.e., have larger standard errors) than interactions, and Shavelson & Webb (1981) recommended nr = 8 for stable estimates of R variance. Thus, it was important to maximize nr for each analysis, even when it meant omitting participants. The analyses in this study are dependability or generalizability analyses (see Brennan, 2001). These analyses examine sources of error in RCS ratings and provide estimates of the generalizability of scores for future studies (termed decision or D studies) using differing numbers of raters and test items. All analyses were conducted using the program GENOVA (Crick & Brennan, 1982).
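For readers unfamiliar with variance-component estimation, the core computation for a fully crossed two-facet design can be sketched as follows. This is a minimal expected-mean-squares estimator for a person × rater × subscale (PRS) data array, written for illustration only; it is not GENOVA’s implementation, which additionally provides standard errors and D-study projections:

```python
import numpy as np

def prs_variance_components(X):
    """Random-effects variance component estimates for a fully crossed
    person x rater x subscale (p x r x s) design, obtained by solving
    the expected-mean-squares equations of the three-way ANOVA.

    X -- array of shape (n_p, n_r, n_s) of ratings
    Returns a dict of variance component estimates.
    """
    n_p, n_r, n_s = X.shape
    grand = X.mean()
    # Marginal means for main effects and two-way combinations.
    p = X.mean(axis=(1, 2)); r = X.mean(axis=(0, 2)); s = X.mean(axis=(0, 1))
    pr = X.mean(axis=2); ps = X.mean(axis=1); rs = X.mean(axis=0)

    # Mean squares from the standard three-way ANOVA decomposition.
    ms = {}
    ms["p"] = n_r * n_s * np.sum((p - grand) ** 2) / (n_p - 1)
    ms["r"] = n_p * n_s * np.sum((r - grand) ** 2) / (n_r - 1)
    ms["s"] = n_p * n_r * np.sum((s - grand) ** 2) / (n_s - 1)
    ms["pr"] = n_s * np.sum((pr - p[:, None] - r[None, :] + grand) ** 2) \
        / ((n_p - 1) * (n_r - 1))
    ms["ps"] = n_r * np.sum((ps - p[:, None] - s[None, :] + grand) ** 2) \
        / ((n_p - 1) * (n_s - 1))
    ms["rs"] = n_p * np.sum((rs - r[:, None] - s[None, :] + grand) ** 2) \
        / ((n_r - 1) * (n_s - 1))
    resid = (X - pr[:, :, None] - ps[:, None, :] - rs[None, :, :]
             + p[:, None, None] + r[None, :, None] + s[None, None, :] - grand)
    ms["prs,e"] = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1) * (n_s - 1))

    # Solve the expected-mean-square equations for the components.
    v = {"prs,e": ms["prs,e"]}
    v["pr"] = (ms["pr"] - ms["prs,e"]) / n_s
    v["ps"] = (ms["ps"] - ms["prs,e"]) / n_r
    v["rs"] = (ms["rs"] - ms["prs,e"]) / n_p
    v["p"] = (ms["p"] - ms["pr"] - ms["ps"] + ms["prs,e"]) / (n_r * n_s)
    v["r"] = (ms["r"] - ms["pr"] - ms["rs"] + ms["prs,e"]) / (n_p * n_s)
    v["s"] = (ms["s"] - ms["ps"] - ms["rs"] + ms["prs,e"]) / (n_p * n_r)
    return v
```

Dividing each component by the sum of all seven yields the percentage-of-variance columns of the kind reported in Table 1.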
Results
I examined the construct validity of the RCS by conducting generalizability analyses of scores over raters and subscales, using PRS analyses. Table 1 reports results of the two-facet PRS analyses, one analysis for each testing occasion, where P refers to persons, R refers to raters, and S refers to subscales. As described by Lakes & Hoyt (2009), generalizability analyses allow researchers to study multiple sources of variance in ratings and the interactions between those sources. For a detailed description of how to interpret the variance components and their interactions, I refer readers to the Lakes & Hoyt (2009) primer on Generalizability Theory. To address the research question for this study, I focus on PS.
Table 1.
PRS Analyses

| Var Src | Occ 1 DF | Var Est | SE | % Var | p | Occ 2 DF | Var Est | SE | % Var | p | Mean Var Est | % Mean Var |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| P | 180 | 0.449 | 0.055 | 49% | .00 | 187 | 0.479 | 0.057 | 43% | .00 | 0.464 | 46% |
| R | 4 | 0.054 | 0.034 | 6% | .08 | 5 | 0.181 | 0.099 | 16% | .08 | 0.118 | 12% |
| S | 2 | 0.023 | 0.018 | 2% | .15 | 2 | 0.029 | 0.022 | 3% | .18 | 0.026 | 3% |
| PR | 720 | 0.146 | 0.010 | 16% | .00 | 935 | 0.146 | 0.009 | 13% | .00 | 0.146 | 14% |
| PS | 360 | 0.104 | 0.010 | 11% | .00 | 374 | 0.128 | 0.011 | 12% | .00 | 0.116 | 11% |
| RS | 8 | 0.009 | 0.004 | 1% | .03 | 10 | 0.008 | 0.003 | 1% | .04 | 0.008 | 1% |
| PRS,e | 1440 | 0.132 | 0.005 | 14% | .00 | 1870 | 0.139 | 0.005 | 13% | .00 | 0.136 | 13% |

Note. Var Src = source of variance. Occ 1 DF and Occ 2 DF = degrees of freedom for the Occasion 1 and Occasion 2 analyses; the five columns following each report that occasion's results. Var Est = variance estimate. SE = standard error. % Var = percentage of total variance. Mean Var Est = variance estimate averaged across the two occasions. % Mean Var = mean variance estimate / mean total variance. P = Person, R = Rater, S = Subscale.
If the RCS subscales do measure the manifestation of self-regulation in different domains as hypothesized, then I would expect to find variance in PS (indicating that persons scoring high on one subscale do not necessarily score high on another). Variance in PS was moderate (11% and 12%) and significant (p < .01) on both occasions. These results provide evidence of construct validity, as they indicate that raters are able to differentiate and rate the manifestation of self-regulation in different domains.
Discussion
The RCS is a theoretically derived assessment tool that measures cognitive, affective, and physical/motor regulation as a child engages in a series of challenging tasks. Baumeister’s (1997) theory described self-regulation as a single construct that is manifested in various domains. If the theory holds true that cognitive, affective, and physical self-regulation are all manifestations of a central underlying capacity for self-regulation, then it stands to reason that the subscales of the RCS would be highly correlated. Therefore, the objective for a robust assessment of self-regulation is not to obtain discriminant validity evidence between these domains (i.e., the RCS subscales), but to demonstrate that there is some distinction between these highly related domains. The results of this study accomplished this objective. Results of the generalizability analyses indicated that raters are able to distinguish between self-regulation in different domains (i.e., cognitive, affective, and motor); in other words, children with high scores on one subscale (e.g., affective regulation) do not necessarily obtain high scores on the other subscales (e.g., cognitive regulation).
As noted previously, this type of analysis does not provide direct evidence of discriminant validity between any two subscales (e.g., cognitive and affective). A previous factor analysis of the RCS items (Lakes & Hoyt, 2004) showed weak discriminant validity between the Cognitive and Affective subscales: items on the subscales appeared to load on the same factor, and the subscales had a high correlation (r = .90). Items on the Physical/Motor subscale formed a more distinct factor (r’s = .75 and .78 with the Cognitive and Affective subscales, respectively). Because the PRS analyses do not compare specific pairs of subscales, it is difficult to determine whether they provide additional evidence of separate factors for the Cognitive and Affective subscales. However, the PRS results do indicate that raters can measure self-regulation in different domains distinctly. Moreover, differences in the generalizability of Cognitive and Affective subscale scores have been documented in prior generalizability analyses (Lakes & Hoyt, 2009). The variance partitioning noted in those analyses also suggests that raters differentiated among the cognitive, affective, and motor domains of self-regulation. Considered as a whole, the current body of evidence for the RCS suggests that cognitive, affective, and motor regulation are distinguishable, but highly related, which is consistent with self-regulation theory.
Conclusion
As awareness of the importance of self-regulation in children grows (e.g., Moffitt et al., 2011; Blair & Diamond, 2008), it is imperative that rigorous measurement approaches are developed to further research in this area. The scientific study of interventions to promote self-regulation in children will require robust instruments that measure child self-regulation in different contexts and that gather information from different sources. The evidence indicates that the RCS is a psychometrically sound measure for child self-regulation research. As an observer-rated measure, the RCS minimizes the bias inherent in teacher and parent outcome measures (i.e., because different parents and teachers rate different children, characteristics of the raters, such as rating severity or leniency, are difficult to measure or control). To address this problem, the RCS can be used in a fully-crossed research design in which multiple raters rate every child and scores are aggregated across raters; this approach reduces rater bias, which is often problematic in child assessment. In summary, generalizability analyses of the RCS conducted in this research and in prior research (Lakes & Hoyt, 2009) indicate that observers are able to evaluate children’s self-regulation in different domains as they respond to a challenging task, providing a robust form of assessment that can be used in combination with other measures (e.g., parent and teacher rating scales and child performance on tasks) as part of a multi-method, multiple-informant approach to measuring child self-regulation.
Acknowledgments
This manuscript is based on Dr. Lakes’ doctoral dissertation and the guidance of her advisor, Dr. William Hoyt, is gratefully acknowledged.
Biography
Kimberley D. Lakes, PhD, is an assistant professor in the Department of Pediatrics at the University of California, Irvine and is the Co-Director for Community Engagement of the Institute for Clinical and Translational Science in the UC Irvine School of Medicine. Dr. Lakes’ research interests include child assessment, intervention to promote self-regulation, and research methods.
Appendix A
RESPONSE TO CHALLENGE SCALE
Kimberley Lakes
Rater ID: _____Child’s ID or Name:__________________________Date:____________
After the child’s performance on the task has concluded, please rate the child on each of the following characteristics:
| Vulnerable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Invincible |
| Attentive | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Inattentive |
| Assertive | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Timid |
| Self-disciplined | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Unrestrained |
| Unmotivated | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Motivated |
| Persevering | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Quitting |
| Unfit | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Athletic |
| Confident | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Insecure |
| Coordinated | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Clumsy |
| Resistant | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Involved in task |
| Distractible | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Focused |
| Control over Emotions | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Uncontrolled Emotions |
| Strong-willed | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Weak-willed |
| Awkward | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Skillful |
| Disengaged | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Engaged |
| Fearful | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Fearless |
Appendix B
RCS Items by Subscale (Based on Factor Analysis; Lakes & Hoyt, 2004)
Cognitive Subscale
Attentive/Inattentive
Self-disciplined/Unrestrained
Involved in task/Resistant
Focused/Distractible
Strong-willed/Weak-willed
Engaged/Disengaged
Affective Subscale
Invincible/Vulnerable
Assertive/Timid
Persevering/Quitting
Motivated/Unmotivated
Confident/Insecure
Control over emotions/Uncontrolled emotions
Fearless/Fearful
Physical/Motor Subscale
Athletic/Unfit
Coordinated/Clumsy
Skillful/Awkward
References
- Baumeister RF. Esteem threat, self-regulatory breakdown, and emotional distress as factors in self-defeating behavior. Review of General Psychology. 1997;1:145–174. [Google Scholar]
- Blair C, Diamond A. Biological processes in prevention and intervention: The promotion of self-regulation as a means of preventing school failure. Development and Psychopathology. 2008;20:899–911. doi: 10.1017/S0954579408000436. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brennan RL. Generalizability Theory. New York: Springer; 2001. [Google Scholar]
- Crick JE, Brennan RL. GENOVA: A generalized analysis of variance system (FORTRAN IV computer program and manual) Boston: Dorchester: Computer Facilities, University of Massachusetts; 1982. [Google Scholar]
- Cronbach LJ, Gleser GC, Nanda AN, Rajaratnam N. The dependability of behavioral measurements: Theory of generalizability for scores and profiles. New York: John Wiley; 1972. [Google Scholar]
- Garcia-Barrera MA, Kamphaus RW, Bandalos D. Theoretical and statistical derivation of a screener for the behavioral assessment of executive functions in children. Psychological Assessment. 2011;23:64–79. doi: 10.1037/a0021097.
- Hoyt WT. Rater bias in psychological research: When is it a problem and what can we do about it? Psychological Methods. 2000;5:64–86. doi: 10.1037/1082-989x.5.1.64.
- Hoyt WT, Kerns MD. Magnitude and moderators of bias in observer ratings: A meta-analysis. Psychological Methods. 1999;4:403–424.
- Hoyt WT, Melby JN. Dependability of measurement in counseling psychology: An introduction to generalizability theory. The Counseling Psychologist. 1999;27:325–352.
- Ito J, Maruyama A, Yamazaki A. Relationship between perceived self-regulation and prosocial behavior in preschool children. Japanese Journal of Educational Psychology. 1999;47:160–169.
- Kendall PC, Wilcox LE. Self-control in children: Development of a rating scale. Journal of Consulting and Clinical Psychology. 1979;47:1020–1029. doi: 10.1037//0022-006x.47.6.1020.
- Kojima M, Ikeda Y. Self-regulation in persons with Down’s Syndrome. Japanese Journal of Special Education. 2000;37:37–48.
- Kopp CB. Young children’s progression to self-regulation. In: Bullock M, editor. The development of intentional action. Basel: Karger; 1991. pp. 38–54.
- Lakes KD, Hoyt WT. Promoting self-regulation through school-based martial arts training. Journal of Applied Developmental Psychology. 2004;25:283–302.
- Lakes KD, Hoyt WT. Applications of Generalizability Theory to clinical child and adolescent psychology research. Journal of Clinical Child and Adolescent Psychology. 2009;38:1–22. doi: 10.1080/15374410802575461.
- Lufi D, Cohen A. A scale for measuring persistence in children. Journal of Personality Assessment. 1987;51:178–185. doi: 10.1207/s15327752jpa5102_2.
- Magno C. Assessing academic self-regulated learning among Filipino college students: The factor structure and item fit. The International Journal of Educational and Psychological Assessment. 2010;5(1):61–78.
- Martinez-Pons M. Emotional intelligence as a self-regulatory process: a social cognitive view. Imagination, Cognition, & Personality. 1999–2000;19:331–350.
- Mischel W, Shoda Y, Peake PK. The nature of adolescent competencies predicted by preschool delay of gratification. Journal of Personality and Social Psychology. 1988;54:687–696. doi: 10.1037//0022-3514.54.4.687.
- Moffit TE, Arseneault L, Belsky D, Dickson N, Hancox RJ, Harrington H, Houts R, Poulton R, Roberts BW, Ross S, Sears MR, Thomson WM, Caspi A. A gradient of childhood self-control predicts health, wealth, and public safety. PNAS. 2011;108:2693–2698. doi: 10.1073/pnas.1010076108.
- Muraven M, Baumeister RF. Self-regulation and depletion of limited resources: Does self-control resemble a muscle? Psychological Bulletin. 2000;126:247–259. doi: 10.1037/0033-2909.126.2.247.
- Nikata S, Shiomi K. Structures of self-regulation in Japanese preschool children. Psychological Reports. 1997;81:63–66.
- Pintrich PR, De Groot EV. Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology. 1990;82:33–40.
- Power TG, Chapieski ML, McGrath MP. Assessment of individual differences in infant exploration and play. Developmental Psychology. 1985;21:974–981.
- Shavelson RJ, Webb NM. Generalizability Theory: A primer. Newbury Park, CA: Sage; 1991.
- Silverman IW, Ragusa DM. A short-term longitudinal study of the early development of self-regulation. Journal of Abnormal Child Psychology. 1992;20:415–435. doi: 10.1007/BF00918985.
- Silverman IW, Gaines M. Using standard situations to measure attention span and persistence in toddler-aged children: Some cautions. The Journal of Genetic Psychology. 1996;157:397–410. doi: 10.1080/00221325.1996.9914874.
