PLOS One. 2022 May 17;17(5):e0268510. doi: 10.1371/journal.pone.0268510

Psychometric evaluation of protective measures in Native STAND: A multi-site cross-sectional study of American Indian Alaska Native high school students

Allyson Kelley 1, Thomas McCoy 2, Megan Skye 3, Michelle Singer 4, Stephanie Craig Rushing 4,*, Tamara Perkins 5, Caitlin Donald 3, Kavita Rajani 3, Brittany Morgan 6, Kelley Milligan 1, Tosha Zaback 3, William Lambert 3
Editor: Richard Huan Xu
PMCID: PMC9113605  PMID: 35580116

Abstract

American Indian and Alaska Native (AI/AN) youth are strong in culture, rich in heritage, and experience unique strengths and challenges throughout adolescence. Documenting conditions that protect against risk factors associated with poor health outcomes is needed. We explored scales measuring self-esteem, culture, social support, and community in a sample of 1,456 youth involved in Native STAND, a culturally-relevant evidence-based sexual health intervention. We established content validity by reviewing existing literature and community feedback. Construct validity was examined using factor analysis. The final self-esteem model included seven items; factor loadings ranged from 0.47 to 0.63 for positive self-esteem and 0.77 to 0.81 for negative self-esteem. The final culture model included three items, with factor loadings from 0.73 to 0.89. The social support scale included four items; factor loadings ranged from 0.86 to 0.87 for family social support and 0.75 to 0.77 for friends social support. The community and community safety scale included three items; factor loadings ranged from 0.52 to 0.82. Coefficient alphas for the scales ranged from α = 0.63 to α = 0.86. This study validated scales in a national sample of AI/AN youth; psychometric scales provide an essential tool for documenting the needs and strengths of AI/AN youth.

Introduction

The 577+ American Indian and Alaska Native (AI/AN) tribes and communities throughout the U.S. are diverse in their languages, cultures, traditions, spiritual practices, and heritage. This diversity must be considered when developing and utilizing measures to assess the effectiveness of health and educational programs, interventions, and policies. AI/AN youth under 18 years old make up about 29% of the AI/AN population; too many experience poor mental health outcomes resulting from disproportionate exposure to historical trauma and other social, structural, political, and environmental factors [1]. As a result, AI/AN youth are placed at greater risk for suicide, poor mental health, depression, and substance misuse [2].

Culturally-adapted evidence-based interventions (EBIs) are emerging as an essential tool for building health equity in AI/AN populations [3]. Assessing the validity of instruments that document the characteristics of program participants and program impact is essential. Without valid instrumentation, it is nearly impossible to know whether an intervention worked, for whom, and in what context. For decades, researchers, policymakers, educators, and interventionists have been challenged by data collection in AI/AN populations, including restricted data access, limited capacity to analyze data, difficulty finding comparison groups, limited quality data, small sample sizes, lack of national AI/AN samples, lack of cultural validity, and varying levels of programmatic capacity [4, 5]. Researchers often use assessment instruments designed for the general population in AI/AN contexts; often, these lack cultural context or community feedback [5]. Developing valid instruments with AI/AN communities requires an understanding of differences in views about what determines health and wellness and of the unique community and cultural practices that protect youth. The language and terms used to define health outcomes in mainstream survey tools may be based on western medicalized constructs that have little relevance in AI/AN community settings [5]. Researchers working with AI/AN communities to evaluate health programs and interventions need to reliably measure risk and protective factors among program participants. In this review, protective factors refer to conditions or variables that may increase positive health outcomes in AI/AN youth.

Research suggests that self-esteem, social support, community connections, and culture may be protective [6]. Self-esteem has been associated with school success, internal locus of control, perceptions of competence, social support, and coping skills [7]. Peer and family social support has been inversely correlated with AI/AN youth substance misuse [8]. Family caring has been associated with positive AI/AN youth mental health [9]. Cultural connectedness has been associated with academic success and resilience [10]. One study found that AI/AN youth with an interest in their culture were less likely to display violent behaviors [11]. Stable community conditions may foster healthy connections and safety [8, 12], which may be protective against substance misuse and suicide ideation. Such research demonstrates forward progress in understanding what determines health in AI/AN youth. Still, more work is needed to capture the strengths of AI/AN youth and communities through the development of valid survey measures.

Attempts to measure protective factors such as self-esteem, social support, community, and culture are evident in standardized assessments, but few have been developed with AI/AN youth in mind. First, Rosenberg’s self-esteem scale has been used to assess participant self-esteem characteristics in the general population [13]. Second, Zimet and colleagues developed a Multidimensional Perceived Social Support scale [14]–which is widely used in the health programming milieu. Unlike social support and self-esteem, where standard mental health assessments are used or adapted, assessing aspects of culture and ethnic identity requires a different approach. Snowshoe and colleagues developed the Cultural Connectedness Short Scale (CCSS) to measure culture in First Nations youth [15]. The CCSS is meant to be adapted by First Nations and Native communities based on their unique culture, traditions, and context. Importantly, culture and community depend on values, history, traditions, customs, sense of belonging, and membership–the construct of culture is often defined and conceptualized with a specific community in mind [16].

Much has been done to develop and evaluate culturally-relevant health interventions that build on the strengths and resilience of AI/AN youth. Less has been done to assess the effectiveness of these interventions using validated survey tools appropriate for the AI/AN community and population. The objective of this study was to validate the measurement properties of the Native Students Together Against Negative Decisions (Native STAND) survey instrument among AI/AN high school students.

About Native STAND

Native STAND is a comprehensive sexual health curriculum for Native high school students that supports healthy decision-making through interactive discussions and activities that promote self-esteem, goals and values, team building, negotiation and refusal skills, and effective communication. The 90-minute lessons contain stories from Tribal communities that ground learning in cultural teachings. Results from previous Native STAND evaluations demonstrate it to be an effective approach for addressing healthy relationships, STDs, and teen pregnancy [17]. The original STAND intervention was designed and evaluated among rural youth in the southern United States and was found to promote condom self-efficacy, knowledge of STI risk behaviors, and conversations with peers about other sexual health topics among participating students. Previous evaluations of the Native STAND curriculum conducted from 2010 to 2012 with a sample of 90 students reported positive results, with increases in knowledge of STD/HIV, reproductive health, and healthy relationships [17]. The original survey tool was developed by the Centers for Disease Control and Prevention and focused heavily on sexual health knowledge, attitudes, and behavior. For the present study, we shortened the survey tool, updated the demographic questions, and expanded the protective measures. No prior studies have established the reliability and validity of the tool’s protective measures.

Methods

The Native STAND D&I research study was a collaboration between [REMOVED FOR BLINDED REVIEW] and 48 Tribes and Native-serving organizations located across the U.S. The study protocol was reviewed and approved by OHSU (IRB00000734) and the Portland Area Indian Health Service Institutional Review Board (659942).

Sample

Students were located in 17 states, including Alaska. Most sites (82.7%) were in rural communities, primarily in the western United States. Youth who signed up to participate in Native STAND completed the Native STAND survey prior to participation, from September 2015 to March 2019.

Procedure

In 2014 the study team began revising the Native STAND survey tool, which was adapted from the Native Youth Survey and other questionnaires [17]. During the first round of site recruitment, the study team asked educators to review and discuss the survey instrument during an in-person orientation/training. Changes to the initial survey tool included reducing the number of self-esteem items, eliminating positive outlook, morals, and values, adaptability, and some measures on cultural pride and identity, to focus the instrument on the most important measures and minimize survey fatigue [18]. Facilitator feedback and literature reviews were used to establish the content validity of the survey measures. The final survey tool was utilized with three cohorts of students at 48 sites and assessed a broad range of health knowledge, attitudes, beliefs, intentions, behaviors, and skills relating to physical, sexual, mental, and psycho-social health. This study focuses on four protective measures included in the updated Native STAND survey tool: self-esteem, culture, social support, and community safety.

Parental consent and youth assent were obtained prior to data collection. To ensure the confidentiality of survey responses, each youth received a paper survey labeled with a unique study ID and a manila envelope. After completing the survey, youth sealed it in the envelope and returned it to the educator on site. Educators then mailed their class’s surveys to the Native STAND project office for data entry and cleaning. Once surveys were received, researchers immediately scanned them for safety concerns. If a safety concern was noted among their students, educators were notified so they could address it through the safety protocols in place within their institution.

Measures

Self-esteem

Participants were asked their level of agreement with seven statements based on questions selected from the Rosenberg Self-esteem scale [13] by the study team and reviewed with participating health educators (e.g., I take a positive attitude toward myself). The team examined the VOICES survey [7] and the original Rosenberg Self-esteem scale, which includes ten items, five positively worded and five negatively worded. The study team reduced the scale from ten items to seven and omitted one negatively worded item [7, 13, 18]. Previous research reports potentially low reliability of negatively worded items among youth and culturally-diverse populations [7]. Response options used a Likert-style scale from 1 = Strongly Disagree to 5 = Strongly Agree.
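Negatively worded items on a five-point scale such as this are conventionally reverse-coded before scoring, so that higher values consistently indicate higher self-esteem. A minimal sketch of that standard recoding convention (illustrative only, not code from the study):

```python
def reverse_code(response, scale_min=1, scale_max=5):
    """Recode a reverse-worded Likert response so higher = more positive.

    On a 1-5 scale this maps 1 -> 5, 2 -> 4, 3 -> 3, 4 -> 2, 5 -> 1.
    """
    return scale_max + scale_min - response


# Example: a respondent who strongly agrees (5) with a negatively worded
# item such as "I feel that I am a failure" receives the lowest
# self-esteem score (1) for that item.
```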

Culture

Participants were asked to rate their level of agreement with three statements based on culture questions identified by the study team and reviewed with participating health educators (e.g., I believe that I have many strengths because I am Native American).

Social support

Participants were asked to rate their level of agreement with four statements based on questions identified by the study team and reviewed with participating health educators (e.g., I can talk about my problems with my friends).

Community and community safety

Participants were asked to rate their level of agreement with three statements based on questions identified by the study team and reviewed with participating health educators (e.g., I feel safe in my community or neighborhood).

[See S1 File]

Data analysis

Item analyses were first performed for descriptive statistics of item ratings (mean (M), standard deviation (SD)) and the sample size of youth with complete responses. Inter-item correlations among responses as well as corrected item-total correlations were estimated. Exploratory factor analysis (EFA) was performed using principal axis factoring extraction with an oblique promax rotation. The number of factors extracted was based on parallel analysis [19]. An item was considered to load onto a factor if its factor loading was at least 0.40. Non-loading items and items loading on more than one factor were removed and the EFA was re-run. Internal consistency reliability was estimated for each factor using Cronbach’s alpha (α). All analyses were performed in SPSS v26 (IBM Corp., Armonk, NY) and Mplus v8.3 [20]. A two-sided p-value < 0.05 was considered statistically significant.
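Two of the steps named above, Horn's parallel analysis for choosing the number of factors and Cronbach's alpha for internal consistency, can be sketched in a few lines of NumPy. This is an illustrative re-implementation of the standard definitions, not the study's SPSS/Mplus code:

```python
import numpy as np


def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: retain factors whose observed
    correlation-matrix eigenvalues exceed the mean eigenvalues obtained
    from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # eigvalsh returns ascending order; reverse to descending
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eig += np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    rand_eig /= n_iter
    return int(np.sum(obs_eig > rand_eig))


def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

For example, simulating five items driven by a single latent trait yields one retained factor under parallel analysis and a high alpha, mirroring the one-factor results reported for the culture scale.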

Results

Sample characteristics

Our sample included 1,456 students with a median age of 15; 50.2% were female, 45.7% male, and 1.2% transgender (Table 1). Students represented 48 sites throughout the US, and most (84.9%) were American Indian.

Table 1. Characteristics of Native STAND participants, 2015–2019 (N = 1,456).

Participant Characteristic n (%)
Age (Median [IQR]) 15 [2]
Gender
    Female 731 (50.2)
    Male 665 (45.7)
    Transgender 17 (1.2)
    missing 43 (3.0)
Race/Ethnicity
    American Indian/Alaska Native 1,236 (84.9)
    Other (non-White) 179 (12.2)
Sexual Orientation
    Straight/Heterosexual 1,045 (71.8)
    LGBTQ2S+ 129 (8.9)
    Unsure/Don’t Know 188 (12.9)
    missing 94 (6.5)
Geographical distribution
    Oregon 166 (11.4)
    Arizona 394 (27.1)
    New Mexico 401 (27.5)
    Other 495 (34.0)

Note. IQR = Interquartile range.

Descriptive results

Table 2 displays descriptive results of the Native STAND student participants. Mean scores ranged from 2.5 to 4.4.

Table 2. Item location within constructs of Native STAND participants, 2015–2019.

Construct/Item M (SD) r item-total
Self-Esteem (n = 1,373) a
    I smile and laugh a lot 4.1 (0.86) .426
    I adjust well to new situations and challenges 3.7 (0.82) .411
    I try to do my best 4.3 (0.72) .397
    I am optimistic about my future 3.9 (0.86) .456
    I have a sense of what life is calling me to do 3.4 (0.95) .361
    Sometimes I think I am no good at all (RV) b 3.0 (1.16) .628
    I feel that I am a failure (RV) b 2.5 (1.13) .628
Culture (n = 1,390) a
    Being Native American is a major part of my identity 4.2 (1.02) .759
    I believe that I have many strengths because I am Native American 3.8 (1.02) .661
    I have spent more time trying to find out more about the history, traditions, and customs of Native people 3.8 (1.04) .691
Social Support (n = 1,422) a
    If I had a personal problem, I could ask someone in my family for help 3.7 (1.15) .758
    Share thoughts/feelings family 3.7 (1.12) .758
    I have friends who support me 4.0 (0.94) .588
    I can talk about my problems with my friends 3.6 (1.13) .588
Community (n = 1,416) a
    I feel safe in my community or neighborhood 3.9 (0.90) .538
    If I had to move, I would miss the community I now live in 3.9 (1.14) .432
    I feel safe at home 4.4 (0.77) .405

a Individual variable denominators differ depending on missingness. Anchor ratings of 1 = Strongly Disagree to 5 = Strongly Agree.

b Reverse coded (RV). M = mean, SD = standard deviation, r item-total = corrected item-total correlation.
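The r item-total column reports corrected item-total correlations: each item is correlated with the sum of the remaining items in its construct, excluding the item itself so it does not inflate the correlation. A small NumPy sketch of the standard computation (illustrative, not the study's code):

```python
import numpy as np


def corrected_item_total(items):
    """Corrected item-total correlations for an (n_respondents, k_items)
    array: each item vs. the sum of the *other* items."""
    n, k = items.shape
    totals = items.sum(axis=1)
    out = np.empty(k)
    for j in range(k):
        rest = totals - items[:, j]  # total score without item j
        out[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return out
```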

Self-esteem

Item ratings varied from 3.4 for “calling” to 4.3 for “best” on average (SD ranged from 0.72 to 0.95), with a sample size of 1,373 youth with complete item responses. Inter-item correlations ranged from 0.213 to 0.329 (average = 0.278) and corrected item-total correlations ranged from 0.361 to 0.628. Parallel analysis indicated two factors of self-esteem, which explained 53.9% of the variance in the seven self-esteem items (see Fig 1). The two reverse-scored (negatively worded) items loaded on one factor (Negative Self-esteem) while the remaining five items loaded onto a second factor (Positive Self-esteem), with a correlation of 0.442 between the two factors. Factor loadings ranged from 0.47 to 0.63 for Positive Self-esteem (Cronbach’s α = 0.655) and 0.77 to 0.81 for Negative Self-esteem (α = 0.771).

Fig 1. Self-esteem parallel analysis scree plot and path diagram.

Fig 1

Culture

Item ratings varied from 3.8 for “history” and "native strengths" to 4.2 for “native identity” on average (all SD = 1.0), with a sample size of 1,390 youth with complete item responses. Inter-item correlations ranged from 0.565 to 0.692 (average = 0.636) and corrected item-total correlations ranged from 0.661 to 0.759. Parallel analysis indicated one factor of AI/AN Culture, which explained 75.8% of the variance in the three culture items (Fig 2). Factor loadings ranged from 0.73 to 0.89, and internal consistency was high (α = 0.840).

Fig 2. Culture parallel analysis scree plot and path diagram.

Fig 2

Social support

Item ratings varied from 3.6 for “talk problems” to 4.0 for “support” on average (SD ranged from 0.94 to 1.1) with a sample size of 1,422 youth with complete item responses. Inter-item correlations ranged from 0.286 to 0.759 (average = 0.439) and corrected item-total correlations ranged from 0.588 to 0.758. Two factors of social support resulted based on parallel analysis, which explained 83.7% of the variance in the four social support items (Fig 3). The two “family” related items loaded on their own factor (Family social support) while the two “friends” related items loaded onto their own factor (Friends social support), with a correlation between them of 0.475. Factor loadings ranged from 0.865 to 0.874 for Family social support (α = 0.732) and 0.759 to 0.770 for Friends social support (α = 0.862).

Fig 3. Social support parallel analysis scree plot and path diagram.

Fig 3

Community and community safety

Ratings varied from 3.9 for “would miss” and “feel safe” to 4.4 for “home” on average (SD range 0.77 to 1.1), with a sample size of 1,416 youth with complete item responses. Inter-item correlations ranged from 0.279 to 0.439 (average = 0.382) and corrected item-total correlations ranged from 0.405 to 0.538. Parallel analysis indicated one factor of Community, which explained 59.0% of the variance in the three community items (Fig 4). Factor loadings ranged from 0.52 to 0.82, and internal consistency was modest (α = 0.635).

Fig 4. Community parallel analysis scree plot and path diagram.

Fig 4

Discussion

The Native STAND survey scales reported here demonstrate the validity and reliability of strength-based survey measures in AI/AN youth participating in the Native STAND multi-site cross-sectional study. The participatory process used to develop the survey tool and implement the intervention with diverse AI/AN communities added to the content validity of the survey questions and provides a framework for other communities and researchers moving forward.

Previous research on the psychometric properties of strength-based scales has not been specific to AI/AN populations or the Native STAND intervention. These findings fill an essential gap in the literature, documenting the validity and reliability of strength-based measures in a national sample of AI/AN youth. Results presented here are a starting point for researchers, communities, and programs as they continue developing surveys with solid psychometric properties for documenting the effectiveness of various programs, policies, and interventions. Results from the factor analysis and internal consistency analyses provide guidance on creating and using sum scores for each factor/subscale.

Future efforts can build on this preliminary work, measuring protective factors in AI/AN youth that promote health and healthy decision-making [21]. For example, researchers may consider more comprehensive survey measures that define and operationalize culture and Native American identity, such as culture-specific questions about involvement in the history, traditions, and customs of a specific tribal group. At the individual level, future protective and strength-based surveys may include questions about healthy coping and problem-solving skills, emotional self-regulation, academic achievement, and positive physical development [7]. At the community level, additional questions could expand on the role of school, neighborhood, and community while documenting the presence of mentors, support, positive norms, behavior expectations, and safety [6]. Family strengths and protective factors are also essential, and documenting how the family provides structure, limits, rules, monitoring, predictability, supportive relationships, and behavior expectations is warranted. The Native STAND team encourages researchers, policymakers, programs, communities, clinicians, and educators to consider what is protective and what questions are essential to a program or study. Meeting with communities and members of the focus population can help define what wellness, health, culture, and community look like and create surveys that build on these strengths and resources [5]. Although the Native STAND survey does not capture every possible strength-based measure, it is a starting point for documenting the strengths and needs of AI/AN youth throughout the life course.

Limitations

Although the contributions of this study are clear, a few limitations should be noted. First, information collected from Native STAND youth is based on self-report; this could result in social desirability bias. The Native STAND team provided confidential areas for data collection and ensured anonymity throughout the intervention. Second, criterion validity was not assessed in this study, and the team was not able to compare responses with actual behaviors. Last, although Native STAND was conducted with a national sample of AI/AN high school youth, these youth do not represent all populations, communities, or cultural perspectives of the 577+ tribal nations in the U.S. The reliability and validity of these scales may differ in other populations.

Conclusion

This study validated strength-based scales in a national sample of AI/AN youth involved in the Native STAND study. Findings support the use of strength-based measures and community participation in the research process, as opposed to deficit-based measures focused on problem behaviors. Protective and strength-based survey measures benefit individuals and the community as a whole because they demonstrate what is positive in a youth’s life and what promotes health. This knowledge is often transferable at the individual, family, and community levels.

Supporting information

S1 File. Native STAND survey measures.

(DOCX)

Acknowledgments

The authors acknowledge the incredible efforts made by youth, educators, Tribes, and schools who completed the Native STAND survey. We appreciate your dedication to wellness and the Native STAND curriculum.

Data Availability

Data cannot be shared publicly because American Indian youth are a vulnerable population and the study does not allow for data sharing. AI/AN youth are considered a vulnerable population due to the historical and unethical research practices conducted by universities and the US government on AI/AN populations, and data from this sample could potentially be identified. All shareable data are within the paper and its Supporting Information files. Researchers who meet the criteria for access to confidential data may request the data by contacting the corresponding author, Dr. Stephanie Craig Rushing, who would present the request to the tribe and IRBs involved in the Native STAND study; requests must also go through the appropriate IRBs and Ethics Committees. The Portland Area Indian Health Service IRB contact is Rena Macy, Co-Chair, Portland Area IHS IRB, 1414 NW Northrup St, Suite 800, Portland, OR 97209; phone: 503-414-5540.

Funding Statement

This work was supported in part by the award number 5 U48DP005006-05 from the Centers for Disease Control and Prevention, Cooperative Agreement. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Centers for Disease Control. There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. National Congress of American Indians [Internet]. Demographics. ND. [cited 2021 July 8]. Available from: https://www.ncai.org/about-tribes/demographics
  • 2. D’Amico EJ, Dickerson DL, Brown RA, Klein DJ, Agniel D, Johnson C. Unveiling an ‘invisible population’: Health, substance use, sexual behavior, culture, and discrimination among urban American Indian/Alaska Native adolescents in California. Ethn Health 2019; 11:1–8. doi: 10.1080/13557858.2018.1562054
  • 3. Dickerson D, Baldwin JA, Belcourt A, Belone L, Gittelsohn J, Kaholokula JK, et al. Encompassing cultural contexts within scientific research methodologies in the development of health promotion interventions. Prev Sci 2020; 21(1):33–42. doi: 10.1007/s11121-018-0926-1
  • 4. Stratford B [Internet]. American Indian and Alaska Native communities face unique challenges when it comes to public health data. Child Trends 2021. [cited 2021 July 8]. Available from: https://www.childtrends.org/blog/american-indian-alaska-native-communities-face-unique-challenges-comes-public-health-data
  • 5. Kelley A, Piccione C, Fisher A, Matt K, Andreini M, Bingham D. Survey development: community involvement in the design and implementation process. J Public Health Manag Pract 2019; 25:S77. doi: 10.1097/PHH.0000000000001016
  • 6. Henson M, Sabo S, Trujillo A, Teufel-Shone N. Identifying Protective Factors to Promote Health in American Indian and Alaska Native Adolescents: A Literature Review. J Prim Prev 2017; 38(1–2):5–26. doi: 10.1007/s10935-016-0455-2
  • 7. Whitesell N, Mitchell C, Spicer P, The Voices of Indian Teens Project Team. A longitudinal study of self-esteem, cultural identity, and academic success among American Indian adolescents. Cultur Divers Ethnic Minor Psychol 2009; 15(1):38–50. doi: 10.1037/a0013456
  • 8. Kelley A, Witzel M, Fatupaito B. Preventing Substance Use in American Indian Youth: The Case for Social Support and Community Connections. Subst Use Misuse 2019; 54(5):787–95. doi: 10.1080/10826084.2018.1536724
  • 9. Cummins J, Ireland M, Resnick M, Blum R. Correlates of Physical and Emotional Health Among Native American Adolescents. J Adolesc Health 1998; 38–44. doi: 10.1016/S1054-139X(98)00063-9
  • 10. LaFromboise T, Hoyt D, Oliver L, Whitbeck L. Family, Community, and School Influences on Resilience Among American Indian Adolescents in the Upper Midwest. J Community Psychol 2006; 32(2):193–209. doi: 10.1002/jcop.20090
  • 11. Pu J, Chewing B, St. Clair I, Kokotailo P, Lacourt J, Wilson D. Protective Factors in American Indian Communities and Adolescent Violence. Matern Child Health J 2013; 17:1199–1207. doi: 10.1007/s10995-012-1111-y
  • 12. Kenyon D, Carter J. Ethnic identity, sense of community, and psychological well-being among northern plains American Indian youth. J Community Psychol 2011; 39(1):1–9.
  • 13. Rosenberg M. Rosenberg self-esteem scale (RSE). Acceptance and commitment therapy. Measures package; 1965; 61(52):18.
  • 14. Zimet GD, Powell SS, Farley GK, Werkman S, Berkoff KA. Psychometric characteristics of the Multidimensional Scale of Perceived Social Support. J Pers Assess 1990; 55(3–4):610–7. doi: 10.1080/00223891.1990.9674095
  • 15. Snowshoe A, Crooks CV, Tremblay PF, Craig WM, Hinson RE. Development of a Cultural Connectedness Scale for First Nations youth. Psychol Assess 2015; 27(1):249. doi: 10.1037/a0037867
  • 16. Walls ML, Whitesell NR, Barlow A, Sarche M. Research with American Indian and Alaska Native populations: measurement matters. J Ethn Subst Abuse 2019; 18(1):129–49. doi: 10.1080/15332640.2017.1310640
  • 17. Rushing SN, Hildebrandt NL, Grimes CJ, Rowsell AJ, Christensen BC, Lambert WE. Healthy & empowered youth: a positive youth development program for native youth. Am J Prev Med 2017; 52(3):S263–7. doi: 10.1016/j.amepre.2016.10.024
  • 18. Marsh HW. Positive and negative global self-esteem: A substantively meaningful distinction or artifactors? J Pers Soc Psychol 1996; 70(4):810. doi: 10.1037//0022-3514.70.4.810
  • 19. Horn J. A rationale and test for the number of factors in factor analysis. Psychometrika 1965; 30(2):179–185. doi: 10.1007/BF02289447
  • 20. Dunn T, Baguley T, Brunsden V. From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. Br J Psychol 2014; 105(3):399–412. doi: 10.1111/bjop.12046
  • 21. Mackin J, Perkins T, Furrer C. The Power of Protection: A Population-based Comparison of Native and non-Native Youth Suicide Attempts. American Indian & Alaska Native Mental Health Research 2012; 19(2):20–54. doi: 10.5820/aian.1902.2012.20

Decision Letter 0

Richard Huan XU

24 Jan 2022

PONE-D-21-24780

Psychometric evaluation of measures in the Native STAND study of American Indian Alaska Native Youth

PLOS ONE

Dear Dr. Kelley,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 10 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Richard Huan XU

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating in your Funding Statement:

“This work was supported in part by the Centers for Disease Control and Prevention, Cooperative Agreement Number [5 U48DP005006-05].”

Please provide an amended statement that declares *all* the funding or sources of support (whether external or internal to your organization) received during this study, as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now.  Please also include the statement “There was no additional external funding received for this study.” in your updated Funding Statement.

Please include your amended Funding Statement within your cover letter. We will change the online submission form on your behalf.

3. Thank you for stating the following in your Competing Interests section: 

“No authors have competing interests.”

Please complete your Competing Interests on the online submission form to state any Competing Interests. If you have no competing interests, please state "The authors have declared that no competing interests exist.", as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now

 This information should be included in your cover letter; we will change the online submission form on your behalf.

4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

5. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please delete it from any other section.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the opportunity to review the manuscript. This study conducted a psychometric evaluation of a survey tool used in the Native STAND study for American Indian/Alaska Native youth. After review, I found some weaknesses in its results and discussion. Detailed comments can be found below.

1. Line 148-150, the authors stated that they added questions on multiple domains in the survey tool, so why focus on only four domains in this study? Including the other new domains could give a fuller picture of the validity and reliability of the new tool; please consider adding them, or explain why these four domains are particularly important.

2. Could the authors also report the average inter-item correlations (in addition to the range)? The details of the pairwise inter-item correlations and the corrected item-total correlations could be put in a supplementary file for readers interested in the details of the tool. For the “α” in the results, please specify that it is Cronbach's α (if not, please define this indicator).

3. From the results, I can only tell that the scales have reasonable internal consistency. What about the other aspects of validity and reliability? For example, did the authors test the criterion validity of the survey tool?

4. Did the authors test the test-retest reliability of this tool, for all or a subset of the participants? This would evaluate the stability of the measurement over time, which is an important component of its psychometric properties.

5. What are the implications of the reported EFA outcomes for implementation of this tool and future studies, or how can people make use of the EFA outcomes in practice? Is it possible to generate a composite score for each of the domains or subscales using the EFA outcomes? Please consider adding this to the discussion.

6. In the discussion, the authors indicated that “The participatory process used to develop the survey tool and implement the intervention with diverse AI/AN communities provides a framework for other communities and researchers moving forward”; therefore, could the authors describe the process used to develop the survey tool in detail, particularly how the content validity was assessed and improved?

7. How were the reliability and validity of the survey tool in AI/AN youth compared with the general population, especially for self-esteem, which is included in both the old and new versions of the tool? Please describe this in the discussion.

8. How would the authors comment on the generalizability/external validity of the updated survey tool among youth in other ethnic minority groups?

Reviewer #2: Thanks for inviting me to review this interesting manuscript.

In general, the authors have done tremendous work to obtain the data. Below I provide comments and concerns which, I think, can be clarified by the authors and are necessary for the publication of the paper.

Q1

Title

1. In the title, I think the use of “youth” may not be appropriate, as the sample of this study consisted merely of high school students with a median age of 15 years.

2. “Psychometric evaluation” implies that the manuscript must present specific results of the survey and analyze their differences. But the “objective” of this study (line 112-114, page 5) was to validate the measurement properties of …. On the other hand, as research on reliability and validation, I don't think this study can reach that conclusion (more details given in Q3), so it would be more appropriate to call it a pilot study or a cross-sectional study.

Q2

The presentation of the manuscript is not rigorous enough, and some small mistakes are obvious.

1. As far as is known from the manuscript, the sample size of this research was exactly 1,456; but in line 44-45 on page 1, why did the authors say “more than”?

2. Page 2, line 48: the authors stated in the manuscript that the negative self-esteem factor had two items, so the factor loading of “0.65” was redundant.

3. I have doubts about the “resilience” mentioned in the keywords list; there is little content related to it.

4. Page 9, line 204: the item ratings here were all included in the factor of “positive self-esteem”; this should be clearly defined and stated.

5. Page 9, line 213: the items “history” and “native strengths” had the same rating of 3.8.

Q3

Some of the statistical procedures were not clearly explained, or the methods were incorrectly used. Because of this issue, the conclusion that the measurement is a reliable and valid tool should be reconsidered or "tone-downed".

1. As presented in Figure 1 and Figure 3, parallel analysis of “self-esteem” and “social support” may indicate one more factor for each scale; this is also suggested by the unsatisfactory total variance explained (e.g., 53.9% for self-esteem and 59% for community and community safety). Please clarify.

2. Page 6, line 138-139: the participants of this study were high school students, so they cannot represent “youth”. Also, there are fifty states in the U.S.; how were these 16 states sampled? This may need more clarification.

3. Page 8, line 189-190: was any item removed based on the EFA, and by what procedure? I did not see it.

4. Page 6, line 139-140: “Most of the sites were from rural communities and were from the western part of the United States.” Perhaps the geographical distribution of the samples could be elaborated in Table 1, for clarification.

5. Page 6, line 140: “a baseline survey” was done prior to participation; however, there is no description of the baseline. Please elaborate.

6. Line 148-150: the study team added questions about many factors, but the manuscript did not show their results; if they are not reported, why mention them here? Please clarify.

7. Table 1: the sample consisted of 1,140 AI/AN and 150 others. Regarding the Race/Ethnicity data, if 1140+150=1290 students were included in this study, this does not match the reported No. (87.8%) and No. (12.2%). Please explain.

Q5

Part of “measures”

1. The question-generation methods and scoring rules of the four scales were identical; I think there is no need to repeat the same description four times.

2. Line 182: where is “supplement 1”? I didn't see it.

Q6

The “Conclusion” of the study merely derives a “starting point”, and very little valuable information can be drawn from the current statement by fellow researchers.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Ling-ming Zhou

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 May 17;17(5):e0268510. doi: 10.1371/journal.pone.0268510.r002

Author response to Decision Letter 0


29 Mar 2022

March 9, 2022

Dear Editor,

We are pleased to resubmit our manuscript for consideration in PLOS One. Our responses to reviewer comments are located in the table below.

Please feel free to contact me if you have additional questions.

Allyson Kelley, DrPH

kelleyallyson@gmail.com

Reviewer 1 Response

Line 148-150, the authors stated that they added questions on multiple domains in the survey tool, so why focus on only four domains in this study? Including the other new domains could give a fuller picture of the validity and reliability of the new tool; please consider adding them, or explain why these four domains are particularly important.

Consistent with AI/AN community protocols, this study focused on survey domains that were positive, strengths-based, and protective. The inclusion of domains beyond self-esteem, culture, social support, and community safety was outside the scope of this study. The Native STAND team has published other research on all domains included in the Native STAND comprehensive sex education curriculum.

Could the authors also report the average inter-item correlations (in addition to the range)? The details of the pairwise inter-item correlations and the corrected item-total correlations could be put in a supplementary file for readers interested in the details of the tool. For the “α” in the results, please specify that it is Cronbach's α (if not, please define this indicator). We have added the average inter-item correlations in the text when reporting the range.

We have already reported the range of the corrected item-total correlations, so we decline to add a supplementary file.

We have defined Cronbach’s α at first use.
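For readers unfamiliar with these indicators, a minimal sketch of how Cronbach's α and the average inter-item correlation could be computed is shown below. The Likert data here are made up for illustration and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def avg_inter_item_corr(items):
    """Mean of the off-diagonal pairwise Pearson correlations."""
    r = np.corrcoef(items, rowvar=False)
    return r[np.triu_indices_from(r, k=1)].mean()

# Made-up 1-5 Likert responses for a 4-item scale: a shared "trait" plus noise,
# clipped back to the 1-5 response range.
rng = np.random.default_rng(0)
trait = rng.integers(1, 6, size=(200, 1))
noise = rng.integers(-1, 2, size=(200, 4))
scores = np.clip(trait + noise, 1, 5).astype(float)

print(round(cronbach_alpha(scores), 2))
print(round(avg_inter_item_corr(scores), 2))
```

Because the simulated items share a common trait, α and the average inter-item correlation both come out clearly positive; on real scale data the same two functions give the statistics discussed above.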

From the results, I can only tell that the scales have reasonable internal consistency. What about the other aspects of validity and reliability? For example, did the authors test the criterion validity of the survey tool?

We did assess criterion validity of the scales against other studies. For example, the Native STAND efficacy study (Skye, 2021) reported on study outcomes related to sexual health intervention constructs and the strength-based measures reported here, for example family social support. In the 2021 article, the authors utilized this tool at baseline and follow-up and found some slight increases.

Did the authors test the test-retest reliability of this tool, for all or a subset of the participants? This would evaluate the stability of the measurement over time, which is an important component of its psychometric properties. No, this was not feasible due to the manner in which survey data were collected. In the future the team plans to test the test-retest reliability of the tool with other participants to assess stability of measurement.

What are the implications of the reported EFA outcomes for implementation of this tool and future studies, or how can people make use of the EFA outcomes in practice? Is it possible to generate a composite score for each of the domains or subscales using the EFA outcomes? Please consider adding this to the discussion. EFA provides evidence of the number of factors, their loadings (i.e., measures of item discrimination), and so on. It is standard practice to create sum scores (or weighted sum scores) from the items that load on each factor. The implication for readers is that they can use subscale sum scores to analyze those factors in future research. We have added language to the Discussion on this point.
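The subscale sum scores described in this response can be sketched as follows. The item names are hypothetical shorthand for the seven self-esteem items (they are not the survey's actual variable names), and the two subscales follow the positive/negative factor split reported in the study.

```python
# Unweighted subscale sum scores from the items the EFA assigned to each factor.
SUBSCALES = {
    "positive_self_esteem": ["smile_laugh", "adjust_well", "do_my_best",
                             "optimistic", "life_calling"],
    "negative_self_esteem": ["no_good_at_all", "failure"],
}

def sum_scores(responses):
    """responses maps item name -> 1-5 Likert rating; returns one sum per subscale."""
    return {scale: sum(responses[item] for item in items)
            for scale, items in SUBSCALES.items()}

responses = {"smile_laugh": 4, "adjust_well": 5, "do_my_best": 5,
             "optimistic": 4, "life_calling": 3,
             "no_good_at_all": 2, "failure": 1}
print(sum_scores(responses))
```

A weighted variant would multiply each rating by its factor loading before summing; the unweighted version shown here is the simpler and more common choice in practice.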

In the discussion, the authors indicated that “The participatory process used to develop the survey tool and implement the intervention with diverse AI/AN communities provides a framework for other communities and researchers moving forward;” therefore, could the authors describe the process to develop the survey tool in detail, particularly how the content validity was assessed and improved? We added a description of the process as it relates to content validity in the section before measures. Thank you.

How were the reliability and validity of the survey tool in AI/AN youth compared with the general population, especially for self-esteem, which is included in both the old and new versions of the tool? Please describe this in the discussion. We addressed this comment and used the Rosenberg Self-Esteem (RSE) scale with AI/AN youth as an example. We are not sure what is meant by the old and new versions of the tool. Our team reduced the number of items in the self-esteem scale from 10 to 7. The original RSE included five positive and five negative items. Previous research reports that the negative or reverse-coded (RV) items were not as reliable as the positively coded items. In the original scale, a score of 30 to 40 indicates high self-esteem, and internal consistency and test-retest correlations were good, with Cronbach's alphas of 0.85 and 0.88. We expanded on this in the discussion and added a citation from the Voices of Indian Teens research study with AI/AN youth, which also utilized a modified six-item RSE in the Native VOICES study; alphas ranged from .79 to .84.

Self-esteem (Likert 1=Strongly Disagree-5=Strongly Agree)

1. I smile and laugh a lot

2. I adjust well to new situations and challenges

3. I try to do my best

4. I am optimistic about my future

5. I have a sense of what life is calling me to do

6. Sometimes I think I am no good at all (RV)

7. I feel that I am a failure (RV)
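Scoring the seven items above, with the two (RV) items reverse-coded, can be sketched as below. On a 1-5 Likert scale a reversed score is (max + min) − raw, i.e. 6 − raw, so that higher totals always indicate higher self-esteem. The item keys are abbreviations for illustration, not the survey's actual variable names.

```python
# Items 6 and 7 above are reverse-keyed (RV).
REVERSED = {"no_good_at_all", "failure"}

def score_self_esteem(responses):
    """Sum the seven 1-5 ratings after flipping the reverse-keyed items."""
    return sum(6 - v if item in REVERSED else v
               for item, v in responses.items())

responses = {"smile_laugh": 4, "adjust_well": 4, "do_my_best": 5,
             "optimistic": 4, "life_calling": 3,
             "no_good_at_all": 2, "failure": 1}
print(score_self_esteem(responses))
```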

How would the authors comment on the generalizability/external validity of the updated survey tool among youth in other ethnic minority groups? While we cannot comment on the generalizability and external validity of the updated survey tool with other ethnic minority groups, we can say that the survey tool is applicable to other AI/AN youth, communities, settings, and interventions. Within AI/AN communities, the strength-based focus of this survey tool makes it relevant to AI/AN youth populations.

Reviewer #2

In the title, I think the use of “youth” may not be appropriate, as the sample of this study consisted merely of high school students with a median age of 15 years. We agree, and we have changed this to high school students.

“Psychometric evaluation” implies that the manuscript must present specific results of the survey and analyze their differences. But the “objective” of this study (line 112-114, page 5) was to validate the measurement properties of …. On the other hand, as research on reliability and validation, I don't think this study can reach that conclusion (more details given in Q3), so it would be more appropriate to call it a pilot study or a cross-sectional study. Psychometric implies that measurement properties are evaluated to some extent. This can be done in cross-sectional studies through a variety of approaches for validity evidence, including internal validity via factorial validity for construct validity, which we did. We revised the title to: Psychometric Evaluation of Protective Measures in Native STAND: A Multi-site Cross-Sectional Study of American Indian Alaska Native High School Students

The presentation of the manuscript is not rigorous enough, and some small mistakes are obvious.

Thank you, we have addressed this concern.

As far as is known from the manuscript, the sample size of this research was exactly 1,456; but in line 44-45 on page 1, why did the authors say “more than”? We removed the “more than” language. The original reason was that some youth completed post data only, but we excluded them from this study because that was after the Native STAND curriculum, which we viewed as an intervention, and we wanted to evaluate measurement properties only before/pre-intervention. The sample size at pre-Native STAND is indeed 1,456 participants, which we have tried to make clear in the resubmission.

Page 2, line 48: the authors stated in the manuscript that the negative self-esteem factor had two items, so the factor loading of “0.65” was redundant. Deleted, thank you.

I have doubts about the “resilience” mentioned in the keywords list; there is little content related to it. Removed from the keyword list.

Page 9, line 204: the item ratings here were all included in the factor of “positive self-esteem”; this should be clearly defined and stated. Revised, thank you.

Page 9, line 213: the items “history” and “native strengths” had the same rating of 3.8. This is not an error: the mean ratings were both 3.8, as initially reported.

Some of the statistical procedures were not clearly explained, or the methods were incorrectly used. Because of this issue, the conclusion that the measurement is a reliable and valid tool should be reconsidered or "tone-downed". Agreed; we toned down the conclusion and implications.

As presented in Figure 1 and Figure 3, parallel analysis of “self-esteem” and “social support” may indicate one more factor for each scale; this is also suggested by the unsatisfactory total variance explained (e.g., 53.9% for self-esteem and 59% for community and community safety). Please clarify. Figure 1 indicates two factors very clearly by the parallel analysis. We then conclude with a two-factor model, positive and negative self-esteem, which has been found many times before with the Rosenberg Self-Esteem scale. So there is no contradiction between what we report and what parallel analysis suggests.

Preacher and MacCallum (2003) recommend parallel analysis (PA) as the way to choose the number of factors in an EFA, rather than total variance explained, whether alone or in combination with PA. If we were to choose more factors than the two that PA suggests, PA would disagree, and we would not be following best practices for choosing the number of factors/dimensionality, as Preacher and MacCallum (2003) suggest. Reference: Preacher, K. J., & MacCallum, R. C. (2003). Repairing Tom Swift's electric factor analysis machine. Understanding Statistics: Statistical Issues in Psychology, Education, and the Social Sciences, 2(1), 13–43.

Further, other papers published in PLOS ONE itself report EFAs with similar percentages of total variance explained, for example this 2022 PLOS ONE reference: Fitriana, N., Hutagalung, F. D., Awang, Z., & Zaid, S. M. (2022). Happiness at work: A cross-cultural validation of happiness at work scale. PLOS ONE, 17(1), e0261617. https://doi.org/10.1371/journal.pone.0261617

It remarks (at the bottom of page 6 of 16): “According to Hair, Black [17], total variance of 60% or even less than 60% is considered acceptable for social sciences.” So why should our presentation for PLOS ONE be any different when we follow best practices, utilizing PA as the best way to choose the number of factors per Preacher and MacCallum (2003)? Where more than one factor was found, we reported it as such.

The same response applies to Figure 3: PA found two factors, and we concluded two factors of social support, family and friends social support. We report the separate factor loadings of each, their factor correlation, and separate internal consistency reliability estimates.
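Horn's parallel analysis, as applied in this exchange, can be sketched as follows: observed correlation-matrix eigenvalues are compared against the mean eigenvalues of same-shaped random data, and factors are retained until an observed eigenvalue falls below its random reference. The simulated two-factor dataset below is illustrative only, not the study's data.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: keep leading factors whose observed
    eigenvalue exceeds the mean eigenvalue of same-shaped random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # eigvalsh returns eigenvalues in ascending order; reverse to descending.
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for s in range(n_sims):
        rand = rng.standard_normal((n, p))
        sims[s] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    ref = sims.mean(axis=0)
    keep = 0
    for o, r in zip(obs, ref):
        if o <= r:
            break
        keep += 1
    return keep

# Simulated data with a known two-factor structure (three items per factor).
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 2))
load = np.array([[0.8, 0.0], [0.7, 0.0], [0.75, 0.0],
                 [0.0, 0.8], [0.0, 0.7], [0.0, 0.75]])
data = factors @ load.T + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(data))  # suggests a two-factor solution
```

Note that this decision rule uses only the eigenvalue comparison, consistent with the Preacher and MacCallum (2003) recommendation cited above, and does not consult total variance explained.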

Page 6, line 138-139: the participants of this study were high school students, so they cannot represent “youth”; also, there are fifty states in the U.S., so how were these 16 states sampled? This may need more clarification. Added, and we changed the title to high school students per Reviewer 1's comments. Thank you.

Page 8, line 189-190: was any item removed based on the EFA, and by what procedure? I did not see it. Items were removed in the EFA procedure as previously described in the Data Analysis section of the original submission: an item was considered to load onto a factor if its factor loading was at least 0.40; non-loading items and items loading on more than one factor were removed, and the EFA re-run. Thus, if an item loaded cleanly on one factor, it was not removed.
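The stated retention rule (keep an item only if it loads at least 0.40 on exactly one factor; drop non-loading and cross-loading items, then re-run the EFA on the survivors) can be sketched as below. The item names and loading values are made up for illustration.

```python
THRESHOLD = 0.40  # minimum absolute loading, as described in the Data Analysis section

def retained_items(loading_matrix):
    """loading_matrix maps item name -> tuple of factor loadings.
    Keep items whose |loading| >= THRESHOLD on exactly one factor."""
    keep = []
    for item, loads in loading_matrix.items():
        hits = sum(abs(v) >= THRESHOLD for v in loads)
        if hits == 1:          # loads cleanly on a single factor
            keep.append(item)
    return keep

example = {
    "item_a": (0.62, 0.08),   # clean loading -> kept
    "item_b": (0.21, 0.18),   # non-loading -> dropped
    "item_c": (0.45, 0.52),   # cross-loading -> dropped
}
print(retained_items(example))  # ['item_a']
```

In practice the EFA would then be re-fit on the retained items, repeating until every remaining item loads cleanly.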

Page 6, line 139-140: “Most of the sites were from rural communities and were from the western part of the United States.” Perhaps the geographical distribution of the samples could be elaborated in Table 1, for clarification. We added the geographical distribution of the largest contributing states to a revised Table 1 for the resubmission.

Page 6, line 140: “a baseline survey” was done prior to participation; however, there is no description of the baseline. Please elaborate. This was an error; we revised it to read: youth completed the Native STAND survey. Thank you.

Line 148-150: the study team added questions about many factors, but the manuscript did not show their results; if they are not reported, why mention them here? Please clarify. These are not reported in this manuscript because they have been published elsewhere and are part of a larger study about the Native STAND curriculum. This manuscript focuses only on strength-based measures. We deleted that sentence; readers may access other publications on Native STAND to explore those questions in more detail.

Table 1: the sample consisted of 1,140 AI/AN and 150 others. Regarding the Race/Ethnicity data, if 1140+150=1290 students were included in this study, this does not match the reported No. (87.8%) and No. (12.2%). Please explain. We revised Table 1 to include all 1,456 original participants who at least partially completed the Native STAND survey.

The Race/Ethnicity question is “check all that apply” so that participants could have checked multiple races and Hispanic. We report the frequency and percent that answered only American Indian or Alaska Native and all Other non-White in the revised Table 1.

Part of “measures”

The question-generation methods and scoring rules of the four scales were identical; I think there is no need to repeat the same description four times. Thank you; we added a sentence stating that these were the same for all scales and deleted the repeated descriptions from the others.

Line 182: where is “supplement 1”? I didn't see it. It is included as a non-reviewed document and includes all of the questions and possible responses. We are including it again for your information and recommend that you contact PLOS if you cannot see the supplement. Thank you.

The “Conclusion” of the study merely derives a “starting point”, and very little valuable information can be drawn from the current statement by fellow researchers. Revised and toned down; we also added recommendations for future surveys at the individual, family, and community levels, all of which will help fellow researchers.

Attachment

Submitted filename: Response to Reviewers Final.docx

Decision Letter 1

Richard Huan XU

20 Apr 2022

PONE-D-21-24780R1: Psychometric Evaluation of Protective Measures in Native STAND: A Multi-site Cross-Sectional Study of American Indian Alaska Native High School Students (PLOS ONE)

Dear Dr. Kelley,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 04 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Richard Huan XU

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors’ revisions and responses are appreciated. I have two minor suggestions below.

1. For the corrected item-total correlations, the authors could consider specifying in the text which items have the maximum and minimum correlation coefficients.

2. In the response to one of the previous comments on the self-esteem scale, the authors stated that “Previous research reports that the negative or reverse coded (RV) items were not as reliable as the positively coded items” in Rosenberg's self-esteem scale. Is this one of the reasons why the authors excluded three RV items in the current version of the scale? If yes, please consider adding this reason and the citation to the Methods section.

Reviewer #2: Thanks for inviting me again to review this interesting manuscript.

The authors made substantial modifications to the manuscript in response to the reviewers’ comments. I think the paper still needs a few minor modifications to fully meet the requirements for publication in PLOS ONE.

Q1

Line 233, in the “Discussion,” the authors are still using the expression “AI/AN youth,” which I think may overgeneralize the results of the study. The authors should limit their findings to the sample they investigated, consistent with the title.

Q2

In the “Conclusion,” the authors state too much about the value of this research for fellow researchers. I understand the necessity of these statements, but the conclusion only needs to tell readers, briefly, the most important findings from the “Results” and “Discussion.” The suggested research directions for fellow researchers can be moved to the “Discussion,” which I think is too thin.

In conclusion, I have no other comments on this manuscript. If the authors improve these details, I think it will be valuable research for readers.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Lingming Zhou

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 May 17;17(5):e0268510. doi: 10.1371/journal.pone.0268510.r004

Author response to Decision Letter 1


May 2, 2022

Dear Editor,

We are pleased to resubmit our manuscript for consideration in PLOS One. Our responses to reviewer comments are located in the table below.

Please feel free to contact me if you have additional questions.

Allyson Kelley, DrPH

kelleyallyson@gmail.com

Reviewer 1 Response

Reviewer #1: The authors’ revisions and responses are appreciated. I have two minor suggestions below.

1. For the corrected item-total correlations, the authors could consider specifying in the text which items have the maximum and minimum correlation coefficients.

We appreciate this request. This information is in the paper under Table 2 on pages 7-8, for Self-esteem: “Inter-item correlations ranged from 0.213 to 0.329 (average = 0.278) and corrected item-total correlations ranged from 0.361 to 0.628.” Corresponding ranges are reported for Culture, Social Support, and Community and Community Safety. This information now also appears as a column in Table 2. Please let us know if you have additional questions. Thank you.

2. In the response to one of the previous comments on the self-esteem scale, the authors stated that “Previous research reports that the negative or reverse coded (RV) items were not as reliable as the positively coded items” in Rosenberg's self-esteem scale. Is this one of the reasons why the authors excluded three RV items in the current version of the scale? If yes, please consider adding this reason and the citation to the Methods section.

We appreciate this comment and added a citation to the Methods section. The self-esteem scale we used included two RV-coded items instead of three because of the potential reliability issues with RV-coded items reported by previous researchers:

Marsh, H. W. (1996). Positive and negative global self-esteem: A substantively meaningful distinction or artifactors? Journal of Personality and Social Psychology, 70(4), 810.

The self-esteem scale was also modeled after the Native VOICES survey. We added appropriate citations in the methods section.

Reviewer 2

Q1

Line 233, in the “Discussion,” the authors are still using the expression “AI/AN youth,” which I think may overgeneralize the results of the study. The authors should limit their findings to the sample they investigated, consistent with the title.

We appreciate this feedback. We revised this sentence to be more consistent with the title and indicated that the results apply to the youth participating in the study. Thanks!

The revised title is, "Psychometric Evaluation of Protective Measures in Native STAND: A Multi-site Cross-Sectional Study of American Indian Alaska Native High School Students"

Q2

In the “Conclusion,” the authors state too much about the value of this research for fellow researchers. I understand the necessity of these statements, but the conclusion only needs to tell readers, briefly, the most important findings from the “Results” and “Discussion.” The suggested research directions for fellow researchers can be moved to the “Discussion,” which I think is too thin.

In conclusion, I have no other comments on this manuscript. If the authors improve these details, I think it will be valuable research for readers.

We appreciate this feedback and edited the conclusion to highlight the most important finding. We also moved the suggestions for future research to the Discussion section.

Thank you.

Attachment

Submitted filename: Response to Reviewers 5-2-22.docx

Decision Letter 2

Richard Huan XU

3 May 2022

Psychometric Evaluation of Protective Measures in Native STAND: A Multi-site Cross-Sectional Study of American Indian Alaska Native High School Students

PONE-D-21-24780R2

Dear Dr. Kelley,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Richard Huan XU

Academic Editor

PLOS ONE

Acceptance letter

Richard Huan XU

9 May 2022

PONE-D-21-24780R2

Psychometric Evaluation of Protective Measures in Native STAND: A Multi-site Cross-Sectional Study of American Indian Alaska Native High School Students

Dear Dr. Kelley:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Richard Huan XU

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Native STAND survey measures.

    (DOCX)

    Attachment

    Submitted filename: Response to Reviewers Final.docx

    Attachment

    Submitted filename: Response to Reviewers 5-2-22.docx

    Data Availability Statement

    Data cannot be shared publicly because American Indian youth are a vulnerable population and the study does not allow for data sharing. Data are available, for researchers who meet the criteria for access to confidential data, by contacting Dr. Craig Rushing and going through the appropriate IRBs and Ethics Committees. The Portland Area Indian Health Service IRB contact is Rena Macy, Co-Chair, Portland Area IHS IRB, Portland Area IHS, 1414 NW Northrup St, Suite 800, Portland, OR 97209; Phone: 503-414-5540. It should be noted that all shareable data are within the paper and its Supporting Information files. Data sharing is not appropriate for this small sample of AI/AN youth: the data could potentially be identified, and AI/AN youth are considered a vulnerable population due to the historical and unethical research practices conducted by universities and the US government on AI/AN populations. Readers who would like access to the data used in this study should submit a request to Dr. Stephanie Craig Rushing, the corresponding author, who would present the request to the tribe and IRBs involved in the Native STAND study.

