PLOS One
. 2022 Mar 11;17(3):e0264889. doi: 10.1371/journal.pone.0264889

An international validation of the Bolton Unistride Scale (BUSS) of tenacity

Chathurika Kannangara 1,*, Rosie Allen 1,*, Kevin D Hochard 2, Jerome Carson 1
Editor: Frantisek Sudzina3
PMCID: PMC8916625  PMID: 35275945

Abstract

Academic success at university is increasingly believed to be a combination of personal characteristics such as grit, resilience, strengths-use, self-control, mind-set and well-being. The authors have developed a short 12-item measure of tenacity, the Bolton Uni-Stride Scale (BUSS), which incorporates these elements. Previous work in the UK had established the reliability and validity of the BUSS. The present paper reports the findings of an international validation of the BUSS across 30 countries (n = 1043). Participants completed the BUSS alongside other recognised scales. Factor analysis revealed an almost identical two-factor solution to previous work, and the reliability and validity of the scale were supported in the international sample. The authors nevertheless recommend that the scale be used as a single score combining all 12 items. In light of this, the authors suggest that the BUSS will be a useful measure to incorporate in studies of academic attainment.

Introduction

Tenacity

There is now accumulating empirical evidence that academic success at university relies on a combination of personal characteristics. Academic success cannot be attributed to any one quality; it is shaped by characteristics such as grit, resilience, strengths-use, self-control, mind-set and well-being [1]. These complex, multidisciplinary characteristics work together to give university students the best opportunity for educational success, and ultimately it is their combination that determines how tenacious a student will be in their academic studies. Together, these characteristics promote tenacity and success at university and, arguably, in later life. ‘Tenacity’ can be defined as the combination of grit, resilience, strengths-use, self-control, psychological well-being and a growth mind-set that provides students with the capacity to thrive at university [1]. A tenacious student expresses passion and perseverance in pursuing long-term goals [2] and has the ability to overcome adversity and continue striving despite the challenges they may face [3]. A student who is particularly tenacious in their academic studies will use their personal strengths in different situations and work to improve on them [4,5]. They can regulate their study habits, control their impulses and manage their own actions and behaviours, such as switching their smartphone off to avoid distraction [6,7]. Adopting a growth mind-set, they also believe that their efforts can improve their abilities and perceive setbacks as a springboard to success [8]. Finally, tenacious students frequently experience positive affect and possess better mental health outcomes [9].

Worldwide university students

Research from different countries around the world has shown that these characteristics are universal and play a crucial role for university students internationally. For instance, empirical evidence has shown that grit is an essential personal resource among university students, with similar findings from various countries including Germany [10], Spain [11], South Africa [12], Australia [13], China [14], Finland [15] and the USA [16]. Likewise, resilience is considered a universal concept that is necessary for various academic and mental health outcomes in university students, such as academic performance [17,18], motivation and effort [19], academic achievement [20,21] and general well-being [22]. Again, these findings are shared across many countries, including the USA, Kenya, Spain, Iran and the Czech Republic, to name but a few. Clearly, these characteristics are universal and strongly influence the educational success and well-being of university students. Tenacity is therefore proposed to be a universal concept, and a measure applicable to university students across the world would be beneficial.

Present study

There is a need for a reliable and valid measure of these collective characteristics in a short and concise tool, to improve the quality of research in this area. The Bolton Uni-Stride Scale (BUSS) was specifically devised to capture students’ tenacity, and a series of UK-based studies found good support for its reliability and validity [1]. The BUSS was originally developed on British samples of university students and, to date, has not been validated with other populations. This paper reports an international validation study of the BUSS on a large sample of university students from across the world, allowing the development of an international tool for the measurement of tenacity in university students.

Materials and methods

A web-based survey was designed to facilitate data collection from university students on an international scale. Previous reliability and validity testing of the BUSS was carried out on a sample of university students from the United Kingdom. This research aimed to test whether the factor structure of the BUSS is supported in an international sample and to examine other aspects of the reliability and validity of the BUSS. The present study used the Prolific website to recruit participants, which enabled representation from a range of countries. Prolific also allowed the researchers to pre-screen participants against the study requirements [23], which was useful for recruiting participants who matched specific criteria. Although participants came from different countries around the world, Prolific’s selection criteria ensured that every participant’s primary language was English, so no adaptations of the BUSS were needed to account for language differences.

Participants

A total of 1043 participants were recruited (see Table 1 for a full description of the sample).

Table 1. Demographic characteristics of the participant sample.

Demographic Characteristic Number of Participants (N) Percentage of Sample (%)
Age 16–18 22 2.1
18–25 720 69.0
25–30 179 17.2
30+ 122 11.7
Education Level College 179 17.2
University–First Year 161 15.4
University–Second Year 151 14.5
University–Third Year 320 30.7
Postgraduate–Masters 197 18.9
PhD/Doctoral Studies 35 3.4
Country of Residence Australia 15 1.4
Austria 3 .3
Belgium 7 .7
Canada 67 6.4
Czech Republic 4 .4
Denmark 1 .1
Finland 2 .2
France 6 .6
Germany 49 4.7
Greece 41 3.9
Hungary 12 1.2
Iceland 1 .1
India 2 .2
Ireland 6 .6
Israel 2 .2
Italy 40 3.8
Malaysia 1 .1
Mexico 68 6.5
Netherlands 13 1.2
New Zealand 3 .3
Poland 102 9.8
Portugal 119 11.4
Slovenia 5 .5
Spain 29 2.8
Sweden 3 .3
Switzerland 2 .2
Turkey 1 .1
United Kingdom 38 3.6
United States of America 399 38.3
Other/Not Specified 2 .2

Measures

Bolton Uni-Stride Scale (BUSS)

The BUSS is a short and concise measure of tenacity that was developed to incorporate important characteristics such as grit, resilience, self-control and well-being [1]. This twelve-item scale measures Persistence (seven items) and Self-composure (five items). A sample item from Persistence is “I consider myself as very capable in handling personal challenges”. All Persistence items are positively keyed and scored on a Likert-type scale (1 = Strongly Disagree to 5 = Strongly Agree). A sample item from Self-composure is “I do things that feel good in the moment, but later regret”. All Self-composure items are negatively keyed and scored on a Likert-type scale (1 = Strongly Agree to 5 = Strongly Disagree). As recommended, the scale mixes positively and negatively phrased items to minimise desirability bias [24]; negatively phrased items were reverse coded prior to analysis so that all items were weighted equally. Table 2 demonstrates that all items of the BUSS correlate significantly with the total BUSS score; with the exception of item 8, all correlate at more than .20 but no more than .80. On inspection of Table 2, the correlations between the twelve BUSS items and the total BUSS score remain relatively consistent across independent samples from different studies, and this also holds for the present international sample of university students. Previous research supported the reliability and validity of the BUSS: internal consistency reliability was good (.74), three-week test-retest reliability was good (>.70), and discriminant and convergent validity were also good [1].

Table 2. Correlation between each item of BUSS and total BUSS score from several studies.
BUSS items Sample 1 (N = 1087) Sample 2 (N = 933) Sample 3 (N = 331) Sample 4 (N = 146) Current Sample (International; N = 1043)
1 .518 .578 .531 .610 .620
2 .540 .437 .493 .545 .481
3 .577 .647 .550 .551 .641
4 .512 .502 .490 .473 .530
5 .523 .439 .517 .637 .426
6 .623 .630 .618 .727 .663
7 .538 .560 .621 .713 .676
8 .242 .006 .259 .413 .197
9 .546 .503 .505 .642 .522
10 .555 .497 .516 .541 .544
11 .595 .612 .613 .595 .616
12 .563 .579 .520 .570 .598

Notes: Sample 1 was 1087 students from a university in the North West of England [25]. Sample 2 was 933 adolescents from the North West of England [26]. Sample 3 was 331 students from a university in the North West of England [25]. Sample 4 was 146 students from a university in the North West of England [27].
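For readers who want to reproduce BUSS totals, the scoring described above (reverse coding the five negatively keyed Self-composure items, which per Table 3 are items 2, 4, 5, 8 and 10, before summing all twelve ratings) can be sketched as follows. This is an illustrative sketch based on the text, not the authors' published scoring key.

```python
def score_buss(responses):
    """Total BUSS score for one respondent's 12 item ratings (each 1-5).

    The five Self-composure items (items 2, 4, 5, 8 and 10 in the
    paper's numbering; see Table 3) are negatively keyed and are
    reverse coded (1<->5, 2<->4) before summing. Illustrative only.
    """
    if len(responses) != 12:
        raise ValueError("BUSS has 12 items")
    negatively_keyed = {2, 4, 5, 8, 10}  # 1-based item numbers from Table 3
    total = 0
    for item_no, rating in enumerate(responses, start=1):
        # reverse code negatively keyed items on the 5-point scale
        total += (6 - rating) if item_no in negatively_keyed else rating
    return total
```

With this scoring, an all-neutral response set (all 3s) yields a mid-range total of 36, and totals range from 12 to 60.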

Self-control

The Self-Control Scale (SCS) is a ten-item measure of self-control [28]. Participants respond to ten statements, such as “I’m good at resisting temptation”, on a rating scale from “not at all like me” (1) to “very much like me” (10). Test-retest reliability was high at .89 and internal consistency was also high, with a Cronbach’s alpha estimate of .89 [28]. A similar internal consistency estimate was found in the present sample (Cronbach’s alpha = .83), indicating good internal consistency.

Future work self

Participants’ future work self was measured by the Future Work Self Scale [29]. This is a 5-item scale that uses a 5-point Likert response format from 1 (“Strongly Disagree”) to 5 (“Strongly Agree”). Items include “I am very clear about who and what I want to become in my future work”. This scale has shown good construct validity, predictive validity and internal consistency reliability [29]. The internal consistency estimate in the present data was good (Cronbach’s alpha = .93).

Resilience

The Connor-Davidson Resilience Scale (CD-RISC 10) is a ten-item resilience scale that is scored in the form of a rating scale–from (0) “not true at all” to (4) “true nearly all the time” [30]. For instance, statements included “I am not easily discouraged by failure”. This scale demonstrates good test-retest reliability (.90) over a two week period [31]. Reliability analysis shows good internal consistency [31], which was mirrored in the present study which found a Cronbach’s alpha estimate of .87.

Mental well-being

The mental well-being of participants was measured using the Warwick-Edinburgh Mental Wellbeing Scale (WEMWBS). This 14-item scale focuses on feelings and functioning related to mental well-being and is scored on a 5-point Likert scale from “None of the time” to “All of the time”. All 14 items are positively phrased; one example item states “I’ve been dealing with problems well”. One-week test-retest reliability was high in a sample of British university students [32]. The WEMWBS has also shown good convergent validity [32], construct validity [33], concurrent validity [32] and discriminant validity [32], and high internal consistency in a sample of UK university students [32]. The internal consistency estimate from the present study was good (Cronbach’s alpha = .91).

Grit

This study used the Short Grit Scale (Grit-S), developed by Duckworth and Quinn in 2009 [34]. The Grit-S is an 8-item measure of perseverance and passion for pursuing long-term goals. The scale includes positively and negatively phrased items, with items 1, 3, 5 and 6 reverse coded prior to analysis. For instance, one item reads “I finish whatever I begin”. The Grit-S is scored on a 5-point Likert scale with anchors of “Very much like me” and “Not like me at all”. There is strong evidence for the reliability and validity of this scale: it has high internal consistency, good test-retest reliability and good predictive validity [34]. A high internal consistency estimate was also found in the present sample (Cronbach’s alpha = .83).

Procedure

This study was hosted on Prolific, an online platform for recruiting study participants in which researchers pay individuals to complete their questionnaires [23]. First, participants were asked to read an information sheet that described the study and detailed what their involvement would consist of. Participants were made aware that their participation in the study was voluntary. If participants were willing to take part, they were asked to give their consent. Written consent was obtained by asking participants to select “yes” or “no” to the following statement: “If you would like to participate in the study please consent to take part. If you are not happy to continue with the study, you can withdraw at this point by closing the survey page.” They were then asked to provide basic demographic information such as age. Following this, they completed a series of questionnaires: the BUSS, SCS, FWS, CD-RISC, WEMWBS and Grit-S. Participants were then thanked for their participation. Ethical approval for the study was obtained from the Ethics Committee of the Psychology Department at the University of Bolton in line with British Psychological Society guidelines [35]. Prolific also allows researchers to check the quality of the responses and data. In this study, one additional item was included in the online survey as an attention check, asking participants: “It is important that you pay attention. Please select Strongly Agree”. Participants who failed the attention check were rejected. Prolific also records the time participants take to complete the survey; the average completion time was 8.89 minutes. Participants who completed the survey “too quickly”, that is, statistical outliers more than 3 standard deviations below the mean completion time, were also rejected.
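The speed-based exclusion rule described above can be sketched as a simple screening function; the 3-SD threshold comes from the text, while the completion times in the usage example are illustrative, not the study's data.

```python
import statistics

def flag_speeders(completion_times, sd_cutoff=3.0):
    """Return indices of respondents whose completion time falls more
    than `sd_cutoff` sample standard deviations below the mean, mirroring
    the exclusion rule described in the Procedure. Illustrative sketch."""
    mean = statistics.fmean(completion_times)
    sd = statistics.stdev(completion_times)
    lower_bound = mean - sd_cutoff * sd
    return [i for i, t in enumerate(completion_times) if t < lower_bound]

# Example with made-up times (seconds): fifty typical respondents and
# one implausibly fast one, which is the only response flagged.
times = [500] * 50 + [10]
print(flag_speeders(times))
```

In practice an attention-check failure would be applied as a separate filter before this time-based screen, as described in the Procedure.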

Results

In order to confirm the previously established internal structure of the BUSS and explore its applicability to an international university student sample, a further Confirmatory Factor Analysis (CFA) was conducted.

Confirmatory factor analysis

The 12 items of the Bolton Uni-Stride Scale (BUSS) were subjected to CFA using SPSS version 23 and AMOS version 26. A maximum likelihood method of factor extraction was deemed most suitable.

Table 3 illustrates the standardised factor loadings from the CFA. Findings demonstrate two latent variables with loadings of >.40 highlighted in bold [36]. As recommended, all of the items loaded strongly on each factor as factor loadings are above 0.4 [36] except for item 5 (.251) and 8 (.199) which loaded weakly onto factor 2 (self-composure).

Table 3. Maximum likelihood estimates of factor loadings for the BUSS for 1-factor, 2-factor and bifactor CFA.

Items of the BUSS 1-Factor 2-Factor Bifactor
Persistence Self-Composure Persistence Self-Composure General
Factor 1 (Persistence)
(7) using personal strengths is a regular habit 0.771 .785 0.399 0.882
(1) use my strengths in various situations 0.706 .716 0.441 0.819
(6) capable in handling personal challenges 0.694 .693 -0.071 0.706
(11) generally able to move forward in life 0.642 .643 0.004 0.656
(12) always looking for ways to improve talents and skills 0.639 .640 0.179 0.668
(3) persistent and hard working 0.630 .622 0.185 0.653
(9) find it easy to make decisions 0.441 .434 -0.233 0.416
Factor 2 (Self-Composure)
(4) I cannot stop my actions 0.362 .748 0.655 0.396
(2) do things that feel good in the moment but later regret 0.282 .636 0.638 0.344
(10) find it difficult to focus on one project for a long time 0.357 .451 0.299 0.354
(5) not comfortable trying new ways of doing things 0.250 .251 0.133 0.272
(8) I set goals, but after a while I decide on a new set of goals -0.013 .199 0.231 -0.035

Note: Loadings in bold indicate significant loading onto the factor.

While all indicator variables loaded significantly on their latent factors (p < .01), inspection of the Cronbach’s alpha estimates showed that removing item 8 led to a small improvement: the Cronbach’s alpha of the BUSS was .77 and McDonald’s omega was .79, which increased to .80 and .81 respectively with item 8 deleted. The corrected item-total correlation for item 8 was also low (.032). Combined, these results indicated that item 8 should be removed from the factor structure. Item 8 was therefore retained as a contributing item towards the total BUSS score, but was removed from the factor structure and from further factor analyses and tests of model fit. Item 5 also demonstrated a weak loading onto the self-composure factor (.251), below the recommended loading for each item. While there is a very slight increase in Cronbach’s alpha (from .799 to .803) and omega (from .814 to .818) if item 5 is removed, this does not appear considerable, even though it is common practice to remove any item whose removal improves internal consistency [37]. Because the corrected item-total correlation for item 5 indicates good discrimination (.272), and the 11 items of the BUSS demonstrate good reliability at above .70 [38], item 5 was retained.

Our 1-factor solution demonstrated sub-optimal model fit (χ2 = 657.997, CFI = .809, TLI = .767, RMSEA = .104, AIC = 32904.41). In our 2-factor solution, Persistence and Self-composure exhibited significant covariation (.166) and fit improved according to the cut-offs of Hu and Bentler (1995) [39] (AIC = 30528.80). The chi-square (χ2) was statistically significant (χ2 = 378.4, p < .001). The Root Mean Square Error of Approximation (RMSEA) was .074, indicating a reasonable error of approximation. Both the CFI (.918) and the TLI (.875) indicated an adequate fit [40]. Typically, a significant chi-square can indicate a lack of model fit [41]; however, χ2 is affected by sample size, and because CFA typically utilises large samples, a statistically significant chi-square is relatively common [41]. Models with an RMSEA of 0.10 or more have a poor fit, while a value of .08 or less indicates a reasonable model fit [42,43]. For both the Comparative Fit Index (CFI) and the Tucker-Lewis Index (TLI), values close to 1 indicate a very good fit [44,45]. The model is therefore an acceptable fit to the sample data by commonly accepted thresholds (χ2 = 378.4, df = 53, p < .01, CFI = .92, TLI = .88, RMSEA = .07). Thus, it can be concluded that the two latent factors (persistence and self-composure) are relatively strong reflections of the associated observed variables and that the two-factor model fits the data quite well [39]. Yet Persistence and Self-composure are hypothesised to be sub-scales of Tenacity. A bifactor CFA model, which can assess a co-existing general factor alongside specific factors [46], was therefore computed, with items loading on their respective orthogonal specific factors (Persistence and Self-Composure) and a general factor (Tenacity). This model displayed the best fit indices (χ2 = 229.30, CFI = .93, TLI = .89, RMSEA = .07, AIC = 30377.09) and supports a general factor of Tenacity.
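As a rough sanity check on the reported fit, the RMSEA point estimate can be recovered from the model chi-square. One commonly cited formula is shown below; software packages such as AMOS differ slightly in the denominator and estimation details, so the result here (about .077 for the 2-factor model) sits in the same region as, but does not exactly reproduce, the reported .074.

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate from a model chi-square, using the common
    formula sqrt(max(chi2 - df, 0) / (df * (n - 1))). Packages differ
    slightly in the denominator, so treat this as approximate."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Reported 2-factor model: chi2 = 378.4 on df = 53 with n = 1043.
print(round(rmsea(378.4, 53, 1043), 3))
```

The `max(..., 0)` guard reflects the convention that models whose chi-square falls below its degrees of freedom are assigned an RMSEA of zero.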

Reliability and validity

Internal consistency estimates of > .80 were sought [47]. Analysis revealed that the internal consistency of the BUSS is good, with a Cronbach’s coefficient alpha for the total BUSS score of .80 and a McDonald’s omega of .79. Alpha and omega reliability estimates for Persistence (.83 and .84 respectively) also demonstrate good internal consistency. As in the original psychometric testing of the BUSS, the Self-composure factor showed lower internal consistency reliability (α = .57, ω = .58). These values are similar to those reported in previous research [1].

Table 4 demonstrates that BUSS positively correlates with the other measures used in this study. Total BUSS and both factors positively correlate with self-control, future work self, resilience, mental well-being and grit. This demonstrates good convergent validity of the BUSS.

Table 4. Correlation between persistence, self-composure and other measures used.

Measure Taken 1 2 3 4 5 6 7 8
1. Persistence -
2.Self-Composure .326** -
3. Self-Control .512** .607** -
4. Future Work Self .570** .253** .315** -
5. Resilience .730** .340** .424** .472** -
6. Mental Well-being .642** .300** .384** .524** .642** -
7. Grit .606** .615** .642** .437** .500* .469** -
8. Total BUSS .884** .729** .661** .529** .690** .603** .733** -

** Pearson Correlation is significant at the 0.01 level (2-tailed).

A multiple linear regression was performed to predict tenacity (DV) from self-control, future work self, resilience, well-being and grit as predictor variables (see Table 5). Multicollinearity estimates were within the normal range and the homoscedasticity assumption was not violated. A significant regression equation was found (F(5,884) = 525.34, p < .001), with an R2 of .748, indicating that 74.8% of the variation in tenacity can be accounted for by self-control, future work self, resilience, well-being and grit. Participants’ predicted tenacity is equal to 7.015 + .259 (SELF-CONTROL) + .155 (FUTURE WORK SELF) + .282 (RESILIENCE) + .071 (WELL-BEING) + .371 (GRIT), where all variables were entered as total scores. Participants’ tenacity increased as self-control, future work self, resilience, well-being and grit increased. Therefore, self-control, future work self, resilience, well-being and grit are all significant predictors of tenacity (BUSS).

Table 5. Results from a multiple linear regression for predicting tenacity (BUSS).

B (unstandardized) SE B β (Standardised) t p 95% CI
Constant 7.015 .677 10.36 < .001 5.69; 8.34
Self-control .259 .020 .284 12.91 < .001 .220; .299
Future work self .155 .024 .129 6.32 < .001 .107; .203
Resilience .282 .022 .292 12.57 < .001 .238; .325
Well-being .071 .016 .101 4.31 < .001 .039; .103
Grit .371 .028 .312 13.20 < .001 .316; .426

Note: adjusted R2 = .747; SE B = standard error of the unstandardized beta; t = t statistic for the coefficient; CI = 95% confidence interval for B [Lower Bound; Upper Bound].
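The regression equation reported in the text and Table 5 can be written directly as a prediction function, with each predictor entered as a total scale score. This is purely an illustration of the reported unstandardized coefficients, not a scoring instrument.

```python
def predict_tenacity(self_control, future_work_self, resilience, well_being, grit):
    """Predicted BUSS total from the unstandardized regression equation
    reported in the Results (Table 5); predictors are total scale scores.
    Illustrative only."""
    return (7.015
            + 0.259 * self_control   # self-control (SCS total)
            + 0.155 * future_work_self  # future work self (FWS total)
            + 0.282 * resilience     # resilience (CD-RISC 10 total)
            + 0.071 * well_being     # mental well-being (WEMWBS total)
            + 0.371 * grit)          # grit (Grit-S total)
```

For example, a one-unit increase in self-control raises the predicted tenacity score by .259, holding the other predictors constant.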

Discussion

Main findings

The present study allowed for the examination of a new psychometric measure, the BUSS, which measures tenacity in university students at an international level. Analysis showed that the factor structure from previous research [1] was replicated, with all factor loadings statistically meaningful. Confirmatory factor analysis suggested that the two-factor model fits the data reasonably well; however, a bifactor model supports a general factor of Tenacity. This study utilised a large sample size to ensure factor stability [48,49]: ten participants per item is widely considered the recommendation for factor stability [50], which would require a minimum of 120 participants for a 12-item measure, whereas this study included over 1000 participants. A Cronbach’s coefficient alpha of .80 and a McDonald’s omega of .79 were found for the total BUSS score, demonstrating good internal consistency reliability. Analysis also found that the BUSS exhibited good convergent validity: total BUSS and both factors positively correlated with self-control, future work self, resilience, mental well-being and grit. Table 4 shows that some of these intercorrelations are particularly high, supporting our argument that these individual constructs are strongly related to tenacity and that combining them into one short assessment is highly relevant and useful. Specifically, total BUSS correlates highly with each of the independent variables (between .529 for future work self and .733 for grit), as does the Persistence factor (between .570 for future work self and .730 for resilience). The Self-composure factor, on the other hand, has somewhat weaker correlations with the other psychological constructs related to tenacity: its lowest correlation is with future work self (.253) and its highest with grit (.615). These findings are consistent with the factor structure and suggest that the self-composure factor is less central to tenacity. While it shows weaker factor loadings and convergent validity, the self-composure factor still offers a unique and significant contribution towards assessing tenacity.

Finally, a multiple linear regression revealed that self-control, future work self, resilience, well-being and grit were all significant predictors of tenacity (BUSS); indeed, nearly 75% of the variation in tenacity could be accounted for by these variables. Inspection of the unstandardized beta coefficients (see Table 5) shows that self-control, resilience and grit are the strongest of these predictors, which is consistent with the bivariate correlations in Table 4, where these constructs correlate most strongly with total BUSS. For instance, for every one-unit increase in self-control, the tenacity score increases by .259. Grit is arguably the strongest predictor, with the highest bivariate correlation (.733) and the largest unstandardized beta (.371), although all independent variables contribute significantly to the prediction of tenacity. Therefore, the BUSS is clearly a distinct construct comprised of several relevant and integral components that each contribute significantly and uniquely towards tenacity. Reliability analyses and scale correlations further supported the psychometric properties of the BUSS for international students, supporting cross-validation of the BUSS factor structure and allowing for generalisation. Based on the present study, the BUSS can be considered a reliable and valid measure of tenacity for university students internationally, particularly those within Europe, the USA and predominantly English-speaking countries.

Comparisons with previous research

Compared to the original development of the BUSS, which utilised a British sample of university students, this study recruited students internationally from over 25 countries. Despite the diversity of the sample, the psychometric properties of the scale, along with its reliability and validity, were supported. When comparing the international factor structure of the BUSS to the original factor structure [1], the same seven items loaded onto factor 1 (persistence) and the same five items loaded onto factor 2 (self-composure), indicating that the factor structure replicates in an international sample of university students. However, item 8 loaded weakly onto the self-composure factor (.199), its corrected item-total correlation was low (.032), and its removal led to a small improvement in Cronbach’s alpha and McDonald’s omega. Taking all of this into consideration, item 8 was removed from the factor analysis. While item 8 was not considered a valuable contributor to the self-composure factor, it was retained in the BUSS total score, as various indicators showed it was a valuable item that contributed towards the measurement of tenacity as a whole; removing items can compromise construct coverage for what is arguably only a small increase in internal reliability. The original factor structure was Persistence (7) + Self-composure (5), whereas the international factor structure is Persistence (7) + Self-composure (4), with item 8 contributing to the total score only.

As in previous research, CFA confirmed two correlated latent constructs of the BUSS, persistence and self-composure [1]. However, as supported by our bifactor CFA, the authors recommend modelling tenacity as a general factor, making the use of total scores appropriate. The authors considered the possibility that the two latent factors, persistence and self-composure, were an artefact of item wording [51–53]: the persistence factor is comprised of positively worded items and the self-composure factor of negatively worded items. Previous research has indicated that including both positively and negatively worded items in a questionnaire can produce artefacts that affect the number of factors resulting from factor analysis [53,54]. Moreover, several wording effects are associated with responses to negatively worded items, such as increased cognitive load caused by switching between response formats and the presence of socially desirable answers [55]. Nonetheless, the authors believe that the two specific factors reflect conceptually distinct sub-constructs, persistence and self-composure. We are most concerned with tenacity as a total measure, and the bifactor CFA indicated that the two factors combined into a general factor is more coherent, meaningful and predictive. Similarly, following the extraction of a two-factor structure model of grit, Duckworth et al. (2007) proposed the use of the total grit score alone [2]. Therefore, we recommend utilising the BUSS as a general-factor (unidimensional) measure of tenacity.

Limitations, implications and recommendations for future research

This study provides a generalised international tool, the BUSS, to measure tenacity in university students across the world. Nevertheless, further research is required to investigate cultural differences in tenacity and which characteristics contribute towards academic success and well-being in university students around the globe. As mentioned, these characteristics are thought to be universal and present in university students around the world; however, it remains unclear whether the construct of persistence is equivalent across different countries and cultures. Because some countries were underrepresented in this study, it was not possible to conduct tests of invariance across all of the countries in the sample, and future research should conduct a measurement invariance study across these countries to explore this further. The generalizability of the current study is also limited by the underrepresentation of some countries on Prolific: countries with less of a presence on the site contribute fewer participants than countries where its use is more popular [23]. In addition, it is unclear to what extent the students recruited via Prolific are representative of a general international student population. Together, these factors mean that the participating students may not be representative of the international student population. The authors welcome further investigation and international comparisons.

Our CFAs made use of commonly used cut-offs for fit indices [39]. These fit indices are derived from simulation studies and are appropriate under the conditions in which they were derived. As our models do not reflect those conditions, our use of arbitrary cut-offs may make precise assessment of model fit or mis-specification difficult. The use of dynamic fit indices for CFA [56] is a novel development that would improve our ability to discern good model fit. At present, however, dynamic fit indices for bifactor models cannot be obtained. To avoid confusion and misrepresentation, we have not included dynamic fit indices for the 1- and 2-factor models, as these would be judged by a different standard than the bifactor model. We recommend further validation efforts using dynamic fit indices, and remain tentative with regard to model fit whilst using the arbitrary fit-index cut-offs.
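For context, the conventional (static) fit indices discussed above are simple functions of the model and baseline chi-square statistics. The sketch below implements the standard Steiger–Lind RMSEA and Bentler CFI formulas; the input values are hypothetical, not results from this study, and the commonly cited cut-offs (e.g., CFI ≥ .95, RMSEA ≤ .06) are the fixed benchmarks that dynamic fit indices aim to replace.

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation (Steiger-Lind)."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index (Bentler, 1990), relative to the baseline model."""
    d_m = max(chi2_m - df_m, 0.0)              # model non-centrality
    d_b = max(chi2_b - df_b, d_m)              # baseline non-centrality
    return 1.0 - d_m / d_b

# Hypothetical values: model chi-square 150 on 50 df (N = 1043),
# baseline chi-square 3000 on 66 df.
print(round(rmsea(150, 50, 1043), 3))   # -> 0.044
print(round(cfi(150, 50, 3000, 66), 3)) # -> 0.966
```

Dynamic fit indices replace the fixed thresholds applied to these quantities with simulation-derived thresholds tailored to the model at hand, which is why a bifactor model cannot yet be judged on the same footing.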

Further, this study did not assess the criterion validity of the BUSS: it did not evaluate the extent to which tenacity (BUSS) is related to students' success, satisfaction with their academic performance or their intention to drop out. It can only be assumed that, because the BUSS comprises a multitude of concepts known to be strongly relevant for academic outcomes, the BUSS too will be a useful tool for predicting such outcomes. However, further analyses are needed to confirm the importance of the BUSS for educational success and academic outcomes.

Users of Prolific receive payment for participating in online research, and participants in this study received £1.60 for 10 minutes of their time, which equates to £9.60 per hour. Once considered an unsuitable method of recruitment, offering financial compensation for research participation is increasingly common practice [57] and is argued to be an ethically acceptable method of recruitment [58]. The self-report nature of this study also raises the possibility of social desirability bias, as with most quantitative studies [59].
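The quoted hourly rate follows from simple pro-rating of the per-session payment:

```python
# Pro-rate the per-session payment to an hourly rate.
payment_gbp = 1.60
session_minutes = 10
hourly_rate = payment_gbp * 60 / session_minutes
print(f"£{hourly_rate:.2f} per hour")  # -> £9.60 per hour
```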

Nevertheless, it can be argued that the BUSS provides a shortcut for assessing an array of highly relevant and important psychological constructs in one time-efficient and economical solution. The BUSS can help higher education institutions, academics and educators to better understand their student population. Knowledge of students' tenacity gives educators the opportunity to support their students and to guide at-risk students towards targeted positive psychology education programs. We also suggest that the BUSS be used to further investigate the role that tenacity plays in various academic, physical, psychological, social and life-long outcomes. Further research should explore the extent to which encouraging tenacity at university can influence future life prospects.

Supporting information

S1 File

(SAV)

Acknowledgments

The authors would like to acknowledge the support provided by Professor Patrick McGhee and Dr. Gill Waugh.

Data Availability

All relevant data are within the paper and its Supporting information files (An International Validation of BUSS Dataset.sav).

Funding Statement

CK was awarded a small grant of £2000 from the University of Bolton to pay for the BUSS survey on the Prolific website. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Kannangara C. S., Allen R. E., Carson J. F., Khan S. Z. N., Waugh G., & Kandadi K. R. (2020). Onwards and upwards: The development, piloting and validation of a new measure of academic tenacity: The Bolton Uni-Stride Scale (BUSS). PLoS ONE, 15(7): e0235157. doi: 10.1371/journal.pone.0235157 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Duckworth A. L., Peterson C., Matthews M. D., & Kelly D. R. (2007). Grit: perseverance and passion for long-term goals. Journal of Personality and Social Psychology,92(6), 1087–1101. doi: 10.1037/0022-3514.92.6.1087 [DOI] [PubMed] [Google Scholar]
  • 3.Martin A. J., & Marsh H. W. (2006). Academic resilience and its psychological and educational correlates: A construct validity approach. Psychology in the Schools, 43(3), 267–281. doi: 10.1002/pits.2014 [DOI] [Google Scholar]
  • 4.Wood A. M., Linley P. A., Maltby J., Kashdan T. B., & Hurling R. (2011). Using personal and psychological strengths leads to increases in well-being over time: A longitudinal study and the development of the strengths use questionnaire. Personality and Individual Differences, 50(1), 15–19. doi: 10.1016/j.paid.2010.08.004 [DOI] [Google Scholar]
  • 5.Anderson E. C. (2004). What is strengths-based education? A tentative answer by someone who strives to be a strengths-based educator. Unpublished manuscript. https://www.weber.edu/WSUImages/leadership/docs/sq/strengths-base-ed.pdf.
  • 6.Yukselturk E., & Bulut S. (2007). Predictors for student success in an online course. Educational Technology & Society, 10 (2), 71–83. [Google Scholar]
  • 7.Lepp A., Barkley J. E., & Karpinski A. C. (2015). The relationship between cell phone use and academic performance in a sample of U.S. College Students. SAGE Open, 5(1), 215824401557316. [Google Scholar]
  • 8.Hysenbegasi A., Hass S. L., & Rowland C. R. (2005). The impact of depression on the academic productivity of university students. Journal of Mental Health Policy Economics, 8(3), 145–151. . [PubMed] [Google Scholar]
  • 9.Pekrun R., Goetz T., Titz W., & Perry R. P. (2002). Academic emotions in students self-regulated learning and achievement: A program of qualitative and quantitative research. Educational Psychologist, 37(2), 91–105. doi: 10.1207/s15326985ep3702%5F4 [DOI] [Google Scholar]
  • 10.Schmidt F., Fleckenstein J., Retelsdorf J., Eskreis-Winkler L., & Möller J. (2017). Measuring Grit: A German validation and a domain-specific approach to grit. European Journal of Psychological Assessment. doi: 10.1027/1015-5759/a000407 [DOI] [Google Scholar]
  • 11.Arco-Tirado J. L., Fernández-Martín F. D., & Hoyle R. H. (2018). Development and validation of a Spanish version of the Grit-S Scale. Frontiers in Psychology, 9, 96. doi: 10.3389/fpsyg.2018.00096 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Mason H. D. (2018). Grit and academic performance among first-year university students: A brief report. Journal of Psychology in Africa, 28(1), 66–68. [Google Scholar]
  • 13.Hodge B., Wright B. & Bennett P. (2018). The role of grit in determining engagement and academic outcomes for University students. Research in Higher Education 59, 448–460. doi: 10.1007/s11162-017-9474-y [DOI] [Google Scholar]
  • 14.Lee W. (2017). Relationships among grit, academic performance, perceived academic failure, and stress in associate degree students. Journal of Adolescence, 60, 148–152. doi: 10.1016/j.adolescence.2017.08.006 [DOI] [PubMed] [Google Scholar]
  • 15.Tang X., Wang M., Guo J. et al. (2019) Building grit: The longitudinal pathways between mindset, commitment, grit, and academic outcomes. Journal of Youth and Adolescence 48, 850–863. doi: 10.1007/s10964-019-00998-0 [DOI] [PubMed] [Google Scholar]
  • 16.Alhadabi A., & Karpinski A. C. (2020). Grit, self-efficacy, achievement orientation goals, and academic performance in University students. International Journal of Adolescence and Youth, 25, 1, 519–535, doi: 10.1080/02673843.2019.1679202 [DOI] [Google Scholar]
  • 17.Ayala J. & Manzano G. (2018). Academic performance of first-year university students: the influence of resilience and engagement. Higher Education Research and Development, 37, 1321–1335. doi: 10.1080/07294360.2018.1502258 [DOI] [Google Scholar]
  • 18.Novotny S. & Křeménková L. (2016). The relationship between resilience and academic performance at youth placed at risk. Československá Psychologie, 60, 553–566. [Google Scholar]
  • 19.Wang M. C., & Gordon E. W. (2012). Educational resilience in inner-city America: Challenges and prospects. Routledge. [Google Scholar]
  • 20.Mwangi C.N., Ireri A.M., Mwaniki E.W. (2017). Correlates of academic resilience among secondary school students in Kiambu County, Kenya. Interdisciplinary Education and Psychology, 1(1):4. [Google Scholar]
  • 21.Zuill, Z. D. (2016). The relationship between resilience and academic success among Bermuda foster care adolescents. Walden Dissertations and Doctoral Studies, 2184. https://scholarworks.waldenu.edu/dissertations/2184.
  • 22.Turner M., Scott-Young C. M., & Holdsworth S. (2017). Promoting wellbeing at university: the role of resilience for students of the built environment. Construction Management and Economics, 35, 11–12. [Google Scholar]
  • 23.Palan S., & Schitter C. (2018). Prolific.ac: A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22–27. doi: 10.1016/j.jbef.2017.12.004 [DOI] [Google Scholar]
  • 24.Baumgartner H., & Steenkamp J. B. E. M. (2001). Response styles in marketing research: A cross-national investigation. Journal of Marketing Research, 38(2), 143–156. doi: 10.1509/jmkr.38.2.143.18840 [DOI] [Google Scholar]
  • 25.Kannangara C. S., Allen R. E., Waugh G., Nahar N., Khan S. Z. N., Rogerson S., et al. (2018). All that glitters is not grit: Three studies of grit in university students. Frontiers in psychology, 9, 1539. doi: 10.3389/fpsyg.2018.01539 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Platt I. A., Kannangara C., Tytherleigh M., & Carson J. (2020). The Hummingbird Project: A Positive Psychology Intervention for Secondary School Students. Frontiers in Psychology, 11, 2012. doi: 10.3389/fpsyg.2020.02012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Tetley, C. (2020). An independent test of the reliability and validity of the Bolton Uni-Stride Scale (BUSS). Unpublished undergraduate dissertation. Bolton: University of Bolton Psychology Department.
  • 28.Tangney J., Baumeister R., & Boone A. L. (2004). High self-control predicts good adjustment, less pathology, better grades, and interpersonal success. Journal of Personality, 72(2), 271–324. doi: 10.1111/j.0022-3506.2004.00263.x . [DOI] [PubMed] [Google Scholar]
  • 29.Strauss K., Griffin M. A., & Parker S. K. (2012). Future work selves: How hoped for identities motivate proactive career behaviors. Journal of Applied Psychology, 97(3), 580–589. doi: 10.1037/a0026423 [DOI] [PubMed] [Google Scholar]
  • 30.Connor K. M., & Davidson J. R. (2003). Development of a new resilience scale: The Connor-Davidson Resilience Scale (CD-RISC). Depression and Anxiety, 18(2), 76–82. doi: 10.1002/da.10113 . [DOI] [PubMed] [Google Scholar]
  • 31.Wang L., Shi Z., Zhang Y., & Zhang Z. (2010). Psychometric properties of the 10-item Connor-Davidson Resilience Scale in Chinese earthquake victims. Psychiatry and Clinical Neurosciences, 64(5), 499–504. doi: 10.1111/j.1440-1819.2010.02130.x [DOI] [PubMed] [Google Scholar]
  • 32.Tennant R., Hiller L., Fishwick R., Platt S., Joseph S., Weich S., et al. (2007). The Warwick-Edinburgh Mental Well-being Scale (WEMWBS): development and validation. Health and Quality of Life Outcomes, 5, 63. doi: 10.1186/1477-7525-5-63 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Taggart, F., Stewart-Brown, S., & Parkinson, J. (2015). Warwick-Edinburgh Mental Well-being Scale (WEMWBS) User Guide, Version 2. Edinburgh: NHS Health Scotland.
  • 34.Duckworth A. L., & Quinn P. D. (2009). Development and validation of the Short Grit Scale (GRIT–S). Journal of Personality Assessment, 91(2), 166–174. doi: 10.1080/00223890802634290 [DOI] [PubMed] [Google Scholar]
  • 35.BPS. (2018). Code of ethics and conduct. Leicester: British Psychological Society.
  • 36.Stevens J. P. (2002). Applied multivariate statistics for the social sciences (4th ed.) Hillsdale, NJ: Erlbaum. [Google Scholar]
  • 37.Gliem, J. A., & Gliem, R. R. (2003). Calculating, interpreting, and reporting Cronbach’s alpha reliability coefficient for Likert-type scales. Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education.
  • 38.Nunnally J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill. [Google Scholar]
  • 39.Hu L.-T., & Bentler P. M. (1995). Evaluating model fit. In Hoyle R. H. (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 76–99). London: Sage Publications, Inc. [Google Scholar]
  • 40.Bastos J. L., Celeste R. K., Faerstein E., & Barros A. J. (2010). Racial discrimination and health: a systematic review of scales with a focus on their psychometric properties. Social Science & Medicine (1982), 70(7), 1091–1099. doi: 10.1016/j.socscimed.2009.12.020 [DOI] [PubMed] [Google Scholar]
  • 41.MacCallum R. C., Browne M. W., & Sugawara H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130–149. [Google Scholar]
  • 42.Browne M. W., & Cudeck R. (1993). Alternative ways of assessing model fit. In Bollen K. A. and Long J. S. (Eds.). Testing structural equation models (pp. 136–162). Newbury Park, CA: Sage. [Google Scholar]
  • 43.Kenny D. A., Kaniskan B., & McCoach D. B. (2015). The performance of RMSEA in models with small degrees of freedom. Sociological Methods & Research, 44(3), 486–507. [Google Scholar]
  • 44.Bentler P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107(2), 238–246. doi: 10.1037/0033-2909.107.2.238 [DOI] [PubMed] [Google Scholar]
  • 45.Bentler P. M., & Bonett D. G. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88(3), 588–606. [Google Scholar]
  • 46.Morin A. J. S., Marsh H. W., & Nagengast B. (2013). Exploratory structural equation modeling. In Hancock G. R. & Mueller R. O. (Eds.), Structural equation modeling: A second course (pp. 395–436). IAP Information Age Publishing. [Google Scholar]
  • 47.Henson R. K. (2001). Understanding internal consistency reliability estimates: A conceptual primer on coefficient alpha. Measurement and Evaluation in Counseling and Development, 34(3), 177–189. [Google Scholar]
  • 48.DeVellis R. F. (2003). Scale development: theory and applications (2nd ed.). Newbury Park: Sage Publications. [Google Scholar]
  • 49.Hair J. F. Junior, Black W. C., Babin N. J., Anderson R. E., & Tatham R. L. (2009). Análise multivariada de dados (6th ed.). São Paulo: Bookman. [Google Scholar]
  • 50.Sveinbjornsdottir S., & Thorsteinsson E. B. (2008). Adolescent coping scales: a critical psychometric review. Scandinavian Journal of Psychology, 49(6), 533–548. doi: 10.1111/j.1467-9450.2008.00669.x [DOI] [PubMed] [Google Scholar]
  • 51.Harvey R. J., Billings R. S., & Nilan K. J. (1985). Confirmatory factor analysis of the Job Diagnostic Survey: good news and bad news. Journal of Applied Psychology, 70, 461–8. doi: 10.1037/0021-9010.70.3.461 [DOI] [Google Scholar]
  • 52.Smith N., & Stults D. M. (1985). Factors defined by negatively keyed items: the results of careless respondents? Applied Psychological Measurement, 9, 367–73. doi: 10.1177/014662168500900405 [DOI] [Google Scholar]
  • 53.Hankins M. (2008). The factor structure of the twelve-item General Health Questionnaire (GHQ-12): the result of negative phrasing? Clinical Practice and Epidemiology in Mental Health, 4. doi: 10.1186/1745-0179-4-10 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Marsh H. W. (1996). Positive and negative global self-esteem: A substantively meaningful distinction or artifactors? Journal of Personality and Social Psychology, 70, 810–819. doi: 10.1037//0022-3514.70.4.810 [DOI] [PubMed] [Google Scholar]
  • 55.Vazsonyi A. T., Ksinan A. J., Jiskrova G. K., Mikuska J., Javakhishvili J., & Cui G. (2019). To grit or not to grit, that is the question! Journal of Research in Personality, 78, 215–226. [Google Scholar]
  • 56.McNeish D., & Wolf M. G. (2021). Dynamic fit index cutoffs for confirmatory factor analysis models. Psychological Methods. Advance online publication. doi: 10.1037/met0000425 [DOI] [PubMed] [Google Scholar]
  • 57.Largent E. A., & Fernandez Lynch H. (2017). Paying research participants: Regulatory uncertainty, conceptual confusion, and a path forward. Yale Journal of Health Policy, Law, and Ethics, 17(1), 61–141. [PMC free article] [PubMed] [Google Scholar]
  • 58.Elliot C., & Abadie R. (2008). Exploiting a research underclass in phase 1 clinical trials. The New England Journal of Medicine, 358(22), 2316–2317. doi: 10.1056/NEJMp0801872 [DOI] [PubMed] [Google Scholar]
  • 59.Mahudin N. D. M., Cox T., & Griffiths A. (2012). Measuring rail passenger crowding: Scale development and psychometric properties. Transportation research part F: traffic psychology and behaviour, 15(1), 38–51. [Google Scholar]

Decision Letter 0

Frantisek Sudzina

23 Nov 2021

PONE-D-21-30746 An International Validation of the Bolton Unistride Scale (BUSS) of Academic Tenacity. PLOS ONE

Dear Dr. Allen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 07 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Frantisek Sudzina

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We noticed you have some minor occurrence of overlapping text with the following previous publication(s), which needs to be addressed:

- https://journals.plos.org/plosone/article/file?id=10.1371%2Fjournal.pone.0235157&type=printable

The text that needs to be addressed involves the results, specifically one of the tables.

In your revision ensure you cite all your sources (including your own works), and quote or rephrase any duplicated text outside the methods section. Further consideration is dependent on these concerns being addressed.

3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

4. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

5. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is an important paper as it provides evidence that the BUSS may be applied in international settings; the BUSS being a novel measure of personal characteristics associated with academic tenacity. I enjoyed reading the paper; it is well written and presented in clear and logical fashion; entirely in keeping with the reporting of high quality psychometric research. I have no criticism of the statistical approach. I am of the opinion that this paper represents a significant contribution to the assessment of student characteristics and provides a very sound basis for further research.

Reviewer #2: I was having a hard time reviewing this paper and coming to a conclusion: on the one hand, there are several aspects that I liked in the paper. I think that the overall idea of the BUSS is a good and important one and that the collected sample is big and the confirmatory approach is recommendable. However, on the other hand, I see some major issues in the analyses (some actually deal-breaking) and the reasoning of what the BUSS can do and why it should be used instead of other scales. In this review, I strongly focus on the methodological aspects and the problems with the analysis. This is not because the other parts are already completely fine (other reviewers may raise important points there). Instead, I think that depending on the changes in the analyses and results substantial rewriting may be necessary for the introduction and discussion as well. If there are any points unclear in the following review (see attachment), please do not hesitate to contact me directly (I sign my reviews openly as I think that scientific communication is important, and the ability to contact a reviewer when something is not clear outweighs the benefits of ‘blind’ review).

Tom Scherndl (thomas.scherndl@plus.ac.at)

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Thomas Scherndl

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: Review2021-11-16_PlosOne_BUSS.docx


Author response to Decision Letter 0


17 Feb 2022

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

*Thank you, these have been amended as per your requirements.

2. We noticed you have some minor occurrence of overlapping text with the following previous publication(s), which needs to be addressed:

- https://journals.plos.org/plosone/article/file?id=10.1371%2Fjournal.pone.0235157&type=printable

The text that needs to be addressed involves the results, specifically one of the tables.

In your revision ensure you cite all your sources (including your own works), and quote or rephrase any duplicated text outside the methods section. Further consideration is dependent on these concerns being addressed.

*Unfortunately, I can’t see where in the results the text or tables presented overlaps with the previous publication?

3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

*Apologies for this, we have uploaded the anonymised data file as supporting information with the submission.

4. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

*Please see above comment. We would greatly appreciate you amending our data availability statement to reflect these changes.

5. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

*References have been corrected. Kannangara et al. (2018) was kept in the references, as it is now cited in Table 2 to show where Samples 1 and 3 came from. Wang & Gordon (2012) was changed to the correct reference.

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is an important paper as it provides evidence that the BUSS may be applied in international settings; the BUSS being a novel measure of personal characteristics associated with academic tenacity. I enjoyed reading the paper; it is well written and presented in clear and logical fashion; entirely in keeping with the reporting of high quality psychometric research. I have no criticism of the statistical approach. I am of the opinion that this paper represents a significant contribution to the assessment of student characteristics and provides a very sound basis for further research.

*Thank you very much for your comments on our manuscript. We are delighted that you support our research and enjoyed reading our manuscript. It is greatly appreciated.

Reviewer #2: I was having a hard time reviewing this paper and coming to a conclusion: on the one hand, there are several aspects that I liked in the paper. I think that the overall idea of the BUSS is a good and important one and that the collected sample is big and the confirmatory approach is recommendable. However, on the other hand, I see some major issues in the analyses (some actually deal-breaking) and the reasoning of what the BUSS can do and why it should be used instead of other scales. In this review, I strongly focus on the methodological aspects and the problems with the analysis. This is not because the other parts are already completely fine (other reviewers may raise important points there). Instead, I think that depending on the changes in the analyses and results substantial rewriting may be necessary for the introduction and discussion as well. If there are any points unclear in the following review (see attachment), please do not hesitate to contact me directly (I sign my reviews openly as I think that scientific communication is important, and the ability to contact a reviewer when something is not clear outweighs the benefits of ‘blind’ review).

Tom Scherndl (thomas.scherndl@plus.ac.at)

*We would like to thank you very much for your detailed and incredibly useful comments on our manuscript. We really appreciate the quality of your review and truly believe that the paper is better after making your recommended revisions.

________________________________________

2021-11-17

Review for ‘An International Validation of the Bolton Unistride Scale (BUSS) of Academic Tenacity’

General

• The authors named the first factor of the scale ‘tenacity’ but also seem to use this as the name for the total BUSS score and the overall construct: this was confusing for me – a simple fix would be to be more precise in the wording?

*We have changed the name of the “tenacity” factor to “persistence” and ensured this is consistent throughout the manuscript. We have therefore also decided to change “academic tenacity” to “tenacity”.

• Table 2: Samples used: it was not always completely clear to me as a casual reader where those samples in Table 2 came from and what the relevance was… are any of those samples published? Are they newly collected for this study? Additionally, the sample size for the current sample stated in the table does not match with the sample size given in the text (969 vs. 1043). Table 1 suggests that the latter number is correct? Please give more detail why this is relevant for the given paper. Small typo: “On inspection of table 3…” should probably refer to table 2….

*We have included references in the Table 2 notes for more information about where the samples came from.

*The sample size is 1043, as double-checked in Table 2 and the data file. This has been amended in Table 2 (p.7). Thank you for spotting this mistake.

*This has been changed to “on inspection of table 2…” (p.6).

• I do not agree with the authors that the minimal factor loading that is ‘recommended’ will/should be .162. It is simply maths when factor loadings will be significant depending on specific sample sizes. You do not need a citation for that. I would also argue that items with such low loadings do not contribute much to a given factor as less than 3% of the given variance in the item is explained by the latent factor. I suggest either completely cutting this part or being more specific about why such a (weak) recommendation is given. The statistical significance of factor loadings is a really weak argument to give when conducting factor analyses. This is also a problem for some items of factor 2 later: as these seem to be consistently loading quite low, you need a good argument why you still keep them as they seem to measure something else…

*We have removed the recommendation that factor loadings should exceed .162 to be retained. We have also discussed the corrected item-total correlations, Cronbach’s alpha and McDonald’s omega after removal of items, specifically for items 8 and 5 (the two that fall below .4). We have also mentioned how removing these items would impact construct coverage and have therefore decided to retain them.
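The item-retention check described in this response (alpha after removal of each item) can be sketched in a few lines. This is an illustrative implementation under simulated data, not the authors' SPSS analysis, and none of the numbers below come from the BUSS data set.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def alpha_if_item_deleted(items):
    """Alpha after dropping each item in turn, as used to judge
    whether low-loading items (e.g. items 5 and 8) should be kept."""
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]

# Simulated responses to a 12-item scale (common factor + noise);
# purely illustrative, not BUSS data.
rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
items = 0.6 * factor + rng.normal(size=(300, 12))

full_alpha = cronbach_alpha(items)
dropped = alpha_if_item_deleted(items)
```

If dropping an item raises alpha noticeably, that item is a candidate for removal; here the authors weigh that statistical gain against the loss of construct coverage.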

Method

• Prolific is a convenient way to sample big samples. However, I miss a critical reflection on the limitations of the resulting sample. Do you think that participants at Prolific are representative of a more general (international) student sample? I am quite skeptical. Typical students in my country are *NOT* users of Prolific at all… perhaps it is different in the US/UK, but you should add some caveat about this potential problem (or give some citations that all is well – I am not an expert with Prolific samples). When looking at your sample you have a very strong emphasis on UK/US/Canada students. Did you check whether they were *actually* from that country based on their IP address or is it only self-report? There was quite a problem with people faking their location on MTURK some years ago (even going so far as to fake their IP address using VPN). How can you be sure that participants were actual students in the mentioned country (and not someone else from a developing country faking being a student to get the money) or even bots? If you have anything to check for quality this may be fine. If not: add something in the limitations that online samples have been prone to such faking.

*Students self-reported their country of study, and IP addresses were not checked to verify their answers. This has been added to the study limitations, along with your point about the representativeness of the Prolific sample.

• You should also add that participants were incentivized (how much?). At least I infer monetary incentivization based on the grant mentioned in the beginning and the fact that everything on Prolific is incentivized. How long did it take participants to take the survey (on average, min/max)? what was the hourly wage: was that a wage that would have encouraged careful responding or just made participants get through quickly?

*This is an important point that should have been mentioned in the study limitations. Thank you for pointing this out. The offer of financial compensation for participating, and how much participants received, has been included (p.20).

• Did you use any quality control measures to avoid careless responding or just ‘clicking through’? This is a lesser problem on Prolific than on MTurk but still, I would strongly suggest checking whether participants were finishing very quickly or giving always the same answers. I do not suppose that you have any attention checks, bogus items or similar measures to make sure that participants read the instructions and questions? Perhaps some reaction/reading times? If yes, you should screen for overly fast responders or participants that failed these attention checks/bogus items.

*We did in fact include one attention-check item and assessed the time taken to complete the survey, as we do with all our online research via platforms such as Prolific. We have included this information in the procedure section of the Methods, as we agree that it is important.

• Description of scales/instruments: this was well done and a nice overview.

*Thank you.

Results

• I was a little bit confused about what was done concerning the factor analysis. The authors say that they did a confirmatory factor analysis and report corresponding fit indices. This is fine and nice (I am strongly in favor of a confirmatory approach): I would however suggest reporting all the fit indices first and then discussing them all in one instead of their piece-meal approach discussing each fit index separately (MINOR). It seems that all point to a similar conclusion thus a more succinct interpretation is ok (but see next bullet point). However, they also mention using PROMAX as a rotation method, as well as Bartlett’s test, etc. All of these things are typically used / more common in EFA contexts which is however not reported (no factor loadings etc.). I have not used AMOS in years, but I find it strange that those indices would be shown in an AMOS output or AMOS uses a promax method in a confirmatory (maximum likelihood?) factor analysis. I have never seen these indices in confirmatory factor analyses (reported in papers as well as in output obtained by R (lavaan) or MPLUS). Is it perhaps a leftover from an exploratory factor analysis or some strange hybrid version that was later dropped from the paper? I would suggest that the paper will be fine if you focus completely on the CFA and drop everything related to the EFA. The EFA does not add anything if there is already enough data and previous papers showing a given factorial structure. So drop the EFA – keep the CFA.

*The paragraph that presents the model fit has been reorganized to report all the fit indices first and then discuss them together. We agree that this now reads better and more clearly; thank you.

*We believe that the focus is on the CFA only; the output from the CFA performed in AMOS v26 provides standardized factor loadings, regression weights, intercepts, correlations, covariances, effects, pairwise parameter comparisons and model fit. We are happy to provide the AMOS output file if this is something you would like to see.

*We can confirm we used a maximum likelihood method of factor extraction and have inserted this information in the results section for clarity (p.11).

*Text relating to the EFA has been removed to keep the focus on the CFA, as recommended.

• However, as you later-on argue that you prefer to use a composite score instead of the two factors, I would recommend extending the CFA approach: 1) include a CFA model with a single factor and 2) a bifactor model with the 2 specific factors and the overall /general ‘g-factor’ that you are arguing for later in the paper. Compare these 3 models (1 factor, 2 factors, bifactor model) using fit indices and change of CHI² etc. Without having seen your data and only based on the correlations and fit indices: I would expect that the bifactor model will probably have the best fit: this is then a great argument for using the composite score and arguing for a general factor.

*As suspected by the reviewer, the bifactor model displayed the best fit indices (χ2 = 229.302, TLI = .885, RMSEA = .072, AIC = 30377.094), compared to the 1-factor (χ2 = 657.997, TLI = .767, RMSEA = .104, AIC = 32904.413) and 2-factor solutions (χ2 = 657.997, TLI = .862, RMSEA = .079, AIC = 30528.795). We have incorporated these comparisons in the text to support the argument for a general factor and have added the 1-factor and bifactor coefficients to Table 3.
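The nested-model comparison behind these statistics can be illustrated with a chi-square difference (likelihood-ratio) test. The degrees of freedom below are assumed values for a 12-item scale (df = 54 for the 1-factor model, df = 42 for the bifactor model), not figures taken from the manuscript.

```python
from scipy.stats import chi2

def chi_square_difference(chisq_restricted, df_restricted,
                          chisq_general, df_general):
    """Likelihood-ratio (chi-square difference) test for nested CFA models.

    The more general model (here, the bifactor model) frees more
    parameters and therefore has the smaller degrees of freedom.
    """
    delta_chisq = chisq_restricted - chisq_general
    delta_df = df_restricted - df_general
    p = chi2.sf(delta_chisq, delta_df)  # upper-tail probability
    return delta_chisq, delta_df, p

# Chi-square values from the response; the dfs (54 and 42) are
# illustrative assumptions for 12 items, not reported values.
d_chisq, d_df, p = chi_square_difference(657.997, 54, 229.302, 42)
print(f"Δχ² = {d_chisq:.3f}, Δdf = {d_df}, p = {p:.3g}")
```

A significant Δχ² indicates the more general model fits better than its loss of parsimony alone would predict, which complements the AIC comparison reported above.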

• Additionally, you may also decide to use the better-suited omega as an estimate for internal consistency instead of Cronbach alpha which has received so much criticism in the last years (rightly so). If you have any problems doing this analysis in SPSS/AMOS (I am not sure if this is possible there – I have not used them for quite some time), you may opt to use JAMOVI as an open-source program with a graphical user interface. For SPSS users, JAMOVI is very easy to use and understand and it includes omega as a reliability estimate out of the box. Obviously R (package psych) would be even better, but I completely understand if you do not want to make the dive into R for this paper.

*We have added omega as an estimate of internal consistency for the BUSS.
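For readers unfamiliar with it, McDonald's omega can be computed directly from standardized factor loadings, assuming a single factor and uncorrelated residuals: ω = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). The loadings below are invented for illustration; the actual BUSS estimates are reported in the manuscript.

```python
def mcdonalds_omega(loadings):
    """McDonald's omega (total) from standardized factor loadings,
    assuming uncorrelated residuals: ω = (Σλ)² / ((Σλ)² + Σ(1 − λ²))."""
    sum_sq = sum(loadings) ** 2
    residual_var = sum(1 - lam ** 2 for lam in loadings)
    return sum_sq / (sum_sq + residual_var)

# Illustrative loadings for a 12-item scale; these are NOT the BUSS
# estimates.
loadings = [0.65, 0.58, 0.71, 0.49, 0.38, 0.62,
            0.55, 0.35, 0.60, 0.67, 0.52, 0.57]
omega = mcdonalds_omega(loadings)
```

Unlike Cronbach's alpha, omega does not assume equal loadings (tau-equivalence), which is the main reason it is now preferred as a reliability estimate.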

• Fit criteria/cut-offs: Personally, I do not completely agree that the fit of the CFA is fine based on typical cut-offs. However, I am aware that many different cut-offs have been proposed in different papers which may be cited depending on the results. All of those cut-offs are more or less arbitrary (as the specific parameters from those simulations are probably never completely met in a given case) and thus debatable. I would like to raise another possibility to solve this very unsatisfying and taxing debate of whether the proposed criteria are ‘correct’: Other authors have argued that using generic cutoffs are problematic and suggested a new approach(McNeish & Wolf, 2021 in Psychological Methods. https://psyarxiv.com/v8yru for the open-access pre-print). Instead of using cutoffs derived from simulations that may or may not mirror the actual model evaluated in a given study (e.g. using Hu&Bentler or similar recommendations), authors should simply evaluate their obtained fit against the results of a simulation that is based on the model they actually tested. The above-mentioned authors give an easy-to-use app (at least that was my impression when I looked at it after the publication of the article – I have not yet used this approach myself). I would strongly encourage the authors to compare their model fit against those cutoffs derived from simulation. I think that this procedure will (and should) be the recommended approach in the future and that your paper will benefit from it. This will also sidestep the valid criticism that some of your fit indices still hint at (substantial?) misspecification of the model. Maybe this will be less of a problem if you follow my suggestion above to include a bifactor model. It may be that the fit of this other model is so good, that the fit indices are showing good model fit beyond any doubt (but do not count on it).

*We have made use of the Shiny app developed by McNeish & Wolf (2021). We obtained dynamic fit indices for our 1-factor solution: SRMR = .043 to .058, RMSEA = .051 to .09, CFI = .949 to .864 at the 95% level (assuming item residual correlations of .3 are not included for one third to all items in the model). When compared to our actual 1-factor fit indices (SRMR = .072, RMSEA = .104, CFI = .809), we can see that the 1-factor solution is misspecified relative to the simulation-derived cut-offs, suggesting that modifications to the model are required. Only an SRMR = .024 at the 90% level was provided by the app for our 2-factor model; our actual SRMR value for the 2-factor model was .061, indicating some degree of misspecification when considering dynamic fit indices. We wished to assess our bifactor model using dynamic fit indices, but the Shiny app currently states that bifactor models are “not available yet” (https://www.dynamicfit.app/connect/). On the whole, we agree that a bifactor model is likely most suitable, but must remain cautious, as we are presently only able to assess its fit using the arbitrary Hu & Bentler cut-offs.

As we are unable to obtain dynamic cut-offs for all models, we are concerned that using dynamic cut-offs for some models and not others could lead to misinterpretation or confusion. We have therefore retained the Hu & Bentler cut-offs in our results section, but added to the discussion that dynamic estimates should be considered and are worth further research once dynamic fit indices can be computed for bifactor models.

• Based on the reported fit, I would suspect that there are substantial residual error correlations hidden in the residuals – you may want to check the modification indices. Be very careful when applying any of those: however, sometimes they do make a lot of sense and can be argued on theoretical grounds as well. If you do a bifactor model as mentioned above, I would expect that those modification indices are becoming less relevant – overall suggesting better model fit.

*Modification indices were checked for the 1-factor and 2-factor models, but the AIC of the bifactor model remained superior. We have left these out so as not to confuse readers, as there is no clear theoretical underpinning for including additional residual error correlations in our models.

• Results concerning discriminant and convergent validity: I have two concerns in this area:

1) the correlations between BUSS and all the other scales are consistently high: if I may raise the question: is it too high? It suggests substantial overlap with GRIT, resilience, etc.– what did you expect? Is having a scale that is highly correlated with other (sometimes more established, always more specific) scales really better? Why should we prefer the BUSS over the other scales when measuring relevant psychological constructs?

*We have included some discussion on convergent validity. We agree that this is an important point, but we do not feel the correlations are ‘too high’. They support our argument that tenacity comprises several highly relevant and related constructs, each of which offers a unique and significant contribution to the BUSS. The BUSS allows the assessment of all of these important and relevant constructs, which are expected to be highly related, in a timely and reliable way.

2) Discriminant validity: I have a strong objection here: the authors *do not* check discriminant validity and use the term in a faulty way. Discriminant validity is the degree to which scores on a test *do not* correlate with scores from other tests that *are not* designed to assess the same construct. The artificial dichotomization and comparing the extreme groups does not add anything at all to the question of whether BUSS has discriminant validity. The BUSS and other scales are highly correlated: it is obvious that if you cut the BUSS into extreme groups, these will differ in the other scales – it is just rephrasing the correlation in another analysis. As there is also no real theoretical argument why the mentioned cuts are used (I suppose the first and last quartile are contrasted – but why are these cutoffs even interesting/meaningful?), I would suggest dropping this group comparison completely and sticking with the correlations.

*Analyses that used the cut-off points of high and low BUSS scores have been removed from the results, and mention of these analyses has been removed from the rest of the manuscript.

• The report of the multiple regression has to be more complete: It is very uncommon in psychological assessment papers to just report a regression formula. I am not even sure what was reported in the formula: standardized betas or unstandardized b? I would strongly recommend including a complete table comprising standardized as well as unstandardized parameters and respective confidence intervals to better judge the worth of the individual predictors. It would be of interest to compare the bivariate correlations to the respective betas in the discussion: which predictors actually have a unique contribution and what is partialled out in the regression? This would strengthen the point of the paper that the BUSS is on the one hand something distinct, but also a combination of relevant other scales and constructs. In other words: can the BUSS be used as a shortcut to assess a multitude of relevant psychological aspects in a very economic way? I think this may be a viable argument to make for the authors: however, I did not find that as the main point of the reasoning. Perhaps I am wrong – I am open to other arguments as well, although right now, I do not get any good points on why this scale can/should be used in contrast to the other scales.

*A table (Table 5) has been included to show the standardized and unstandardized betas, along with confidence intervals (p.17).

*Comparisons have been made between the bivariate correlations (Table 4) and the beta coefficients (Table 5), which do in fact help to make a case for the relevance and convenience of the BUSS. Many thanks for this helpful suggestion (p.19 and p.22).
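The link between the unstandardized b and standardized β coefficients being compared here is β = b · s_x / s_y. A minimal sketch with simulated data (the values below are illustrative, not the BUSS regression):

```python
import numpy as np

def ols_betas(X, y):
    """Unstandardized OLS coefficients (intercept + slopes) and the
    standardized slopes (β = b · s_x / s_y)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    b = np.linalg.lstsq(X1, y, rcond=None)[0]
    beta = b[1:] * X.std(axis=0, ddof=1) / y.std(ddof=1)
    return b, beta

# Simulated predictors and outcome standing in for the scale scores;
# nothing here reproduces the actual BUSS analysis.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=200)
b, beta = ols_betas(X, y)
```

Standardized betas put predictors measured on different scales onto a common metric, which is what makes the comparison with bivariate correlations meaningful: a large correlation paired with a small β signals variance partialled out by the other predictors.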

Discussion/Limitations

• Summary of the results is mostly ok concerning the factorial structure, although overreaching for the validity claims (see above regarding discriminant validity, but also what is ‘good’ convergent validity: too high an overlap can actually be a problem if the other scales should measure distinct other constructs).

o I do not concur with the idea that n = 120 is enough to ensure factor stability of a scale of 10 items and think that this rule of thumb is ridiculous. However, I will not debate that much. I still think it is wrong and you just could cut this part. I suppose it was inserted in a previous round of reviews, so I will not force you to delete something that was probably forced on you.

o ‘can be utilised in populations across the world’ – you should insert ‘student’ here to be more precise… I think this is overreaching … you have a sample comprised of mostly English-speaking countries with some additional European countries.

o ‘strong relationship between academic tenacity and the two comprising factors was observed’: not sure what you mean. If you mean that the overall BUSS score is correlated with the two factors…. Well, this is because the total score includes the two factors… if this is what you meant: please cut it, as this is obvious and not an argument for the BUSS at all.

*We have included more discussion surrounding the convergent validity of the BUSS (p.18-19).

*This has been changed to read “…the BUSS can be considered a reliable and valid measure of tenacity for university students internationally, particularly those within predominantly English-speaking countries.” (p.20)

*We agree, and this sentence has been removed (p.18).

• I strongly missed the severe limitation that there was no test at all concerning criterion validity: the authors did not show that BUSS was related to students’ success (e.g. GPA), satisfaction with their academic performance, or their intention to drop out. The BUSS was only correlated with other scales which are more or less theoretically distinct from the BUSS and each other. However, this is not sufficient and persuasive for including and using the BUSS in a study that aims at helping students in their academic life (as suggested). I am aware that you probably do not have the necessary data to investigate criterion validity in this sample. I strongly encourage you to include something like that in upcoming studies and use a more cautious tone in this paper about validity.

*You are right in saying we did not have access to these data, and we completely agree that this is a major limitation of the study (p.22), considering we are arguing that the BUSS will help assess academic outcomes. We have pointed this out in the limitations and are currently in the midst of a study that investigates this.

• I do not see where the sweeping recommendations regarding the BUSS are coming from. I think there is still quite some work to do to establish the BUSS psychometrically (especially criterion validity, measurement invariance when comparing different countries, etc.).

*We have revised the recommendations section of the manuscript so that it appears less bold or presumptuous (p.23).

General points

• I would strongly encourage the authors to adopt open science practices by making their code as well as their data public to increase transparency and reproducibility. Uploading the data and syntax to e.g. OSF is easy and will help readers to better judge the robustness and impact of the given results.

*The data file has now been submitted as supporting information alongside the revised manuscript.

Attachment

Submitted filename: Response to Reviewers 21.01.22.docx

Decision Letter 1

Frantisek Sudzina

21 Feb 2022

An International Validation of the Bolton Unistride Scale (BUSS) of Tenacity.

PONE-D-21-30746R1

Dear Dr. Allen,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Frantisek Sudzina

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Frantisek Sudzina

4 Mar 2022

PONE-D-21-30746R1

An International Validation of the Bolton Unistride Scale (BUSS) of Tenacity.

Dear Dr. Allen:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Frantisek Sudzina

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File

    (SAV)

    Attachment

    Submitted filename: rebuttal letter PLOS One 30.09.21.docx

    Attachment

    Submitted filename: Review2021-11-16_PlosOne_BUSS.docx

    Attachment

    Submitted filename: Response to Reviewers 21.01.22.docx

    Data Availability Statement

    All relevant data are within the paper and its Supporting information files (An International Validation of BUSS Dataset.sav).

