Frontiers in Psychology. 2018 Jan 9;8:2329. doi: 10.3389/fpsyg.2017.02329

Commentary: Mental Toughness and Individual Differences in Learning, Educational and Work Performance, Psychological Well-being, and Personality: A Systematic Review

Daniel F. Gucciardi

PMCID: PMC5767217 | PMID: 29375443

The concept of mental toughness (MT) has garnered substantial interest over the past two decades. Scholars have published several narrative reviews of this literature (e.g., Connaughton et al., 2008; Crust, 2008; Gucciardi, 2017), yet in ~30 years of research there has been no attempt to review this body of work systematically to understand how MT is associated with hypothesized correlates. The systematic review by Lin et al. (2017) was timely for the field of MT (see also Cowden, 2017). However, in this commentary, I explain two reasons why the conclusions drawn from this systematic review may be misleading or premature.

Methodological quality of primary studies matters

Well-executed systematic reviews offer many advantages over non-systematic evaluations for summarizing, synthesizing, and integrating findings across studies (e.g., clear and accountable methods; see Gough et al., 2017). However, the potential value of even the most well-executed systematic review can be undermined by the methodological quality of the primary studies (Moher et al., 2015; Shamseer et al., 2015; Oliveras et al., 2017). An assessment of methodological quality is necessary both for determining which primary studies should be included in a systematic review and for making inferences regarding the reliability of those studies retained for analysis and integration. For example, differences in the methodological quality of primary studies might explain why research on the same topic produces different or conflicting degrees of evidence. The absence of a formal assessment of methodological quality is a major limitation of the systematic review conducted by Lin et al. (2017), because any bias in primary studies transfers to the synthesized evidence unless those biases and sources of variation are handled as part of the analysis and interpretation of the cumulative findings. Statistical power is an important consideration in this regard, yet sample size justification is often overlooked in primary research on MT, including my own past work (e.g., Gucciardi and Jones, 2012). For example, should a study with 16 participants (for which 90% power at p < 0.05 corresponds to a minimum detectable effect of r = 0.67; Cowden et al., 2014) be judged to be of the same quality as one with 351 participants (minimum detectable effect of r = 0.171 at the same power and alpha; Cowden et al., 2016)? Although the answer depends on the smallest effect size of interest (Lakens and Evers, 2014), it is important to bear in mind that underpowered studies increase the likelihood that significant findings are false positives (Button et al., 2013) and that effect size estimates tend to be unstable when samples are small (Schönbrodt and Perugini, 2013). Assessing methodological quality across a heterogeneous set of primary studies is challenging, particularly for observational research (Vandenbroucke et al., 2014; von Elm et al., 2014), because there is no consensus regarding definitions, assessments, and integration with the synthesis of evidence (Oliveras et al., 2017).
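To make the power comparison above concrete, the following is a minimal sketch in Python (not part of the original commentary) that approximates two-tailed power for a test of H0: ρ = 0 via Fisher's z transformation. This normal approximation is slightly conservative in very small samples, so its result for n = 16 lands a little below the exact 90% figure cited above; exact methods (e.g., as implemented in G*Power) can differ.

```python
import numpy as np
from scipy.stats import norm

def power_for_r(r, n, alpha=0.05):
    """Approximate two-tailed power for testing H0: rho = 0,
    using Fisher's z transformation (normal approximation)."""
    ncp = np.arctanh(r) * np.sqrt(n - 3)   # noncentrality on the Fisher-z scale
    z_crit = norm.ppf(1 - alpha / 2)       # two-tailed critical value
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# The two studies contrasted in the text (values are approximate)
print(f"n = 16,  r = 0.67 : power ~ {power_for_r(0.67, 16):.2f}")    # ~0.83
print(f"n = 351, r = 0.171: power ~ {power_for_r(0.171, 351):.2f}")  # ~0.90
```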

Construct validity evidence of psychometric tools matters

Given the predominance of self-reported MT among the primary studies in Lin et al.'s (2017) systematic review, a key consideration for the assessment of methodological quality is the degree of construct validity evidence for each tool. Construct validation refers to the ongoing, bidirectional interplay between theory development and measurement, encompassing assessments of the theoretical domain and its operationalization (substantive phase), the empirical fidelity of the measurement approach (structural phase), and the meaning of test scores in relation to key correlates or group differentiation (external phase) (Loevinger, 1957). Assuming sufficient evidence exists for the substantive foundations of a measure (e.g., a precise definition, content validity evidence), ongoing tests of the internal structure of a scale are a necessary prerequisite for examinations of external relations, because the number of latent factors or the pattern of loadings may differ across samples, populations, and settings (Flora and Flake, 2017); a simple illustration of such a check appears in the sketch below.
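As a purely illustrative sketch (not from the original commentary), the snippet below uses simulated item responses and scikit-learn's FactorAnalysis to show one crude way of checking whether a loading pattern replicates across two samples. All data and names here are hypothetical; in practice, this question would be addressed with confirmatory factor analysis on real item responses.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

def simulate_items(n, loading=0.7, noise=0.5):
    """Simulate 12 items with simple structure: 3 items per factor,
    loosely mirroring a hypothetical 4-factor (e.g., 4Cs-style) model."""
    true_w = np.zeros((12, 4))
    for j in range(4):
        true_w[3 * j:3 * j + 3, j] = loading  # each item loads on one factor
    factors = rng.normal(size=(n, 4))
    return factors @ true_w.T + noise * rng.normal(size=(n, 12))

# Two hypothetical samples (e.g., athletes vs. students)
for name, data in [("sample A", simulate_items(300)),
                   ("sample B", simulate_items(300))]:
    fa = FactorAnalysis(n_components=4, rotation="varimax").fit(data)
    # Dominant factor per item; factor order is arbitrary across fits,
    # so the check is whether items cluster into the same groups,
    # not whether the factor indices match exactly.
    dominant = np.argmax(np.abs(fa.components_.T), axis=1)
    print(name, "dominant factor per item:", dominant)
```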

Lin et al.'s (2017) findings showed that the MTQ48 and its shortened version (the MTQ18) (Clough et al., 2002) are the most widely used measures for the assessment of MT and its associations with external variables, including cognition and educational, work, and military performance (11 of 16 studies), psychological well-being (17 of 23 studies), personality and other psychological traits (9 of 16 studies), and genetics (4 of 5 studies). Yet psychometric analyses of the MTQ48 by the original author and his colleagues (Clough et al., 2012; Gerber et al., 2013; Perry et al., 2013, 2015) as well as by independent researchers (Gucciardi et al., 2012, 2013; Birch et al., 2017; Vaughan et al., 2017) cast doubt on the operationalization of the 4Cs model of MT via the MTQ48, in terms of both global (i.e., model-data congruence) and local (i.e., pattern of factor loadings) misfit. As such, any conclusions regarding the associations between MT and key correlates are tenuous because of uncertainty regarding the meaning of the underlying latent factor.

Conclusion

I commend Lin et al. (2017) for their efforts in bringing together disparate literatures in the first systematic review of the quantitative literature on the associations between MT and key correlates. However, I urge readers not to interpret their findings uncritically, for two key reasons: the exclusion of an assessment of the methodological quality of primary studies, and the reliance in the literature on a measure of MT with questionable conceptual underpinnings and limited construct validity evidence. Both reduce our confidence in the veracity of the available findings and, therefore, in the conclusions and implications of the systematic review for theory and practice. An assessment of the methodological quality of the primary studies included within the Lin et al. review (e.g., using the Mixed Methods Appraisal Tool; Pluye and Hong, 2014), together with a re-analysis and re-interpretation of the findings, represents an important next step for the science of MT. Indeed, a critical analysis of the methodological quality of primary work can, on its own, represent a major contribution to a field of research because it might highlight deficiencies and/or strengths in the evidence (Moher et al., 2015; Shamseer et al., 2015).

Author contributions

The author confirms being the sole contributor of this work and approved it for publication.

Conflict of interest statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

Funding. DG is supported by a Curtin Research Fellowship.

References

1. Birch P. D. J., Crampton S., Greenlees I., Lowry R., Coffee P. (2017). The mental toughness questionnaire-48: a re-examination of factorial validity. Int. J. Sport Psychol. 48, 331–355. 10.7352/IJSP.2017.48.331
2. Button K. S., Ioannidis J. P., Mokrysz C., Nosek B. A., Flint J., Robinson E. S., et al. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 14, 365–376. 10.1038/nrn3475
3. Clough P., Earle K., Sewell D. (2002). Mental toughness: the concept and its measurement, in Solutions in Sport Psychology, ed Cockerill I. (London: Thomson), 32–45.
4. Clough P., Earle K., Perry J. L., Crust L. (2012). Comment on "progressing measurement in mental toughness: a case example of the mental toughness questionnaire 48" by Gucciardi, Hanton, and Mallett (2012). Sport Exerc. Perform. 1, 283–287. 10.1037/a0029771
5. Connaughton D., Hanton S., Jones G., Wadey R. (2008). Mental toughness research: key issues in this area. Int. J. Sport Psychol. 39, 192–204.
6. Cowden R. G., Anshel M. H., Fuller D. K. (2014). Comparing athletes' and their coaches' perceptions of athletes' mental toughness among elite tennis players. J. Sport Behav. 37, 221–235.
7. Cowden R. G., Meyer-Weitz A., Oppong Asante K. (2016). Mental toughness in competitive tennis: relationships with resilience and stress. Front. Psychol. 7:320. 10.3389/fpsyg.2016.00320
8. Cowden R. G. (2017). Mental toughness and success in sport: a review and prospect. Open Sports Sci. J. 10, 1–14. 10.2174/1875399X01710010001
9. Crust L. (2008). A review and conceptual re-examination of mental toughness: implications for future researchers. Pers. Indiv. Differ. 45, 576–583. 10.1016/j.paid.2008.07.005
10. Flora D. B., Flake J. K. (2017). The purpose and practice of exploratory and confirmatory factor analysis in psychological research: directions for scale development and validation. Can. J. Behav. Sci. 49, 78–88. 10.1037/cbs0000069
11. Gerber M., Kalak N., Lemola S., Clough P. J., Perry J. L., Pühse U., et al. (2013). Are adolescents with high mental toughness levels more resilient against stress? Stress Health 29, 164–171. 10.1002/smi.2447
12. Gough D., Oliver S., Thomas J. (2017). An Introduction to Systematic Reviews, 2nd Edn. Los Angeles, CA: Sage.
13. Gucciardi D. F., Jones M. I. (2012). Beyond optimal performance: mental toughness profiles and developmental success in adolescent cricketers. J. Sport Exerc. Psychol. 34, 16–36. 10.1123/jsep.34.1.16
14. Gucciardi D. F., Hanton S., Mallett C. J. (2012). Progressing measurement in mental toughness: a case example of the Mental Toughness Questionnaire 48. Sport Exerc. Perform. 1, 194–214. 10.1037/a0027190
15. Gucciardi D. F., Hanton S., Mallett C. J. (2013). Progressing measurement in mental toughness: a response to Clough, Earle, Perry, and Crust. Sport Exerc. Perform. 2, 157–172. 10.1037/spy0000002
16. Gucciardi D. F. (2017). Mental toughness: progress and prospects. Curr. Opin. Psychol. 16, 17–23. 10.1016/j.copsyc.2017.03.010
17. Lakens D., Evers E. (2014). Sailing from the seas of chaos into the corridor of stability: practical recommendations to increase the informational value of studies. Perspect. Psychol. Sci. 9, 278–292. 10.1177/1745691614528520
18. Lin Y., Mutz J., Clough P. J., Papageorgiou K. A. (2017). Mental toughness and individual differences in learning, educational and work performance, psychological well-being, and personality: a systematic review. Front. Psychol. 8:1345. 10.3389/fpsyg.2017.01345
19. Loevinger J. (1957). Objective tests as instruments of psychological theory. Psychol. Rep. 3, 635–694. 10.2466/pr0.1957.3.3.635
20. Moher D., Shamseer L., Clarke M., Ghersi D., Liberati A., Petticrew M., et al. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst. Rev. 4:1. 10.1186/2046-4053-4-1
21. Oliveras I., Losilla J.-M., Vives J. (2017). Methodological quality is underrated in systematic reviews and meta-analyses in health psychology. J. Clin. Epidemiol. 86, 59–70. 10.1016/j.jclinepi.2017.05.002
22. Perry J. L., Clough P. J., Crust L., Earle K., Nicholls A. R. (2013). Factorial validity of the Mental Toughness Questionnaire-48. Pers. Indiv. Differ. 54, 587–592. 10.1016/j.paid.2012.11.020
23. Perry J. L., Nicholls A. R., Clough P. J., Crust L. (2015). Assessing model fit: caveats and recommendations for confirmatory factor analysis and exploratory structural equation modelling. Meas. Phys. Educ. Exerc. Sci. 19, 12–21. 10.1080/1091367X.2014.952370
24. Pluye P., Hong Q. N. (2014). Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews. Annu. Rev. Public Health 35, 29–45. 10.1146/annurev-publhealth-032013-182440
25. Schönbrodt F. D., Perugini M. (2013). At what sample size do correlations stabilise? J. Res. Pers. 47, 609–612. 10.1016/j.jrp.2013.05.009
26. Shamseer L., Moher D., Clarke M., Ghersi D., Liberati A., Petticrew M., et al. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ 350:g7647. 10.1136/bmj.g7647
27. Vandenbroucke J. P., von Elm E., Altman D. G., Gøtzsche P. C., Mulrow C. D., Pocock S. J., et al. (2014). The strengthening the reporting of observational studies in epidemiology (STROBE) statement: explanation and elaboration. Int. J. Surg. 12, 1500–1524. 10.1016/j.ijsu.2014.07.014
28. Vaughan R., Hanna D., Breslin G. (2017). Psychometric properties of the Mental Toughness Questionnaire 48 (MTQ48) in elite, amateur and non-athletes. Sport Exerc. Perform. 10.1037/spy0000114
29. von Elm E., Altman D. G., Egger M., Pocock S. J., Gøtzsche P. C., Vandenbroucke J. P. (2014). The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Int. J. Surg. 12, 1495–1499. 10.1016/j.ijsu.2014.07.013
