Abstract
This study aims to provide a transferable methodology in the context of sport performance modelling, with a special focus on the generalisation of models. Data were collected from seven elite short-track speed skaters over a three-month training period. In order to account for training load accumulation over sessions, cumulative responses to training were modelled by impulse, serial and bi-exponential response functions. The variable dose-response (DR) model was compared to elastic net (ENET), principal component regression (PCR) and random forest (RF) models, using cross-validation within a time-series framework. ENET, PCR and RF models were fitted either individually or on the whole group of athletes. The root mean square error (RMSE) criterion was used to assess model performance. ENET and PCR models provided a significantly greater generalisation ability than the DR model. Only the group-based ENET and PCR models were significantly more accurate in prediction than the DR model. In conclusion, ENET achieved the greatest generalisation and predictive accuracy. Thus, building and evaluating models within a generalisation-enhancing procedure is a prerequisite for any predictive modelling.
Subject terms: Computational models, Machine learning, Systems biology, Computer science, Predictive markers
Introduction
The relationship between training load and performance in sports has been studied for decades. A key point of performance optimisation is the training prescription delivered by coaches, physical trainers or the athletes themselves. Such programming involves both various modalities of exercise (i.e. the type of training with respect to the physical quality required to perform) and an adjusted training load. Training load is usually dissociated into (i) an external load, defined by the work completed by the athlete independently of their internal characteristics1, and (ii) an internal load, corresponding to the psycho-physiological stresses imposed on the athlete in response to the external load2.
Models of training load responses emerged with the impulse response model promoted by Banister et al.3 to describe human adaptations to training loads. Afterwards, a simplified version of the original model built on a two-way antagonistic first-order transfer function (fitness and fatigue components, the so-called Fitness–Fatigue model) attracted considerable interest for describing the training process4–8. However, several limitations regarding model stability, parameter interpretability, ill-conditioning and predictive accuracy were reported9,10. Such models are considered time-varying linear models according to their component structure11 and therefore may require a sufficient number of observations (i.e. performances) to correctly estimate relationships between training load and performance9,12. To overcome some of these limits, refinements of the former impulse response model were proposed, using a recursive algorithm to estimate parameters according to each model input (i.e. the training load)11 and introducing variations in the fatigue response to a single training bout13. Further adaptations of the Fitness–Fatigue model were also developed with the aim of improving both goodness of fit and prediction accuracy14,15. Nonetheless, impulse response models condense the underlying physiological processes triggered by exercise into a small number of entities for predicting training effects in both endurance (running, cycling, skiing and swimming)6,11,16–21 and more complex (hammer throw, gymnastics and judo)8,22,23 activities. This simplistic approach might prevent models from capturing the appropriate relationship between training and performance, and ultimately impair the accuracy of predictions24. Moreover, with the exception of the one from Matabuena et al.15, these models assume that the training effect is maximal by the end of the training session. This assumption is reasonable only for the negative component of the model (i.e. "Fatigue"), whose maximal value occurs immediately after the session. Regarding the positive effects induced by training (i.e. "Fitness"), such a response is questionable since physiological adaptations continue after the end of the exercise session. For instance, skeletal muscle adaptations to training, described by increases in muscle mass, fibre shortening velocity and modifications of myosin ATPase activity, are known to be progressive (i.e. short- to long-term after-effects) rather than instantaneous25–27. Consequently, serial and bi-exponential functions were proposed to counteract these limitations and better describe training adaptations through exponential growth and decay functions, according to physiological responses in rats28.
A more statistical approach was used to investigate the effects of training load on performance, using principal component analysis and linear mixed models on different time frames12. Such models infer parameters from all available data (i.e. combining subjects instead of fitting one model per subject) but allow parameters to vary according to the heterogeneity between athletes. The model being multivariate, the multi-faceted nature of performance can be preserved by including psychological, nutritional and technical information as predictors12,16,18. However, the authors did not consider the cumulative facet of daily training loads, for which exponential growth and decay cumulative functions such as those proposed by Candau et al.17 may be suitable for performance modelling.
Alternatives from the field of computer science were also used to clarify the training load - performance relationship with a predictive aim. Most notably, machine learning approaches usually focus on the generalisation of models (i.e. how accurately a model is able to predict outcome values for previously unseen data). Various approaches tend to maximise this criterion. For instance, one can perform cross-validation (CV) procedures, in which data are separated into training sets for parameter estimation and testing sets for prediction29. Such a procedure fosters the determination of optimal models, relative to the family of models considered and to their ability to generalise. At the same time, CV procedures allow diagnosis of under- and over-fitting of the model. Underfitting commonly describes an inflexible model unable to capture noteworthy regularities in a set of exemplary observations30. In contrast, overfitting represents an over-trained model, which tends to memorise each particular observation, leading to high error rates when predicting on unknown data31. Since the aforementioned studies aimed to describe the training load - performance relationship by estimating model parameters and testing the model on a single data set, the generalisation of these models cannot be ensured. This challenges their usefulness in a predictive application. On the other hand, modelling methodologies using CV procedures are the standard for prediction, rather than mere description. To our knowledge, only a few recent studies modelled performances with Fitness–Fatigue models using a CV procedure10,32,33, and one separated data into two equal training and testing data sets34. Ludwig et al.10 reported that optimising all parameters, including the offset term, makes the model prone to overfitting. Consequently, interpretations drawn from predictions as well as from model parameters may be incorrect.
The physiological adaptations induced by exercise being complex, some authors investigated the relationship between training and performance using Artificial Neural Networks (ANN), non-linear machine learning models35,36. Despite the low prediction errors reported (e.g. a 0.05-second error over a 200-m swimming performance35), methodological concerns, chiefly the small sample size and the "black-box" nature of ANN, question their use in sport performance modelling9,37. Computer science offers plenty of machine learning models, although athletic performance prediction is often reduced to ANN. For labelled athletic performances, powerful supervised learning algorithms could alternatively be considered for athletic performance modelling issues, through either a regression or a classification formulation of the problem. To cite a few, non-linear approaches such as Random Forest (RF) models account for non-linear relationships between a target and a large set of predictors38 when making predictions. In a different way, linear models such as regularised linear regressions39,40 have also proved their efficiency in high-dimensionality and multicollinearity contexts. On this basis, both could be profitable for sport performance modelling purposes.
To date, no model family (i.e. impulse response and physiologically based, statistical, or machine learning models) has emerged as preferable for athletic performance prediction from a data set, mainly due to a lack of evidence and confidence in training effect modelling and performance prediction accuracy. In addition, because generalisation ability is not systematically appraised, practical and physiological interpretations drawn from some models may be incorrect and should at least be taken with caution.
In order to elucidate the relationships between training loads and athletic performance in a predictive application, we hypothesised that, following a model selection, regularisation and dimension reduction methods would lead to a greater model generalisation capability than former impulse response models.
Aiming to prescribe optimal training programming, sport practitioners need to understand the physiological effects induced by each training session and its after-effects on athletic performance. Hence, this study aimed to provide a robust and transferable methodology relying on model generalisation in a context of sport performance modelling. We collected data from elite short-track speed skaters of the French national team. To date, only a few studies have investigated relationships between training and performance in this sport41–43. Using linear and non-linear modelling approaches, Knobbe et al.42 provided an interesting methodology based on aggregation methods for delivering key and actionable features of training components. The authors investigated individual patterns that represent adaptations to training and that might provide insightful information for coaches involved in training programming tasks. On another note, Méline et al.43 examined the dose-response relationship between training and performance through simulations of overloading and a few tapering strategies. The dose-response model from Busso13 appeared to be a valuable model for evaluating taper strategies and their potential effects on skating performance. However, a contribution based principally on the model generalisation principle seems of interest, reinforcing the knowledge of athletic performance modelling in elite sports.
After having constructed an appropriate data set, we considered the variable dose-response model (DR)13 as a baseline regression framework and compared it to three models: a principal component regression (PCR), an elastic net (ENET) regularised regression and a RF regression model. These models allow us:
To present and discuss the benefit of regularisation and dimension reduction methods with regard to the generalisation concept.
To model athletic performances using models robust to high dimensionality and multicollinearity, and to investigate the key factors of short-track speed skating performance.
Materials and methods
Participants
Seven national elite short-track speed skaters (mean age 22.7 ± 3.4 years; 3 males, body mass 71.4 ± 9.4 kg, and 4 females, body mass 55.9 ± 3.9 kg) voluntarily participated in the study. Each athlete had competed at the 2018 Olympic Winter Games in PyeongChang, South Korea, or was preparing for the Olympic Games in Beijing, China. The whole team was trained by the same coach, responsible for training programming and data collection. Mean weekly training volume was 16.6 ± 2.5 hours. Data were collected over a three-month training period without any competition, interrupted by a two-week break and beginning one month after resuming training for a new season. Participants were fully informed about data collection and written consent was obtained from them. The study was performed in agreement with the standards set by the Declaration of Helsinki (2013) involving human subjects. The protocol was reviewed and approved by the local research ethics committee (EuroMov, University of Montpellier, France). The present retrospective study relied on the collected data without causing any changes in the training programming of the athletes.
Data set
Dependent variable: performance
Each week, participants performed standing-start time trials over 1.5 laps after a standardised warm-up. At the finish line, a timing-gate system (Brower Timing Systems, USA) recorded individual time-trial performances, ensuring a high standard of validity and reliability between measures44,45. The recorded trials constituted the performance observations of the data set. The performance test being a gold standard for the assessment of acceleration ability46, athletes were all familiar with it prior to the study.
In the sequel, let $\mathcal{Y}$ be the domain of definition of such a performance and $Y$ the associated continuous random variable. In this context, each observation can be referenced by both its athlete $i$ and its day of realisation $t$, as $y_i(t)$.
Independent variables
Independent variables stem from various sources, which are summarised in Table 1. In the sequel, let $\mathcal{X}$ be the domain of definition of the random variable $X$. The variable $X$ is thus defined as a vector composed of the independent variables detailed hereafter. First, $w(t)$ refers to the raw training loads (TL, Fig. 1c), calculated from on-ice and off-ice training sessions (see details in Supplementary material Appendix 1). Then, $\tilde{w}_1(t)$ and $\tilde{w}_2(t)$ represent two aggregations of the daily TL. These aggregations come from the daily training loads $w(t)$ convoluted with two transfer functions adapted from Philippe et al.28, denoted $g_1$ and $g_2$.
Table 1.
Summary of independent variables.
Independent variable | Description | Aggregation
---|---|---
Raw training load | Daily training load computed from session components (see Supplementary material Appendix 1) | Daily recorded
Cumulative training load | Daily computed from raw training load values | Impulse and serial cumulative responses
Rate of Perceived Exertion (RPE) | Borg category-ratio (CR) 0–10 scale | Impulse and serial cumulative responses
Averaged power | On-ice sessions | Impulse and serial cumulative responses
Maximal power | On-ice sessions | Impulse and serial cumulative responses
Relative intensity | On-ice sessions (see Supplementary material Appendix 1, Eq. S1) | Impulse and serial cumulative responses
Session duration | All sessions, overall session duration | Impulse and serial cumulative responses
Session density | All sessions, effective work only | Impulse and serial cumulative responses
Ice quality | Subjective information rated on a Borg CR 0–10 scale | Recorded the day of performance
Rest | Rest between two consecutive sessions (days) | Sum of rest days preceding the performance
Past performance | Past performance significantly correlated with the current one | Performance at a preceding day
Athlete | Athlete's id |
Figure 1.
Cumulative daily training loads of a representative athlete following (a) the impulse response function (Eq. 1) and (b) the serial bi-exponential response function (Eq. 2). (c) illustrates the raw daily training loads $w(t)$. In (a) and (b), dots represent daily values of the cumulative training load and vertical solid lines indicate occurrences of training sessions. Values are expressed in arbitrary units (a.u.).
The associated impulse response reflects the acute response to exercise (e.g. fatigue). It is defined as

$$g_1(t) = e^{-t/\tau_1} \quad (1)$$

where $\tau_1$ is a short time constant, equal to 3 days in this study (Fig. 1a). Respectively, the response $g_2$ describes a serial bi-exponential function reflecting training adaptations over time. It is defined as

$$g_2(t) = \begin{cases} 1 - e^{-t/\tau_2} & \text{if } t \leq TD \\ \left(1 - e^{-TD/\tau_2}\right) e^{-(t - TD)/\tau_3} & \text{if } t > TD \end{cases} \quad (2)$$

The time delay for the decay phase to begin only after the growth phase is given by the constant $TD$. Both $\tau_2$ and $\tau_3$ are the time constants of the growth phase and the decline phase, respectively (Fig. 1b). Note that the time constants were averaged values, based on empirical knowledge and previous findings13. Hence, for a given athlete,

$$\tilde{w}_k(t) = (w * g_k)(t) = \sum_{s=1}^{t-1} w(s)\, g_k(t - s), \quad k \in \{1, 2\}.$$

Note that the symbol $*$ denotes the convolution product.
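For illustration, a minimal R sketch of these aggregations is given below, computing the discrete convolution of the daily loads with the two transfer functions. The load vector `w`, the 90-day horizon and the values of `TD`, `tau2` and `tau3` are placeholders (only τ1 = 3 days is specified in the text):

```r
# Discrete convolution of daily training loads with a transfer function g.
# w: vector of daily training loads, one value per day.
convolve_load <- function(w, g) {
  sapply(seq_along(w), function(t) {
    if (t == 1) return(0)            # no prior session to respond to
    s <- seq_len(t - 1)              # sessions performed on days 1..t-1
    sum(w[s] * g(t - s))
  })
}

# Eq. (1): impulse response with a short time constant (3 days here)
tau1 <- 3
g1 <- function(t) exp(-t / tau1)

# Eq. (2): serial bi-exponential response (growth, then delayed decay).
# TD, tau2 and tau3 are illustrative placeholders, not the study's values.
TD <- 5; tau2 <- 10; tau3 <- 30
g2 <- function(t) {
  ifelse(t <= TD,
         1 - exp(-t / tau2),
         (1 - exp(-TD / tau2)) * exp(-(t - TD) / tau3))
}

set.seed(1)
w <- rpois(90, lambda = 2) * runif(90, 0, 100)  # 90 days of synthetic loads
w1_tilde <- convolve_load(w, g1)  # acute response (e.g. fatigue)
w2_tilde <- convolve_load(w, g2)  # delayed training adaptations
```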
Similarly, some characteristic components of each session were aggregated. These encompass the rate of perceived exertion (RPE), averaged power, maximal power output, relative intensity, session duration and session density. For each session, ice quality and rest between two consecutive sessions were also considered. Since some models may benefit from time through autocorrelated performances, the preceding performance was included as a predictor. Finally, the athlete identifier was considered, except for individually built models.
Applied to the observed data of this study, a data set of $n$ observations of performances associated with 19 independent variables was obtained (see Table 1). To formalise, assuming that $(X, Y) \sim f$ with $f$ a density function, the built data set is a sample $S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$.
Modelling methodology
Formally, the goal is to find a function $h: \mathcal{X} \to \mathcal{Y}$ which minimises the generalisation error

$$R(h) = \mathbb{E}\left[L\big(h(X), Y\big)\right],$$

with $L$ a loss function. In practice the minimisation of $R$ is unreachable. Instead, we get a sample set $S$ and note the empirical error

$$R_{emp}(h) = \frac{1}{n} \sum_{i=1}^{n} L\big(h(x_i), y_i\big).$$

The objective becomes to find the best estimate $h^* = \arg\min_{h \in \mathcal{H}} R_{emp}(h)$, with $\mathcal{H}$ the class of functions that we accept to consider.
Here, four families of models are evaluated in this context. With the exception of the DR, all models were computed both individually and collectively (i.e. per athlete and on the whole group of athletes, respectively).
Reference: variable dose-response
The time-varying linear mathematical model developed by Busso13 was considered as the model of reference. Formally, and according to the previously introduced notation, this model is a function mapping the raw training loads to the performance domain $\mathcal{Y}$. It describes the training effects on performance over time, $y(t)$, from the raw training loads $w(t)$. TL are convoluted with a set of transfer functions $g_a$ and $g_f$, relating respectively to the aptitude and to the fatigue impulse responses, as

$$g_a(t) = e^{-t/\tau_1} \quad \text{and} \quad g_f(t) = e^{-t/\tau_2},$$

with $\tau_1$ and $\tau_2$ two time constants. Combined with the basic level of performance of the athlete, $y^*$, the modelled performance is

$$\hat{y}(t) = y^* + k_1 (w * g_a)(t) - (k_2 w * g_f)(t),$$

with $k_1$ and $k_2(t)$ being gain terms. The latter is related to the training doses by a second convolution with the transfer function

$$g_3(t) = e^{-t/\tau_3},$$

with $\tau_3$ a time constant. Since $k_2(t)$ is defined as $k_2(t) = k_3 (w * g_3)(t)$, where $k_3$ is a gain term, one may note that $k_2(t)$ increases proportionally to the training load and then decreases exponentially from this new value. From discrete convolutions, the modelled performance can be rewritten as

$$\hat{y}(t) = y^* + k_1 \sum_{s=1}^{t-1} w(s)\, e^{-(t-s)/\tau_1} - \sum_{s=1}^{t-1} k_2(s)\, w(s)\, e^{-(t-s)/\tau_2},$$

with

$$k_2(s) = k_3 \sum_{r=1}^{s} w(r)\, e^{-(s-r)/\tau_3}.$$

The five parameters of the model (i.e. $k_1$, $k_3$, $\tau_1$, $\tau_2$ and $\tau_3$) are fitted by minimising the residual sum of squares (RSS) between modelled and observed performances, such that

$$RSS = \sum_{t \in T} \big(y(t) - \hat{y}(t)\big)^2,$$

where $T$ is the set of days on which the performance is measured. A non-linear minimisation was employed, according to a Newton-type algorithm47.
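The following R sketch illustrates this fit under the equations above; the starting values `theta0`, the baseline `p_star` and the data objects `w`, `perf` and `perf_days` are assumptions for illustration, and `nlm` is used as the Newton-type minimiser:

```r
# Modelled performance of the variable dose-response model (Busso, 2003).
# theta = (k1, k3, tau1, tau2, tau3); w = daily training loads;
# days = days on which performance is predicted (assumed >= 2).
dr_performance <- function(theta, w, p_star, days) {
  k1 <- theta[1]; k3 <- theta[2]
  tau1 <- theta[3]; tau2 <- theta[4]; tau3 <- theta[5]
  sapply(days, function(t) {
    s <- seq_len(t - 1)
    # Time-varying fatigue gain k2(s), itself a convolution of the loads
    k2 <- sapply(s, function(i)
      k3 * sum(w[seq_len(i)] * exp(-(i - seq_len(i)) / tau3)))
    p_star +
      k1 * sum(w[s] * exp(-(t - s) / tau1)) -  # aptitude component
      sum(k2 * w[s] * exp(-(t - s) / tau2))    # fatigue component
  })
}

# Residual sum of squares between observed and modelled performances
rss <- function(theta, w, perf, perf_days, p_star) {
  sum((perf - dr_performance(theta, w, p_star, perf_days))^2)
}

theta0 <- c(k1 = 0.1, k3 = 0.01, tau1 = 40, tau2 = 20, tau3 = 5)  # illustrative
fit <- nlm(rss, theta0, w = w, perf = perf,
           perf_days = perf_days, p_star = perf[1])
```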
Unlike this model of reference, the models presented next benefit from the augmented data space $\mathcal{X}$.
Regularisation procedures
Elastic net
In highly dimensional contexts, multivariate linear regressions may lead to unstable models, being excessively sensitive to the expanded space of solutions. To tackle this issue, cost functions exist that penalise some parameters on account of correlated variables. On one side, the Ridge penalisation reduces the space of possible functions by assigning a constraint to the parameters, shrinking their amplitude towards almost-null values. On the other side, the Least Absolute Shrinkage and Selection Operator (LASSO) penalisation has the capacity to set parameter coefficients exactly to zero. The ENET regularisation combines both the Ridge and LASSO penalisations39. Hence, the multivariate linear model is

$$y = \beta_0 + \sum_{j=1}^{p} \beta_j x_j + \epsilon,$$

with $x_j$ the predictors, $\beta_j$ the parameters of the model and $\epsilon$ the error term. The regularisation stems from the optimisation of the objective

$$\hat{\beta} = \arg\min_{\beta} \left\{ \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \Big)^2 + \lambda \Big[ \alpha \lVert \beta \rVert_1 + \frac{1 - \alpha}{2} \lVert \beta \rVert_2^2 \Big] \right\},$$

where $\alpha \in [0, 1]$ denotes the mixing parameter which defines the balance between the Ridge and the LASSO regularisations, and $\lambda \geq 0$ denotes the impact of the penalty. For $\alpha = 0$ and $\alpha = 1$, the model uses a pure Ridge and a pure LASSO penalisation, respectively. Thus, for a fixed value of $\lambda$, the number of removed variables (null coefficients) increases monotonically with $\alpha$, from 0 up to the most reduced LASSO model. The model was adjusted through the hyper-parameters $\alpha$ and $\lambda$ during the model selection, as part of the CV process (described below).
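As an illustration, such a model can be fitted with the glmnet R package; the data objects and the hyper-parameter grid below are assumptions, and the tuning of α and λ is performed on time-ordered folds (Algorithm 1) rather than with glmnet's default random-fold cross-validation:

```r
library(glmnet)

# x: n x 19 matrix of predictors (Table 1); y: vector of performances.
# alpha mixes Ridge (alpha = 0) and LASSO (alpha = 1); lambda scales
# the penalty. pars is a one-row data frame from the candidate grid.
fit_enet <- function(x_train, y_train, pars) {
  glmnet(x_train, y_train, family = "gaussian",
         alpha = pars$alpha, lambda = pars$lambda, standardize = TRUE)
}
pred_enet <- function(fit, x_new) as.numeric(predict(fit, newx = x_new))

# Illustrative candidate grid for the model selection
grid_enet <- expand.grid(alpha = seq(0, 1, by = 0.25),
                         lambda = 10^seq(-3, 0, length.out = 10))
```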
Principal component regression
In this multivariate context with potential multicollinearity issues, principal component analysis projects the original data set from $\mathcal{X}$ into a new space of orthogonal dimensions called principal components. These dimensions are built from linear combinations of the initial variables. One may use the principal components to regress the dependent variable, a procedure known as Principal Component Regression (PCR). The regularisation is performed by using as regressors only the first principal components, which by construction retain the maximum of the variance of the original data. In our study, and according to Kaiser's rule48, the $p$ principal components with an eigenvalue higher than 1 were retained and further used in the linear regression. Such a model can be defined as a linear multivariate regression over the principal components,

$$y = \gamma_0 + \sum_{k=1}^{p} \gamma_k z_k + \epsilon,$$

with $z_k$ the principal component scores used as predictors, $\gamma_k$ the parameters of the model and $\epsilon$ the error term. In addition to being a regularisation technique using only a subset of principal components, PCR also exerts a discrete shrinkage effect on the low-variance components (those with the lowest eigenvalues), nullifying their contribution to the original regression model.
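A minimal R sketch of this procedure, combining `prcomp` with a linear regression and assuming matrices `x_train`/`x_new` and a response `y_train`, could read:

```r
# Principal component regression with Kaiser's rule (eigenvalue > 1).
pcr_fit <- function(x_train, y_train, pars = NULL) {
  pca <- prcomp(x_train, center = TRUE, scale. = TRUE)
  keep <- which(pca$sdev^2 > 1)             # Kaiser's rule
  scores <- as.data.frame(pca$x[, keep, drop = FALSE])
  model <- lm(y ~ ., data = cbind(y = y_train, scores))
  list(pca = pca, keep = keep, model = model)
}

pcr_predict <- function(fit, x_new) {
  # Project new observations onto the retained components, then predict
  scores <- predict(fit$pca, newdata = x_new)[, fit$keep, drop = FALSE]
  predict(fit$model, newdata = as.data.frame(scores))
}
```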
Random forest
A Random Forest model consists of a large number of regression trees operating as an ensemble. RF is random in two ways: (i) each tree is based on a random subset of the observations, and (ii) each split within each tree is created from a random subset of candidate variables. The overall prediction of the forest is defined as the average of the predictions from the individual trees49. In this study, the size of the random subset of variables and the number of trees were the two hyper-parameters used to adjust the model within the model selection. The model is a function $h: \mathcal{X} \to \mathcal{Y}$.
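A sketch with the randomForest R package is shown below; the hyper-parameter values are placeholders to be tuned within the model selection:

```r
library(randomForest)

# mtry: number of candidate variables drawn at each split;
# ntree: number of trees. Both come from the candidate grid.
fit_rf <- function(x_train, y_train, pars) {
  randomForest(x = x_train, y = y_train,
               mtry = pars$mtry, ntree = pars$ntree)
}
pred_rf <- function(fit, x_new) predict(fit, newdata = x_new)

grid_rf <- expand.grid(mtry = c(2, 4, 6, 8), ntree = c(250, 500, 1000))
```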
Time series cross-validation and prediction
Since we aim at predicting daily skating performances, which are not independent and identically distributed random variables, time dependencies have to be accounted for in the cross-validation procedure. This ensures that information from the future is not used to predict performances of the past. Hence, data were separated, with respect to the time index, into one training data set for time-series CV (80% of the total data) and the remaining data for an unbiased model evaluation (evaluation data set). In this procedure, a model selection occurs first, searching for the hyper-parameter values that minimise the predictive model error over validation subsets. The model selection is detailed in Algorithm 1.
Algorithm 1 iteratively evaluates a class of functions $\mathcal{H}$, in which each function differs by its hyper-parameter values. A time-ordered data set $S$ is partitioned into training and validation subsets ($T_k$ and $V_k$, respectively). For each partition $k$, with $k \in \{1, \ldots, K\}$, functions are fitted on the incremental $T_k$ and evaluated on the fixed-length subset $V_k$ that occurs after the last element of $T_k$. Once functions have been evaluated on the $K$ partitions of $S$, the function providing the lowest root mean square error (RMSE) averaged over the validation subsets defines an optimal model, denoted $h^*$.
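The following R sketch mimics Algorithm 1 under simplifying assumptions (fixed-length validation blocks of `val_size` days and a generic `fit_fun`/`pred_fun` pair); it is an illustration rather than the exact implementation used in the study:

```r
rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))

# Rolling-origin model selection: folds grow forward in time so that
# no future observation leaks into a training subset.
select_model <- function(x, y, grid, fit_fun, pred_fun, K = 5, val_size = 10) {
  n <- nrow(x)
  origins <- seq(n - K * val_size, n - val_size, by = val_size)  # K origins
  cv_error <- sapply(seq_len(nrow(grid)), function(g) {
    mean(sapply(origins, function(o) {
      tr <- 1:o                        # incremental training subset T_k
      va <- (o + 1):(o + val_size)     # validation subset V_k after T_k
      fit <- fit_fun(x[tr, , drop = FALSE], y[tr], grid[g, ])
      rmse(y[va], pred_fun(fit, x[va, , drop = FALSE]))
    }))
  })
  grid[which.min(cv_error), ]          # hyper-parameters of h*
}
```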
Model evaluation
Afterwards, and for each partition of $S$, $h^*$ is adjusted on new time-ordered training subsets combining both $T_k$ and $V_k$. Then, the generalisation capability of $h^*$ is evaluated on fixed-length subsets of the evaluation data, saved for that purpose. This procedure is referred to as "evaluation on a rolling forecasting origin", since the "origin" on which the forecast is based rolls forward in time50. Note that the DR model is only concerned by the model evaluation step, since it has no hyper-parameters to be optimised in the model selection phase.
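Continuing the sketch above, the evaluation on a rolling forecasting origin could be written as follows, with the 80/20 split and the block length as assumptions:

```r
# Refit the selected model h* (hyper-parameters in `best`) on all data up
# to each origin, then score it on the next block of the held-out
# evaluation set (the last 20% of the time series).
evaluate_model <- function(x, y, best, fit_fun, pred_fun,
                           split = 0.8, block = 5) {
  n <- nrow(x)
  n_train <- floor(split * n)                 # end of the training data
  origins <- seq(n_train, n - block, by = block)
  sapply(origins, function(o) {
    fit <- fit_fun(x[1:o, , drop = FALSE], y[1:o], best)
    te <- (o + 1):(o + block)                 # next evaluation block
    rmse(y[te], pred_fun(fit, x[te, , drop = FALSE]))
  })
}
```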
Statistical analysis
For any model, the goodness of fit according to linear relationships and to performance was described by the coefficient of determination (R²) and the RMSE criterion, respectively. Generalisation ability is described by the difference between the RMSE computed on the training and on the evaluation data. The prediction error was reported through the mean absolute error (MAE) between observations and predictions. After checking normality and variance homogeneity of the dependent variable with a Shapiro–Wilk and a Levene test, respectively, linear mixed models were performed to assess the contribution of each class of model to the modelling error rate. Inter- and intra-subject variability in athletic performance modelling was considered through random effects. Repeated-measures ANOVAs were performed to assess the effect of the model class and population on the response, with the corresponding effect size statistic reported. Multiple pairwise comparisons of errors between the model of reference and the other models were performed using Dunnett's post-hoc analysis. The significance threshold was fixed at 0.05. For linear mixed models, unstandardised regression coefficients are reported along with 95% confidence intervals (CI) as a measure of effect size. Model computation and statistical analysis were conducted with the R statistical software (version 4.0.2). The DR model was computed with a personal custom-built R package (version 1.0)51.
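For illustration, the mixed model and the Dunnett comparisons against the DR reference could be expressed with the lme4 and emmeans R packages, assuming a long-format data frame `errors` of RMSE values per model class and athlete:

```r
library(lme4)
library(emmeans)

# Random intercepts per athlete account for inter- and intra-subject
# variability; model_class is a factor whose first level is "DR".
fit <- lmer(rmse ~ model_class + (1 | athlete), data = errors)

# Dunnett-style pairwise comparisons of each model class against DR
emm <- emmeans(fit, ~ model_class)
contrast(emm, method = "trt.vs.ctrl", ref = 1)
```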
Results
Through the time-series CV, models provided heterogeneous generalisation and performance prediction abilities. Distributions of RMSE per model are illustrated in Fig. 2.
Figure 2.
Distributions of model performance: (a) RMSE distributions of each individual model and (b) of the models computed on the whole group. Within each boxplot, the midline represents the median of the distribution. All models are compared to the dose-response (DR) model.
Models generalisation
Mixed model analysis showed that both the ENET and PCR models lowered the differences in prediction errors between the training and evaluation data sets. A significant effect of the model class on the generalisation risk was also reported. The most generalisable models were the ENET and PCR models computed on the overall data, followed by the individually based models. Generally, group-built models provided a greater generalisation capability than individually based models. A summary of model pairwise comparisons is provided in Table 2.
Table 2.
Summary of models pairwise comparisons for generalisation and prediction abilities.
Comparison | Δ | t ratio | p | Criterion
---|---|---|---|---
 | 0.057 | − 11.841 | < 0.001 | Generalisation
 | 0.032 | 6.644 | < 0.001 | Generalisation
 | 0.026 | 3.365 | 0.004 | Generalisation
 | 0.023 | 2.933 | 0.018 | Generalisation
 | − 0.027 | − 5.649 | < 0.001 | Generalisation
 | − 0.028 | − 3.831 | < 0.001 | Generalisation
 | 0.041 | 5.607 | < 0.001 | Prediction
 | 0.022 | 3.067 | 0.012 | Prediction
 | 0.021 | 2.112 | 0.156 | Prediction
 | 0.018 | 1.789 | 0.294 | Prediction
 | 0.016 | 1.537 | 0.438 | Prediction
 | − 0.042 | − 5.779 | < 0.001 | Prediction

Δ represents the marginal mean difference of the RMSE distribution between the DR model and its comparison.
Prediction performances
Root mean square errors reported on the evaluation data using mixed model analysis indicated that the group-based ENET model contributed most to lowering the prediction errors, followed by the group-based PCR model, as shown in Table 2. Accordingly, a significant model class effect on prediction errors was reported. Computing models over a larger population (i.e. group-based models) showed only a trend in favour of group-based models on the error response rate.
Distributions of RMSE on the data used for model evaluation showed heterogeneous variance between models. The greatest standard deviations were found for the DR and RF models, while the ENET and PCR models provided more consistent performances, with standard deviations comprised within the [0.023; 0.027] and [0.012; 0.017] intervals for individually and group-computed models, respectively. Note that the greatest errors on the evaluation data were systematically attributed to one particular athlete: on average, predictions made for this athlete led to greater RMSE than those made for the other athletes. Mean values of R² indicated that only weak linear relationships between performance and predictors were identified by the models. The highest averaged R² but also the greatest standard deviation were reported for the DR models (0.206 ± 0.093). However, significant differences in averaged R² were only found for three model comparisons. A summary of model performances is provided in Table 3.
Table 3.
Summary of the predictive models.
Model | R² | MAE | RMSE | Hyper-parameters
---|---|---|---|---
DR* | **0.206 ± 0.093** | 0.189 ± 0.055 | 0.225 ± 0.053 |
 | 0.150 ± 0.010 | 0.169 ± 0.020 | 0.197 ± 0.023 |
 | 0.164 ± 0.068 | 0.173 ± 0.025 | 0.201 ± 0.027 |
 | 0.193 ± 0.074 | 0.170 ± 0.023 | 0.199 ± 0.024 |
 | 0.179 ± 0.063 | **0.150 ± 0.010** | **0.176 ± 0.012** |
 | 0.17 ± 0.053 | 0.22 ± 0.044 | 0.259 ± 0.062 |
 | 0.164 ± 0.069 | 0.163 ± 0.017 | 0.195 ± 0.017 |

According to model families, criteria were averaged among folds and displayed with their standard deviation. For individual models, averaged values of hyper-parameters are displayed along with the lowest and highest recorded values. The greatest performance for each criterion is listed in bold type.

*Indicates the DR as the reference model, with the specification of its averaged parameters.
Predictions made from the two most generalisable models (the group-based ENET and PCR models) and the reference DR model illustrate the sensitivity of the models for a representative athlete (Fig. 3). Performances modelled by the ENET model were relatively steady and less sensitive to real performance variations, as supported by the standard deviations calculated on the data used for model evaluation. Regarding the ENET model, the greatest standardised coefficients were attributed to the auto-regressive component (i.e. the past performance), followed by the athlete factor and then the impulse and serial bi-exponential aggregations. The PCR used the first three principal components, explaining 52.3%, 16.5% and 7.6% of the total variance, respectively. Details about model parameters as well as principal component compositions are available in Supplementary material Appendix 2.
Figure 3.
Modelled performance of a representative subject. Solid and dashed lines represent the DR model and the two models offering the best generalisation. In this example, the training data set (80% of the data, combining training and validation subsets) and the evaluation data set (20% of the data, the evaluation subset) are separated by the vertical solid line. Fitted parameters of the DR model and hyper-parameters of the PCR and ENET models are detailed in Supplementary material Appendix 2.
Discussion
In the present study, we provided a modelling methodology that encompasses data aggregation relying on physiological assumptions, and model validation for future predictions. Data were obtained from elite athletes, capable of improving their performance through training and very sensitive to physical, psychological and emotional states. The variable dose-response model13 was fitted on individual data. It was compared to statistical and machine learning models fitted on individual and on overall data: ENET, PCR and RF models.
Cross-validation outcomes revealed significant heterogeneity in model performances, even though the differences remain small with regard to the total time of the skating trials (see Table 3). The main criterion of interest, generalisation, was significantly greater for both the ENET and PCR models than for the DR model. One can explain this result by the capability of the statistical models, when associated with regularisation methods, to better capture the underlying skating performance process using up to 19 independent variables. Conversely, the DR model relies on two antagonistic components strictly based on the training load dynamics. It does not deal with any other factors that may greatly impact performance (e.g. psychological, nutritional, environmental or training-specific factors)12,18,52. Thus, this conceptual limit can be overcome by employing multivariate modelling, which may result in a greater comprehension of the training load - performance relationship for the purpose of future predictions9,12. To date, only a recent study from Piatrikova et al.53 extended the former Fitness–Fatigue model framework3 to account for some psychometric variables as model inputs. Although the authors reported an improved goodness of fit for this multivariate alternative, attributing impulse responses to these variables might question the conceptual framework behind the model.
Distributions of RMSE from the training and evaluation data sets allow us to establish a generalisation ranking of the models (Table 2). Linear models computed on the overall data offer a better generalisation. This finding is essential because, by handling the bias-variance trade-off, such models are better suited to capturing a proper underlying function that maps inputs to the target, even on unknown data. Hence, it allows further physiological and practical interpretations from the models, such as the remodelling process of skeletal muscle induced by exercise, dynamically represented by exponential growth and decay functions28. Besides, this result might be partly explained by the sample size. It is well known that statistical inference on small samples leads to bad estimates and consequently to poor predictive performance54,55. A greater sample size, obtained by combining individual data, led to more accurate parameter estimates, being more suitable for sport performance modelling12. That is particularly important to consider when we aim to predict the very few discipline-specific performances available throughout a season. However, predicting non-invasive physical quality assessments that can be performed daily (e.g. squat jumps and their variations for an indirect assessment of neuromuscular readiness56, or short sprints) may be an alternative to small sample size issues. In our case, standing-start time trials over 1.5 laps allowed the coach to evaluate the underlying physical abilities of the skating performance several times a week. Also, regularisation tends to stabilise parameter estimators and favour the generalisation of models. For instance, multicollinearity may occur in high-dimensional problems, and stochastic models generally suffer from such conditioning. One would note that the ENET and PCR models attempt to overcome these issues in their own way, by (i) penalising or removing features (or both) that are mostly linearly correlated and (ii) projecting the initial data space onto a reduced space, optimised to keep the maximum of the variance of the data through linear combinations of the initial features. Both approaches limit the number of unnecessary, or noisy, dimensions. In contrast, in this study the non-linear machine learning models (the individually and group-computed RF) expressed a lower generalisation capability than the linear models, even when combining data from several athletes. We believe that such models may be powerful in multidimensional modelling but require an adequate data set, with a sufficient sample size in particular. Otherwise, model overfitting may occur at the expense of inaccurate predictions on unknown data.
As reported previously, and with one RF exception, the models were more accurate in prediction than the DR (Table 3). The large averaged RMSE as well as the large standard deviations provided by the DR among performance criteria tend to agree with the literature, since the model is prone to suffer from weak stability and from the ill-conditioning raised by noisy data, which impact its predictive accuracy9,10. This suggests that linear relationships between the two components "Aptitude" and "Fatigue" and the performance are not clear. However, because of a lack of cross-validation procedures on impulse response models, and particularly on the DR employed in our study, our results cannot be validly compared with the literature. Despite the lower standard deviations of R² reported by the ENET and PCR models, the weak averaged values suggest that linear models can only explain a small part of the total variance. Note that all linear models are concerned (including the DR), since the differences in averaged R² between models are relatively small and significant for only a few comparisons. Therefore, if the data allow it (i.e. a sufficient sample size and robustly collected data), non-linear models may still be effective and should be considered during the modelling process.
The sensitivity of models to gains and losses of performance differed between the two most generalisable models (the group-based ENET and PCR) and the reference DR model (Fig. 3). Such differences can be explained by the influence of variables other than the training load dynamics that may affect performance (e.g. ice quality on the day of performance, cumulative training loads following a serial bi-exponential function, the last known performance), or by a model failure in parameter estimation with regard to the variability of the data. Indeed, parameter estimates of the ENET model supported this, since changes in skating performance were mostly explained through the past performance, weighted by individual properties and, to a lesser degree, by training-related parameters. The PCR model used a different approach for the same purpose and relied greatly on training-related aggregations as well as on environmental and training programming variables (see Appendix 2). However, this applied example informs us about neither the generalisation ability of the models nor the accuracy of predictions, because it concerns only one particular data set in which the selected models (i.e. with optimal hyper-parameters) are trained on the first 80% of the data and evaluated on the remaining 20%. In addition, since model estimates greatly depend on the sample size, we might expect significantly different estimates with more data.
This study presents some limits. The first one concerns the data we used, and particularly the criterion of performance: standing-start time trials a few times a week over an approximately three-month period. Even though it is a highly discipline-specific test with which athletes are familiar, conducted under standardised conditions, each trial requires high levels of arousal, buy-in, motivation and technique. Therefore, monitoring of psychological states and cognitive functions such as motivation and attentional focus57,58 should have been performed prior to each trial. A concrete example is provided in Fig. 3, where the ENET model greatly penalised the training-correlated features and kept the influence of the auto-regressive component predominant over the other features. This may be the consequence of either an inference issue due to the relatively small sample size, or a lack of informative value of the training-related features, which do not allow changes in skating performance to be explained. Both reasons may also explain the failure of the models in predicting the skating performances of one particular athlete, who showed significantly greater prediction errors. This emphasises the importance of measuring the "right" variables for performance modelling purposes, particularly when the sport-specific performance involves various determining factors.
Secondly, the time-series cross-validation presented here has a certain cost, most notably when only few data are available (e.g. when models are individually computed). The rolling-origin re-calibration evaluation performed as described by Bergmeir et al.59 implies training a model only on an incremental sub-sequence of the training data. Hence, the downsized sample of the first training sub-sequences may cause model failures in parameter estimation and, consequently, an increase in prediction errors. In addition, the training and evaluation data sets present some dependencies. In order to evaluate models on fully independent data, some modifications of the current CV framework exist, at the expense of withdrawing even more data from the learning procedure. According to Racine60, the so-called hv-block cross-validation is one of the least costly alternatives to the CV used in our study, requiring a certain gap between each training and validation subset. However, due to the limited sample size, we deliberately chose not to adapt the original CV framework described in Algorithm 1. Nonetheless, we recommend that researchers and practitioners consider such alternatives in case of significant dependencies and when the sample size is sufficient.
Finally, backtesting was performed in order to evaluate model performances on historical data. From a practical point of view, the models are able to predict the coming performance from the features known up to day t. However, the contribution of training load response modelling also concerns simulations of training after-effects over a longer time frame. Having identified a suitable model, practitioners may pinpoint key performance indicators, specific to the discipline of interest, and confront model estimates with field observations. Then, simulations of these independent variables within their own distributions would allow practitioners and coaches to simulate changes in performance following objective and subjective measures of training load, and any performance factors that are monitored. Conditional simulations that consider known relationships between independent variables (e.g. relationships between training load parameters)61,62 may improve the credibility of the simulations.
The modelling process presented so far constitutes part of a decision support system (DSS), from issue and data understanding to evaluation of the modelling results63. Supported by a deployment framework that makes models usable by all, a DSS helps technical and medical staff in training programming and scheduling tasks64, through a systemic and holistic approach to a complex problem such as athletic performance65. Besides, the technological improvement of sports wearable sensors, and the underpinning data available for quantifying and characterising exercise, foster the development of DSS in individual and team sports.
Conclusion
In this study, we provided a transferable modelling methodology relying on the evaluation of model generalisation ability in a context of sport performance modelling. The mathematical variable dose-response model, along with elastic net, principal component regression and random forest models, was cross-validated within a time-series framework. The generalisation of the DR model was outperformed by the ENET and PCR models, though our results may not be directly comparable with the literature. The ENET model provided the greatest performances, both in terms of generalisation and of accuracy in prediction, when compared to the DR, PCR and RF models. Globally, increasing the sample size by computing models on the whole group of athletes led to better-performing models than individually computed ones. Yet, our results should be interpreted in the light of the models used. In our study, we foster the use of regularisation and dimension reduction methods for addressing high-dimensionality and multicollinearity issues. However, other models could prove valuable for athletic performance modelling (e.g. mixed-effects models for repeated measures, generalised estimating equations given possible unknown correlations between outcomes, or autocorrelation and cross-correlation functions for time-series analysis).
The methodology highlighted in our study can be re-employed whatever the data, with the aim of optimising elite sport performance through training protocol simulations. Beyond that, we believe that model validation is a prerequisite for any physiological and practical interpretation made for the purpose of future predictions. Further research involving training session simulations and model evaluations in forecasting would highlight the relevance of some model families for training programming optimisation.
Acknowledgements
We are grateful to the Fédération Française des Sports de Glace, the Institut National du Sport, de l’Expertise et de la Performance and to Dr. Anthony MJ Sanchez and Robert Solsona (Laboratoire Européen Performance Santé Altitude, University of Perpignan Via Domitia) for collaboration and sharing data sets.
Author contributions
Conceptualisation, F.I., S.P., R.C. (Romain Chailan), R.C. (Robin Candau); methodology and investigation F.I., R.C. (Robin Candau), R.C. (Romain Chailan); recruitment T.M.; formal analysis and data curation F.I., R.C. (Robin Candau), T.M.; resources R.C. (Romain Chailan), T.M.; writing original draft preparation, F.I.; writing—review and editing, F.I., R.C. (Robin Candau), R.C. (Romain Chailan), S.P.; visualisation, F.I.; supervision, R.C. (Robin Candau), S.P.; project administration, R.C. (Robin Candau), S.P.; funding acquisition, F.I. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Association Nationale de la Recherche et de la Technologie (ANRT) Grant Number 2018/0653.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-022-05392-8.
References
1. Wallace LK, Slattery KM, Coutts AJ. The ecological validity and application of the session-RPE method for quantifying training loads in swimming. J. Strength Cond. Res. 2009;23:33–38. doi: 10.1519/JSC.0b013e3181874512.
2. Impellizzeri FM, Rampinini E, Marcora SM. Physiological assessment of aerobic training in soccer. J. Sports Sci. 2005;23:583–592. doi: 10.1080/02640410400021278.
3. Banister E, Calvert T, Savage M, Bach T. A systems model of training for athletic performance. Aust. J. Sports Med. 1975;7:57–61.
4. Calvert TW, Banister EW, Savage MV, Bach T. A systems model of the effects of training on physical performance. IEEE Trans. Syst. Man Cybern. 1976;6:94–102.
5. Banister E, Calvert T. Planning for future performance: Implications for long term training. Can. J. Appl. Sport Sci. 1980;5:170–176.
6. Banister EW, Good P, Holman G, Hamilton CL. Modeling the Training Response in Athletes, vol. 3, 7–23 (Human Kinetics, 1986).
7. Busso T, Carasso C, Lacour J. Adequacy of a systems structure in the modeling of training effects on performance. J. Appl. Physiol. 1991;71:2044–2049. doi: 10.1152/jappl.1991.71.5.2044.
8. Busso T, Candau R, Lacour J-R. Fatigue and fitness modelled from the effects of training on performance. Eur. J. Appl. Physiol. Occup. Physiol. 1994;69:50–54. doi: 10.1007/BF00867927.
9. Hellard P, et al. Assessing the limitations of the Banister model in monitoring training. J. Sports Sci. 2006;24:509–520. doi: 10.1080/02640410500244697.
10. Ludwig M, Asteroth A, Rasche C, Pfeiffer M. Including the past: Performance modeling using a preload concept by means of the fitness–fatigue model. Int. J. Comput. Sci. Sport. 2019;18:115–134.
11. Busso T, Denis C, Bonnefoy R, Geyssant A, Lacour J-R. Modeling of adaptations to physical training by using a recursive least squares algorithm. J. Appl. Physiol. 1997;82:1685–1693. doi: 10.1152/jappl.1997.82.5.1685.
12. Avalos M, Hellard P, Chatard J-C. Modeling the training-performance relationship using a mixed model in elite swimmers. Med. Sci. Sports Exerc. 2003;35:838. doi: 10.1249/01.MSS.0000065004.05033.42.
13. Busso T. Variable dose-response relationship between exercise training and performance. Med. Sci. Sports Exerc. 2003;35:1188–1195. doi: 10.1249/01.MSS.0000074465.13621.37.
14. Kolossa D, et al. Performance estimation using the fitness-fatigue model with Kalman filter feedback. Int. J. Comput. Sci. Sport. 2017;16:117–129.
15. Matabuena M, Rodríguez-López R. An improved version of the classical Banister model to predict changes in physical condition. Bull. Math. Biol. 2019;81:1867–1884. doi: 10.1007/s11538-019-00588-y.
16. Morton R, Fitz-Clarke J, Banister E. Modeling human performance in running. J. Appl. Physiol. 1990;69:1171–1177. doi: 10.1152/jappl.1990.69.3.1171.
17. Candau R, Busso T, Lacour J. Effects of training on iron status in cross-country skiers. Eur. J. Appl. Physiol. Occup. Physiol. 1992;64:497–502. doi: 10.1007/BF00843757.
18. Mujika I, et al. Modeled responses to training and taper in competitive swimmers. Med. Sci. Sports Exerc. 1996;28:251–258. doi: 10.1097/00005768-199602000-00015.
19. Millet G, et al. Modelling the transfers of training effects on performance in elite triathletes. Int. J. Sports Med. 2002;23:55–63. doi: 10.1055/s-2002-19276.
20. Millet G, Groslambert A, Barbier B, Rouillon J, Candau R. Modelling the relationships between training, anxiety, and fatigue in elite athletes. Int. J. Sports Med. 2005;26:492–498. doi: 10.1055/s-2004-821137.
21. Thomas L, Mujika I, Busso T. A model study of optimal training reduction during pre-event taper in elite swimmers. J. Sports Sci. 2008;26:643–652. doi: 10.1080/02640410701716782.
22. Sanchez AM, et al. Modelling training response in elite female gymnasts and optimal strategies of overload training and taper. J. Sports Sci. 2013;31:1510–1519. doi: 10.1080/02640414.2013.786183.
23. Agostinho MF, et al. Perceived training intensity and performance changes quantification in judo. J. Strength Cond. Res. 2015;29:1570–1577. doi: 10.1519/JSC.0000000000000777.
24. Busso T, Thomas L. Using mathematical modeling in training planning. Int. J. Sports Physiol. Perform. 2006;1:400–405. doi: 10.1123/ijspp.1.4.400.
25. Begue G, et al. Early activation of rat skeletal muscle IL-6/STAT1/STAT3 dependent gene expression in resistance exercise linked to hypertrophy. PLoS One. 2013;8:e57141. doi: 10.1371/journal.pone.0057141.
26. D'Antona G, et al. Skeletal muscle hypertrophy and structure and function of skeletal muscle fibres in male body builders. J. Physiol. 2006;570:611–627. doi: 10.1113/jphysiol.2005.101642.
27. Roels B, et al. Paradoxical effects of endurance training and chronic hypoxia on myofibrillar ATPase activity. Am. J. Physiol. Regul. Integr. Comp. Physiol. 2008;294:R1911–R1918. doi: 10.1152/ajpregu.00210.2006.
28. Philippe AG, Borrani F, Sanchez AM, Py G, Candau R. Modelling performance and skeletal muscle adaptations with exponential growth functions during resistance training. J. Sports Sci. 2019;37:254–261. doi: 10.1080/02640414.2018.1494909.
29. Arlot S, Celisse A. A survey of cross-validation procedures for model selection. Stat. Surv. 2010;4:40–79.
30. Kouvaris K, Clune J, Kounios L, Brede M, Watson RA. How evolution learns to generalise: Using the principles of learning theory to understand the evolution of developmental organisation. PLoS Comput. Biol. 2017;13:e1005358. doi: 10.1371/journal.pcbi.1005358.
31. Lever J, Krzywinski M, Altman N. Points of significance: Model selection and overfitting. Nat. Methods. 2016;13:703–704.
32. Mitchell LJ, Rattray B, Fowlie J, Saunders PU, Pyne DB. The impact of different training load quantification and modelling methodologies on performance predictions in elite swimmers. Eur. J. Sport Sci. 2020;20:1–10. doi: 10.1080/17461391.2020.1719211.
33. Stephens Hemingway BH, Burgess KE, Elyan E, Swinton PA. The effects of measurement error and testing frequency on the fitness–fatigue model applied to resistance training: A simulation approach. Int. J. Sports Sci. Coach. 2020;15:60–71.
34. Chalencon S, et al. Modeling of performance and ANS activity for predicting future responses to training. Eur. J. Appl. Physiol. 2015;115:589–596. doi: 10.1007/s00421-014-3035-2.
35. Edelmann-Nusser J, Hohmann A, Henneberg B. Modeling and prediction of competitive performance in swimming upon neural networks. Eur. J. Sports Sci. 2002;2:1–10.
36. Carrard J, Kloucek P, Gojanovic B. Modelling training adaptation in swimming using artificial neural network geometric optimisation. Sports. 2020;8:8. doi: 10.3390/sports8010008.
37. Lek S, Guégan J-F. Artificial neural networks as a tool in ecological modelling, an introduction. Ecol. Model. 1999;120:65–73.
38. Qi Y. Random forest for bioinformatics. In: Zhang C, Ma Y, editors. Ensemble Machine Learning. Springer; 2012. pp. 307–323.
39. Zou H, Hastie T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol. 2005;67:301–320.
40. Kosmidis I, Passfield L. Linking the performance of endurance runners to training and physiological effects via multi-resolution elastic net (2015). Preprint at arxiv:1506.01388.
41. Yu H, Chen X, Zhu W, Cao C. A quasi-experimental study of Chinese top-level speed skaters' training load: Threshold versus polarized model. Int. J. Sports Physiol. Perform. 2012;7:103–112.
42. Knobbe A, Orie J, Hofman N, van der Burgh B, Cachucho R. Sports analytics for professional speed skating. Data Min. Knowl. Disc. 2017;31:1872–1902.
43. Méline T, Mathieu L, Borrani F, Candau R, Sanchez AM. Systems model and individual simulations of training strategies in elite short-track speed skaters. J. Sports Sci. 2019;37:347–355. doi: 10.1080/02640414.2018.1504375.
44. Bond CW, Willaert EM, Noonan BC. Comparison of three timing systems: Reliability and best practice recommendations in timing short-duration sprints. J. Strength Cond. Res. 2017;31:1062–1071. doi: 10.1519/JSC.0000000000001566.
45. Bond CW, Willaert EM, Rudningen KE, Noonan BC. Reliability of three timing systems used to time short on ice-skating sprints in ice hockey players. J. Strength Cond. Res. 2017;31:3279–3286. doi: 10.1519/JSC.0000000000002218.
46. Felser S, et al. Relationship between strength qualities and short track speed skating performance in young athletes. Scand. J. Med. Sci. Sports. 2016;26:165–171. doi: 10.1111/sms.12429.
47. Dennis JE Jr, Schnabel RB. Numerical Methods for Unconstrained Optimization and Nonlinear Equations (SIAM, 1996).
48. Kaiser HF. The application of electronic computers to factor analysis. Educ. Psychol. Meas. 1960;20:141–151.
49. Grömping U. Variable importance assessment in regression: Linear regression versus random forest. Am. Stat. 2009;63:308–319.
50. Hyndman RJ, Athanasopoulos G. Forecasting: Principles and Practice (OTexts, 2018).
51. Imbach F. sysmod: An R package for dose-response modelling in sports. https://github.com/fimbach/sysmod (2020).
52. Stone MH, Stone M, Sands WA. Principles and Practice of Resistance Training (Human Kinetics, 2007).
53. Piatrikova E, et al. Monitoring the heart rate variability responses to training loads in competitive swimmers using a smartphone application and the Banister impulse–response model. Int. J. Sports Physiol. Perform. 2021;1:1–9. doi: 10.1123/ijspp.2020-0201.
54. Kelley K, Maxwell SE. Sample size for multiple regression: Obtaining regression coefficients that are accurate, not simply significant. Psychol. Methods. 2003;8:305. doi: 10.1037/1082-989X.8.3.305.
55. Cui Z, Gong G. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features. NeuroImage. 2018;178:622–637. doi: 10.1016/j.neuroimage.2018.06.001.
56. Watkins CM, et al. Determination of vertical jump as a measure of neuromuscular readiness and fatigue. J. Strength Cond. Res. 2017;31:3305–3310. doi: 10.1519/JSC.0000000000002231.
57. Gillet N, et al. Examining the motivation-performance relationship in competitive sport: A cluster-analytic approach. Int. J. Sport Exerc. Psychol. 2012;43:79.
58. Ille A, Selin I, Do M-C, Thon B. Attentional focus effects on sprint start performance as a function of skill level. J. Sports Sci. 2013;31:1705–1712. doi: 10.1080/02640414.2013.797097.
59. Bergmeir C, Benítez JM. On the use of cross-validation for time series predictor evaluation. Inf. Sci. 2012;191:192–213.
60. Racine J. Consistent cross-validatory model-selection for dependent data: hv-block cross-validation. J. Econom. 2000;99:39–61.
61. Noble BJ, Borg GA, Jacobs I, Ceci R, Kaiser P. A category-ratio perceived exertion scale: Relationship to blood and muscle lactates and heart rate. Med. Sci. Sports Exerc. 1983;15:523.
62. Casamichana D, Castellano J, Calleja-Gonzalez J, San Román J, Castagna C. Relationship between indicators of training load in soccer players. J. Strength Cond. Res. 2013;27:369–374. doi: 10.1519/JSC.0b013e3182548af1.
63. Wirth R, Hipp J. CRISP-DM: Towards a standard process model for data mining. In Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining, vol. 1, 29–39 (Springer-Verlag, 2000).
64. Schelling X, Fernández J, Ward P, Fernández J, Robertson S. Decision support system applications for scheduling in professional team sport. The team's perspective. Front. Sports Act. Living 3 (2021).
65. Schelling X, Robertson S. A development framework for decision support systems in high-performance sport. Int. J. Comput. Sci. Sports. 2020;19:1–23.