Journal of Sports Science & Medicine
. 2022 Feb 15;21(1):49–57. doi: 10.52082/jssm.2022.49

The Effects of Visual Feedback on Performance in Heart Rate- and Power-Based-Tasks during a Constant Load Cycling Test

Martin Dobiasch 1,, Björn Krenn 2, Robert P Lamberts 3, Arnold Baca 1
PMCID: PMC8851120  PMID: 35250333

Abstract

Performance feedback can be essential for cyclists to help pace their efforts during competitions as well as during standardized performance tests. However, the choice of feedback options on modern bike computers is limited, and little research on the effectiveness of the currently used feedback methods is available. In this study, two novel feedback variants using a bar or a tacho to visualize targets and deviations from targets were compared to a classic design using only numbers. Participants (6 female and 25 male trained to well-trained athletes) completed a protocol consisting of three heart rate-based tasks and one power-based task. The displays were compared with respect to their ability to guide athletes during their trials. Results showed lower root mean square error (RMSE) for the novel variants, but no significant effect of feedback variant on RMSE was found for either task (p > 0.05). However, when comparing the feedback variants on a person-to-person basis, significant differences were found for all investigated scenarios (p < 0.001). This leads to the conclusion that novel feedback variants can improve athletes’ ability to follow heart rate-based and power-based protocols, but even better results might be achieved by individualizing the feedback.

Key points.

  • Novel visual feedback variants, different from traditional number-based displays, might increase accuracy in testing, i.e. help athletes reduce deviations from assigned targets (heart rate or power).

  • Individual differences between athletes exist with regard to the best feedback variant.

  • Best results could be achieved by providing athletes with individualized feedback.

Key words: Feedback, cycling, power, heart rate, accuracy, PEGASOS

Introduction

The monitoring of athletic performance and recovery status using standardized tests is essential in high-performance sports such as cycling. As there are a number of aspects to both performance and recovery, various tests exist with the aim of measuring these different components (cf. Coutts et al., 2007; Taylor et al., 2012 for an overview of existing tests). In their simplest form they involve performing a simple movement or exercise. Other tests involve performing a task with a constant load, such as time trials over a fixed distance (e.g. a 3 km time trial), or at a fixed intensity (e.g. performing a task at a certain percentage of maximal heart rate) (Coutts et al., 2007; Taylor et al., 2012). Moreover, some test protocols combine tasks and thus measure multiple parameters. Interestingly, only a few protocols capture multiple variables and use these to monitor and fine-tune training prescription in athletes (Capostagno et al., 2021). One test that captures multiple variables is the Lamberts Submaximal Cycling Test (LSCT) (Lamberts, 2011; 2014). This test aims at monitoring training status (Lamberts, 2011; 2014), fine-tuning training prescription (Capostagno et al., 2014; 2021), and detecting early symptoms of overreaching (Decroix et al., 2018; Hammes et al., 2016; Lamberts et al., 2010). It consists of three stages in which an athlete is required to perform a cycling task at a specific submaximal heart rate, which is generally associated with a relatively stable cycling intensity. Upon completing the stages, heart-rate recovery is measured over a 60-second period (Lamberts et al., 2011).

In order to be useful for monitoring purposes, any test is required to be objective, reliable, reproducible and valid. While the athlete’s influence on the objectivity and validity of the test itself is marginal, reproducibility depends on the execution of the test by the athlete: to be reproducible, the test protocol should be followed as closely as possible. For certain protocols, such as power- or speed-based tests, tools such as a cycling ergometer or a motorized treadmill allow athletes to follow the protocol strictly. Other protocols, such as heart rate-based tests, cannot be enforced with near-perfect accuracy in this way; where no such tool as an ergometer or treadmill exists, it is the responsibility of the athlete to follow the testing protocol as accurately as possible. When using automated tools for analyzing a test, this aspect is even more important, as these tools usually assume that the test has been performed correctly, e.g. that heart rate was within the prescribed limits.

The LSCT has been shown to be a reliable and valid tool for monitoring and predicting the performance of athletes (Capostagno et al., 2014; 2021; Decroix et al., 2018; Lamberts, 2011; 2014). However, the test requires cyclists to closely monitor their heart rate and adjust their power output to elicit the correct target heart rate. This process is attention demanding and raises the question of whether alternative methods could be developed to assist cyclists in this process. Although athletes can usually use devices such as cycling computers to pace their efforts during tests, training and competition, using them to conduct standardized tests can often be cumbersome. For example, programming timed efforts into such cycling computers can in many cases be a time-consuming task on its own. Moreover, not all devices allow coaches to program efforts remotely into the devices of their athletes.

Feedback can not only originate from different sources but can also be provided in different forms. First, feedback can be differentiated by its source into intrinsic versus extrinsic feedback (the latter sometimes also referred to as augmented feedback). Intrinsic feedback is always present for an athlete, as it stems from information from his or her nervous system (Magill and Anderson, 2007; Perez et al., 2009). For example, an athlete can see his or her own movement, and will have intrinsic feedback about the power produced during cycling. However, only very experienced riders are able to gauge their true effort correctly, and even their perception is susceptible to factors such as fatigue. Extrinsic feedback, on the other hand, is provided by an external source such as a cycling computer (Perez et al., 2009; Salmoni et al., 1984; Sigrist et al., 2013); one example is displaying the heart rate of an athlete. Feedback can further be distinguished by its timing into being either concurrent (during action) or terminal (after action) (Sigrist et al., 2013).
Additionally, in the context of motor learning the type of feedback can be categorized as providing “knowledge of performance” (KP) or “knowledge of result” (KR) (Magill and Anderson, 2007). The former (KP) relates to feedback about how the task was performed, while the latter (KR) relates to the outcome of a task. Examples of KP include displaying information about crank torque or produced power; KR feedback, on the other hand, would be the elapsed time for covering a certain time trial course.

Several past studies have investigated the effect of visual feedback on accuracy in mechanical tasks (Annett, 1959; Broker et al., 1989; Keele and Posner, 1968). For very rapid hand movements with durations below 250 ms, it has been shown that visual feedback reduces accuracy (Keele and Posner, 1968). This gives rise to the hypothesis that feedback could decrease accuracy when its processing takes longer than the performed action. Moreover, the processing of additional information could also decrease accuracy.

Several studies have shown that the presence of visual feedback can improve the performance of an athlete or the accuracy with which the task is performed (De Marchis et al., 2013; Henke, 1998; Holderbaum et al., 1969; Perez et al., 2009; Szczepan et al., 2018; 2016). In swimming for example, studies by Szczepan et al. (2018; 2016) and Perez et al. (2009) found that the pacing of swimmers can be improved by visualizing the prescribed pacing with a strip of LED lights on the floor of the pool (Szczepan et al., 2018; 2016) or by displaying lap times (Perez et al., 2009). Moreover, pedal efficiency can be improved by displaying it to the athlete in real time (De Marchis et al., 2013; Henke, 1998; Holderbaum et al., 1969).

In some cases, the presence of certain feedback can have opposing effects on participants. In a non-sporting context, several studies have shown influences of personality traits such as self-efficacy, self-esteem, locus of control and emotional stability on the effects of feedback interventions (Bernichon et al., 2003; Brockner et al., 1987; Brown, 2010; Heimpel et al., 2002; Krenn et al., 2013; Ray, 1974; Shrauger, 1975). Furthermore, narcissism also has an effect on how feedback is processed by individuals (Malkin et al., 2011). Moreover, studies focusing on movement have shown differences between subjects in their ability to adapt to visual feedback. In a study by D’Anna et al. (2014), participants were presented with “visual biofeedback” in order to control their center of pressure; for half the participants the sway path was reduced, while for the other half it increased. Furthermore, while a certain type of visual feedback was sufficient to aid some runners in reducing their tibial acceleration, the same feedback induced an increase in one out of five participants in a study by Crowell et al. (2010). Consequently, it can be expected that not all participants will react beneficially to a given type of feedback.

In this paper we aim to investigate the effects of three different types of visual feedback variants. The variants will be analyzed concerning their ability to help athletes perform constant load tests while keeping desired measures within predefined limits. This study compares two novel visual feedback variants (providing KP and KR), with a traditional variant providing only numbers (KR). We hypothesize that novel variants will allow athletes to better follow heart rate-based and power-based tasks.

Methods

Participants

A total of 33 recreationally trained to well-trained athletes participated in the study. Their characteristics are listed in Table 1. Two participants were excluded from the data analysis since they stated that they had not followed the protocol; therefore, 31 subjects (6 females, 25 males) were included in the analysis. For the analysis of the power-based task, a further participant was excluded due to problems with data recording during the power-based task in one of the trials. One participant reported exercising for only one hour per week; five participants reported exercising between one and three hours, nine for three to five hours, eight for five to ten hours and eight between 10 and 15 hours per week. Eighteen participants reported having no competitive ambitions (i.e. marked “leisure” in the questionnaire, see below) and 13 reported training for amateur-level sports.

Table 1.

Characteristics of the participants.

Variable Male (n = 26) Female (n = 7) All (n = 33)
M ± SD 95% CI M ± SD 95% CI M ± SD 95% CI
Age (years) 29 ± 9 25-32 19 ± 36 27-13 28 ± 9 25-32
Height (m) 1.81 ± 0.06 1.78 – 1.83 1.72 ± 0.05 1.68 – 1.76 1.79 ± 0.07 1.76 – 1.81
Mass (kg) 76.5 ± 7.8 73.4-79.6 61.7 ± 3.1 59.2-64.2 72.8 ± 8.6 69.6 – 76.0
Pred. Absolute PPO (W) 293 ± 62 191-394 215 ± 37 170-270 276 ± 66 170-394
Pred. Relative PPO (W/kg) 3.9 ± 0.8 2.4-6.1 3.5 ± 0.7 2.7-4.7 3.8 ± 0.8 2.4-6.1
Pred. VO2max (ml/min/kg) 44.6 ± 10.2 26.3-70.9 39.5 ± 8.6 29.4-53.5 43.4 ± 9.9 26.3-70.9

Prior to the testing, subjects had been informed about the purpose of the study and potential risks or benefits of the participation. All subjects gave written informed consent before participating in the study. The study was approved by the university’s Human Ethics Committee and was conducted in accordance with the Helsinki Declaration.

Protocol

After an introduction to the study, during which the protocol and the voice commands of the feedback system were explained, participants completed their trials. Each participant completed a trial with each of the feedback variants (see below) in randomized order. The trials were completed in a single visit with a rest phase of approximately 20 minutes (minimum 15 minutes) between trials, depending on the fitness of the athlete. Furthermore, subjects were asked to abstain from caffeine consumption for at least three hours prior to their trials. Additionally, participants were asked questions about their sporting background, such as their amount of training (1, 1–3, 3–5, 5–10, 10–15, 15 or more hours per week) and their ambition (“leisure”, “amateur”, “professional”).

The test protocol for the trials was a modification of the LSCT. The main modification to the original, heart rate-based test protocol was an additional (fourth) power-based stage added to the original LSCT protocol (Lamberts, 2014; Lamberts et al., 2011). The load for this stage was calculated for each participant individually as the average power produced during the third heart rate-based stage, which is generally associated with about 80 ± 3% of the cyclist’s personal PPO (Lamberts, 2014; Lamberts et al., 2011). This was done in order to replicate variants of the test where the intensity levels of the stages are fixed to a certain power value rather than to heart rate. Consequently, the study protocol should be able to investigate the effect of feedback on accuracy in both types of tests (fixed to heart rate or fixed to power) within a single protocol. To avoid fatiguing effects from the volume of a single test, a single power-based stage was used rather than three.

Figure 1 illustrates one trial, which consisted of four load stages and two rest stages. Durations of the phases/stages of the test and their transitions are outlined in Table 2. After the participant started the trial on the phone, the app used its text-to-speech function to remind him or her to begin the trial in the small chainring (command “Small chainring”; as all participants were native or fluent German speakers, commands were given in German. For ease of reading, all commands are reported in English; see Table 3 for translations). This restriction on gearing was taken from the instructions for the original LSCT protocol. After ten seconds the actual test started, which was signaled to the athlete by the command “Go”. The participants started with the first load stage, consisting of six minutes at 60% of their self-reported maximal heart rate (if available: the highest heart rate captured during a maximal cycling effort in the last six months, as reported by the participants; otherwise an estimate based on the Karvonen equation). During this stage, the target was to keep the heart rate within ± two heartbeats of the target value. The start of the second stage was signaled by the command “Large chainring”; switching to the large chainring at this point was part of the original protocol. During the second load stage, participants cycled for six minutes keeping their heart rate within ± one heartbeat of 80% of their self-reported maximal heart rate, with its end indicated by the countdown timer expiring (cf. Table 2). Consequently, all participants were able to correctly identify the switch from the second to the third load stage. This last stage consisted of three minutes with the heart rate within ± one heartbeat of 90% of their self-reported maximal heart rate. Its end was signaled by “Sit up and relax”, which also indicated the beginning of the first rest phase, during which participants were asked to sit up on the bike and relax for 90 seconds. Following this phase, a pop-up on the screen displayed the target power (see above) for the fourth load stage for 30 seconds, and participants were instructed to “start pedaling” by the app. As part of the verbal instructions before the trials, they had been told not to start pedaling too hard, in order to avoid fatiguing effects from excessive pedaling. After these 30 seconds the pop-up closed itself and the fourth load stage started automatically. In this stage the target was to keep the power within ± 10 W of the assigned target.

Figure 1.

Figure 1.

Exemplary trial depiction. The blue solid line visualizes the recorded heart rate. Target heart rate is outlined as the thin solid black line. The dashed red line shows recorded power during the trial. The target value for the power stage is outlined with the thin dashed black line.

Table 2.

Protocol of the test. ‘Load’ denotes the assigned load for each phase/stage; an empty cell denotes that no load was imposed during this phase of the test. HR60%, HR80% and HR90% represent the according percentage of the self-reported maximal heart rate. ‘Acoustic’ and ‘Visual’ show the signals marking the beginning of each phase/stage. ‘Timeout’ denotes that the ‘Time remaining’ display (cf. Figure 1) had expired, i.e. showed ‘00:00.0’, before switching to the next phase.

| Duration | Phase | Load | Acoustic | Visual |
| 10 s | Initial | | Small chainring | |
| 6 min | HR stage 1 | HR60% ± 2 | Go | |
| 6 min | HR stage 2 | HR80% ± 1 | Large chainring | Timeout |
| 3 min | HR stage 3 | HR90% ± 1 | | Timeout |
| 90 s | Rest stage | | Sit up and relax | Timeout |
| 30 s | | | Start pedaling | Popup open |
| 3 min | Power stage | Avg. power of HR stage 3 | | Popup close |
| | End | | | Timeout |
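The staged protocol in Table 2 can be represented as a simple data structure. The following sketch is illustrative only (all names are hypothetical): heart-rate targets are expressed as fractions of the self-reported maximal heart rate, and the power-stage target is left to be resolved at runtime from the average power of HR stage 3.

```python
# Hypothetical encoding of the modified LSCT protocol from Table 2.
# Each entry: (duration in seconds, phase name, target, tolerance, acoustic command).
PROTOCOL = [
    (10,  "Initial",    None,            None, "Small chainring"),
    (360, "HR stage 1", ("HR", 0.60),    2,    "Go"),
    (360, "HR stage 2", ("HR", 0.80),    1,    "Large chainring"),
    (180, "HR stage 3", ("HR", 0.90),    1,    None),
    (90,  "Rest stage", None,            None, "Sit up and relax"),
    (30,  "Popup",      None,            None, "Start pedaling"),
    (180, "Power stage", ("POWER", None), 10,  None),  # target = avg power of HR stage 3
]

def hr_target(max_hr, fraction):
    """Target heart rate in bpm for a stage, rounded to the nearest beat."""
    return round(max_hr * fraction)

def stage_targets(max_hr):
    """Resolve the heart-rate targets of the protocol for one athlete."""
    out = []
    for _duration, phase, target, tol, _cmd in PROTOCOL:
        if target and target[0] == "HR":
            out.append((phase, hr_target(max_hr, target[1]), tol))
    return out
```

For an athlete with a self-reported maximal heart rate of 190 bpm, `stage_targets(190)` yields targets of 114 ± 2, 152 ± 1 and 171 ± 1 bpm for the three heart-rate stages.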

Table 3.

Translations of the voice commands used in the study. Left column lists the commands as used in the paper. Right column lists the commands as used in the trials.

English German
Small chainring Kleines Kettenblatt
Go Los gehts!
Large chainring Großes Kettenblatt
Sit up and relax Aufsetzen und entspannen
Start pedalling Anfangen zu treten

Similar to the original protocol of the LSCT, participants were asked to report their rating of perceived exertion 30 s before the end of each heart rate-based stage (Lamberts et al., 2011). For this purpose, the app displayed a scale with values from six to 20, on which the participant had to select the value representing his or her perceived exertion by pressing it. Participants reported ratings of perceived exertion of (median ± SD) 7 ± 1.64, 12 ± 1.50 and 16 ± 1.19 for the heart rate-based stages.

The authors would like to point out that the modifications of the LSCT into the protocol used in this study have no physiological underpinning. The modification was not done in order to alter the predictive power of the test, but rather to test for both heart rate and power accuracy with a single protocol.

Feedback Variants

All variants used in the study are outlined in Figure 2. The ‘Numbers’ variant (Figure 2a) represents the implementation of the traditional feedback variant. It displays current heart rate, a countdown timer and current power using only numbers. In the ‘Bars’ version (Figure 2b) the app displays the target value and its deviation using a vertical bar. In the middle of the bar the target value is written as a number. Deviations from the target value are visualized using a red bar. Additionally, the actual value is displayed on the right edge of the display. Moreover, a yellow box highlights the “allowed” deviations from the target value. The ‘Tacho’ feedback (Figure 2c) visualizes the current value using a needle pointing to values on a semicircle. A red area on this semicircle highlights the desired range of values; i.e. the target value at the top position and allowed deviations to the left and right of it.

Figure 2.

Figure 2.

Feedback variants used in the study. (a) shows the ‘Numbers’ variant, where information is displayed using only numbers. (b) uses a horizontal bar to display deviations from the target values. (c) displays the value using a tacho needle. All figures show the configuration of the displays for the heart rate task.

Since the evaluation protocol consisted of heart rate-based and power-based tasks, two versions of each feedback variant were created. For heart rate-based tasks, the uppermost part showed the heart rate or its deviation using the feedback variant (‘Numbers’, ‘Bars’, or ‘Tacho’) while current power was displayed in the lower part of the display as a number (c.f. Figure 2). During the power-based task the arrangement of feedback was flipped, i.e. power was displayed in the upper part of the display using the respective feedback variant while heart rate was displayed as a number in the lower part of the display.

Data Collection

In order to evaluate the feedback variants, a custom app was developed using the PEGASOS framework (Dobiasch and Baca, 2016; Dobiasch et al., 2019). The app is tailored to Android-based smartphones; in the study a Samsung A3 device running Android 8.0 was used. This phone supports ANT+ (ANT Wireless, Canada) and is thus capable of collecting data from ANT+ sensors. Heart rate was continuously monitored (Garmin Soft Strap HRM, Switzerland) at a rate of 1 Hz. Power output was collected with an SRM (Schoberer Rad Messtechnik, Welldorf, Germany) power meter at a rate of ~4 Hz. Heart rate was expressed in beats per minute (bpm), while power output was expressed in watts (W).

For the feedback provided, the raw values from the heart-rate monitor were displayed as processed by the sensor (values in bpm). Data from the power meter were smoothed using a 3 s moving average. All visualizations were updated at a rate of 1 Hz.
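The displayed-power smoothing described above can be sketched as a moving average over the most recent samples. This is a minimal sketch, not the app's actual implementation; the window length of 12 samples is an assumption derived from the stated 3 s window and ~4 Hz sampling rate.

```python
from collections import deque

class PowerSmoother:
    """Smooths power-meter samples with a moving average over roughly the
    last `window_s` seconds, as used for the displayed power value.
    (Hypothetical helper; window size assumes a steady sample rate.)"""

    def __init__(self, window_s=3.0, sample_rate_hz=4.0):
        # deque with maxlen automatically discards samples older than the window
        self.window = deque(maxlen=int(window_s * sample_rate_hz))

    def add(self, watts):
        """Record a new raw sample and return the smoothed value."""
        self.window.append(watts)
        return self.value()

    def value(self):
        """Current moving average (0.0 before any sample arrives)."""
        if not self.window:
            return 0.0
        return sum(self.window) / len(self.window)
```

The smoothed value would then be re-rendered at the 1 Hz display update rate, independent of the ~4 Hz sensor rate.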

All trials were performed on a standard racing bike (Trek, USA) which was mounted on a mechanical stationary roller (Tacx Sirius, Tacx BV, The Netherlands). The smart phone with the installed app was mounted on the handlebars of the bike using a standard bike computer mount. The setup and view for the participants is shown in Figure 3.

Figure 3.

Figure 3.

Experimental setup. The image on the left shows the general setup of the experiments. Located in a quiet room, participants faced a white wall in order to reduce external stimuli. The picture on the right shows the view of the participants during the experiment.

Calculations: RMSE

Root-Mean-Square Error (RMSE) was calculated for the investigation of deviations from the targets in both tasks using the following formula:

\[ \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - t\right)^2} \]

where t is the target value for the respective task, y_i denotes a measured value and n is the number of measured values. In the heart rate task, the first minute of each heart rate stage was removed to account for heart-rate onset. Between minutes two and six the target heart rate was defined as HR60%, between minutes eight and twelve as HR80%, and between minutes 14 and 15 as HR90%, where HR60%, HR80% and HR90% represent the according percentage of the self-reported maximal heart rate. For the power task the first minute was also excluded from the analysis in order to allow participants to adjust to the target power. RMSE was calculated for minutes two and three using the individually assigned target power (the average power produced during the third heart rate-based stage) as the target value.
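The calculation above can be sketched as follows. Function names are hypothetical, and samples are assumed to be (time in seconds, value) pairs at 1 Hz, with the first minute of each stage discarded as described.

```python
import math

def rmse(values, target):
    """Root-mean-square error of measured values against a constant target."""
    return math.sqrt(sum((y - target) ** 2 for y in values) / len(values))

def stage_rmse(samples, stage_start_s, stage_end_s, target, skip_s=60):
    """RMSE over one stage, excluding the first `skip_s` seconds to account
    for heart-rate onset (or adjustment to the target power).
    `samples` are (time_s, value) pairs."""
    window = [v for t, v in samples
              if stage_start_s + skip_s <= t < stage_end_s]
    return rmse(window, target)
```

For a heart-rate stage running from minute 0 to minute 6, `stage_rmse(samples, 0, 360, hr_target)` would cover minutes two to six, matching the definition above.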

Individuality

In order to test the hypothesis that individuals react differently to the feedback variants, the feedback variants were sorted for each participant individually. This was done separately for both investigated scenarios (heart rate and power): for each participant, the feedback variants were sorted based on the overall RMSE during the heart rate-based task and, independently, on the RMSE of the power-based task. The three variants were then assigned the labels ‘Best’, ‘Mid’ and ‘Worst’ based on their order. In this paper, we refer to this categorization as the individuality category.
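The per-participant categorization can be sketched as a simple ranking by RMSE (function name hypothetical):

```python
def individuality_categories(rmse_by_variant):
    """Given {variant: RMSE} for one participant and one task, label each
    variant 'Best', 'Mid' or 'Worst' by ascending RMSE (lower error = better)."""
    ranked = sorted(rmse_by_variant, key=rmse_by_variant.get)
    labels = ("Best", "Mid", "Worst")
    return {variant: labels[i] for i, variant in enumerate(ranked)}
```

For example, a participant with RMSE values of 3.2 (‘Numbers’), 2.9 (‘Bars’) and 3.5 (‘Tacho’) would have ‘Bars’ labeled Best, ‘Numbers’ Mid and ‘Tacho’ Worst for that task.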

Statistical analyses

All calculations were performed with R version 3.4.4 (R Core Team, 2018). Differences between feedback variants and participants were assessed by analysis of variance (ANOVA), with sphericity assessed using Mauchly’s test. If this criterion was not met, the Greenhouse-Geisser correction was used. Effect sizes were calculated using partial eta squared (η2).

Results

The results for the heart rate-based task and power-based task were analyzed independently.

Heart Rate Feedback

For the heart rate-based task, as shown in Figure 4a, no significant effect of feedback variant on RMSE was found: F(1.52,45.74) = 0.34, p > 0.05, η2 = 0.003. Descriptive statistics for the RMSE are listed in Table 4, where it can be observed that ‘Numbers’ shows the highest mean RMSE in all scenarios, while ‘Tacho’ shows the lowest values. However, high standard deviations are also observable; for stage 1, for example, they range from 1.84 to 2.79 (cf. Table 4). Furthermore, no effects of training amount or ambition on RMSE in the heart rate-based task were found (both p > 0.05).

Figure 4.

Figure 4.

RMSE of the systems for the heart-rate task. (a) shows differences in RMSE between the feedback variants for the heart-rate task on an absolute level, (b) shows differences in RMSE of the systems for the heart-rate task on an individual level while (c) shows the number of times a variant was sorted into the respective category. *** significant difference (p < 0.001), ** significant difference (p < 0.01)

Table 4.

Descriptive statistics of absolute heart-rate RMSE. For ‘all stages’ the RMSE of all stages is calculated as the weighted average of all stages with the first minute of each stage excluded. Other columns show the RMSE of the individual stage with the first minute removed from analysis.

| Variant | All stages | Stage 1 | Stage 2 | Stage 3 |
| Numbers | 3.20 ± 1.50 bpm | 3.22 ± 1.84 bpm | 3.15 ± 1.84 bpm | 2.39 ± 1.41 bpm |
| Bars | 3.12 ± 1.46 bpm | 3.16 ± 1.95 bpm | 3.12 ± 1.57 bpm | 2.27 ± 1.10 bpm |
| Tacho | 3.00 ± 1.83 bpm | 3.16 ± 2.79 bpm | 2.86 ± 1.41 bpm | 1.86 ± 0.91 bpm |

For further analysis, the feedback systems were sorted from best to worst for every participant individually. The results are shown in Figure 4b. In this scenario a significant effect of feedback variant was found for RMSE (F(1.19,35.80) = 22.93, p < 0.001, η2 = 0.108). However, it has to be pointed out that, as can be observed in Figure 4c, the design of this analysis is unbalanced, as the frequency with which each feedback variant falls into the categories (Best, Mid, Worst) is not the same. Post hoc analysis revealed significant pairwise differences in RMSE between all categories: Best (2.54 ± 0.94), Mid (3.00 ± 1.35) and Worst (3.79 ± 2.04) (all p ≤ 0.001).

Power Feedback

For the power-based task, as shown in Figure 5a, no significant effect of feedback on RMSE was found on an absolute level: F(1.50,43.58) = 0.01, p > 0.05, η2 = 0.000. The mean RMSE values for the feedback variants were 14.32 ± 5.62 (‘Numbers’), 14.39 ± 6.69 (‘Bars’) and 14.31 ± 6.71 (‘Tacho’). Further descriptive statistics are listed in Table 5. Furthermore, no effects of training amount or ambition on RMSE in the power-based task were found (all p > 0.05).

Figure 5.

Figure 5.

RMSE of the systems for the power task. (a) shows differences in RMSE between the feedback variants for the power task on an absolute level, (b) shows differences in RMSE of the systems for the power task on an individual level while (c) shows the number of times a variant was sorted into the respective category. *** significant difference (p < 0.001).

Table 5.

Descriptive statistics of absolute power RMSE. First column shows the RMSE of minutes two and three. Columns two and three show RMSE over the first 30 and 45 seconds. The last column shows RMSE over the entire three minutes.

| Variant | RMSE | First 30 s | First 45 s | Full stage |
| Numbers | 14.46 ± 6.73 W | 21.01 ± 9.27 W | 19.08 ± 8.11 W | 15.84 ± 6.42 W |
| Bars | 14.47 ± 5.63 W | 26.56 ± 11.07 W | 23.62 ± 8.95 W | 17.61 ± 5.15 W |
| Tacho | 14.36 ± 6.74 W | 22.95 ± 11.60 W | 20.76 ± 10.54 W | 16.47 ± 7.48 W |

Similar to the analysis of the heart-rate values, the feedback variants were categorized individually for each participant. Again, this revealed a significant effect of the individuality category on RMSE (F(1.32,38.26) = 39.84, p < 0.001, η2 = 0.068) (Figure 5b) and significant differences between Best (12.28 ± 5.44), Mid (14.50 ± 6.13) and Worst (16.25 ± 6.80) variants for the participants (for all combinations p < 0.001).

Discussion

In this study, athletes were presented with two novel designs visualizing instantaneous accuracy as additional information, and with a traditional design displaying only numerical information. Consequently, the novel designs present athletes with more information, as they provide both knowledge of performance and knowledge of result. The findings of this study suggest that the additional information can be a useful aid for athletes in reducing their deviation from the target value. Nevertheless, in some cases this additional information might hinder accuracy, as discussed below.

As visual feedback can decrease accuracy in mechanical tasks with rapid movements (Keele and Posner, 1968), significant differences between the systems would have been expected. For the power-based task the boundaries of ± 10 W were deliberately chosen to be narrow; due to this restriction, even small or short-term errors influenced the result. However, the results of this study did not show significant differences between the feedback variants on an absolute level. Nevertheless, the mean RMSE of the ‘Bars’ variant was higher in the power task for all analyzed scenarios. Due to the delayed reaction of the heart rate to singular movements, we believe that this aspect of feedback is not relevant for the heart rate-based task, as it cannot be considered a rapid movement. This is also supported by our finding that participants were, on average, more accurate with added visual feedback in the heart rate-based task. Participants reported familiarity with the ‘Tacho’ design due to its resemblance to a speed gauge; consequently, participants might have performed the task better with this variant because of that familiarity, and some participants gave verbal evidence for this during their trials. However, it has to be pointed out that a familiarity with the ‘Numbers’ variant also exists, since it is integrated in daily life (e.g. the display of heart rate on a standard heart rate monitor, or radar speed signs on the street). These differences in familiarity could not have been eliminated with prior familiarization trials, due to the complex and individual nature of familiarity. Future studies should further investigate this effect of individuality.

On an absolute level, no differences between feedback variants with respect to RMSE were identified for either task. This seems to be in line with other studies that likewise identified no differences on an absolute level when modifying the feedback. For example, Perez et al. (2009) found no differences in accuracy when comparing feedback given by a display (an underwater chronometer) to feedback given by a coach.

Previous studies have shown increased accuracy in the presence of visual feedback (De Marchis et al., 2013; Henke, 1998; Holderbaum et al., 1969; Perez et al., 2009; Szczepan et al., 2018; 2016). The ‘Numbers’ variant does not provide feedback in the form of knowledge of performance, as it does not visualize the deviation from the target in a task whose aim is to stay at a certain target value, while the novel variants do. This could explain the (non-significantly) improved RMSE of the novel variants compared to the traditional ‘Numbers’ variant in the heart rate-based task. However, since no significant differences between the novel variants and the traditional variant were identified for the power-based task, this result does not seem to generalize.

Although the non-significantly lower mean RMSE of the novel feedback variants might suggest that they are beneficial, even better results can be achieved in both tasks by individualization. For all analyzed conditions, significant differences were found between the best, mid and worst results for the participants. This finding is in line with research on feedback in non-sporting situations (Bernichon et al., 2003; Brockner et al., 1987; Brown, 2010; Heimpel et al., 2002; Krenn et al., 2013; Malkin et al., 2011; Ray, 1974; Shrauger, 1975). Moreover, similar results have been shown for postural control as well as for running (Crowell et al., 2010; D’Anna et al., 2014).

The findings of this study seem to contrast with the conclusions of Perez et al. (2009), who hypothesized that the lack of a significant interaction between swimmer and feedback factors suggests that no individual differences exist. However, it should be noted that their findings are in turn contradicted by the studies presented above. Moreover, their study was not designed to detect differences between individuals with respect to feedback.

One limitation of the present study is that participants were tested only once with each feedback variant. Consequently, it cannot be answered whether a certain feedback variant would remain the best choice for an athlete over time, nor how RMSE changes with repetition of the test. Future studies should investigate changes in test execution over repeated tests.

Another argument against a one-fits-all solution is that the relative performance of the feedback variants differs between the heart rate-based and power-based tasks. For example, in the power-based task the 'Bars' variant achieved a lower mean accuracy than 'Numbers', whereas no such result was observed in the heart rate-based task. Consequently, we assume that the best feedback may also depend on the task.

Inspection of the data suggests that the 'Tacho' variant might be preferable if a one-fits-all solution is desired: in most of the investigated scenarios, this variant had the lowest mean RMSE. However, it has to be kept in mind that the differences were not significant and that most values were accompanied by high standard deviations. Consequently, the results of our study suggest that an individualized solution should be used whenever possible.

Conclusion

The aim of this study was to investigate the effects of visual feedback on the accuracy of athletes during heart rate- and power-based tasks. It provides evidence for the hypothesis that a one-fits-all solution might not exist, as no significant difference in accuracy between the variants was found for the power-based task. Furthermore, for the heart rate-based task, the effect size for the difference in accuracy between the feedback variants was smaller than the effect size for the differences between the individually categorized feedback. Additionally, significant differences were found between the individually categorized feedback in the power-based task. Consequently, we conclude that the accuracy of testing results can be improved by providing individualized feedback. However, further research is needed to investigate how these improvements change over time.

Acknowledgements

This research was funded by the Hochschuljubiläumsstiftung der Stadt Wien, grant number H-327152/2018. The experiments comply with the current laws of the country in which they were performed. The authors have no conflict of interest to declare. The datasets generated and/or analyzed during the current study are not publicly available, but are available from the corresponding author, who was an organizer of the study.

Biographies


Martin DOBIASCH

Employment

Researcher, Danube University Krems, Austria; PhD student, University of Vienna, Centre for Sport Science and University Sports

Degree

Masters

Research interests

Computer science in sports, mobile coaching, Technology-enhanced Learning

E-mail: martin.dobiasch@univie.ac.at


Björn KRENN

Employment

Post-doc researcher University of Vienna, Centre for Sport Science and University Sports

Degree

PhD

Research interests

Sport and Exercise Psychology, Executive Functions, Perception & Attention, Feedback interventions

E-mail: bjoern.krenn@univie.ac.at


Robert LAMBERTS

Employment

Prof., Division of Biokinetics, Department of Sport Science, Stellenbosch University, and developer of the LSCT.

Degree

PhD, FECSS

Research interests

Optimal balance between training load and recovery, overreaching/overtraining, heart rate recovery and heart rate variability. Application of scientific knowledge in (elite) sports: cycling (road, mountain biking and cyclo-cross) – LSCT; rowing – SMRT; and running – LSRT. Orthopaedics, sports injuries and healthy aging with cerebral palsy.

E-mail: rplam@hotmail.com


Arnold BACA

Employment

Prof. University of Vienna, Centre for Sport Science and University Sports, Department for Biomechanics, Kinesiology and Computer Science in Sport, University of Vienna.

Degree

PhD

Research interests

Biomechanics, 3D motion analysis, pervasive computing, mobile coaching, e-learning, performance analysis

E-mail: arnold.baca@univie.ac.at

References

  1. Annett J. (1959) Learning a Pressure under Conditions of Immediate and Delayed Knowledge of Results. Quarterly Journal of Experimental Psychology 11(1), 3-15. https://doi.org/10.1080/17470215908416281
  2. Bernichon T., Cook K. E., Brown J. D. (2003) Seeking self-evaluative feedback: The interactive role of global self-esteem and specific self-views. Journal of Personality and Social Psychology 84(1), 194. https://doi.org/10.1037/0022-3514.84.1.194
  3. Brockner J., Derr W. R., Laing W. N. (1987) Self-esteem and reactions to negative feedback: Toward greater generalizability. Journal of Research in Personality 21(3), 318-333. https://doi.org/10.1016/0092-6566(87)90014-6
  4. Broker J. P., Gregor R. J., Schmidt R. A. (1989) Extrinsic feedback and the learning of cycling kinetic patterns. Journal of Biomechanics 22(10), 991. https://doi.org/10.1016/0021-9290(89)90134-6
  5. Brown J. D. (2010) High self-esteem buffers negative feedback: Once more with feeling. Cognition and Emotion 24(8), 1389-1404. https://doi.org/10.1080/02699930903504405
  6. Capostagno B., Lambert M. I., Lamberts R. P. (2014) Standardized versus customized high-intensity training: Effects on cycling performance. International Journal of Sports Physiology and Performance 9(2), 292-301. https://doi.org/10.1123/ijspp.2012-0389
  7. Capostagno B., Lambert M. I., Lamberts R. P. (2021) Analysis of a Submaximal Cycle Test to Monitor Adaptations to Training: Implications for Optimizing Training Prescription. Journal of Strength and Conditioning Research 35(4), 924-930. https://doi.org/10.1519/JSC.0000000000003227
  8. Coutts A. J., Slattery K. M., Wallace L. K. (2007) Practical tests for monitoring performance, fatigue and recovery in triathletes. Journal of Science and Medicine in Sport 10(6), 372-381. https://doi.org/10.1016/j.jsams.2007.02.007
  9. Crowell H. P., Milner C. E., Hamill J., Davis I. S. (2010) Reducing impact loading during running with the use of real-time visual feedback. Journal of Orthopaedic and Sports Physical Therapy 40(4), 206-213. https://doi.org/10.2519/jospt.2010.3166
  10. D'Anna C., Bibbo D., De Marchis C., Goffredo M., Schmid M., Conforto S. (2014) Comparing different visual biofeedbacks in static posturography. In: 2014 IEEE-EMBS International Conference on Biomedical and Health Informatics, BHI 2014. 380-383. https://doi.org/10.1109/BHI.2014.6864382
  11. De Marchis C., Schmid M., Bibbo D., Castronovo A. M., D'Alessio T., Conforto S. (2013) Feedback of mechanical effectiveness induces adaptations in motor modules during cycling. Frontiers in Computational Neuroscience 7, 1-12. https://doi.org/10.3389/fncom.2013.00035
  12. Decroix L., Lamberts R. P., Meeusen R. (2018) Can the Lamberts and Lambert submaximal cycle test reflect overreaching in professional cyclists? International Journal of Sports Physiology and Performance 13(1), 23-28. https://doi.org/10.1123/ijspp.2016-0685
  13. Dobiasch M., Baca A. (2016) Pegasos - Ein Generator für Feedbacksysteme [Pegasos - A Generator for Feedback Systems]. In: 11. Symposium der dvs-Sektion Sportinformatik, Magdeburg. p. 18.
  14. Dobiasch M., Stöckl M., Baca A. (2019) Direct mobile coaching & a software framework for the creation of mobile feedback systems. In: Book of Abstracts. 12th International Symposium on Computer Science in Sport, IACSS 2019, Moscow, Russia, 8-10 July 2019. 109-110.
  15. Hammes D., Skorski S., Schwindling S., Ferrauti A., Pfeiffer M., Kellmann M., Meyer T. (2016) Can the Lamberts and Lambert submaximal cycle test indicate fatigue and recovery in trained cyclists? International Journal of Sports Physiology and Performance 11(3), 328-336. https://doi.org/10.1123/ijspp.2015-0119
  16. Heimpel S. A., Wood J. V., Marshall M. A., Brown J. D. (2002) Do people with low self-esteem really want to feel better? Self-esteem differences in motivation to repair negative moods. Journal of Personality and Social Psychology 82(1), 128. https://doi.org/10.1037/0022-3514.82.1.128
  17. Henke T. (1998) Real-time feedback of pedal forces for the optimization of pedaling technique in competitive cycling. ISBS Proceedings, 174-177.
  18. Holderbaum G. G., Guimaraes A. C. S., Petersen R. D. D. S. (1969) The use of augmented visual feedback on the learning of the recovering phase of pedaling. Brazilian Journal of Motor Behavior 4(1), 1-7. https://doi.org/10.20338/bjmb.v4i1.18
  19. Keele S. W., Posner M. I. (1968) Processing of Visual Feedback in Rapid Movements. Journal of Experimental Psychology 77(1), 155-158. https://doi.org/10.1037/h0025754
  20. Krenn B., Wuerth S., Hergovich A. (2013) Individual differences concerning the impact of feedback - Specifying the role of core self-evaluations. Studia Psychologica 55(2), 95-110. https://doi.org/10.21909/sp.2013.02.628
  21. Lamberts R. P. (2014) Predicting cycling performance in trained to elite male and female cyclists. International Journal of Sports Physiology and Performance 9(4), 610-614. https://doi.org/10.1123/ijspp.2013-0040a
  22. Lamberts R. P., Rietjens G. J., Tijdink H. H., Noakes T. D., Lambert M. I. (2010) Measuring submaximal performance parameters to monitor fatigue and predict cycling performance: a case study of a world-class cyclo-cross cyclist. European Journal of Applied Physiology 108(1), 183-190. https://doi.org/10.1007/s00421-009-1291-3
  23. Lamberts R. P., Swart J., Noakes T. D., Lambert M. I. (2011) A novel submaximal cycle test to monitor fatigue and predict cycling performance. British Journal of Sports Medicine 45(10), 797-804. https://doi.org/10.1136/bjsm.2009.061325
  24. Magill R. A., Anderson D. I. (2007) Motor learning and control: Concepts and applications. Volume 11. McGraw-Hill, New York.
  25. Malkin M. L., Barry C. T., Zeigler-Hill V. (2011) Covert narcissism as a predictor of internalizing symptoms after performance feedback in adolescents. Personality and Individual Differences 51(5), 623-628. https://doi.org/10.1016/j.paid.2011.05.031
  26. Perez P., Llana S., Brizuela G., Encarnación A. (2009) Effects of three feedback conditions on aerobic swim speeds. Journal of Sports Science and Medicine 8(1), 30-36. https://pubmed.ncbi.nlm.nih.gov/24150553/
  27. R Core Team (2018) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
  28. Ray W. J. (1974) The Relationship of Locus of Control, Self-Report Measures and Feedback to the Voluntary Control of Heart Rate. Psychophysiology 11(5), 527-534. https://doi.org/10.1111/j.1469-8986.1974.tb01108.x
  29. Salmoni A. W., Schmidt R. A., Walter C. B. (1984) Knowledge of Results and Motor Learning: A Review and Critical Reappraisal. Psychological Bulletin 95(3), 355-386. https://doi.org/10.1037/0033-2909.95.3.355
  30. Shrauger J. S. (1975) Responses to evaluation as a function of initial self-perceptions. Psychological Bulletin 82(4), 581. https://doi.org/10.1037/h0076791
  31. Sigrist R., Rauter G., Riener R., Wolf P. (2013) Augmented visual, auditory, haptic and multimodal feedback in motor learning: A review. Psychonomic Bulletin and Review 20(1), 21-53. https://doi.org/10.3758/s13423-012-0333-8
  32. Szczepan S., Zaton K., Borkowski J. (2018) The Effect of Visual Speed Control in Swimmers' Threshold Training. Central European Journal of Sport Sciences and Medicine 24(4), 25-34. https://doi.org/10.18276/cej.2018.4-03
  33. Szczepan S., Zaton K., Klarowicz A. (2016) The Effect of Concurrent Visual Feedback on Controlling Swimming Speed. Polish Journal of Sport and Tourism 23(1), 3-6. https://doi.org/10.1515/pjst-2016-0001
  34. Taylor K.-L., Chapman D., Cronin J., Newton M., Gill N. (2012) Fatigue monitoring in high performance sport: a survey of current trends. Journal of Australian Strength & Conditioning 20(1), 12-23.

Articles from Journal of Sports Science & Medicine are provided here courtesy of Dept. of Sports Medicine, Medical Faculty of Uludag University
