Biology of Sport. 2019 Oct 10;36(4):317–321. doi: 10.5114/biolsport.2019.88754

The inter-unit and inter-model reliability of GNSS STATSports Apex and Viper units in measuring peak speed over 5, 10, 15, 20 and 30 meters

Marco Beato 1, Kevin L de Keijzer 1
PMCID: PMC6945047  PMID: 31938002

Abstract

The objective of this investigation was to evaluate the inter-unit reliability (Apex vs. Apex; Viper vs. Viper) and inter-model reliability (Apex vs. Viper) from 5 to 30 m sprinting activity. Ten team sport players (age 22 ± 1 years) were enrolled in this crossover study and performed 1271 trials (436 Apex vs. Apex, 464 Viper vs. Viper, 371 Apex vs. Viper) consisting of 5–10 m, 10–15 m, 15–20 m, and 20–30 m sprints. Inter-unit reliability was calculated using the intra-class correlation coefficient (ICC) with 95% confidence interval (CI) and coefficient of variation (CV), while between-unit and between-model analyses were subsequently performed to evaluate differences in Vpeak. Apex (10 Hz) units had excellent inter-unit reliability for all distances, whereas Viper (10 Hz) units had good to excellent reliability. The CV was good (< 5%) for both GNSS models. Significant differences were found in Vpeak in Sprint 5–10 = 0.13 CI (0.08, 0.182) m.s-1, Sprint 10–15 = 0.06 CI (0.01, 0.1) m.s-1, and in Sprint overall = 0.06 CI (0.03, 0.09) m.s-1. Both Viper and Apex units can consistently report Vpeak measurements since good to excellent ICC and good CV were found. However, Vpeak measurements are significantly different between models for distances less than 15 m. In conclusion, this study shows that differences exist among manufacturers’ models and that the two GNSS models should not be used interchangeably to quantify Vpeak.

Keywords: GPS, Team sports, Sprint, Performance, Football, Soccer

INTRODUCTION

The monitoring of external load metrics such as total distance, high speed running, and peak velocity (Vpeak) via global navigation satellite systems (GNSS) is now commonplace at the elite level of team sports [1,2]. GNSS-based metrics are used at the elite level to help coaches make daily informed decisions, which can ensure adequate recovery among training sessions and have a critical impact on the maximization of physical adaptations during the training process [3]. Large variability in accuracy between manufacturers’ models and units has been previously identified [4,5], which may significantly undermine practitioners’ ability to monitor and plan training effectively.

STATSports GNSS (Viper and Apex units) are among the most common devices used in elite sports (e.g. English Premier League), and their validity has been previously reported over 20 m [4–7]. The main difference between the two GNSS is that the Apex, which is the newest model released by STATSports, is capable of acquiring and tracking multiple satellite systems (e.g. GPS [global positioning systems], GLONASS, BeiDou) to provide the best possible positional information, while Viper units are based only on GPS [6]. No evidence exists about inter-unit reliability for Viper units and inter-model reliability between Viper and Apex units. Previous investigations have demonstrated that both validity and reliability are specific to the GNSS units assessed and cannot be extended to other models [6,8]. Therefore, it is possible that these GNSS models, even if produced by the same manufacturer, may report differences in Vpeak monitoring.

Considering that the majority of training and competitive actions in intermittent sports occur within 5–30 m [1,9,10], it is crucial that an investigation determines the Vpeak reliability of STATSports Apex and Viper units during such sprinting activities. This information is missing in the literature and could have a critical role in elite sports and for research purposes. This is crucial because previous research suggested using GNSS units for the monitoring of Vpeak during testing protocols [3,6,11,12]. Furthermore, differences between the Apex and Viper units may clarify whether previously recorded data using Viper units can be compared to new data recorded by Apex units. Therefore, this study aims, firstly, to evaluate the inter-unit reliability (Apex vs. Apex; Viper vs. Viper) and, secondly, to assess the inter-model reliability (Apex vs. Viper) from 5 to 30 m sprinting activity.

MATERIALS AND METHODS

Subjects

Ten male team sports players were enrolled (mean ± standard deviation [SD], age 22 ± 1 years, body mass 71.8 ± 5 kg, and height 1.75 ± 0.06 m) in this crossover study. The study was performed in accordance with the Declaration of Helsinki for studies on human subjects. The Institutional Ethics Board of the University of Suffolk (Ipswich, UK) approved the experimental protocol (RETH19/044). Written informed consent was obtained from all participants of the current investigation.

Procedures

GNSS Apex (STATSports, Northern Ireland) and Viper (STATSports, Northern Ireland) data were collected on an outdoor athletics track, away from tall buildings. Data collection was only performed in good meteorological conditions to enhance satellite reception, following the recommendations of recent investigations [5,6]. Prior to each session, a standardized warm-up was led by an accredited strength and conditioning coach to reduce the risk of muscle injuries. The Apex and Viper units were turned on 20 minutes prior to the beginning of the protocol. For Apex units, the number of connected satellites ranged between 17 and 21, while the horizontal dilution of precision was 0.4 ± 0. By contrast, Viper units do not report this information. Both units were placed in manufacturer-provided vests on the participant’s back about 3 cm from each other, midway between the scapulae, to permit equal exposure to the embedded antenna [6,7]. Comparisons consisted of Apex vs. Apex, Viper vs. Viper, and Apex vs. Viper during linear sprints without changes of direction. Sprint distance was measured in advance with a tape measure and marked with cones. Sprinting distances were categorized as 5–10 m, 10–15 m, 15–20 m, and 20–30 m. Prior to each protocol, participants were required to stand still for 10 seconds at the starting point to facilitate data analysis; they were then required to sprint maximally to replicate competition-specific conditions. Apex and Viper data were downloaded and further analyzed with the respective software (Apex 10 Hz version 2.0.2.4 and Viper version 1.2).

Statistical analysis

A total of 1271 trials were analyzed in the current investigation, divided into 436 trials used to test Apex inter-unit reliability, 464 trials to test Viper inter-unit reliability, and 371 trials to test Apex vs. Viper inter-model reliability. All descriptive data are presented as means ± SD. Inter-unit and inter-model reliability were calculated using the intra-class correlation coefficient (ICC), which was interpreted as follows: ICC ≥ 0.9 = excellent; 0.9 > ICC ≥ 0.8 = good; 0.8 > ICC ≥ 0.7 = acceptable; 0.7 > ICC ≥ 0.6 = questionable; 0.6 > ICC ≥ 0.5 = poor; ICC < 0.5 = unacceptable [13]. Technical error of measurement (TE) was calculated using the following formula: TE = SD × √(1 − ICC) [13,14]. TE was also reported as the coefficient of variation (CV), which was considered good when < 5%. Between-unit and between-model analyses were performed using the t-test. Statistical significance was set at p < 0.05. Confidence intervals (CI) at 95% were reported. Effect sizes (ES) were calculated using Cohen’s d and interpreted with the Hopkins et al. [15] scale of magnitudes. Statistical analysis was performed using JASP software version 0.9.2 (Amsterdam, Netherlands).
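As an illustration only (this is not the authors' analysis code, and the function names and example values are ours), the reliability computations described above can be sketched in Python, with the ICC interpretation scale taken from the thresholds listed in this section:

```python
import math

def icc_label(icc: float) -> str:
    """Qualitative interpretation of the ICC, per the scale in this section."""
    if icc >= 0.9:
        return "excellent"
    if icc >= 0.8:
        return "good"
    if icc >= 0.7:
        return "acceptable"
    if icc >= 0.6:
        return "questionable"
    if icc >= 0.5:
        return "poor"
    return "unacceptable"

def technical_error(sd: float, icc: float) -> float:
    """TE = SD * sqrt(1 - ICC), in the units of the measurement (here m/s)."""
    return sd * math.sqrt(1.0 - icc)

def cv_percent(te: float, mean: float) -> float:
    """TE expressed as a coefficient of variation (%); considered good when < 5%."""
    return 100.0 * te / mean

# Illustrative values of the same order as those reported in the Results:
# an ICC of 0.96 with SD = 0.76 m/s and a mean Vpeak of about 5.3 m/s.
te = technical_error(0.76, 0.96)
print(icc_label(0.96), round(te, 2), round(cv_percent(te, 5.3), 1))
```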

RESULTS

Inter-unit reliability and inter-model reliability analyses are reported in Tables 1 and 2. Between-unit and between-model analyses are reported in Table 3. Between-model analysis (Apex vs. Viper) revealed a significant difference (delta difference, 95% CI) in Vpeak in Sprint 5–10 = 0.13 CI (0.08, 0.182) m.s-1, ES = 0.44 (small); Sprint 10–15 = 0.06 CI (0.01, 0.1) m.s-1, ES = 0.20 (small); and in Sprint overall = 0.06 CI (0.03, 0.09) m.s-1, ES = 0.22 (small).

TABLE 1.

Reliability data recorded during 5, 10, 15, 20 and 30 m sprints (10 players, 1271 sprints).

| Variables, Vpeak (m.s-1) | Apex inter-unit ICC (95% CI) | Interpretation | Viper inter-unit ICC (95% CI) | Interpretation | Apex vs. Viper inter-model ICC (95% CI) | Interpretation |
| Sprint 5–10 m | 0.96 (0.95, 0.97) | excellent | 0.91 (0.90, 0.92) | excellent | 0.95 (0.94, 0.96) | excellent |
| Sprint 10–15 m | 0.95 (0.94, 0.96) | excellent | 0.90 (0.88, 0.91) | excellent | 0.94 (0.92, 0.95) | excellent |
| Sprint 15–20 m | 0.95 (0.94, 0.96) | excellent | 0.89 (0.87, 0.90) | good | 0.92 (0.90, 0.94) | excellent |
| Sprint 20–30 m | 0.97 (0.96, 0.98) | excellent | 0.91 (0.89, 0.93) | excellent | 0.96 (0.95, 0.96) | excellent |
| Sprint overall (5 to 30 m) | 0.99 (0.98, 0.99) | excellent | 0.97 (0.96, 0.97) | excellent | 0.98 (0.98, 0.99) | excellent |

Vpeak = Peak velocity, ICC = intra-class correlation coefficient, CI = Confidence Intervals, m = meters, s = seconds.

TABLE 2.

Reliability data recorded during 5, 10, 15, 20 and 30 m sprints (10 players, 1271 sprints).

| Variables, Vpeak (m.s-1) | Apex inter-unit TE (CV%) | Interpretation | Viper inter-unit TE (CV%) | Interpretation | Apex vs. Viper inter-model TE (CV%) | Interpretation |
| Sprint 5–10 m | 0.15 (2.91%) | good | 0.25 (4.94%) | good | 0.15 (2.85%) | good |
| Sprint 10–15 m | 0.14 (2.18%) | good | 0.20 (4.40%) | good | 0.13 (2.15%) | good |
| Sprint 15–20 m | 0.14 (2.01%) | good | 0.20 (3.09%) | good | 0.13 (1.99%) | good |
| Sprint 20–30 m | 0.12 (1.64%) | good | 0.19 (2.62%) | good | 0.12 (1.76%) | good |
| Sprint overall (5 to 30 m) | 0.12 (1.85%) | good | 0.20 (3.28%) | good | 0.15 (2.45%) | good |

Vpeak = Peak velocity, TE = Technical error of measurement, CV = Coefficient of variation, s = seconds, m = meters.
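The internal consistency of the tables can be checked against the TE formula given in the statistical analysis section. As a sketch (using the rounded published values, so small discrepancies against the table entries are expected), the Apex inter-unit 5–10 m row of Table 2 can be approximately reproduced from the ICC in Table 1 and the mean and SD in Table 3:

```python
import math

# Rounded published values for the Apex inter-unit 5-10 m sprint comparison
icc = 0.96                    # Table 1
mean = (5.33 + 5.30) / 2      # Table 3 test/retest means (m/s)
sd = 0.76                     # Table 3 SD (m/s)

te = sd * math.sqrt(1 - icc)  # technical error of measurement (m/s)
cv = 100 * te / mean          # coefficient of variation (%)

print(round(te, 2), round(cv, 1))  # close to Table 2's 0.15 (2.91%)
```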

TABLE 3.

Data recorded during 5, 10, 15, 20 and 30 m sprints (10 players performing an overall of 1271 sprints) with between-unit and between-model analyses.

| Variables, Vpeak (m.s-1) | Apex sprint test | Apex sprint retest | p-level | Viper sprint test | Viper sprint retest | p-level | Apex sprint | Viper sprint | p-level |
| Sprint 5–10 m | 5.33 ± 0.76 | 5.30 ± 0.76 | 0.162 | 5.04 ± 0.83 | 5.07 ± 0.77 | 0.345 | 5.10 ± 0.65 | 4.96 ± 0.75 | <0.001 |
| Sprint 10–15 m | 6.36 ± 0.62 | 6.35 ± 0.64 | 0.793 | 5.96 ± 0.64 | 5.97 ± 0.66 | 0.585 | 6.03 ± 0.53 | 5.97 ± 0.58 | 0.013 |
| Sprint 15–20 m | 7.00 ± 0.63 | 7.00 ± 0.62 | 0.998 | 6.54 ± 0.61 | 6.59 ± 0.58 | 0.055 | 6.55 ± 0.46 | 6.54 ± 0.53 | 0.579 |
| Sprint 20–30 m | 7.48 ± 0.71 | 7.46 ± 0.72 | 0.207 | 7.10 ± 0.62 | 7.11 ± 0.59 | 0.472 | 7.03 ± 0.62 | 7.03 ± 0.65 | 0.929 |
| Sprint overall (5 to 30 m) | 6.48 ± 1.20 | 6.46 ± 1.20 | 0.130 | 6.13 ± 1.16 | 6.16 ± 1.12 | 0.056 | 6.13 ± 1.06 | 6.07 ± 1.15 | <0.001 |

Vpeak = Peak velocity, m = meters, s = seconds.

DISCUSSION

Apex inter-unit reliability for Vpeak was excellent for all distances, whereas Viper (10 Hz) units showed good to excellent reliability. Both models presented a CV < 5% (good), but Apex units reported lower values than Viper units. Significant differences between the two models exist in Vpeak for sprints from 5–10, 10–15 m, and overall (from 5 to 30 m).

The development of monitoring tools is rapidly improving, with a great deal of interest and investment being placed in the monitoring of training load [4,16–18]. Nonetheless, the validation and reliability of such monitoring tools are often lacking [6,8]. This study involves a very large number of sprints, consisting of 436 (Apex vs. Apex), 464 (Viper vs. Viper), and 371 (Apex vs. Viper), for a total of 1271, which is a strength of the current investigation. Apex inter-unit reliability was excellent for all distances, showing that the Apex model can be used to monitor Vpeak. Previous research that evaluated the validity of Apex vs. a gold standard criterion device (radar gun) reported a nearly perfect correlation (r = 0.96) during a 20 m sprint, with no significant difference between the two tools (p = 0.32), and good inter-unit reliability expressed as a CV of 2.3% during a 20 m sprint [6]. Recent research found that Apex inter-unit reliability of maximal speed (tested using a sprint sled) showed a CV of 1.9% [4], which is in line with previous inter-unit reliability scores [6]. The present research agrees with the findings previously reported in the literature and adds that the Apex GNSS model is reliable for evaluating Vpeak from 5 to 30 m (Table 2), which is an innovative finding. By contrast, information related to Viper units is limited since no studies have performed an inter-unit reliability assessment. In the current research, lower reliability scores were obtained at all distances for Viper units compared to Apex units, even though the ICC ranged from 0.89 to 0.97 (good to excellent). Such results are supported by previous research that demonstrated that the error of Viper units increases as the distance decreases (from 20 m to 5 m) [7]. Moreover, Vpeak recorded by the Viper units showed a significant difference (p = 0.045) compared to a gold standard measure [5].

The current research supports the knowledge that reliability values are specific to particular GNSS units and should not be extended to other models, since significant differences were found between the two models (Table 3) [8]. Such differences exist for short sprints (from 5 to 15 m), but do not exist for longer distances (>15 m). Specifically, previous research has attributed improved accuracy of positional information to the Apex (10 Hz multi-GNSS) model due to its enhanced ability to acquire and optimize satellite system reception [6]. Such information (satellite connection) is not reported by the Viper model, and therefore the authors cannot prove that this is the main factor responsible for such differences, which may be considered a limitation of the Viper units. Possibly, the Vpeak differences may also arise from the different algorithms that can be applied and used with advances in technology, or from differences in the filtering techniques adopted [4,8]. The differences found between the Apex and Viper Vpeak measurements during sprints may be crucial for practitioners because velocity-based monitoring could be affected, with consequences for sessions and training periodization. For this reason, the authors recommend using one monitoring system and avoiding alternating between Viper and Apex units (if different models are used within the same club) to monitor Vpeak during sprinting or sport-specific drills. Moreover, the results of this study are relevant for professional practitioners, since players’ data recorded using Viper units should be interpreted with caution when compared to the Apex model (or to different GNSS units) [4].

CONCLUSIONS

This investigation reports, firstly, that although Apex and Viper units present excellent and good to excellent inter-unit reliability respectively, Vpeak measurements are significantly different between the GNSS models. Secondly, the CV of the units decreases as distances increase, with higher reliability being reported over 15 m. However, Apex units proved to be excellent (ICC) and good (CV) for evaluating Vpeak at shorter distances (<15 m). In conclusion, this study shows that differences exist when measuring Vpeak with different models from the same manufacturer and that these two GNSS models should not be used interchangeably for this purpose. Practitioners should be aware of these findings when monitoring speed-based measurements in professional settings (e.g. elite soccer) [19,20], above all when comparing data between devices; speed data should also be used with caution because these devices were not validated over short distances (less than 20 m).

REFERENCES

1. Haugen T, Buchheit M. Sprint running performance monitoring: Methodological and practical considerations. Sports Med. 2016;46(5):641–56. doi: 10.1007/s40279-015-0446-0.
2. Jackson BM, Polglaze T, Dawson B, King T, Peeling P. Comparing Global Positioning System (GPS) and Global Navigation Satellite System (GNSS) measures of team sport movements. Int J Sports Physiol Perform. 2018:1–22. doi: 10.1123/ijspp.2017-0529.
3. Hoppe MW, Baumgart C, Polglaze T, Freiwald J. Validity and reliability of GPS and LPS for measuring distances covered and sprint mechanical properties in team sports. PLoS One. 2018;13(2):e0192708. doi: 10.1371/journal.pone.0192708.
4. Thornton HR, Nelson AR, Delaney JA, Serpiello FR, Duthie GM. Interunit reliability and effect of data-processing methods of global positioning systems. Int J Sports Physiol Perform. 2019;14(4):432–8. doi: 10.1123/ijspp.2018-0273.
5. Beato M, Devereux G, Stiff A. Validity and reliability of global positioning system units (STATSports Viper) for measuring distance and peak speed in sports. J Strength Cond Res. 2018;32(10):2831–7. doi: 10.1519/JSC.0000000000002778.
6. Beato M, Coratella G, Stiff A, Dello Iacono A. The validity and between-unit variability of GNSS units (STATSports Apex 10 and 18 Hz) for measuring distance and peak speed in team sports. Front Physiol. 2018;9:1288. doi: 10.3389/fphys.2018.01288.
7. Beato M, Bartolini D, Ghia G, Zamparo P. Accuracy of a 10 Hz GPS unit in measuring shuttle velocity performed at different speeds and distances (5–20 m). J Hum Kinet. 2016;54(1):15–22. doi: 10.1515/hukin-2016-0031.
8. Scott TU, Scott TJ, Kelly VG. The validity and reliability of global positioning system in team sport: a brief review. J Strength Cond Res. 2016;30(5):1470–90. doi: 10.1519/JSC.0000000000001221.
9. Carling C, Bradley P, McCall A, Dupont G. Match-to-match variability in high-speed running activity in a professional soccer team. J Sports Sci. 2016;34(24):2215–23. doi: 10.1080/02640414.2016.1176228.
10. Carling C. Interpreting physical performance in professional soccer match-play: Should we be more pragmatic in our approach? Sports Med. 2013;43(8):655–63. doi: 10.1007/s40279-013-0055-8.
11. Roe G, Darrall-Jones J, Black C, Shaw W, Till K, Jones B. Validity of 10-Hz GPS and timing gates for assessing maximum velocity in professional rugby union players. Int J Sports Physiol Perform. 2017;12(6):836–9. doi: 10.1123/ijspp.2016-0256.
12. Buchheit M, Al Haddad H, Simpson BM, Palazzi D, Bourdon PC, Di Salvo V, et al. Monitoring accelerations with GPS in football: Time to slow down? Int J Sports Physiol Perform. 2014;9:442–5. doi: 10.1123/ijspp.2013-0187.
13. Atkinson G, Nevill AM. Statistical methods for assessing measurement error (reliability) in variables relevant to sports medicine. Sports Med. 1998;26(4):217–38. doi: 10.2165/00007256-199826040-00002.
14. Hopkins WG, Schabort EJ, Hawley JA. Reliability of power in physical performance tests. Sports Med. 2001;31(3):211–34. doi: 10.2165/00007256-200131030-00005.
15. Hopkins WG, Marshall SW, Batterham AM, Hanin J. Progressive statistics for studies in sports medicine and exercise science. Med Sci Sports Exerc. 2009;41(1):3–13. doi: 10.1249/MSS.0b013e31818cb278.
16. Mohr M, Krustrup P, Bangsbo J. Match performance of high-standard soccer players with special reference to development of fatigue. J Sports Sci. 2003;21(7):519–28. doi: 10.1080/0264041031000071182.
17. Christopher J, Beato M, Hulton AT. Manipulation of exercise to rest ratio within set duration on physical and technical outcomes during small-sided games in elite youth soccer players. Hum Mov Sci. 2016;48:1–6. doi: 10.1016/j.humov.2016.03.013.
18. Beato M, De Keijzer KL, Carty B, Connor M. Monitoring fatigue during intermittent exercise with accelerometer-derived metrics. Front Physiol. 2019;10:780. doi: 10.3389/fphys.2019.00780.
19. Mendez-Villanueva A, Buchheit M. Football-specific fitness testing: adding value or confirming the evidence? J Sports Sci. 2013;31(13):1503–8. doi: 10.1080/02640414.2013.823231.
20. Young D, Beato M, Mourot L, Coratella G. Match-play temporal and position-specific physical and physiological demands of senior hurlers. J Strength Cond Res. 2019;00(00):1–10. doi: 10.1519/JSC.0000000000002844.

Articles from Biology of Sport are provided here courtesy of Institute of Sport
