PLOS One. 2021 Jan 22;16(1):e0245299. doi: 10.1371/journal.pone.0245299

Using the VERT wearable device to monitor jumping loads in elite volleyball athletes

Faraz Damji 1,2, Kerry MacDonald 3,4, Michael A Hunt 2, Jack Taunton 5, Alex Scott 1,2,*
Editor: José M Muyor
PMCID: PMC7822237  PMID: 33481847

Abstract

Sport is becoming increasingly competitive and athletes are being exposed to greater physical demands, leaving them prone to injuries. Monitoring athletes with wearable technology could provide a way to manage training and competition loads and reduce injuries. One such technology is the VERT inertial measurement unit, a commercially available discrete wearable device containing a 3-axis accelerometer, 3-axis gyroscope and 3-axis magnetometer. Its main measurement outputs include jump count, jump height and landing impacts. While several studies have examined the accuracy of the VERT’s measures of jump height and jump count, landing impact force has not yet been investigated. The objective of this research study was to explore the validity of the VERT landing impact values. We hypothesized that the absolute peak VERT acceleration values during a jump-land cycle would fall within 10% of the peak acceleration values derived simultaneously from a research-grade accelerometer (Shimmer). Fourteen elite university-level volleyball players each performed 10 jumps while wearing both devices simultaneously. The results showed that VERT peak accelerations were variable (limits of agreement of -84.13% and 52.37%) and had a propensity to be lower (mean bias of -15.88%) than those of the Shimmer. In conclusion, the validity of the VERT device’s landing impact values is generally poor when compared to the Shimmer.

Introduction

Athletes today are exposed to high training loads and saturated competition calendars as sport evolves into a more competitive and professionalized industry. This creates an environment conducive to sports injuries. For example, tendinopathy is an overuse injury of the tendon and its insertions, and is associated with chronic tendon pain [1]. Those with tendinopathy experience a swollen, painful tendon caught in a cumulative cycle of injury and repair which can last months or years [2]. Typically, these people are advised to scale back the intensity of their sports participation while they undergo rehabilitation, and then to gradually increase tendon-loading activities while monitoring and minimizing pain [3].

Evidence supports that poor load management is a major risk factor for injury [4]. In the context of sports medicine, load can be defined broadly as “the sport and non-sport burden (single or multiple physiological, psychological or mechanical stressors) as a stimulus that is applied to a human biological system (including subcellular elements, a single cell, tissues, one or multiple organ systems, or the individual)” [4]. Additionally, it may be applied “over varying time periods (seconds, minutes, hours to days, weeks, months and years) and with varying magnitude (ie, duration, frequency and intensity)” [4].

Monitoring the training and competition load placed on athletes is gaining popularity as a fundamental practice to ensure they are subjected to appropriate and therapeutic levels [5]. A new generation of wearable technologies is available, enabling the precise quantification of load with built-in accelerometers, gyroscopes and magnetometers. One such technology is the VERT system (www.myvert.com), a commercially available discrete wearable device that measures vertical displacement through a proprietary algorithm [6]. This inertial measurement unit (IMU) contains a 3-axis accelerometer, 3-axis gyroscope and 3-axis magnetometer. The unit measures 6 x 3 x 0.5 cm and is typically worn on an elastic belt at the level of the L3 or L4 vertebrae, thought to be near the body’s center of mass. The basic model records jump count and jump height; a newer G-VERT model also records landing impact and kinetic energy.

Several studies have examined the accuracy of the VERT’s measures of jump height and jump count, and have generally shown it to be acceptable [6–13]. Landing impact, however, is a key parameter that has not yet been investigated. In sport, the act of landing from a jump is often required and may happen very frequently [14]. The resulting ground reaction forces on the body during landing have been shown to be influential in lower limb injury [15]. It is apparent that the movement of joints and muscle activity in the lower limb can decrease the magnitude of these impact forces. The VERT calculates landing impact as the instantaneous acceleration values resulting from forces upon landing, expressed as a G-force (1 G = 9.81 m/s^2). A resultant value is given, encompassing all components of acceleration in the X-Y-Z axes. This study was the first to perform an external validation of the VERT for the landing impact parameter.

The objective of this research study was to examine the accuracy of VERT landing impact values in university volleyball players. We hypothesized that the absolute VERT landing impact values would fall within 10% of the landing impact values derived from a research-grade accelerometer. The specific research-grade accelerometer that was chosen for this study has been previously used extensively to monitor human health and track activities of daily living [16–19]. Information about the design, sensing capabilities and firmware are available to the general public [20]. The acceleration signals from the device, when compared with force platform and motion capture data, were shown to be valid in terms of the acceleration-based measurements for spatio-temporal gait parameters [21,22].

Methods

This study was approved by the Clinical Research Ethics Board at the University of British Columbia (Certificate Number: H19-00163). Each participant provided written informed consent.

Participants

The study population included 14 varsity volleyball players at the University of British Columbia (UBC), with 11 players recruited from the men’s volleyball team and 3 players recruited from the women’s volleyball team. Our initial plan was to recruit equal numbers of men and women, but data collection was terminated due to the COVID-19 pandemic shortly after the start of data collection with the women’s volleyball team. We excluded participants who had been instructed to avoid jumping (e.g. due to injury). Participants filled out a modified version of the Oslo Sport Trauma Research Centre-Patellar Tendinopathy (OSTRC-P) questionnaire, which has been shown to be valid for self-reporting patellar tendinopathy symptom severity in youth basketball [23]. Body mass and height measurements were recorded with a portable weigh scale (Weight Watchers 14C, Conair Consumer Products, Woodbridge, Ontario) and standing height measure (HM200P, Charder Medical, Taichung, Taiwan). Standing reach height was measured with the Vertec (5013, Jump USA, Sunnyvale, California).

Study design

A single research-grade accelerometer (Shimmer3 IMU, Shimmer Sensing, Dublin, Ireland) was used as the comparator for the VERT, and both devices were worn adjacently with one device placed to the right of the navel and the other to the left of the navel. This allowed for sufficient spacing to prevent contact between devices. The VERT was held securely in the commercially available VERT elastic waistband, in the same position and orientation intended by the developers. The Shimmer was clipped into the Shimmer adjustable belt, which was worn directly on top of the VERT waistband. The Shimmer was calibrated using the Shimmer 9DOF Calibration software, and the wide range accelerometer data output was selected with a range of +/- 16g, and sampling frequency of 2048 Hz.

Data collection occurred at the university volleyball court, a hardwood surface. After an optional warm-up at a self-selected pace on a stationary bike and 2–3 practice jumps, each participant performed a total of 10 countermovement jumps (CMJs) while collecting data simultaneously from both the VERT and Shimmer devices. The first block consisted of 5 maximal CMJs. The second block consisted of 5 submaximal CMJs at 80% of the maximum height achieved in the first block, measured using a Vertec. The rest periods were 20 seconds between trials of the same block and 2 minutes between blocks, to avoid the effects of fatigue and allow for clear data visualization.

VERT and Shimmer data processing

During the study visit, both the highest magnitude impact and VERT-identified landing impact values were recorded for each jump, as displayed on the VERT iPad application (Fig 1).

Fig 1. Screenshot from VERT iPad application.


The Impacts Vs. Time graph in Fig 1 displays the resultant accelerations for each jump/land cycle. Yellow bars denote VERT-identified landing impact values, and red bars denote accelerations categorized by the VERT software as arising from events other than landing impact. The VERT Excel file downloaded from the myVERT online server shows the same impact data, time-stamped and with a higher degree of precision. For each jump, the highest magnitude acceleration and VERT-identified landing impact values were located in the Excel file, and these exact values replaced the values initially documented during the study visit.

Shimmer data were exported from the ConsensysPRO software into Excel, yielding a series of time-stamped acceleration values for each of the X-Y-Z axes in m/s^2. The resultant accelerations were calculated from the separate components using the formula:

resultant acceleration = sqrt[(acceleration in X)^2 + (acceleration in Y)^2 + (acceleration in Z)^2]

These values were then converted from m/s^2 to Gs. Scatter plots were generated with spikes corresponding to jumps clearly visible (Fig 2). The maximum values of each spike, or the resultant peak accelerations, could then be directly compared to values from the VERT.
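As an illustration of this processing step, the sketch below combines per-axis accelerations into a resultant, converts from m/s^2 to G, and picks out one peak per jump. It uses synthetic data and `scipy.signal.find_peaks` with arbitrary thresholds; the authors located the maxima manually in Excel, so this is an assumed automation, not their actual procedure.

```python
import numpy as np
from scipy.signal import find_peaks

G = 9.81  # m/s^2 per G

def resultant_g(ax, ay, az):
    """Combine per-axis accelerations (m/s^2) into a resultant magnitude in G."""
    return np.sqrt(np.asarray(ax) ** 2 + np.asarray(ay) ** 2 + np.asarray(az) ** 2) / G

# Synthetic signal: quiet standing (~1 G vertical) with two landing spikes.
fs = 2048  # Hz, the Shimmer sampling rate used in the study
t = np.arange(0, 4, 1 / fs)
az = np.full_like(t, G)
az[int(1.0 * fs)] = 12 * G  # first landing spike (12 G)
az[int(3.0 * fs)] = 9 * G   # second landing spike (9 G)
res = resultant_g(np.zeros_like(t), np.zeros_like(t), az)

# The maximum of each spike is the resultant peak acceleration for that jump.
peaks, _ = find_peaks(res, height=3.0, distance=fs)  # >3 G, at least 1 s apart
peak_values = res[peaks]  # approximately [12., 9.]
```

The `height` and `distance` thresholds here are illustrative choices; in practice they would be tuned to the rest periods and impact magnitudes observed in the session.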

Fig 2. Shimmer scatter plot generated from Shimmer Excel data.


Statistical analysis

For each pair of measurements from each jump, the percentage difference between Shimmer and VERT peak accelerations was calculated. A Bland-Altman plot approach was then used to assess the discrepancy between devices across the range of G values, with adjustments made for the presence of repeated measures [24,25]. Intraclass correlation coefficients (ICC) and concordance correlation coefficients (CCC) were also calculated. There are many versions of the CCC; in this analysis we used the version computed from the variance components of a linear mixed effects model for longitudinal repeated measures [26], assuming a compound symmetric covariance structure.
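As a rough illustration of the agreement statistics described above, the sketch below computes percentage differences, mean bias, and simple 1.96 x SD limits of agreement on hypothetical data. It assumes percentage differences are taken relative to the pairwise mean and omits the repeated-measures adjustment used in the paper, so it will not reproduce the published values.

```python
import numpy as np

def bland_altman_pct(vert, shimmer):
    """Bland-Altman statistics on percentage differences.

    The percentage difference is taken relative to the pairwise mean (a
    common convention; the paper's exact definition is not stated), and
    the simple 1.96*SD limits below ignore the repeated-measures
    adjustment applied in the published analysis.
    """
    vert = np.asarray(vert, dtype=float)
    shimmer = np.asarray(shimmer, dtype=float)
    means = (vert + shimmer) / 2
    pct_diff = 100 * (vert - shimmer) / means   # % difference per jump
    bias = pct_diff.mean()                      # mean bias
    sd = pct_diff.std(ddof=1)                   # SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # limits of agreement
    return means, pct_diff, bias, loa

# Hypothetical paired peak accelerations (G), for illustration only.
vert    = [7.0, 8.5, 6.2, 12.0, 9.1]
shimmer = [9.8, 8.0, 7.5, 14.2, 11.0]
means, diffs, bias, (lower, upper) = bland_altman_pct(vert, shimmer)
```

A plot of `diffs` against `means` with horizontal lines at `bias`, `lower`, and `upper` would reproduce the structure of the Bland-Altman figure reported in the Results.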

Results

Participant demographics

The demographics for the 14 participants can be found in Table 1.

Table 1. Participant demographics.

Variable All Subjects (n = 14)
Age (yr), mean ± SD 20.9 ± 1.5
Male 11
Female 3
Height (cm), mean ± SD 187.6 ± 10.5
Standing reach (cm), mean ± SD 244.6 ± 15.2
Body mass (kg), mean ± SD 80.9 ± 10.8
Maximum jump height (cm), mean ± SD 59.2 ± 10.2
Self-reported patellar tendinopathy in one or both knees on OSTRC-P a 3

SD = standard deviation.

a OSTRC-P data were missing for 2 of the 14 participants, who inadvertently skipped the form during their study visits.

Comparison of peak accelerations between VERT and Shimmer

Summary statistics from the two devices followed a similar trend. The mean VERT value for peak acceleration (7.8 G) was lower than that of the Shimmer (10.0 G). The standard deviations of the VERT and Shimmer were similar (3.5 G and 4.1 G, respectively), as were the maximal (24.3 G and 21.2 G, respectively) and minimal (3.6 G and 4.1 G, respectively) values for each device.

The results of the analysis are shown in the Bland-Altman plot below (Fig 3).

Fig 3. Bland-Altman plot for VERT and Shimmer agreement.


The red line represents the 0% difference mark. The blue line represents the mean bias, the mean percentage difference observed between VERT and Shimmer. The dotted lines around the mean bias represent its 95% confidence interval. Lastly, the green lines represent the limits of agreement, calculated as (mean bias) ± 1.96 × (standard deviation of the differences).

The limits of agreement within this plot are remarkably wide, at -84.13% and 52.37%, indicating that the percentage differences between VERT and Shimmer vary substantially. The mean bias is -15.88%, with a confidence interval spanning -35.99% to 4.23%. Because this interval overlaps 0%, the mean bias may not be significantly different from 0%, but this is likely attributable to the high variability observed in the differences. Given such a wide confidence interval, the mean bias may nevertheless be large and negative, since the lower bound stretches down to -35.99%. There may therefore be a tendency for the VERT to produce lower measurements than the Shimmer, resulting in a bias.

Examining the distribution of the points themselves, when the mean of a measurement pair is less than 10 G, the points are nearly evenly distributed around the mean bias. When the mean of a pair exceeds 10 G, a weak trend appears in which percentage differences start very low and increase towards the mean bias and beyond. This indicates lower reliability, and hence validity, of the VERT device for landing impacts at the higher end of the spectrum.

The ICC (two-way mixed effects model, single measures, absolute agreement) was 0.49 (95% CI: 0.38 to 0.60) [27]. This moderate ICC indicates some agreement between the two devices, but agreement is generally mediocre. The calculated CCC was 0.37 (95% CI: 0.12 to 0.58). This mild CCC corroborates the ICC, although the measured agreement is worse once the repeated measures within subjects are accounted for.
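For readers who want to see how a single-measures, absolute-agreement ICC is assembled from mean squares, the sketch below implements the ICC(2,1) formula of McGraw and Wong on hypothetical paired data. It treats each jump as an independent subject and ignores the within-athlete clustering handled in the paper's mixed-model analysis, so it is a simplified illustration rather than a reproduction of the reported 0.49.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way model, absolute agreement, single measures.

    `data` is an (n subjects x k raters) array. This simple mean-squares
    form treats each jump as an independent 'subject' and does not model
    the within-athlete repeated measures handled in the paper's analysis.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between devices
    mse = ((data - row_means[:, None] - col_means[None, :] + grand) ** 2).sum() \
          / ((n - 1) * (k - 1))                            # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired measurements: column 0 = VERT, column 1 = Shimmer.
pairs = np.array([[7.0, 9.8], [8.5, 8.0], [6.2, 7.5], [12.0, 14.2], [9.1, 11.0]])
icc = icc_2_1(pairs)
```

With perfectly agreeing columns the function returns 1.0; systematic offsets between the devices lower the value because absolute agreement, not just consistency, is assessed.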

Discussion

Main findings

The aim of this study was to examine the accuracy of VERT landing impact values in university volleyball players. We hypothesized that the absolute VERT landing impact values would fall within 10% of those derived from the Shimmer. Contrary to this hypothesis, the VERT peak acceleration values were highly variable, as evidenced by the percentage differences with the Shimmer as the reference, and the VERT showed a propensity to produce lower measurements than the Shimmer. Hence, the validity of the VERT device’s landing impact values can be said to be generally poor with respect to the Shimmer.

As this study was the first to perform an external validation of the VERT for the landing impact parameter, the main findings cannot be discussed in relation to previous literature, but several considerations could explain them. First, it is possible that the sampling rate of the VERT is not sufficiently high to meet the performance demands of volleyball. Second, the disparate nature of the data from the two devices may also explain our results: while the VERT processes its data through a proprietary algorithm that was not available to us, the Shimmer provides the full raw signal. Another consideration is that landing biomechanics, such as the amount of hip or knee flexion or the placement of the feet, likely differed both within and between individuals. This variability may have led to different acceleration values at the exact locations of the VERT and Shimmer sensors. This possibility could be tested by placing two Shimmers side by side in the same positions as used in this study and assessing the magnitude of the difference.

Some discussion is warranted around the VERT-identified landing impact values. The VERT identifies a number of acceleration peaks for each jump-land cycle, generally seen as a cluster of vertical bars in the measurement output produced by the VERT software. For the most part, each cluster consists of one yellow bar (the VERT-identified landing impact) with the rest being red bars (accelerations due to events other than landing). The yellow bar is the tallest in some instances, but not always. Nor does there seem to be any temporal pattern; the order of the red and yellow bars appears random within each cluster. The VERT may also (incorrectly) show multiple yellow bars in a given cluster because of heel-to-toe landings, one foot hitting the ground before the other, or simply a hard landing with a subtle reverberation. For these reasons, we chose to use the single highest acceleration peak in each cluster that appeared in the VERT data. The Shimmer data were similar in that they also showed a number of acceleration peaks for each jump-land cycle. We assumed that the highest magnitude acceleration from a jump-land cycle corresponds to landing impact. This may not always be the case, but an argument can be made that the assumption holds the majority of the time. A study that examined the application of force during the vertical jump, in both children and adults, shows force-time curves that support this reasoning [28].

Although the VERT’s sampling rate was unknown, we had good reason to believe that the Shimmer sampling rate of 2048 Hz was substantially higher than the VERT’s. Many other studies with similar methodology have ensured equal sampling rates across the devices being compared. Our rationale for using the highest possible sampling rate as the reference was to allow us to examine the possibility that the VERT underestimates landing impact values, which would occur if its sampling frequency is too low to capture the peak values.

Limitations

Force platforms, the gold standard for measuring forces, were not used to validate the VERT in this study, which presents a limitation. Research-grade accelerometers such as the Shimmer were nevertheless a suitable, and possibly better, alternative for a few reasons. Firstly, the Shimmer provides raw data and, unlike the VERT, does not use proprietary algorithms. Secondly, the Shimmer has been compared with force platform data: a study comparing force calculated from the Shimmer with force platform data found moderate to low levels of agreement and a consistent systematic bias between the two technologies [29]. Although not ideal, this means the Shimmer can still be used as a proxy to infer how far the VERT deviates from the gold standard. In that study by Howard et al. (2014), the Shimmer generally overestimated peak forces compared to force platform data. If the Shimmer overestimates, and the VERT measures lower than the Shimmer, the VERT may be either overestimating or underestimating relative to the gold standard force platform. A future study could evaluate how the VERT performs in relation to the force platform. However, it should be noted that force platform output will be only partly related to the resultant peak acceleration at the worn location of the VERT, due to the dissipation of force travelling up the kinetic chain.

While evidence-based guidelines were followed to mitigate any potential interference between the two devices (e.g. avoiding placing one on top of the other), crosstalk may still have occurred with the devices in close proximity at the level of the waist [30]. Despite ensuring that the waistbands were fitted snugly and positioning the devices far enough apart to avoid contact, unwanted movement of the bands or devices may still have occurred undetected.

Practical recommendations

Based on the findings of this study, coaches and practitioners should treat the VERT landing impact data cautiously. This certainly does not mean that the VERT is not useful, as it has proven to be valid and reliable for measuring jump height and jump count.

Conclusions

In conclusion, the VERT landing impact values were more variable and had a propensity to be lower when compared to the same measurements taken from a research-grade accelerometer. Hence, the validity of the VERT device’s landing impact values can be said to be generally poor with respect to the Shimmer. While the VERT may be a useful tool for load monitoring, the landing impact data may not be accurate, and caution should be exercised in interpreting this parameter until further validation studies are conducted.

Supporting information

S1 File. Full de-identified dataset.

(XLSX)

Acknowledgments

We would like to acknowledge the efforts of Eric Sanders, Nikolas Krstic and Johann Windt for assisting with the statistical analysis, and David Gil for his expert advice on the VERT wearable technology.

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

F.D. was the recipient of a Canadian Institute of Health Research Canada Graduate Scholarship - Master's Award, which provided funding for this study (CIHR CGS-M, http://www.cihr.ca/). A.S. was a recipient of a Natural Sciences and Engineering Research Council of Canada Discovery Grant, which also provided funding for this study (NSERC DG, https://www.nserc-crsng.gc.ca/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Scott A, Docking S, Vicenzino B, Alfredson H, Zwerver J, Lundgreen K, et al. Sports and exercise-related tendinopathies: A review of selected topical issues by participants of the second International Scientific Tendinopathy Symposium (ISTS) Vancouver 2012. Br J Sports Med. 2013; doi: 10.1136/bjsports-2013-092329
2. Attia M, Scott A, Carpentier G, Lian Ø, Van Kuppevelt T, Gossard C, et al. Greater glycosaminoglycan content in human patellar tendon biopsies is associated with more pain and a lower VISA score. Br J Sports Med. 2014; doi: 10.1136/bjsports-2013-092633
3. Malliaras P, Cook J, Purdam C, Rio E. Patellar Tendinopathy: Clinical Diagnosis, Load Management, and Advice for Challenging Case Presentations. J Orthop Sport Phys Ther. 2015;
4. Soligard T, Schwellnus M, Alonso JM, Bahr R, Clarsen B, Dijkstra HP, et al. How much is too much? (Part 1) International Olympic Committee consensus statement on load in sport and risk of injury. Br J Sports Med. 2016;
5. Bourdon PC, Cardinale M, Murray A, Gastin P, Kellmann M, Varley MC, et al. Monitoring athlete training loads: Consensus statement. Int J Sports Physiol Perform. 2017; doi: 10.1123/IJSPP.2017-0208
6. Charlton PC, Kenneally-Dabrowski C, Sheppard J, Spratford W. A simple method for quantifying jump loads in volleyball athletes. J Sci Med Sport. 2017; doi: 10.1016/j.jsams.2016.07.007
7. MacDonald K, Bahr R, Baltich J, Whittaker JL, Meeuwisse WH. Validation of an inertial measurement unit for the measurement of jump count and height. Phys Ther Sport. 2017; doi: 10.1016/j.ptsp.2016.12.001
8. Skazalski C, Whiteley R, Hansen C, Bahr R. A valid and reliable method to measure jump-specific training and competition load in elite volleyball players. Scand J Med Sci Sport. 2018; doi: 10.1111/sms.13052
9. Borges TO, Moreira A, Bacchi R, Finotti RL, Ramos M, Lopes CR, et al. Validation of the VERT wearable jump monitor device in elite youth volleyball players. Biol Sport. 2017; doi: 10.5114/biolsport.2017.66000
10. Manor J, Bunn J, Bohannon RW. Validity and Reliability of Jump Height Measurements Obtained From Nonathletic Populations With the VERT Device. J Geriatr Phys Ther. 2018;
11. Brooks ER, Benson AC, Bruce LM. Novel Technologies Found to be Valid and Reliable for the Measurement of Vertical Jump Height With Jump-and-Reach Testing. J Strength Cond Res. 2018; doi: 10.1519/JSC.0000000000002790
12. Mahmoud I, Othman AAA, Abdelrasoul E, Stergiou P, Katz L. The reliability of a real time wearable sensing device to measure vertical jump. In: Procedia Engineering. 2015.
13. Staiger ST, Wahl ET. Comparison of 3 Alternative Systems for Measuring Vertical Jump Height. Med Sci Sport Exerc. 2018;
14. McNitt-Gray JL. Kinematics and Impulse Characteristics of Drop Landings From Three Heights. Int J Sport Biomech. 1991;
15. Nigg BM. Biomechanics, Load Analysis and Sports Injuries in the Lower Extremities. Sports Medicine: An International Journal of Applied Medicine and Science in Sport and Exercise. 1985.
16. Sun FT, Kuo C, Cheng HT, Buthpitiya S, Collins P, Griss M. Activity-aware mental stress detection using physiological sensors. In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST. 2012.
17. Shahzad A, Ko S, Lee S, Lee JA, Kim K. Quantitative Assessment of Balance Impairment for Fall-Risk Estimation Using Wearable Triaxial Accelerometer. IEEE Sens J. 2017;
18. Avvenuti M, Carbonaro N, Cimino MGCA, Cola G, Tognetti A, Vaglini G. Smart shoe-based evaluation of gait phase detection accuracy using body-worn accelerometers. In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST. 2018.
19. O’Reilly MA, Whelan DF, Ward TE, Delahunt E, Caulfield B. Technology in strength and conditioning tracking lower-limb exercises with wearable sensors. J Strength Cond Res. 2017;
20. Burns A, Greene BR, McGrath MJ, O’Shea TJ, Kuris B, Ayer SM, et al. SHIMMER™ – A wireless sensor platform for noninvasive biomedical research. IEEE Sens J. 2010;
21. Patterson MR, Johnston W, O’Mahony N, O’Mahony S, Nolan E, Caulfield B. Validation of temporal gait metrics from three IMU locations to the gold standard force plate. In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. 2016.
22. Kluge F, Gaßner H, Hannink J, Pasluosta C, Klucken J, Eskofier BM. Towards mobile gait analysis: Concurrent validity and test-retest reliability of an inertial measurement system for the assessment of spatio-temporal gait parameters. Sensors (Switzerland). 2017; doi: 10.3390/s17071522
23. Owoeye OBA, Wiley JP, Walker REA, Palacios-Derflingher L, Emery CA. Diagnostic accuracy of a self-report measure of patellar tendinopathy in youth basketball. J Orthop Sports Phys Ther. 2018; doi: 10.2519/jospt.2018.8088
24. Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res. 1999; doi: 10.1177/096228029900800204
25. Bland JM, Altman DG. Agreement between methods of measurement with multiple observations per individual. J Biopharm Stat. 2007; doi: 10.1080/10543400701329422
26. Carrasco JL, King TS, Chinchilli VM. The concordance correlation coefficient for repeated measures estimated by variance components. J Biopharm Stat. 2009; doi: 10.1080/10543400802527890
27. McGraw KO, Wong SP. Forming Inferences about Some Intraclass Correlation Coefficients. Psychol Methods. 1996;
28. Floria P, Gómez-Landero LA, Harrison AJ. Variability in the application of force during the vertical jump in children and adults. J Appl Biomech. 2014; doi: 10.1123/jab.2014-0043
29. Howard R, Conway R, Harrison AJ. Estimation of force during vertical jumps using body fixed accelerometers. In: IET Conference Publications. 2014.
30. Düking P, Fuss FK, Holmberg HC, Sperlich B. Recommendations for assessment of the reliability, sensitivity, and validity of data provided by wearable sensors designed for monitoring physical activity. JMIR mHealth uHealth. 2018;

Decision Letter 0

José M Muyor

4 Sep 2020

PONE-D-20-21022

Using the VERT wearable device to monitor jumping loads in elite volleyball athletes

PLOS ONE

Dear Dr. Scott,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 19 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

José M. Muyor

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Reviewers´ comments

Reviewer 1

The study deals with a comparison between two IMUs (with different sensor specs) during volleyball jumping. The authors state that the consumer grade VERT device would provide similar (within a 10% range) peak acceleration compared to a “research grade” sensor.

While the idea of the study is nice, it must be stated that the theoretical background does not directly lead to the research question. The VERT device does not provide any sensor specifications, which is already a red flag. While the system provides jump count metrics, the idea of the authors was to compare peak acceleration between the two sensors. In my opinion, a simulation study could be conducted first to back up the idea that a sensor with unknown sensor specs will still provide similar results to the “gold standard”, which should have been the force plate. Using the Shimmer sensor device (+/- 16 g, sampling frequency 2048 Hz) during a jump could give some insights into how various sensor specs may affect the peak acceleration. The comparison between the two devices is also a little flawed, as it is not clear whether the peak acceleration actually occurred during the landing period, which could have been tested with a force plate. I understand not every laboratory has a force plate, but in this case I would have started out with a technical comparison: put both accelerometers on a sled or on an S&C machine (or barbell etc.) to keep the same movement going, with a similar fixation (e.g. a latissimus machine, fixing them next to each other on the weights).

The problem with human testing is always the sensor placement. Has the sensor moved? Were the sensor positions changed? How is it guaranteed that the sensors actually do measure the same and one not its own movement also? Why not put both sensors on a rigid object and fix it to the body?

Overall I like your approach, but I suggest you go back to the drawing board and get some technical evidence before you run the jumping experiment. Fourteen subjects is not a lot, and while I usually refrain from telling authors to redo an experiment, I think that once COVID-19 has been conquered you could get more people in the lab. Also, you don’t need elite athletes, as you look at jumping in general and your tests were not volleyball- or high-performance-related. A CMJ does not reflect real playing behavior…

Please use the Equation tool for your signal magnitude calculation (also not really necessary as this is a very easy and standard procedure in signal processing)

Table 2 could be in the text so please remove

If you want to test the accuracy of VERT landing impact, then you need to compare it to the gold standard, but you also state that Shimmer is not the gold standard…

Reviewer 2

General comments: The authors describe a descriptive research study that compared the simultaneous measurement of impact acceleration measures during jump landings in collegiate volleyball players. A commercially available device marketed to sports users (Vert) was compared to a commercially available research-grade accelerometer (Shimmer). There was poor agreement and concordance demonstrated for peak impact acceleration measures between the two devices. The authors identify several potential causes for the results, but the speculative nature of these reasons leads to questions about the methodology of the current study. These questions cast doubt on the internal validity of the current study methods.

Specific comments:

Lines 54-58: The sentences describing the pathophysiology of tendonopathy seem out of place in the introduction. Recommend deleting.

Lines 81-84: Are there references to support the benefits and limitations of Vert? Or, are these the authors’ observations? If the latter, other than the lack of accuracy studies I don’t think that this content fits in the introduction.

Lines 102-108: I’m curious as to the choice of a research-grade accelerometer (Shimmer) as the gold standard for this validation study. Many sports biomechanists would consider a force plate to be the gold standard for measuring landing impacts. Citing studies that report the validity of the Shimmer measures compared to force plate measures during walking gait is helpful, but jump landings occur at much higher speed than walking, so the relevance of the walking studies is marginal in relation to the current study. I believe that further justification of using the Shimmer as the gold standard measurement device in this study is warranted.

Lines 114-115: Was an a priori sample size estimate performed?

Lines 132-133: The placement of the two devices isn’t clear. Was one device placed to the right of the navel and the other to the left of the navel? Or were both devices placed on either the left or the right side? Please revise to improve clarity.

Line 135: What was the sampling rate for Vert?

Line 137: More information on the court surface would be helpful. Was it hardwood or a synthetic surface?

Line 176: Spelling should be “Altman”

Lines 206-210: This information seems better suited as the figure legend rather than in the text of the Results section.

Line 229: I recommend deleting this sentence as it is not a statistically-based result.

Line 255: Report the sampling rate for Vert if that information is publicly available.

Lines 257-258: Expand on the potential difference in algorithms for data processing between the two devices.

Lines 258-260: I fail to see how variability in jump technique would influence the lack of agreement between the two measurement devices. Are you suggesting that there was a right-to-left asymmetry in jumping mechanics that would have affected the measurements by the sensors placed on the right or left side of the body?

Lines 261-263: Why wasn’t this done to ensure that sensor placement didn’t influence the primary results of the study? This seems like a glaring flaw that limits reader confidence in the current results.

Line 282: Were efforts made to contact the manufacturer of Vert to ascertain the sampling rate? If yes, explicitly state this. If no, I strongly encourage the authors to do so.

Lines 295-299: The reasoning justifying the use of Shimmer as the gold standard (in lieu of a force plate) seems flawed based on the results of reference #30.

Line 311: This is the first mention of “waist bands”. Please elaborate on this in the methods section.

Lines 315-353: This section does not contextualize the results of the current investigation and seems disconnected from the rest of the manuscript.

References: None of the journal articles referenced have volume or page numbers listed.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Jan 22;16(1):e0245299. doi: 10.1371/journal.pone.0245299.r002

Author response to Decision Letter 0


2 Nov 2020

Reviewers' comments

Reviewer 1

The study deals with a comparison between two IMUs (with different sensor specs) during volleyball jumping. The authors state that the consumer grade VERT device would provide similar (within a 10% range) peak acceleration compared to a “research grade” sensor.

While the idea of the study is nice, it must be stated that the theoretical background does not directly lead to the research question. The VERT device does not provide any sensor specifications, which already is a red flag. While the system provides jump count metrics, the idea of the authors was to compare peak acceleration between the two sensors. In my opinion a simulation study could be conducted first to back up the idea that a sensor with unknown sensor specs will still provide similar results as the “gold standard”, which should have been the force plate. Using the Shimmer sensor device (+/- 16 g, sampling frequency of 2048 Hz) during a jump could give some insights into how various sensor specs may affect the peak acceleration. The comparison between the two devices is also a little flawed, as it is not clear if the peak acceleration actually occurred during the landing period, which could have been tested with a force plate. I understand not every laboratory has a force plate, but in this case I would have started out with a technical comparison. Put both accelerometers on a sled or on an S&C machine to keep the same movement going (or a barbell, etc.) with a similar fixation (e.g. a latissimus machine, fixing them next to each other on the weights).

The problem with human testing is always the sensor placement. Has the sensor moved? Were the sensor positions changed? How is it guaranteed that the sensors actually do measure the same and one not its own movement also? Why not put both sensors on a rigid object and fix it to the body?

Overall I like your approach, but I suggest you go back to the drawing board and get some technical evidence before you run the jumping experiment. Fourteen subjects is not a lot, and while I usually refrain from telling authors to redo an experiment, I think that once COVID-19 has been conquered you could get more people in the lab. Also, you don’t need elite athletes, as you look at jumping in general and your tests were not volleyball- or high-performance-related. A CMJ does not reflect real playing behavior…

Please use the Equation tool for your signal magnitude calculation (also not really necessary as this is a very easy and standard procedure in signal processing)

Table 2 could be in the text so please remove

If you want to test the accuracy of VERT landing impact, then you need to compare it to the gold standard, but you also state that Shimmer is not the gold standard…

Thank you for your detailed and constructive feedback. While your queries about the VERT sensor specifications have been answered in the responses to the specific comments below, I will address your other comments here. In our study, we were very much interested in testing the VERT while worn in the position and orientation intended by the developers. Although it could have been beneficial to place the accelerometers on a machine or fixation, this would have defeated our purpose of conducting a very practical validation study that was as close to real-life conditions as possible. We were willing to accept the potential problems (such as the chance of the sensors moving) for this reason. The reason we opted for CMJs as opposed to volleyball-specific jumps was to encourage landing on two feet and, in turn, produce data that was easier to interpret and analyze.

Reviewer 2

General comments: The authors describe a descriptive research study that compared the simultaneous measurement of impact acceleration measures during jump landings in collegiate volleyball players. A commercially available device marketed to sports users (Vert) was compared to a commercially available research-grade accelerometer (Shimmer). There was poor agreement and concordance demonstrated for peak impact acceleration measures between the two devices. The authors identify several potential causes for the results, but the speculative nature of these reasons leads to questions about the methodology of the current study. These questions cast doubt on the internal validity of the current study methods.

Thank you for your time spent reviewing our manuscript. Hopefully the responses to the specific comments below will provide further insights about the methodology of our study.

Specific comments:

Lines 54-58: The sentences describing the pathophysiology of tendonopathy seem out of place in the introduction. Recommend deleting.

The details of the pathophysiology have been removed.

Lines 81-84: Are there references to support the benefits and limitations of Vert? Or, are these the authors’ observations? If the latter, other than the lack of accuracy studies I don’t think that this content fits in the introduction.

These are the authors' observations; therefore, no references were included. The discussion of the benefits and limitations has now been removed.

Lines 102-108: I’m curious as to the choice of a research-grade accelerometer (Shimmer) as the gold standard for this validation study. Many sports biomechanists would consider a force plate to be the gold standard for measuring landing impacts. Citing studies that report the validity of the Shimmer measures compared to force plate measures during walking gait is helpful, but jump landings occur at much higher speed than walking, so the relevance of the walking studies is marginal in relation to the current study. I believe that further justification of using the Shimmer as the gold standard measurement device in this study is warranted.

The rationale for selecting the Shimmer was two-fold. Firstly, the Shimmers were readily available to us whereas we did not have access to portable force platforms in the biomechanics lab. Secondly, the Shimmers have been used extensively in the literature (including validation studies) and information about their technical specifications is open to the public. In addition, there are possibly some reasons why the Shimmer would be a better alternative to force platforms as discussed in the limitations section of this paper. For these reasons, we decided that the Shimmer is the next best option to the force platform.

The aim of this study was to provide a first look at the validity of the VERT landing impact values, a novel contribution. We expect to see future research evaluating the VERT in relation to the force platform, and realize this is needed to be more certain of the validity.

Lines 114-115: Was an a priori sample size estimate performed?

No sample size estimate was deemed necessary for this particular study, given its exploratory nature. The sample size was determined based on the study designs of other validations of the VERT that have been conducted in the past (analyzing jump height and jump count). It is also important to note that the number of participants was limited by the COVID-19 crisis, which led to the early termination of our data collection.

Lines 132-133: The placement of the two devices isn’t clear. Was one device placed to the right of the navel and the other to the left of the navel? Or were both devices placed on either the left or the right side? Please revise to improve clarity.

Apologies for the lack of clarity on the placement of the devices. The language has been revised.

Line 135: What was the sampling rate for Vert?

The sampling rate for the VERT is proprietary information, and is not publicly available. For the purpose of this study, the VERT performance lab director privately disclosed this information to us, but we do not feel it is appropriate to include the actual sampling rate in the text. In all of the existing VERT validation studies, this proprietary information is not disclosed.

Line 137: More information on the court surface would be helpful. Was it hardwood or a synthetic surface?

The flooring of the university volleyball court was hardwood. This information has been included.

Line 176: Spelling should be “Altman”

Thank you for the correction.
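For context, the Bland-Altman analysis referenced here reduces to a mean bias and 95% limits of agreement over the paired differences. A minimal sketch of that computation, using hypothetical paired peak-acceleration values rather than the study's actual data:

```python
import statistics

def bland_altman(device_a, device_b):
    """Mean bias and 95% limits of agreement for paired measurements.

    device_a, device_b: equal-length sequences of paired values
    (e.g. peak landing accelerations from two devices).
    """
    diffs = [a - b for a, b in zip(device_a, device_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired peak accelerations (units of g), not study data
vert = [5.1, 6.3, 4.8, 7.0]
shimmer = [5.9, 6.8, 5.5, 7.6]
bias, lower, upper = bland_altman(vert, shimmer)
print(round(bias, 2), round(lower, 2), round(upper, 2))  # prints: -0.65 -0.9 -0.4
```

The published analysis expressed bias and limits as percentages, but the underlying paired-differences logic is the same.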

Lines 206-210: This information seems better suited as the figure legend rather than in the text of the Results section.

Thank you, this has been amended.

Line 229: I recommend deleting this sentence as it is not a statistically-based result.

This sentence has been deleted.

Line 255: Report the sampling rate for Vert if that information is publicly available.

As previously mentioned, we cannot report this value unfortunately.

Lines 257-258: Expand on the potential difference in algorithms for data processing between the two devices.

We do not have any information regarding the VERT algorithms. On the other hand, the Shimmer data is raw data. That is, no algorithms were applied.
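For readers, the "very easy and standard" signal magnitude calculation Reviewer 1 alludes to is simply the vector magnitude of the raw tri-axial signal, with the peak taken over the recording. A minimal sketch with hypothetical sample values (not the study's data):

```python
import math

def peak_resultant_acceleration(samples):
    """Peak vector magnitude over raw tri-axial accelerometer samples.

    samples: iterable of (ax, ay, az) tuples, e.g. in units of g.
    Each resultant is sqrt(ax^2 + ay^2 + az^2); the peak is the maximum.
    """
    return max(math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples)

# Hypothetical landing window (units of g); the spike is the impact
window = [(0.1, 0.0, 1.0), (0.4, 0.2, 2.9), (0.3, 0.1, 6.2), (0.0, 0.0, 1.0)]
print(round(peak_resultant_acceleration(window), 2))  # prints 6.21
```

This is the kind of processing one can apply directly to the raw Shimmer output; whatever the VERT applies internally remains undisclosed.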

Lines 258-260: I fail to see how variability in jump technique would influence the lack of agreement between the two measurement devices. Are you suggesting that there was a right-to-left asymmetry in jumping mechanics that would have affected the measurements by the sensors placed on the right or left side of the body?

Yes, you are correct. We are suggesting there could have been a right-to-left asymmetry in jumping mechanics.

Lines 261-263: Why wasn’t this done to ensure that sensor placement didn’t influence the primary results of the study? This seems like a glaring flaw that limits reader confidence in the current results.

We were able to reduce the likelihood of asymmetrical landings (i.e. one foot before the other) by specifically instructing participants to reach with two hands when performing the CMJs. This ensured landing on two feet. Additionally, any landings that were visibly asymmetrical during the data collection were identified and participants were asked to repeat these attempts. Only the repeats were included in the analysis.

Line 282: Were efforts made to contact the manufacturer of Vert to ascertain the sampling rate? If yes, explicitly state this. If no, I strongly encourage the authors to do so.

Kindly refer to the earlier responses regarding the VERT sampling rate.

Lines 295-299: The reasoning justifying the use of Shimmer as the gold standard (in lieu of a force plate) seems flawed based on the results of reference #30.

The intent was to acknowledge that although the results of reference #30 show moderate-to-low agreement between the Shimmer and the force plate, the Shimmer does have its benefits due to its nature as a discrete wearable device that can be positioned at different locations on the body.

Line 311: This is the first mention of “waist bands”. Please elaborate on this in the methods section.

This has been clarified in the methods section.

Lines 315-353: This section does not contextualize the results of the current investigation and seems disconnected from the rest of the manuscript.

The future directions section has been deleted from the manuscript.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

José M Muyor

2 Dec 2020

PONE-D-20-21022R1

Using the VERT wearable device to monitor jumping loads in elite volleyball athletes

PLOS ONE

Dear Dr. Scott,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript within 10 days. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

José M. Muyor

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear authors, during the initial review I contacted the VERT company and even received the sensor specifications, so you could have been a little more "brave" in discussing the sensor specifications. Having said this, I think you have improved the manuscript and it is up to standard. I wish you had improved your presentation of the results a little to make a stronger point for your case. In the end your conclusion is: Shimmer is better in detecting peak accelerations, which is a little "simple". Not every coach is really interested in the peak accelerations, as they clearly depend on the sensor location and sensor specs. Anyways, sorry to hear that your data collection was cut short due to the current global pandemic. Good luck with your future research!

Reviewer #2: The authors have adequately addressed my major concerns from the initial submission and the revised manuscript represents a marked improvement. I have a few further minor comments:

Line 115: Participants’ scores on the OSTRC-P should be reported along with the other demographic measures in Table 1.

Line 264: “on the ipad” – it would be better to say “on the measurement output produced by the Vert software”.

Line 271: Change “preferred” to “chose”

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No


PLoS One. 2021 Jan 22;16(1):e0245299. doi: 10.1371/journal.pone.0245299.r004

Author response to Decision Letter 1


19 Dec 2020

Reviewers' comments:

Reviewer #1: Dear authors, during the initial review I contacted the VERT company and even received the sensor specifications, so you could have been a little more "brave" in discussing the sensor specifications. Having said this, I think you have improved the manuscript and it is up to standard. I wish you had improved your presentation of the results a little to make a stronger point for your case. In the end your conclusion is: Shimmer is better in detecting peak accelerations, which is a little "simple". Not every coach is really interested in the peak accelerations, as they clearly depend on the sensor location and sensor specs. Anyways, sorry to hear that your data collection was cut short due to the current global pandemic. Good luck with your future research!

We appreciate all of your feedback and attention to detail, which has improved the quality of our work.

Reviewer #2: The authors have adequately addressed my major concerns from the initial submission and the revised manuscript represents a marked improvement. I have a few further minor comments:

Thank you very much. Please see the responses to your specific comments below.

Line 115: Participants’ scores on the OSTRC-P should be reported along with the other demographic measures in Table 1.

OSTRC-P scores are now included in Table 1.

Line 264: “on the ipad” – it would be better to say “on the measurement output produced by the Vert software”.

This has been amended.

Line 271: Change “preferred” to “chose”

This has been amended.

Attachment

Submitted filename: Response to Reviewers 2.docx

Decision Letter 2

José M Muyor

26 Dec 2020

Using the VERT wearable device to monitor jumping loads in elite volleyball athletes

PONE-D-20-21022R2

Dear Dr. Scott,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

José M. Muyor

Academic Editor

PLOS ONE

Acceptance letter

José M Muyor

2 Jan 2021

PONE-D-20-21022R2

Using the VERT wearable device to monitor jumping loads in elite volleyball athletes

Dear Dr. Scott:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. José M. Muyor

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Full de-identified dataset.

    (XLSX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers 2.docx

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.


    Articles from PLoS ONE are provided here courtesy of PLOS
