PLoS One. 2022 Sep 9;17(9):e0273865. doi: 10.1371/journal.pone.0273865

An open-source, low-cost voluntary running activity tracking tool for in vivo rodent studies

Grace E Deitzler 1,*,#, Nicholas P Bira 2,#, Joseph R Davidson 2, Maude M David 1,3
Editor: Dragan Hrncic
PMCID: PMC9462748  PMID: 36084055

Abstract

In vivo rodent behavioral and physiological studies often benefit from measurement of general activity. However, many existing instruments for tracking such activity are high in cost and invasive within home cages, and some require extensive separate cage systems, limiting their widespread use. We present here a low-cost, open-source alternative that measures voluntary wheel running activity, allows for modulation and customization, and comes with a reproducible, easy-to-use code pipeline for setup and analysis in Arduino IDE and R. Our robust, non-invasive, scalable voluntary running activity tracker utilizes readily accessible magnets, Hall effect sensors, and an Arduino microcontroller. Importantly, it can interface with existing rodent home cages and wheel equipment, thus eliminating the need to transfer the mice to an unfamiliar environment. The system was validated both for accuracy, using a rotating motor to simulate mouse behavior, and in vivo. Our recorded data are consistent with results in the literature showing that mice run between 3 and 16 kilometers per night, and the system continuously and accurately captures speed and distance traveled on the wheel. Such data are critical for analysis of highly variable behavior in mouse models and allow for characterization of behavioral metrics such as general activity. This system provides a flexible, low-cost methodology and minimizes the infrastructure and personnel required for tracking voluntary wheel activity.

1 Introduction

Voluntary wheel running behavior is a widely used metric to study motricity in rodent research. Laboratory mice will run voluntarily if they have access to a wheel in their cage [1]. Tracking changes in voluntary running behavior can be used to evaluate behavioral changes (such as general activity levels, which may correspond to anxiety or to adverse side effects in drug toxicity testing), exercise capacity (such as might be observed in genetically modified or disease model mice), and physiological function. Running activity can be influenced by a variety of conditions, including sex, age, diet, environmental conditions (ambient noise, temperature, etc.), and strain of mouse. Most voluntary running activity occurs during the dark phase of the 12 hr/12 hr light/dark cycle, because mice typically have nocturnal circadian rhythms [2].

Currently available systems for tracking voluntary running activity are often cost-prohibitive, sometimes require removing mice from the home cage during the tracking period, and in several cases require the user to purchase expensive proprietary software licenses. Low-profile and minimally invasive wireless options on the market can cost several thousand dollars for a system that covers only 6 cages (the scale our proposed tool is designed for), with the bulk of that cost covering licensing, wireless communication hubs, and computer hardware. Other systems can cost tens of thousands of USD for an entire housing system designed to replace the home cages. While these specialist home cage systems allow for collection of additional data (such as operant conditioning experiments), their cost can become a burden to researchers or instructors seeking basic activity tracking capabilities.

Mice are sensitive to changes in their environment, and being transferred between unfamiliar cages can cause increases in serum corticosterone levels and anxiety-like behaviors [3]. To accurately assess voluntary running behavior without the influence of external sources of stress, there is a need for a low-profile tracking system that keeps mice in their native cage environment, with a wheel they are already familiar with. Existing systems, as mentioned above, either do not keep mice in their home cages or come with a price tag, including cost-prohibitive software licenses and hardware, that can make them inaccessible to smaller research groups or teaching facilities.

Here we outline a robust, system-agnostic, low-cost alternative tracking system that can be built in a lab or home setting with easily accessible off-the-shelf materials and open-source software. All documentation, an itemized list of materials, .STL files for optional 3D printing, and source code are provided on GitHub [4]. The proposed method involves the use of a small, non-invasive magnet that can be attached to the home cage wheel with minimal disturbance to the mice. While this method is specifically designed to be used on an angled disc style wheel, the system could easily be adapted to other wheel types, as long as the magnet can be securely affixed. This system is ideal for laboratories that would like to measure the voluntary wheel activity in smaller-scale experiments that do not require the use of hundreds of cages. We describe the use of this system for six sensors, but it could easily be scaled up to 18 sensors with a single Arduino Uno, or more with different models of microcontroller or additional microcontrollers per computer. As long as the laptop computer is powered on, the system can run indefinitely, allowing for tracking over multiple nights or even weeks. In the following sections, we discuss how the system was built and validated, and then analyze data collected with the system in two usage scenarios.

2 Materials and methods

2.1 Building the tracker system

The tracker system design is based on the principles of a rotary encoder, a device frequently utilized in robotics, the automotive industry, and elsewhere. Its key function is to sense the state of a rotating shaft and provide feedback on the shaft’s angular position, speed, and/or direction of rotation. Rotary encoders frequently make use of Hall effect sensors and small magnets to encode this data, tracking the rotation of a shaft by triggering the sensor each time a magnet passes nearby.

For this application, the intention was to enable tracking of multiple mice in a single cage with minimal changes to their known environment. This need for minimal invasiveness, as well as the need to avoid wires in the cage that could pose a danger to the mice, suggested a wireless solution was necessary. To this end, slightly altering the existing exercise wheel to attach a small magnet, which can then wirelessly trigger a Hall effect sensor attached to the exterior of the cage, achieves these goals. This method allows for a scalable approach to multiple cages with mouse wheels and permits simultaneous and continuous monitoring of mouse activity levels without significantly altering the cage environment.

For the tracking system, an Arduino Uno R3 microcontroller was selected for its ease of programming and component integration, low cost, and wide availability of open-source software libraries. Normally, an Arduino Uno has only two pins available for interrupts. Interrupts are necessary for real-time monitoring, as the triggering of the Hall effect sensor can happen at any time. The Arduino code was written with this in mind and makes use of the EnableInterrupt library [5]. This additional library enables assigning interrupt functionality to all pins on the Arduino, allowing for multiple sensors beyond the default two. The Arduino is connected to a laptop over USB and generates a .txt file containing the timestamps and distance traveled through communication with the software PuTTY running on the laptop. Beyond the average velocity and distance traveled discussed in this paper, these data can be analyzed to measure peak velocities, the distribution of rest periods, and other useful metrics of mouse activity.
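To make this interrupt-based counting scheme concrete, a minimal Arduino sketch is shown below. It is an illustration, not the exact firmware from the repository [4]: the pin assignments, the 50 cm wheel circumference, and the 4 Hz reporting interval are placeholder assumptions, and it assumes the A3144 modules pull their output LOW when the magnet passes.

```cpp
#include <EnableInterrupt.h>  // allows interrupts on any pin, beyond the Uno's two hardware interrupt pins

const byte NUM_SENSORS = 6;
const byte sensorPins[NUM_SENSORS] = {2, 3, 4, 5, 6, 7};  // placeholder pin assignments
const float WHEEL_CIRCUMFERENCE_CM = 50.0;                // assumed value; measure the actual wheel

volatile unsigned long rotations[NUM_SENSORS] = {0};

// One interrupt routine per sensor: each falling edge = one magnet pass = one wheel rotation.
void countWheel0() { rotations[0]++; }
void countWheel1() { rotations[1]++; }
void countWheel2() { rotations[2]++; }
void countWheel3() { rotations[3]++; }
void countWheel4() { rotations[4]++; }
void countWheel5() { rotations[5]++; }
void (*isrTable[NUM_SENSORS])() = {countWheel0, countWheel1, countWheel2,
                                   countWheel3, countWheel4, countWheel5};

void setup() {
  Serial.begin(9600);  // PuTTY on the laptop logs this serial stream to a .txt file
  for (byte i = 0; i < NUM_SENSORS; i++) {
    pinMode(sensorPins[i], INPUT);  // the sensor breakout boards provide their own pull-up resistor
    enableInterrupt(sensorPins[i], isrTable[i], FALLING);  // A3144 output goes LOW near the magnet
  }
}

void loop() {
  // Report a timestamp and the cumulative distance per cage every 250 ms (4 Hz).
  Serial.print(millis());
  for (byte i = 0; i < NUM_SENSORS; i++) {
    noInterrupts();                      // guard the multi-byte read against a concurrent interrupt
    unsigned long r = rotations[i];
    interrupts();
    Serial.print('\t');
    Serial.print(r * WHEEL_CIRCUMFERENCE_CM);
  }
  Serial.println();
  delay(250);
}
```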

2.1.1 Assembling the wheel and cage

Wheels used were angled running discs on plastic huts (Bio-Serv, product numbers K3328 and K3251) (Fig 1A), with a diameter of approximately 15-20 cm, made of solid plastic flooring, and attached to the top of the plastic hut. A small magnet, 11 mm in diameter, was affixed to the outer edge of the wheel either with epoxy or by placing it into a small 3D-printed jacket slid over the edge and held in place with a small amount of non-toxic PVA glue. The wheel was then reattached to the hut and aligned with the front side of the cage such that the flat face of the magnet passed by the wall in its trajectory. A small amount of non-toxic PVA glue was applied to the bottom of the hut and allowed to dry so that the hut would not move during the testing period. Following the alignment of the wheel to the wall, the Hall effect sensor was affixed to the outside of the cage wall adjacent to the path of the magnet during a full wheel rotation (Fig 1B). Mice were placed in the home cages after the setup was complete.

Fig 1.

A. The magnet attached to the wheel and hut. B. The Hall effect sensor, connected via wires to the Arduino. A red LED indicates when the sensor detects a magnet. C. The 3D-printed housing for the Arduino and breadboard, with wires for six sensors leaving the enclosure. D. Aligning and connecting the Hall effect sensor to overlap the rotation path of the magnet inside the cage. E. Example of the system connected to 6 cages. The wires are all leading back to the Arduino, which is connected via USB to a laptop.

2.2 Bill of materials

Assuming access to 3D printing (many universities provide 3D printing services at low cost to students and researchers), preexisting rodent cages and wheels, and ownership of a laptop computer, the total cost for implementing the designs described above is approximately $70 USD at the time of this paper’s submission, as shown in Table 1 (see Table 2 for a more detailed description of each component). A laptop computer capable of running PuTTY or a similar program for serial communication with the microcontroller is required. A Prusa MK3S desktop 3D printer was used to fabricate the custom 3D-printed housings for both the magnets attached to the wheel and the electronics. This step is optional, but it does simplify removal of the magnet for cleaning. Making use of cheaper components, or sourcing from other retailers, could drop the total price below $50 USD as of October 2021.

Table 1. A bill of materials for constructing the tracker system.

Part Cost
A) Arduino Uno (R3) $23.00
B) Hall Effect Magnetic Sensors (A3144) $6.00
C) Small Magnets (10 mm x 3 mm) $14.00
D) Non-toxic PVA Glue $8.26
E) Long Breadboard Jumper Wires $9.00
F) Breadboard $9.00
G) 3D Printing Filament (PLA, 80g) $2.40
Total: $71.66

Table 2. Detailed description of each component.

Part Purpose Cost (in USD at time of Submission)
A) Arduino Uno (R3) An open-source, low-cost microcontroller for the purpose of communicating with all mouse wheel sensors and recording the ongoing distance traveled. $23.00
B) Hall Effect Magnetic Sensors (A3144) Small, standardized magnetic sensors to detect the state of a magnetic field. These include the necessary pull-up resistor and a red LED on an integrated PCB, simplifying circuit construction and allowing for visual validation of sensor triggering. $6.00
C) Small Magnets (10 mm x 3 mm) Once attached to the mouse wheel, the magnet rotates around the perimeter and triggers the Hall effect sensors to count a single wheel rotation. $14.00
D) Non-toxic PVA Glue Washable adhesive necessary to glue the wheel in place, ensuring proper alignment between the sensors and the rotating magnet. $8.26
E) Long Breadboard Jumper Wires Wires to connect the Hall effect sensors to the microcontroller. $9.00
F) Breadboard Hub for wiring to break out shared power and ground for all sensors. $9.00
G) 3D Printing Filament A 1 kg roll of PLA filament costs around $30, and the printed components of our system use approximately 80 g to print. $2.40
Total: $71.66

2.3 Validating the tracker system: Mechanical test

A robotic “mouse” was created to test the functionality of the tracking apparatus. This “mouse” consisted of a single foam wheel attached to a DC motor and Arduino, rotated at a constant velocity for three intervals. The foam wheel was placed upon one of the mouse wheels with an attached magnet, and five sensors were placed adjacent to each other, following the curvature of the rotating mouse wheel. The system ran for 85 seconds at a sampling frequency of 4 Hz and recorded the data shown in Fig 2. This validation demonstrated 100 percent agreement between all sensors at the end of the test, and any variability present during the recording process arose as an artifact of the PuTTY sampling rate rather than of sensor reliability.

Fig 2.

A. Top: The measured distance traveled by the mouse wheel over the course of 85 seconds, shown for all five sensors at once. B. Bottom: Average velocity of the mouse wheel, smoothed with a small (4-point) averaging window.

2.4 Validation experiments: In vivo

All mice in this validation test were retired breeder C57BL/6J females (Jackson Laboratories), approximately 15 weeks of age. Mice were excluded if they were in poor health or had previously received experimental treatments from other ongoing laboratory projects. Validation of the tracker system was accomplished by two means: assessing the distance traveled and velocity of four mice in a single cage across two nights (n = 1 cage), and assessing the distance traveled by the same four mice split into four separate cages across two nights (n = 4 cages). Mice remained in their home cage in the facility for the first part of the test and were moved into new individual cages for the second part. A wheel with the magnet attached replaced their previous wheel (same style and brand, sans magnet). Tracking began at 7 PM, after the dark cycle at the mouse facility had begun, and ended at 8 AM on the morning of the second day. The tracking therefore occurred over an entire light-dark cycle and an additional dark cycle. During this time mice had typical ad libitum access to their standard chow and water. To reduce the potential for confounding effects, the same handler was responsible for all setup of the tracker system and any handling of mice, and all studies were carried out in the same room in the mouse facility. All procedures and experiments involving mice performed in the study were carried out according to and approved by the Oregon State University Institutional Animal Care and Use Committee, under Animal Care and Use Protocol #5127.

2.5 In vivo usage scenario experiment comparing wheel activity prior to and during caloric restriction

All mice used in this experiment were male, between 10-13 weeks of age at the start of the test. Mice were CNTNAP2 knockout mice (B6.129(Cg)-Cntnap2tm1Pele/J, homozygous genotype, from Jackson Laboratories, JAX stock number 017482) [6]. Each of 8 cages contained three mice (n = 8 cages); these mice were available to the authors due to a concurrent study occurring in the laboratory. Mice were fed a normal chow diet from weaning, before a three-day period of caloric restriction at 80% of the normal diet. Running activity was measured for approximately 10 hours overnight prior to restriction to establish a baseline, and for 10 hours overnight following the three-day period of caloric restriction. All procedures and experiments involving mice performed in the study were carried out according to and approved by the Oregon State University Institutional Animal Care and Use Committee, under Animal Care and Use Protocol #5127.

2.6 Data processing and statistical analysis

Statistics, processing, and plotting for in vivo experiments were done using R version 3.6.3 (R Core Team, 2020) with the ggplot2 (v3.3.2) [7] and reshape2 (v1.4.4) [8] packages. Processing and analysis of the “robotic mouse” data was done in MATLAB (R2020a). The full reproducible code and data can be found on our GitHub [4]. To test our hypothesis for the caloric restriction test, we used a permutation test on the means of the per-cage differences in distance traveled before and after caloric restriction. In more detail: we measured the distance each cage (n = 8, with 3 mice per cage) ran for 3 days, 10 hours a day, and summed that distance per cage (‘before distance’). Then we measured the distance each cage ran after caloric restriction for 3 days, 10 hours a day, and summed that distance (‘after distance’). We generated 1000 permutations by randomly shuffling each cage’s before/after distances and took the mean across cages. These means produce the null distribution. We then measured the actual mean of the before/after differences and calculated the area under the curve that is more extreme than the actual measured mean value. We chose this test over a t-test to account for the non-normal distribution, and over a Wilcoxon test because we wished to test the difference in means in a non-parametric way (rather than the rank sum).
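As a concrete illustration, a minimal R sketch of this permutation procedure is shown below; the distance values and object names (before, after) are placeholders rather than the study data, which are available on GitHub [4].

```r
set.seed(42)

# Summed overnight distance per cage (km), before and after caloric restriction.
# Placeholder values only -- the real per-cage data are on the project GitHub [4].
before <- c(4.1, 6.3, 2.8, 5.5, 7.0, 3.9, 5.1)
after  <- c(5.0, 7.2, 3.5, 6.8, 7.4, 4.6, 6.0)

observed <- mean(after - before)  # observed mean before/after difference across cages

# Null distribution: randomly swap each cage's before/after labels
# (equivalent to flipping the sign of its difference) and recompute the mean.
n_perm <- 1000
null_means <- replicate(n_perm, {
  flip <- sample(c(1, -1), length(before), replace = TRUE)
  mean(flip * (after - before))
})

# One-sided p-value: fraction of permuted means at least as extreme as the observed mean.
p_value <- mean(null_means >= observed)
p_value
```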

3 Results

3.1 Validating the tracker system: Mechanical test

While the robotic mouse was active, the apparatus was rotated at a known constant speed and traveled a set distance before pausing (Fig 2A). This was controlled programmatically while the number of rotations was visually observed as a control. All five sensors recorded the wheel rotations simultaneously, resulting in the overlaid sensor outputs shown in Fig 2A. The only slight variation in recording between sensors is visible during the first interval (Fig 2B near the 20 second mark, highlighted in red on the right side of the figure), when the rotation of the wheel matched up with the refresh rate of the Arduino (0.25 seconds), resulting in a slight misalignment in record keeping at that specific time point. However, this misalignment disappeared after a few more rotations, resulting in equivalent record keeping for all sensors and demonstrating the robustness of the system for collecting wheel rotations simultaneously. In Fig 2B, the same distance data were plotted as an average speed with a moving average of 4 data points to smooth the plotted line. This visualization shows the top speed of the mouse wheel to be around 44.5 cm/sec while the robotic mouse was moving, and 0 while not moving. The recorded data were in complete agreement with the distance traveled by the robotic mouse, both from visual counting and from the programmed distance traveled.
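For readers reproducing this distance-to-velocity conversion in R (the robotic-mouse analysis itself was done in MATLAB), a minimal sketch is shown below; the log file name and its two-column layout (timestamp in ms, cumulative distance in cm) are assumptions for illustration, not the exact format of the serial log described above.

```r
library(ggplot2)

# Placeholder log: columns are timestamp (ms) and cumulative distance (cm) for one sensor.
log <- read.delim("cage1_log.txt", header = FALSE, col.names = c("ms", "dist_cm"))

# Instantaneous speed between consecutive samples (cm/s).
dt    <- diff(log$ms) / 1000
dv    <- diff(log$dist_cm)
speed <- dv / dt

# Smooth with a 4-point moving average, analogous to the smoothing used for Fig 2B.
smooth4 <- stats::filter(speed, rep(1 / 4, 4), sides = 1)

# Plot smoothed speed over time.
df <- data.frame(time_s = log$ms[-1] / 1000, speed_cm_s = as.numeric(smooth4))
ggplot(df, aes(time_s, speed_cm_s)) + geom_line()
```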

3.2 Validation experiments: In vivo

When four mice were housed in a single cage, the total distance run over 45 hours (indicated in blue) was just under 32 kilometers (Fig 3A). However, when the mice were separated into four different cages, each individual mouse ran between 2 and 12 kilometers, together totaling less than 24 kilometers (Fig 3B). Fig 3A also shows the change in velocity over the course of the study, reflecting changes in activity as the light-dark cycle changes. This test shows that our system can accurately capture changes in running rate over the two days and, when considered alongside the light-dark cycle, can reflect changes in the rodents’ active hours.

Fig 3.

A. The measured distance traveled (blue) and velocity (red) on a single mouse wheel over the course of 50 hours for four mice in one cage. Yellow and blue panels indicate the light-dark cycle throughout the study period. B. Measured distance traveled for four mice in individual cages over the course of 50 hours. Upon inspection during take-down of the system, the wheel in Cage 4 appeared to have been jammed with bedding at some point during the first night, so rotations of the wheel were not possible and the tracker ceased measurements.

3.3 In vivo caloric restriction in CNTNAP2 mice

Here we hypothesized that restricting the caloric intake of mice would result in an increase in locomotor activity, as measured by distance traveled, compared to the baseline of typical caloric intake. An increase in distance traveled was observed in most cages following caloric restriction (see Fig 4). This difference in distance between the baseline and caloric restriction measurements was not found to be statistically significant but did show an upward trend. To test our hypothesis, we calculated the null distribution using 1,000 label permutations and used the area under the curve of the null distribution to assess significance (p = 0.0876). On the night of tracking following caloric restriction, bedding was stuck in Cage 5’s wheel and no data were collected, so Cage 5 was removed from the statistical analysis.

Fig 4.

A. Distance traveled overnight by each cage prior to caloric restriction. B. Distance traveled overnight by the same cages following three days of caloric restriction at 80% of the normal diet. C. Comparison of total distance before and after caloric restriction. When Cage 5 was removed from the analysis, the difference between baseline and caloric restriction was not found to be statistically significant (permutation test, p = 0.0876).

4 Discussion

We achieved our aim of creating a low-cost, accessible testing apparatus that implements open-source code pipelines and provides an affordable option for laboratories with minimal resources and personnel. Here we have presented two in vivo usage scenarios to demonstrate the efficacy of this low-cost system. We were able to track the distance and velocity at which the mice ran on their wheels over the course of the experimental period. Fig 3A demonstrates that the mice are most active during the dark hours and that the changes in velocity correspond with changes in the slope of the distance curve. We were also able to detect differences between cages depending on whether they held a solitary mouse or a group of mice. This disparity could be explained by the fact that, as social animals, mice tend to be more active when housed with other mice; on their own they may not run as much as when they are all together. Additionally, a single more active mouse could have driven the total distance in the group-housed cage. Finally, for the portion of the study assessing mice in individual cages, mice were moved out of the home cage they had resided in prior to the study, which could have introduced stress and affected their activity levels.

Although we did not find a significant difference in distance traveled before and after the caloric restriction period for the CNTNAP2 mice, we did observe a trend indicating mice traveled more following the restriction period. Despite the reduction in caloric intake, an increase in activity (here measured by distance traveled over the night) has been observed in several rodent studies [9]. The hypothesized explanation for this phenomenon is that an increase in foraging activity during times of nutritional stress is required for survival by rodents in the wild, so the general upward (though not significant) trend in wheel running observed here could be interpreted as a proxy for that foraging and searching behavior. A future study to robustly assess these changes using our described tracker system would need to include a much larger number of cages in order to have sufficient power to detect a significant effect.

The proposed system is ideal for short-term (less than a week) monitoring of wheel running as a measure of general activity, for example in post-surgery monitoring or short-term drug toxicity studies, but it could also be utilized for long-term observation of activity patterns or behavioral responses to stimuli. Short-term experiments are optimal with the described system because the amount of recorded data can eventually become large and difficult to manage computationally during analysis, and longer experiments have a greater chance of mechanical disruption (bedding jamming the wheel); both issues can be circumvented by checking the cages and saving ongoing files to the computer every few days. The system only disturbs the home cage during setup and can subsequently record data for as long as necessary without removing or altering the home cage at all. Additionally, personnel are only needed during setup and resets (as described in the scenario above for running longer-term measurements), as the system can run continuously as long as the laptop it is connected to remains powered. A limitation of this approach is that cage cleaning and feedings may require realigning the sensors and wheels, resulting in a restart of data collection. To improve this alignment, slight modifications to the cages and wheels, such as adding pegs and small holes, would ensure reliable wheel and sensor alignment without manual adjustment and gluing. Our results show that the system can produce results that align with the current literature on how far mice run in a 24-hour cycle [2], and our validation via controlled rotation of the wheel demonstrates that our system can accurately measure wheel rotations. The tracker can run during both light and dark cycles in the facility, giving researchers the advantage of observing the full range of wheel activity, including nocturnal behaviors.

Our system also presents some limitations. One such limitation is that, with multiple mice in a single cage, our system cannot detect which mouse is running (or how many mice are running in total) on the wheel at any given time, as shown in Fig 3A. For this reason, many cage-compatible tracking systems traditionally house mice individually. However, single-housing of rodents can lead to increased anxiety, a reduction in cognitive performance, and increased biological stress markers [10]. Such issues could be resolved by using the system in conjunction with a video camera outside the cage, along with video tracking and RFID tools that can identify the number of animals on the wheel at any given time [11]. Systems such as the one developed by Singh et al. [12] achieve similar goals with a visual tracking approach, and combining the system presented here with the capabilities demonstrated in that study would further deepen the richness of the collected data. Furthermore, we only present here a set-up allowing tracking of six cages at a time. However, Arduino-based microcontrollers used with the EnableInterrupt library support as many inputs as there are available pins on the microcontroller. This allows scalability for researchers wishing to implement the system in additional cages, up to 18 for an Arduino Uno and even more for larger microcontrollers such as the Arduino Mega (up to 54). A single laptop computer could accommodate multiple microcontrollers at once through USB, enabling multiple microcontrollers to record multiple cages in parallel, should a researcher wish to scale the system beyond what a single microcontroller can record. Another limitation of this wheel-sensor design is that it utilizes a single sensor per wheel. Our counting methodology assumes that one reading on the sensor is equivalent to one rotation of the wheel. This holds true for continuous use, but may introduce error in the long term as mice get on and off the wheel, resulting in partial rotations. One way to validate full counts would be to place another sensor on the opposite side of the wheel and only count a rotation when both sensors trigger in sequence; however, this would double the number of sensors while being difficult to implement without bringing the sensors fully into the cage, undermining the overall value of the system. As a result, one sensor was deemed sufficient for our tests, and the cumulative error from miscounting artefacts is likely multiple orders of magnitude lower than the total distance traveled overnight (within a few centimeters, accounting for when mice get on or off the wheel).

5 Conclusions

Studying complex animal behavior such as voluntary wheel running often involves moving mice from their home cage environment into an unfamiliar apparatus, requires extensive time and personnel to set up, and is costly. The stress of placing mice in an unfamiliar environment may cause spurious phenotypic results, possibly influencing the behavior and metabolism of murine subjects. Monitoring behavior in the home cage with minimal alterations to the wheel or cage therefore confers a great advantage to researchers wishing to study voluntary running behavior.

In this paper we present a low-cost, open-source, minimally invasive, and reliable voluntary wheel running tracking system that can be scaled and implemented in the home cage of the rodents. We demonstrated that this system provides reliable and robust tracking capabilities and is low-cost, using accessible, off-the-shelf materials. The open-source nature of the system allows for expansion of both the hardware and the software, leaving open the possibility of modifications such as setting automatic timers for data collection, automated uploading of the data to an online repository, and expanding the number of cages run at a single time. The system can be scaled up for high-throughput analyses and is suitable for remote running activity monitoring in the home cage as a useful tool for behavioral analysis.

Acknowledgments

The authors would like to thank Mae Araki and Dr. Jennifer Sargent at the Laboratory Animal Resource Center at Oregon State University for animal care support and consultation. We would also like to thank Alexandra Phillips and Maya Livni for their assistance in tracker setup for the cages in the caloric restriction study, and Christine Tataru for guidance on the statistical analysis.

Data Availability

The list of materials, data, and all source code can be found at https://github.com/MaudeDavidLab/Motricity_Tracker [4].

Funding Statement

Research was supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1840998 (NPB and GED), the National Institutes of Health Small Business Innovation Research Grant #R44 DA043954 03 by NIH National Institute on Drug Abuse, and the Oregon State University College of Science Research and Innovation Seed (SciRIS-ii) Program award (MMD). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. NSF GRFP: https://www.nsfgrfp.org/ SBIR: https://sbir.nih.gov/ College of Science at Oregon State University SciRIS-ii: https://internal.science.oregonstate.edu/rdu/internal-research-funding-program

References

1. de Visser L, van den Bos R, Spruijt BM. Automated home cage observations as a tool to measure the effects of wheel running on cage floor locomotion. Behavioural Brain Research. 2005;160(2):382–388. doi: 10.1016/j.bbr.2004.12.004
2. Manzanares G, Brito-da Silva G, Gandra P. Voluntary wheel running: patterns and physiological effects in mice. Brazilian Journal of Medical and Biological Research. 2018;52(1). doi: 10.1590/1414-431x20187830
3. Rasmussen S, Miller MM, Filipski SB, Tolwani RJ. Cage change influences serum corticosterone and anxiety-like behaviors in the mouse. Journal of the American Association for Laboratory Animal Science. 2011;50(4):479–483.
4. Bira N, Deitzler G, David M. Motricity Tracker; 2020. https://github.com/MaudeDavidLab/Motricity_Tracker.
5. Schwager M. EnableInterrupt; 2019. https://github.com/GreyGnome/EnableInterrupt.
6. Peñagarikano O, Abrahams BS, Herman EI, Winden KD, Gdalyahu A, Dong H, et al. Absence of CNTNAP2 leads to epilepsy, neuronal migration abnormalities, and core autism-related deficits. Cell. 2011;147(1):235–246. doi: 10.1016/j.cell.2011.08.040
7. Wickham H. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York; 2016. https://ggplot2.tidyverse.org.
8. Wickham H. Reshaping data with the reshape package. Journal of Statistical Software. 2007;21(12):1–20. doi: 10.18637/jss.v021.i12
9. Mitchell SE, Delville C, Konstantopedos P, Derous D, Green CL, Wang Y, et al. The effects of graded levels of calorie restriction: V. Impact of short term calorie and protein restriction on physical activity in the C57BL/6 mouse. Oncotarget. 2016;7(15):19147. doi: 10.18632/oncotarget.8158
10. Manouze H, Ghestem A, Poillerat V, Bennis M, Ba-M’hamed S, Benoliel J, et al. Effects of single cage housing on stress, cognitive, and seizure parameters in the rat and mouse pilocarpine models of epilepsy. eNeuro. 2019;6(4). doi: 10.1523/ENEURO.0179-18.2019
11. Valientes DA, Raus AM, et al. An Improved Method for Individual Tracking of Voluntary Wheel Running in Pair-housed Juvenile Mice. Bio-protocol. 2021;11(13):e4071. doi: 10.21769/BioProtoc.4071
12. Singh S, Bermudez-Contreras E, Nazari M, Sutherland RJ, Mohajerani MH. Low-cost solution for rodent home-cage behaviour monitoring. PLoS ONE. 2019;14(8):e0220751. doi: 10.1371/journal.pone.0220751

Decision Letter 0

Dragan Hrncic

15 Jun 2022

PONE-D-21-39460
An open-source, low-cost voluntary running activity tracking tool for in vivo rodent studies
PLOS ONE

Dear Dr. Deitzler,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 30 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Dragan Hrncic

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following financial disclosure: 

Research was supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1840998 (NPB and GED) and the National Institutes of Health Small Business Innovation Research Grant #R44 DA043954 03 by NIH National Institute on Drug Abuse (MMD).

NSF GRFP: https://www.nsfgrfp.org/

SBIR: https://sbir.nih.gov/

Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." 

If this statement is not correct you must amend it as needed. 

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

3. Thank you for stating the following in the Acknowledgments Section of your manuscript: 

Research was supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1840998 (NPB and GED) and the National Institutes of Health Small Business Innovation Research Grant #R44 DA043954 03 by NIH National Institute on Drug Abuse (MMD).

However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. 

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: 

Research was supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1840998 (NPB and GED) and the National Institutes of Health Small Business Innovation Research Grant #R44 DA043954 03 by NIH National Institute on Drug Abuse (MMD).

NSF GRFP: https://www.nsfgrfp.org/

SBIR: https://sbir.nih.gov/

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. Thank you for stating the following in the Competing Interests section: 

MMD has financial interests relative to the activity of Second Genome, and Second Genome could benefit from the outcomes of this research. The other authors have no conflicts of interest to declare that are relevant to the content of this article.

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. 

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

5. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. 

6. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In their manuscript „An open-source, low-cost voluntary running activity tracking tool for in vivo rodent studies “, the authors describe a “simple” solution to a cost-effective method for tracking activity data in a multi-cage setup. They use an Arduino microcontroller to process the data and Matlab/R to evaluate the sensor robustness and statistics. An in vivo caloric restriction experiment is selected as a use case to demonstrate the method.

First, I would like to say that I am very familiar with commercial systems that can do the same but are rather pricy. Therefore, I welcome this “hands-on” method to establish a validated and low-cost solution to obtain multiple time-resolved activity patterns. Furthermore, getting total wheel-running counts over time (e.g., overnight) is trivial. Therefore, the innovative focus of this study lies in a) cost-effectiveness, b) the ability to run n cages in parallel, and c) the time resolution of activity patterns. I find it a bit sad that the analytical focus of the analysis was not more on c), but I also understand that the main aim of the manuscript was to present the validated setup.

The points a-c are addressed in the manuscript. I want to thank the authors for their work and encourage them to continue this kind of work and, e.g., develop more measures for severity assessment that can be implanted this easily.

The manuscript is well-written, good to understand, and straightforward. The authors also address critical behavior-related activity issues like housing conditions, increased anxiety, and food-related stress.

However, some minor things need to be addressed before I can recommend the manuscript for publication.

a) Since this is an animal study (at least the caloric restriction experiment): did the authors include the ARRIVE guidelines with the manuscript?

b) In the caloric restriction experiment, there is no hypothesis. However, the authors mention that they measured the difference in distance between “baseline and caloric restriction measurements” and that this was statistically significant.

o I cannot see an initial hypothesis and how the effect is (potentially) biologically meaningful

o I guess that there was no power analysis done before the experiment. Therefore, we cannot know whether the result is sufficiently powered or meaningful. If the authors have done an a priori power analysis, they should include it (with their hypothesis).

If they haven’t, they should explain their level of significance threshold.

o The term “statistically significant” is not self-sufficient as a result. Without an effect or hypothesis, this statement is meaningless.

o A type-1 error or α-error of p<= 0.05 is the threshold for “statistical significance” in general science. However, the results report “p=0.0876” (line 170) above this threshold. Therefore, the result is NOT significant. Why was it termed “significant” in the text?

o However, scientists can (when there is a good reason) change that threshold, e.g., to p=0.1 (e.g., if they followed Fisher’s definition of the p-value). In this case, the result would be significant. But without a hypothesis, this is again meaningless, and the authors also give no reason why the level should be larger than the commonly accepted threshold.

o Please provide context for the reported p-value and why this should be significant and/or adjust the reporting of the result.

c) Typo in line 170: “varFiation” (variation?)

d) The authors analyzed the data “through calculation of the empirical cumulative distribution using 1,000 label permutations, followed by a one sided tailed test”.

o This needs a better explanation: What kind of test did the authors use and why?

o I understand a permutation test and that this test can, e.g., be a t-test that is permuted on the ECDF data differences. Was this a t-test?

o And, why was a one-tailed test chosen and not a two-tailed (this information should be given in the missing hypothesis as mentioned above)? The one-tailed design hints that the authors at least expected a lower/higher development in one of the groups, other than a general difference (two-tailed).

o ECDF functions can also be analyzed with a Kolmogorov-Smirnov test; Was it a KS-test?

o The term “one sided tailed test“ is incorrect. Usually, this is called a one-tailed xy-test. Sided and tailed means the same here.

In light of these points, I suggest a minor revision before publication.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Sep 9;17(9):e0273865. doi: 10.1371/journal.pone.0273865.r002

Author response to Decision Letter 0


3 Aug 2022

* This response is contained in the document uploaded "Response to Reviewers".

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In their manuscript „An open-source, low-cost voluntary running activity tracking tool for in vivo rodent studies “, the authors describe a “simple” solution to a cost-effective method for tracking activity data in a multi-cage setup. They use an Arduino microcontroller to process the data and Matlab/R to evaluate the sensor robustness and statistics. An in vivo caloric restriction experiment is selected as a use case to demonstrate the method.

First, I would like to say that I am very familiar with commercial systems that can do the same but are rather pricy. Therefore, I welcome this “hands-on” method to establish a validated and low-cost solution to obtain multiple time-resolved activity patterns. Furthermore, getting total wheel-running counts over time (e.g., overnight) is trivial. Therefore, the innovative focus of this study lies in a) cost-effectiveness, b) the ability to run n cages in parallel, and c) the time resolution of activity patterns. I find it a bit sad that the analytical focus of the analysis was not more on c), but I also understand that the main aim of the manuscript was to present the validated setup.

The points a-c are addressed in the manuscript. I want to thank the authors for their work and encourage them to continue this kind of work and, e.g., develop more measures for severity assessment that can be implanted this easily.

The manuscript is well-written, good to understand, and straightforward. The authors also address critical behavior-related activity issues like housing conditions, increased anxiety, and food-related stress.

Thank you very much for valuing our study and the idea presented here of circumventing cost obstacles to track rodent activity in their home cage. We agree with the reviewer that we did not emphasize enough the real time tracking capability of this device. We have added a measure of velocity to Figure 3A to show the time resolution of both distance and velocity over the course of the study, to demonstrate that subtle changes and trends throughout the light-dark cycle can be detected, and have expanded the discussion regarding this aspect of our work page 9, line 185.

However, some minor things need to be addressed before I can recommend the manuscript for publication.

a) Since this is an animal study (at least the caloric restriction experiment): did the authors include the ARRIVE guidelines with the manuscript?

Thank you for pointing us to this useful resource, we appreciate it. We have downloaded and checked all the items in the E10 guideline for our study to make sure this paper will be easily reproducible and useful to the scientific community. We have uploaded the E10 checklist to the GitHub repository for this project to make sure it is openly available with the rest of our data and manuscript.

b) In the caloric restriction experiment, there is no hypothesis. However, the authors mention that they measured the difference in distance between “baseline and caloric restriction measurements” and that this was statistically significant.

o I cannot see an initial hypothesis and how the effect is (potentially) biologically meaningful

Thank you for this comment. We have added clearly our hypothesis in line 191 on page 9, which is that caloric restriction in mice will exacerbate foraging behavior which will translate on the motricity tracker by a higher recorded traveled distance.

o I guess that there was no power analysis done before the experiment. Therefore, we cannot know whether the result is sufficiently powered or meaningful. If the authors have done an a priori power analysis, they should include it (with their hypothesis).

If they haven’t, they should explain their level of significance threshold.

The term “statistically significant” is not self-sufficient as a result. Without an effect or hypothesis, this statement is meaningless.

o A type-1 error or α-error of p<= 0.05 is the threshold for “statistical significance” in general science. However, the results report “p=0.0876” (line 170) above this threshold. Therefore, the result is NOT significant. Why was it termed “significant” in the text?

o However, scientists can (when there is a good reason) change that threshold, e.g., to p=0.1 (e.g., if they followed Fisher’s definition of the p-value). In this case, the result would be significant. But without a hypothesis, this is again meaningless, and the authors also give no reason why the level should be larger than the commonly accepted threshold.

Given that we were testing this for the first time and were using this experiment as an assessment of the efficacy of the tracker system, we did not do a power analysis beforehand. If we approximate that the data follow a normal distribution, we determined via power analysis using the pwr package in R that we would need 10 samples, or cages (paired; assessed both prior to and following the caloric restriction phase), to be able to find a significant difference between the two groups based on a power level of 0.8 and a significance level of 0.05. This result relates to the reviewer’s comment about the significance of the results: we found the difference to not be significant at the p < 0.05 threshold but found a trend (we have edited our manuscript, page 10, line 195, to reflect this). We have now added these points in the discussion. We believe, however, that we have demonstrated the benefits and robustness of our tool as used in an in vivo scenario when compared to the mechanical wheel.

o Please provide context for the reported p-value and why this should be significant and/or adjust the reporting of the result.

We thank the reviewer for pointing out this oversight. We have edited the results section as indicated above to reflect the non-significance of the results and kept a threshold of 0.05 (as indicated above in the power analysis).

c) Typo in line 170: “varFiation” (variation?)

Thank you for noticing this, we have fixed this typo in the text.

d) The authors analyzed the data “through calculation of the empirical cumulative distribution using 1,000 label permutations, followed by a one sided tailed test”.

o This needs a better explanation: What kind of test did the authors use and why?

We agree, this does require a more complete explanation. We’ve added the following to the methods section, line 149:

We used a permutation test on the means of the differences between each cage before/after caloric restriction distance traveled. In more detail: we measured the distance each cage (n = 8, with 3 mice per cage) ran for 3 days, 10 hours a day and summed that distance per cage (‘before distance’). Then we measured the distance each cage ran after caloric restriction for 3 days, 10 hours a day, and summed that distance (‘after distance’). We generated 1000 permutations by randomly shuffling each cages’ before/after distance, and took the mean across cages. These means produce the null distribution. We then measured the actual mean of before/after distances, and calculated the area under the curve that is more extreme than the actual measured mean value. We chose this test over a t test to account for the non-normal distribution, and over a Wilcoxon test as we wished to test the difference in the means in a non-parametric way (rather than the rank sum).

o I understand a permutation test and that this test can, e.g., be a t-test that is permuted on the ECDF data differences. Was this a t-test? The term “one sided tailed test“ is incorrect. Usually, this is called a one-tailed xy-test. Sided and tailed means the same here.

Thank you for pointing this out; it was an oversight. Here we performed a permutation test. We have removed the term “one sided tailed test” from the text.

o And, why was a one-tailed test chosen and not a two-tailed (this information should be given in the missing hypothesis as mentioned above)? The one-tailed design hints that the authors at least expected a lower/higher development in one of the groups, other than a general difference (two-tailed).

As explained above, we did not use a t test for this study, it was an oversight and we appreciate the reviewer’s comment regarding this mistake.

o ECDF functions can also be analyzed with a Kolmogorov-Smirnov test; Was it a KS-test?

We are not completely clear about the question, but we hope that we have answered the reviewer's question in our previous answer.

Attachment

Submitted filename: Response To Reviewers.pdf

Decision Letter 1

Dragan Hrncic

17 Aug 2022

An open-source, low-cost voluntary running activity tracking tool for in vivo rodent studies

PONE-D-21-39460R1

Dear Dr. Deitzler,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Prof. Dr. Dragan Hrncic, MD, MSc, PhD 

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Update the repository.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Please upload the E10 checklist to the GitHub repo as stated. The last change in the repository was on " Oct 6, 2021".

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

Acceptance letter

Dragan Hrncic

30 Aug 2022

PONE-D-21-39460R1

An open-source, low-cost voluntary running activity tracking tool for in vivo rodent studies

Dear Dr. Deitzler:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Dragan Hrncic

Academic Editor

PLOS ONE

Associated Data


    Supplementary Materials

    Attachment

    Submitted filename: Response To Reviewers.pdf

    Data Availability Statement

    The list of materials, data, and all source code can be found at https://github.com/MaudeDavidLab/Motricity_Tracker [4].

