PLoS ONE. 2021 Feb 19;16(2):e0247321. doi: 10.1371/journal.pone.0247321

Improving dog training methods: Efficacy and efficiency of reward and mixed training methods

Ana Catarina Vieira de Castro 1,2,*, Ângelo Araújo 3, André Fonseca 4, I Anna S Olsson 1,2
Editor: Simon Clegg
PMCID: PMC7895348  PMID: 33606822

Abstract

Dogs play an important role in our society as companions and work partners, and proper training of these dogs is pivotal. For companion dogs, training helps prevent or manage behavioral problems (the most frequently cited reasons for relinquishment and euthanasia) and it promotes successful dog-human relationships, thus maximizing the benefits humans derive from bonding with dogs. For working dogs, training is crucial for them to successfully accomplish their jobs. Dog training methods range widely, from those using predominantly aversive stimuli (aversive methods), to those combining aversive and rewarding stimuli (mixed methods), to those focusing on the use of rewards (reward methods). The use of aversive stimuli in training is highly controversial, and several veterinary and animal protection organizations have recommended a ban on pinch collars, e-collars and other techniques that induce fear or pain in dogs, on the grounds that such methods compromise dog welfare. At the same time, training methods based on the use of rewards are claimed to be more humane and equally or more effective than aversive or mixed methods. This important discussion, however, has not always been based on solid scientific evidence. Although there is growing scientific evidence that training with aversive stimuli has a negative impact on dog welfare, the scientific literature on the efficacy and efficiency of the different methodologies is scarce and inconsistent. Hence, the goal of the current study is to investigate the efficacy and efficiency of different dog training methods. To that end, we will apply different dog training methods in a population of working dogs and evaluate the outcome after a period of training. The use of working dogs will allow for a rigorous experimental design and control, with randomization of treatments. Military (n = 10) and police (n = 20) dogs will be pseudo-randomly allocated to two groups. One group will be trained to perform a set of tasks (food refusal, interrupted recall, dumbbell retrieval and placing items in a basket) using reward methods, and the other group will be trained on the same tasks using mixed methods. Afterwards, the dogs will perform a standardized test in which they will be required to perform the trained behaviors. The reliability of the behaviors and the time taken to learn them will be assessed in order to evaluate the efficacy and efficiency, respectively, of the different training methods. This study will be performed in collaboration with the Portuguese Army and the Portuguese Public Security Police (PSP) and integrated with their dog training programs.

1. Introduction

The methods used to train dogs range broadly, with some using rewards and other non-invasive techniques (reward methods), others using mainly aversive stimuli (aversive methods) and still others using a combination of both (mixed methods). Strong claims have been made about the negative effects of using aversive stimuli in training on dog welfare and on the dog-owner bond. However, the scientific evidence for this has been limited, as most studies lack objective welfare measures, investigation of the entire range of aversive techniques and companion dog-focused research [1]. Recently, in the first large-scale quasi-experimental study of companion dog training (n = 92), Vieira de Castro et al (2020) [2] found that dogs trained with aversive stimuli displayed more stress behaviors during training, showed higher elevations in cortisol levels after training and, if trained exclusively with aversive methods, were more ‘pessimistic’ in a cognitive bias task than dogs trained with either reward or mixed methods. These findings strongly suggest that using aversive stimuli in training compromises companion dog welfare both within and outside the training context. In parallel, in a study aimed at assessing the relationship between training methods and the dog-owner bond, Vieira de Castro et al (2019) [3] found that secure attachment tended to be more consistent in dogs trained with reward methods, as revealed by behaviors displayed during a Strange Situation Procedure. These results suggest that the choice of training methods may also affect dog attachment to the owner.

In addition to the effects on welfare, efficacy and efficiency are also relevant aspects to consider when choosing training methods. Although claims have been made that reward and aversive/mixed methods are at least equally effective, the existing scientific literature is inconsistent. Some studies examined the efficacy (reliability of trained behaviors) of specific training methods but without directly comparing reward and aversive/mixed methods. Dale et al (2017) [4] found that dogs learned to avoid native birds after training with e-collars, an aversive technique, and that most dogs retained this learning one year later. On the other hand, Yin et al (2008) [5] demonstrated that dogs could be trained with a remote-controlled food reward dispenser not to bark excessively, jump and crowd around the door when people arrived. Also, three proof-of-concept studies have shown that clicker training (a reward technique) is effective for training dogs for scent detection tasks [6, 7] and service dog tasks [8]. Other studies have directly compared the efficacy of aversive and reward methods in both dogs and horses, with conflicting results. Among these, five studies suggest a higher efficacy of reward methods [9–13], whereas one points in the opposite direction [14] and three show no differences between methods [15–17]. To our knowledge, only one study has addressed the efficiency (speed of learning) of different methods, and it suggests that reward methods are more efficient than aversive methods [18].

Therefore, the aim of the current study is to evaluate the efficacy and efficiency of different dog training methods. This will be investigated in the context of working dogs, as working dogs allow a rigorous experimental design and control, with randomization of treatments. Namely, military and police dogs will be trained using either reward (Group Reward) or mixed methods (Group Mixed, dogs pseudo-randomly allocated to groups) to perform a set of behaviors. The efficiency of training methods will be evaluated by measuring the number of sessions required for the dogs to learn the tasks, and efficacy will be assessed using a standardized test in which dogs will be required to perform the trained behaviors.

Dogs play an important role in our society both as companion and working animals. Owning a dog for companionship has been shown to bring several physical and psychological benefits to humans [19, 20], and working dogs are of invaluable help when, for example, they fulfil tasks for disabled people or help in the detection of drugs or explosives. Dog training plays a pivotal role here. First, by preventing or managing dog behavioral problems (the most frequently cited reasons for relinquishment and euthanasia [21]), it helps to promote successful dog-human relationships and thus maximize the benefits humans derive from bonding with dogs [22]. Second, it is required for working dogs to successfully accomplish their jobs.

2. Material and methods

2.1. Ethics statement

The planned study includes an experimental training protocol in which working dogs are trained with either reward or mixed methods. The mixed methods will be based on the training method presently used for training these dogs outside the experimental protocol, thus no dog will be subjected to pain, suffering, distress or lasting harm as a result of being recruited for the study. Shock collars and pinch collars, which can cause physical harm, will not be used.

Dogs and handlers will be video recorded for further analysis of behavior. Individual handlers will be identifiable from the video footage. Material in which individuals can be identified will only be used by the research team for research purposes (i.e., to control for the training techniques and for data analysis).

All handlers will be briefed that the purpose of the study is “to investigate different training methods and measure the behavior of the dog-handler dyad”, and will sign an informed consent form stating that they agree to participate in the study and to be video recorded for research purposes. Each handler will be instructed about which tools and techniques are included in the treatment assigned to them, but will not be informed about the overall experimental design.

Applications for approval have been submitted to the Committee for Ethics and Responsible Conduct in Research (human subjects research) and to the Animal Welfare and Ethics Body (animal research) of i3S, University of Porto. The study will only start after approval has been obtained.

2.2. Subjects

Military (n = 10) and police dogs (n = 20), housed at the facilities of the Military Working Dog Platoon in the Portuguese Paratroopers Regiment (RPara) and the Portuguese Public Security Police (PSP) K9 unit, respectively, will be allocated to Group Reward (trained with reward methods) and Group Mixed (trained with mixed methods). All dogs have previous experience of training with mixed methods. A stratified randomization method [23] will be used to assign animals to the two groups. This method allows for balancing of subjects’ baseline characteristics (covariates) that may potentially affect the dependent variables under study. In the present study, the following covariates will be taken into account: dog sex, age, breed and previous training experience (obedience, odor detection, protection work). Randomization will be performed separately for each institution, meaning that five dogs from RPara and 10 dogs from the PSP K9 unit will be allocated to each group.
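As an illustration only, the sketch below shows how such a stratified, pseudo-random allocation could be implemented; the dog records, covariate bands and the alternating split within strata are hypothetical assumptions for the example, not part of the protocol.

```python
import random
from collections import defaultdict

# Hypothetical dog records; values are illustrative, not data from the study.
dogs = [
    {"id": "D01", "unit": "RPara", "sex": "M", "age_band": "2-4", "experience": "protection"},
    {"id": "D02", "unit": "RPara", "sex": "M", "age_band": "2-4", "experience": "protection"},
    {"id": "D03", "unit": "PSP", "sex": "F", "age_band": "5-7", "experience": "detection"},
    {"id": "D04", "unit": "PSP", "sex": "F", "age_band": "5-7", "experience": "detection"},
]

def stratified_allocation(dogs, seed=42):
    """Split each stratum (unit, sex, age band, experience) evenly between groups."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for dog in dogs:
        key = (dog["unit"], dog["sex"], dog["age_band"], dog["experience"])
        strata[key].append(dog)
    allocation = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, dog in enumerate(members):
            # Alternate assignments so each stratum is divided as evenly as possible
            allocation[dog["id"]] = "Reward" if i % 2 == 0 else "Mixed"
    return allocation

print(stratified_allocation(dogs))
```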

As part of their certification process as working dogs, all the animals had to perform and pass the obedience component of a BH test [24]. Although all dogs are naïve to the specific exercises included in the present study (food refusal, interrupted recall, dumbbell retrieval and placing items in basket; the detailed description of the exercises is presented below), two similar behaviors are trained as part of the training programs of PSP and RPara. Namely, dogs are trained to retrieve a motivator (e.g., a tug or bite pad), although not with the formality and precision that will be required in the ‘dumbbell retrieval’ exercise, and they are also usually trained to interrupt a send away (i.e., they are trained to run forward to a motivator and to interrupt the run when instructed). The ‘food refusal’ and ‘placing items in basket’ exercises are not part of the training programs and are thus new or nearly new for all the animals. Because previous training on similar behaviors may have carryover effects on the training planned for the study, each participating dog’s training history will be thoroughly evaluated at the beginning of the study and, if needed, this will also be included as a covariate in the randomization process.

2.3. Training methods

All dogs will be trained through associative learning (classical and operant conditioning) [25, 26]; however, the principles used for each group will differ. Whereas all four quadrants of operant conditioning will be allowed for Group Mixed (positive punishment, negative reinforcement, positive reinforcement and negative punishment), only positive reinforcement and negative punishment will be permitted for Group Reward. Regarding classical conditioning, the use of both conditioned reinforcers and conditioned punishers will be allowed for Group Mixed, but only conditioned reinforcers will be allowed for Group Reward. Table 1 displays the detailed definitions of all the conditioning procedures and includes some practical examples; a short sketch encoding which procedures are permitted for each group follows the table.

Table 1. Definition of the conditioning procedures used for training dogs.

Operant conditioning
• Positive punishment: Any unpleasant stimulus that is applied to the dog after the exhibition of an undesirable behavior. Examples include applying a leash jerk, yelling at the dog and leaning towards the dog in a threatening way.
• Negative reinforcement: Any unpleasant stimulus that is applied to the dog and that is stopped only after the dog exhibits the desired behavior. Examples include releasing leash pressure.
• Positive reinforcement: Any pleasant stimulus that is applied to the dog after the exhibition of a desirable behavior. Examples include food treats, playing tug-of-war, verbal praise, and petting the dog.
• Negative punishment: Any pleasant stimulus that is removed after the exhibition of an undesirable behavior. Examples include a time-out in a crate.
Classical conditioning
• Conditioned punisher: Any (initially) neutral stimulus that, after repeated pairing with an unpleasant stimulus, acquires its properties as a punisher. Examples include a verbal marker ‘no’ that was paired with a slap.
• Conditioned reinforcer: Any (initially) neutral stimulus that, after repeated pairing with a pleasant stimulus, acquires its properties as a reinforcer. Examples include a clicker (a device that makes a clicking sound) that was paired with food delivery.
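To make these group definitions concrete, the following minimal sketch encodes the conditioning procedures permitted for each group, as described in the text above and in Table 1; the function and constant names are illustrative only.

```python
# Conditioning procedures permitted per group, mirroring Section 2.3 and Table 1.
ALLOWED_PROCEDURES = {
    "Mixed": {
        "positive_punishment", "negative_reinforcement",
        "positive_reinforcement", "negative_punishment",
        "conditioned_punisher", "conditioned_reinforcer",
    },
    "Reward": {
        "positive_reinforcement", "negative_punishment",
        "conditioned_reinforcer",
    },
}

def is_allowed(group: str, procedure: str) -> bool:
    """Return True if the given conditioning procedure is permitted for the group."""
    return procedure in ALLOWED_PROCEDURES[group]

assert is_allowed("Mixed", "positive_punishment")
assert not is_allowed("Reward", "negative_reinforcement")
```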

As for training equipment, neither pinch collars nor e-collars will be allowed in the study, and choke chains will only be allowed for Group Mixed. Apart from this, the handlers will be free to decide which other equipment to use among leashes, flat collars and harnesses. The use of a clicker will also be optional, as it has been reported not to affect efficiency and efficacy as compared with the use of a verbal marker or food alone [27–29]. In order to ensure that the instructions regarding the training procedures and tools permitted for each group are being followed, checkpoints will take place on the fifth and tenth days of training for each dyad, when the research team will review the video recordings of the training sessions.

Some flexibility in choosing training equipment and procedures will thus be allowed (as opposed to having the handlers follow previously defined, detailed training protocols). The reason for this decision is that the study aims to reflect a real-life dog training situation, in which different handlers use different approaches (within the same training method, reward or mixed) and, especially, in which the individual dog and its natural tendencies and behaviors usually dictate the training pathway.

2.4. Data collection

2.4.1. Training

Dogs will be trained by their handlers to perform four exercises: ‘food refusal’, ‘interrupted recall’, ‘dumbbell retrieval’ and ‘placing items in basket’. The exercises were chosen to resemble real working dog tasks while not interfering with the dogs’ daily working duties. Prior to the start of training, the handlers will be instructed on the exercises they will train the dogs to perform and on the tools and techniques they are allowed to use during training (as explained in detail in the previous section). The handlers will be free to decide whether to train the exercises in parallel or in sequence, as well as the order in which to train the different exercises. Training sessions will be conducted two days per week, with a gap between training days no longer than three days. Each training session will have a maximum duration of 10 minutes, and up to six training sessions can be conducted per day. Within each training day, a break of at least 30 minutes between training sessions will be required.

Training for each exercise will end when the dog reaches the learning criterion (i.e., adequately performs the behavior as determined by the handler) or after a maximum of 45 sessions. Information regarding the number of training sessions, their duration and the behaviors being trained will be annotated by each handler in a notebook (specifically designed for the study) for each training day. In addition, all training sessions will be video recorded.
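As a sketch of how the handlers' notebook entries could later be checked against these rules (maximum gap between training days, session length, sessions per day and the 45-session cap per exercise), the code below assumes a hypothetical log format; the field names and entries are illustrative, not part of the protocol.

```python
from datetime import date

# Hypothetical log: one entry per session, transcribed from a handler's notebook.
log = [
    {"day": date(2021, 3, 1), "exercise": "food_refusal", "minutes": 8},
    {"day": date(2021, 3, 1), "exercise": "dumbbell_retrieval", "minutes": 10},
    {"day": date(2021, 3, 4), "exercise": "food_refusal", "minutes": 7},
]

MAX_SESSION_MINUTES = 10
MAX_SESSIONS_PER_DAY = 6
MAX_GAP_DAYS = 3
MAX_SESSIONS_PER_EXERCISE = 45

def check_log(log):
    """Flag entries that violate the scheduling rules described in the text."""
    problems = []
    days = sorted({entry["day"] for entry in log})
    for previous, current in zip(days, days[1:]):
        if (current - previous).days > MAX_GAP_DAYS:
            problems.append(f"Gap longer than {MAX_GAP_DAYS} days before {current}")
    for day in days:
        if sum(1 for entry in log if entry["day"] == day) > MAX_SESSIONS_PER_DAY:
            problems.append(f"More than {MAX_SESSIONS_PER_DAY} sessions on {day}")
    for entry in log:
        if entry["minutes"] > MAX_SESSION_MINUTES:
            problems.append(f"Session over {MAX_SESSION_MINUTES} min on {entry['day']}")
    for exercise in {entry["exercise"] for entry in log}:
        if sum(1 for entry in log if entry["exercise"] == exercise) > MAX_SESSIONS_PER_EXERCISE:
            problems.append(f"More than {MAX_SESSIONS_PER_EXERCISE} sessions for {exercise}")
    return problems

print(check_log(log))  # an empty list means no rule violations were found
```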

2.4.2. Evaluating performance

The efficiency of the different training methods will be evaluated through the number of training sessions necessary to reach the learning criterion (as determined by the handlers), and the efficacy will be assessed through a standardized test where the dogs will be asked to perform the trained behaviors. The test will be conducted in a fenced enclosure and will include the following exercises:

1. Food refusal: The handler asks the dog to ‘stay’ (the position in which the dog is left can be a sit, a down or a stand, according to the handler’s choice), walks 10 meters away to a pre-defined/marked location within the dog’s field of vision, and stops with his/her back to the dog. Afterwards, a helper comes near the dog and throws two pieces of food next to the dog’s front legs, one to the right side and one to the left. The handler can give the verbal cue for the dog not to eat before starting the exercise or while the helper is approaching.

Cues: ‘Sit’/’Down’/‘Stand’, ‘Stay’, ‘Don’t eat’

2. Interrupted recall: The handler asks the dog to ‘stay’ (the position in which the dog is left can be a sit, a down or a stand, according to the handler’s choice), walks 30 meters away to a pre-defined/marked location, turns to face the dog and recalls the dog, instructing it to stop after roughly half the distance has been covered (the position in which the dog stops can be a sit, a down or a stand, according to the handler’s choice).

Cues: ‘Sit’/’Down’/‘Stand’, ‘Stay’, ‘Come’, ‘Stop’

3. Dumbbell retrieval: With the dog sitting at his/her side, the handler throws the dumbbell to a distance of roughly 10 meters (marked on the floor as a guide) and then instructs the dog to retrieve it. The dog should move towards the dumbbell, pick it up, bring it to the handler, sit in front of him/her and only release it on cue.

Cues: ‘Sit’, ‘Get it’, ‘Out’

4. Placing items in basket: A basket will be placed in the field and three items will be placed by a helper in pre-defined positions on the floor around the basket. The handler will then instruct the dog to place the items in the basket.

Cues: ‘Place’ (only one cue is allowed for the entire exercise; the handler is not allowed to give additional cues after each item is retrieved)

5. Surprise exercise: The dog will have to perform a dumbbell retrieval, with a helper throwing two pieces of food onto the floor next to the dog before the exercise starts. This exercise will be included in order to test for training generalization.

Cues: ‘Sit’, ‘Don’t eat’, ‘Get it’, ‘Out’

The starting points for the exercises will be the same for all dogs and will be marked on the floor with spray. Only verbal cues will be allowed during the test. The aforementioned words/expressions are, however, purely indicative—each handler will be free to choose his or her own cues. During the test, the dogs will not wear any collar or leash, and no treats, toys or punishments will be allowed. Handlers will only be allowed to use social reinforcement (praise) between exercises. Additionally, in order to ensure that all dogs perform the test with similar motivation levels, dogs will be fed 12 hours before the tests and no play or physical exercise will be allowed during this period.

The designs of Exercises 1, 2 and 3 were inspired by the Federation Cynologique International (FCI) dog sports of IGP, Obedience and Mondioring [24, 30, 31]. Exercise 4 is not part of any recognized dog sport, but its core behavior (retrieving) is.

The test will be performed twice, on the day after the learning criterion is achieved for all behaviors and 6 months later, to assess short- and long-term efficacy. No formal training will be performed between the two evaluations for ‘Food refusal’ and ‘Placing items in basket’, whereas ‘Interrupted recall’ and ‘Dumbbell retrieval’ will be trained once a month for maintenance. This will make it possible to evaluate the impact of maintenance training on long-term efficacy. The tests will be recorded using two video cameras positioned to cover the entire field.

A pilot study using two dog-handler dyads that will not participate in the main study will be performed in order to test and, if needed, refine the methodology.

2.5. Data analysis

Two different approaches will be used to analyze the performance of the dogs in the test. Three international experts on working dog training will be invited to assess dog performance in situ on the test days. The experts, who will be blind to the experimental groups and to the goals of the study, will be instructed to use a qualitative scoring system, according to which the dog performance for each exercise should be classified as ‘insufficient’, ‘sufficient’ or ‘outstanding’ (see S1 Annex for full details). Moreover, two researchers blind to the experimental groups and to the goals of the study will analyze the videos of the tests using a quantitative scoring system, following which the dog performance for each exercise will receive a score ranging from 0 to 10 (see S2 Annex for full details). Inter-observer reliability will be calculated for each exercise. The quantitative scoring system was developed based on FCI rules and guidelines for Obedience, Mondioring and IGP trials [24, 30, 31].
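The protocol does not fix a particular reliability statistic, so as an illustration the sketch below computes two common options on hypothetical ratings: a Pearson correlation between the two video raters' quantitative scores and Cohen's kappa between two of the in situ experts' qualitative classifications. The scores are invented for the example.

```python
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

# Hypothetical per-exercise scores from the two blinded video raters (0-10 scale).
rater_a = [8, 6, 9, 4, 7, 10, 5, 6]
rater_b = [7, 6, 9, 5, 7, 9, 5, 7]

# Agreement on the quantitative scale; an intraclass correlation would be an alternative.
r, p = pearsonr(rater_a, rater_b)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# Hypothetical qualitative classifications from two of the in situ experts.
expert_1 = ["sufficient", "outstanding", "insufficient", "sufficient", "outstanding"]
expert_2 = ["sufficient", "sufficient", "insufficient", "sufficient", "outstanding"]
print("Cohen's kappa =", round(cohen_kappa_score(expert_1, expert_2), 2))
```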

The video recordings of the training sessions and the tests will also be used to assess dog welfare through the analysis of stress behaviors, as in Vieira de Castro et al (2020) [2]. They will also allow for the analysis of handler behavior and of other aspects of training, such as the frequency, type and timing of the stimuli applied. This will be used to generate a list of all the conditioning procedures actually used by each handler during training.

2.5.1. Statistical analysis

Data will be analyzed using a Generalized Linear Mixed Model (GLMM), to account for repeated measures and to investigate the effects of multiple subject variables. Subject ID will be included as the repeated measure. Age (years), sex (M/F), breed and training experience will be included as covariates and Training Method (Mixed vs Reward) and Training Unit (PSP, RPara) as factors. One model will be run for each response variable: 1) number of training sessions necessary to reach the learning criterion, 2) qualitative score obtained in the test and 3) quantitative score obtained in the test.
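A minimal sketch of one such model is shown below, assuming the data are arranged in long format (one row per dog and exercise). The synthetic data, column names and the use of statsmodels' linear mixed model as a stand-in for the planned GLMM are assumptions for illustration; breed and training experience would enter the formula in the same way, and a count outcome such as sessions-to-criterion might instead call for a Poisson GLMM (e.g., in lme4).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic, purely illustrative data: 30 dogs x 5 exercises, long format.
rng = np.random.default_rng(0)
n_dogs, n_exercises = 30, 5
df = pd.DataFrame({
    "dog_id": np.repeat(np.arange(n_dogs), n_exercises),
    "method": np.repeat(rng.choice(["Reward", "Mixed"], n_dogs), n_exercises),
    "unit": np.repeat(rng.choice(["PSP", "RPara"], n_dogs), n_exercises),
    "age": np.repeat(rng.integers(2, 8, n_dogs), n_exercises),
    "sex": np.repeat(rng.choice(["M", "F"], n_dogs), n_exercises),
    "score": rng.normal(7, 1.5, n_dogs * n_exercises).round(1),  # quantitative test score
})

# A random intercept per dog accounts for repeated measures on the same subject.
model = smf.mixedlm("score ~ method + unit + age + sex", data=df, groups=df["dog_id"])
result = model.fit()
print(result.summary())
```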

Supporting information

S1 Annex. Qualitative scoring system for the test for efficacy evaluation.

(DOCX)

S2 Annex. Quantitative scoring system for the test for efficacy evaluation.

(DOCX)

Data Availability

All relevant data from this study will be made available upon study completion.

Funding Statement

The authors received no specific funding for this work.

References

  • 1. Guilherme-Fernandes J, Olsson IAS, Vieira de Castro AC. Do aversive-based training methods actually compromise dog welfare?: A literature review. Appl Anim Behav Sci. 2017; 196: 1–12. doi: 10.1016/j.applanim.2017.07.001
  • 2. Vieira de Castro AC, Fuchs D, Pastur S, Munhoz-Morello G, de Sousa L, Olsson IAS. Does training method matter? Evidence for the negative impact of aversive-based methods on companion dog welfare. PLoS ONE. 2020; 15(12): e0225023. doi: 10.1371/journal.pone.0225023
  • 3. Vieira de Castro AC, Barrett J, de Sousa L, Olsson IAS. Carrots versus sticks: The relationship between training methods and dog-owner bond. Appl Anim Behav Sci. 2019; 219: 104831. doi: 10.1016/j.applanim.2019.104831
  • 4. Dale AR, Podlesnik CA, Elliffe D. Evaluation of an aversion-based program designed to reduce predation of native birds by dogs: An analysis of training records for 1156 dogs. Appl Anim Behav Sci. 2017; 191: 59–66. doi: 10.1016/j.applanim.2017.03.003
  • 5. Yin S, Fernandez EJ, Pagan S, Richardson SL, Snyder G. Efficacy of a remote-controlled, positive-reinforcement, dog-training system for modifying problem behaviors exhibited when people arrive at the door. Appl Anim Behav Sci. 2008; 113: 123–138. doi: 10.1016/j.applanim.2007.11.001
  • 6. Willis CM, Church SM, Guest CM, Cook WA, McCarthy N, Bransbury AJ, et al. Olfactory detection of human bladder cancer by dogs: proof of principle study. BMJ. 2004; 329(7468): 712. doi: 10.1136/bmj.329.7468.712
  • 7. Cornu J, Cancel-Tassin G, Ondet V, Girarde C, Cussenot O. Olfactory detection of prostate cancer by dogs sniffing urine: a step forward in early diagnosis. Eur Urol. 2011; 59(2): 197–201. doi: 10.1016/j.eururo.2010.10.006
  • 8. D’Onofrio J. Measuring the efficiency of clicker training for service dogs. Thesis in Special Education, The Pennsylvania State University; 2015.
  • 9. Blackwell EJ, Bolster C, Richards G, Loftus BA, Casey RA. The use of electronic collars for training domestic dogs: estimated prevalence, reasons and risk factors for use, and owner perceived success as compared to other training methods. BMC Vet Res. 2012; 8: 93. doi: 10.1186/1746-6148-8-93
  • 10. China L, Mills DS, Cooper JJ. Efficacy of Dog Training With and Without Remote Electronic Collars vs. a Focus on Positive Reinforcement. Front Vet Sci. 2020; 7: 508. doi: 10.3389/fvets.2020.00508
  • 11. Haverbeke A, Laporte B, Depiereux E, Giffroy JM, Diederich C. Training methods of military dog handlers and their effects on the team’s performances. Appl Anim Behav Sci. 2008; 113: 110–122. doi: 10.1016/j.applanim.2007.11.010
  • 12. Haverbeke A, Messaoudi F, Depiereux E, Stevens M, Giffroy JM, Diederich C. Efficiency of working dogs undergoing a new Human Familiarization and Training Program. J Vet Behav. 2010; 5: 112–119. doi: 10.1016/j.jveb.2009.08.008
  • 13. Hiby EF, Rooney NJ, Bradshaw JWS. Dog training methods: their use, effectiveness and interaction with behaviour and welfare. Anim Welf. 2004; 13: 63–69.
  • 14. Salgirli Y, Schalke E, Hackbarth H. Comparison of learning effects and stress between 3 different training methods (electronic training collar, pinch collar and quitting signal) in Belgian Malinois Police Dogs. Rev Méd Vét. 2012; 163: 530–535.
  • 15. Cooper JJ, Cracknell N, Hardiman J, Wright H, Mills D. The welfare consequences and efficacy of training pet dogs with remote electronic training collars in comparison to reward based training. PLoS ONE. 2014; 9: e102722. doi: 10.1371/journal.pone.0102722
  • 16. Sankey C, Richard-Yris M, Henry S, Fureix C, Nassur F, Hausberger M. Reinforcement as a mediator of the perception of humans by horses (Equus caballus). Anim Cogn. 2010; 13: 753–764. doi: 10.1007/s10071-010-0326-9
  • 17. Visser EK, Van Dierendonck M, Ellis AD, Rijksen C, Van Reenen. A comparison of sympathetic and conventional training methods on responses to initial horse training. Vet J. 2009; 181: 48–52. doi: 10.1016/j.tvjl.2009.03.009
  • 18. Hendriken P, Elmgreen K, Ladewig J. Trailer-loading of horses: Is there a difference between positive and negative reinforcement concerning effectiveness and stress-related signs? J Vet Behav. 2011; 6: 261–266. doi: 10.1016/j.jveb.2011.02.007
  • 19. Barker SB, Wolen AR. The benefits of human-companion animal interaction: a review. J Vet Med Educ. 2008; 35(4): 487–495. doi: 10.3138/jvme.35.4.487
  • 20. Crawford EK, Worsham NL, Swinehart ER. Benefits derived from companion animals, and the use of the term “attachment”. Anthrozoos. 2006; 19(2): 98–112. doi: 10.2752/089279306785593757
  • 21. Reisner I. The learning dog: A discussion of training methods. In: Serpell J, editor. The Domestic Dog: Its Evolution, Behavior and Interactions with People. 2nd Edition. Cambridge University Press; 2017. pp. 211–226.
  • 22. Payne E, Bennett PC, McGreevy PD. Current perspectives on attachment and bonding in the dog–human dyad. Psychol Res Behav Manag. 2015; 8: 71–79. doi: 10.2147/PRBM.S74972
  • 23. Suresh K. An overview of randomization techniques: An unbiased assessment of outcome in clinical research. J Hum Reprod Sci. 2011; 4(1): 8–11. doi: 10.4103/0974-1208.82352
  • 24. International Utility Dogs Regulations of the Federation Cynologique International (FCI). file://fs04.i3s.up.pt/Users$/ana.castro/Downloads/UTI-REG-IGP-en%20(3).pdf (Accessed 30/10/2020)
  • 25. Mazur J. Learning and Behavior. 6th Edition. Pearson/Prentice Hall, Upper Saddle River, NJ; 2006.
  • 26. Pryor K. Don’t shoot the dog!: The new art of teaching and training. New York: Bantam Books; 1999.
  • 27. Dorey NR, Blandina A, Udell MAR. Clicker training does not enhance learning in mixed-breed shelter puppies (Canis familiaris). J Vet Behav. 2020; 39: 57–63. doi: 10.1016/j.jveb.2020.07.005
  • 28. Chiandetti C, Avella S, Fongaro E, Cerri F. Can clicker training facilitate conditioning in dogs? Appl Anim Behav Sci. 2016; 184: 109–116. doi: 10.1016/j.applanim.2016.08.006
  • 29. Williams JL, Friend TH, Nevill CH, Archer G. The efficacy of a secondary reinforcer (clicker) during acquisition and extinction of an operant task in horses. Appl Anim Behav Sci. 2004; 88: 331–341. doi: 10.1016/j.applanim.2004.03.008
  • 30. Federation Cynologique International (FCI) Rules and Guidelines for Obedience Trials. http://www.fci.be/en/Obedience-46.html (Accessed 16/04/2020)
  • 31. Federation Cynologique International (FCI) International Rules of Mondioring Competition. http://fci.be/en/Mondioring-3500.html (Accessed 30/10/2020)

Decision Letter 0

Simon Clegg

7 Sep 2020

PONE-D-20-23944

Improving dog training methods: Efficacy and efficiency of reward and mixed training methods

PLOS ONE

Dear Dr. Ana Catarina Vieira de Castro,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

It was reviewed by two experts in the field and they have suggested some modifications be made prior to acceptance.

If you could write a response to reviewers, that will help to expedite revision when you re-submit.

Please submit your revised manuscript by Oct 17 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

I wish you the best of luck with your revisions.

Hope you are keeping safe and well in these difficult times.

Kind regards,

Simon Clegg, PhD

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following financial disclosure:

"The author(s) have applied to funding for this work and are currently waiting for decision."

At this time, please address the following queries:

  1. Please clarify the sources of funding (financial or material support) for your study. List the grants or organizations that supported your study, including funding received from your institution.

  2. State what role the funders took in the study. If the funders had no role in your study, please state: “The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

  3. If any authors received a salary from any of your funders, please state which authors and which funders.

  4. If you did not receive any funding for this study, please state: “The authors received no specific funding for this work.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #1: Yes

Reviewer #2: Partly

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #1: No

Reviewer #2: No

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics.

You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear authors,

Thank you for submitting your protocol to PLOS ONE. Studies of high level of evidence comparing reward-based training methods with aversive-training methods are missing in the literature, your research will hopefully fill an important gap in the field. Overall, your manuscript is well presented, there is a strong rationale for the proposed study and the methodology is feasible and clear. I am looking forward to reading your findings in the future. Please see some of the suggestions I have written below:

Ethics statement: ethical approval of the study protocol is desirable.

Data availability: where will the data of the study be made available? https://journals.plos.org/plosone/s/data-availability (there are some recommendations on this page)

Abstract: you might prefer "randomized" instead of "pseudorandomized" to avoid confusion by readers without a scientific background?

Line 57: what do you mean by "other non-invasive techniques (reward methods)"? It sounds a bit redundant as compared to line 56 ('exclusively rewards') but I may have missed something.

Line 63: in light of the potential importance of this large study, you could write its sample size, e.g. "of companion dog training (n=XXX)"

Line 75: just remove a space here "aversive/mixed methods"

Line 114: in which context are they going to be video recorded? During the whole training process? Analysis of behaviour of both dog and handler?

Line 116: will this material remain confidential or be shared (e.g. supplementary material) upon study publication? To clarify, you could write "will only be used by the research team for research purposes (i.e. data analysis, XXX)".

Lines 123-125: same comment from above about ethical approval

Lines 128-132; 204-210: besides randomization of individuals and control of the variables 'years of training', age and gender, are you going to use any practical measure to reduce group bias? If dogs are likely to differ in performance (e.g. due to age difference, genetics, training experience), a test of baseline performance (e.g. time required to learn a simple new task) could be added.

Lines 159-160: are all dogs naive to these tasks?

Lines 161-162: who will control the number of sessions and duration of sessions? Self-report, diary, video records? How long is each session expected to last? Are you going to standardize the duration of the sessions?

Lines 165-166: at this task, wouldn't be important to tell the handlers to cover "all" the zones of this field as opposed to just letting them walk "randomly"? Is the food going to be spilt on the same areas for all dogs?

Lines 167-168: are they going to use only verbal commands or visual cues are also allowed? Standardization is important.

Lines 169-174: same as above regarding visual and verbal cues.

Line 174: is the food going to be placed on the same area for all the dogs? Any target to help the handlers throw the dumbbell on a similar location?

Line 183: the techniques allowed for each group should be included in the protocol. It lacks details on how exactly the group reward differs from the group mixed - this is a very important piece of information.

Lines 188-190: who is going to score the dogs? Ideally, more than one person should rate the performances and inter-rater reliability should be calculated (especially for the 'general impression' score, as it is more subjective).

Reviewer #2: This protocol covers an interesting and important applied topic: the efficiency and effectiveness of different dog training approaches. However, many methodological details are missing, making it impossible to assess the soundness of the proposed methods.

There are several different reward-based training methods and aversive training methods and, among a given category, they differ in their effectiveness. For example, research has shown that, among reward training methods, diverse methods differ in their efficiency (Fugazza and Miklósi 2014) and effectiveness (Fugazza and Miklósi 2015). I am not aware of studies comparing different aversive methods, but it is logical to assume that, for example, diverse aversive stimuli may differ in the intensity of their effects, at least, and potentially in other aspects.

It is therefore crucial that the authors carefully describe the methods that will be used for training, rather than only classifying them as reward-based or aversive.

A detailed description of the protocol applied in the training sessions with the two methods would help enormously in this sense.

The authors propose that the efficiency of the training methods will be assessed by measuring the number of sessions needed to reach a criterion that is determined by the trainers. However, it is fundamental to know what would be the length (N. of trials? Time?) of a training session. Training sessions of different durations have been shown to produce different outcomes (Demant et al. 2011).

The duration or number of trials of the sessions should be somehow standardized.

It is likely that both the dogs and the handlers of this study will have extensive experience with mixed methods, but little or no experience with reward methods. I think that this may affect the results. How do the authors plan to take it into account?

Since the evaluation of the dogs’ performance in the test is somewhat subjective, I warmly recommend the observer that will score the dogs’ performance to be blind with regard to the treatment received by the dog, to avoid a biased judgment.

References:

Demant H., Ladewig J., Balsby T.J.S., Dabelsteen J. (2011) The effect of frequency and duration of training sessions on acquisition and long-term memory in dogs. Applied Animal Behaviour Science, 133, 228-234.

Fugazza C. and Miklósi A. (2015) Social learning in dog training: the effectiveness of the Do as I do method compared to shaping/clicker training. Applied Animal Behaviour Science, 171, 146-151.

Fugazza, C. and Miklósi Á. (2014) Should old dog trainers learn new tricks? The efficiency of the Do as I do method and shaping / clicker training method to train dogs. Applied Animal Behaviour Science, 153, 53-61.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Feb 19;16(2):e0247321. doi: 10.1371/journal.pone.0247321.r002

Author response to Decision Letter 0


8 Nov 2020

RESPONSE TO REVIEWERS

We appreciate all the constructive criticism provided by the two anonymous reviewers. In our opinion, the manuscript as it currently stands has improved substantially. In what follows, we present detailed responses to all the comments.

Editor

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Some changes were made and we believe the new version of the manuscript fully complies with PLOS ONE’s requirements.

2. Thank you for stating the following financial disclosure:

"The author(s) have applied to funding for this work and are currently waiting for decision."

At this time, please address the following queries:

a. Please clarify the sources of funding (financial or material support) for your study. List the grants or organizations that supported your study, including funding received from your institution.

b. State what role the funders took in the study. If the funders had no role in your study, please state: “The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

c. If any authors received a salary from any of your funders, please state which authors and which funders.

d. If you did not receive any funding for this study, please state: “The authors received no specific funding for this work.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

Please refer to Cover Letter. We have currently no funding approved for the study and hence the statement for financial disclosure at this time should be “The authors received no specific funding for this work.”

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

We plan on submitting data as supporting information.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Done. Thank you for pointing that out!

Reviewer #1:

1. Ethics statement: ethical approval of the study protocol is desirable.

We have now submitted our protocol to both the Committee for Ethics and Responsible Conduct in Research (human subjects research) and from the Animal Welfare and Ethics Body (animal research) of i3S, University of Porto.

2. Data availability: where will the data of the study be made available? https://journals.plos.org/plosone/s/data-availability (there are some recommendations on this page)

We plan on submitting data as supporting information.

3. Abstract: you might prefer "randomized" instead of "pseudorandomized" to avoid confusion by readers without a scientific background?

We have now realized that in the first version of the manuscript we did not explain what we meant by pseudo-randomization and, of course, that could be confusing for the readers (especially for those with no scientific background). In the current version of the manuscript, in the section “Subjects”, the following information can be found:

“A stratified randomization method [23] will be used to assign animals to the two groups. This method allows for balancing in terms of subjects’ baseline characteristics (covariates) that may potentially affect the dependent variables under study. In the present study the following covariates will be taken into account: dog sex, age, breed and previous training experience (obedience, odor detection, protection work).”

In the meantime, we will stick to the use of the word “pseudo-randomly” in the Abstract. In our opinion, using “randomly” can be deceptive, leading the readers to think that we will perform a true random allocation of dogs to the two groups.

4. Line 57: what do you mean by "other non-invasive techniques (reward methods)"? It sounds a bit redundant as compared to line 56 ('exclusively rewards') but I may have missed something.

With "other non-invasive techniques we meant extinction and negative punishment, other operant conditioning techniques (besides positive reinforcement) that are used within the scope of reward-based methods in dog training. However, we recognize that the way it was phrased could be confusing and we have removed the word “exclusively”. Now it reads:

“The methods used to train dogs range broadly with some using rewards and other non-invasive techniques (reward methods), others using mainly aversive stimuli (aversive methods) and still others using a combination of both (mixed methods).”

5. Line 63: in light of the potential importance of this large study, you could write its sample size, e.g. "of companion dog training (n=XXX)"

We followed the reviewer’s suggestion and changed the text to:

“Recently, in the first large-scale quasi-experimental study of companion dog training (n=92), Vieira de Castro et al (accepted for publication) [2] found that dogs trained with aversive stimuli displayed more stress behaviors during training…”

6. Line 75: just remove a space here "aversive/mixed methods"

Done. Thank you!

7. Line 114: in which context are they going to be video recorded? During the whole training process? Analysis of behaviour of both dog and handler?

We now acknowledge that, in the first version of the protocol, this information was not made clear. The information can now be found in the following places throughout the protocol:

Line 113 Dogs and handlers will be video recorded for further analysis of behavior. Individual handlers will be identifiable from the video footage. Material in which individuals can be identified will only be used by the research team for research purposes (i.e., to control for the training techniques and for data analysis).

Line 197 all training sessions will be video recorded.

Line 247 The tests will be recorded

Line 258 two researchers blind to the experimental groups and to the goals of the study will analyze the videos of the tests using a quantitative scoring system

Line 263 The video recordings of the training sessions and the tests will also be used to assess dog welfare through the analysis of stress behaviors as in Vieira de Castro et al (accepted for publication). These will also allow for the analysis of handler behavior and other aspects of training such as the frequency, type and timing of the stimuli applied.

8. Line 116: will this material remain confidential or be shared (e.g. supplementary material) upon study publication? To clarify, you could write "will only be used by the research team for research purposes (i.e. data analysis, XXX)".

We will share untreated quantitative data but not the videos. In order to make this information clear, the text now reads: “Material in which individuals can be identified will only be used by the research team for research purposes (i.e., to control for the training techniques and for data analysis)”.

8. Lines 123-125: same comment from above about ethical approval

See comment above.

9. Lines 128-132; 204-210: besides randomization of individuals and control of the variables 'years of training', age and gender, are you going to use any practical measure to reduce group bias? If dogs are likely to differ in performance (e.g. due to age difference, genetics, training experience), a test of baseline performance (e.g. time required to learn a simple new task) could be added.

Because, to our knowledge, there is no validated test for evaluating baseline performance, we will not use a test of this sort in order to allocate dogs to groups. However, as can now be read in line 150, “as part of their certification process as working dogs, all the animals had to perform and pass the obedience component of a BH test [24]”. This, in our view, already ensures that all the animals have some equivalence in their baseline performance.

10. Lines 159-160: are all dogs naive to these tasks?

This is crucial information and it was missing in the previous version of the manuscript. We have added the following paragraph: “Despite all dogs being naïve to the specific exercises included in the present study (food refusal, interrupted recall, dumbbell retrieval and placing items in basket – the detailed description of the exercises is presented below), two similar behaviors are trained as part of the training programs of PSP and RPara. Namely, dogs are trained to retrieve a motivator (e.g., a tug or bite pad), although not to the formality and precision that is going to be required in the ‘dumbbell retrieve’ exercise, and they are also usually trained to interrupt a send away (i.e., they are trained to run forward to a motivator and interrupt the running when instructed). The ‘food refusal’ and ‘place items in the basket’ exercises are not part of the training programs and are thus new or near to completely new for all the animals. Because previous training on similar behaviors may have carryover effects on the training planned for the study, at the time of the beginning of the study, each participating dog’s training history will be thoroughly evaluated and, if needed, this will also be included as a covariate in the randomization process.”

11. Lines 161-162: who will control the number of sessions and duration of sessions? Self-report, diary, video records? How long is each session expected to last? Are you going to standardize the duration of the sessions?

We have added the following information to the text:

Line 190: “Training sessions will be conducted two days per week, with a gap between training days no longer than three days. Each training session will have a maximum duration of 10 minutes and up to six training sessions can be conducted per day. Within each training day, a break of at least 30 minutes between training sessions will be required.”

Line 196: “Information regarding the number of training sessions, their duration and the behaviors being trained will be annotated by each handler in a notebook (specifically designed for the study) for each training day. In addition, all training sessions will be video recorded.”

12. Lines 165-166: at this task, wouldn't be important to tell the handlers to cover "all" the zones of this field as opposed to just letting them walk "randomly"?

Is the food going to be spilt on the same areas for all dogs?

We decided to change this exercise to the food refusal exercise of the dog sport of Mondioring. This way, we have both a more standardized exercise and a stronger basis for the scoring.

13. Lines 167-168: are they going to use only verbal commands, or are visual cues also allowed? Standardization is important.

Only verbal cues will be allowed. This is now clearly stated in line 234: “Only verbal cues will be allowed during the test.”

14. Lines 169-174: same as above regarding visual and verbal cues.

See response to previous comment.

15. Line 174: is the food going to be placed on the same area for all the dogs?

See response to comment 12 above.

16. Is there any target to help the handlers throw the dumbbell to a similar location?

Now line 219 reads: “With the dog sitting at his/her side, the handler throws the dumbbell to a distance of roughly 10 meters (marked on the floor to help) and then instructs the dog to retrieve it”.

18. Line 183: the techniques allowed for each group should be included in the protocol. It lacks details on how exactly the group reward differs from the group mixed - this is a very important piece of information.

We added a section entitled “Training methods” where this information is now presented. Thank you for pointing this out, this is definitely crucial information.

19. Lines 188-190: who is going to score the dogs? Ideally, more than one person should rate the performances and inter-rater reliability should be calculated (especially for the 'general impression' score, as it is more subjective).

This information can now be found in section “Data analysis”:

“Two different approaches will be used to analyze the performance of the dogs in the test. Three international experts on working dog training will be invited to assess dog performance in situ on the test days. The experts, who will be blind to the experimental groups and to the goals of the study, will be instructed to use a qualitative scoring system, according to which the dog performance for each exercise should be classified as ‘insufficient’, ‘sufficient’ or ‘outstanding’ (see Annex 1 for full details). Moreover, two researchers blind to the experimental groups and to the goals of the study will analyze the videos of the tests using a quantitative scoring system, following which the dog performance for each exercise will receive a score ranging from 0 to 10 (see Annex 2 for full details). Inter-observer reliability will be calculated for each exercise. The quantitative scoring system was developed based on FCI rules and guidelines for Obedience, Mondioring and IGP trials [24, 30, 31].”
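Purely as an illustration of the planned inter-observer reliability calculation for the quantitative 0–10 scores, one common choice is a two-way random-effects intraclass correlation, ICC(2,1). The sketch below shows how such a coefficient could be computed; the protocol does not commit to this particular statistic, and the example numbers are invented for the demonstration.

import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` is an (n_subjects x n_raters) array of quantitative scores."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-rater
    ss_error = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_error / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two blind raters scoring five dogs on one exercise (illustrative numbers only).
print(round(icc_2_1([[8, 7], [5, 6], [9, 9], [3, 4], [7, 7]]), 3))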

Reviewer #2:

This protocol covers an interesting and important applied topic: the efficiency and effectiveness of different dog training approaches. However, many methodological details are missing, making it impossible to assess the soundness of the proposed methods.

1. There are several different reward-based training methods and aversive training methods and, within a given category, they differ in their effectiveness. For example, research has shown that, among reward-based training methods, different methods differ in their efficiency (Fugazza and Miklósi 2014) and effectiveness (Fugazza and Miklósi 2015). I am not aware of studies comparing different aversive methods, but it is logical to assume that diverse aversive stimuli may differ at least in the intensity of their effects, and potentially in other aspects. It is therefore crucial that the authors carefully describe the methods that will be used for training, rather than only classifying them as reward-based or aversive. A detailed description of the protocol applied in the training sessions with the two methods would help enormously in this sense.

Please refer to response to comment #18 of Reviewer #1. We acknowledge that information on training methods was missing in the previous version of the manuscript and we have now added a section entitled “Training methods”, where we detail which procedures and tools can be used for each group. We will not use standardized protocols for training, and the reasons for this choice are now explained in lines 175-180: “Some flexibility for choosing training equipment and procedures will thus be allowed (as opposed to having the handlers follow previously defined and detailed training protocols). The reason for this decision is that this study aims to reflect a real-life situation of dog training, where different handlers use different approaches (within the same training method – reward or mixed) and, especially, where the individual dog and its natural tendencies and behaviors usually dictate the training pathway”. However, we believe that the information we have included in the “Training methods” section addresses your concerns.

2. The authors propose that the efficiency of the training methods will be assessed by measuring the number of sessions needed to reach a criterion determined by the trainers. However, it is fundamental to know the length of a training session (number of trials? time?). Training sessions of different durations have been shown to produce different outcomes (Demant et al. 2011). The duration or number of trials of the sessions should be standardized in some way.

Please refer to response to comment #11 of Reviewer #1.

3. It is likely that both the dogs and the handlers of this study will have extensive experience with mixed methods, but little or no experience with reward methods. I think that this may affect the results. How do the authors plan to take it into account?

Reward methods have been used consistently in both institutions (PSP and RPara) for roughly 10 years. We therefore do not foresee any issue regarding the handlers’ experience with either method.

4. Since the evaluation of the dogs’ performance in the test is somewhat subjective, I warmly recommend that the observer who scores the dogs’ performance be blind to the treatment received by each dog, to avoid biased judgment.

Please refer to response to comment #19 of Reviewer #1.

Attachment

Submitted filename: Response to reviewers.docx

Decision Letter 1

Simon Clegg

3 Dec 2020

PONE-D-20-23944R1

Improving dog training methods: Efficacy and efficiency of reward and mixed training methods

PLOS ONE

Dear Dr. Ana Catarina Vieira de Castro,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Many thanks for submitting your manuscript to PLOS One

Your manuscript was reviewed by two experts in the field, and they have recommended some minor modifications be made prior to acceptance

I therefore invite you to make these changes and resubmit. If you could write a response to reviewers, that will greatly aid revision upon re-submission

I wish you the best of luck with your revisions

Hope you are keeping safe and well in these difficult times

Thanks

Simon

==============================

Please submit your revised manuscript by Jan 17 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Simon Clegg, PhD

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics.

You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear authors,

Thank you for considering my suggestions and making several changes to the manuscript. The quality of your work improved substantially, particularly the methodology, which is much clearer and detailed. I am happy to recommend your report protocol for publication and I wish you all the best conducting the study.

Best wishes.

Reviewer #2: The authors have now provided an improved version of the manuscript and I believe that this is now publishable, after minor revision.

Since flexibility is allowed for the trainers to choose which actual rewards and punishments to use, I strongly recommend that in the data collection the authors include a list of the rewards and punishments actually used by the trainers, because these may play an important role in determining the outcome of the training.

Apart from this, which I believe is very important, I only have a few minor suggestions:

Line 62: a parenthesis is missing from the reference.

Lines 151-152: This sentence describing the subjects' previous experience may better fit above (e.g. in line 138). Then you can continue with the more detailed description of similar behaviours previously learnt by the dogs.

Lines 256-257: What will be the criteria for scoring performance? E.g., speed of execution? Latency to execute? Detailed criteria may help reduce the subjectivity of this judgment.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Feb 19;16(2):e0247321. doi: 10.1371/journal.pone.0247321.r004

Author response to Decision Letter 1


3 Feb 2021

Once again, we would like to thank the reviewers for the constructive criticism. The current version of the manuscript addresses the last round of comments from Reviewer #2. We address each of them below.

“Since flexibility is allowed for the trainers to choose which actual rewards and punishments to use, I strongly recommend that in the data collection the authors include a list of the rewards and punishments actually used by the trainers, because these may play an important role in determining the outcome of the training.”

We agree with the reviewer. We thought this was made clear in the previous version of the manuscript when we wrote, in lines 265-268, “The video recordings of the training sessions (…) will also allow for the analysis of handler behavior and other aspects of training such as the frequency, type and timing of the stimuli applied”. However, to leave no doubt, we have added the following sentence at line 297: “This will be used to generate a list of all the conditioning procedures actually used by each handler during training”.

“Line 62: a parenthesis is missing from the reference.”

Thank you for noticing that! It is now corrected.

“Lines 151-152: This sentence describing the subjects' previous experience may better fit above (e.g. in line 138). Then you can continue with the more detailed description of similar behaviours previously learnt by the dogs.”

We followed the reviewer’s suggestion and moved this sentence to the beginning of the paragraph (Line 139).

“Lines 256-257: What will be the criteria for scoring performance? E.g., speed of execution? Latency to execute? Detailed criteria may help reduce the subjectivity of this judgment.”

We understand the reviewer’s concern here. In a first draft of our qualitative scoring system, we had more detailed criteria for each exercise. However, after carefully discussing it with the members of the author team who work closely and in practice with this type of scoring system (Ângelo Araújo and André Fonseca), the end result was the system we proposed in the previous version of the manuscript (Annex S1). Given that the international experts on working dog training who will evaluate dog performance for our study are also familiar with (and implement in practice) this type of system, we feel this is the best approach for our research.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Simon Clegg

5 Feb 2021

Improving dog training methods: Efficacy and efficiency of reward and mixed training methods

PONE-D-20-23944R2

Dear Dr. Ana Catarina Vieira de Castro,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Simon Clegg, PhD

Academic Editor

PLOS ONE

Additional Editor Comments:

Many thanks for resubmitting your manuscript to PLOS One

As you have addressed all the comments and the manuscript reads well, I have recommended it for publication

You should hear from the Editorial Office shortly.

It was a pleasure working with you and I wish you the best of luck for your future research

Hope you are keeping safe and well in these difficult times

Thanks

Simon

Acceptance letter

Simon Clegg

10 Feb 2021

PONE-D-20-23944R2

Improving dog training methods: Efficacy and efficiency of reward and mixed training methods

Dear Dr. Vieira de Castro:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Simon Clegg

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Annex. Qualitative scoring system for the test for efficacy evaluation.

    (DOCX)

    S2 Annex. Quantitative scoring system for the test for efficacy evaluation.

    (DOCX)

    Attachment

    Submitted filename: Response to reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    All relevant data from this study will be made available upon study completion.

