PLOS ONE. 2020 Apr 1;15(4):e0230799. doi: 10.1371/journal.pone.0230799

The SIESTA (SEAAV Integrated evaluation sedation tool for anaesthesia) project: Initial development of a multifactorial sedation assessment tool for dogs

Fernando Martinez-Taboada 1,*,#, Jose Ignacio Redondo 2,#
Editor: Simon Russell Clegg
PMCID: PMC7112187  PMID: 32236148

Abstract

Objective

The aim of the study was to develop a multifactorial tool for assessment of sedation in dogs.

Methods

Following a modified Delphi method, thirty-eight veterinary anaesthetists were contacted and asked to describe the following levels of awareness: no sedation, light, moderate and profound sedation, and excitation. The answers were summarized in descriptors for each level. A questionnaire was created with all the variables obtained from the descriptors. The questionnaire was returned to the panel of anaesthetists to be used before and after real sedations, in conjunction with the previous 5-point categorical scale. The data obtained were analysed using the classification-tree and random-forest methods.

Results

Twenty-three anaesthetists (60%) replied with descriptions. The descriptors and study variables were grouped into categories: state of mind, posture, movements, response to stimuli, behaviour, response to restraint, muscle tone, physiological data, facial expression, eye position, eyelids, pupils, vocalization and feasibility to perform the intended procedure. The anaesthetists returned 205 completed questionnaires. The levels of awareness reported by the anaesthetists were: no sedation in 92 cases, mild in 26, moderate in 37 and profound in 50. The classification tree detected 6 main classifying variables: change in posture, response to restraint, head elevation, response to toe pinching, response to name, and movements.

Using the following variables: change in posture, response to restraint, head elevation, response to name, movements, posture, response to toe pinching, demeanour, righting reflex and response to handclap, the random forest correctly classified 100% of awake, 62% of mild, 70% of moderate and 86% of profound sedation cases.

Discussion and conclusion

The questionnaire and methods developed here correctly classified the level of sedation in most cases. Further studies are needed to evaluate the validity of this tool in the clinical and research settings.

Introduction

Sedation is a state characterized by central depression accompanied by drowsiness and some degree of centrally induced relaxation [1]. The term is very broad: it is regularly used to refer to anything from the calm and stress-free state required just to tolerate hospitalization, to a much more profound depression of the central nervous system with immobility and no response to painful stimuli. Moreover, in recent years, procedural sedation has become a popular alternative to general anaesthesia for minor surgical and diagnostic procedures.

The assessment of the degree of sedation experienced by animals is fundamental both in clinical practice and in research. This assessment can enhance the accurate titration of anaesthetic agents to reduce the incidence of excessive drug-induced complications [2]. It can also be a fundamental tool in the development of new sedative drugs, drug combinations and routes of administration.

Clinicians and researchers require tools to measure the effectiveness of sedation in individual animals in relation to the purpose of such sedation. These tools are the sedation scales or scoring systems [3]. Different types of sedation scales have been previously published: numerical [4], simple descriptive [5] and multifactorial [6]. To the best of the authors’ knowledge, there is no consensus in the veterinary anaesthesia community regarding the best sedation tools to be used clinically or in research. Additionally, only one study has been published attempting to preliminarily validate a canine sedation scale [7].

The aim of this work was to collect and classify veterinary anaesthetists’ opinions on canine sedation characteristics and behaviours, and to develop a tool to assess the level of sedation.

Material and methods

A modified Delphi method was used to obtain the opinions of anaesthetists working in clinical practice [8]. The study began in February 2016 and ended in April 2016 and it was approved by the Universidad CEU Valencia Human Research Ethics Authority (no 2012–813). For the last phase, the Animal Research Ethics Committee advised that specific ethics approval for this observational study was not required as the study design did not cause deviation from standard practice.

Experts/Participants

Thirty-eight full-time veterinary anaesthetists were contacted to explore their interest in being part of the expert panel. They all spoke Spanish as their mother tongue, were members of the SEAAV (Sociedad Española de Anestesia y Analgesia Veterinaria, the Spanish Society of Veterinary Anaesthesia and Analgesia), and had postgraduate training in veterinary anaesthesia and analgesia (diploma of the European College of Veterinary Anaesthesia and Analgesia [ECVAA], diploma of the American College of Veterinary Anaesthesia and Analgesia [ACVAA], completion of an ECVAA- or ACVAA-approved residency program, PhD, MSc, etc.).

The experts were contacted by email to explain the aims of the study and provide some general information about it. Those who replied agreeing to participate and providing consent (‘participants’ for the rest of the text) were included in a mailing list, which was used in all the subsequent communication phases of the study.

Consultation process

The consultation process was subdivided into three phases or rounds.

Phase 1: Following the Delphi method, a group of broad open questions were formulated. The participants had to describe, in their own words, the following categories: no sedation, light sedation, moderate sedation, profound sedation, and excitation.

The answers were visually analysed in the form of word clouds to graphically assess the most frequent or relevant expressions. The answers were summarised and structured in descriptors that were shared with the group of participants for their feedback.

Phase 2: A list of dichotomous variables was developed from the descriptors obtained in phase 1, and a preliminary questionnaire was built. The list of descriptors and the preliminary questionnaire were circulated among the participants to obtain their opinions regarding total or partial repetition of variables, the clarity of the variables to be assessed and the overall usability of the questionnaire.

Phase 3: With the information obtained from the previous phases, a definitive questionnaire was developed as a fillable portable document format (PDF) file (Adobe Acrobat X Pro, Adobe Systems, San Jose, CA, USA). This questionnaire (S1 Appendix) included 10 sections:

  • date and assessor details;
  • animal signalment and procedure;
  • sedation protocol (including drugs, doses and routes of administration);
  • variables related to the dog’s state of mind and posture (alertness, mental and emotional state, head position, posture, righting reflex and general behaviour prior to interaction with the dog);
  • variables related to mobility (e.g. muscle tone, ataxia and hypermetric movements, amongst others);
  • miscellaneous variables obtainable by passive observation of the animal (eye position, pupil size, visible third eyelid, presence of nystagmus, excessive salivation, vocalization, respiratory rate and panting);
  • variables related to the dog’s response to different types of stimuli (e.g. menace response, response to a handclap, response to being called by name, response to holding one of the front paws);
  • variables related to the feasibility of performing the intended procedure or step (hair clipping, venous catheter placement, arterial catheter placement, radiographic studies, preparation of the surgical field);
  • the five-point simple descriptive sedation scale (no sedation, light, moderate and profound sedation, and excitation) used in phase 1; and
  • a free-text box for any other observations.

During the phase 2 feedback, it was made clear that some variables could not be assessed in every sedation status. For this reason, the variables related to behaviour, and some of those related to movements, were optional variables that appeared with a tick box next to them. All the other variables required a compulsory answer and appeared with a circle next to them.
During this third phase, the participants were asked to use the questionnaire on clinical cases within their practice. They had to assess at least five cases before the administration of any sedative drugs and another five cases after the administration of sedative drugs (the assessments could be done on the same dogs before and after sedation, but they could also be done on different animals). The entire questionnaire needed to be completed, including the subjective five-point simple descriptive sedation scale. Partially completed forms could not be submitted.

The participants had a week to reply for phase 1, the same period for phase 2, and a two-week period for phase 3. The communication between participants and coordinators was always in Spanish. All the data provided by the participants was compiled, summarized and deidentified by the coordinators before being returned for their assessment and feedback.

The responses to the descriptors in the questionnaires were analysed in conjunction with the score provided by the participants using the five-point simple descriptive scale.

Data analysis

The data obtained from the questionnaires was analysed with the statistical program R 3.5.3 (R Core Team, 2019; The R Foundation for Statistical Computing, Vienna, Austria; http://www.R-project.org) by fitting a classification tree with the function rpart() of the rpart package [9]. The minimum number of observations per node to attempt a partitioning was set at 15. The classification tree was plotted via the function rpart.plot() of the rpart.plot package [10].
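The node partitioning that rpart() performs can be illustrated with a minimal sketch (shown here in Python rather than R, with invented variable names and toy data, not the study's actual implementation): at each node, the algorithm picks the candidate dichotomous variable whose yes/no split produces the lowest weighted Gini impurity in the two resulting child nodes.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def best_split(rows, labels, variables):
    """Return the dichotomous variable whose yes/no split gives the lowest
    weighted Gini impurity across the two child nodes, with that impurity."""
    n = len(rows)
    best = None
    for var in variables:
        yes = [lab for row, lab in zip(rows, labels) if row[var]]
        no = [lab for row, lab in zip(rows, labels) if not row[var]]
        if not yes or not no:
            continue  # the variable does not actually partition the data
        impurity = (len(yes) / n) * gini(yes) + (len(no) / n) * gini(no)
        if best is None or impurity < best[1]:
            best = (var, impurity)
    return best

# Toy data with invented variable names: head elevation separates the
# two classes perfectly, so it is chosen with a child impurity of 0.0
rows = [
    {"head_elevation": True,  "vocalising": True},
    {"head_elevation": True,  "vocalising": False},
    {"head_elevation": False, "vocalising": True},
    {"head_elevation": False, "vocalising": False},
]
labels = ["no sedation", "no sedation", "profound", "profound"]
print(best_split(rows, labels, ["head_elevation", "vocalising"]))  # ('head_elevation', 0.0)
```

rpart() repeats this greedy selection recursively on each child node until a stopping rule, such as the minimum of 15 observations per node set in this study, is reached.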

The data was also analysed using the randomForest package [11,12]. This package randomly selected 66% of the results as a learning sample and built a classification tree with this data; in this case, each node was required to have a minimum of 5 observations. The rest of the data (the remaining 34% of the results) was considered out-of-bag data and was used to assess the sensitivity and specificity of the classification tree. This process of generating a classification tree and checking its classification performance was repeated 5000 times. The weight of each category or sedation status (no sedation, light, moderate and profound sedation, and excitation) was adjusted according to the relative frequency of the analysed cases, so that the prior probability of any given dog being scored in any of the categories was 20%.
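The learning-sample/out-of-bag split described above can be sketched as follows (a Python illustration of the description in the text, not the randomForest package's actual bootstrap routine; the function name and seed are invented):

```python
import random

def in_bag_out_of_bag(n_cases, frac=0.66, seed=42):
    """Split case indices into a learning sample (~66% of cases) and the
    remaining out-of-bag cases, used to test the tree grown on the sample."""
    rng = random.Random(seed)
    in_bag = set(rng.sample(range(n_cases), int(frac * n_cases)))
    out_of_bag = [i for i in range(n_cases) if i not in in_bag]
    return sorted(in_bag), out_of_bag

# 205 completed questionnaires, as in this study
learn, oob = in_bag_out_of_bag(205)
print(len(learn), len(oob))  # 135 70
```

Averaging a tree's performance over many such repetitions (5000 in this study) yields the out-of-bag error estimate reported in the results.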

Data related to the animal signalment, procedure and sedation protocols was analysed by descriptive statistics and is presented in the form of mean ± SD, number of observations and/or percentage.

Results

A total of 23 of the 38 contacted participants (60%) consented to participate in the study.

The descriptors obtained from the open questions asked in phase 1 were classified into the following 15 groups: state of mind, spontaneous posture, mobility, response to stimuli, response to visual presence, behaviour, response to restraint, muscle tone, physiological data, facial expression, eye position, eyelid position, pupils (presence of mydriasis), vocalization and feasibility to perform the intended procedure.

The final version of the questionnaire (S1 Appendix) included 106 dichotomous variables.

The participants returned 205 completed forms. A total of 94 male and 101 female dogs, 5.2 ± 3.8 years old and weighing 18.4 ± 13.0 kg, were assessed with the questionnaire. Only 10 dogs (4 male and 6 female) were assessed twice, before and after sedation. The rest of the dogs were assessed only once, either fully awake (before any sedation was administered) or after being sedated. Their health status was classified as ASA I in 92 dogs (44.9%), ASA II in 83 cases (40.5%) and ASA III in 30 (14.6%). No animals were classified ASA IV or V. The reasons for sedation were: pre-anaesthetic sedation (68%), diagnostic procedures (28%) and others (4%).

The level of sedation, assessed by the participants using the 5-point scale, was no sedation in 92 cases (44.9%), mild sedation in 26 cases (12.7%), moderate sedation in 37 cases (18%) and profound sedation in 50 (24.4%). There were no cases classified as “excitation”. The drugs used are shown in Table 1.

Table 1.

Sedatives                            Analgesics
Drug name          No. cases    %    Drug name        No. cases    %
Acepromazine              24  11.7   Methadone               72  35.1
Medetomidine              24  11.7   Morphine                12   5.9
Dexmedetomidine           66  32.2   Pethidine                7   3.4
Midazolam                  7   3.4   Fentanyl                 7   3.4
Diazepam                   1   0.5   Butorphanol             14   6.8
                                     Buprenorphine            2   1.0
                                     NSAIDs                   9   4.4

Drugs used for sedation by the participants, with the number of cases and percentage for each. Please note that in multiple cases the reported combination contained more than one sedative and one analgesic.

The classification tree obtained (Fig 1) shows in each node the classifying descriptor or variable that allows the dichotomization of the cases. According to this classification tree, six predictors outperformed all the other explanatory variables: change in posture, head elevation, response to restraint, response to name, movements and posture.

Fig 1. Classification tree algorithm.


The classification tree represents the different selection criteria or ‘decision nodes’ used to predict the most likely classification of the total number of cases (represented at the root of the tree as 100%). As the data is classified into subsets, the percentage value represents the probability of a case belonging to that data subset.

Based on the variable importance plot (Fig 2), the random forest determined that the most important variables for a successful classification were: change in posture, response to restraint, head elevation, response to name, movements, posture, response to toe pinching, demeanour, righting reflex and response to handclap. Based on the mean decrease in the Gini coefficient, the most important variables were: head elevation, response to name, change in posture, response to restraint, posture and movements (Fig 2).

Fig 2. Variable importance plot (mean decrease accuracy and mean decrease Gini).


This is a fundamental outcome of the random forest: it shows, for each variable, how important it is in classifying the data. The mean decrease accuracy plot expresses how much accuracy the model loses by excluding each variable; the more the accuracy suffers, the more important the variable is for successful classification. The variables are presented in descending order of importance. The mean decrease in the Gini coefficient is a measure of how much each variable contributes to the homogeneity of the nodes and leaves in the resulting random forest. The higher the value of the mean decrease accuracy or mean decrease Gini score, the higher the importance of the variable in the model.
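The mean-decrease-accuracy idea can be demonstrated with a toy sketch in Python (an illustration only; the trivial "model", variable names and data below are invented, not taken from the study): permute one variable's values and measure how far the classifier's accuracy falls.

```python
import random

# Toy data, invented for illustration: one variable predicts the label perfectly
head_elevation = [True] * 5 + [False] * 5
status = ["no sedation"] * 5 + ["profound"] * 5

def classify(elevated):
    """Trivial fitted 'model': an elevated head means the dog is not sedated."""
    return "no sedation" if elevated else "profound"

def accuracy(feature, labels):
    return sum(classify(x) == y for x, y in zip(feature, labels)) / len(labels)

baseline = accuracy(head_elevation, status)  # 1.0 on this toy data

# Permute the variable many times and average the accuracy on the permuted
# data; the drop from baseline is the variable's "mean decrease accuracy"
rng = random.Random(0)
shuffled = []
for _ in range(1000):
    perm = head_elevation[:]
    rng.shuffle(perm)
    shuffled.append(accuracy(perm, status))
mean_decrease = baseline - sum(shuffled) / len(shuffled)
print(round(baseline, 2), round(mean_decrease, 2))
```

A variable that carries real information, as here, produces a large decrease; an irrelevant variable would leave the accuracy almost unchanged.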

The final random forest model accurately predicted the different levels of sedation (Table 2). The out-of-bag estimate of the error rate was 14.15%.

Table 2.

                                    Predicted scores
Observed scores    No sedation   Mild   Moderate   Profound   Agreement   Class error
No sedation                 92      0          0          0      100.0%          0.0%
Mild                         6     16          4          0       61.5%         38.5%
Moderate                     0      5         26          6       70.3%         29.7%
Profound                     0      0          7         43       86.0%         14.0%

Confusion or error matrix showing the observed and predicted (based on the random forest model) values, agreement (proportion of data subsets predicted correctly by the model) and the class or classification error (proportion of data subsets predicted wrongly by the model).
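The agreement and class error columns of Table 2 follow directly from the matrix counts, and can be recomputed with a short sketch (in Python, for illustration):

```python
# Confusion matrix from Table 2: rows are observed scores, columns are
# predicted scores, both in the order given by `levels`
levels = ["no sedation", "mild", "moderate", "profound"]
matrix = {
    "no sedation": [92, 0, 0, 0],
    "mild":        [6, 16, 4, 0],
    "moderate":    [0, 5, 26, 6],
    "profound":    [0, 0, 7, 43],
}

for i, level in enumerate(levels):
    row = matrix[level]
    agreement = 100 * row[i] / sum(row)   # proportion predicted correctly
    class_error = 100 - agreement         # proportion predicted wrongly
    print(f"{level}: agreement {agreement:.1f}%, class error {class_error:.1f}%")
```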

Discussion

The Delphi technique was a successful method for obtaining a collective view in an area with very limited published data. This technique has proven adequate for building consensus in areas of knowledge with little or no evidence that are highly dependent on individual opinion [13].

The Delphi technique requires a panel of ‘experts’, and in this research 23 participants formed that panel. The definition of ‘expert’ was deliberately ambiguous. The only prerequisites for being ‘experts’ were that the participants worked full time in veterinary anaesthesia and held some postgraduate qualification in veterinary anaesthesia. These requirements were set at that level for two reasons: 1. easy communication between the panel of participants and the team of coordinators (e.g. using the same terminology and jargon); and 2. the participants being able to readily use the questionnaire once developed. Theoretically, the panel size can vary from 4 to 3000, and the final number is just an empirical and pragmatic decision [14]. The coordinators and authors of this research discussed the panel size, and a larger number of participants with a smaller workload per individual was considered preferable to maintain a high level of engagement from the participants. The use of electronic communication and very strict deadlines for each step of the process allowed the completion of the Delphi technique in this study in just under six weeks, compared to several months in similar studies [15].

Classification and regression trees are techniques to determine relations between multiple independent variables and a dependent one. The term classification tree is used when the technique is applied to discrete or categorical variables (as in this case), and regression tree when the variables are continuous. Both techniques were described by Breiman et al. [16] as an algorithm that divides the data into subgroups so as to minimise the heterogeneity (called node impurity) within each subgroup. The technique also ranks the variables by importance [17].

The classification tree determined that the most relevant variables for the classification of sedation were: change in posture, head elevation, response to restraint, response to name, movements and spontaneous posture. The random forest determined that the most useful variables were: change in posture, response to restraint, head elevation, response to name, movements, posture, response to toe pinching, demeanour, righting reflex and response to handclap. Some of these domains coincide with those previously mentioned in other composite sedation scales in the literature. Young et al. assessed the level of sedation after medetomidine by analysing six parameters: jaw tone, placement on side, response to noise, attitude, posture and pedal reflex [18]. These variables are mostly included in the ones obtained in this study, with the exception of jaw tone. The assessment of jaw and tongue tone also appears in another two composite sedation scales in dogs [19, 20]. This variable was not mentioned in any of the phases of the Delphi method reported here: it was not offered as a descriptor in phase 1 in answer to the open questions, nor was it raised during the phase 2 feedback, and for that reason it was not included in the questionnaire. It is possible that the participants favoured other methods for the assessment of this physical characteristic, such as the animal's overall muscle tone. It is also possible that the participants strongly associated the evaluation of jaw tone with anaesthetic depth, and this might have prevented them from relating it to sedation. This explanation seems unlikely, however, considering that they included the position of the eye and the palpebral reflex as adequate variables to assess sedation, both of which are also widely used to assess depth of anaesthesia.
The previously published scales also consider the position of the eye and the presence of the palpebral reflex [19,20] but, although these variables were included in our questionnaire, neither the classification tree nor the random forest found them determinant in the assessment of sedation. Wagner et al. [7] also observed that they could shorten their scale by removing three domains (palpebral reflex, jaw tone and resistance when laid into lateral recumbency) while maintaining consistency similar to the full scale and taking a third of the time to complete. In the research reported here, the righting reflex was not significant in the classification tree, but it was a significant domain in the random forest analysis. Additionally, head elevation was found to be fundamental in the classification of the different levels of sedation. Head elevation is a variable that has never been mentioned in previously published sedation scales in small animals, although it is used extensively in horses [21].

The random forest analysis correctly classified all the non-sedated animals and 43 out of 50 (86%) profoundly sedated animals, when compared with the participants' assessment using the simple descriptive scale. The success rate decreases for mild (61.5%) and moderate (70.3%) sedation. The degree of central depression obtained after the administration of sedatives is a continuous variable with several physical signs; as a consequence, its assessment is very subjective. In this research, the simple descriptive scale was deliberately left open to the participants' interpretation. No descriptors for the different categories were provided, so some degree of overlap between different participants and categories is possible. In other words, a dog that was mildly sedated to one observer might have been moderately sedated to a different participant. This might explain some of the classification errors committed by the random forest analysis.

Composite rating scales are multi-item scales that allow detailed evaluation of complex or multidimensional issues. Several items referring to the same idea or issue tend to increase internal consistency and, consequently, should improve the reproducibility and validity of the scale [22]. The variables used in the present study were always assessed and scored with a dichotomous (yes/no) answer. This type of answer reduces subjectivity in the assessment of any variable. For example, Grint et al. [20] developed and used a composite scale for assessing sedation levels in dogs that was later validated by Wagner et al. [7]. This scale allows each domain to be assessed in 3 or 4 possible gradings (associating a score from 0 to 2 or 3, depending on the grading). These simple descriptive scales within the different domains allow extensive assessor subjectivity in the interpretation of each subsection of the composite scale.

It is important to comment on several limitations of this study. All the participants knew when the dogs were non-sedated, as they performed the assessment before administering drugs to the animals. However, it is impossible to predict the effect of any drug combination in a particular individual, so the responses for the sedated dogs should be accurate. Inter-observer and intra-observer variability over time were not studied, as the assessments were performed in situ by the participants. The time to complete the questionnaire was not assessed, but it was presumably long. The original intention was to identify the important variables for the classification of sedation and then to develop a shorter questionnaire; for that reason, the original inventory was long and comprehensive. Finally, the Delphi method requires a group of committed and flexible facilitators/coordinators who can classify, compile and condense the group's ideas to streamline them and avoid repetition. There is no doubt the coordinator group could have introduced some degree of bias during this process, but the Delphi method performed in this study involved three rounds and, after each of them, the responses were aggregated and shared for feedback from the participants. It is fairly unlikely that the coordinators could have introduced any bias into the research without it being noticed by the participants.

Conclusion

The Delphi method and the random forest analysis identified ten domains that successfully classified most of the levels of sedation in the dog. Further studies are now needed to assess the validity and reliability of a short version of the questionnaire. Additionally, a broad study using a large number of dogs and veterinarians, with and without specific anaesthesia knowledge, would also allow assessment of the tool's agreement and bias.

Supporting information

S1 Appendix. Questionnaire.

Final version of the questionnaire distributed to the collaborators.

(PDF)

S1 Data

(PDF)

S2 Data

(PDF)

S3 Data

(PDF)

Acknowledgments

Special thanks to all the participants for their generous and selfless support.

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Tranquilli WJ and Grimm KA. Introduction: Use, Definitions, History, Concepts, Classification, and Considerations for Anesthesia and Analgesia. In: Grimm KA, Lamont LA, Tranquilli WJ, Greene SA and Robertson SA, editors. Veterinary Anesthesia and Analgesia 5th Ed. Chichester, UK: Wiley Blackwell Publishing; 2015. P. 3–10. [Google Scholar]
  • 2.Sessler CN. Sedation scales in the ICU. Chest. 2004; 126(6): 1727–30. 10.1378/chest.126.6.1727 [DOI] [PubMed] [Google Scholar]
  • 3.Newton T, Pop I and Duvall E. Sedation scales and measures–A literature review. SAAD Digest. 2013; 99: 88–99. [PubMed] [Google Scholar]
  • 4.Hofmeister EH, Chandler MJ and Read MR. Effects of acepromazine, hydromorphone, or an acepromazine-hydromorphone combination on the degree of sedation in clinically normal dogs. J Am Vet Med Assoc. 2010; 237(10): 1155–9. 10.2460/javma.237.10.1155 [DOI] [PubMed] [Google Scholar]
  • 5.Gómez-Villamandos RJ, Domínguez JM, Redondo JI, Martín EM, Granados MM, Ruiz I, et al. Comparison of romifidine and medetomidine pre-medication in propofol-isoflurane anaesthetised dogs. J Vet Med A Physiol Pathol Clin Med. 2006; 53(9): 471–5. 10.1111/j.1439-0442.2006.00859.x [DOI] [PubMed] [Google Scholar]
  • 6.Granholm M, McKusick BC, Westerholm FC, et al. Evaluation of the clinical efficacy and safety of intramuscular and intravenous doses of dexmedetomidine and medetomidine in dogs and their reversal with atipamezole. Vet Rec. 2007; 160(26): 891–7. 10.1136/vr.160.26.891 [DOI] [PubMed] [Google Scholar]
  • 7.Wagner MC, Hecker KG and Pang DSJ. Sedation levels in dogs: a validation study. BMC Veterinary Research. 2017; 13: 110–118. 10.1186/s12917-017-1027-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Varela-Ruiz M, Díaz-Bravo L and García-Durán R. Descripción y usos del método de Delphi en investigación del área de la salud. Inv Ed Med. 2012; 1(2): 90–95. [Google Scholar]
  • 9.Therneau T, Atkinson B and Ripley B. rpart: Recursive partitioning and regression trees. 2019. R package version 4.1–15. Available from: https://cran.r-project.org/web/packages/rpart/index.html [Last accessed 10th June 2019]
  • 10.Milborrow S. rpart.plot: Plot 'rpart' models: an enhanced version of 'plot.rpart'. 2019. R package version 3.0.7. Available from: https://CRAN.R-project.org/package=rpart.plot [Last accessed 10th June 2019]
  • 11.Liaw A and Wiener M. Breiman and Cutler’s Random Forest for classification and regression. 2016. R package version 4.6–14. Available from: https://cran.r-project.org/web/packages/randomForest/index.html [Last accessed 10th June 2019]
  • 12.Liaw A and Wiener M. Classification and regression by randomForest. R News. 2002; 2(3): 18–22. [Google Scholar]
  • 13.Thangaratinam S and Redman CWE. The Delphi technique. Obstet Gynecol. 2005; 7: 120–125. [Google Scholar]
  • 14.Hasson F, Keeney S and McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000; 32: 1008–1015. [PubMed] [Google Scholar]
  • 15.Merola I and Mills DS. Behavioural signs of pain in cats: An expert consensus. PLoS ONE. 2016; 11(2): e0150040 10.1371/journal.pone.0150040 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Breiman L, Friedman JH, Olshen RA and Stone CJ. Classification and regression trees. Boca Raton, FL, USA: Chapman & Hall/CRC; 1984. [Google Scholar]
  • 17.García Pérez A. Cuadernos de estadística aplicada: Área de la salud. Madrid: Universidad Nacional de Educación a Distancia; 2014. P 298. [Google Scholar]
  • 18.Young LE, Brearley JC, Richards DLS, Bartram DH and Jones RS. Medetomidine as a premedicant in dogs and its reversal by atipamezole. J Small Anim Pract. 1990; 31: 554–559. [Google Scholar]
  • 19.Kuusela E, Raekallio M, Anttila M, Falck I, Mölsä S and Vainio O. Clinical effect and pharmacokinetics of medetomidine and its enantiomers in dogs. J Vet Pharmacol Therap. 2000; 23: 15–20. [DOI] [PubMed] [Google Scholar]
  • 20.Grint NJ, Burford J and Dugdale AHA. Does pethidine affect the cardiovascular and sedative effects of dexmedetomidine in dogs? J Small Anim Pract. 2009; 50: 62–66. 10.1111/j.1748-5827.2008.00670.x [DOI] [PubMed] [Google Scholar]
  • 21.Ringer SK, Portier KG, Fourel I and Bettschart-Wolfensberger R. Development of a xylazine constant rate infusion with or without butorphanol for standing sedation in horses. Vet Anaesth Analg. 2012; 39: 1–11. 10.1111/j.1467-2995.2011.00653.x [DOI] [PubMed] [Google Scholar]
  • 22.Martinez-Martin P. Composite rating scales. J Neurol Sci. 2010; 289: 7–11. 10.1016/j.jns.2009.08.013 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Francesco Staffieri

27 Sep 2019

PONE-D-19-17102

The SiESTa (SEaav SedaTion) Scale project: development of a multifactorial composite sedation inventory for dogs

PLOS ONE

Dear Dr Fernando Martinez-Taboada,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Dear Authors, both reviewers expressed some concerns regarding your manuscript. In particular the second reviewer had the major concerns and some of them are very critical. I invite you to try to respond accordingly to all the concerns of the reviewer and amend the manuscript accordingly

We would appreciate receiving your revised manuscript by Nov 11 2019 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Francesco Staffieri

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information.

3. Please amend your current ethics statement to include the full name of the ethics committee that approved your specific study.

For additional information about PLOS ONE submissions requirements for animal ethics, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-animal-research  

Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

Additional Editor Comments (if provided):

Dear Authors, both reviewers expressed some concerns regarding your manuscript. In particular, the second reviewer raised major concerns, some of which are very critical. I invite you to respond to all of the reviewers' concerns and amend the manuscript accordingly.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The acronym used is not consistent with the words: I suggest finding a single word for each letter, for example SIESTA dog: SEAAV Integrated Evaluation Sedation Tool for Anaesthesia in dogs.

Abstract

Line 45: please specify how many anaesthetists sent the questionnaires back. It is not clear how the clinicians involved used the questionnaires.

Line 56: there is not enough information in the abstract to assess whether the questionnaire used was successful in evaluating the real sedation status of the animals.

M&M

Line 113: please specify if the answer to this first phase was anonymous or not.

Results

Line 184: please correct the capital letter of “State” in “state”.

Line 191: please specify why, out of 205 cases, there were 94 males and 101 females: 10 dogs are missing.

Table 1: I suggest putting the total number for each class of drugs in order to show what is explained in the legend: more than 1 drug was used on the same animal. In any case, it is not clear why the authors want to give this information, because it is not possible to understand from the table which kind of sedative protocol/combination was used; I suggest dividing the cases into opioid only, sedative agents only, and combinations (opioids + sedatives, or sedatives + tranquillisers, etc.). It would be interesting to associate the drugs used with the level of sedation, to see if the evaluation of sedation was consistent with the protocol.

Unfortunately the annexes are in Spanish and not in English.

The M&M section is missing a description of the comparison between the level of sedation given by the descriptive scale and the one resulting from the questionnaire.

Reviewer #2: Reviewer report

This manuscript describes the attempted development of a sedation scale in dogs, based on a Delphi method.

I have limited my comments to the study design and underlying assumptions as the analytical techniques (classification tree and random forest) employed are beyond my expertise.

I have concerns regarding the study methods, some of which may profoundly limit interpretation of the results.

1. Study participants - participants are repeatedly described as “experts” but there is no evidence, based on the information provided in the manuscript, of how many of them meet widely accepted definitions of expert status (American or European College Diplomate holders). Holding a research degree (MSc or PhD) is not a reflection of clinical experience or training, or even of relevance to the subject of this study. This poses several problems, some of which are likely to have interfered with the collected data and consequently biased the results and interpretation.

a. participants may have provided limited descriptors, reflecting limited training experience. To some extent this is reflected in the Discussion, where the authors identify that some criteria, such as jaw tone, were not mentioned.

b. participants reflect a single mother tongue, Spanish. It is well established that health assessment scale development should consider differences in the interpretation and meaning of terms between languages. It is concerning that a scale developed in Spanish has been translated into English for publication, with no apparent assessment of whether scale performance has been affected as a result of the translation.

c. Participant assessments using the developed scale were compared to a simple descriptive scale. This is problematic because feedback from the same participants was used to develop the scale. Ideally, animals would have been video-recorded (with appropriate consent) and the videos scored by a separate, independent group of observers.

d. the authors suggest that similarities in terminology and jargon used by participants during the Delphi process were beneficial. Given the lack of apparent minimal consistent training level, how could authors be certain that terminology and jargon were applied consistently by participants?

2. Scale development process

a. the argument in favour of dichotomous outcomes as being more objective is misguided. Though forcing a participant to select a dichotomous outcome appears objective, the underlying assessments are in large part subjective. There is no indication that objective measures were employed in many of the final predictors described. This could explain why the scale performed less well when animals had levels of sedation that were not at the extremes (“no” or “profound” sedation).

b. from the methods (phase 1 and 2) it appears there was considerable author input in selecting and classifying the final descriptors - this may be a necessary step, but it would be helpful to have assurance that any author bias was limited - perhaps the authors would consider making their data available on-line?

c. some descriptors are clearly highly subjective and uncontrolled e.g. responses to noise and name.

d. for some descriptors, participants had the option of selecting which would be tested. This suggests the possibility that the testing method was not standardised, with the risk that the outcome of a test would be influenced by what occurred previously. For example, if attempts were made to place a dog in lateral recumbency, this could change the level of sedation, affecting the test that followed (e.g. response to noise).

e. There appears to be an underlying assumption that the method used by the authors to collect descriptors is somehow superior to methods used in previously published studies (e.g. Young et al.). Just because the described method has been named “Delphi” does not confer legitimacy over older studies conducted before the Delphi concept existed or was widely applied. It is highly likely that descriptors generated in previous studies were achieved through consensus discussion.

f. I could not find Fig 2.

g. As identified by the authors, participants were not blinded to the status of the dogs. This would appear a critical limitation, especially when no further testing has been done to evaluate scale performance with observers blinded to treatment.

3. The language used throughout needs editing to raise the standard of scientific English. There is a tendency to use slang (“in fact”) and poorly defined terms (“awareness” - how can we know if an animal is aware? Similarly, “state of mind”).

4. As the authors describe, other sedation scales have been published for use in dogs, including one that was validated according to psychometric principles (validity and reliability testing). With the study goal of developing this novel scale, it would have been invaluable to perform some comparisons with a pre-existing scale. In general terms, it is extremely difficult to interpret the performance of a novel scale when it is compared against a non-validated scale (as was the case here). Therefore, it is probably unsurprising that the results show the extremes of sedation (no sedation and profound sedation) to be in close agreement. Surely it is accurately classifying the full continuum (including crossing the threshold into general anaesthesia) that is of greatest interest?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Apr 1;15(4):e0230799. doi: 10.1371/journal.pone.0230799.r002

Author response to Decision Letter 0


11 Nov 2019

To the reviewer 1:

Thank you for taking the time to review our work. We truly believe in the peer-review process and we know the article will improve with your comments. All the modifications that you suggested have been highlighted in yellow in the manuscript. The reply to your comments is as follows:

You mentioned in your revision:

The acronym used is not consistent with the words: I suggest finding a single word for each letter, for example SIESTA dog: SEAAV Integrated Evaluation Sedation Tool for Anaesthesia in dogs.

Although there is no need for an acronym to use the first letter of each word, we appreciate your suggestion and have decided to introduce it in the manuscript.

Abstract

Line 45: please specify how many anaesthetists sent the questionnaires back. It is not clear how the clinicians involved used the questionnaires.

An explanation has been added in line 45 and the number of participants is included in line 48.

Line 56: there is not enough information in the abstract to assess whether the questionnaire used was successful in evaluating the real sedation status of the animals.

We added some modifications to the abstract to address this comment.

M&M

Line 113: please specify if the answer to this first phase was anonymous or not.

This has been added now.

Results

Line 184: please correct the capital letter of “State” in “state”.

This has been changed now.

Line 191: please specify why, out of 205 cases, there were 94 males and 101 females: 10 dogs are missing.

Information to clarify this has been added. These animals were assessed twice, and for that reason they appeared to be missing.

Table 1: I suggest putting the total number for each class of drugs in order to show what is explained in the legend: more than 1 drug was used on the same animal. In any case, it is not clear why the authors want to give this information, because it is not possible to understand from the table which kind of sedative protocol/combination was used; I suggest dividing the cases into opioid only, sedative agents only, and combinations (opioids + sedatives, or sedatives + tranquillisers, etc.). It would be interesting to associate the drugs used with the level of sedation, to see if the evaluation of sedation was consistent with the protocol.

We are providing the information only because it was included in the questionnaire and we thought it would give some degree of perspective to the reader, and for completeness. We agree that the information is fairly irrelevant, and we defer to the Editor to decide whether it should be included or not.

Unfortunately the annexes are in Spanish and not in English.

As mentioned in the Materials and Methods, all the research was performed in Spanish, and for that reason the data in the annexes are all in Spanish. Any translation will involve some degree of interpretation, and we are considering the possibility of validating this tool in other languages; as yet, however, this research is just in the initial development phase. For that reason, we have kept the translation to a minimum for manuscript publication.

The M&M section is missing a description of the comparison between the level of sedation given by the descriptive scale and the one resulting from the questionnaire.

This has now been added in line 149.

We hope these comments answer all your queries and the changes in the manuscript address your concerns.

Once again, thank you for reviewing our work.

The authors.

To the reviewer 2:

Thank you for taking the time to review our work. We truly believe in the peer-review process and we know the article will improve with your comments. All the modifications that you suggested have been highlighted in green in the manuscript. The reply to your comments is as follows:

You mentioned in your revision:

This manuscript describes the attempted development of a sedation scale in dogs, based on a Delphi method.

We think it is important to highlight that this study is the development of a sedation assessment tool, and the title has been changed to stress that. We were trying to define and apply a new approach to the question of how sedated dogs are. We approached the question from a different angle: a group of questions that, when answered in a simple way (yes/no) and in a specific order, allow the level of sedation to be easily defined. We believe the term “scale” brings some preconceived ideas and, for that reason, we have removed it from the manuscript as much as possible.

I have limited my comments to the study design and underlying assumptions as the analytical techniques (classification tree and random forest) employed are beyond my expertise.

I have concerns regarding the study methods, some of which may profoundly limit interpretation of the results.

1. Study participants - participants are repeatedly described as “experts” but there is no evidence, based on the information provided in the manuscript, of how many of them meet widely accepted definitions of expert status (American or European College Diplomate holders). Holding a research degree (MSc or PhD) is not a reflection of clinical experience or training, or even of relevance to the subject of this study. This poses several problems, some of which are likely to have interfered with the collected data and consequently biased the results and interpretation.

As mentioned in the manuscript (line 273): ‘The definition of ‘expert’ was deliberately open to people with various levels of experience and expertise, independently of their qualifications. The only requirement was that the participants were working full time in veterinary anaesthesia.’ By definition, an expert is a knowledgeable or skillful person in a specific topic. We are sure you will agree that the people capable of assessing sedation in dogs are the ones who perform that task every day. We deliberately stayed away from the conservatism of considering only ECVAA/ACVAA diploma holders as ‘experts’, because some of them/us do not practise clinically as much as other people considered ‘less expert’ by that conservative definition. For your peace of mind, we can tell you that, of the 23 participants, 10 hold an ECVAA/ACVAA diploma and 3 were residents of the ECVAA at the time. We believe the definition is clear enough, and the term ‘experts’ is classically used in the Delphi method to refer to the participants. We defer to the editor on whether further details about the population of participants should be included in the results section.

a. participants may have provided limited descriptors, reflecting limited training experience. To some extent this is reflected in the Discussion, where the authors identify that some criteria, such as jaw tone, were not mentioned.

Please, see comments above.

b. participants reflect a single mother tongue, Spanish. It is well established that health assessment scale development should consider differences in the interpretation and meaning of terms between languages. It is concerning that a scale developed in Spanish has been translated into English for publication, with no apparent assessment of whether scale performance has been affected as a result of the translation.

The scale, the questionnaire, and the tool itself are translated exclusively to comply with the publishing guidelines of PLOS ONE, or they are not translated at all. We are fully aware of language and cultural differences, and we do not intend to validate this tool in English; that may occur at a later stage in the development of this project. Brondani, Luna and Padovani (2011) published the initial development of a scale for pain assessment in cats (originally in Portuguese) that finally unfolded into the ‘UNESP-Botucatu Multidimensional Composite Pain Scale for assessing postoperative pain in cats’, published and validated in several languages. This manuscript covers exclusively the initial development of a sedation assessment tool, nothing else.

c. Participant assessments using the developed scale were compared to a simple descriptive scale. This is problematic because feedback from the same participants was used to develop the scale. Ideally, animals would have been video-recorded (with appropriate consent) and the videos scored by a separate, independent group of observers.

As described in the Materials and Methods, the robust method of statistical analysis used in this study allowed us to use 66% of the data obtained to generate the statistical model (in this case, the classification tree from the random forest); this model was then tested against the remaining 33% to confirm its validity. In this article, we detailed the methods used to develop the sedation assessment tool; the next stage is to validate its inter- and intra-observer variability. This research is already in progress and will be published when complete. As you suggested, this phase is video-based and requires repeated assessment by multiple observers over a period of time.

d. the authors suggest that similarities in terminology and jargon used by participants during the Delphi process were beneficial. Given the lack of apparent minimal consistent training level, how could authors be certain that terminology and jargon were applied consistently by participants?

See above regarding the training of the participants. The terminology used during all phases of the Delphi method is general medical/veterinary terminology. All the descriptors generated in phases 1 and 2 were provided in the first submission.

2. Scale development process

a. the argument in favour of dichotomous outcomes as being more objective is misguided. Though forcing a participant to select a dichotomous outcome appears objective, the underlying assessments are in large part subjective. There is no indication that objective measures were employed in many of the final predictors described. This could explain why the scale performed less well when animals had levels of sedation that were not at the extremes (“no” or “profound” sedation).

It is not surprising that the tool performed slightly worse between mild and moderate sedation. In many scales there is considerable overlap between the descriptors of these two levels, and animals very often fit descriptors from more than one category at the same time (forcing the assessor to make a subjective decision about which category to use). The composite scales developed until now have the same subjectivity in score allocation. For instance, in Young et al. (1990), in the only parameter with scores and descriptions – spontaneous posture – the authors suggested using terms such as standing/tired, standing/lying but able to rise, and lying/rising with difficulty. All assessment tools involve some degree of subjectivity, and the combination of multiple dichotomous variables seems to provide the best results for the tool developed here.

b. from the methods (phase 1 and 2) it appears there was considerable author input in selecting and classifying the final descriptors - this may be a necessary step, but it would be helpful to have assurance that any author bias was limited - perhaps the authors would consider making their data available on-line?

The original response data from all the assessors and the summaries were included in the original submission. We have now added the involvement of the coordinators as a limitation of the study (line 345).

c. some descriptors are clearly highly subjective and uncontrolled e.g. responses to noise and name.

d. for some descriptors, participants had the option of selecting which would be tested. This suggests the possibility that the testing method was not standardised, with the risk that the outcome of a test would be influenced by what occurred previously. For example, if attempts were made to place a dog in lateral recumbency, this could change the level of sedation, affecting the test that followed (e.g. response to noise).

Please, see comments above about subjectivity.

e. There appears to be an underlying assumption that the method used by the authors to collect descriptors is somehow superior to methods used in previously published studies (e.g. Young et al.). Just because the described method has been named “Delphi” does not confer legitimacy over older studies conducted before the Delphi concept existed or was widely applied. It is highly likely that descriptors generated in previous studies were achieved through consensus discussion.

It is not the intention of the authors to start a discussion about the legitimacy of older or newer research, and we are sure that previously reported scales had a considerable amount of time dedicated to them. You appear to ignore the fact that 23 people giving their opinion on a topic, and being asked for their opinion several times, are likely to produce a broader and more sound opinion (not necessarily better, although this is also likely) than three or four researchers on their own. Additionally, the Delphi method is not a “new” method: it was first developed in the 1940s, in the early years of the Cold War, to forecast the role of technology in warfare. Since then, it has been used in multiple areas, and it is increasingly finding its way into areas where decision-making is necessary or fundamental.

f. I could not find Fig 2.

Figure 2 is at the end of the manuscript, before the links to the annexes. We have added it to this letter for your convenience.

g. As identified by the authors, participants were not blinded to the status of the dogs. This would appear a critical limitation, especially when no further testing has been done to evaluate scale performance with observers blinded to treatment.

As mentioned above, the robust statistical methods allowed us to perform this analysis. Additionally, as already mentioned in the manuscript and in this letter, this research is just the initial development of the assessment tool, not the validation phase.

3. The language used throughout needs editing to raise the standard of scientific English. There is a tendency to use slang (“in fact”) and poorly defined terms (“awareness” - how can we know if an animal is aware? Similarly, “state of mind”).

This manuscript was written by a bilingual author and was originally proof-read by a native English speaker. Following your comments, the manuscript has been modified and proof-read again to raise the standard of scientific English. It is difficult to address this criticism any further without specific suggestions.

4. As the authors describe, other sedation scales have been published for use in dogs, including one that was validated according to psychometric principles (validity and reliability testing). With the study goal of developing this novel scale, it would have been invaluable to perform some comparisons with a pre-existing scale. In general terms, it is extremely difficult to interpret the performance of a novel scale when it is compared against a non-validated scale (as was the case here). Therefore, it is probably unsurprising that the results show the extremes of sedation (no sedation and profound sedation) to be in close agreement. Surely it is accurately classifying the full continuum (including crossing the threshold into general anaesthesia) that is of greatest interest?

We appreciate your comment and agree that it would be interesting to compare several scales. As per the Material and Methods section, this research was performed before Wagner et al. (2017) was published. We will take into consideration your suggestion and, if possible, we might incorporate the comparison in future phases of the development of this sedation assessment tool.

We hope these comments answer all your queries and the changes in the manuscript address your concerns.

Once again, thank you for reviewing our work.

The authors.

Brondani, Luna and Padovani. Refinement and initial validation of a multidimensional composite scale for use in assessing acute postoperative pain in cats. Am J Vet Res. 2011; 72: 174-183.

Young LE, Brearley JC, Richards DLS, Bartram DH and Jones RS. Medetomidine as a premedicant in dogs and its reversal by atipamezole. J Small Anim Pract. 1990; 31: 554-559.

Wagner MC, Hecker KG and Pang DSJ. Sedation levels in dogs: a validation study. BMC Veterinary Research. 2017; 13: 110-118.

Attachment

Submitted filename: Response to reviewers.docx

Decision Letter 1

Simon Russell Clegg

14 Feb 2020

PONE-D-19-17102R1

The SIESTA (SEAAV Integrated Evaluation Sedation Tool for Anaesthesia) project: initial development of a multifactorial sedation assessment tool for dogs.

PLOS ONE

Dear Dr Martinez-Taboada

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Many thanks for your re-submission to PLOS One

Your manuscript was re-assigned to me as Academic editor.

Two reviewers were split on the manuscript, with one suggesting acceptance and one suggesting rejection.

Therefore, it went to a third reviewer, who has recommended that some changes be made.

I apologise for the delay in getting the comments back to you but hopefully you can understand why this was required.

I therefore invite you to further modify the manuscript and resubmit it along with a response to the reviewers' comments.

I wish you the best of luck with your revisions

Many thanks

Simon

==============================

We would appreciate receiving your revised manuscript by Mar 30 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note that, while forming your response, if your article is accepted you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Simon Russell Clegg, PhD

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: No

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear authors, thank you for revising the manuscript in a critical manner.

This paper is very interesting and it will be helpful for the clinical activity

Reviewer #2: Thank you to the authors for providing responses to my previous comments and suggestions. While I appreciate their efforts in justifying some of the decisions made during study design, I remain unconvinced that various assumptions underlying the design of this study are valid. Specifically, I do not feel that the justifications provided for the following are sufficient: wide variability in experience and training of participants, claims that performing the study in one language can generate a usable tool in another language, dependence on dichotomous outcomes, varying consistency in how assessments were applied by users.

Reviewer #3: This is an important and thorough investigation and it will be interesting to read further about the validation phase. However, I do have some doubts about the accuracy and clarity of English, which is a pity after the exhaustive research involved. To give some specific examples: frequent use of 'etc' when it's not clear about what the implied 'and others' actually are (e.g. a fundamental tool in the development of new sedative drugs, drug combinations, routes of administration, etc.); inconsistency in use of 'anaesthetists' and 'anaesthesiologists'; 'central nerve system'. I realise these are minor points, but I think they are vitally important because if we write descriptions of studies that might confuse readers assumed to be of equal knowledge and understanding, they might justifiably wonder whether the conduct of the study had been similarly confused. Maybe a good copy editor would be able to help, because this important work really needs to be published!

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 2

Simon Russell Clegg

10 Mar 2020

The SIESTA (SEAAV Integrated Evaluation Sedation Tool for Anaesthesia) project: initial development of a multifactorial sedation assessment tool for dogs.

PONE-D-19-17102R2

Dear Dr. Martinez-Taboada,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Simon Russell Clegg, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Many thanks for resubmitting your manuscript to PLOS One

It was reviewed by the same reviewer as last time, and I am pleased to say that they have recommended publication

I have therefore recommended that your manuscript be accepted, and you should hear from the editorial office soon

I wish you all the best with your future research and it was a pleasure working with you

Many thanks

Simon

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: Thank you for making the amendments suggested in my last review. I think the manuscript reads much more clearly now.

Acceptance letter

Simon Russell Clegg

12 Mar 2020

PONE-D-19-17102R2

The SIESTA (SEAAV Integrated Evaluation Sedation Tool for Anaesthesia) project: initial development of a multifactorial sedation assessment tool for dogs.

Dear Dr. Martinez-Taboada:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Simon Russell Clegg

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Questionnaire.

    Final version of the questionnaire distributed to the collaborators.

    (PDF)

    S1 Data

    (PDF)

    S2 Data

    (PDF)

    S3 Data

    (PDF)

    Attachment

    Submitted filename: Response to reviewers.docx

    Attachment

    Submitted filename: To the reviewers.docx

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.


    Articles from PLoS ONE are provided here courtesy of PLOS
