PLOS One. 2022 May 5;17(5):e0266252. doi: 10.1371/journal.pone.0266252

The HINTS examination and STANDING algorithm in acute vestibular syndrome: A systematic review and meta-analysis involving frontline point-of-care emergency physicians

Millie Nakatsuka 1,2,3,*, Emma E Molloy 3
Editor: Diego Kaski
PMCID: PMC9070939  PMID: 35511910

Abstract

This systematic review aims to evaluate whether point-of-care emergency physicians, without special equipment, can perform the HINTS examination or STANDING algorithm to differentiate between central and non-central vertigo in acute vestibular syndrome with diagnostic accuracy and reliability comparable to more specialized physicians (neuro-ophthalmologists and neuro-otologists). Previous research has concluded that emergency physicians are unable to utilize the HINTS examination with sufficient accuracy; however, those studies did not provide any appropriate education or training. A comprehensive systematic search was performed using MEDLINE, Embase, the Cochrane CENTRAL register of controlled trials, Web of Science Core Collection, Scopus, Google Scholar, the World Health Organization International Clinical Trials Registry Platform, and conference programs and abstracts from six medical organizations. Of the 1,757 results, only 21 were eligible for full-text screening. Two further studies were identified by a manual search of references and an electronic search for any missed studies associated with the authors. Five studies were included in the qualitative synthesis. For the STANDING algorithm, there were two studies of 450 patients who were examined by 11 emergency physicians. Our meta-analysis showed that emergency physicians who had received prior education and training were able to utilize the STANDING algorithm with a sensitivity of 0.96 (95% confidence interval: 0.87–1.00) and a specificity of 0.88 (0.85–0.91). No data were available for the HINTS examination. When emergency physicians are educated and trained, they can use the STANDING algorithm with confidence. There is a lack of evidence regarding the HINTS examination; however, two ongoing studies seek to remedy this deficit.

Introduction

AVS (acute vestibular syndrome) is a rapid-onset, persistent vertigo or dizziness associated with nausea or vomiting, head-motion intolerance, gait instability, and spontaneous nystagmus [1, 2]. It results from an “acute unilateral central or peripheral vestibular lesion that causes a sudden asymmetry of the normal vestibular nuclei neuronal firing rate” [3].

The HINTS examination (head impulse, nystagmus, test of skew) is used to identify central lesions in AVS and distinguish them from peripheral, relatively benign, and more common diagnostic alternatives [2]. This three-step bedside test has been rapidly adopted because it has greater sensitivity (97%) and specificity (99%) than early diffusion-weighted magnetic resonance imaging (MRI) in stroke diagnosis [2].

Further research led to the development of several variants of the original 2009 HINTS methodology, as follows: (1) the 2013 HINTS-PLUS, which added a test of acute hearing loss [4]; (2) the 2014 STANDING algorithm (spontaneous nystagmus, direction, head-impulse test, and standing), which has a modified, four-step test [5]; (3) the 2015 TiTrATE paradigm (timing, triggers, and targeted bedside eye examinations) [6]; and (4) the 2017 ATTEST approach (associated symptoms, timing and triggers, examination signs and testing) [7].

Video-oculography (VOG) or “eye ECG” is used in a further variant, VOG-HINTS [8], which is able to capture subtle oculomotor disturbances [9]. Although VOG-guided care may significantly lower costs, increase health utility, and result in fewer missed strokes and improved patient outcomes [10], it has not been widely used due to issues including disruptive eye movements and a high artifact rate [11].

A retrospective review by Rau et al. (2020) identified that only 18% of patients presenting with acute vertigo had a documented HINTS examination [12]. Other prominent studies, including a 2020 systematic review by Ohle et al. [13] and an ongoing phase two trial by Newman-Toker et al. [14], have concluded that only specialized physicians (neuro-ophthalmologists and neuro-otologists) are able to utilize HINTS because its component tests are unfamiliar to most emergency physicians [15]. These studies suggest VOG-assisted diagnosis.

Other studies have instead recommended further investigations. In 2017, Dumitrascu et al. called for the development of a dose-response curve for educational interventions for emergency physicians [16]. In 2020, Hunter proposed that emergency physicians may be able to utilize the HINTS examination with formal education and training [17]. In 2020, Ceccofiglio et al. raised concerns about the practicality of widespread VOG-assisted care because of cost considerations [18].

Our objective was to evaluate whether frontline point-of-care emergency physicians, without specialist equipment, can diagnose central vertigo in AVS using the HINTS examination (or its variants) with sufficient diagnostic accuracy and reliability. We did not intend to conduct a comparative analysis.

Materials and methods

This systematic review and meta-analysis adheres to the PRISMA statement [19] and is registered on PROSPERO [20].

Sources and search strategy

In May 2020, we searched MEDLINE, Embase, the Cochrane CENTRAL register of controlled trials, Web of Science Core Collection, Scopus, Google Scholar, and the World Health Organization International Clinical Trials Registry Platform. We did not place limits on document type, study design, language, or publication status. We searched for publications from 2009 onwards (the year HINTS was first introduced). Four assistants translated non-English studies.

In July 2020, we supplemented the above searches by contacting the following organizations and searching programs and abstracts of their annual or biennial conferences: the Neuro-Ophthalmology Society of Australia, the Neuro-Otology Society of Australia, the Australasian College for Emergency Medicine, the Australian and New Zealand Association of Neurologists, and the Royal Australian and New Zealand College of Ophthalmologists. By August 2020, it was apparent that a wider search (beyond Australia and New Zealand) would be largely redundant, given most international organizations are affiliated with official journals that publish their conference programs, including abstracts for oral and poster presentations. We contacted the International Federation for Emergency Medicine directly to manually search their conference programs and abstracts.

After full-text screening of the studies identified, we further searched every article cited in those studies or associated with their authors. We also contacted the authors of a 2019 poster presentation (Rayner et al. [21]) that intended to provide emergency physicians and trainees with education and training in HINTS.

If we required clarification of details or unpublished data from a study, we initially tried to contact the corresponding author. If no response was received within seven days, we tried again and also searched for alternative contact details using their associated institutions, ResearchGate, Open Researcher and Contributor Identification, and social media (Facebook and Twitter). If we received no response within another week, we contacted the subordinate authors. We excluded studies that provided no contact information. We excluded those whose contacts provided no response within a one month period. When the contact was known to be on leave, we attempted contact after their return.

Study selection criteria

After training and pilot testing in accordance with best practice guidelines for abstract screening [22], two independent reviewers (MN and EM) conducted screening in three stages using pre-determined criteria. All disagreements were resolved by subsequent discussion, without recourse to an independent third reviewer.

Stage 1 screening

One reviewer (MN) excluded irrelevant studies including books or book sections, studies based on animal subjects or models, studies involving people younger than 18 years old, and duplicates.

Stage 2 screening

Two independent reviewers (MN and EM) screened titles and abstracts. The inclusion criteria were: (1) primary research with original data; (2) patients presented to emergency settings with AVS (vertigo or dizziness) as the primary complaint; and (3) one of the outcomes of interest was the diagnostic accuracy of the HINTS examination (or any of its three individual components) or its variants. This third criterion was not mandatory, as we wanted to minimize the likelihood of missing diagnostic accuracy data presented within the full-text or known only to the authors. Therefore, for each study, we used four questions to estimate the likelihood of it providing the required data at the required level of detail. In order for a study to progress to the next stage of screening, it required a minimum score of four points out of ten. We excluded studies of fewer than five research subjects and those that utilized VOG-associated technology or telemedicine.

Stage 3 screening

Two independent reviewers (MN and EM) reviewed the full-text of each selected article. Our inclusion criteria were: (1) patients were examined by emergency physicians or trainees; (2) AVS (new-onset vertigo or dizziness) was the primary presenting complaint; (3) measures of diagnostic accuracy were available or data were available from which these could be derived; and (4) the final diagnosis was determined by at least one qualified physician, based on all available clinical information.

We excluded studies: (1) that included patients with a history of chronic or transient AVS; (2) that included patients who initially presented with gross neurological deficits, such as hemiparesis; (3) where AVS was deemed to result from a general medical disorder, such as orthostatic hypotension [23], or from obvious neurological disorders other than stroke, such as traumatic brain injuries [23]; and (4) where AVS was deemed to result from similarly rare causes. We also excluded studies that were concerned only with AVS in peripheral vestibular disorders and isolated stroke syndromes. Fig 1 provides an overview of the study selection.

Fig 1. Overview of study selection.

Fig 1

The diagram summarizes search results between January 1, 2009, and May 14, 2020. MEDLINE found 326 records, Embase 723 records, Cochrane Central 20 records, Web of Science Core Collection 131 records, Scopus 169 records, Google Scholar 200 records, and the International Clinical Trials Registry Platform (World Health Organization) 23 records. The search was updated during July and August 2020 by hand searching gray literature, identifying 165 records. All 1127 records were screened for eligibility, with 2 new studies meeting the inclusion criteria identified in January 2021.

Data collection

We developed data extraction tables that were refined after a pilot test using a randomly selected study. One reviewer (MN) extracted data, which was cross-checked by the second reviewer (EM). For one particular study [5], we additionally reviewed the duplicate 2015 study [24], to screen for inconsistencies. All disagreements were resolved in subsequent discussions between the first and second reviewer, without involving the third reviewer.

Risk of bias in individual studies

Two independent reviewers (MN and EM) evaluated the quality of each study against the 11 recommended quality items derived from the QUADAS-2 tool [25]. We added an additional domain to evaluate the suitability of the education and training program for emergency physicians: “Did the operators of the index test(s) receive an appropriate level of education and training?” Disputes were again resolved by discussion. We used the robvis package (https://github.com/mcguinlu/robvis) with R software (R Foundation for Statistical Computing, Vienna, Austria) to summarize the overall results [26].

Statistical analysis

We analyzed the data with MetaDiSc 1.40 (http://www.hrc.es/investigacion/metadisc_en.htm). We adopted a random effects model and performed the meta-analysis using the DerSimonian and Laird procedure to calculate the point estimate of pooled sensitivity and specificity, with a 95% confidence interval [27]. Because one data point from Vanni et al. (2014) was zero, we added 0.5 to all values [5].
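
The pooling step described above can be sketched as follows. This is an illustrative Python implementation of the DerSimonian and Laird procedure for proportions (not the MetaDiSc source code), applying the 0.5 continuity correction when a 2×2 cell is zero; the example counts at the bottom are hypothetical.

```python
import math

def dl_pooled_proportion(events, totals, cc=0.5):
    """DerSimonian-Laird random-effects pooling of proportions
    (e.g. per-study sensitivity = TP / (TP + FN)).
    A continuity correction is applied when any study has a zero
    or full cell, mirroring the correction described in the text."""
    if any(e == 0 or e == t for e, t in zip(events, totals)):
        events = [e + cc for e in events]
        totals = [t + 2 * cc for t in totals]
    p = [e / t for e, t in zip(events, totals)]
    v = [pi * (1 - pi) / t for pi, t in zip(p, totals)]   # binomial variance
    w = [1 / vi for vi in v]                              # fixed-effect weights
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))  # Cochran's Q
    k = len(p)
    denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / denom)                # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]                # random-effects weights
    pooled = sum(wi * pi for wi, pi in zip(w_star, p)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical true-positive counts and (TP + FN) denominators for two studies:
pooled_sens, ci = dl_pooled_proportion(events=[27, 40], totals=[28, 40])
```

The pooled estimate always lies between the individual study proportions, weighted toward the larger, less variable study.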

Risk of bias across studies

We used Cochran’s Q test, the chi-square statistic with p-values, and the I2 heterogeneity statistic to estimate whether variation between the two studies was beyond that reasonably expected by chance. We note that, with only two studies, these tests have limited value and could be misleading. Therefore, they did not drive our decision to conduct a meta-analysis; we relied instead on our assessments of the risk of bias in individual studies.

We were unable to obtain the data required to assess interobserver variability, nor was it possible to compare the Cohen’s kappas in Vanni et al. (2014) to the Fleiss kappas in Vanni et al. (2017) [5, 28].

Results

Study selection

Our initial searches across five databases and two registries of clinical trials identified 1410 articles. We subsequently reviewed conference programs and abstracts of scientific meetings from six medical organizations, resulting in a further 165 works, including recordings of conference presentations. After stage one screening, 1127 studies remained. Of these, 1106 were excluded after reviewing titles and abstracts. We then scrutinized the full-text of the remaining 21 studies; only three fulfilled the selection criteria. We noted that Vanni et al. (2015) was a duplicate of Vanni et al. (2014) [5, 24]. A supplemental search identified one further study, Ceccofiglio et al. (2020) [18]. We further included a study proposal by Rayner et al., from which we hoped to obtain relevant data before finishing our analysis [21]. Thus, a total of five studies were included in the qualitative synthesis.

As of January 15th 2021, the Gerlier et al. study is ongoing, while the Rayner et al. study is still in its protocol stage [21, 29].

Study characteristics

All five studies were single-center studies in which an appropriate level of education and practical training was provided (or was intended to be provided) to emergency physicians. Three of the five studies were prospective cohort studies: Vanni et al. (2017), Gerlier et al., and Rayner et al. [21, 28, 29]. Vanni et al. (2014) was a prospective, quasi-randomized, controlled trial [5]. The only retrospective review was that of Ceccofiglio et al. (2020) [18].

Three of the studies were conducted in Italy by a largely unchanged core group of physicians, all using only the STANDING algorithm: Vanni et al. (2014), Vanni et al. (2017), and Ceccofiglio et al. (2020) [5, 28, 18]. By contrast, in the Gerlier et al. study, participating physicians conducted both the STANDING algorithm and the original HINTS examination for each patient [29]. Rayner et al. intend to apply only the original HINTS examination [21]. The characteristics of the participating patients and physicians are summarized in Table 1. The minimal underlying data set is available online [30].

Table 1. Patient and physician characteristics in the studies included in the qualitative synthesis.

                      | Vanni 2014            | Vanni 2017            | Ceccofiglio 2020     | Gerlier               | Rayner
References            | 5                     | 28                    | 18                   | 29                    | 21
Patient characteristics
Total number          | 98                    | 352                   | 24                   | 232                   | -
Age (years)           | x¯ = 60 ± 16.3        | x¯ = 58 ± 18          | x¯ = 54              | -                     | -
Male                  | n = 42 (42.9%)        | n = 142 (40.3%)       | n = 6 (25.0%)        | -                     | -
No nystagmus          | n = 13 (14.3%)        | n = 76 (21.5%)        | -                    | n = 0 (0.00%)         | -
CT                    | n = 31 (31.6%)        | n = 137 (38.9%)       | -                    | -                     | -
MRI                   | n = 10 (10.2%)        | n = 27 (7.7%)         | -                    | n = 232 (100%)        | -
Central vertigo       | n = 11 (11.2%)        | n = 40 (11.4%)        | n = 0 (0.00%)        | -                     | -
Participating physicians
Physicians            | n = 5                 | n = 6                 | n ≤ 40 (?)           | n = 8                 | n = 15
Speciality            | Emergency medicine    | Emergency medicine    | Emergency medicine   | Emergency medicine    | Emergency medicine
Examiner              | Independent physician | Independent physician | Treating physician   | Independent physician | Unknown
Quality and quantity of physician education and training
Participation         | n = 5 (100%)          | n = 6 (100%)          | n = 5 (?%)           | n = 8 (100%)          | n = 15 (100%)
Education             | 5 hours of lectures   | 4 hours of lectures   | 4 hours of lectures  | 2 hours of lectures   | 1 hour of lecture
Practical training    | 1 hour of workshop    | 8 hours of workshops  | 8 hours of workshops | 8 hours of workshops  | Yes
Supervised placement  | No                    | 4 weeks with neuro-otologist | 4 weeks with neuro-otologist | No     | Yes
Assessment            | 15 proctored exams    | 10 proctored exams    | 10 proctored exams   | No                    | Yes

The table summarizes the characteristics of the participating patients and emergency physicians. As of January 15th 2021, the Gerlier et al. study is ongoing, while the Rayner et al. study is still in its protocol stage [21, 29].

Risk of bias in individual studies

We analyzed the 12 domains based on the adjusted recommended quality items derived from the QUADAS-2 tool [25]. The main reasons for downgrading studies were non-response bias, missing-data bias, and lack of representativeness of the population sample. Only two of the studies, Vanni et al. (2014) and Vanni et al. (2017), were of sufficiently high overall quality to be included in the quantitative analysis [5, 28]. The results are summarized in Fig 2.

Fig 2. Traffic light plot of the risk of bias within studies.

Fig 2

The diagram summarizes the risk of bias within the individual studies, based on the adjusted recommended quality items derived from the QUADAS-2 tool for each of the twelve domains. QUADAS = Quality Assessment tool for Diagnostic Accuracy Studies.

Result of individual studies

Three studies provided data for the diagnostic accuracy of the STANDING algorithm. Rayner et al. had not yet collected data, while Gerlier et al. did not provide data [21, 29].

In the Vanni et al. (2014) study, 92 patients were examined by five educated and trained emergency physicians, with a sensitivity of 0.96 (95% confidence interval: 0.83–0.99), and a specificity of 0.87 (0.82–0.90) [5].

In the Vanni et al. (2017) study, 352 patients were examined by six educated and trained emergency physicians, with a sensitivity of 1.00 (0.72–1.00), and a specificity of 0.94 (0.82–0.90) [28]. Thirteen patients withdrew from this study; four were lost to follow-up and nine were unable to attend follow-up.

In the Ceccofiglio et al. (2020) study, 24 patients were examined. We were unable to verify the number of emergency physicians who performed these examinations but we were able to determine that five of them had received education and training as participants in the Vanni et al. (2017) study [18, 28]. The specificity was 37.5%. The sensitivity could not be calculated as there were no cases of central vertigo.
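
The Ceccofiglio et al. figures illustrate why sensitivity is undefined in a cohort with no disease-positive cases. A minimal Python sketch follows; the 2×2 cell counts are inferred rather than reported (a specificity of 37.5% among 24 exclusively peripheral patients implies 9 true negatives and 15 false positives).

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity and specificity from a 2x2 diagnostic table.
    Returns None for a measure whose denominator is zero, as happens
    when a cohort contains no disease-positive (central) cases."""
    sens = tp / (tp + fn) if (tp + fn) > 0 else None
    spec = tn / (tn + fp) if (tn + fp) > 0 else None
    return sens, spec

# No central-vertigo cases (TP = FN = 0), so sensitivity is undefined;
# specificity = 9 / 24 = 37.5%, matching the figure reported above.
sens, spec = sens_spec(tp=0, fn=0, fp=15, tn=9)
```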

Synthesis of results

Diagnostic data was available from two studies, with 450 patients examined by 11 emergency physicians who were educated and trained to perform and interpret the STANDING algorithm. The sensitivity and specificity of individual studies are summarized in Fig 3, as pooled values with 95% confidence intervals. Pooled sensitivity was 0.96 (0.87–1.00) and pooled specificity was 0.88 (0.85–0.91).

Fig 3. Meta-analysis of the diagnostic accuracy of the STANDING algorithm in trained emergency physicians.

Fig 3

The forest plot shows the individual and pooled sensitivity and specificity with 95% confidence intervals. A random-effects model was used. Cochran’s Q test, chi-square with p values, and I2 were calculated to estimate whether the variation between two studies was beyond that reasonably expected by chance.

Risk of bias across studies

Our analysis suggests heterogeneity for specificity (chi-square = 4.49 [p = 0.0341]; I2 = 77.7%) but no heterogeneity for sensitivity (chi-square = 0.99 [p = 0.3188]; I2 = 0.0%).
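
The reported I2 values follow directly from the chi-square (Q) statistics with one degree of freedom (two studies, so df = k − 1 = 1). A quick sketch of Higgins’ formula:

```python
def i_squared(q, df):
    """Higgins' I^2: the percentage of total variation across studies
    attributable to heterogeneity rather than chance. Truncated at 0
    when Q falls below its degrees of freedom."""
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Reproducing the figures reported above:
i2_spec = i_squared(4.49, 1)   # specificity: approx. 77.7%
i2_sens = i_squared(0.99, 1)   # sensitivity: 0.0%, since Q < df
```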

Patient characteristics are summarized in Table 1. We noted minimal variance between the two studies. Given this sufficient homogeneity, the data can usefully be combined to answer broader questions than could be addressed by the individual studies alone.

Discussion

Previous studies of HINTS examination use by emergency physicians have supported two general streams of thought. Some have concluded that emergency physicians are unable to use the HINTS examination with sufficient accuracy and have suggested a move toward VOG-based care (e.g. the ongoing “eye ECG” trial by Newman-Toker et al. [14]). A systematic review by Ohle et al. (2020) also supports this view; it reported a sensitivity of 83% and a specificity of 44% [13]. However, this was based solely on the Kerber et al. (2015) study [31], which is limited in that it did not provide any education or training to emergency physicians; the examiners were fellowship-trained in vascular neurology. Moreover, the outcome was purely MRI-based, with a substantial incidence of patient withdrawal (48/320 patients, or 15%).

Other studies have not concluded against emergency physicians, but have instead called for further research into the education and training required: for example, Hunter (2020), Dumitrascu et al. (2017), and Ceccofiglio et al. (2020) [16–18]. Concerns have been expressed about the feasibility of expensive VOG-HINTS examinations in non-major city hospitals. This avenue of inquiry is particularly important for addressing health inequality.

We designed our study to follow this line of enquiry. We sought to focus on emergency physicians, explicitly measuring the education and training they received, and evaluating their use of the HINTS examination and its four variants. We adopted a deliberately extensive search strategy, without limitations, in an attempt to improve upon Ohle et al. (2020), who had pre-emptively excluded a large volume of potentially relevant studies, including ongoing trials, non-peer reviewed journals, and unpublished data [13].

Our systematic review and meta-analysis thus provides the first quantitative summary estimate of the sensitivity and specificity achieved by educated and trained emergency physicians in performing and interpreting the STANDING algorithm to rule out central pathology in patients with AVS. Our results show that, with education and training, emergency physicians can independently identify central vertigo in AVS using the STANDING algorithm. This was achieved with high diagnostic accuracy, with a pooled sensitivity of 0.96 (0.87–1.00) and a specificity of 0.88 (0.85–0.91). This finding is significant support for the contention that emergency physicians have been prematurely dismissed as effective examiners.

Currently, data for evaluating the use of the original HINTS examination by emergency physicians are not available. However, it is pleasing to note that there is ongoing (Gerlier et al.) and proposed (Rayner et al.) research on the diagnostic accuracy of the HINTS examination by appropriately educated and trained emergency physicians [21, 29]. Our search found no studies where emergency physicians used the HINTS-PLUS, TiTrATE paradigm, or ATTEST approach.

We wish to highlight that, although “HINTS appears to be accurate independent of the presence or absence of other neurological signs” [32], the original test cannot be conducted in patients who are no longer symptomatic or without spontaneous nystagmus. Furthermore, many neurologists have expressed concern that other neurological signs are undervalued. There is increasing evidence to support that vestibulospinal signs, in particular, are “probably strong predictors of a central cause”: for example, a systematic review by Tarnutzer et al. (2011) [33]. Edlow and Newman-Toker (2016) noted that the first of five prudent questions to pose when assessing acute dizziness and vertigo concerned the patient’s ability to sit or stand without assistance [32]. Carmona et al. (2015) noted that truncal ataxia was “an easy sign, even for physicians without specific training in clinical neurology” [34]. In essence, the STANDING algorithm is a HINTS examination plus positional nystagmus and an evaluation of standing position and gait, an assessment that can be performed in all patients.

The ATTEST approach (the revised and renamed TiTrATE paradigm) incorporates aspects of the patient history combined with targeted bedside examinations to best inform a differential diagnosis. Although there have been no clinical trials to date, even among physicians of other specialities, it is considered to be “supported by a very strong evidence base in the speciality literature” [7].

Limitations

Although its extensive design is a major strength of our study, it also introduces inherent limitations that contribute to uncertainty and bias, particularly methodological diversity. Capturing as many studies as possible meant that some of our data came from subsections of studies with an array of different study types, designs, and objectives. Many studies did not record as much data as we required. For example, we were unable to assess interobserver reliability for the STANDING algorithm because the particular physician who examined each patient was not recorded.

Furthermore, our meta-analysis contained only two studies. We were unable to perform an individual patient data meta-analysis (considered the gold standard) because of a lack of individual patient data.

Other limitations included:

  1. Three of the five studies were conducted by a common working group of physicians: Vanni et al. (2014), Vanni et al. (2017), and Ceccofiglio et al. (2020) [5, 18, 28]. Limited and repetitive authorship is not ideal and increases the risk of bias, raising concerns that findings may vary significantly in different settings and populations.

  2. Vanni et al. (2017) mandated active follow-up with a neuro-otologist at one week and three months [28]. This resulted in the withdrawal of 13 of 365 patients.

  3. Lack of response and/or data decreased the amount of gray literature that could be screened, with the majority of contacts repeatedly failing to respond and many records incomplete or missing. A similar pattern of non-response and missing data also affected the five studies in our review.

  4. We used MetaDiSc for meta-analysis despite its outdated Moses-Littenberg method, because we did not require a hierarchical or bivariate analysis.

Other considerations

(1) Currently, no studies have looked at the dose-response curve, or the quality and quantity of the education and training required for emergency physicians. All five studies included in the qualitative synthesis conducted (or planned to conduct) both education and practical training. Gerlier et al. was the only study that conducted neither supervised placements nor assessments [29].

(2) Similarly, no studies have looked at the value of subsequent ongoing education and refresher courses to maintain skills and confidence in emergency physicians. Although the level of experience, confidence, and clinical utility will vary significantly between practice sites and individual physicians, operator proficiency is likely to decrease over time. Research on other examinations, such as a 2019 study by Schwid et al., suggests that even brief educational interventions can boost confidence to perform and supervise point-of-care ultrasound [35].

(3) Health inequity is a major problem in many countries. Health resource maldistribution is of particular concern, with governments over the last decade backing centralization, and either merging or closing services in Australia, the United States, Canada, Germany, and France [36].

While VOG-HINTS and telemedicine may appear very attractive, concerns have been expressed about the practicality of widespread VOG-assisted care because of cost considerations, including Ceccofiglio et al. (2020) [18]. For example, in Australia, only 25% of public hospitals are located in major cities [37]. Many rural and remote emergency departments have no facilities to conduct pathology tests or advanced imaging; they rely on emergency retrieval services to transport critical patients (and specimens). The cost of VOG is not feasible in such settings, especially with the strain on resources compounded by the coronavirus pandemic.

Three further significant logistical concerns with VOG and telemedicine are that: A) training may be required to use the equipment; B) equipment may malfunction, or face other technical issues such as a slow internet connection; and C) there is limited availability of on-call city specialists for emergency telemedicine consults, particularly after-hours.

(4) A pleasing new development is the increasing interest in using smartphones to record eye movements. Parker et al. (2021) conducted a preliminary study which supports the concept of smartphone-based applications as an alternative to traditional VOG [38].

(5) There is a large body of literature concerned with the loss of clinical skills and growing dependence on technology in medicine [39]. Fear of missing a low probability diagnosis and litigation has also resulted in defensive medicine, increasing the rates of unnecessary diagnostic imaging [40]; this places emergency physicians at particularly high risk due to the nature of their work.

A decade-long study by Kanzaria et al. (2015) identified that dizziness-related imaging in emergency settings had increased by 169%, without a corresponding increase in the diagnosis of cerebrovascular disease among these patients [41]. The largest increase, of 281%, occurred in patients aged 20 to 44, who are at relatively low risk of such events [41]. A recent case presented by Khachatoorian et al. also warns of the potential dangers of misdiagnosis when a thorough history and physical examination are omitted [42].

Conclusions

The results of our systematic review and meta-analysis are particularly significant for less privileged hospitals that have minimal access to specialists and VOG-assisted technology. We believe that previous studies were unable to demonstrate an acceptable level of diagnostic accuracy because they did not provide prior education and training. Our results show that by addressing the lack of education and training in the STANDING algorithm, frontline emergency physicians can become equipped to make rapid point-of-care decisions, even in rural or remote settings. We note that two ongoing studies are focusing on providing education and training for the HINTS examination in emergency physicians.

Future research is required to focus on the quality and quantity of the education and training required, as well as to assess the value of subsequent ongoing education and refresher courses to preserve operator proficiency.

Supporting information

S1 Checklist

(DOCX)

Acknowledgments

I am sincerely grateful to supervisor Dr. Mark Paine (neuro-otology and neuro-ophthalmology specialist), Syeda Azim (statistician), and the team of librarians and translators.

Data Availability

The minimal dataset is now available online from https://doi.org/10.25910/tqv0-qz64.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Hotson JR, Baloh RW. Acute vestibular syndrome. N Engl J Med. 1998;339(10):680–685. doi: 10.1056/NEJM199809033391007 . [DOI] [PubMed] [Google Scholar]
  • 2.Kattah JC, Talkad AV, Wang DZ, Hsieh YH, Newman-Toker DE. HINTS to diagnose stroke in the acute vestibular syndrome: three-step bedside oculomotor examination more sensitive than early MRI diffusion-weighted imaging. Stroke. 2009;40(11):3504–3510. doi: 10.1161/STROKEAHA.109.551234 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Kattah JC. Use of HINTS in the acute vestibular syndrome. An Overview. Stroke Vasc Neurol. 2018;3(4):190–196. doi: 10.1136/svn-2018-000160 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Newman-Toker DE, Kerber KA, Hsieh YH, Pula JH, Omron R, Saber Tehrani AS, et al. HINTS outperforms ABCD2 to screen for stroke in acute continuous vertigo and dizziness. Acad Emerg Med. 2013;20(10):986–996. doi: 10.1111/acem.12223 . [DOI] [PubMed] [Google Scholar]
  • 5.Vanni S, Pecci R, Casati C, Moroni F, Risso M, Ottaviani M, et al. STANDING, a four-step bedside algorithm for differential diagnosis of acute vertigo in the emergency department. ACTA Otorhinolaryngol Ital. 2014;34(6):419–426. . [PMC free article] [PubMed] [Google Scholar]
  • 6.Newman-Toker DE, Edlow JA. TiTrATE: a novel, evidence-based approach to diagnosing acute dizziness and vertigo. Neurol Clin. 2015;33(3):577–599. doi: 10.1016/j.ncl.2015.04.011 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Edlow JA, Gurley KL, Newman-Toker DE. A new diagnostic approach to the adult patient with acute dizziness. J Emerg Med. 2018;54(4):469–483. doi: 10.1016/j.jemermed.2017.12.024 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Newman-Toker DE, Saber Tehrani AS, Mantokoudis G, Pula JH, Guede CI, Kerber KA, et al. Quantitative video-oculography to help diagnose stroke in acute vertigo and dizziness: toward an ECG for the eyes. Stroke. 2013;44(4):1158–1161. doi: 10.1161/STROKEAHA.111.000033 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Hubert RL, Todd NS, Lehnen N, Jahn K, Boy S, Audebert HJ, et al. Stroke or non-stroke in dizzy patients in rural areas: the necessity for remote examination, televertigo project. Cerebrovasc Dis. 2014;37(1):462. [Google Scholar]
  • 10.Keita M, Padula W, Newman-Toker DE. Video-oculography-guided workups vs. MRI-first for stroke diagnosis in emergency department vertigo and dizziness presentations: a model-based cost-effectiveness analysis. Diagnosis. 2017;4(4):eA:49. [Google Scholar]
  • 11.Mantokoudis G, Saber Tehrani AS, Wozniak A, Eibenberger K, Kattah JC, Guede CI, et al. Impact of artifacts on VOR gain measures by video-oculography in the acute vestibular syndrome. J Vestib Res. 2016;26(4):375–385. doi: 10.3233/VES-160587 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Rau CJ, Terling L, Elkhodair S, Kaski D. Acute vertigo in the emergency department: use of bedside oculomotor examination. Eur J Emerg Med. 2020;27(5):381–383. doi: 10.1097/MEJ.0000000000000674 . [DOI] [PubMed] [Google Scholar]
  • 13.Ohle R, Montpellier RA, Marchadier V, Wharton A, McIsaac S, Anderson M, et al. Can emergency physicians accurately rule out a central cause of vertigo using the HINTS examination? A systematic review and meta-analysis. Acad Emerg Med. 2020;27(9):887–896. doi: 10.1111/acem.13960 . [DOI] [PubMed] [Google Scholar]
  • 14.Newman-Toker DE. Grantome, National Institutes of Health; 2014. AVERT—acute video-oculography for vertigo in emergency rooms for rapid triage; [cited 2020 Oct]. Available from: https://grantome.com/grant/NIH/U01-DC013778-01A1 [Google Scholar]
  • 15.ClinicalTrials.gov, US National Library of Medicine; 2015. Acute video-oculography for vertigo in emergency rooms for rapid triage (AVERT); [cited 2020 Oct]. Available from: https://clinicaltrials.gov/ct2/show/NCT02483429 ID: NCT02483429
  • 16.Dumitrascu OM, Torbati S, Tighiouart M, Newman-Toker DE, Song SS. Pitfalls and rewards for implementing ocular motor testing in acute vestibular syndrome: A Pilot Project. Neurologist. 2017;22(2):44–47. doi: 10.1097/NRL.0000000000000106 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Hunter RB. Accuracy of the HINTS exam for vertigo in the hands of emergency physicians. NEJM J Watch. 2020. Apr 03. [Google Scholar]
  • 18.Ceccofiglio A, Pecci R, Peruzzi G, Rivasi G, Rafanelli M, Vanni S, et al. STANDING update: a retrospective analysis in the emergency department one year after its validation. Emerg Care J. 2020;16(2):94–98. doi: 10.4081/ecj.2020.8848 [DOI] [Google Scholar]
  • 19.Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. BMJ. 2009;339:b2700. doi: 10.1136/bmj.b2700 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.PROSPERO international prospective register of systematic reviews [Internet]; 2020. A systematic literature review and meta-analysis: can frontline point-of-care emergency physicians, general practitioners and hospital trainees use the bedside HINTS examination to rule out stroke in acute vestibular syndrome? [cited 2020 May]. National Institute for Health Research. Available from: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020187166 ID: CRD42020187166
  • 21.Rayner R, Hmu C. ‘HINTS’ at the front door: an acute stroke service quality improvement project. Int J Ther Rehabil. 2019;26(6):5. doi: 10.12968/ijtr.2019.26.6.5 [DOI] [Google Scholar]
  • 22.Polanin JR, Pigott TD, Espelage DL, Grotpeter JK. Best practice guidelines for abstract screening large-evidence systematic reviews and meta-analyses. Res Synth Methods. 2019;10(3):330–342. doi: 10.1002/jrsm.1354 [DOI] [Google Scholar]
  • 23.Edlow JA, Newman-Toker DE. Medical and nonstroke neurologic causes of acute, continuous vestibular symptoms. Neurol Clin. 2015;33(3):699–716, xi. doi: 10.1016/j.ncl.2015.04.002 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Vanni S, Nazerian P, Casati C, Moroni F, Risso M, Ottaviani M, et al. Can emergency physicians accurately and reliably assess acute vertigo in the emergency department? Emerg Med Australas. 2015;27(2):126–131. doi: 10.1111/1742-6723.12372 . [DOI] [PubMed] [Google Scholar]
  • 25.The Cochrane Collaboration. Reitsma JB, Rutjes AWS, Whiting P, Vlassov VV, Leeflang MMG, et al. Chapter 9: assessing methodological quality. In: Deeks JJ, Bossuyt PM, Gatsonis C, editors. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy version 1.0, 2009. Available from: https://methods.cochrane.org/sites/methods.cochrane.org.sdt/files/public/uploads/ch09_Oct09.pdf [Google Scholar]
  • 26.McGuinness LA, Higgins JPT. Risk-of-bias visualization (robvis): an R package and Shiny web app for visualizing risk-of-bias assessments. Res Synth Methods. 2021 Jan;12(1):55–61. doi: 10.1002/jrsm.1411 . [DOI] [PubMed] [Google Scholar]
  • 27.Higgins JPT, Thompson SG, Spiegelhalter DJ. A re-evaluation of random-effects meta-analysis. J R Stat Soc Ser A Stat Soc. 2009;172(1):137–159. doi: 10.1111/j.1467-985X.2008.00552.x . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Vanni S, Pecci R, Edlow JA, Nazerian P, Santimone R, Pepe G, et al. Differential diagnosis of vertigo in the emergency department: a prospective validation study of the STANDING algorithm. Front Neurol. 2017;8:590. doi: 10.3389/fneur.2017.00590 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.ClinicalTrials.gov, US National Library of Medicine; 2019. Eye-ECG approach to emergencies: diagnostic performance of the HINTS test; [cited 2021 Jan]. Available from: https://clinicaltrials.gov/ct2/show/NCT04118361 ID: NCT04118361
  • 30.Nakatsuka M, Molloy EE. The HINTS examination and STANDING algorithm in acute vestibular syndrome involving frontline point-of-care emergency physicians [dataset]. 2022 Feb 3 [cited 2022 Feb 3]. The Sydney eScholarship Repository, The University of Sydney. Available from: https://doi.org/10.25910/tqv0-qz64 [DOI] [Google Scholar]
  • 31.Kerber KA, Meurer WJ, Brown DL, Burke JF, Hofer TP, Tsodikov A, et al. Stroke risk stratification in acute dizziness presentations: a prospective imaging-based study. Neurology. 2015;85(21):1869–1878. doi: 10.1212/WNL.0000000000002141 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Edlow JA, Newman-Toker DE. Using the physical examination to diagnose patients with acute dizziness and vertigo. J Emerg Med. 2016;50(4):617–28. doi: 10.1016/j.jemermed.2015.10.040 . [DOI] [PubMed] [Google Scholar]
  • 33.Tarnutzer AA, Berkowitz AL, Robinson KA, Hsieh YH, Newman-Toker DE. Does my dizzy patient have a stroke? A systematic review of bedside diagnosis in acute vestibular syndrome. CMAJ. 2011;183(9):E571–E592. doi: 10.1503/cmaj.100174 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Carmona S, Martínez C, Zalazar G, Moro M, Batuecas-Caletrio A, Luis L, et al. The diagnostic accuracy of truncal ataxia and HINTS as cardinal signs for acute vestibular syndrome. Front Neurol. 2016;8(7):125. doi: 10.3389/fneur.2016.00125 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Schwid M, Harris O, Landry A, Eyre A, Henwood P, Kimberly H. Use of a refresher course increases confidence in point-of-care ultrasound skills in emergency medicine faculty. Cureus. 2019:11(8):e5413. doi: 10.7759/cureus.5413 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Vaughan L, Edwards N. The problems of smaller, rural and remote hospitals: separating facts from fiction. Future Healthcare Journal. 2020;7(1):38–45. doi: 10.7861/fhj.2019-0066 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Australian Institute of Health and Welfare. Hospital resources 2017–18: Australian hospital statistics [Internet]. Canberra: Australian Institute of Health and Welfare, 2019. [cited 2020 July 7]. Available from: https://www.aihw.gov.au/reports/hospitals/hospital-resources-2017-18-ahs/contents/hospitals-and-average-available-beds [Google Scholar]
  • 38.Parker TM, Farrell N, Otero-Millan J, Kheradmand A, McClenney A, Newman-Toker DE. Proof of concept for an “eyePhone” app to measure video head impulses. Digit Biomark. 2021;5:1–8. doi: 10.1159/000511287 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Datta A. Clinical Skill: The Ebbing Art of Medicine. Malays J Med Sci. 2021;28(1):105–108. doi: 10.21315/mjms2021.28.1.13 ; PMCID: PMC7909357. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Kene M, Ballard D, Vinson D, Rauchwerger A, Iskin H, Kim A. Emergency physician attitudes, preferences, and risk tolerance for stroke as a potential cause of dizziness symptoms. West J Emerg Med. 2015;16(5):768–76. doi: 10.5811/westjem.2015.7.26158 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Kanzaria HK, Hoffman JR, Probst MA, Caloyeras JP, Berry SH, Brook RH. Emergency physician perceptions of medically unnecessary advanced diagnostic imaging. Academic Emergency Medicine. 2015;22(4):390–398. doi: 10.1111/acem.12625 [DOI] [PubMed] [Google Scholar]
  • 42.Khachatoorian Y, Uberoi A, Shihadeh S. The lost art of physical examination. Abstract published at Hospital Medicine 2020, Virtual Competition. Abstract 806. Journal of Hospital Medicine. Available from: https://shmabstracts.org/abstract/the-lost-art-of-physical-examination/ [cited 2022 Jan 17]. [Google Scholar]

Decision Letter 0

Diego Kaski

10 Dec 2021

PONE-D-21-34436 The HINTS examination and STANDING algorithm in acute vestibular syndrome: a systematic review and meta-analysis involving frontline point-of-care emergency physicians. PLOS ONE

Dear Dr. Nakatsuka,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 24 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Diego Kaski, PhD MBBS

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors conducted a systematic review to evaluate whether point-of-care emergency physicians, without special equipment, can perform the HINTS plus examination or the STANDING algorithm to differentiate peripheral versus central causes of vertigo.

They found 21 eligible studies. The STANDING algorithm study from Florence, Italy, included 450 patients examined by 11 HINTS-trained ED physicians. These physicians achieved a sensitivity of 0.96 (CI 0.87–1.00) and a specificity of 0.88 (CI 0.85–0.91), as well as excellent Cohen's kappas with the subsequent neuro-otology evaluation. The initial HINTS examination, performed by a neuro-ophthalmologist/neuro-otologist, involved ED patients and patients transferred without a diagnosis from the EDs of other referring hospitals.

One paper studied the effect of training neurologists and ED physicians in performing HINTS; this training had a short-term impact on correct diagnosis/treatment and decreased the number of unnecessary imaging studies. One study also raised concerns about the cost of VOG-assisted care; this, however, will be well balanced by two facts: 1. a permanent record of the findings will be valuable in follow-up; 2. the cost is substantially less than that of unnecessary imaging, admission, etc.

I believe that the authors did an excellent review of what is available in the literature. The authors are correct on Vanni et al.'s pioneering training of a small number of ED physicians, who evaluated a large number of patients over time. I believe this is important: it is not only the initial training but also sustained exposure that leads to expertise.

Minor points to address

In the discussion section, this reviewer believes that the authors can state the opinion of Vanni et al., who provided their research team of ED physicians with proper training and found that they performed the STANDING protocol effectively. Not clearly stated, however, is the need for sustained exposure: if they see only one case a week, confidence will remain weak. An increasing number of ED physicians are focusing their attention on vertigo/dizziness, and the resulting increase in enthusiasm is noticeable.

I also think that the diagnosis may not be straightforward on the first clinical exam. In such cases, video recording may be the best choice. Video-oculography is not very expensive, and it provides a record of the initial findings, which may become critical in the short term for management. Let us draw an analogy with an echocardiogram obtained in a patient with a heart murmur (the goal is precision). If you agree, you may want to add this.

Video-oculography, of course, would not be needed for BPPV. It is true that local community hospitals will need to rely on telemedicine or refer the patient to a larger facility. It is possible, though, that common diagnoses such as BPPV or vestibular neuritis could be identified easily by the local ED physician. (If you agree, you may want to add this.)

Finally, increased literacy among all future physicians pertaining to neurology and vestibular medicine is also predictable in the not-so-far future. At our school, the curriculum is quite ambitious, and students, as they advance in their training and early practice, will become more aware that this is expected of them. (In the spirit of education, you may want to add this.)

Reviewer #2: In the Stage 1 screening, the authors state that they excluded irrelevant studies, but they ignore a lot of papers that use vestibulospinal signs in AVS.

"Imbalance", or more properly truncal ataxia, deserves a special paragraph, especially if quantified (Lee H. Neurology. 2006;67:1178–83; Carmona et al. doi: 10.3389/fneur.2016; Vanni et al. doi: 10.3389/fneur.2017).

a- Truncal ataxia is a specific manifestation of vestibulospinal compromise, and its value in the physical examination is the same as that of the ocular findings.

b- At both ends of the spectrum: no ataxia is never central, while grade III ataxia (unable to walk) is always central (Gurley KL, Edlow JA. Semin Neurol. 2019;39:27–40; Edlow JA, Gurley KL, Newman-Toker DE. https://doi.org/10.1016/j.jemermed.2017.12.024; Saber Tehrani AS, Kattah JC, Kerber KA, Gold DR, Zee DS, Urrutia VC, Newman-Toker DE. Diagnosing stroke in acute dizziness and vertigo: pitfalls and pearls. Stroke. 2018 Mar).

Other vestibulospinal signs can also help: Babinski's asynergia sign, and falling from sitting with eyes closed and arms over the chest (Edlow JA, Gurley KL, Newman-Toker DE. A new diagnostic approach to the adult patient with acute dizziness. J Emerg Med. 2018 Apr;54(4):469–483. doi: 10.1016/j.jemermed.2017.12.024). Ideally, have the patient walk unassisted; for severely nauseated patients too symptomatic to walk, test for truncal ataxia by asking the patient to sit upright in the stretcher with arms crossed. Patients who cannot walk or sit up unassisted are unsafe for discharge and are more likely to have a stroke (or other CNS pathology) rather than vestibular neuritis (27, 44, and 77). Although American emergency physicians are uncomfortable using HINTS testing and instead overuse CT, one study reported that specially trained emergency physicians using these bedside examination elements decreased both CT use and hospitalization.

You compare HINTS against STANDING, but HINTS addresses AVS whereas STANDING addresses the differential diagnosis of vertigo in the emergency department; it includes, for example, other etiologies of vertigo such as BPPV, so a direct comparison is not possible.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Sergio Carmona

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 May 5;17(5):e0266252. doi: 10.1371/journal.pone.0266252.r002

Author response to Decision Letter 0


31 Jan 2022

Thank you for the opportunity to submit a revised draft of our manuscript “The HINTS examination and STANDING algorithm in acute vestibular syndrome: a systematic review and meta-analysis involving frontline point-of-care emergency physicians” for publication as a Research Article in PLOS ONE. We appreciate the time and effort that you and the reviewers have dedicated to providing feedback. We are grateful for the insightful comments on and valuable improvements to our manuscript.

We have incorporated most of the suggestions made by the reviewers. Below, we provide point-by-point responses to the reviewers’ comments. All page numbers refer to the revised manuscript file (with tracked changes).

The minimal underlying dataset has been submitted to The University of Sydney eScholarship Repository. We will contact you as soon as our dataset is online to revise reference number 30 (Page 23).

We have approved the revised manuscript and agree with its submission to PLOS ONE. There are no conflicts of interest to declare.

We look forward to hearing from you at your earliest convenience.

Attachment

Submitted filename: Response to Reviewers1.docx

Decision Letter 1

Diego Kaski

17 Mar 2022

The HINTS examination and STANDING algorithm in acute vestibular syndrome: a systematic review and meta-analysis involving frontline point-of-care emergency physicians

PONE-D-21-34436R1

Dear Dr. Nakatsuka,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Diego Kaski, PhD MBBS

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The questions were answered and the suggestions incorporated; a good paper. I believe that the manuscript summarizes the state of the art on the topic. The authors raise the concern that VOG recordings could lead to a less accurate neurologic examination.

The VOG advantages are multiple. One important one is the ability to quantitate the findings; a second very important feature is the ability to examine eye movements with total fixation block, which is otherwise impossible. This, of course, should never replace the complete neurologic exam, including posture and gait when possible.

Reviewer #2: Congratulations on a good review, which demonstrates that AVS management is still not well known by general physicians in the ER.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Sergio Carmona

Acceptance letter

Diego Kaski

11 Apr 2022

PONE-D-21-34436R1

The HINTS examination and STANDING algorithm in acute vestibular syndrome: a systematic review and meta-analysis involving frontline point-of-care emergency physicians

Dear Dr. Nakatsuka:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Diego Kaski

Academic Editor

PLOS ONE

