Abstract
This systematic review aims to evaluate whether point-of-care emergency physicians, without special equipment, can perform the HINTS examination or STANDING algorithm to differentiate between central and non-central vertigo in acute vestibular syndrome with diagnostic accuracy and reliability comparable to more specialized physicians (neuro-ophthalmologists and neuro-otologists). Previous research concluded that emergency physicians are unable to utilize the HINTS examination with sufficient accuracy; however, those studies did not provide any appropriate education or training. A comprehensive systematic search was performed using MEDLINE, Embase, the Cochrane CENTRAL register of controlled trials, Web of Science Core Collection, Scopus, Google Scholar, the World Health Organization International Clinical Trials Registry Platform, and conference programs and abstracts from six medical organizations. Of the 1,757 results, only 21 were eligible for full-text screening. Two further studies were identified by a manual search of references and an electronic search for any missed studies associated with the authors. Five studies were included in the qualitative synthesis. For the STANDING algorithm, two studies provided data on 450 patients examined by 11 emergency physicians. Our meta-analysis showed that emergency physicians who had received prior education and training were able to utilize the STANDING algorithm with a sensitivity of 0.96 (95% confidence interval: 0.87–1.00) and a specificity of 0.88 (0.85–0.91). No data were available for the HINTS examination. When emergency physicians are educated and trained, they can use the STANDING algorithm with confidence. There is a lack of evidence regarding the HINTS examination; however, two ongoing studies seek to remedy this deficit.
Introduction
AVS (acute vestibular syndrome) is a rapid-onset, persistent vertigo or dizziness associated with nausea or vomiting, head-motion intolerance, gait instability, and spontaneous nystagmus [1, 2]. It results from an “acute unilateral central or peripheral vestibular lesion that causes a sudden asymmetry of the normal vestibular nuclei neuronal firing rate” [3].
The HINTS examination (head impulse, nystagmus, test of skew) is used to identify central lesions in AVS and distinguish them from peripheral, relatively benign, and more common diagnostic alternatives [2]. This three-step bedside test has been rapidly adopted because it has greater sensitivity (97%) and specificity (99%) than early diffusion-weighted magnetic resonance imaging (MRI) in stroke diagnosis [2].
Further research led to the development of several variants of the original 2009 HINTS methodology, as follows: (1) the 2013 HINTS-PLUS, which added a test of acute hearing loss [4]; (2) the 2014 STANDING algorithm (spontaneous nystagmus, direction, head-impulse test, and standing), which has a modified, four-step test [5]; (3) the 2015 TiTrATE paradigm (timing, triggers, and targeted bedside eye examinations) [6]; and (4) the 2017 ATTEST approach (associated symptoms, timing and triggers, examination signs and testing) [7].
Video-oculography (VOG) or “eye ECG” is used in a further variant, VOG-HINTS [8], which is able to capture subtle oculomotor disturbances [9]. Although VOG-guided care may significantly lower costs, increase health utility, and result in fewer missed strokes and improved patient outcomes [10], it has not been widely used due to issues including disruptive eye movements and a high artifact rate [11].
A retrospective review by Rau et al. (2020) identified that only 18% of patients presenting with acute vertigo had a documented HINTS examination [12]. Other prominent studies, including a 2020 systematic review by Ohle et al. [13] and an ongoing phase two trial by Newman-Toker et al. [14], have concluded that only specialized physicians (neuro-ophthalmologists and neuro-otologists) are able to utilize HINTS because its component tests are unfamiliar to most emergency physicians [15]. These studies suggest VOG-assisted diagnosis.
Other studies have instead recommended further investigations. In 2017, Dumitrascu et al. called for the development of a dose-response curve for educational interventions for emergency physicians [16]. In 2020, Hunter proposed that emergency physicians may be able to utilize the HINTS examination with formal education and training [17]. In 2020, Ceccofiglio et al. raised concerns about the practicality of widespread VOG-assisted care because of cost considerations [18].
Our objective was to evaluate whether frontline point-of-care emergency physicians, without specialist equipment, can diagnose central vertigo in AVS using the HINTS examination (or its variants) with sufficient diagnostic accuracy and reliability. We did not intend to conduct a comparative analysis.
Materials and methods
This systematic review and meta-analysis adheres to the PRISMA statement [19] and is registered on PROSPERO [20].
Sources and search strategy
In May 2020, we searched MEDLINE, Embase, the Cochrane CENTRAL register of controlled trials, Web of Science Core Collection, Scopus, Google Scholar, and the World Health Organization International Clinical Trials Registry Platform. We placed no limits on document type, study design, language, or publication status. We searched for publications from 2009 onwards (the year HINTS was first introduced). Four assistants translated non-English studies.
In July 2020, we supplemented the above searches by contacting the following organizations and searching the programs and abstracts of their annual or biennial conferences: the Neuro-Ophthalmology Society of Australia, the Neuro-Otology Society of Australia, the Australasian College for Emergency Medicine, the Australian and New Zealand Association of Neurologists, and the Royal Australian and New Zealand College of Ophthalmologists. By August 2020, it was apparent that a wider search (beyond Australia and New Zealand) would be largely redundant, given that most international organizations are affiliated with official journals that publish their conference programs, including abstracts for oral and poster presentations. We contacted the International Federation for Emergency Medicine directly to manually search their conference programs and abstracts.
After full-text screening of the studies identified, we further searched every article cited in those studies or associated with their authors. We also contacted the authors of a 2019 poster presentation (Rayner et al. [21]) that intended to provide emergency physicians and trainees with education and training in HINTS.
If we required clarification of details or unpublished data from a study, we initially tried to contact the corresponding author. If no response was received within seven days, we tried again and also searched for alternative contact details using their associated institutions, ResearchGate, Open Researcher and Contributor ID (ORCID), and social media (Facebook and Twitter). If we received no response within another week, we contacted the subordinate authors. We excluded studies that provided no contact information, as well as those whose contacts provided no response within a one-month period. When a contact was known to be on leave, we attempted contact after their return.
Study selection criteria
After training and pilot testing in accordance with best practice guidelines for abstract screening [22], two independent reviewers (MN and EM) conducted screening in three stages using pre-determined criteria. All disagreements were resolved in subsequent discussion, without recourse to the independent third reviewer.
Stage 1 screening
One reviewer (MN) excluded irrelevant studies, including books or book sections, studies based on animal subjects or models, studies involving people younger than 18 years old, and duplicates.
Stage 2 screening
Two independent reviewers (MN and EM) screened titles and abstracts. The inclusion criteria were: (1) primary research with original data; (2) patients presented to emergency settings with AVS (vertigo or dizziness) as the primary complaint; and (3) one of the outcomes of interest was the diagnostic accuracy of the HINTS examination (or any of its three individual components) or its variants. This third criterion was not mandatory, as we wanted to minimize the likelihood of missing diagnostic accuracy data presented only within the full text or known only to the authors. Therefore, for each study, we used four questions to estimate the likelihood of it providing the required data at the required level of detail. To progress to the next stage of screening, a study required a minimum score of four points out of ten. We excluded studies of fewer than five research subjects and those that utilized VOG-associated technology or telemedicine.
Stage 3 screening
Two independent reviewers (MN and EM) reviewed the full-text of each selected article. Our inclusion criteria were: (1) patients were examined by emergency physicians or trainees; (2) AVS (new-onset vertigo or dizziness) was the primary presenting complaint; (3) measures of diagnostic accuracy were available or data were available from which these could be derived; and (4) the final diagnosis was determined by at least one qualified physician, based on all available clinical information.
We excluded studies: (1) that included patients with a history of chronic or transient AVS; (2) that included patients who initially presented with gross neurological deficits, such as hemiparesis; (3) where AVS was deemed to result from a general medical disorder, such as orthostatic hypotension [23], or from obvious neurological disorders other than stroke, such as traumatic brain injuries [23]; and (4) where AVS was deemed to result from similarly rare causes. We also excluded studies that were concerned only with AVS in peripheral vestibular disorders and isolated stroke syndromes. Fig 1 provides an overview of the study selection.
Fig 1. Overview of study selection.
The diagram summarizes search results between January 1, 2009, and May 14, 2020. MEDLINE found 326 records, Embase 723, Cochrane CENTRAL 20, Web of Science Core Collection 131, Scopus 169, Google Scholar 200, and the World Health Organization International Clinical Trials Registry Platform 23. The search was updated during July and August 2020 by hand-searching gray literature, identifying 165 records. All 1127 records were screened for eligibility, with 2 new studies meeting the inclusion criteria identified in January 2021.
Data collection
We developed data extraction tables that were refined after a pilot test using a randomly selected study. One reviewer (MN) extracted the data, which were cross-checked by the second reviewer (EM). For one study [5], we additionally reviewed the duplicate 2015 publication [24] to screen for inconsistencies. All disagreements were resolved in subsequent discussions between the first and second reviewers, without involving the third reviewer.
Risk of bias in individual studies
Two independent reviewers (MN and EM) evaluated the quality of each study against the 11 recommended quality items derived from the QUADAS-2 tool [25]. We added an additional domain to evaluate the suitability of the education and training program for emergency physicians: “Did the operators of the index test(s) receive an appropriate level of education and training?” Disputes were again resolved by discussion. We used the robvis package (https://github.com/mcguinlu/robvis) with R software (R Foundation for Statistical Computing, Vienna, Austria) to summarize the overall results [26].
Statistical analysis
We analyzed the data with Meta-DiSc 1.4 (http://www.hrc.es/investigacion/metadisc_en.htm). We adopted a random-effects model and performed the meta-analysis using the DerSimonian and Laird procedure to calculate the point estimate of pooled sensitivity and specificity, with a 95% confidence interval [27]. Because one data point from Vanni et al. (2014) was zero, we added 0.5 to all values [5].
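For readers who wish to follow the arithmetic, the DerSimonian and Laird procedure and the 0.5 continuity correction can be sketched as below. This is a minimal Python illustration, not the Meta-DiSc implementation, and the per-study counts used at the bottom are hypothetical.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study estimates with the DerSimonian-Laird
    random-effects procedure; returns (pooled, ci_low, ci_high)."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q around the fixed-effect mean.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    # Between-study variance tau^2, clamped at zero.
    denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / denom)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

def proportion_and_variance(successes, total, correction=0.5):
    """Proportion and its variance after adding a 0.5 continuity
    correction to both cells (used when a cell count is zero)."""
    p = (successes + correction) / (total + 2 * correction)
    return p, p * (1 - p) / (total + 2 * correction)

# Hypothetical per-study sensitivities (illustrative counts only).
p1, v1 = proportion_and_variance(26, 27)
p2, v2 = proportion_and_variance(11, 12)
pooled, ci_low, ci_high = dersimonian_laird([p1, p2], [v1, v2])
```

Note that when a study has a zero cell, the correction keeps both the proportion and its variance strictly positive, which is what allows the study to be weighted at all.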
Risk of bias across studies
We used Cochran’s Q test, the chi-square with p-values, and the I2 heterogeneity statistic to estimate whether variation between two studies was beyond that reasonably expected by chance. We note that, with only two studies, these tests have limited value and could be misleading. Therefore, they did not influence us to conduct a meta-analysis; we relied on our assessments of the risk of bias in individual studies.
We were unable to obtain the data required to assess interobserver variability, nor was it possible to compare the Cohen’s kappas in Vanni et al. (2014) to the Fleiss kappas in Vanni et al. (2017) [5, 28].
Results
Study selection
Our initial searches across five databases and two registries of clinical trials identified 1410 articles. We subsequently reviewed conference programs and abstracts of scientific meetings from six medical organizations, resulting in a further 165 works, including recordings of conference presentations. After stage one screening, 1127 studies remained. Of these, 1106 were excluded after reviewing titles and abstracts. We then scrutinized the full text of the remaining 21 studies; only three fulfilled the selection criteria. We noted that Vanni et al. (2015) was a duplicate of Vanni et al. (2014) [5, 24]. A supplemental search identified one further study, Ceccofiglio et al. (2020) [18]. We further included a study proposal by Rayner et al., from which we hoped to obtain relevant data before finishing our analysis [21]. Thus, a total of five studies were included in the qualitative synthesis.
As of January 15, 2021, the Gerlier et al. study is ongoing, while the Rayner et al. study is still in its protocol stage [21, 29].
Study characteristics
All five studies were single-center studies in which an appropriate level of education and practical training was provided (or was intended to be provided) to emergency physicians. Three of the five studies were prospective cohort studies: Vanni et al. (2017), Gerlier et al., and Rayner et al. [21, 28, 29]. Vanni et al. (2014) was a prospective, quasi-randomized, controlled trial [5]. The only retrospective review was that of Ceccofiglio et al. (2020) [18].
Three of the studies were conducted in Italy by a largely unchanged core group of physicians, all using only the STANDING algorithm: Vanni et al. (2014), Vanni et al. (2017), and Ceccofiglio et al. (2020) [5, 28, 18]. By contrast, in the Gerlier et al. study, participating physicians conducted both the STANDING algorithm and the original HINTS examination for each patient [29]. Rayner et al. intend to apply only the original HINTS examination [21]. The characteristics of the participating patients and physicians are summarized in Table 1. The minimal underlying data set is available online [30].
Table 1. Patient and physician characteristics in the studies included in the qualitative synthesis.
| | Vanni 2014 | Vanni 2017 | Ceccofiglio 2020 | Gerlier | Rayner |
|---|---|---|---|---|---|
| References | 5 | 28 | 18 | 29 | 21 |
| Patient characteristics | |||||
| Total number | 98 | 352 | 24 | 232 | - |
| Age (years), mean | 60 ± 16.3 | 58 ± 18 | 54 | - | - |
| Male | n = 42 (42.9%) | n = 142 (40.3%) | n = 6 (25.0%) | - | - |
| No nystagmus | n = 13 (14.3%) | n = 76 (21.5%) | - | n = 0 (0.00%) | - |
| CT | n = 31 (31.6%) | n = 137 (38.9%) | - | - | - |
| MRI | n = 10 (10.2%) | n = 27 (7.7%) | - | n = 232 (100%) | - |
| Central vertigo | n = 11 (11.2%) | n = 40 (11.4%) | n = 0 (0.00%) | - | - |
| Participating physicians | |||||
| Physicians | n = 5 | n = 6 | n ≤ 40 (?) | n = 8 | n = 15 |
| Speciality | Emergency medicine | Emergency medicine | Emergency medicine | Emergency medicine | Emergency medicine |
| Examiner | Independent physician | Independent physician | Treating physician | Independent physician | Unknown |
| Quality and quantity of physician education and training | |||||
| Participation | n = 5 (100%) | n = 6 (100%) | n = 5 (?%) | n = 8 (100%) | n = 15 (100%) |
| Education | 5 hours of lectures | 4 hours of lectures | 4 hours of lectures | 2 hours of lectures | 1 hour of lecture |
| Practical training | 1 hour of workshop | 8 hours of workshops | 8 hours of workshops | 8 hours of workshops | Yes |
| Supervised placement | No | 4 weeks with neuro-otologist | 4 weeks with neuro-otologist | No | Yes |
| Assessment | 15 proctored exams | 10 proctored exams | 10 proctored exams | No | Yes |
Risk of bias in individual studies
We analyzed the 12 domains based on the adjusted recommended quality items derived from the QUADAS-2 tool [25]. The main reasons for downgrading studies were non-response bias, missing data, and lack of representativeness of the population sample. Only two of the studies, Vanni et al. (2014) and Vanni et al. (2017), were of sufficiently high overall quality to be included in the quantitative analysis [5, 28]. The results are summarized in Fig 2.
Fig 2. Traffic light plot of the risk of bias within studies.
The diagram summarizes the risk of bias within the individual studies, based on adjusted recommended quality items derived from the QUADAS-2 tool, for each of the twelve domains. QUADAS = Quality Assessment of Diagnostic Accuracy Studies.
Result of individual studies
Three studies provided data on the diagnostic accuracy of the STANDING algorithm. Rayner et al. had not yet collected data, while Gerlier et al. did not provide data [21, 29].
In the Vanni et al. (2014) study, 92 patients were examined by five educated and trained emergency physicians, with a sensitivity of 0.96 (95% confidence interval: 0.83–0.99) and a specificity of 0.87 (0.82–0.90) [5].
In the Vanni et al. (2017) study, 352 patients were examined by six educated and trained emergency physicians, with a sensitivity of 1.00 (0.72–1.00), and a specificity of 0.94 (0.82–0.90) [28]. Thirteen patients withdrew from this study; four were lost to follow-up and nine were unable to attend follow-up.
In the Ceccofiglio et al. (2020) study, 24 patients were examined. We were unable to verify the number of emergency physicians who performed these examinations but we were able to determine that five of them had received education and training as participants in the Vanni et al. (2017) study [18, 28]. The specificity was 37.5%. The sensitivity could not be calculated as there were no cases of central vertigo.
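For clarity on how such per-study figures arise, sensitivity and specificity can be derived from a 2×2 table of index-test results against the final diagnosis. The sketch below uses Wilson score intervals, one common choice of confidence interval (the included studies do not state their interval method), and the counts are hypothetical and purely illustrative.

```python
import math

def wilson_ci(successes, total, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / total
    center = (p + z * z / (2 * total)) / (1 + z * z / total)
    half = (z * math.sqrt(p * (1 - p) / total + z * z / (4 * total ** 2))
            / (1 + z * z / total))
    return max(0.0, center - half), min(1.0, center + half)

def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 table, each with a
    Wilson 95% confidence interval."""
    sens = tp / (tp + fn)       # true positives / all central cases
    spec = tn / (tn + fp)       # true negatives / all peripheral cases
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))

# Hypothetical 2x2 counts, for illustration only.
(sens, sens_ci), (spec, spec_ci) = diagnostic_accuracy(tp=10, fn=1, tn=70, fp=11)
```

This also makes plain why the Ceccofiglio et al. (2020) sensitivity could not be calculated: with no cases of central vertigo, the denominator tp + fn is zero.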
Synthesis of results
Diagnostic data were available from two studies, with 450 patients examined by 11 emergency physicians who were educated and trained to perform and interpret the STANDING algorithm. The sensitivity and specificity of the individual studies, together with the pooled values and 95% confidence intervals, are summarized in Fig 3. Pooled sensitivity was 0.96 (0.87–1.00) and pooled specificity was 0.88 (0.85–0.91).
Fig 3. Meta-analysis of the diagnostic accuracy of the STANDING algorithm in trained emergency physicians.
The forest plot shows the individual and pooled sensitivity and specificity with 95% confidence intervals. A random-effects model was used. Cochran’s Q test, chi-square with p values, and I2 were calculated to estimate whether the variation between two studies was beyond that reasonably expected by chance.
Risk of bias across studies
Our analysis suggests heterogeneity for specificity (chi-square = 4.49 [p = 0.0341]; I2 = 77.7%) but no heterogeneity for sensitivity (chi-square = 0.99 [p = 0.3188]; I2 = 0.0%).
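These I² values follow directly from Cochran's Q with one degree of freedom (two studies); a minimal sketch:

```python
def i_squared(q, df):
    """Higgins I^2 (%): share of total variation across studies
    attributable to heterogeneity rather than chance, clamped at 0."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# Reproduce the figures reported above (two studies -> df = 1).
i2_specificity = i_squared(4.49, 1)  # ~77.7%
i2_sensitivity = i_squared(0.99, 1)  # 0.0% (Q below df is clamped)
```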
Patient characteristics are summarized in Table 1. We noted minimal variance between the two studies. Given this sufficient homogeneity, the data can usefully be combined to answer broader questions than could be addressed by the individual studies alone.
Discussion
Previous studies of HINTS examination use by emergency physicians have supported two general streams of thought. Some have concluded that emergency physicians are unable to use the HINTS examination with sufficient accuracy and have advocated a move toward VOG-based care (e.g., the ongoing “eye ECG” trial by Newman-Toker et al. [14]). A systematic review by Ohle et al. (2020) also supports this view; it reported a sensitivity of 83% and a specificity of 44% [13]. However, this was based solely on the Kerber et al. (2015) study [31], which is limited in that it did not provide any education or training to emergency physicians; the examiners were fellowship-trained in vascular neurology. Moreover, the outcome was purely MRI-based, with a substantial incidence of patient withdrawal (48/320 patients, or 15%).
Other studies have not concluded against emergency physicians, but have instead called for further research into the education and training required: for example, Hunter (2020), Dumitrascu et al. (2017), and Ceccofiglio et al. (2020) [16–18]. Concerns have also been expressed about the feasibility of expensive VOG-HINTS examinations in non-major-city hospitals. This avenue of inquiry is particularly important for addressing health inequality.
We designed our study to follow this line of inquiry. We focused on emergency physicians, explicitly measuring the education and training they received, and evaluating their use of the HINTS examination and its variants. We adopted a deliberately extensive search strategy, without limitations, in an attempt to improve upon Ohle et al. (2020), who had pre-emptively excluded a large volume of potentially relevant studies, including ongoing trials, non-peer-reviewed journals, and unpublished data [13].
Our systematic review and meta-analysis thus provides the first quantitative summary estimate of the sensitivity and specificity achieved by educated and trained emergency physicians in performing and interpreting the STANDING algorithm to rule out central pathology in patients with AVS. Our results show that, with education and training, emergency physicians can independently identify central vertigo in AVS using the STANDING algorithm. This was achieved with high diagnostic accuracy, with a pooled sensitivity of 0.96 (0.87–1.00) and a specificity of 0.88 (0.85–0.91). This finding is significant support for the contention that emergency physicians have been prematurely dismissed as effective examiners.
Currently, data for evaluating the use of the original HINTS examination by emergency physicians are not available. However, it is pleasing to note that there is ongoing (Gerlier et al.) and proposed (Rayner et al.) research on the diagnostic accuracy of the HINTS examination by appropriately educated and trained emergency physicians [21, 29]. Our search found no studies where emergency physicians used the HINTS-PLUS, TiTrATE paradigm, or ATTEST approach.
We wish to highlight that, although “HINTS appears to be accurate independent of the presence or absence of other neurological signs” [32], the original test cannot be conducted in patients who are no longer symptomatic or who lack spontaneous nystagmus. Furthermore, many neurologists have expressed concern that other neurological signs are undervalued. There is increasing evidence that vestibulospinal signs, in particular, are “probably strong predictors of a central cause”: for example, a systematic review by Tarnutzer et al. (2011) [33]. Edlow and Newman-Toker (2016) noted that the first of five prudent questions to pose when assessing acute dizziness and vertigo concerns the patient’s ability to sit or stand without assistance [32]. Carmona et al. (2015) noted that truncal ataxia was “an easy sign, even for physicians without specific training in clinical neurology” [34]. In essence, the STANDING algorithm is a HINTS examination plus positional nystagmus and an evaluation of standing position and gait, an assessment that can be performed in all patients.
The ATTEST approach (the revised and renamed TiTrATE paradigm) incorporates aspects of the patient history combined with targeted bedside examinations to best inform a differential diagnosis. Although there have been no clinical trials to date, even among physicians of other specialities, it is considered to be “supported by a very strong evidence base in the speciality literature” [7].
Limitations
Although its extensive design is a major strength of our study, it also carries inherent limitations that contribute to uncertainty and bias, particularly methodological diversity. Capturing as many studies as possible meant that some of our data came from subsections of studies with an array of different study types, designs, and objectives. Many studies did not record as much data as we required. For example, we were unable to assess interobserver reliability for the STANDING algorithm because the particular physician who examined each patient was not recorded.
Furthermore, our meta-analysis contained only two studies. We were unable to perform an individual patient data meta-analysis (considered the gold standard) because of a lack of individual patient data.
Other limitations included:
- Three of the five studies were conducted by a common working group of physicians: Vanni et al. (2014), Vanni et al. (2017), and Ceccofiglio et al. (2020) [5, 18, 28]. Limited and repetitive authorship is not ideal and increases the risk of bias, raising concerns that findings may vary significantly in different settings and populations.
- Vanni et al. (2017) mandated active follow-up with a neuro-otologist at one week and three months [28]. This resulted in the withdrawal of 13 of 365 patients.
- Lack of response and/or data decreased the amount of gray literature that could be screened, with the majority of contacts repeatedly failing to respond and many records incomplete or missing. A similar pattern of non-response and missing data also affected the five studies in our review.
- We used Meta-DiSc for the meta-analysis despite its outdated Moses-Littenberg method, because we did not require a hierarchical or bivariate analysis.
Other considerations
(1) Currently, no studies have examined the dose-response curve, or the quality and quantity of the education and training required for emergency physicians. All five studies included in the qualitative synthesis conducted (or planned to conduct) both education and practical training. Gerlier et al. was the only study that conducted neither supervised placements nor assessments [29].
(2) Similarly, no studies have looked at the value of subsequent ongoing education and refresher courses to maintain skills and confidence in emergency physicians. Although the level of experience, confidence, and clinical utility will vary significantly between practice sites and individual physicians, operator proficiency is likely to decrease over time. Research on other examinations, such as a 2019 study by Schwid et al., suggests that even brief educational interventions can boost confidence to perform and supervise point-of-care ultrasound [35].
(3) Health inequity is a major problem in many countries. Health resource maldistribution is of particular concern, with governments over the last decade backing centralization, and either merging or closing services in Australia, the United States, Canada, Germany, and France [36].
While VOG-HINTS and telemedicine may appear very attractive, concerns have been raised, including by Ceccofiglio et al. (2020), about the practicality of widespread VOG-assisted care because of cost considerations [18]. For example, in Australia, only 25% of public hospitals are located in major cities [37]. Many rural and remote emergency departments have no facilities to conduct pathology tests or advanced imaging; they rely on emergency retrieval services to transport critical patients (and specimens). The cost of VOG is not feasible in such settings, especially with the strain on resources compounded by the coronavirus pandemic.
Three further significant logistical concerns with VOG and telemedicine are that: A) training may be required to use the equipment; B) equipment may malfunction, or face other technical issues such as a slow internet connection; and C) there is limited availability of on-call city specialists for emergency telemedicine consults, particularly after-hours.
(4) A pleasing new development is the increasing interest in using smartphones to record eye movements. Parker et al. (2021) conducted a preliminary study that supports the concept of smartphone-based applications as an alternative to traditional VOG [38].
(5) There is a large body of literature concerned with the loss of clinical skills and growing dependence on technology in medicine [39]. Fear of missing a low probability diagnosis and litigation has also resulted in defensive medicine, increasing the rates of unnecessary diagnostic imaging [40]; this places emergency physicians at particularly high risk due to the nature of their work.
A decade-long study by Kanzaria et al. (2015) identified that dizziness-related imaging in emergency settings had increased by 169%, without a corresponding increase in the diagnosis of cerebrovascular disease among these patients [41]. The largest increase, 281%, occurred in patients aged 20 to 44, who are at relatively low risk of such events [41]. A recent case presented by Khachatoorian et al. also warns of the potential dangers of misdiagnosis when a thorough history and physical examination are omitted [42].
Conclusions
The results of our systematic review and meta-analysis are particularly significant for less privileged hospitals that have minimal access to specialists and VOG-assisted technology. We believe that previous studies were unable to demonstrate an acceptable level of diagnostic accuracy because they did not provide prior education and training. Our results show that by addressing the lack of education and training in the STANDING algorithm, frontline emergency physicians can become equipped to make rapid point-of-care decisions, even in rural or remote settings. We note that two ongoing studies are focusing on providing education and training for the HINTS examination in emergency physicians.
Future research should focus on the quality and quantity of the education and training required, as well as on the value of subsequent ongoing education and refresher courses to preserve operator proficiency.
Supporting information
(DOCX)
Acknowledgments
I am sincerely grateful to my supervisor Dr. Mark Paine (neuro-otology and neuro-ophthalmology specialist), Syeda Azim (statistician), and the team of librarians and translators.
Data Availability
The minimal dataset is now available online from https://doi.org/10.25910/tqv0-qz64.
Funding Statement
The author(s) received no specific funding for this work.
References
- 1. Hotson JR, Baloh RW. Acute vestibular syndrome. N Engl J Med. 1998;339(10):680–685. doi: 10.1056/NEJM199809033391007
- 2. Kattah JC, Talkad AV, Wang DZ, Hsieh YH, Newman-Toker DE. HINTS to diagnose stroke in the acute vestibular syndrome: three-step bedside oculomotor examination more sensitive than early MRI diffusion-weighted imaging. Stroke. 2009;40(11):3504–3510. doi: 10.1161/STROKEAHA.109.551234
- 3. Kattah JC. Use of HINTS in the acute vestibular syndrome. An overview. Stroke Vasc Neurol. 2018;3(4):190–196. doi: 10.1136/svn-2018-000160
- 4. Newman-Toker DE, Kerber KA, Hsieh YH, Pula JH, Omron R, Saber Tehrani AS, et al. HINTS outperforms ABCD2 to screen for stroke in acute continuous vertigo and dizziness. Acad Emerg Med. 2013;20(10):986–996. doi: 10.1111/acem.12223
- 5. Vanni S, Pecci R, Casati C, Moroni F, Risso M, Ottaviani M, et al. STANDING, a four-step bedside algorithm for differential diagnosis of acute vertigo in the emergency department. Acta Otorhinolaryngol Ital. 2014;34(6):419–426.
- 6. Newman-Toker DE, Edlow JA. TiTrATE: a novel, evidence-based approach to diagnosing acute dizziness and vertigo. Neurol Clin. 2015;33(3):577–599. doi: 10.1016/j.ncl.2015.04.011
- 7. Edlow JA, Gurley KL, Newman-Toker DE. A new diagnostic approach to the adult patient with acute dizziness. J Emerg Med. 2018;54(4):469–483. doi: 10.1016/j.jemermed.2017.12.024
- 8. Newman-Toker DE, Saber Tehrani AS, Mantokoudis G, Pula JH, Guede CI, Kerber KA, et al. Quantitative video-oculography to help diagnose stroke in acute vertigo and dizziness: toward an ECG for the eyes. Stroke. 2013;44(4):1158–1161. doi: 10.1161/STROKEAHA.111.000033
- 9. Hubert RL, Todd NS, Lehnen N, Jahn K, Boy S, Audebert HJ, et al. Stroke or non-stroke in dizzy patients in rural areas: the necessity for remote examination, televertigo project. Cerebrovasc Dis. 2014;37(1):462.
- 10. Keita M, Padula W, Newman-Toker DE. Video-oculography-guided workups vs. MRI-first for stroke diagnosis in emergency department vertigo and dizziness presentations: a model-based cost-effectiveness analysis. Diagnosis. 2017;4(4):eA:49.
- 11. Mantokoudis G, Saber Tehrani AS, Wozniak A, Eibenberger K, Kattah JC, Guede CI, et al. Impact of artifacts on VOR gain measures by video-oculography in the acute vestibular syndrome. J Vestib Res. 2016;26(4):375–385. doi: 10.3233/VES-160587
- 12. Rau CJ, Terling L, Elkhodair S, Kaski D. Acute vertigo in the emergency department: use of bedside oculomotor examination. Eur J Emerg Med. 2020;27(5):381–383. doi: 10.1097/MEJ.0000000000000674
- 13. Ohle R, Montpellier RA, Marchadier V, Wharton A, McIsaac S, Anderson M, et al. Can emergency physicians accurately rule out a central cause of vertigo using the HINTS examination? A systematic review and meta-analysis. Acad Emerg Med. 2020;27(9):887–896. doi: 10.1111/acem.13960
- 14. Newman-Toker DE. AVERT—acute video-oculography for vertigo in emergency rooms for rapid triage. Grantome: National Institutes of Health; 2014 [cited 2020 Oct]. Available from: https://grantome.com/grant/NIH/U01-DC013778-01A1
- 15. ClinicalTrials.gov. US National Library of Medicine; 2015. Acute video-oculography for vertigo in emergency rooms for rapid triage (AVERT) [cited 2020 Oct]. Available from: https://clinicaltrials.gov/ct2/show/NCT02483429 ID: NCT02483429
- 16. Dumitrascu OM, Torbati S, Tighiouart M, Newman-Toker DE, Song SS. Pitfalls and rewards for implementing ocular motor testing in acute vestibular syndrome: a pilot project. Neurologist. 2017;22(2):44–47. doi: 10.1097/NRL.0000000000000106
- 17. Hunter RB. Accuracy of the HINTS exam for vertigo in the hands of emergency physicians. NEJM J Watch. 2020 Apr 3.
- 18. Ceccofiglio A, Pecci R, Peruzzi G, Rivasi G, Rafanelli M, Vanni S, et al. STANDING update: a retrospective analysis in the emergency department one year after its validation. Emerg Care J. 2020;16(2):94–98. doi: 10.4081/ecj.2020.8848
- 19. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. BMJ. 2009;339:b2700. doi: 10.1136/bmj.b2700
- 20. PROSPERO international prospective register of systematic reviews [Internet]; 2020. A systematic literature review and meta-analysis: can frontline point-of-care emergency physicians, general practitioners and hospital trainees use the bedside HINTS examination to rule out stroke in acute vestibular syndrome? [cited 2020 May]. National Institute for Health Research. Available from: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020187166 ID: CRD42020187166
- 21. Rayner R, Hmu C. ‘HINTS’ at the front door: an acute stroke service quality improvement project. Int J Ther Rehabil. 2019;26(6):5. doi: 10.12968/ijtr.2019.26.6.5
- 22. Polanin JR, Pigott TD, Espelage DL, Grotpeter JK. Best practice guidelines for abstract screening large-evidence systematic reviews and meta-analyses. Res Synth Methods. 2019;10(3):330–342. doi: 10.1002/jrsm.1354
- 23. Edlow JA, Newman-Toker DE. Medical and nonstroke neurologic causes of acute, continuous vestibular symptoms. Neurol Clin. 2015;33(3):699–716, xi. doi: 10.1016/j.ncl.2015.04.002
- 24. Vanni S, Nazerian P, Casati C, Moroni F, Risso M, Ottaviani M, et al. Can emergency physicians accurately and reliably assess acute vertigo in the emergency department? Emerg Med Australas. 2015;27(2):126–131. doi: 10.1111/1742-6723.12372
- 25. Reitsma JB, Rutjes AWS, Whiting P, Vlassov VV, Leeflang MMG, et al. Chapter 9: assessing methodological quality. In: Deeks JJ, Bossuyt PM, Gatsonis C, editors. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy version 1.0. The Cochrane Collaboration; 2009. Available from: https://methods.cochrane.org/sites/methods.cochrane.org.sdt/files/public/uploads/ch09_Oct09.pdf
- 26. McGuinness LA, Higgins JPT. Risk-of-bias visualization (robvis): an R package and Shiny web app for visualizing risk-of-bias assessments. Res Synth Methods. 2021;12(1):55–61. doi: 10.1002/jrsm.1411
- 27. Higgins JPT, Thompson SG, Spiegelhalter DJ. A re-evaluation of random-effects meta-analysis. J R Stat Soc Ser A Stat Soc. 2009;172(1):137–159. doi: 10.1111/j.1467-985X.2008.00552.x
- 28. Vanni S, Pecci R, Edlow JA, Nazerian P, Santimone R, Pepe G, et al. Differential diagnosis of vertigo in the emergency department: a prospective validation study of the STANDING algorithm. Front Neurol. 2017;8:590. doi: 10.3389/fneur.2017.00590
- 29. ClinicalTrials.gov. US National Library of Medicine; 2019. Eye-ECG approach to emergencies: diagnostic performance of the HINTS test [cited 2021 Jan]. Available from: https://clinicaltrials.gov/ct2/show/NCT04118361 ID: NCT04118361
- 30. Nakatsuka M, Molloy EE. The HINTS examination and STANDING algorithm in acute vestibular syndrome involving frontline point-of-care emergency physicians [dataset]. 2022 Feb 3 [cited 2022 Feb 3]. The Sydney eScholarship Repository, The University of Sydney. Available from: https://doi.org/10.25910/tqv0-qz64
- 31. Kerber KA, Meurer WJ, Brown DL, Burke JF, Hofer TP, Tsodikov A, et al. Stroke risk stratification in acute dizziness presentations: a prospective imaging-based study. Neurology. 2015;85(21):1869–1878. doi: 10.1212/WNL.0000000000002141
- 32. Edlow JA, Newman-Toker DE. Using the physical examination to diagnose patients with acute dizziness and vertigo. J Emerg Med. 2016;50(4):617–628. doi: 10.1016/j.jemermed.2015.10.040
- 33. Tarnutzer AA, Berkowitz AL, Robinson KA, Hsieh YH, Newman-Toker DE. Does my dizzy patient have a stroke? A systematic review of bedside diagnosis in acute vestibular syndrome. CMAJ. 2011;183(9):E571–E592. doi: 10.1503/cmaj.100174
- 34. Carmona S, Martínez C, Zalazar G, Moro M, Batuecas-Caletrio A, Luis L, et al. The diagnostic accuracy of truncal ataxia and HINTS as cardinal signs for acute vestibular syndrome. Front Neurol. 2016;7:125. doi: 10.3389/fneur.2016.00125
- 35. Schwid M, Harris O, Landry A, Eyre A, Henwood P, Kimberly H. Use of a refresher course increases confidence in point-of-care ultrasound skills in emergency medicine faculty. Cureus. 2019;11(8):e5413. doi: 10.7759/cureus.5413
- 36. Vaughan L, Edwards N. The problems of smaller, rural and remote hospitals: separating facts from fiction. Future Healthc J. 2020;7(1):38–45. doi: 10.7861/fhj.2019-0066
- 37. Australian Institute of Health and Welfare. Hospital resources 2017–18: Australian hospital statistics [Internet]. Canberra: Australian Institute of Health and Welfare; 2019 [cited 2020 Jul 7]. Available from: https://www.aihw.gov.au/reports/hospitals/hospital-resources-2017-18-ahs/contents/hospitals-and-average-available-beds
- 38. Parker TM, Farrell N, Otero-Millan J, Kheradmand A, McClenney A, Newman-Toker DE. Proof of concept for an “eyePhone” app to measure video head impulses. Digit Biomark. 2021;5:1–8. doi: 10.1159/000511287
- 39. Datta A. Clinical skill: the ebbing art of medicine. Malays J Med Sci. 2021;28(1):105–108. doi: 10.21315/mjms2021.28.1.13
- 40. Kene M, Ballard D, Vinson D, Rauchwerger A, Iskin H, Kim A. Emergency physician attitudes, preferences, and risk tolerance for stroke as a potential cause of dizziness symptoms. West J Emerg Med. 2015;16(5):768–776. doi: 10.5811/westjem.2015.7.26158
- 41. Kanzaria HK, Hoffman JR, Probst MA, Caloyeras JP, Berry SH, Brook RH. Emergency physician perceptions of medically unnecessary advanced diagnostic imaging. Acad Emerg Med. 2015;22(4):390–398. doi: 10.1111/acem.12625
- 42. Khachatoorian Y, Uberoi A, Shihadeh S. The lost art of physical examination. Abstract 806. Hospital Medicine 2020, Virtual Competition. J Hosp Med. Available from: https://shmabstracts.org/abstract/the-lost-art-of-physical-examination/ [cited 2022 Jan 17].



