PLOS ONE. 2019 Jul 25;14(7):e0219569. doi: 10.1371/journal.pone.0219569

Validity of screening instruments for the detection of dementia and mild cognitive impairment in hospital inpatients: A systematic review of diagnostic accuracy studies

Aljoscha Benjamin Hwang 1,2,*, Stefan Boes 2,#, Thomas Nyffeler 1,#, Guido Schuepfer 3,#
Editor: Terence J Quinn
PMCID: PMC6657852  PMID: 31344048

Abstract

Introduction

As the population ages, Alzheimer's disease and other subtypes of dementia are becoming increasingly prevalent. However, in recent years, diagnosis has often been delayed or not made at all. Thus, improving the rate of diagnosis has become an integral part of national dementia strategies. Although screening for dementia remains controversial, the case is strong for screening for dementia and other forms of cognitive impairment in hospital inpatients. For this reason, the objective of this systematic review was to provide clinicians who wish to implement screening with an up-to-date choice of cognitive tests with the most extensive evidence base for use in elective hospital inpatients.

Methods

For this systematic review, PubMed, PsycINFO and Cochrane Library were searched by using a multi-concept search strategy. The databases were accessed on April 10, 2019. All cross-sectional studies that utilized brief, multi-domain cognitive tests as index test and a reference standard diagnosis of dementia or mild cognitive impairment as comparator were included. Only studies conducted in the hospital setting, sampling from unselected, elective inpatients older than 64 were considered.

Results

Six studies met the inclusion criteria, with a total of 2112 participants. Diagnostic accuracy data for the Six-Item Cognitive Impairment Test, Cognitive Performance Scale, Clock-Drawing Test, Mini-Mental Status Examination, and Time & Change test were extracted and descriptively analyzed. Clinical and methodological heterogeneity between the studies precluded performing a meta-analysis.

Discussion

This review found only a small number of instruments and was not able to recommend a single best instrument for use in a hospital setting. Although it was not possible to estimate the pooled operating characteristics, the included description of instrument characteristics, the descriptive analysis of performance measures, and the critical evaluation of the reporting studies may contribute to clinicians' choice of the screening instrument that best fits their purpose.

Introduction

Background

Dementia is a progressive syndrome of global cognitive impairment. It encompasses a group of neurodegenerative disorders that are characterized by a progressive and irreversible decline of brain functions, with symptoms such as memory loss, disorientation, and the inability to perform activities of daily living independently [1]. Possible epiphenomena include neuropsychiatric symptoms and challenging behaviors of varying type and severity [2]. The most common dementia types include vascular dementia (VD), dementia with Lewy bodies (LBD), and Alzheimer's disease (AD), which is the neuropathological diagnosis in around 40% of patients with a clinically diagnosed dementia disorder [3–6]. The process of AD pathology can be described as a continuum with a long asymptomatic preclinical stage; an early symptomatic clinical stage, which encompasses mild cognitive impairment (MCI), or prodromal AD; and a dementia stage, with dementia further divided into mild, moderate, and severe [7–9].

In 2015, more than 45 million people worldwide were estimated to be living with dementia. This number will almost double every 20 years, reaching 75 million in 2030 [10]. In Switzerland, around 145,000 people were affected in 2017, with a prevalence of 9% in individuals aged over 65 years, increasing to approximately 30% in adults aged 85 years and older [11]. Given that the prevalence of dementia rises steeply after the age of 65, the number of people living with dementia is expected to increase significantly due to the growing elderly population in Switzerland [12]. For the time being, Alzheimer's disease and related forms of dementia are incurable [13], and the burden on Swiss society is expected to grow substantially in the future. A national study, commissioned by the Swiss Alzheimer's Society, reported estimated annual costs of dementia at 6.3 billion CHF for 2007 and 6.96 billion CHF for 2009 [14, 15]. These findings are consistent with contemporary international studies, which predict rising global costs at similar rates until 2030 and beyond [10, 16, 17]. In response, Switzerland and many other countries have recognized dementia as a public health priority [18–20] and developed national dementia strategies [21].

Despite the absence of a cure for dementia, numerous strategies emphasize earlier diagnosis and intervention [22–25], in accordance with the Alzheimer Cooperative Valuation in Europe (ALCOVE), which recommends that diagnosis should generally occur earlier than is currently common practice [26]. When speaking about earlier diagnosis, the conventional understanding distinguishes between early diagnosis and timely diagnosis. Whereas the term "early diagnosis" refers to the identification of people in the asymptomatic phase as a result of population or targeted screening, the term "timely diagnosis" refers to diagnosis occurring at the time when patients and their family first notice changes in cognitive performance and seek medical examination [27]. In recent years, diagnosis has often been delayed or not made at all [22, 27, 28]. Multiyear delays from first symptom occurrence to presentation, and inactivity by health professionals in offering help, have been attributed mostly to patients or families lacking the knowledge to recognize the symptoms as part of a medical condition, the mutual false belief that nothing can be done, and the stigma of dementia preventing open discussions [27–30]. According to the World Alzheimer Report, only between a third and a half of people with dementia ever receive a formal diagnosis, which is usually necessary for insurers to pay for medical services [22, 27].

In view of this well-documented and widely recognized problem of inadequate recognition of dementia, national and international advisory and policy-making groups have evaluated the possibility of earlier diagnosis facilitated by screening for dementia or mild cognitive impairment [31–37]. However, the U.S. Preventive Services Task Force concluded in its 2014 statement that evidence was insufficient to recommend routine screening for cognitive impairment in community-dwelling adults in the general primary care population who are older than 65 years [38]. Consistently, none of the remaining organizations recommended routine screening of patients without symptoms of cognitive impairment, but rather a diagnostic workup when memory problems or dementia were suspected [36, 39].

This drive toward earlier diagnosis and intervention has been accompanied by a debate about the value of arriving at a diagnosis of dementia earlier in the disease process [27, 40–42]. Several studies reported evidence that supports a possible beneficial effect of early and accurate diagnosis [27, 28, 40]. Early diagnosis potentially offers the opportunity for early interventions that slow down or lessen the disease process [43–48], implementation of coordinated care plans while the patient is still competent to participate [49], better management of symptoms [50, 51], and postponement of institutionalization [47]. On the other hand, it should be acknowledged that diagnostic processes are costly and can be accompanied by major psychological and psychosocial effects [27, 52–54]. Another concern is misdiagnosis, which can result in unnecessary or incorrect treatment [55].

Since then, new research findings regarding benefits and harms, the approval of new pharmaceutical agents for treatment, and growing media attention have converged to challenge this previous thinking about screening for cognitive impairment [56, 57]. As a result, changes in health care policies and priorities, such as the introduction of an opportunistic "dementia case-finding scheme" in the United Kingdom [58, 59], the Alzheimer's Foundation of America's National Memory Screening Program [60], and the implementation of cognitive assessments in the Medicare Annual Wellness Visit in the United States [61] have occurred.

Rationale

Although most policy statements acknowledge that physicians should be sensitive to evidence of cognitive impairment and should act on their suspicion, recommendations for operationalizing the detection of possible dementia are scarce [35, 62]. Usually, frontline recognition and assessment of people with possible dementia, regardless of the setting, requires a test of cognitive function, third-party anamnesis, or both [38, 62]. At the moment, neuropsychological tests, usually developed and validated in primary care and memory clinics, are regarded as the most implementable instruments for screening [35, 36, 63, 64]. They are usually paper-and-pencil-based, easy to administer, and take between 10 and 45 minutes to complete. Country-specific guidelines and/or systematic literature reviews on which instruments to favor have already been published for the primary care setting [65–77]. With respect to the hospital setting, however, where dementia and MCI are much more prevalent [78–80], comparable guidelines concentrate on minorities or selected patient groups, such as geriatric, stroke, or emergency patients [81–84]. Variations in demographic features, health condition, disease prevalence, and severity, as well as differences in test conditions (e.g., timing, interventions between index test and reference standard), entail separate external validation prior to general application. In response to this gap in information, two systematic reviews have recently been conducted to establish adequate tools for dementia screening, considering the particularities of secondary care. In 2010, Appels and colleagues [85] reported validation studies sampling from selected hospital outpatients with a focus on mild dementia and rather extensive screening instruments (10 to 45 min administration time). In comparison, in 2013, Jackson and colleagues [86] performed a review and meta-analysis of validated dementia screening instruments in unselected general hospital inpatients. Unselected, elective inpatients, who account for some 40% of all hospitalizations, have not been evaluated so far [87–89].

Clinical role of index test

In the hospital setting, knowing that a patient has or might have dementia or MCI is essential because of the multiple immediate implications for care. Hospital medical staff may administer brief cognitive screening tests before or on the day of admission and, depending on the test results, trigger additional investigations to confirm whether a diagnosis is present, provide appropriate care during the hospital stay (e.g., choice of anesthesia, involvement of the primary caregiver, medication management), and realize adequate discharge management [90–93]. This may then help avoid new medical events known to be more likely among patients with cognitive impairment and promote earlier diagnosis [94, 95].

Objective

Many screening instruments are recommended for application in the primary care setting, but far fewer for screening older hospital inpatients. The aim of this review is to provide clinicians who wish to implement screening for dementia or MCI with an up-to-date choice of practical and accurate instruments that have been well validated for use in unselected, elective hospital inpatients.

Methods

Eligibility criteria

Articles were limited to the English and German languages. Abstracts fulfilling the following criteria were included:

Clinical setting: Only studies conducted in a hospital setting (general or university hospital) involving elective inpatients over 64 years of age as the main study group, or as a clearly defined subgroup, were included. The aim of the review was to identify screening instruments and to establish their diagnostic accuracy in unselected samples within the hospital setting. For this reason, studies including participants who were selected on the basis of a specific disease or medical field (e.g., Parkinson's disease or orthopedic patients) were excluded. In addition, wards providing services exclusively for patients with diseases related to dementia (psychiatric and neurology) were excluded. In the case of mixed settings, studies were excluded if no separate data were presented for the elective inpatient subgroup.

Target condition: Mild cognitive impairment (MCI), dementia, and any common dementia subtype, including Alzheimer's disease (AD), vascular dementia (VD), Lewy body dementia (LBD), and frontotemporal dementia (FTD).

Index tests: Screening during pre-operative examination or hospitalization in the more stable, elective inpatients might be less affected by time as a limiting factor. In comparison, especially in emergency departments or the primary care setting, where time is scarce, administration time is a key determinant of whether screening instruments are used in clinical practice or not [96, 97]. For this reason, screening instruments with a short or medium administration time (up to 15 minutes in non-impaired patients) were considered. Furthermore, instruments had to cover more than one cognitive domain to be eligible for inclusion, because the coverage of multiple cognitive domains increases the instrument's sensitivity to different types of dementia [96, 98]. Ideally, the instrument would cover at least the domains of "learning and memory" and "executive function", which are considered central to a diagnosis of dementia, most particularly to its most prevalent forms, Alzheimer's disease (AD) and vascular dementia (VD) [3, 99, 100].

Although the incorporation of informant reports into assessments for dementia is known to increase the overall accuracy of detecting cases and non-cases [101–103], tests that are wholly informant-rated were not considered, because, in the clinical setting, the presence of an informant is not the norm and proxy rating raises confidentiality concerns. Self-administered tests, measures that assess daily living activities and functional status, and telephonic or computerized self-tests were also excluded.

The full-texts were reviewed against the following additional inclusion criteria:

Types of studies: Cross-sectional studies, in which inpatients received the index test and reference standard diagnostic assessment during a hospital stay, preferably on the day of admission and before the commencement of treatment, were included. Studies were excluded for inadequate reporting (e.g., studies that did not report sensitivity or specificity), non-availability of the full-text article, or if subjects with prevalent target disease at baseline were included. Case-control studies and longitudinal studies (or related, nested case-control studies) were excluded due to the high risk of spectrum bias [96]. Also, studies sampling fewer than 100 participants were excluded due to the potential for bias in selection and lack of representativeness.

Reference standards: Studies were included that used a reference standard for MCI, all-cause dementia, or any standardized definition of subtypes. For MCI, the reference standard diagnosis had to be made according to published criteria, that is, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V) [104], the National Institute on Aging-Alzheimer's Association criteria [105], or the Petersen [106], Gauthier [107], or Winblad [108] criteria. For all-cause dementia, any version of the DSM [104, 109] and the International Classification of Diseases (ICD) [110] criteria were included. For dementia subtypes (e.g., AD or probable AD, vascular dementia, or Lewy body dementia), common diagnostic criteria were included [111–114]. In order not to further restrict the number of eligible studies, diagnostic accuracy studies that compared the index test with a diagnosis based on an expert consensus, or with results of the Mini-Mental State Examination (MMSE) test, were also included. Studies that applied a neuropathological diagnosis, which requires post-mortem verification, were excluded [115].

Information sources

An electronic literature search was conducted in the following databases: PubMed from 1972, Cochrane Library from 1992, and PsycINFO from 1967. All databases were accessed on September 6, 2018. To ensure published literature saturation, relevant systematic literature reviews, the reference sections of selected articles, and the 'similar articles' feature in PubMed were assessed for further relevant studies. An update search was performed on April 10, 2019.

Search

To search the databases, a multi-concept search strategy was applied. The primary strategy used the following concepts: (a) Disease: Dementia and cognition disorders (general terms, both free text and MeSH, exploded); (b) Outcome: Validation and sensitivity and specificity values (both free text and MeSH, exploded); (c) Intervention: Diagnostic tests, mass screening, etc. (both free text and MeSH, exploded), and (d) Setting: Aged in-patients (both free text and MeSH, exploded).

The secondary strategy involved a review of dementia practice guidelines [35] to identify recommended screening instruments. Irrespective of the setting targeted by those practice guidelines, recommended instruments were used as key search terms to run an additional electronic search if they met the aforementioned criteria for a screening instrument. The original search strategies were developed for the PubMed database and slightly adapted to run on Cochrane Library and PsycINFO. The search strategy was peer-reviewed (by MA) using the PRESS 2015 Guideline Evidence-Based Checklist [116]. Disagreements were discussed and decided by consensus. The search strategy for PubMed is documented in the Supporting Information (S1 Appendix). Search strategies for PsycINFO and Cochrane Library are available from the corresponding author upon request.

Study selection

The titles and abstracts (where needed) were independently screened by the author (ABH) and one trained assessor (MG). Full-texts were independently reviewed by two assessors (ABH and MG). Any disagreements were discussed and decided by consensus. For all articles whose full-text was screened, additional information from authors was sought to resolve questions about eligibility, and reasons for exclusion were recorded (maximum three email contact attempts; if data was not available, the article was excluded). All articles selected were included only after reaching a consensus among all the authors. The study selection process was detailed in a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram.

Data extraction and management

From the selected articles, the following data was extracted using an extraction sheet, which was pilot-tested on three randomly selected articles: Country; type of hospital; patient group; target condition; sample size; age, and mean age; gender ratio; index test and applied cut-off; reference standard; point in time of screening; other assessments; assessment for delirium; prevalence, and sensitivity and specificity. The complete data extraction form is available on reasonable request from the corresponding author. Data extraction was done by one author (ABH) and one reviewer (MG). Any disagreements were decided by consensus. In case of uncertainties, study authors were contacted by e-mail (maximum of three email attempts; if data was not available, the article was tagged with a no-data-badge).

Risk of bias and applicability

One author (ABH) and one reviewer (MG) independently assessed, discussed, and reached a consensus on the methodological quality of each included study, using the recommended quality assessment tool for Diagnostic Accuracy Studies (QUADAS-2) [117, 118] and the Standards for the Reporting of Diagnostic Accuracy studies checklist (STARD 2015) [119]. Brief definitions describing the operational application of both instruments are detailed in the Supporting Information (S2 and S3 Appendices).

Statistical analysis and synthesis of results

Statistical analysis was performed according to the Cochrane guidelines for diagnostic test accuracy reviews [118]. For all included studies, diagnostic accuracy data was presented in two-by-two tables and used to calculate sensitivity and specificity values as well as measures of statistical uncertainty (95% confidence intervals). Data from each study was presented graphically by plotting estimates of sensitivities and specificities on a coupled forest plot. For studies that reported more than one threshold, only sensitivity and specificity data at the most common threshold were included in the two-by-two table. Investigation of heterogeneity was done through visual examination of the forest plot.
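To illustrate the calculation described above, the following minimal Python sketch (not the authors' code; the function names and the 2x2 counts are hypothetical) derives sensitivity and specificity from a two-by-two table and attaches 95% confidence intervals using the Wilson score method with continuity correction, the interval type reported with the results below.

    # Minimal sketch (not the authors' code): sensitivity, specificity, and
    # Wilson 95% confidence intervals with continuity correction from a 2x2 table.
    import math

    def wilson_ci(successes, n, z=1.96):
        """Wilson score interval with continuity correction for a proportion."""
        if n == 0:
            return (0.0, 1.0)
        p = successes / n
        denom = 2 * (n + z ** 2)
        lower = (2 * n * p + z ** 2 - 1
                 - z * math.sqrt(z ** 2 - 2 - 1 / n + 4 * p * (n * (1 - p) + 1))) / denom
        upper = (2 * n * p + z ** 2 + 1
                 + z * math.sqrt(z ** 2 + 2 - 1 / n + 4 * p * (n * (1 - p) - 1))) / denom
        return (max(0.0, lower), min(1.0, upper))

    def accuracy_from_2x2(tp, fp, fn, tn):
        """Sensitivity and specificity (with 95% CIs) from a two-by-two table."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return {
            "sensitivity": (round(sensitivity, 2), wilson_ci(tp, tp + fn)),
            "specificity": (round(specificity, 2), wilson_ci(tn, tn + fp)),
        }

    # Hypothetical counts for illustration only (not data from an included study)
    print(accuracy_from_2x2(tp=40, fp=20, fn=15, tn=180))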

Registration and protocol

The pre-defined review protocol was registered at the PROSPERO international prospective register of systematic reviews (https://www.crd.york.ac.uk/prospero/, registration number CRD42019133093). The protocol for this review and its primary search strategy are accessible on https://www.protocols.io/ and as supporting information; see S1 and S4 Appendices.

Results

Study selection and characteristics

The initial literature search revealed 1524 citations. Thirty-three of them originated from the reference sections of selected systematic reviews. In the end, 1518 articles were excluded, and six studies investigating the validity of five different instruments were included in the reviewing process. The results of the search are summarized in the flow diagram (Fig 1. Study flow diagram). The inter-rater agreement between the author (ABH) and the peer-reviewer (MG) was moderate, with a Cohen's kappa of 0.48.

Fig 1. Study flow diagram.


Of the six studies, two were conducted in Australia and one each took place in the Netherlands, Switzerland, the UK, and the US. All studies sampled from consecutive inpatients being admitted to university hospitals [120–122], general hospitals [123], or a mix of both [124, 125]. While five studies included inpatients from medical and surgical wards, one study included inpatients from the general internal medicine ward only [122]. Merely two studies reported admission or discharge diagnoses [120, 122]. Also, reporting of patient characteristics (ethnicity, marital status, living situation, and educational attainment) was inconsistent. Target diagnoses included all-cause dementia (four studies) and cognitive impairment (two studies), as defined by the third or fourth DSM version [121, 123, 125], an expert diagnosis following interview and cognitive assessment [126], or MMSE test results as criterion standard [122, 124]. Delirium as a cause of cognitive impairment was ruled out in four studies [121, 123, 125, 126]. Studies ranged in size from 103 to 776 participants (total n = 2112), including patients older than 69 years, with mean ages between 78 and 80 years. On average, studies included slightly more women (58%) than men (42%). The point prevalence of the target condition ranged between 14% and 33%. Further study details, that is, index test, cut-off value, point in time of screening, and other performed assessments, can be found in the Characteristics of Included Studies (Table 1).

Table 1. Characteristics of included studies.

Study | Setting | Target condition | Sample size | Age, mean age (SD) | Sex, female % | Education | Index test (cut-off) | Reference standard | Moment of screening | Other assessments | Delirium assessment | Prevalence % ***
Death et al. (1993) | UK, 2 sites, General Hospitals, medical & surgical wards | Dementia | 117 | >70, 79 (SD = 6*) | 59 | No data | CDT (clock class 1 & 2) | DSM-III | Within 48 hrs. of admission | MMSE, psychological and physical examination | Yes | 27
Inouye et al. (1998) | US, 1 site, University Hospital, medical & surgical wards | Dementia | 776 | >69, 78 (SD = 6.1) | 55 | Mean 11.3 years | T&C (≥1 error) | Panel decision based on interview, assessment & medical record data | During hospitalization | Story Recall Test, Visual Analog Scale for Confusion, self-reported ADLs, Standard Near-Vision Test, Modified Blessed Dementia Rating Scale and MMSE | Yes | 14
Nair et al. (2007) | Australia, 1 site, University Hospital, medical & surgical wards | Dementia | 103 | >69, 80 (SD = 6.9) | 64 | 20% no education; 34% primary education; 38% incomplete high school education; 9% completed high school; 2% tertiary education | T&C (≥1 error); MMSE (23/24) | DSM-IV | 72 hrs. after admission | Telling Time Task, Making Change Task and MMSE | Yes | 33**
Travers et al. (2013) | Australia, 4 sites, University & General Hospitals, medical & surgical wards | Dementia | 462 | >69, 80 (SD = 6.5) | 57 | 82% had at least achieved secondary level education | CPS (≥2); MMSE (23/24) | DSM-IV | Within 48 hrs. of admission (or 72 hrs. after surgery) | ADL, IADL, interRAI Acute Care, MMSE, CAM, 16-item IQCODE | Yes | 18
Büla et al. (2009) | Switzerland, 1 site, University Hospital, medical wards only | Cognitive impairment | 401 | >74, 82 (SD = 5) | 61 | 46% had less than high school | CPS (≥2) | MMSE | Within 48 hrs. of admission | ADL, Geriatric Depression Scale, Charlson Comorbidity Index | No | 32
Tuijl et al. (2012) | Netherlands, 2 sites, University & General Hospital, medical & surgical wards | Cognitive impairment | 253 | >69, 80 (SD = 6.7) | 56 | 41% had less than 11 years of education; 34% had more than 12 years | 6CIT (≥11) | MMSE | Within the first 4 days of the stay/during preoperative screening | — | No | 28

(A) Abbreviations: Prev.: Prevalence; SD: Standard Deviation; CDT: Clock Drawing Test; DSM-III/IV: Diagnostic and Statistical Manual of Mental Disorders III/IV; MMSE: Mini Mental Status Examination; T&C: Time & Change Test; ADLs: Activities of Daily Living; CPS: Cognitive Performance Scale; 6CIT: 6 Item Cognitive Impairment Test; IADL: Instrumental Activities of Daily Living; CAM: Confusion Assessment Method; IQCODE: Informant Questionnaire on Cognitive Decline in the Elderly

(B) *SD has been approximated

(C) ** Includes DSM IV diagnosis of dementia or delirium

(D) *** according to reference standard

(E) Funding sources: Death et al.–No data; Inouye et al.: This work was supported in part by grants from the National Institute on Aging, from the Commonwealth Fund and from the Retirement Research Foundation; Nair et al.: No data; Büla et al.: This work was supported by a grant from the Public Health Service, Canton de Vaud, Switzerland; Tuijl et al.: No outside sources of funds; Travers et al.: This research was funded by a National Health and Medical Research Council (NHMRC) Project Grant (ID: 511125).

Risk of bias and applicability

To assess study quality and risk of bias, all included studies were reviewed using the QUADAS-2 methodology (Fig 2. Risk of bias and applicability concerns graph). Of all included studies, only the evaluation of the Cognitive Performance Scale (CPS) by Travers and colleagues [125] was rated at low risk in all categories. In the patient selection domain, two studies were rated at high risk of bias due to inappropriate exclusion of privately insured and comatose patients [122] and of patients who were not able to sustain their attention sufficiently [124]. Two studies were considered as having an unclear risk because it was not stated whether a consecutive or random sample was enrolled [121]. In the index test domain, one study was rated at high risk of bias because the index test results were interpreted with knowledge of the results of the reference standard [121]. In the reference standard domain, four studies were rated at high risk, and two studies were considered to be at unclear risk of bias. The studies rated at high risk interpreted the reference standard results with knowledge of the results of the index test [121, 125] or used a reference standard that is not likely to correctly classify the target condition [122, 124]. The studies considered unclear provided only vague information about whether the reference standard rater was blinded to the index test results [123, 126]. In the flow and timing domain, three studies were rated at high risk of bias because not all patients were included in the analysis [121] or not all patients received the same reference standard [123]. Two studies were rated as having an unclear risk of bias because they did not provide enough information about whether any interventions were performed between the administration of the index test and the reference standard [124, 126].

Fig 2. Risk of bias and applicability concerns graph.


(A) Abbreviations: CDT: Clock Drawing Test; T&C: Time & Change Test; MMSE: Mini-Mental Status Examination; CPS: Cognitive Performance Scale; 6CIT: Six-Item Cognitive Impairment test.

Finally, regarding the assessment of applicability concerns, for the majority of studies there was no concern that the included patients, the conduct and interpretation of the index test, or the reference standard did not match the review question. However, for four studies there were high applicability concerns, because the index tests took place more than 48 hours after hospital admission and there was insufficient information on whether, prior to index testing, any measures with potentially negative effects on patients' cognitive performance had been performed [121, 124, 126].

Reporting quality was assessed using the STARD guideline. Limitations in reporting were found in all included articles. Reporting items of particular concern were: description of the sample size calculation (Item 18: no paper reported its pre-specified sample size), flow of participants (Item 19: no paper visualized the patient flow using a flow diagram), distribution of alternative diagnoses in those without the target condition (Item 21b: no paper established and documented diagnoses of subjects without the target condition), and reporting of the registration number and name of the registry (Item 28: only one study reported on this item). Items 15 and 16, describing how indeterminate index test or reference standard results and missing data were handled, were also affected by irregular reporting (only two studies reported on these items). Further details are illustrated in Table 2 (STARD 2015 Checklist).

Table 2. STARD 2015 checklist.

Item | Section | STARD 2015 Checklist Criteria | J. Death (1993) | S.K. Inouye (1998) | B.R. Nair (2007) | C.J. Büla (2009) | J.P. Tuijl (2012) | C. Travers (2013)
1 Title or Abstract Identification as a study of diagnostic accuracy using at least one measure of accuracy (such as sensitivity, specificity, predictive values or AUC) 2 2 2 2 2 2
2 Abstract Structured summary of study design, methods, results and conclusions (for specific guidance, see STARD for Abstracts) -1 2 2 1 2 2
3 Introduction Scientific and clinical background, including the intended use and clinical role of the index test 2 2 2 2 2 2
4 Introduction Study objectives and hypotheses 2 2 2 2 2 2
5 Methods Whether data collection was planned before the index test and reference standard were performed (prospective study) or after (retrospective study) 2 2 2 2 2 2
6 Methods Eligibility criteria 2 2 2 2 2 2
7 Methods On what basis potentially eligible participants were identified (such as symptoms, results from previous tests, inclusion in registry) 2 2 2 2 2 2
8 Methods Where and when potentially eligible participants were identified (setting, location and dates) 1 2 1 1 2 2
9 Methods Whether participants formed a consecutive, random or convenience series 2 2 0 2 2 2
10a Methods Index test, in sufficient detail to allow replication 2 2 2 2 2 2
10b Methods Reference standard, in sufficient detail to allow replication 2 2 2 2 2 2
11 Methods Rationale for choosing the reference standard (if alternatives exist) 2 1 2 1 2 2
12a Methods Definition of and rationale for test positivity cut-offs or result categories of the index test, distinguishing pre-specified from exploratory 2 2 2 2 2 2
12b Methods Definition of and rationale for test positivity cut-offs or result categories of the reference standard, distinguishing pre-specified from exploratory 2 2 2 2 2 2
13a Methods Whether clinical information and reference standard results were available to the performers or readers of the index test 2 2 2 2 2 2
13b Methods Whether clinical information and index test results were available to the assessors of the reference standard 0 0 2 2 2 2
14 Methods Methods for estimating or comparing measures of diagnostic accuracy 2 2 2 2 2 2
15 Methods How indeterminate index test or reference standard results were handled 2 -1 -1 2 -1 2
16 Methods How missing data on the index test and reference standard were handled 0 2 -1 -1 -1 2
17 Methods Any analyses of variability in diagnostic accuracy, distinguishing prespecified from exploratory -1 2 -1 2 2 1
18 Methods Intended sample size and how it was determined -1 -1 -1 -1 -1 -1
19 Results Flow of participants, using a diagram -1 -1 -1 -1 -1 -1
20 Results Baseline demographic and clinical characteristics of participants 1 2 1 2 2 2
21a Results Distribution of severity of disease in those with the target condition 2 2 1 2 -1 2
21b Results Distribution of alternative diagnoses in those without the target condition -1 -1 -1 -1 -1 -1
22 Results Time interval and any clinical interventions between index test and reference standard 2 0 0 0 1 2
23 Results Cross tabulation of the index test results (or their distribution) by the results of the reference standard 2 2 2 -1 2 2
24 Results Estimates of diagnostic accuracy and their precision (such as 95% CIs) 1 2 2 2 2 2
25 Results Any adverse events from performing the index test or the reference standard -1 2 2 2 2 -1
26 Discussion Study limitations, including sources of potential bias, statistical uncertainty and generalizability 1 2 2 2 2 2
27 Discussion Implications for practice, including the intended use and clinical role of the index test 1 2 2 2 2 2
28 Other Information Registration number and name of registry -1 -1 -1 -1 -1 2
29 Other Information Where the full study protocol can be accessed 2 2 2 2 2 2
30 Other Information Sources of funding and other support; role of funders -1 2 -1 2 2 2
Sum of all reporting items 35 48 37 45 46 55

(A) Legend: 2 = fully reported, 1 = partially reported, 0 = unclear, -1 = not reported/missing

Findings

For the final review, six studies on five unique screening instruments were selected [121–126]. The instruments studied were the Clock-Drawing Test (CDT), the Cognitive Performance Scale (CPS), the Mini-Mental Status Examination (MMSE), the Time & Change (T&C) test, and the Six-Item Cognitive Impairment Test (6-CIT). The MMSE and the T&C were each administered in two studies [121, 122, 125]. Hereinafter, all five instruments are briefly described, followed by a portrayal of the diagnostic accuracy data in the setting under evaluation.

Instruments

Clock Drawing Test (CDT). The CDT is a commonly used, brief neuropsychological test, sensitive to cognitive changes and functional skills [127]. Originally, the CDT was developed as an instrument for attentional and visual disorders [128, 129]. Due to its valuable characteristics (i.e., free of charge, quick, easy to administer, relatively high robustness), the CDT has gained popularity among practitioners and researchers as a screening instrument for Alzheimer's dementia, either by itself or as part of a test battery [130–137]. Because of its simplicity and brevity, the CDT is well accepted by older and very old adults [138]. Although the CDT covers several cognitive domains and can thus provide some more information on the actual nature of cognitive impairment, it does not differentiate among Alzheimer's disease (AD), dementia with Lewy bodies (DLB), and cognitively impaired Parkinson's disease [139, 140]. In clinical practice, there are basically three approaches to administering the CDT. The most common administration instructions ask the patient to draw a clock face with all its numbers and set the time to 10 past 11 [137]. Variations can include a pre-drawn clock face, a different time setting, or a toy clock from which the patient needs to read the time [132, 141, 142]. In addition to the differences in administration, there are also various scoring methods [137, 143]. Commonly put into practice is the classification of drawn clocks into distinct classes. Death and colleagues distinguish four classes: normal clocks (4), clocks with minor spacing abnormalities (3), clocks with major spacing abnormalities (2), and bizarre clocks (1). Clock classes 1 and 2 indicate cognitive impairment, and classes 3 and 4 no cognitive impairment [123]. In the literature, there is no consensus about which scoring method is the most adequate, mainly because comparative studies have been questioned on account of the methodological diversity of the studies they include [144].
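As a concrete illustration of the Death and colleagues scoring scheme described above, the following sketch (illustrative only; not taken from the original study materials) maps the four clock classes onto the binary screening decision.

    # Illustrative mapping of the Death et al. clock classes described above:
    # classes 1 (bizarre) and 2 (major spacing abnormalities) screen positive,
    # classes 3 (minor spacing abnormalities) and 4 (normal) screen negative.
    CDT_CLASSES = {
        4: "normal clock",
        3: "minor spacing abnormalities",
        2: "major spacing abnormalities",
        1: "bizarre clock",
    }

    def cdt_screen_positive(clock_class: int) -> bool:
        """Return True if the drawn clock suggests cognitive impairment."""
        if clock_class not in CDT_CLASSES:
            raise ValueError("clock class must be between 1 and 4")
        return clock_class <= 2

    print(cdt_screen_positive(3))  # False: only minor spacing abnormalities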

Cognitive Performance Scale (CPS). The Cognitive Performance Scale was developed in 1994 as a standardized, comprehensive assessment instrument for cognitive function in nursing-home residents [145]. It is based on a subset of five items of the Minimum Data Set (MDS), which were combined to create a single, functionally meaningful, seven-category ranked scale [145]. The CPS is free of charge, takes less than 3 minutes to administer, and covers several cognitive subdomains (i.e., short- and long-term memory, orientation, and executive function) [145]. For scoring, all items are combined by a branching logic with five decision nodes: daily decision-making ability, short-term memory, procedural memory, ability to make oneself understood, and ability to feed oneself. Using this branching logic, patients can be classified into seven ranked categories, ranging from Intact (0) to Very Severe Impairment (6) [145]. According to Morris and colleagues, a rank of 2 or higher indicates the presence of cognitive impairment.

Due to limitations, that is, low sensitivity to early impairment, overestimation in dependent patients with comorbidities and depressive symptoms, and underestimation in older patients, Morris and colleagues revised the CPS in 2016 [146]. The revised Cognitive Performance Scale 2 includes a new, most-independent category and a series of dichotomous severity options, providing a stepped hierarchical report of cognitive performance decline. The number of levels of cognitive impairment was expanded from seven to nine, thus enabling repeated assessments to detect changes, in particular at early levels of cognitive decline.

At present, both CPS versions can only be scored following administration of the complete interRAI AC, an instrument used to obtain detailed information about a patient's physical and cognitive status and psychosocial functioning (including the Minimum Data Set).

Mini-Mental Status Examination (MMSE). The Mini-Mental Status Examination is a brief measure of cognitive functioning and its change over time, developed more than 30 years ago [136]. Although originally distributed free of charge, the MMSE has recently been subject to copyright restrictions [147]. The MMSE takes around five to 10 minutes to administer and is available in multiple languages. Its use as a cognitive test is widespread among researchers and specialists [148–150], though it is not very popular in primary care because its administration time is considered too long [151]. The MMSE was developed from items selected from different neuropsychological batteries. Although it covers five cognitive subdomains, Orientation, Registration, Attention and Calculation, Recall, and Language [136], the MMSE is not an adequate instrument to identify early stages of dementia or to distinguish different subtypes of dementia [152]. It does not assess executive functions, and there are only a few episodic and semantic memory or visuospatial tasks [153]. There are 11 items, with a maximum score of 30. For persons with at least eight years of education, the presence of suspected cognitive impairment or dementia is indicated by a score below the cut-off value of 23/24 [136], with lower scores indicating increasing cognitive impairment [154]. Since 1975, numerous other cut-offs have been calculated from receiver operating characteristic (ROC) curve analyses of specific populations, together with adjustments for sociocultural variables (such as age, ethnicity, and education), which have been found to affect MMSE performance [155–157].

Six-Item Cognitive Impairment Test (6-CIT). The 6-CIT, originally referred to as the Six-Item Orientation-Memory-Concentration Test, was developed in 1983 by Katzman and colleagues [158] by shortening Blessed and colleagues' Mental Status Test [159]. It was designed as a screening test for dementia and is freely available. Because of its practicality, high acceptability, and decent psychometric properties [160], the 6-CIT has been used in research and in a broad range of clinical settings [161–165]. For use in primary care and hospital settings, the 6-CIT has been recommended as a cognitive screening tool by the Alzheimer's Society and the National Collaborating Centre for Mental Health (UK) [166, 167]. The 6-CIT takes less than 10 minutes to administer and involves three tests of temporal orientation, a short-term memory test, and two tests of attention [160]. It is scored out of 28; scores greater than 10 indicate cognitive impairment [168]. Because of its verbal method of test administration, the 6-CIT can also be used in visually impaired patients [76]. The performance of the 6-CIT is influenced by age, education, and ethnicity [76, 160, 168] and thus needs adjustment when administered in diverse settings.

Time and Change Test (T&C). The T&C test, originally developed in 1998 by Inouye and colleagues, is a simple, standardized, performance-based test for the detection of dementia [126]. It takes less than five minutes to administer, is highly acceptable to patients, and may offer particular advantages in clinical and research settings where frequent examination of cognitive status is required [126, 169]. The T&C test incorporates supplemental cognitive domains such as calculation, conceptualization, and visuospatial ability [126]. It consists of a telling-time task and a making-change task. In the telling-time task, patients must respond to a clock face set at 11:10. Patients are allowed two tries within a 60-second period. If the patient fails to respond correctly, the task is terminated and recorded as an error. In the making-change task, patients are asked to give one dollar in change from a group of coins with smaller denominations. Patients are allowed two tries within 120 seconds. Incorrect responses on either or both tasks indicate dementia. According to Inouye and colleagues, the T&C test is only minimally affected by education.
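To make the decision rule concrete, the sketch below encodes the two-task logic described above (a minimal sketch; the task outcomes are supplied as inputs rather than administered, and the function name is hypothetical).

    # Minimal sketch of the T&C decision rule described above: an error on the
    # telling-time task (clock face set at 11:10, two tries within 60 seconds) or
    # on the making-change task (one dollar from smaller coins, two tries within
    # 120 seconds), or on both, yields a positive screen for dementia.
    def time_and_change_positive(telling_time_correct: bool,
                                 making_change_correct: bool) -> bool:
        """Return True (positive screen) if either task was failed."""
        return not (telling_time_correct and making_change_correct)

    # Example: telling-time task passed, making-change task failed
    print(time_and_change_positive(telling_time_correct=True,
                                   making_change_correct=False))  # True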

Diagnostic accuracy

The diagnostic accuracy data for the inpatient setting were extracted for each study and are summarized, together with sensitivity, specificity, and statistical uncertainty intervals, in the forest plot presented in Fig 3 (Summary of diagnostic accuracy data). Positive predictive values (PPV) and negative predictive values (NPV) are also summarized in the designated figure.

Fig 3. Summary of diagnostic accuracy data.


(A) Abbreviations: CPS: Cognitive Performance Scale; MMSE: Mini-Mental Status Examination; 6-CIT: Six-Item Cognitive Impairment Test; T&C: Time & Change test; CDT: Clock-Drawing Test (B) All 95% CIs were calculated using the Wilson formula with continuity correction.
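For reference, the predictive values summarized in Fig 3 follow from sensitivity (Se), specificity (Sp), and the prevalence (P) of the target condition via Bayes' theorem (notation introduced here for clarity):

\[ \mathrm{PPV} = \frac{\mathrm{Se}\cdot P}{\mathrm{Se}\cdot P + (1-\mathrm{Sp})\,(1-P)}, \qquad \mathrm{NPV} = \frac{\mathrm{Sp}\cdot (1-P)}{(1-\mathrm{Se})\cdot P + \mathrm{Sp}\cdot (1-P)} \]

For example, inserting the values reported by Inouye and colleagues (Se = 0.86, Sp = 0.71, prevalence 14%) yields an NPV of approximately 0.97, matching the value cited in the text below.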

The study by Travers and colleagues applied the CPS and the MMSE. For the CPS, at a threshold of ≥2, a sensitivity of 0.68 and a specificity of 0.89 were reported [125]. The authors concluded from their findings that the CPS could substitute for other widely used instruments when screening for dementia in older hospital inpatients. However, at present, the CPS can only be scored following administration of the complete interRAI Acute Care assessment. For the accuracy of the MMSE, which was evaluated as a comparator to the CPS, Travers and colleagues reported a sensitivity of 0.75 and a specificity of 0.73 at the common cut-off value of <24 [125]. The diagnosis according to DSM-IV was established with access to the MMSE results; therefore, overestimation is a legitimate concern.

For the accuracy of the 6-CIT, Tuijl and colleagues reported the largest number of correct positive predictions and the lowest number of false-positive predictions at a cut-off value of ≥11. Sensitivity and specificity were 0.71 and 0.98, respectively [124]. Based on its equivalent performance but greater practicality compared with the MMSE, the 6-CIT appears preferable to the MMSE for screening for cognitive impairment in older patients in a general hospital setting. However, the QUADAS-2 assessment revealed significant methodological limitations that may have led to overestimation of the accuracy of the 6-CIT.

The second study evaluating the diagnostic accuracy of the CPS, using the same common cut-off value of ≥2, reported a sensitivity of 0.56 and a specificity of 0.93 [122]. Compared with the rather positive conclusion of Travers and colleagues, the conclusions of Büla and colleagues remained more critical toward an implementation of the CPS in the hospital setting, mainly because their analyses showed that the CPS tends to underestimate cognitive impairment in older patients and to overestimate it in dependent patients with depressive symptoms and comorbidities. Both are likely to be found in the hospital setting. Methodological limitations arose from the domains of patient selection, index test, and reference standard. Due to inappropriate exclusions (privately insured patients and surgical patients), spectrum bias may have led to overestimation; thus, the accuracy data should be interpreted with caution. Last, visual assessment of the forest plot showed high levels of heterogeneity, with little overlap in the sensitivity confidence intervals of the two CPS studies [122, 125].

For the T&C, which was evaluated in two studies, Inouye and colleagues reported 0.86 and 0.71 for sensitivity and specificity, respectively [126]. A positive test for either of the two components was classified as a positive result for dementia. According to the authors, the T&C was well tolerated and acceptable to nearly all participants across educational levels and diverse cultural backgrounds. Considering its relatively high sensitivity and negative predictive value (0.97), the T&C is advocated as a brief screener in high-risk settings for sequential use in combination with further clinical assessments to evaluate possible positive results. The methodological assessment showed no risk of bias or applicability concerns except for the index test, which was not administered early during hospitalization.

The second study aiming to validate the T&C as a screening tool reported unusually divergent diagnostic accuracy values. For the same cut-off, Nair and colleagues reported, in their Australian study, a sensitivity of 0.44 and a specificity of 0.91 [121]. As evidenced by the high participation rate, the T&C test appeared, once again, to be readily acceptable but failed as a sensitive screening tool. As a possible explanation, the authors argued that their country-specific adaptation of the making-change task, namely, the use of Australian coins, may have altered its complexity and affected its sensitivity. The relatively small sample size (N = 103) accounted for the low accuracy and large confidence intervals, especially with regard to sensitivity. Methodological limitations emerged from three of four domains. Only the patient selection domain was not rated at high risk of bias; it was considered unclear because it was not stated whether a consecutive or random sample was enrolled. Not surprisingly, the forest plot presented diverging confidence intervals without any overlap. For the second instrument evaluated, the MMSE, Nair and colleagues reported a sensitivity of 0.88 and a specificity of 0.94 [121]. Based on its superior performance and usefulness, the MMSE appears preferable to the T&C test for screening for dementia in older general-hospital inpatients. However, in addition to the above-mentioned methodological limitations, another potential problem with this study was that the DSM-IV diagnoses and the MMSE scores were obtained by the same interviewer. This possible source of confounding may have led to an overestimation of the performance of the MMSE relative to the T&C test. In comparison with the MMSE accuracy data presented by Travers and colleagues, the visual assessment of the forest plot showed, at least for the sensitivity confidence intervals, a good degree of overlap at a cut-off value of <24.

Last, for the CDT, Death and colleagues presented a sensitivity of 0.77 and a specificity of 0.87 [123]. Because of its characteristics, the test is proposed as a simple and rapid tool for use by admitting junior staff to highlight possible dementia and to alert to the necessity for further testing. The QUADAS-2 assessment revealed methodological limitations regarding the flow and timing domain that may have led to overestimation or underestimation of the accuracy of the CDT. In particular, because patients were reviewed against the DSM-III diagnostic criteria only if a discrepancy in test results occurred between the CDT and the MMSE, accuracy should be interpreted with caution. Furthermore, the reference standard domain was considered to be at unclear risk because it was not stated whether the reference standard results were interpreted with or without knowledge of the results of the index test. Finally, the comparative visual assessment revealed rather large confidence intervals, which may originate from the relatively small study sample (N = 117).

Because of the insufficient number of included studies, no meta-analysis of diagnostic test accuracy, investigation of heterogeneity, or sensitivity analyses were conducted.

Discussion

Summary of main results

The aim of this review was to search for and identify adequate instruments for screening for dementia and MCI in unselected, elective hospital inpatients, with a restriction to validation studies of high quality to minimize possible biases due to methodological shortcomings and differences in reporting. Accordingly, this review applied a well-constructed search strategy and included quality assessments. The overall number of studies included in this review is small. Only six studies evaluating five unique screening instruments were found. Four instruments, the Cognitive Performance Scale (CPS), the Mini-Mental Status Examination (MMSE), the Time & Change (T&C) test, and the Clock-Drawing Test (CDT), were investigated as screening tools for detecting dementia; the CPS and the Six-Item Cognitive Impairment Test were investigated as screening tools for unspecified cognitive impairment. Overall, despite a restrictive combination of inclusion criteria, a considerable number of the included studies were rated as having a high risk of bias or applicability concerns, in particular in the flow and timing and index test domains. In addition, in six cases, the risk of bias was rated as unclear. This originated directly from limitations in reporting, with older articles seemingly adhering less to recent guidelines. Consequently, the scarcity of information, methodological limitations, and heterogeneity of study characteristics did not allow formal meta-analyses of study results or further analysis.

Strengths and weaknesses of the review

The strengths of this review include the use of a multi-concept search strategy to identify a wide spectrum of potential articles, which reduces the risk of publication bias. The primary search concept used terms from four domains and was complemented by a more rigorous second concept, which used instrument names from the Dementia Practice Guidelines as key search terms. While the latter concept ensured coverage of widely used instruments, primarily from the primary care setting, the former, more sensitive search approach may have identified studies that would otherwise have been overlooked. To reassure quality and comprehensiveness, the search strategy was peer-reviewed using the PRESS 2015 Guideline Evidence-Based Checklist.

Importantly, this review also included a detailed quality assessment that provided crucial information for the interpretation of the reported studies. For quality assessment, the recommended assessment tools for Diagnostic Accuracy Studies (QUADAS-2) and the Standards for the Reporting of Diagnostic Accuracy studies (STARD 2015) checklist were applied. This review itself reports according to the PRISMA Statement for Preferred Reporting Items for a Systematic Review and Meta-Analysis of Diagnostic Test Accuracy Studies (PRISMA-DTA statement 2018).

Finally, in contrast to recent reviews, this systematic review excluded case-control studies, which are prone to overestimating diagnostic accuracy by including phenotypic extremes; that is, two extreme populations are compared, rather than typical healthy and diseased populations. In addition, the focus on rather naturalistic, cross-sectional validation studies provided an applicable choice of instruments for systematic screening during routine hospital care.

This review has several limitations. Formal meta-analyses and additional analyses were precluded by the small number of reported studies. Initially, for the meta-analysis of sensitivity and specificity, it was planned to use the bivariate random-effects model approach (if studies used the same index test at a common threshold) or the hierarchical summary ROC (HSROC) method (if multiple thresholds were reported) [170, 171]. For the investigation of heterogeneity, in addition to the visual examination of the forest plot, meta-regression was planned by fitting HSROC models with pre-specified covariates. Therefore, the ability to draw conclusions from the reported studies regarding the diagnostic accuracy of the included screening instruments was limited.

Furthermore, the inclusion of the MMSE as a criterion standard could be criticized. The choice to accept any screening instrument as a criterion standard can be justified for two reasons: (1) In particular in cross-sectional studies embedded in daily clinical routine, the confirmation of the index test with a more adequate reference standard, i.e., clinical assessment or neuropsychological testing with explicit diagnostic criteria, with or without expert consensus, is usually logistically constrained and thus often deliberately avoided. (2) The diagnostic criteria for MCI used in this review are relatively recent. Therefore, in order to increase the number of potentially eligible studies, this review also included diagnostic accuracy studies comparing their index test with the MMSE as a criterion standard. The choice of the MMSE is justified by its widespread use in research and popularity in the clinical setting. However, even though the MMSE is widely used, it has imperfect specificity and sensitivity and a very limited ability to differentiate between MCI patients and healthy controls [154].

An additional limitation originated from the low methodological quality of some included studies. Instrument accuracy was potentially overestimated (due to selection bias, time lag bias, information bias, and study result elimination bias); thus, results should be interpreted with caution. The exclusion of informant-rated questionnaires and of web-based and telephonic screening tools can be seen as another shortcoming. Due to this restriction, some promising screening instruments were not evaluated in this review. Finally, the exclusion of non-English-language articles, gray literature, and unpublished studies also had potential for bias.

Applicability of findings to the review question

In 2013, Jackson and colleagues conducted a very similar review and meta-analysis [86]. Their intent was to determine which of the instruments advocated for dementia screening had been validated in older hospital inpatients. In the end, in most of their included studies, the sample population was either mixed with outpatients [172, 173], geriatric [174–176], or admitted through the emergency department [177, 178]. Jackson and colleagues reported the largest evidence base (with more than one report) for the use of the Abbreviated Mental Test Score (AMTS) and stated a clear need for more validation studies to best inform screening for dementia in hospital inpatients. In 2018, Carpenter and colleagues performed a systematic review and meta-analysis of the diagnostic accuracy of brief screening instruments for dementia in geriatric ED patients [82]. The AMT-4, a shorter, four-item version of the AMTS, was found to be the most accurate ED screening instrument to rule in dementia.

The present review found only a small number of validated instruments and was not able to recommend a single best instrument. It found no evidence for the AMTS for screening for MCI or dementia in unselected, elective hospital inpatients. Although the findings of this review do not advocate a specific instrument in terms of best diagnostic accuracy, they do suggest that valuable instruments for dementia screening exist, as the majority of the included studies report satisfactory sensitivity and negative predictive values, both of which need to be maximized so that relatively few true cases are missed.
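
To make the role of these two measures concrete, the following minimal sketch computes the standard accuracy measures from a screening 2 × 2 table; the counts are purely hypothetical and are not taken from any of the included studies.

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table."""
    sensitivity = tp / (tp + fn)   # proportion of true cases detected by the screen
    specificity = tn / (tn + fp)   # proportion of non-cases correctly ruled out
    ppv = tp / (tp + fp)           # probability of disease given a positive screen
    npv = tn / (tn + fn)           # probability of no disease given a negative screen
    return sensitivity, specificity, ppv, npv

# Hypothetical example: 100 screened inpatients, 30 of whom have dementia
# according to the reference standard.
se, sp, ppv, npv = screening_metrics(tp=27, fp=14, fn=3, tn=56)
print(f"Sensitivity {se:.2f}, Specificity {sp:.2f}, PPV {ppv:.2f}, NPV {npv:.2f}")
# High sensitivity and NPV mean that few true cases (here 3 of 30) are missed.
```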

The lack of evidence is surprising: despite the wider public interest in dementia and the recent debates about targeted screening initiatives, this review did not identify a single eligible study published after 2013. Although it was not possible to estimate pooled operating characteristics, the description of instrument characteristics, the descriptive analysis of performance measures, and the critical evaluation of the reporting studies may contribute to clinicians' choice of the screening instrument that best fits their purpose.

Implications for clinical practice

At present, there is insufficient evidence to recommend for or against the use of a specific instrument for screening for dementia or MCI in older hospital inpatients. Although some instruments performed comparatively well and were advocated by the individual study authors, a universal recommendation for routine use based on the limited information currently available would be of questionable quality and little clinical utility. Whatever test is used, evidence of cognitive impairment on a single test must be interpreted in the light of contextual and other information. The main caveat is that simple cognitive tests used in isolation are not reliable enough. In addition, even in the absence of dementia, inpatients may perform poorly for other reasons (e.g., medication, pain, language barriers, and cultural issues) and/or because of competing disorders (e.g., delirium, depression, diabetes) [179]. Delirium is the most common of these; it affects at least one in 10 hospital inpatients [79, 180, 181]. For this reason, sequential use in combination with detailed expert assessment is highly recommended before establishing a diagnosis and following care pathways.
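
As a purely illustrative sketch of why a positive screen alone does not establish a diagnosis, the following example updates a pre-test probability with a screening result via likelihood ratios; the prevalence, sensitivity, and specificity values are assumptions chosen for illustration, not figures from the included studies.

```python
def post_test_probability(pretest_prob, sensitivity, specificity, positive=True):
    """Update a pre-test probability with a screening result using likelihood ratios."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Assumed values: 20% prevalence of cognitive impairment among elective inpatients,
# and a screening test with sensitivity 0.85 and specificity 0.80.
p_pos = post_test_probability(0.20, 0.85, 0.80, positive=True)   # ~0.52
p_neg = post_test_probability(0.20, 0.85, 0.80, positive=False)  # ~0.04
print(f"Post-test probability after a positive screen: {p_pos:.2f}")
print(f"Post-test probability after a negative screen: {p_neg:.2f}")
# A positive screen raises the probability substantially but falls well short of
# certainty, which is why confirmation by detailed expert assessment remains necessary.
```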

If screening is chosen, timing matters. In general, the sooner MCI or dementia is identified during a hospital stay, the sooner appropriate interventions can be tailored to the individual's needs (e.g., choice of anesthesia, involvement of the primary caregiver, medication management) [94]. Given that elective inpatients are generally more stable and not in need of immediate care, clinicians should consider incorporating screening into the overall hospital admission assessment and following up with further evaluations both during and after hospitalization. In many cases, subsequent detailed examinations may only be realistic after discharge. As long as clinicians are accustomed to managing possible confounding factors and are trained in the use of cognitive tests, such tests do have a role in screening for dementia or MCI in hospital inpatients [90]. However, the time needed to assess cognitive function adds to the workload. Although the costs of initial screening can be kept low, the costs of a subsequent diagnostic workup will vary, depending on the specific diagnostic pathway. Finally, it must be noted that, at present, evidence that screening for dementia is effective is lacking [41, 182].

Implications for research

At present, there is a clear need for further validation studies of dementia or MCI screening instruments for older hospital inpatients, rather than for the development of new instruments. Future studies should incorporate methodologically strong designs to minimize the risk of bias and should report in sufficient detail so that the trustworthiness and applicability of the findings can be judged. Conducting a meta-analysis may be a valuable objective for future research, provided that a sufficient number of validation studies is available. In addition, distinct recommendations for clinicians on how to systematically identify patients with possible dementia should be established on the basis of well-validated cognitive tests. Ultimately, this will also require additional evidence regarding the cost-effectiveness of screening for dementia or MCI; a corresponding analysis should weigh the benefits and costs of screening in terms of the value of a timely and correct diagnosis and of the application of adequate medical treatments and care management programs.

Supporting information

S1 Appendix. Search strategy for PubMed.

(PDF)

S2 Appendix. Assessment of methodological quality using QUADAS-2.

(PDF)

S3 Appendix. Standards for the Reporting of Diagnostic Accuracy studies checklist.

(PDF)

S4 Appendix. Study protocol.

(PDF)

S5 Appendix. PRISMA 2009 checklist.

(PDF)

S6 Appendix. Minimal data set.

(XLSX)

Acknowledgments

The authors would like to thank Mattia Gianinazzi (MA), analyst at Biogen, for his peer-reviewing of the search strategy (PRESS 2015 Guideline), independent screening of search results (title, abstract, and full-text), and assessment of methodological quality of each included study (QUADAS-2, STARD 2015). The authors would also like to thank HP Switzerland for the provision of Windows devices to support efficient literature search and screening.

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

The authors received no specific funding for this work.

References

  • 1.Masuhr K, Neumann M. Neurologie. 7th. ed: Thieme; 2013. [Google Scholar]
  • 2.Hywel T. Understanding Behaviour in Dementia that Challenges. Nursing Older People (through 2013). 2011;23(9):8. [DOI] [PubMed] [Google Scholar]
  • 3.Rizzi L, Rosset I, Roriz-Cruz M. Global Epidemiology of Dementia: Alzheimer's and Vascular Types. BioMed Research International. 2014;2014:1–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Grinberg LT, Nitrini R, Suemoto CK, Lucena Ferretti-Rebustini RE, Leite RE, Farfel JM, et al. Prevalence of dementia subtypes in a developing country: a clinicopathological study. Clinics (Sao Paulo, Brazil). 2013;68(8):1140–5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Brunnstrom H, Gustafson L, Passant U, Englund E. Prevalence of dementia subtypes: a 30-year retrospective survey of neuropathological reports. Archives of gerontology and geriatrics. 2009;49(1):146–9. 10.1016/j.archger.2008.06.005 [DOI] [PubMed] [Google Scholar]
  • 6.Goodman R, Lochner K, Thambisetty M, Wingo T, Posner S, Ling S. Prevalence of dementia subtypes in United States Medicare fee-for-service beneficiaries, 2011–2013. Alzheimer's & Dementia: The Journal of the Alzheimer's Association. 2017;13(1):28–37. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Dubois B. The Emergence of a New Conceptual Framework for Alzheimer's Disease. Journal of Alzheimer's disease. 2018;62(3):1059–66. 10.3233/JAD-170536 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Sperling R, Aisen P, Beckett L, Bennett D, Craft S, Fagan A, et al. Toward defining the preclinical stages of Alzheimer's disease: Recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimer's & Dementia. 2011;7(3):280–92. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Aisen P, Cummings J, Jack C, Morris J, Sperling R, Frölich L, et al. On the path to 2025: understanding the Alzheimer’s disease continuum. Alzheimer's research & therapy. 2017;9(1):60. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Prince M, Wimo A, Guerchet M, Ali G, Wu Y, Prina M. World Alzheimer Report 2015: The Global Impact of Dementia—An Analysis of Prevalence, Incidence, Cost and Trends. Alzheimer's Disease International (ADI); 2015. [cited 2018 November 10]. Available from: https://www.alz.co.uk/research/WorldAlzheimerReport2015.pdf. [Google Scholar]
  • 11.Swiss Alzheimer's Society. Menschen mit Demenz in der Schweiz: Zahlen und Prognosen. Swiss Alzheimer's Society; 2018. [cited 2018 August 10]. Available from: https://www.alzheimer-schweiz.ch/fileadmin/dam/Alzheimer_Schweiz/de/Publikationen-Produkte/Zahlen-Fakten/2018-CH-zahlen-fakten.pdf. [Google Scholar]
  • 12.Swiss Federal Statistical Office. Population projections for Switzerland 2015–2045 [Internet]. Swiss Federal Statistical Office; 2016 [cited 2018 September 5]. Available from: https://www.bfs.admin.ch/bfs/en/home/statistics/population/population-projections.assetdetail.39916.html.
  • 13.Winblad B, Amouyel P, Andrieu S, Ballard C, Brayne C, Brodaty H, et al. Defeating Alzheimer's disease and other dementias: a priority for European science and society. The Lancet Neurology. 2016;15(5):455–532. 10.1016/S1474-4422(16)00062-4 [DOI] [PubMed] [Google Scholar]
  • 14.Ecoplan. Kosten der Demenz in der Schweiz. Swiss Alzheimer's Society; 2010. [cited 2018 July 20]. Available from: https://www.ecoplan.ch/download/alz_sb_de.pdf. [Google Scholar]
  • 15.Ecoplan. Kosten der Demenz in der Schweiz: Update. Swiss Alzheimer's Society; 2009. [cited 2018 July 20]. Available from: http://www.alz.ch/index.php/zahlen-zur-demenz.html. [Google Scholar]
  • 16.Brown L, Hansnata E, La H. Economic Cost of Dementia in Australia 2016–2056. Alzheimer’s Australia, University of Canberra; 2017. [cited 2018 September 11]. Available from: https://staff.dementia.org.au/files/NATIONAL/documents/The-economic-cost-of-dementia-in-Australia-2016-to-2056.pdf. [Google Scholar]
  • 17.Wimo A, Jönsson L, Bond J, Prince M, Winblad B. The worldwide economic impact of dementia 2010. Alzheimer's & Dementia. 2013;9(1):1–11. [DOI] [PubMed] [Google Scholar]
  • 18.Department of Health and Social Care and Prime Minister's Office UK. G8 dementia summit declaration [Internet]. Department of Health and Social Care and Prime Minister's Office UK; 2010 [cited 2018 November 9]. Available from: https://www.gov.uk/government/publications/g8-dementia-summit-agreements.
  • 19.Alzheimer Cooperative Valuation in Europe. The European Joint Action on Dementia: Synthesis Report [Internet]. Alzheimer Cooperative Valuation in Europe; 2013 [cited 2018 August 27]. Available from: https://www.scie-socialcareonline.org.uk/the-european-joint-action-on-dementia-synthesis-report-2013/r/a11G000000CTfeIIAT.
  • 20.International Longevity Centre—UK. The European Dementia Research Agenda. International Longevity Centre—UK; 2011 [cited 2018 May 19]. Available from: http://ilcuk.org.uk/files/pdf_pdf_165.pdf.
  • 21.Europe Alzheimer. National Dementia Strategies [Internet]. Alzheimer Europe; 2017. [cited 2018 November 22]. Available from: https://www.alzheimer-europe.org/Policy-in-Practice2/National-Dementia-Strategies. [Google Scholar]
  • 22.Federal Department of Home Affairs. National Dementia Strategy 2014–2019. Federal Office of Public Health and Swiss Conference of the Cantonal Ministers of Public Health; 2018 [cited 2018 October 25]. Available from: www.nationaldementiastrategy.ch.
  • 23.Department of Health and Social Care UK. Living well with dementia: A national dementia strategy [Internet]. Department of Health; 2009. [cited 2018 December 2]. Available from: https://www.gov.uk/government/publications/living-well-with-dementia-a-national-dementia-strategy. [Google Scholar]
  • 24.Scottish Government. Scotland's national dementia strategy 2017–2020. The Scottish Government Edinburgh; 2010. [cited 2018 May 2]. Available from: https://www.alzscot.org/assets/0002/6035/Third_Dementia_Strategy.pdf. [Google Scholar]
  • 25.Lee S. Dementia Strategy Korea. International journal of geriatric psychiatry. 2010;25(9):931–2. 10.1002/gps.2614 [DOI] [PubMed] [Google Scholar]
  • 26.Brooker D, Fontaine JL, Evans S, Bray J, Saad K. Public health guidance to facilitate timely diagnosis of dementia: Alzheimer's Cooperative Valuation in Europe recommendations. International journal of geriatric psychiatry. 2014;29(7):682–93. 10.1002/gps.4066 [DOI] [PubMed] [Google Scholar]
  • 27.Prince M, Bryce R, Ferri C. World Alzheimer Report 2011: The benefits of early diagnosis and intervention. Alzheimer's Disease International; 2011. [cited 2018 August 29]. [Google Scholar]
  • 28.Phillips J, Pond D, Goode S. Timely Diagnosis of Dementia: Can we do better. Alzheimer's Australia; 2011 [cited 2018 October 11]. Available from: https://www.dementia.org.au/files/Timely_Diagnosis_Can_we_do_better.pdf.
  • 29.Koch T, Iliffe S. Rapid appraisal of barriers to the diagnosis and management of patients with dementia in primary care: a systematic review. BMC family practice. 2010;11(1):52. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Knopman D, Donohue JA, Gutterman EM. Patterns of Care in the Early Stages of Alzheimer's Disease: Impediments to Timely Diagnosis. Journal of the American Geriatrics Society. 2000;48(3):300–4. 10.1111/j.1532-5415.2000.tb02650.x [DOI] [PubMed] [Google Scholar]
  • 31.Boustani M, Peterson B, Hanson L, Harris R, Lohr KN. Screening for dementia in primary care: a summary of the evidence for the US Preventive Services Task Force. Annals of internal medicine. 2003;138(11):927–37. 10.7326/0003-4819-138-11-200306030-00015 [DOI] [PubMed] [Google Scholar]
  • 32.Petersen RC, Stevens JC, Ganguli M, Tangalos EG, Cummings J, DeKosky S. Practice parameter: early detection of dementia: mild cognitive impairment (an evidence-based review): report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology. 2001;56(9):1133–42. 10.1212/wnl.56.9.1133 [DOI] [PubMed] [Google Scholar]
  • 33.Eccles M, Clarke J, Livingston M, Freemantle N, Mason J. North of England evidence based guidelines development project: guideline for the primary care management of dementia. BMJ: British Medical Journal. 1998;317(7161):802 10.1136/bmj.317.7161.802 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Patterson C, Grek A, Gauthier S, Bergman H, Cohen C, Feightner J, et al. The recognition, assessment and management of dementing disorders: conclusions from the Canadian Consensus Conference on Dementia. Canadian Journal of Neurological Sciences. 2001;28(S1):S3–S16. [DOI] [PubMed] [Google Scholar]
  • 35.Ngo J, Holroyd-Leduc JM. Systematic review of recent dementia practice guidelines. Age and ageing. 2014;44(1):25–33. 10.1093/ageing/afu143 [DOI] [PubMed] [Google Scholar]
  • 36.Ashford JW, Borson S, O’Hara R, Dash P, Frank L, Robert P, et al. Should older adults be screened for dementia? Alzheimer's & Dementia. 2006;2(2):76–85. [DOI] [PubMed] [Google Scholar]
  • 37.Dementia Study Group of the Italian Neurological Society. Guidelines for the diagnosis of dementia and Alzheimer's disease. Neurological Sciences. 2000;21(4):187–94. [DOI] [PubMed] [Google Scholar]
  • 38.Moyer VA. Screening for cognitive impairment in older adults: US Preventive Services Task Force recommendation statement. Annals of internal medicine. 2014;160(11):791–7. 10.7326/M14-0496 [DOI] [PubMed] [Google Scholar]
  • 39.Ashford JW, Borson S, O’Hara R, Dash P, Frank L, Robert P, et al. Should older adults be screened for dementia? It is important to screen for evidence of dementia!: Elsevier; 2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Dubois B, Padovani A, Scheltens P, Rossi A, Dell’Agnello G. Timely diagnosis for Alzheimer’s disease: a literature review on benefits and challenges. Journal of Alzheimer's disease. 2016;49(3):617–31. 10.3233/JAD-150692 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Robinson L, Tang E, Taylor J-P. Dementia: timely diagnosis and early intervention. Bmj. 2015;350:302–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Yumiko A, Asuna A, Yoko M. The national dementia strategy in Japan. International journal of geriatric psychiatry. 2010;25(9):896–9. 10.1002/gps.2589 [DOI] [PubMed] [Google Scholar]
  • 43.Raina P, Santaguida P, Ismaila A, Patterson C, Cowan D, Levine M, et al. Effectiveness of cholinesterase inhibitors and memantine for treating dementia: evidence review for a clinical practice guideline. Annals of internal medicine. 2008;148(5):379–97. 10.7326/0003-4819-148-5-200803040-00009 [DOI] [PubMed] [Google Scholar]
  • 44.Birks J. Cholinesterase inhibitors for Alzheimer’s disease. The Cochrane database of systematic reviews; 2006. [cited 2018 June 22]. Available from: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD005593/full. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Spijker A, Vernooij‐Dassen M, Vasse E, Adang E, Wollersheim H, Grol R, et al. Effectiveness of nonpharmacological interventions in delaying the institutionalization of patients with dementia: a meta‐analysis. Journal of the American Geriatrics Society. 2008;56(6):1116–28. 10.1111/j.1532-5415.2008.01705.x [DOI] [PubMed] [Google Scholar]
  • 46.Rolinski M, Fox C, Maidment I, McShane R. Cholinesterase inhibitors for dementia with Lewy bodies, Parkinson's disease dementia and cognitive impairment in Parkinson's disease.2012. [cited 2018 July 2]. Available from: 10.1002/14651858.CD006504.pub2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Wilkinson D. A review of the effects of memantine on clinical progression in Alzheimer's disease. International journal of geriatric psychiatry. 2012;27(8):769–76. 10.1002/gps.2788 [DOI] [PubMed] [Google Scholar]
  • 48.McShane R, Areosa A, Minakaran N. Memantine for dementia. The Cochrane database of systematic reviews; 2006. [cited 2018 September 19]; (2). Available from: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD003154.pub5/abstract. [DOI] [PubMed] [Google Scholar]
  • 49.Pimouguet C, Lavaud T, Dartigues J, Helmer C. Dementia case management effectiveness on health care costs and resource utilization: a systematic review of randomized controlled trials. The journal of nutrition, health & aging. 2010;14(8):669–76. [DOI] [PubMed] [Google Scholar]
  • 50.Livingston G, Barber J, Rapaport P, Knapp M, Griffin M, King D, et al. Long-term clinical and cost-effectiveness of psychological intervention for family carers of people with dementia: a single-blind, randomised, controlled trial. The Lancet Psychiatry. 2014;1(7):539–48. 10.1016/S2215-0366(14)00073-X [DOI] [PubMed] [Google Scholar]
  • 51.Clare L, Linden DEJ, Woods RT, Whitaker R, Evans SJ, Parkinson CH, et al. Goal-Oriented Cognitive Rehabilitation for People With Early-Stage Alzheimer Disease: A Single-Blind Randomized Controlled Trial of Clinical Efficacy. The American Journal of Geriatric Psychiatry. 2010;18(10):928–39. 10.1097/JGP.0b013e3181d5792a [DOI] [PubMed] [Google Scholar]
  • 52.Manthorpe J, Samsi K, Campbell S, Abley C, Keady J, Bond J, et al. From forgetfulness to dementia: clinical and commissioning implications of diagnostic experiences. British Journal of General Practice. 2013;63(606):e69–e75. 10.3399/bjgp13X660805 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Bunn F, Goodman C, Sworn K, Rait G, Brayne C, Robinson L, et al. Psychosocial factors that shape patient and carer experiences of dementia diagnosis and treatment: a systematic review of qualitative studies. PLoS medicine. 2012;9(10):e1001331 10.1371/journal.pmed.1001331 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Wimo A, Religa D, Spångberg K, Edlund AK, Winblad B, Eriksdotter M. Costs of diagnosing dementia: results from SveDem, the Swedish Dementia Registry. International journal of geriatric psychiatry. 2013;28(10):1039–44. 10.1002/gps.3925 [DOI] [PubMed] [Google Scholar]
  • 55.Gaugler JE, Ascher-Svanum H, Roth DL, Fafowora T, Siderowf A, Beach TG. Characteristics of patients misdiagnosed with Alzheimer’s disease and their medication use: an analysis of the NACC-UDS database. BMC geriatrics. 2013;13(1):137. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Borson S, Frank L, Bayley PJ, Boustani M, Dean M, Lin P-J, et al. Improving dementia care: the role of screening and detection of cognitive impairment. Alzheimer's & Dementia. 2013;9(2):151–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Calzà L, Beltrami D, Gagliardi G, Ghidoni E, Marcello N, Rossini-Favretti R, et al. Should we screen for cognitive decline and dementia? Maturitas. 2015;82(1):28–35. 10.1016/j.maturitas.2015.05.013 [DOI] [PubMed] [Google Scholar]
  • 58.Kmietowicz Z. Cameron launches challenge to end “national crisis” of poor dementia care. BMJ. 2012;344. [DOI] [PubMed] [Google Scholar]
  • 59.Department of Health—UK. Using the Commissioning for Quality and Innovation (CQUIN) payment framework. Department of Health; 2012. [cited 2018 September 18]. Available from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/215049/dh_133859.pdf. [Google Scholar]
  • 60.Bayley PJ, Kong JY, Mendiondo M, Lazzeroni LC, Borson S, Buschke H, et al. Findings from the national memory screening day program. Journal of the American Geriatrics Society. 2015;63(2):309–14. 10.1111/jgs.13234 [DOI] [PubMed] [Google Scholar]
  • 61.Cordell CB, Borson S, Boustani M, Chodosh J, Reuben D, Verghese J, et al. Alzheimer's Association recommendations for operationalizing the detection of cognitive impairment during the Medicare Annual Wellness Visit in a primary care setting. Alzheimer's & dementia: the journal of the Alzheimer's Association. 2013;9(2):141–50. [DOI] [PubMed] [Google Scholar]
  • 62.Arevalo-Rodriguez I, Pedraza OL, Rodríguez A, Sánchez E, Gich I, Solà I, et al. Alzheimer’s disease dementia guidelines for diagnostic testing: a systematic review. American Journal of Alzheimer's Disease & Other Dementias®. 2013;28(2):111–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Laske C, Sohrabi HR, Frost SM, López-de-Ipiña K, Garrard P, Buscema M, et al. Innovative diagnostic tools for early detection of Alzheimer's disease. Alzheimer's & Dementia. 2015;11(5):561–78. [DOI] [PubMed] [Google Scholar]
  • 64.Lin J, O'Connor E, Rossom R, Perdue L, Burda B, Thompson M, et al. Screening for Cognitive Impairment in Older Adults: An Evidence Update for the U.S. Preventive Services Task Force. Agency for Healthcare Research and Quality (US); 2013. [cited 2018 November 22]. Available from: https://www.ncbi.nlm.nih.gov/books/NBK174643/. [PubMed] [Google Scholar]
  • 65.Brouwers M, Kho M, Browman G, Burgers J, Cluzeau F, Feder GZ. For the AGREE Next Steps Consortium. AGREE II: Advancing guideline development, reporting and evaluation in healthcare. Canadian Medical Association Journal. 2010;182(18):E839–E42. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Alzheimer-Demenz. Konsensus 2012 zur Diagnostik und Therapie von Demenzkranken in der Schweiz. Praxis; 2012 [cited 2018 August 24]. Available from: https://econtent.hogrefe.com/doi/abs/10.1024/1661-8157/a001085?journalCode=prx. [DOI] [PubMed]
  • 67.Chertkow H. Introduction: the third Canadian consensus conference on the diagnosis and treatment of dementia, 2006. Alzheimer's & dementia: the journal of the Alzheimer's Association. 2007;3(4):262–5. [DOI] [PubMed] [Google Scholar]
  • 68.National Collaborating Centre for Mental Health. Dementia: A NICE-SCIE Guideline on Supporting People With Dementia and Their Carers in Health and Social Care. The British Psychological Society & The Royal College of Psychiatrists.; 2007. [cited 2018 October 26]. Available from: https://www.ncbi.nlm.nih.gov/books/NBK55459/. [PubMed] [Google Scholar]
  • 69.Feldman HH, Jacova C, Robillard A, Garcia A, Chow T, Borrie M, et al. Diagnosis and treatment of dementia: 2. Diagnosis. Canadian Medical Association Journal. 2008;178(7):825–36. 10.1503/cmaj.070798 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Gauthier S, Patterson C, Chertkow H, Gordon M, Herrmann N, Rockwood K, et al. 4th Canadian consensus conference on the diagnosis and treatment of dementia. Canadian Journal of Neurological Sciences. 2012;39(S5):S1–S8. [DOI] [PubMed] [Google Scholar]
  • 71.Hogan DB, Bailey P, Black S, Carswell A, Chertkow H, Clarke B, et al. Diagnosis and treatment of dementia: 4. Approach to management of mild to moderate dementia. Canadian Medical Association Journal. 2008;179(8):787–93. 10.1503/cmaj.070803 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Hort J, O'Brien J, Gainotti G, Pirttila T, Popescu B, Rektorova I, et al. EFNS guidelines for the diagnosis and management of Alzheimer’s disease. European Journal of Neurology. 2010;17(10):1236–48. 10.1111/j.1468-1331.2010.03040.x [DOI] [PubMed] [Google Scholar]
  • 73.Steinberg E, Greenfield S, Wolman D, Mancher M, Graham R. Clinical practice guidelines we can trust. National Academies Press; 2011. [cited 2018 July 7]. Available from: https://www.ncbi.nlm.nih.gov/books/NBK209539/. [PubMed] [Google Scholar]
  • 74.Sorbi S, Hort J, Erkinjuntti T, Fladby T, Gainotti G, Gurvit H, et al. EFNS‐ENS Guidelines on the diagnosis and management of disorders associated with dementia. European Journal of Neurology. 2012;19(9):1159–79. 10.1111/j.1468-1331.2012.03784.x [DOI] [PubMed] [Google Scholar]
  • 75.American Geriatrics Society. A Guide to Dementia Diagnosis and Treatment. American Geriatrics Society; 2011. [cited 2018 November 12]. Available from: http://unmfm.pbworks.com/f/American+Geriatric+Society+Dementia+Diagnosis+03-09-11.pdf. [Google Scholar]
  • 76.Yokomizo JE, Simon SS, Bottino CM. Cognitive screening for dementia in primary care: a systematic review. International psychogeriatrics. 2014;26(11):1783–804. 10.1017/S1041610214001082 [DOI] [PubMed] [Google Scholar]
  • 77.Razak MA, Ahmad N, Chan Y, Kasim NM, Yusof M, Ghani MA, et al. Validity of screening tools for dementia and mild cognitive impairment among the elderly in primary health care: a systematic review. Public health. 2019;169:84–92. 10.1016/j.puhe.2019.01.001 [DOI] [PubMed] [Google Scholar]
  • 78.Boustani M, Baker MS, Campbell N, Munger S, Hui SL, Castelluccio P, et al. Impact and recognition of cognitive impairment among hospitalized elders. Journal of hospital medicine: an official publication of the Society of Hospital Medicine. 2010;5(2):69–75. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79.Mukadam N, Sampson EL. A systematic review of the prevalence, associations and outcomes of dementia in older general hospital inpatients. International psychogeriatrics. 2011;23(3):344–55. 10.1017/S1041610210001717 [DOI] [PubMed] [Google Scholar]
  • 80.Sampson EL, Blanchard MR, Jones L, Tookman A, King M. Dementia in the acute hospital: prospective cohort study of prevalence and mortality. The British Journal of Psychiatry. 2009;195(1):61–6. 10.1192/bjp.bp.108.055335 [DOI] [PubMed] [Google Scholar]
  • 81.Carpenter CR, Bassett ER, Fischer GM, Shirshekan J, Galvin JE, Morris JC. Four sensitive screening tools to detect cognitive dysfunction in geriatric emergency department patients: brief Alzheimer's Screen, Short Blessed Test, Ottawa 3DY, and the caregiver-completed AD8. Academic Emergency Medicine. 2011;18(4):374–84. 10.1111/j.1553-2712.2011.01040.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82.Carpenter CR, Banerjee J, Keyes D, Eagles D, Schnitker L, Barbic D, et al. Accuracy of Dementia Screening Instruments in Emergency Medicine: A Diagnostic Meta‐analysis. Academic Emergency Medicine. 2019;26(2):226–45. 10.1111/acem.13573 [DOI] [PubMed] [Google Scholar]
  • 83.de Koning I, van Kooten F, Dippel DW, van Harskamp F, Grobbee DE, Kluft C, et al. The CAMCOG: a useful screening instrument for dementia in stroke patients. Stroke. 1998;29(10):2080–6. [DOI] [PubMed] [Google Scholar]
  • 84.Swain DG, O'Brien AG, Nightingale PG. Cognitive assessment in elderly patients admitted to hospital: the relationship between the shortened version of the Abbreviated Mental Test and the Abbreviated Mental Test and Mini-Mental State Examination. Clinical rehabilitation. 2000;14(6):608–10. 10.1191/0269215500cr368oa [DOI] [PubMed] [Google Scholar]
  • 85.Appels BA, Scherder E. The diagnostic accuracy of dementia-screening instruments with an administration time of 10 to 45 minutes for use in secondary care: a systematic review. American journal of Alzheimer's disease and other dementias. 2010;25(4):301–16. 10.1177/1533317510367485 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Jackson TA, Naqvi SH, Sheehan B. Screening for dementia in general hospital inpatients: a systematic review and meta-analysis of available instruments. Age and ageing. 2013;42(6):689–95. 10.1093/ageing/aft145 [DOI] [PubMed] [Google Scholar]
  • 87.Swiss Federal Statistical Office. Swiss Hospital Medical Statistics Tables 2017 [Internet]. Swiss Federal Statistical Office; 2017 [cited 2018 December 27]. Available from: https://www.bfs.admin.ch/bfs/de/home/statistiken/kataloge-datenbanken/tabellen.assetdetail.6406943.html.
  • 88.Schuur JD, Venkatesh AK. The growing role of emergency departments in hospital admissions. New England Journal of Medicine. 2012;367(5):391–3. 10.1056/NEJMp1204431 [DOI] [PubMed] [Google Scholar]
  • 89.Cowling TE, Soljak MA, Bell D, Majeed A. Emergency hospital admissions via accident and emergency departments in England: time trend, conceptual framework and policy implications. Journal of the Royal Society of Medicine. 2014;107(11):432–8. 10.1177/0141076814542669 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 90.Shenkin SD, Russ TC, Ryan TM, MacLullich AM. Screening for dementia and other causes of cognitive impairment in general hospital in-patients. Age and ageing. 2014;43:166–68. 10.1093/ageing/aft184 [DOI] [PubMed] [Google Scholar]
  • 91.Gray SL, Anderson ML, Dublin S, Hanlon JT, Hubbard R, Walker RL, et al. Cumulative Use of Strong Anticholinergic Medications and Incident Dementia: 746. Pharmacoepidemiology and Drug Safety. 2015;24:426. [Google Scholar]
  • 92.Campbell N, Boustani M, Limbil T, Ott C, Fox C, Maidment I, et al. The cognitive impact of anticholinergics: a clinical review. Clinical interventions in aging. 2009;4:225 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 93.Gieche H. Mit der Nebendiagnose Demenz im Akutspital—Den Spitalaufenthalt optimal vorbereiten. Swiss Alzheimer's Association; 2015. [cited 2018 October 2]. Available from: http://www.alz.ch/tl_files/PDFs/PDF-D-Dienstleistungen/6_Gieche_151111.pdf. [Google Scholar]
  • 94.Travers C, Gray L, Martin-Khan M, Hubbard R. Evidence for the safety and quality issues associated with the care of patients with cognitive impairment in acute care settings: a rapid review. Australian Commission on Safety and Qualiy in Health Care (ACSQHC); 2013. [cited 2018 August 16]. Available from: https://www.safetyandquality.gov.au/publications/evidence-for-the-safety-and-quality-issues-associated-with-the-care-of-patients-with-cognitive-impairment-in-acute-care-settings-a-rapid-review/. [Google Scholar]
  • 95.Maslow M, Mezey M. Adverse health events in hospitalized patients with dementia. The journals of gerontology Series A, Biological sciences and medical sciences. 2003;58(1):76–81. 10.1093/gerona/58.1.m76 [DOI] [PubMed] [Google Scholar]
  • 96.Lorentz WJ, Scanlan JM, Borson S. Brief Screening Tests for Dementia. The Canadian Journal of Psychiatry. 2002;47(8):723–33. 10.1177/070674370204700803 [DOI] [PubMed] [Google Scholar]
  • 97.Yang L, Yan J, Jin X, Jin Y, Yu W, Xu S, et al. Screening for Dementia in Older Adults: Comparison of Mini-Mental State Examination, Mini-Cog, Clock Drawing Test and AD8. PLoS One. 2016;11(12). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 98.Cullen B, O'Neill B, Evans JJ, Coen RF, Lawlor BA. A review of screening tests for cognitive impairment. Journal of Neurology Neurosurgery and Psychiatry. 2007;78(8):790–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 99.O'Brien JT, Thomas A. Vascular dementia. The Lancet. 2015;386(10004):1698–706. [DOI] [PubMed] [Google Scholar]
  • 100.Ott A, Breteler MMB, van Harskamp F, Claus JJ, van der Cammen TJM, Grobbee DE, et al. Prevalence of Alzheimer's disease and vascular dementia: association with education. The Rotterdam study. BMJ. 1995;310(6985):970–3. 10.1136/bmj.310.6985.970 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 101.Galvin JE. Using Informant and Performance Screening Methods to Detect Mild Cognitive Impairment and Dementia. Current Geriatrics Reports. 2018;7(1):19–25. 10.1007/s13670-018-0236-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 102.Mackinnon A, Khalilian A, Jorm AF, Korten AE, Christensen H, Mulligan R. Improving screening accuracy for dementia in a community sample by augmenting cognitive testing with informant report. Journal of Clinical Epidemiology. 2003;56(4):358–66. [DOI] [PubMed] [Google Scholar]
  • 103.Mackinnon A, Mulligan R. Combining cognitive testing and informant report to increase accuracy in screening for dementia. The American journal of psychiatry. 1998;155(11):1529–35. 10.1176/ajp.155.11.1529 [DOI] [PubMed] [Google Scholar]
  • 104.Samuel BG. Diagnostic and Statistical Manual of Mental Disorders, 4th ed. (DSM-IV). American Journal of Psychiatry. 1995;152(8):1228–848. [Google Scholar]
  • 105.Albert MS, DeKosky ST, Dickson D, Dubois B, Feldman HH, Fox NC, et al. The diagnosis of mild cognitive impairment due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimer's & dementia: the journal of the Alzheimer's Association. 2011;7(3):270–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 106.Petersen RC, Negash S. Mild cognitive impairment: an overview. CNS spectrums. 2008;13(1):45–53. [DOI] [PubMed] [Google Scholar]
  • 107.Gauthier S, Reisberg B, Zaudig M, Petersen RC, Ritchie K, Broich K, et al. Mild cognitive impairment. The lancet. 2006;367(9518):1262–70. [DOI] [PubMed] [Google Scholar]
  • 108.Winblad B, Palmer K, Kivipelto M, Jelic V, Fratiglioni L, Wahlund LO, et al. Mild cognitive impairment–beyond controversies, towards a consensus: report of the International Working Group on Mild Cognitive Impairment. Journal of internal medicine. 2004;256(3):240–6. 10.1111/j.1365-2796.2004.01380.x [DOI] [PubMed] [Google Scholar]
  • 109.Samuel BG. Diagnostic and statistical manual of mental disorders (revised 4th ed.). American Journal of Psychiatry; 2000. [cited 2018 August 27]. Available from: https://dsm.psychiatryonline.org/doi/abs/10.1176/appi.books.9780890420249.dsm-iv-tr. [Google Scholar]
  • 110.World Health Organization. International statistical classification of diseases and related health problems. World Health Organization; 2018. [cited 2018 December 9]. Available from: http://www.who.int/classifications/icd/en/. [Google Scholar]
  • 111.Englund B, Brun A, Gustafson L, Passant U, Mann D, Neary D, et al. Clinical and neuropathological criteria for frontotemporal dementia. The Lund and Manchester Groups. Journal of neurology, neurosurgery, and psychiatry. 1994;57(4):416–8. 10.1136/jnnp.57.4.416 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 112.Roman GC, Tatemichi TK, Erkinjuntti T, Cummings JL, Masdeu JC, Garcia JH, et al. Vascular dementia: diagnostic criteria for research studies. Report of the NINDS-AIREN International Workshop. Neurology. 1993;43(2):250–60. 10.1212/wnl.43.2.250 [DOI] [PubMed] [Google Scholar]
  • 113.McKeith IG, Galasko D, Kosaka K, Perry EK, Dickson DW, Hansen LA, et al. Consensus guidelines for the clinical and pathologic diagnosis of dementia with Lewy bodies (DLB): report of the consortium on DLB international workshop. Neurology. 1996;47(5):1113–24. 10.1212/wnl.47.5.1113 [DOI] [PubMed] [Google Scholar]
  • 114.McKhann G, Drachman D, Folstein M, Katzman R, Price D, Stadlan EM. Clinical diagnosis of Alzheimer's disease: report of the NINCDS-ADRDA Work Group under the auspices of Department of Health and Human Services Task Force on Alzheimer's Disease. Neurology. 1984;34(7):939–44. 10.1212/wnl.34.7.939 [DOI] [PubMed] [Google Scholar]
  • 115.Davis DH, Creavin ST, Noel-Storr A, Quinn TJ, Smailagic N, Hyde C, et al. Neuropsychological tests for the diagnosis of Alzheimer's disease dementia and other dementias: a generic protocol for cross-sectional and delayed-verification studies. Cochrane Database of Systematic Review; 2013. [cited 2018 August 14]. Available from: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD010460/full. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 116.McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. Journal of clinical epidemiology. 2016;75:40–6. 10.1016/j.jclinepi.2016.01.021 [DOI] [PubMed] [Google Scholar]
  • 117.Whiting PF, Rutjes AS, Westwood ME, et al. Quadas-2: A revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine. 2011;155(8):529–36. 10.7326/0003-4819-155-8-201110180-00009 [DOI] [PubMed] [Google Scholar]
  • 118.Macaskill P, Gatsonis C, Deeks JJ, Harbord RM, Takwoingi Y. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy: Chapter 10 Analysing and Presenting Results. The Cochrane Collaboration; 2010; 1.0. Available from: http://srdta.cochrane.org/. [Google Scholar]
  • 119.Cohen JF, Korevaar DA, Altman DG, Bruns DE, Gatsonis CA, Hooft L, et al. STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration. BMJ Open. 2016;6(11):1–17. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 120.Inouye SK, Robison JT, Froehlich TE, Richardson ED. The time and change test: a simple screening test for dementia. The journals of gerontology Series A, Biological sciences and medical sciences. 1998;53(4):M281–6. 10.1093/gerona/53a.4.m281 [DOI] [PubMed] [Google Scholar]
  • 121.Nair BR, Browne WL, Chua LE, D’Este C, O'Dea I, Agho K. Validating an Australian version of the Time and Change Test: A screening test for cognitive impairment. Australasian Journal on Ageing. 2007;26(2):87–90. [Google Scholar]
  • 122.Bula CJ, Wietlisbach V. Use of the Cognitive Performance Scale (CPS) to detect cognitive impairment in the acute care setting: concurrent and predictive validity. Brain research bulletin. 2009;80(4–5):173–8. 10.1016/j.brainresbull.2009.05.023 [DOI] [PubMed] [Google Scholar]
  • 123.Death J, Douglas A, Kenny RA. Comparison of clock drawing with Mini Mental State Examination as a screening test in elderly acute hospital admissions. Postgraduate medical journal. 1993;69(815):696–700. 10.1136/pgmj.69.815.696 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 124.Tuijl JP, Scholte EM, de Craen AJ, van der Mast RC. Screening for cognitive impairment in older general hospital patients: comparison of the Six-Item Cognitive Impairment Test with the Mini-Mental State Examination. International journal of geriatric psychiatry. 2012;27(7):755–62. 10.1002/gps.2776 [DOI] [PubMed] [Google Scholar]
  • 125.Travers C, Byrne GJ, Pachana NA, Klein K, Gray L. Validation of the interRAI Cognitive Performance Scale against independent clinical diagnosis and the Mini-Mental State Examination in older hospitalized patients. The journal of nutrition, health & aging. 2013;17(5):435–9. [DOI] [PubMed] [Google Scholar]
  • 126.Inouye SK, Robison JT, Froehlich TE, Richardson ED. The Time and Change Test: A Simple Screening Test for Dementia. The Journals of Gerontology: Series A. 1998;53A(4):M281–M6. [DOI] [PubMed] [Google Scholar]
  • 127.Hubbard EJ, Santini V, Blankevoort CG, Volkers KM, Barrup MS, Byerly L, et al. Clock drawing performance in cognitively normal elderly. Archives of Clinical Neuropsychology. 2008;23(3):295–327. 10.1016/j.acn.2007.12.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 128.Battersby WS, Bender MB, Pollack M, Kahn RL. Unilateral spatial agnosia (inattention) in patients with cerebral lesions. Brain: a journal of neurology. 1956;79(1):68–93. [DOI] [PubMed] [Google Scholar]
  • 129.Critchley M. The parietal lobes. 1 ed: Hafner Pub; 1953. [Google Scholar]
  • 130.Agrell B, Dehlin O. The clock-drawing test. Age and ageing. 1998;41 Suppl 3:iii41–5. [DOI] [PubMed] [Google Scholar]
  • 131.Schramm U, Berger G, Muller R, Kratzsch T, Peters J, Frolich L. Psychometric properties of Clock Drawing Test and MMSE or Short Performance Test (SKT) in dementia screening in a memory clinic population. International journal of geriatric psychiatry. 2002;17(3):254–60. [DOI] [PubMed] [Google Scholar]
  • 132.Shulman KI. Clock-drawing: is it the ideal cognitive screening test? International journal of geriatric psychiatry. 2000;15(6):548–61. [DOI] [PubMed] [Google Scholar]
  • 133.Solomon PR, Hirschoff A, Kelly B, Relin M, Brush M, DeVeaux RD, et al. A 7 minute neurocognitive screening battery highly sensitive to Alzheimer's disease. Archives of neurology. 1998;55(3):349–55. 10.1001/archneur.55.3.349 [DOI] [PubMed] [Google Scholar]
  • 134.Cacho J, Benito-Leon J, Garcia-Garcia R, Fernandez-Calvo B, Vicente-Villardon JL, Mitchell AJ. Does the combination of the MMSE and clock drawing test (mini-clock) improve the detection of mild Alzheimer's disease and mild cognitive impairment? Journal of Alzheimer's disease. 2010;22(3):889–96. 10.3233/JAD-2010-101182 [DOI] [PubMed] [Google Scholar]
  • 135.Shulman KI, Shedletsky R, Silver IL. The challenge of time: Clock‐drawing and cognitive function in the elderly. International journal of geriatric psychiatry. 1986;1(2):135–40. [Google Scholar]
  • 136.Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. Journal of psychiatric research. 1975;12(3):189–98. [DOI] [PubMed] [Google Scholar]
  • 137.Palsetia D, Rao GP, Tiwari SC, Lodha P, De Sousa A. The Clock Drawing Test versus Mini-mental Status Examination as a Screening Tool for Dementia: A Clinical Comparison. Indian journal of psychological medicine. 2018;40(1):1–10. 10.4103/IJPSYM.IJPSYM_244_17 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 138.Mazancova AF, Nikolai T, Stepankova H, Kopecek M, Bezdicek O. The Reliability of Clock Drawing Test Scoring Systems Modeled on the Normative Data in Healthy Aging and Nonamnestic Mild Cognitive Impairment. Assessment. 2017;24(7):945–57. 10.1177/1073191116632586 [DOI] [PubMed] [Google Scholar]
  • 139.Storey JE, Rowland JT, Basic D, Conforti DA. Accuracy of the clock drawing test for detecting dementia in a multicultural sample of elderly Australian patients. International psychogeriatrics. 2002;14(3):259–71. [DOI] [PubMed] [Google Scholar]
  • 140.Cahn-Weiner DA, Williams K, Grace J, Tremont G, Westervelt H, Stern RA. Discrimination of dementia with lewy bodies from Alzheimer disease and Parkinson disease using the clock drawing test. Cognitive and behavioral neurology. 2003;16(2):85–92. [DOI] [PubMed] [Google Scholar]
  • 141.Nyborn JA, Himali JJ, Beiser AS, Devine SA, Du Y, Kaplan E, et al. The Framingham Heart Study clock drawing performance: normative data from the offspring cohort. Experimental aging research. 2013;39(1):80–108. 10.1080/0361073X.2013.741996 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 142.Bodner T, Delazer M, Kemmler G, Gurka P, Marksteiner J, Fleischhacker WW. Clock drawing, clock reading, clock setting, and judgment of clock faces in elderly people with dementia and depression. Journal of the American Geriatrics Society. 2004;52(7):1146–50. 10.1111/j.1532-5415.2004.52313.x [DOI] [PubMed] [Google Scholar]
  • 143.Hazan E, Frankenburg F, Brenkel M, Shulman K. The test of time: a history of clock drawing. International journal of geriatric psychiatry. 2018;33(1):e22–e30. 10.1002/gps.4731 [DOI] [PubMed] [Google Scholar]
  • 144.Aprahamian I, Martinelli J, Neri A. The Clock Drawing Test: A review of its accuracy in screening for dementia. 2009;3(2):74–81. 10.1590/S1980-57642009DN30200002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 145.Morris JN, Fries BE, Mehr DR, Hawes C, Phillips C, Mor V, et al. MDS Cognitive Performance Scale. Journal of Gerontology. 1994;49(4):M174–M82. [DOI] [PubMed] [Google Scholar]
  • 146.Morris JN, Howard EP, Steel K, Perlman C, Fries BE, Garms-Homolová V, et al. Updating the Cognitive Performance Scale. Journal of geriatric psychiatry and neurology. 2016;29(1):47–55. 10.1177/0891988715598231 [DOI] [PubMed] [Google Scholar]
  • 147.de Silva V, Hanwella R. Why are we copyrighting science? Bmj. 2010;341:c4738 10.1136/bmj.c4738 [DOI] [PubMed] [Google Scholar]
  • 148.Davey RJ, Jamieson S. The validity of using the mini mental state examination in NICE dementia guidelines. Journal of neurology, neurosurgery, and psychiatry. 2004;75(2):343–4. [PMC free article] [PubMed] [Google Scholar]
  • 149.Shulman KI, Herrmann N, Brodaty H, Chiu H, Lawlor B, Ritchie K, et al. IPA survey of brief cognitive screening instruments. International psychogeriatrics. 2006;18(2):281–94. 10.1017/S1041610205002693 [DOI] [PubMed] [Google Scholar]
  • 150.Nieuwenhuis-Mark RE. The death knoll for the MMSE: has it outlived its purpose? Journal of geriatric psychiatry and neurology. 2010;23(3):151–7. 10.1177/0891988710363714 [DOI] [PubMed] [Google Scholar]
  • 151.Brodaty H, Howarth GC, Mant A, Kurrle SE. General practice and dementia. A national survey of Australian GPs. The Medical journal of Australia. 1994;160(1):10–4. [PubMed] [Google Scholar]
  • 152.Arevalo-Rodriguez I, Smailagic N, Roque IFM, Ciapponi A, Sanchez-Perez E, Giannakou A, et al. Mini-Mental State Examination (MMSE) for the detection of Alzheimer's disease and other dementias in people with mild cognitive impairment (MCI). The Cochrane database of systematic reviews; 2015. [cited 2018 July 14]. Available from: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD010783.pub2/full. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 153.Velayudhan L, Ryu SH, Raczek M, Philpot M, Lindesay J, Critchfield M, et al. Review of brief cognitive tests for patients with suspected dementia. International psychogeriatrics. 2014;26(8):1247–62. 10.1017/S1041610214000416 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 154.Mitchell AJ. A meta-analysis of the accuracy of the mini-mental state examination in the detection of dementia and mild cognitive impairment. Journal of psychiatric research. 2009;43(4):411–31. 10.1016/j.jpsychires.2008.04.014 [DOI] [PubMed] [Google Scholar]
  • 155.Bravo G, Hebert R. Age- and education-specific reference values for the Mini-Mental and modified Mini-Mental State Examinations derived from a non-demented elderly population. International journal of geriatric psychiatry. 1997;12(10):1008–18. [DOI] [PubMed] [Google Scholar]
  • 156.Crum RM, Anthony JC, Bassett SS, Folstein MF. Population-based norms for the Mini-Mental State Examination by age and educational level. Jama. 1993;269(18):2386–91. [PubMed] [Google Scholar]
  • 157.Grigoletto F, Zappala G, Anderson DW, Lebowitz BD. Norms for the Mini-Mental State Examination in a healthy population. Neurology. 1999;53(2):315–20. 10.1212/wnl.53.2.315 [DOI] [PubMed] [Google Scholar]
  • 158.Katzman R, Brown T, Fuld P, Peck A, Schechter R, Schimmel H. Validation of a short Orientation-Memory-Concentration Test of cognitive impairment. The American journal of psychiatry. 1983;140(6):734–9. 10.1176/ajp.140.6.734 [DOI] [PubMed] [Google Scholar]
  • 159.Blessed G, Tomlinson BE, Roth M. The association between quantitative measures of dementia and of senile change in the cerebral grey matter of elderly subjects. The British journal of psychiatry. 1968;114(512):797–811. 10.1192/bjp.114.512.797 [DOI] [PubMed] [Google Scholar]
  • 160.Milne A, Culverwell A, Guss R, Tuppen J, Whelton R. Screening for dementia in primary care: a review of the use, efficacy and quality of measures. International psychogeriatrics. 2008;20(5):911–26. 10.1017/S1041610208007394 [DOI] [PubMed] [Google Scholar]
  • 161.Etgen T, Sander D, Huntgeburth U, Poppert H, Forstl H, Bickel H. Physical activity and incident cognitive impairment in elderly persons: the INVADE study. Archives of internal medicine. 2010;170(2):186–93. 10.1001/archinternmed.2009.498 [DOI] [PubMed] [Google Scholar]
  • 162.Goring H, Baldwin R, Marriott A, Pratt H, Roberts C. Validation of short screening tests for depression and cognitive impairment in older medically ill inpatients. International journal of geriatric psychiatry. 2004;19(5):465–71. 10.1002/gps.1115 [DOI] [PubMed] [Google Scholar]
  • 163.Brooke P, Bullock R. Validation of a 6 item cognitive impairment test with a view to primary care usage. International journal of geriatric psychiatry. 1999;14(11):936–40. [PubMed] [Google Scholar]
  • 164.Williams MM, Roe CM, Morris JC. Stability of the Clinical Dementia Rating, 1979–2007. Archives of neurology. 2009;66(6):773–7. 10.1001/archneurol.2009.69 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 165.Villareal DT, Grant E, Miller JP, Storandt M, McKeel DW, Morris JC. Clinical outcomes of possible versus probable Alzheimer's disease. Neurology. 2003;61(5):661–7. 10.1212/wnl.61.5.661 [DOI] [PubMed] [Google Scholar]
  • 166.Ballard C., Burns A, Corbett A, Livingston G. Helping You to Assess Cognition. A practical toolkit for clinicians. Alzheimer's Society; 2013. [cited 2019 January 12]. Available from: https://www.wamhinpc.org.uk/sites/default/files/dementia-practical-toolkit-for-clinicians.pdf. [Google Scholar]
  • 167.National Collaborating Centre for Mental Health. A NICE-SCIE Guideline on Supporting People With Dementia and Their Carers in Health and Social Care. The British Psychological Society & The Royal College of Psychiatrists.; 2007. [cited 2019 January 15]. Available from: https://www.ncbi.nlm.nih.gov/pubmed/21834193. [PubMed] [Google Scholar]
  • 168.O'Sullivan D, O'Regan NA, Timmons S. Validity and Reliability of the 6-Item Cognitive Impairment Test for Screening Cognitive Impairment: A Review. Dementia and geriatric cognitive disorders. 2016;42(1–2):42–9. 10.1159/000448241 [DOI] [PubMed] [Google Scholar]
  • 169.Froehlich TE, Robison JT, Inouye SK. Screening for dementia in the outpatient setting: the time and change test. Journal of the American Geriatrics Society. 1998;46(12):1506–11. 10.1111/j.1532-5415.1998.tb01534.x [DOI] [PubMed] [Google Scholar]
  • 170.Reitsma JB, Glas AS, Rutjes AW, Scholten RJ, Bossuyt PM, Zwinderman AH. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. J Clin Epidemiol. 2005;58(10):982–90. 10.1016/j.jclinepi.2005.02.022 [DOI] [PubMed] [Google Scholar]
  • 171.Rutter CM, Gatsonis CA. A hierarchical regression approach to meta-analysis of diagnostic test accuracy evaluations. Statistics in medicine. 2001;20(19):2865–84. [DOI] [PubMed] [Google Scholar]
  • 172.Anthony JC, LeResche L, Niaz U, von Korff MR, Folstein MF. Limits of the 'Mini-Mental State' as a screening test for dementia and delirium among hospital patients. Psychological medicine. 1982;12(2):397–408. [DOI] [PubMed] [Google Scholar]
  • 173.Erkinjuntti T, Sulkava R, Wikstrom J, Autio L. Short Portable Mental Status Questionnaire as a screening test for dementia and delirium among the elderly. Journal of the American Geriatrics Society. 1987;35(5):412–6. 10.1111/j.1532-5415.1987.tb04662.x [DOI] [PubMed] [Google Scholar]
  • 174.O'keeffe E, Mukhtar O, T O'Keeffe S. Orientation to time as a guide to the presence and severity of cognitive impairment in older hospital patients. Journal of Neurology, Neurosurgery & Psychiatry. 2011;82(5):500–4. [DOI] [PubMed] [Google Scholar]
  • 175.Incalzi RA, Cesari M, Pedone C, Carosella L, Carbonin P. Construct validity of the abbreviated mental test in older medical inpatients. Dementia and geriatric cognitive disorders. 2003;15(4):199–206. 10.1159/000068787 [DOI] [PubMed] [Google Scholar]
  • 176.Jitapunkul S, Pillay I, Ebrahim S. The abbreviated mental test: its use and validity. Age and ageing. 1991;20(5):332–6. 10.1093/ageing/20.5.332 [DOI] [PubMed] [Google Scholar]
  • 177.Leung JL, Lee GT, Lam Y, Chan RC, Wu JY. The use of the Digit Span Test in screening for cognitive impairment in acute medical inpatients. International psychogeriatrics. 2011;23(10):1569–74. 10.1017/S1041610211000792 [DOI] [PubMed] [Google Scholar]
  • 178.Harwood DM, Hope T, Jacoby R. Cognitive impairment in medical inpatients. I: Screening for dementia—is history better than mental state? Age and ageing. 1997;26(1):31–5. 10.1093/ageing/26.1.31 [DOI] [PubMed] [Google Scholar]
  • 179.Mathews SB, Arnold SE, Epperson CN. Hospitalization and cognitive decline: Can the nature of the relationship be deciphered? The American journal of geriatric psychiatry. 2014;22(5):465–80. 10.1016/j.jagp.2012.08.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 180.Davis DH, Muniz Terrera G, Keage H, Rahkonen T, Oinas M, Matthews FE, et al. Delirium is a strong risk factor for dementia in the oldest-old: a population-based cohort study. Brain: a journal of neurology. 2012;135(Pt 9):2809–16. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 181.Ryan DJ, O'Regan NA, Caoimh RO, Clare J, O'Connor M, Leonard M, et al. Delirium in an adult acute hospital population: predictors, prevalence and detection. BMJ Open. 2013;3(1). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 182.Brunet MD, McCartney M, Heath I, Tomlinson J, Gordon P, Cosgrove J, et al. There is no evidence base for proposed dementia screening. Bmj. 2012;345:e8588 10.1136/bmj.e8588 [DOI] [PubMed] [Google Scholar]
