The Cochrane Database of Systematic Reviews. 2022 Apr 5;2022(4):CD014696. doi: 10.1002/14651858.CD014696

The Rowland Universal Dementia Assessment Scale (RUDAS) for the detection of dementia in a variety of healthcare settings

Alisha Vara 1, Susan J Yates 1,2, Cristian Andrés González Prieto 3, Claudia L Rivera-Rodriguez 4, Sarah Cullum 1
Editor: Cochrane Dementia and Cognitive Improvement Group
PMCID: PMC8980941

Objectives

This is a protocol for a Cochrane Review (diagnostic). The objectives are as follows:

To determine the diagnostic accuracy of the Rowland Universal Dementia Assessment Scale (RUDAS) for the detection of all‐cause dementia or its subtypes (Alzheimer’s disease, vascular dementia, dementia with Lewy bodies, and frontotemporal dementia) in primary and secondary healthcare settings, and in community‐based epidemiological studies (for example, in a cross‐sectional prevalence survey). We will synthesise studies separately by setting.

Secondary objectives

To examine the sources of heterogeneity of test accuracy in the included studies. Sources of heterogeneity may include differences in settings, reference standards, index test cut‐offs, and different translations of the RUDAS.

To identify the gaps in the evidence base for the diagnostic test accuracy of RUDAS in dementia, and consider the scope for future research in this area.

Background

This review will evaluate the diagnostic accuracy of the Rowland Universal Dementia Assessment Scale (RUDAS) for the detection of dementia in people in community settings, and in primary and secondary healthcare settings. The RUDAS was developed in 2004 in Australia, and designed to overcome the difficulty of detecting dementia in culturally and linguistically diverse (CALD) populations (Storey 2004). Existing bedside cognitive tests, such as the Mini‐Mental State Examination (MMSE (Folstein 1975)), and the Montreal Cognitive Assessment (MoCA (Nasreddine 2005)), are widely used, but pose difficulties when trying to diagnose people with low literacy, or those from non‐English speaking backgrounds (Jones 2001; Tombaugh 1992). The RUDAS was designed to be used with interpreters in multiple languages, and therefore, is a useful generic cognitive assessment tool for multi‐ethnic communities. Early diagnosis and treatment for dementia is important, and this review is an important step in determining the accuracy of a screening tool that is suitable for CALD populations.

The Cochrane Dementia and Cognitive Improvement Group (CDCIG) has undertaken a series of reviews investigating the diagnostic accuracy of a variety of tests for diagnosing dementia. We based the protocol for this review on Neuropsychological tests for the diagnosis of Alzheimer's disease dementia and other dementias: a global protocol for cross‐sectional and delayed‐verification studies (Davis 2013).

Target condition being diagnosed

The target condition is all‐cause dementia or its subtypes: Alzheimer’s disease dementia, vascular dementia, dementia with Lewy bodies, and frontotemporal dementia; mixed types can also occur. Dementia is a syndrome, characterised by progressive loss of cognitive function, with associated impairment in completing activities of daily living. It is a considerable cause of disability and dependence globally, with significant impacts on individuals, caregivers, communities, and society (WHO 2017). In 2015, 47 million people worldwide were affected by dementia; this number is expected to triple by 2050 (Prince 2015). Many people with dementia live in low‐ and middle‐income countries; by 2050, it is expected they will contribute over 71% of all cases worldwide due to ageing of the global population (Prince 2015). The current cost of dementia is estimated to be over one trillion USD, and is predicted to double in 10 years (Prince 2015).

Since there is considerable overlap in the clinical and pathological presentations of the different causes of dementia, using neuropathological criteria as the gold standard is not advised; hence, we will use clinical reference standards, defined below (Scheltens 2011).

We will assess the diagnostic accuracy of the RUDAS in detecting dementia.

Index test(s)

The RUDAS was initially developed in 2004 as a simple tool to screen for dementia in CALD populations (Appendix 1; Storey 2004). It is portable, requires minimal training, and is freely available. The original test was written in English, but designed to be administered in any other language, with the use of an interpreter. It has a score interval of 0 to 30 points, and includes six items. The cognitive domains assessed are: registration, visuospatial orientation, praxis, visuoconstructional drawing, judgement, memory recall, and language.

The RUDAS was originally validated in a multi‐ethnic Australian population. Its accuracy did not vary by years of education (P = 0.20) or by preferred language (P = 0.33). In the original sample from a secondary care setting, the sensitivity was 89% (95% confidence interval (CI) 76% to 96%), and specificity was 98% (95% CI 88% to 97%), at a cut point of 22/23 out of a possible 30 points (Storey 2004).

Since its development in 2004, the English version of the RUDAS has been translated or adapted for use in several different languages, but not all of these versions have been validated in a diagnostic test accuracy study. Some of the untested versions have subsequently been used by interpreters with people who do not speak the translated language, and these have been presented as diagnostic test accuracy studies. There have been two previous systematic reviews of the diagnostic test accuracy of the RUDAS, but it is not always clear in these reviews whether the non‐English versions of the RUDAS were tested for diagnostic accuracy before being used in CALD populations in that country (Naqvi 2015; Nielsen 2020). For this reason, we will update the diagnostic test accuracy systematic review of the RUDAS using Cochrane methods, and will examine the effect of including un‐validated translations of the original RUDAS as a source of heterogeneity.

Clinical pathway

Dementia is a progressive neurodegenerative condition, which develops over several years. People with dementia may present to a variety of healthcare settings, including general practice, inpatient settings, outreach, and community services. As dementia progresses, more cognitive domains are affected, and difficulty planning complex tasks becomes more apparent. Depending on the stage at which a person presents, their pathway to diagnosis may vary.

Current guidance from the UK and Australia advocates for early referral to a specialist memory service when a diagnosis of dementia is suspected (Ballard 2013; Dyer 2016; NICE 2018). Brief cognitive assessments, specifically designed for community and general practice, are available to assist community practitioners to decide when referrals may be appropriate (NICE 2018; Velayudhan 2014). A diagnosis of dementia should only be made following a comprehensive, specialist assessment (NICE 2018).

In low‐ and middle‐income countries, where the prevalence of dementia is increasing, lack of resources has resulted in recommendations for non‐specialist clinicians to carry out diagnostic tasks (Dua 2011; Saxena 2007). Therefore, it is valuable to add to the evidence base for accurate screening tools, that can be used by non‐specialists in culturally diverse settings.

Standard diagnostic practice

There is no gold standard test to confirm the diagnosis of dementia. Standard diagnostic practice for dementia includes thorough clinical history taking, including assessment of cognition and ability to perform activities of daily living, collateral history, when possible, physical examination, laboratory work‐up, and cognitive testing (NICE 2018). Information about functional ability is usually obtained from caregivers or other informants. Reversible causes of dementia, or conditions contributing to cognitive impairment, such as depression, are identified and treated prior to making a diagnosis (NICE 2018). Neuroradiological examination can be done to exclude reversible causes of dementia and to aid with the diagnosis of dementia and its subtypes. Further investigations, such as perfusion or metabolic imaging tests, are only considered if they can help both the diagnosis of a subtype and the management of dementia, and if facilities are available (NICE 2018). Most clinicians make a diagnosis of dementia guided by the diagnostic criteria outlined in the 10th International Classification of Diseases (ICD‐10), the Diagnostic and Statistical Manual of Mental Disorders (DSM), or both (APA 2013; WHO 1992).

Alternative test(s)

Cognitive testing is an important part of a comprehensive assessment when dementia is suspected (NICE 2018). Traditional bedside cognitive tests include the Mini‐Mental State Examination (MMSE), developed in 1975 (Folstein 1975), and the Montreal Cognitive Assessment (MoCA (Nasreddine 2005)), although copyright restrictions currently limit their use in everyday practice. The MMSE has been reported to have good diagnostic accuracy for dementia across both community and primary healthcare settings (Creavin 2016). Research evidence for the diagnostic accuracy of the MoCA is only available in secondary care settings (Davis 2015). The MoCA was originally developed for the identification of mild cognitive impairment (MCI), so at the recommended cut point, the diagnostic accuracy of the MoCA is limited by a high rate of false positives (Davis 2015). The Addenbrooke’s Cognitive Examination (ACE‐III) is an alternative screening test, but it takes approximately 15 to 20 minutes to administer, which is often too long for primary care clinicians. A shortened version, the mini‐ACE, was developed for brief screening across clinical settings (Hsieh 2015). Both the mini‐ACE and ACE‐III have been translated into several languages, but optimal thresholds for the detection of dementia in healthcare settings and in different languages are yet to be determined (Beishon 2019).

The advantage of using the RUDAS, which may be administered with an interpreter, is that it can be used in multilingual communities within the same country, without having to produce multiple written translations.

Rationale

Dementia is a global public health priority (WHO & ADI 2012). Earlier diagnosis of dementia enables earlier treatment and improves outcomes (Rasmussen 2019; WHO & ADI 2012). This enables people with dementia and their families to carefully plan for the future, and access therapies (pharmacological and psychosocial) that may improve symptoms and quality of life. Earlier access to treatment can provide increased support and reduce the caregiver's burden. Earlier diagnosis can also generate savings relative to delayed treatment: introducing anti‐dementia drugs and caregiver supports earlier helps to delay placement into costly assisted‐living care (Prince 2015). Accurate diagnosis of dementia is also important. False positive diagnoses have adverse consequences for people with dementia and their caregivers, including psychological harm and avoidable treatments (Brunet 2012; Wilson 1968). For these reasons, the Cochrane Dementia and Cognitive Improvement Group (CDCIG) set out to create a comprehensive set of systematic reviews of the diagnostic accuracy of cognitive assessments for dementia diagnosis.

Cognitive tests are often used and validated in English‐speaking populations. However, non‐English speakers can represent significant minority populations in countries where English is the main language, for example in the UK, USA, Canada, Australia, and New Zealand. It has been established that minority ethnic groups experience difficulty seeking help for dementia for a number of reasons, including language barriers (Mukadam 2011). Equitable screening for dementia in both English and non‐English speakers is important (Grypma 2007). More research is needed to assess interventions that improve access to dementia services for minority ethnic groups (Mukadam 2013).

Delayed and false diagnoses generate serious problems for the precarious health systems of low‐ and middle‐income countries. Using simple tools in clinical practice to detect dementia, which are valid across cultures, can lead to more equitable access to health care (Storey 2004). There is a pressing need for culturally unbiased screening tests for dementia (Prince 2007). The RUDAS was developed for this purpose. Determining the diagnostic accuracy of the RUDAS will contribute to the evidence base for dementia screening tools in CALD populations, for whom there are no translated tools available.

A systematic review published in 2015 assessed the diagnostic accuracy of the RUDAS compared with recognised reference standards for dementia, and also compared the RUDAS with other cognitive assessment tests (Naqvi 2015). The review included 11 studies, with a total of 1236 participants; 8 of these assessed the RUDAS against a reference standard diagnostic assessment for dementia. The studies included diverse populations, similar to outpatient clinical settings; however, the authors were unable to obtain full datasets for many of the studies.

A more recent systematic review assessed the diagnostic accuracy of the RUDAS in different sociocultural settings (Nielsen 2020). The primary meta‐analysis pooled data from 21 studies, with 3023 participants, and produced a summary sensitivity of 82% (95% CI 78% to 86%) and specificity of 83% (95% CI 78% to 87%); but the meta‐analysis appeared to have combined studies with different cut‐offs, which were also conducted in different settings. At the recommended cut‐off of 22/23, the combined data gave a pooled estimate of 0.78 (95% CI 0.72 to 0.83) for sensitivity, and 0.85 (95% CI 0.78 to 0.90) for specificity; however, this meta‐analysis also combined studies from different settings.

Both meta‐analyses included studies that had case‐control study designs, which are at high risk of bias, and will produce spuriously high levels of sensitivity and specificity. We also have concerns that some of the translated versions of the RUDAS have not been validated in a diagnostic test accuracy study; some of these untested versions have subsequently been used by interpreters with people who do not speak the translated language, and these have been presented as diagnostic test accuracy studies, which have been included in previous systematic reviews.

For these reasons, we will update the diagnostic test accuracy systematic review of the RUDAS, using Cochrane methods. We will exclude case‐control studies and un‐validated translated versions used by interpreters, and will conduct separate analyses by setting and by cut‐off score, where available. Our proposed systematic review will re‐examine the risk of bias in the previously included studies, in addition to more recent studies.

Objectives

To determine the diagnostic accuracy of the Rowland Universal Dementia Assessment Scale (RUDAS) for the detection of all‐cause dementia or its subtypes (Alzheimer’s disease, vascular dementia, dementia with Lewy bodies, and frontotemporal dementia) in primary and secondary healthcare settings, and in community‐based epidemiological studies (for example, in a cross‐sectional prevalence survey). We will synthesise studies separately by setting.

Secondary objectives

To examine the sources of heterogeneity of test accuracy in the included studies. Sources of heterogeneity may include differences in settings, reference standards, index test cut‐offs, and different translations of the RUDAS.

To identify the gaps in the evidence base for the diagnostic test accuracy of RUDAS in dementia, and consider the scope for future research in this area.

Methods

Criteria for considering studies for this review

Types of studies

We based the criteria for including studies in this review on the generic protocol (Davis 2013). We will consider any study design (exceptions follow) that used the Rowland Universal Dementia Assessment Scale (RUDAS), as long as it provides original data.

We will include cross‐sectional studies in which the index test was administered within three months of the reference test in participants from the same sample. We recognise that cross‐sectional studies may be at higher risk of incorporation bias (Worster 2008).

We will exclude studies in which the RUDAS was used as part of the diagnostic assessment for the reference test. We will exclude case‐control studies, due to the high risk of spectrum bias; and longitudinal studies and post‐mortem verification of neuropathological diagnoses, because they are better evaluated using delayed‐verification reviews (Davis 2013).

Participants

We will consider studies with participants who are at least 18 years of age, who have been assessed with the RUDAS. The RUDAS was initially validated in a multicultural sample of consecutive new referrals to a secondary care geriatric medicine outpatient clinic in Sydney, Australia (Storey 2004). The prevalence of dementia in secondary care settings is reported to be up to ten times higher than community settings (Anderson 2005). Therefore, the secondary care population has a higher prevalence of severe dementia, which impacts the sensitivity and specificity of diagnostic tests. We will assess the diagnostic accuracy of the RUDAS as a screening tool in community settings, and primary and secondary healthcare settings.

We will evaluate the diagnostic accuracy of the RUDAS in people who present to primary care, secondary care (to memory services and to general hospitals (either inpatient or outpatient settings)), or if the RUDAS was used in community‐based epidemiological studies as a dementia screening tool. We recognise that the diagnostic accuracy of the index test is likely to differ in these populations, and will present the findings separately. For example, we will present findings from people in memory services separately from those in community‐based services, where most dementia is likely to be of milder severity, which will impact the sensitivity and specificity of the test.

Index tests

We will include any form of the full RUDAS. We expect to find the recommended cut‐off point of 22/23 to be used in studies to differentiate non‐dementia (23 and above) from dementia (22 or less); however, we will also include studies that use other thresholds. Some translated versions are also available, and we will include these if they have been validated in a diagnostic test accuracy study that meets the criteria for inclusion in this review.

Target conditions

The target condition is all‐cause dementia, and any common dementia subtype, including Alzheimer’s disease, vascular dementia, Lewy body dementia, and frontotemporal dementia. We will include a diagnosis of dementia at any stage.

We will exclude studies that include participants with rare forms of dementia, such as dementia associated with: (i) current or past alcohol or drug abuse; (ii) CNS trauma (e.g. subdural haematoma), tumour, or infection; or (iii) other specific neurological conditions.

We will exclude studies that assess the diagnostic accuracy of the RUDAS for the detection of mild cognitive impairment (MCI) only (i.e. where dementia is not being investigated). If we find studies that detected MCI in addition to dementia, we will include them, but will categorise MCI as 'no dementia', since excluding people with MCI would introduce spectrum bias.

Reference standards

We will include studies that apply the reference standard of all‐cause dementia or standardised definitions of subtypes, which are reported in the generic protocol (Davis 2013). Existing clinical reference standards include the Diagnostic and Statistical Manual (DSM (APA 2013)), the International Classification of Diseases (ICD (WHO 1992)), and the Clinical Dementia Rating (CDR) scale (Morris 1993). We will include any version. The DSM and the CDR scale can also indicate the stage of dementia. Dementia subtype reference standards include the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer’s Disease and Related Disorders Association (NINCDS‐ADRDA) criteria for Alzheimer’s disease (McKhann 1984), the National Institute of Neurological Disorders and Stroke and Association Internationale pour la Recherche et l’Enseignement en Neurosciences (NINDS‐AIREN) criteria for vascular dementia (Román 1993), and the corresponding criteria for dementia with Lewy bodies (McKeith 2005) and frontotemporal dementia (Englund 1994). More recent studies may have used the National Institute on Aging‐Alzheimer’s Association (NIA‐AA) criteria, which include biomarkers to support a diagnostic classification (Jack 2011). We will include studies that used expert specialist clinical judgement, or other well‐validated methods of applying the diagnostic criteria.

We will include studies that compare the RUDAS with a recognised clinical diagnostic assessment reference standard, but exclude studies that only compare the RUDAS with other cognitive screening instruments, e.g. MMSE or MoCA.

Different diagnostic criteria have different thresholds for ‘caseness’, and therefore, may be a source of heterogeneity; we will explore these quantitatively if we identify sufficient studies. We will present all available data, and pool them by reference standards. If more than two reference standards are used, our hierarchy of preference will be DSM (any version), then ICD, followed by other reference standards, such as NIA‐AA.

Studies must administer the index and reference tests within three months. If the time interval is not clear, we will include the study, but judge the timing as 'unclear' in the quality appraisal using QUADAS‐2.
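The preference hierarchy described above can be sketched as a simple selection rule: when a study reports results against several reference standards, we take the highest-ranked one. This is an illustrative sketch only; the names and the ordering beyond DSM and ICD are assumptions based on the text above.

```python
# Illustrative sketch of the reference-standard preference hierarchy:
# DSM (any version) first, then ICD, then other standards such as NIA-AA.
# The entries after ICD are assumed for demonstration.
HIERARCHY = ["DSM", "ICD", "NIA-AA", "NINCDS-ADRDA", "NINDS-AIREN"]

def preferred_standard(reported):
    """Return the highest-priority reference standard a study reports,
    or None if none are recognised."""
    for standard in HIERARCHY:
        if standard in reported:
            return standard
    return None

print(preferred_standard(["ICD", "NIA-AA"]))  # -> ICD
```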

Search methods for identification of studies

We will use an inclusive search strategy, with the intention of identifying any published study that assesses the RUDAS.

Electronic searches

We will search:

  • MEDLINE Ovid,

  • Embase Ovid,

  • Web of Science Core Collection (ISI Web of Knowledge),

  • PsycINFO Ovid, and

  • LILACS BIREME (Latin American and Caribbean Health Science Information database).

See Appendix 2 for a proposed strategy for MEDLINE Ovid. We will design similarly structured search strategies using search terms appropriate for each database. Where appropriate, we will use controlled vocabulary, such as MeSH terms and EMTREE.

In the searches we develop, we will not restrict studies on the basis of sampling frame or setting. This approach is intended to maximise sensitivity. We will not use search filters (collections of terms aimed at reducing the number of records needed to screen) as an overall limiter, because current filters are not sensitive enough (Whiting 2011). We will not apply any language restrictions to the electronic searches; we will use translation services as necessary.

Searching other resources

We will review the reference lists of all included studies. We will also search the following additional resources.

We will use the ‘related articles’ feature of PubMed to search for additional studies. We will search citation databases, such as Science Citation Index and Scopus, using key studies to identify any additional relevant studies. We will search grey literature, including conference proceedings, theses, and PhD abstracts. We will not handsearch, in accordance with the generic protocol (Davis 2013). We will contact research groups involved in previously published or ongoing research on the RUDAS to identify any relevant, unpublished data.

Data collection and analysis

Selection of studies

Initially, two review authors will independently select relevant reports from all retrieved titles and abstracts. Following this, we will locate the full‐text reports of references that appear to meet our inclusion criteria, or those about which we are unsure. If there is missing information, we will contact study authors. Two review authors will then independently evaluate each paper for inclusion or exclusion. We will resolve disagreements by discussion. The study selection process will be detailed in a PRISMA flow chart (Page 2021).

Data extraction and management

We will develop a study‐specific form, based on the data items required for Cochrane Reviews of diagnostic test accuracy (Deeks 2013). The data we will collect on the form are detailed in Appendix 3.

Two review authors will independently extract data. We will dichotomise test accuracy data if required, and cross‐tabulate the results of the index test in two‐by‐two tables (positive or negative) against the target condition (positive or negative). If a study reports test accuracy at more than one threshold, we will input the data into multiple two‐by‐two tables, but will restrict the primary meta‐analysis to findings for the recommended threshold. We will resolve disagreements between review authors by discussion, involving an arbitrator if necessary. We will extract the results directly into tables in Review Manager 5 (RevMan 5) software (Review Manager 2020). For each included study, we will outline the flow of participants (i.e. numbers recruited, included, and assessed) in a flow diagram.
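The dichotomisation and cross-tabulation step can be illustrated with a short sketch: scores at or below the recommended 22/23 cut-off count as test-positive, and each participant falls into one cell of the two-by-two table. The scores and diagnoses below are invented for demonstration only.

```python
# Hypothetical illustration: dichotomising RUDAS scores at the recommended
# 22/23 cut-off and cross-tabulating against the reference-standard diagnosis.
def two_by_two(scores, diagnoses, cutoff=22):
    """Cross-tabulate index-test positives (score <= cutoff) against the
    reference standard. Returns (TP, FP, FN, TN)."""
    tp = fp = fn = tn = 0
    for score, has_dementia in zip(scores, diagnoses):
        test_positive = score <= cutoff  # a score of 22 or less suggests dementia
        if test_positive and has_dementia:
            tp += 1
        elif test_positive:
            fp += 1
        elif has_dementia:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

# Invented example data: (RUDAS score, reference-standard dementia diagnosis)
scores = [18, 25, 21, 29, 22, 27, 15, 24]
diagnoses = [True, False, True, False, False, False, True, True]
print(two_by_two(scores, diagnoses))  # -> (3, 1, 1, 3)
```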

Assessment of methodological quality

Two review authors will independently assess methodological quality, using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS‐2) checklist (Whiting 2011a). We will resolve disagreements by discussion, and involve an arbitrator if necessary.

We will classify studies as high, medium, or low risk of bias, and present a narrative summary for each study. The QUADAS‐2 tool is available in Appendix 4, with the anchoring statements in Appendix 5.

Statistical analysis and data synthesis

We will assess the accuracy of the RUDAS against the reference standards performed at the same time as (or within three months of) the index test. We will use the true positive, true negative, false positive, and false negative results identified by the index test to illustrate the diagnostic accuracy of the test to identify people with all‐cause dementia, and dementia subtypes, if studies are available.

From a methodological standpoint, we anticipate that the chosen cut‐off thresholds for the RUDAS may differ in the included studies. We will report the cut‐off thresholds specified in the primary studies, and whether the recommended threshold was used and reported.

We will use paired data on sensitivity and specificity to calculate the accuracy of the index test for diagnosing all‐cause dementia, and dementia subtypes, if such studies exist. For all included studies, we will extract data into binary two‐by‐two tables (binary test results cross‐classified with the binary reference standard), and use these to calculate sensitivities and specificities, with 95% confidence intervals (CIs). We will present individual study results graphically by plotting estimates of sensitivities and specificities in a forest plot. We will perform analyses using RevMan 5 (Review Manager 2020).
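The per-study calculation can be sketched as follows: sensitivity is TP/(TP + FN), specificity is TN/(TN + FP), and each proportion gets a 95% CI. This sketch uses the Wilson score interval; the counts are invented, and RevMan performs the equivalent calculations for the review itself.

```python
# Minimal sketch: sensitivity and specificity with Wilson 95% CIs
# from a two-by-two table. All counts below are invented examples.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def accuracy(tp, fp, fn, tn):
    """Return (sensitivity, its CI) and (specificity, its CI)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))

(sens, sens_ci), (spec, spec_ci) = accuracy(tp=40, fp=5, fn=5, tn=50)
print(f"sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f} to {sens_ci[1]:.2f})")
print(f"specificity {spec:.2f} (95% CI {spec_ci[0]:.2f} to {spec_ci[1]:.2f})")
```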

Ideally, we will analyse studies with different target condition subtypes separately, but to maximise statistical power, we will pool studies with all subtypes of dementia. We will conduct subgroup analyses if there are sufficient data to assess different subtypes.

We will conduct meta‐analyses on pairs of sensitivity and specificity in each of the main settings (e.g. community‐based, primary care, secondary care settings). We will restrict the meta‐analyses to studies from each of the main settings that report using the same diagnostic cut‐offs within those settings. If more than two reference standards are used, our hierarchy of preference will start with the DSM. If a variety of cut‐offs are reported, our hierarchy of preference will be the original recommended cut‐off of 22/23, followed by other cut‐offs, if sufficient data are available. If it is appropriate to pool the data, we will use a hierarchical summary receiver operating characteristic (ROC) curve. However, we anticipate that the data will be too sparse and heterogeneous to conduct a primary analysis specific to the main settings, and that we will not be able to pool data, in which case, we will present a qualitative analysis.

Investigations of heterogeneity

In line with previous Cochrane Reviews of diagnostic test accuracy of neuropsychological tests, we anticipate there will be a number of sources of heterogeneity in the included studies (Davis 2013). Sources of heterogeneity may include differences in settings, reference standards, index test cut‐offs, and translated versions of the RUDAS. However, it is unlikely that there will be sufficient study data for a full investigation of these sources of heterogeneity, in which case we will describe them qualitatively.

Sensitivity analyses

We will undertake sensitivity analyses to determine the effects of excluding low‐quality studies on analyses. We will dichotomise studies by risk of bias (quantified by number of QUADAS‐2 domains assessed to be at high risk) for sensitivity analyses. We will re‐run ROC curves, forest plots, and summary statistics, and compare the results to the original analyses. Primary analysis will include all studies; sensitivity analyses will exclude:

  • studies we judged to be at high risk of bias using the QUADAS‐2 tool,

  • studies identified as less appropriate for inclusion (i.e. where there was unresolved disagreement between authors).

We will exclude studies at highest risk of bias at the study selection phase (e.g. if the study design was case‐control).

We will use MetaDTA software for the sensitivity analyses (Patel 2020).

Assessment of reporting bias

We will not explore reporting (publication) bias in this review, as current quantitative methods for exploring reporting bias are not well established for studies of diagnostic test accuracy. Specifically, we will not consider funnel plots of the diagnostic odds ratio versus the standard error of this estimate.

Summary of findings and assessment of the certainty of the evidence

We will prepare a summary of findings table to present the main results and key information regarding the certainty of evidence assessed using the GRADE approach (Schünemann 2020a; Schünemann 2020b).

Acknowledgements

We would like to thank peer reviewers Nilton Custodio and T. Rune Nielsen, and consumer reviewer Kit Byatt, for their comments and feedback.

Appendices

Appendix 1. The Rowland Universal Dementia Assessment Scale (RUDAS)

Available at www.dementia.org.au/sites/default/files/20110311_2011NSWRUDASscoring_sheet.pdf?_ga=2.227454235.760463448.1647727253-386313411.1647727253

Appendix 2. MEDLINE search strategy

We created a multi‐strand search using three concepts: index test (concept A: lines 1 to 3 inclusive), condition of interest (concept B: lines 5 to 13 inclusive), and diagnosis (concept C: lines 15 to 30 inclusive). The final line of the search combines, with Boolean OR: the index test alone; a focused condition‐of‐interest set combined with a sensitive diagnostic filter; and a broader condition‐of‐interest set combined with a focused diagnostic term. Evaluated against a known set of likely includable studies, the strategy identified them all.

1.RUDAS.ti,ab.
2.Rowland.ti,ab.
3.*Neuropsychological Tests/
4.or/1‐3
5.exp *Dementia/di
6.dement*.ti,ab.
7.alzheimer*.ti,ab.
8.(VaD or VCI).ti,ab.
9.(lewy* adj2 bod*).ti,ab.
10.(LBD or DLB).ti,ab.
11.(FTD or FTLD or frontotemp* or "fronto‐temp*").ti,ab.
12.((cognit* or memory or cerebr* or mental* or neurocog*) adj3 (declin* or impair* or los* or deteriorat* or degenerat* or complain* or disturb* or disorder*)).ti,ab.
13.(MCI or aMCI or nMCI or mMCI or AAMI or ARCD or ACMI or CIND).ti,ab.
14.or/5‐13
15."sensitivity and specificity"/
16."reproducibility of results"/
17."Predictive Value of Tests"/
18.*Diagnosis/
19.(diagnos* adj2 accura*).ti,ab.
20.sensitivit*.ti,ab.
21.specificit*.ti,ab.
22.Area under curve/
23.ROC Curve/
24.("Area under curve" or AUC).ab.
25.sROC.ab.
26.(likelihood adj3 (ratio* or function*)).ab.
27.((true or false) adj2 (positive* or negative*)).ab.
28.((positive* or negative* or false or true) adj3 rate*).ti,ab.
29.("positive predictive value" or PPV).ab.
30.("negative predictive value" or NPV).ab.
31.or/15‐30
32.4 and 31
33.14 and 18
34.4 or 32 or 33

Appendix 3. Information for extraction for proforma

Bibliographic details of primary paper: author, title of study, year

Details of index test

  • Language of test

  • Was a diagnostic test accuracy study of the translated version of the index test conducted? (yes/no)

  • Was any translation of RUDAS validated? (yes/no)

  • RUDAS diagnostic threshold

  • Was the threshold pre‐specified? (yes/no)

  • Who administered the RUDAS?

  • Was the index test conducted without knowledge of reference standard results?

  • Could the conduct or interpretation of the index test have introduced bias?

  • Notes on conduct of index test

Reference standard

  • Target condition

  • What was the prevalence of dementia in the sample population?

  • Who administered the reference standard?

  • Reference standard

  • Was any attempt made to subtype dementia categories?

  • Was reference standard interpreted without knowledge of index test results?

Study population

  • Country of study

  • Number of participants

  • Number of participants in analysis

  • Patient sampling

  • Consecutive/random sampling (yes/no)

  • Did the study avoid inappropriate exclusions? (yes/no)

  • Could the selection process have introduced bias? (yes/no)

  • Comments on sampling, inclusions and exclusions

  • What is the patient population?

    • Unselected community

    • Community with possible memory problem

    • Unselected primary care

    • Primary care with possible memory problem

  • Age

  • Gender (% female participants)

  • Years of education

  • Social class

  • Comorbidity

Patient flow and timing

  • What was the interval between index test and reference standard?

  • Did all participants receive a reference standard?

  • Did all participants receive the same reference standard?

  • Notes of reference standard procedure.

  • Were all participants included in the analysis?

  • Were those not included in the analysis fully accounted for?

  • Notes on patient flow and timing

  • Other characteristics (e.g. ApoE status)

  • Attrition and missing data

Appendix 4. Assessment of methodological quality QUADAS‐2

Domain 1: patient selection

  • Description: describe methods of participant selection; describe included participants (prior testing, presentation, intended use of index test, and setting).

  • Signalling questions (yes, no, unclear): Was a consecutive or random sample of participants enrolled? Was a case‐control design avoided?* Did the study avoid inappropriate exclusions?

  • Risk of bias (high, low, unclear): could the selection of participants have introduced bias?

  • Concerns regarding applicability (high, low, unclear): are there concerns that the included participants do not match the review question?

Domain 2: index test

  • Description: describe the index test, and how it was conducted and interpreted.

  • Signalling questions (yes, no, unclear): Were the index test results interpreted without knowledge of the results of the reference standard? If a threshold was used, was it prespecified?

  • Risk of bias (high, low, unclear): could the conduct or interpretation of the index test have introduced bias?

  • Concerns regarding applicability (high, low, unclear): are there concerns that the index test, its conduct, or interpretation differ from the review question?

Domain 3: reference standard

  • Description: describe the reference standard, and how it was conducted and interpreted.

  • Signalling questions (yes, no, unclear): Is the reference standard likely to correctly classify the target condition? Were the reference standard results interpreted without knowledge of the results of the index test?

  • Risk of bias (high, low, unclear): could the reference standard, its conduct, or its interpretation have introduced bias?

  • Concerns regarding applicability (high, low, unclear): are there concerns that the target condition, as defined by the reference standard, does not match the review question?

Domain 4: flow and timing

  • Description: describe any participants who did not receive the index test(s) or reference standard, or who were excluded from the 2 x 2 table (refer to flow diagram); describe the time interval and any interventions between index test(s) and reference standard.

  • Signalling questions (yes, no, unclear): Was there an appropriate interval between index test(s) and reference standard? Did all participants receive a reference standard? Did all participants receive the same reference standard? Were all participants included in the analysis?

  • Risk of bias (high, low, unclear): could the participant flow have introduced bias?

*We will not include case‐control studies; therefore, this item will not be rated.

Appendix 5. Anchoring statements for quality assessment of The Rowland Universal Dementia Assessment Scale (RUDAS) diagnostic studies

Here are some core anchoring statements for quality assessment of diagnostic test accuracy reviews of the RUDAS in dementia. These statements are designed to be used with the QUADAS‐2.

If review authors answer yes to a QUADAS‐2 signalling question for a specific domain, they can judge there to be a low risk of bias. If they answer no, this potentially indicates a high risk of bias, depending on the question.

If review authors rate a question as being at high risk of bias, that question indicates that a significant aspect of the study design has the potential for bias, so the whole domain is considered to be at high overall risk of bias, regardless of how other items within that domain are rated.

In assessing individual items, the review authors should only rate them as unclear if there is genuine uncertainty. In these situations, review authors will contact the relevant study teams for additional information.

Anchoring statements

Domain 1: participant selection

Risk of bias: could the selection of participants have introduced bias? (high, low, unclear)

Was a consecutive or random sample of participants enrolled?

When sampling is used, the methods least likely to cause bias are consecutive sampling or random sampling, which should be stated, described, or both. Non‐random sampling or sampling based on volunteers is more likely to be at high risk of bias.

Rating: high risk of bias

Was a case‐control design avoided?

Case‐control study designs have a high risk of bias, but sometimes they are the only studies available, especially if the index test is expensive or invasive. Nested case‐control designs (systematically selected from a defined population cohort) are less prone to bias, but they will still narrow the spectrum of participants who receive the index test. Study designs (both cohort and case‐control) that may also increase bias are those in which the study team deliberately increases or decreases the proportion of participants with the target condition; for example, a population study may be enriched with extra participants with dementia from a secondary care setting.

Rating: high risk of bias

Did the study avoid inappropriate exclusions?

The study will be automatically graded as unclear if exclusions are not detailed (pending contact with study authors). Where exclusions are detailed, the study will be rated as low risk if the review authors judge the exclusions are appropriate. Certain exclusions common to many studies of dementia are: medical instability; terminal disease; alcohol or substance misuse; concomitant psychiatric diagnosis; other neurodegenerative conditions. However, if difficult to diagnose groups are excluded, this may introduce bias, so exclusion criteria must be justified. For a community sample, we would expect relatively few exclusions. We will judge post hoc exclusions to be at high risk of bias.

Rating: high risk of bias

Applicability: are there concerns that the included participants do not match the review question? (high, low, unclear)

The included participants should match the intended population described in the review question. The setting will be particularly important: the review authors should consider the population in terms of symptoms, pre‐testing, and potential disease prevalence. Review authors will classify studies that use highly selected participants or subgroups as having low applicability, unless they are intended to represent a defined target population, for example, people with memory problems referred to a specialist and investigated by lumbar puncture.

Domain 2: index test

Risk of bias: could the conduct or interpretation of the RUDAS have introduced bias? (high, low, unclear)

Were the RUDAS results interpreted without knowledge of the reference standard?

Terms such as 'blinded' or 'independently and without knowledge of' are sufficient, and full details of the blinding procedure are not required. This item may be judged at low risk if explicitly described, or if there is a clear temporal pattern to the order of testing that precludes the need for formal blinding, i.e. all RUDAS assessments were performed before the dementia assessment. As most neuropsychological tests are administered by a third party, knowledge of a dementia diagnosis may influence their ratings; tests that are self‐administered, for example using a computerised version, may have less risk of bias.

Rating: high risk of bias

Were the RUDAS cut points prespecified?

For neuropsychological scales, there is usually a cut point used to classify participants as 'test positive'; this may also be referred to as the threshold, clinical cut‐off, or dichotomisation point. Different cut points are used for different populations. A study is classified at higher risk of bias if the authors define the optimal cut‐off post hoc, based on their own study data. Certain papers may use an alternative method of analysis that does not use thresholds; review authors should consider these papers not applicable.

Rating: low risk of bias

Were sufficient data on RUDAS application given for the test to be repeated in an independent study?

Particular points of interest include method of administration (for example self‐completed questionnaire versus direct questioning interview); nature of informant; language of assessment. If a novel form of the index test is used, for example a translated questionnaire, details of the scale should be included, a reference given to an appropriate descriptive text, and there should be evidence of validation.

Rating: low risk of bias

Applicability: are there concerns that the RUDAS, its conduct, or interpretation differ from the review question? (high, low, unclear)

Variations in the length, structure, language, and administration of the index test may all affect applicability if they vary from those specified in the review question.

Domain 3: reference standard

Risk of bias: could the reference standard, its conduct, or its interpretation have introduced bias? (high, low, unclear)

Is the reference standard likely to correctly classify the target condition?

Commonly used international criteria to assist with clinical diagnosis of dementia include those detailed in the DSM‐IV, DSM‐5, and ICD‐10. Criteria specific to dementia subtypes include, but are not limited to, the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer’s Disease and Related Disorders Association criteria for Alzheimer’s dementia; McKeith criteria for Lewy body dementia; Lund criteria for frontotemporal dementias; and the National Institute of Neurological Disorders and Stroke and Association Internationale pour la Recherche et l’Enseignement en Neurosciences criteria for vascular dementia. When the review authors and the Cochrane Dementia and Cognitive Improvement group are not familiar with the criteria used for assessment, this item should be classified at high risk of bias.

Rating: high risk of bias

Were the reference standard results interpreted without knowledge of the results of the RUDAS?

Terms such as 'blinded' or 'independent' are sufficient, and full details of the blinding procedure are not required. This may be assessed at low risk if explicitly described, or if there is a clear temporal pattern to order of testing, i.e. all dementia assessments performed before (neuropsychological) testing.

Informant rating scales and direct cognitive tests present certain problems. It is accepted that informant interview and cognitive testing are usual components of clinical assessment for dementia; however, specific use of the scale under review in the clinical dementia assessment should be assessed at high risk of bias.

Rating: high risk of bias

Was sufficient information on the method of dementia assessment given for the assessment to be repeated in an independent study?

Particular points of interest for dementia assessment include the training or expertise of the assessor; whether additional information (e.g. neuroimaging, other neuropsychological test results) was available to inform the diagnosis; and whether this information was available for all participants.

Rating: variable risk, but high risk if method of dementia assessment not described

Applicability: are there concerns that the target condition, as defined by the reference standard, does not match the review question? (high, low, unclear)

There is the possibility that some methods of dementia assessment, although valid, may diagnose a smaller or larger proportion of participants with disease than in usual clinical practice. In this instance, the review authors should consider the reference standards to have poor applicability.

Domain 4: participant flow and timing (N.B. refer to, or construct, a flow diagram)

Risk of bias: could the participant flow have introduced bias? (high, low, unclear)

Was there an appropriate interval between the RUDAS and reference standard?

For a cross‐sectional study design, there is potential for the participant's condition to change between assessments; however, dementia is a slowly progressive disease and is not reversible. The ideal scenario would be same‐day assessment, but longer intervals (for example, several weeks or months, up to six months) are unlikely to lead to a high risk of bias.

Rating: low risk of bias

Did all participants receive the same reference standard?

There may be scenarios in which participants who test positive on the index test have a more detailed assessment for the target condition. When dementia assessment (or reference standard) differs between participants, the review authors should judge this at high risk of bias.

Rating: high risk of bias

Were all participants included in the final analysis?

Dropouts (and missing data) should be accounted for. Attrition that is higher than expected (compared to similar studies) should be treated as a high risk of bias.

Rating: high risk of bias

Contributions of authors

All authors contributed to the writing and editing of the protocol.

Sources of support

Internal sources

  • No sources of support provided

External sources

  • NIHR, UK

    This protocol was supported by the National Institute for Health Research (NIHR), via Cochrane Infrastructure funding to the Cochrane Dementia and Cognitive Improvement group. The views and opinions expressed therein are those of the authors and do not necessarily reflect those of the Systematic Reviews Programme, NIHR, National Health Service, or the Department of Health.

Declarations of interest

There are no declarations of interest.

References

Additional references

Anderson 2005

  Anderson D, Aveyard B, Baldwin B, Barker A, Forsyth D, Guthrie E, et al. Who cares wins: improving the outcome for older people admitted to the general hospital: guidelines for the development of liaison mental health services for older people. London: Royal College of Psychiatrists, 2005.

APA 2013

  American Psychiatric Association (APA). Diagnostic and Statistical Manual of Mental Disorders (DSM‐5). Washington, DC: American Psychiatric Association Publishing, 2013.

Ballard 2013

  Ballard C, Burns A, Corbett A, Livingston G, Rasmussen J. Helping you to assess cognition: a practical toolkit for clinicians. www.wamhinpc.org.uk/sites/default/files/dementia-practical-toolkit-for-clinicians.pdf 2013.

Beishon 2019

  Beishon LC, Batterham AP, Quinn TJ, Nelson CP, Panerai RB, Robinson T, et al. Addenbrooke's Cognitive Examination III (ACE‐III) and mini‐ACE for the detection of dementia and mild cognitive impairment. Cochrane Database of Systematic Reviews 2019, Issue 12. Art. No: CD013282. [DOI: 10.1002/14651858.CD013282.pub2]

Brunet 2012

  Brunet MD, McCartney M, Heath I, Tomlinson J, Gordon P, Cosgrove J, et al. There is no evidence base for proposed dementia screening. BMJ 2012;345:e8588.

Creavin 2016

  Creavin ST, Wisniewski S, Noel‐Storr AH, Trevelyan CM, Hampton T, Rayment D, et al. Mini‐Mental State Examination (MMSE) for the detection of dementia in clinically unevaluated people aged 65 and over in community and primary care populations. Cochrane Database of Systematic Reviews 2016, Issue 1. Art. No: CD011145. [DOI: 10.1002/14651858.CD011145.pub2]

Davis 2013

  Davis DHJ, Creavin ST, Noel‐Storr A, Quinn TJ, Smailagic N, Hyde C, et al. Neuropsychological tests for the diagnosis of Alzheimer's disease dementia and other dementias: a generic protocol for cross‐sectional and delayed‐verification studies. Cochrane Database of Systematic Reviews 2013, Issue 3. Art. No: CD010460. [DOI: 10.1002/14651858.CD010460]

Davis 2015

  Davis DH, Creavin ST, Yip JL, Noel‐Storr AH, Brayne C, Cullum S. Montreal Cognitive Assessment for the diagnosis of Alzheimer's disease and other dementias. Cochrane Database of Systematic Reviews 2015, Issue 10. Art. No: CD013282. [DOI: 10.1002/14651858.CD013282.pub2]

Deeks 2013

  Deeks JJ, Bossuyt PM, Gatsonis C, editor(s). Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy Version 1.0.0. The Cochrane Collaboration, 2013. Available at methods.cochrane.org/sdt/handbook-dta-reviews.

Dua 2011

  Dua T, Barbui C, Clark N, Fleischmann A, Poznyak V, Ommeren M, et al. Evidence‐based guidelines for mental, neurological, and substance use disorders in low‐ and middle‐income countries: summary of WHO recommendations. PLoS Medicine 2011;8(11):e1001122.

Dyer 2016

  Dyer SM, Laver K, Pond CD, Cumming RG, Whitehead C, Crotty M. Clinical practice guidelines and principles of care for people with dementia in Australia. Australian Family Physician 2016;45(12):884-9.

Englund 1994

  Englund B, Brun A, Gustafson L, Passant U, Mann D, Neary D, et al, the Lund and Manchester Groups. Clinical and neuropathological criteria for frontotemporal dementia. Journal of Neurology, Neurosurgery, and Psychiatry 1994;57(4):416-8.

Folstein 1975

  Folstein MF, Folstein SE, McHugh PR. “Mini‐mental state”: a practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research 1975;12(3):189-98.

Grypma 2007

  Grypma R, Mahajani S, Tam E. Screening and diagnostic assessment of non‐English speaking people with dementia: guidelines and system recommendations for practitioners, service managers and policy makers. Developed for Alzheimer’s Australia, May 2007. www.dementia.org.au/sites/default/files/20101224-Nat-CALD-ScreeningGuidelines-07May.pdf.

Hsieh 2015

  Hsieh S, McGrory S, Leslie F, Dawson K, Ahmed S, Butler CR, et al. The Mini‐Addenbrooke's Cognitive Examination: a new assessment tool for dementia. Dementia and Geriatric Cognitive Disorders 2015;39(1-2):1-11.

Jack 2011

  Jack CR Jr, Albert MS, Knopman DS, McKhann GM, Sperling RA, Carrillo MC, et al. Introduction to the recommendations from the National Institute on Aging‐Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimer's & Dementia 2011;7(3):257-62.

Jones 2001

  Jones RN, Gallo JJ. Education bias in the Mini‐Mental State Examination. International Psychogeriatrics 2001;13(3):299-310.

McKeith 2005

  McKeith IG, Dickson DW, Lowe J, Emre M, O'Brien JT, Feldman H, et al. Diagnosis and management of dementia with Lewy bodies: third report of the DLB Consortium. Neurology 2005;65(12):1863-72.

McKhann 1984

  McKhann G, Drachman D, Folstein M, Katzman R, Price D, Stadlan EM. Clinical diagnosis of Alzheimer's disease: report of the NINCDS‐ADRDA Work Group under the auspices of Department of Health and Human Services Task Force on Alzheimer's Disease. Neurology 1984;34(7):939-44.

Morris 1993

  Morris JC. The Clinical Dementia Rating (CDR): current version and scoring rules. Neurology 1993;43(11):2412-4.

Mukadam 2011

  Mukadam N, Cooper C, Livingston G. A systematic review of ethnicity and pathways to care in dementia. International Journal of Geriatric Psychiatry 2011;26(1):12-20.

Mukadam 2013

  Mukadam N, Cooper C, Livingston G. Improving access to dementia services for people from minority ethnic groups. Current Opinion in Psychiatry 2013;26(4):409-14.

Naqvi 2015

  Naqvi RM, Haider S, Tomlinson G, Alibhai S. Cognitive assessments in multicultural populations using the Rowland Universal Dementia Assessment Scale: a systematic review and meta‐analysis. CMAJ 2015;187(5):E169-75.

Nasreddine 2005

  Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. Journal of the American Geriatrics Society 2005;53(4):695-9.

NICE 2018

  National Institute for Health and Care Excellence (NICE). Dementia: assessment, management and support for people living with dementia and their carers. NICE guideline NG97. www.nice.org.uk/guidance/ng97 (accessed 28 March 2021).

Nielsen 2020

  Nielsen TR, Jørgensen K. Cross‐cultural dementia screening using the Rowland Universal Dementia Assessment Scale: a systematic review and meta‐analysis. International Psychogeriatrics 2020;32(9):1031-44.

Page 2021

  Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. International Journal of Surgery 2021;88:105906.

Patel 2020

  Patel A, Cooper N, Freeman S, Sutton A. Graphical enhancements to summary receiver operating characteristic plots to facilitate the analysis and reporting of meta‐analysis of diagnostic test accuracy data. Research Synthesis Methods 2020;12(1):34-44.

Prince 2007

  Prince M, Ferri CP, Acosta D, Albanese E, Arizaga R, Dewey M, et al. The protocols for the 10/66 Dementia Research Group population‐based research programme. BMC Public Health 2007;7(1):1-8.

Prince 2015

  Prince M, Wimo A, Guerchet M, Ali GC, Wu YT, Prina M. The global impact of dementia: an analysis of prevalence, incidence, cost and trends. World Alzheimer Report 2015. www.alzint.org/resource/world-alzheimer-report-2015/.

Rasmussen 2019

  Rasmussen J, Langerman H. Alzheimer’s disease – why we need early diagnosis. Degenerative Neurological and Neuromuscular Disease 2019;9:123.

Review Manager 2020 [Computer program]

  The Cochrane Collaboration. Review Manager 5 (RevMan 5). Version 5.4.1. The Cochrane Collaboration, 2020.

Román 1993

  Román GC, Tatemichi TK, Erkinjuntti T, Cummings JL, Masdeu JC, Garcia JH, et al. Vascular dementia: diagnostic criteria for research studies. Report of the NINDS‐AIREN International Workshop. Neurology 1993;43(2):250-60.

Saxena 2007

  Saxena S, Thornicroft G, Knapp M, Whiteford H. Resources for mental health: scarcity, inequity, and inefficiency. The Lancet 2007;370(9590):878-89.

Scheltens 2011

  Scheltens P, Rockwood K. How golden is the gold standard of neuropathology in dementia? Alzheimer's & Dementia 2011;7(4):486-9.

Schünemann 2020a

  Schünemann HJ, Mustafa RA, Brozek J, Steingart KR, Leeflang M, Murad MH, et al. GRADE guidelines: 21 part 1. Study design, risk of bias, and indirectness in rating the certainty across a body of evidence for test accuracy. Journal of Clinical Epidemiology 2020;122:129-41.

Schünemann 2020b

  Schünemann HJ, Mustafa RA, Brozek J, Steingart KR, Leeflang M, Murad MH, et al. GRADE guidelines: 21 part 2. Test accuracy: inconsistency, imprecision, publication bias, and other domains for rating the certainty of evidence and presenting it in evidence profiles and summary of findings tables. Journal of Clinical Epidemiology 2020;122:142-52.

Storey 2004

  Storey JE, Rowland JT, Basic D, Conforti DA, Dickson HG. The Rowland Universal Dementia Assessment Scale (RUDAS): a multicultural cognitive assessment scale. International Psychogeriatrics 2004;16(1):13-31.

Tombaugh 1992

  Tombaugh TN, McIntyre NJ. The Mini‐Mental State Examination: a comprehensive review. Journal of the American Geriatrics Society 1992;40(9):922-35.

Velayudhan 2014

  Velayudhan L, Ryu SH, Raczek M, Philpot M, Lindesay J, Critchfield M, et al. Review of brief cognitive tests for patients with suspected dementia. International Psychogeriatrics 2014;26(8):1247-62.

Whiting 2011

  Whiting P, Westwood M, Beynon R, Burke M, Sterne JA, Glanville J. Inclusion of methodological filters in searches for diagnostic test accuracy studies misses relevant studies. Journal of Clinical Epidemiology 2011;64(6):602-7.

Whiting 2011a

  Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS‐2: a revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine 2011;155(8):529-36.

WHO & ADI 2012

  World Health Organization and Alzheimer’s Disease International (WHO & ADI). Dementia: a public health priority. apps.who.int/iris/handle/10665/75263 2012.

WHO 1992

  World Health Organization (WHO). The ICD‐10 classification of mental and behavioural disorders: clinical descriptions and diagnostic guidelines. apps.who.int/iris/handle/10665/37958 1992.

WHO 2017

  World Health Organization (WHO). Global action plan on the public health response to dementia 2017‐2025. apps.who.int/iris/bitstream/handle/10665/259615/9789241513487-eng.pdf 2017.

Wilson 1968

  Wilson JMG, Jungner G. Principles and Practice of Screening for Disease. Public Health Paper No. 34. Geneva: World Health Organization, 1968.

Worster 2008

  Worster A, Carpenter C. Incorporation bias in studies of diagnostic tests: how to avoid being biased about bias. Canadian Journal of Emergency Medicine 2008;10(2):174-5.
