Abstract
The Montreal Cognitive Assessment (MoCA) is a free, easily accessible screener ideal for rural areas where resources are limited. We examined administration and scoring by Veteran Community Outreach Health Workers (VCOHWs), compared positive screening rates using two cutoff scores, and examined predictors of education-adjusted scores in N = 168 rural military veterans from the Alabama Veterans Rural Health Initiative. Accuracy of administration (95 percent) and scoring (68 percent) was calculated, and recommendations are offered. Higher-than-expected rates of positive screens were observed (40 percent using a 24/30 cutoff) in this relatively young (M = 55 years), community-dwelling sample. Age, education, and race, but not subjective health, predicted differences in domain and total education-adjusted scores on multivariate and univariate tests. This study advances social science research in rural communities by being the first to (1) examine MoCA scores in a rural, Deep South U.S. sample and (2) report administration fidelity data for VCOHWs.
Keywords: Assessment, cognitive screening, dementia, outreach, veteran
INTRODUCTION
The Montreal Cognitive Assessment (MoCA), published by Nasreddine et al. (2005), has clinical utility in conditions ranging from carbon monoxide poisoning to epilepsy to various major and minor neurocognitive disorders (e.g. Parkinson's disease). It has been translated into over 50 languages, resulting in more than 85 alternate versions. Abbreviated forms and accommodations for individuals with low literacy or visual impairment have also been developed (Julayanont et al. 2015). Strong interest in cognitive screening alternatives to the Mini-Mental State Examination (MMSE; Folstein, Folstein, and McHugh 1975) emerged as emphasis shifted to earlier detection through more sensitive screening tools and as distribution of the MMSE became restricted by a controversial copyright license requiring an official test form for every administration (Newman 2015). The MoCA has gained favor as one of the most promising alternatives to the historically ubiquitous MMSE. Like the MMSE, the MoCA can be administered in approximately 10 minutes, uses a familiar 30-point scale, and is easy to administer, score, and interpret, facilitating potential use by non-expert outreach staff in underserved areas and thus broadening access to screening and referral for services. Tables for converting raw MoCA scores to MMSE scores have made interpreting discrepancies between the two 30-point scales clearer (e.g. a score of 18 on the MoCA is comparable to a 24 on the MMSE) (Roalf et al. 2013; Adamis et al. 2016).
An abundance of international publications describing MoCA cross-cultural validation studies from memory clinics and geriatric psychiatry settings is available (e.g. Smith, Gildeh, and Holmes 2007; Hu et al. 2013; Lee et al. 2008; O'Driscoll and Shaikh 2017). With such widespread use, variability in interpretation and in optimal cutoff scores for some populations has been described (e.g. Luis, Keegan, and Mullan 2009), and adjustments beyond the one-point education correction may be warranted (Freitas et al. 2012; Malek-Ahmadi et al. 2015).
Carson, Leach, and Murphy (2018) selected nine international diagnostic validity studies (with Ns ranging from 35 to 266) from an initial pool of 304 articles to conduct a meta-analysis examining optimal MoCA cutoff scores. Their results support a lower cutoff than the original Canadian norms to reduce the high observed rates of "false positives." Their review is not the first to suggest a downward adjustment of the cutoff score for optimal classification (e.g. see Luis et al. 2009, based on a sample in Florida). Similarly, Rossetti et al. (2017) pointed out that a majority of African American participants scored below the suggested 26-point cutoff, which could contribute to inappropriate categorization. This possibility is particularly problematic given the established health disparity in the United States whereby African Americans are at increased risk of dementia (Mayeda et al. 2016).
The MoCA's rapid uptake can be credited to: (1) administration that can be completed by trained non-clinical examiners; (2) superior sensitivity to detect mild cognitive impairment; (3) materials that are publicly available for use without permission in clinical and educational settings (MocaTest.org); and (4) written, standardized administration and scoring instructions. These strengths make the MoCA ideal for social scientists across disciplines conducting research in rural areas, where access to partnering medical providers, neuropsychologists, and health insurance may be lower than in urban areas. In addition, MoCA administration by trained non-clinical examiners, such as Veteran Community Outreach Health Workers (VCOHWs), may streamline complex referral processes in communities defined as health professional and mental health professional shortage areas nationally (Merwin et al. 2003; Wang and Luo 2005).
This study describes baseline assessment data collected by Veteran Community Outreach Health Workers for the Alabama Veterans Rural Health Initiative, a larger study aimed at understanding potential barriers to health care for rural-dwelling military veterans who were not utilizing Veterans Health Administration (VHA) services (Allen et al. 2013; Davis et al. 2011; Hilgeman et al. 2014). It contributes to the growing MoCA cross-cultural validation work by addressing two specific objectives:
1. To describe administration and scoring of the MoCA by VCOHWs in a rural setting of the southeastern United States. To date, there are no published studies on the use of the MoCA by non-clinician examiners.
2. To examine the impact of age, education, subjective health, and race on education-adjusted MoCA scores in rural military veterans, and to compare performance (e.g. domain scores, rates of positive screens) to the published norms from the original Canadian-based sample.
Facilitating mental health screening and referral services in underserved rural areas through the use of cognitive screening tools administered by non-clinician examiners addresses a major public health need and broadens access to care in these communities.
METHODS
The Alabama Veterans Rural Health Initiative (AVRHI) was a clinical outreach program and research study targeting increased enrollment and appointment attendance for rural veterans not utilizing VHA services. Rural veterans completed a baseline assessment and were randomized to either a multi-component Enhanced Enrollment and Engagement (EEE) intervention or to an Administrative Outreach (AO) condition (Davis et al. 2011; Hilgeman et al. 2014). Study procedures were approved by the Institutional Review Boards at both facilities (the Tuscaloosa and Birmingham VA Medical Centers in Alabama; TVAMC and BVAMC, respectively), and participants completed informed consent prior to study enrollment.
Recruitment and Eligibility
Participants were recruited using a variety of methods by Veteran Community Outreach Health Workers (VCOHWs). Veterans who were at least 19 years old, had decision-making capacity, resided in rural zip codes as determined by the VA's Planning Systems Support Group, and had not accessed VHA services in ≥2 years were eligible. No health-related or cognitive inclusion or exclusion criteria were applied, and no veteran was excluded based on gender, race, social class, or ethnicity.
Rural veterans (N = 203) from 31 counties in the state of Alabama completed the baseline assessment prior to randomization. The MoCA was administered at baseline for 168 of the 203 veterans; data for the full sample were not available because the measure was added after the program was already underway. Participants were interviewed in their homes or another private location in their communities to eliminate travel to the medical center. Detailed baseline characteristics, study procedures, and primary outcomes for the AVRHI are described elsewhere (Davis et al. 2011; Hilgeman et al. 2014).
Veteran Community Outreach Health Workers (VCOHWs)
VCOHWs completed the MoCA as well as all other aspects of recruitment, data collection, and intervention delivery, rather than relying on "traditional" research coordinators or health science specialist research staff. VCOHWs were non-clinical VA employees who were not trained in research prior to study involvement. Eight VCOHWs supported the program's two sites. Most VCOHWs were military veterans with some previous experience working with other veterans and familiarity with diverse rural communities in Alabama. Some resided in rural areas, though that was not a prerequisite for being hired onto the project. The majority had completed a bachelor's degree (n = 5, 62.5 percent), two had high school diplomas (25 percent), and one (12.5 percent) had a Master's degree in social work. VCOHWs were predominantly Black/African American (n = 6, 75 percent); one was Hispanic/Latino and one was White/Caucasian. Men and women were equally represented (50 percent each). VCOHWs were trained on MoCA administration and scoring by experienced clinicians (a clinical psychologist or a nurse practitioner) using role play and live practice procedures. Administration and scoring were then reviewed periodically for the duration of the program to ensure fidelity. Audits of scoring procedures were also completed on 100 percent of administered MoCAs by a clinical psychologist (MMH) to ensure reliability of obtained scores.
Measures
Baseline measures were completed through self-report or interview for: illness burden, occupational and functional disability, psychiatric symptoms, stress and trauma checklists, and healthcare utilization (see Davis et al. 2011 for sample characteristics and Hilgeman et al. 2014 for study outcomes). General demographics, military history, and legal history basic to a study of healthcare access and barriers were also included.
The Montreal Cognitive Assessment (MoCA), Version 7.1 (original version), is a cognitive screening instrument in the public domain (http://www.mocatest.org/default.asp) that takes approximately 10 minutes to administer and has excellent sensitivity and specificity for mild cognitive impairment. It was included in the AVRHI study because it has more frontal/executive functioning items than other cognitive screens, which is important for screening younger individuals and those with potential traumatic brain injuries. Attention and concentration, executive functioning, memory, language, visuo-constructional skills, conceptual thinking, calculations, and orientation are assessed across 12 discrete tasks (Table 1). Total scores are out of 30 points, with higher scores indicating better performance. One point is added as an "educational adjustment" for individuals with 12 or fewer years of formal education. A cutoff score of 26 has been recommended by Nasreddine et al. (2005), such that scores of 26-30 indicate normal cognitive functioning. However, Luis et al. (2009), based on a sample collected in Florida, recommend a cutoff of 24, where scores of 24-30 rather than 26-30 indicate normal cognitive functioning. Since Luis et al.'s sample most closely matched the current sample, their cutoff score was used. Frequencies using both cut points are presented for the current sample.
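To make the scoring rule concrete, the following is a minimal sketch (in Python) of the education adjustment and cutoff classification described above. The function names and example values are illustrative only; they are not part of the study protocol or the official MoCA materials.

```python
def adjust_moca(raw_score: int, years_of_education: int) -> int:
    """Apply the MoCA one-point education adjustment.

    One point is added for 12 or fewer years of formal education;
    the adjusted total is capped at the 30-point maximum.
    """
    adjusted = raw_score + 1 if years_of_education <= 12 else raw_score
    return min(adjusted, 30)


def screens_positive(adjusted_score: int, cutoff: int = 26) -> bool:
    """A score below the 'normal' range (cutoff through 30) is a positive screen."""
    return adjusted_score < cutoff


# Hypothetical example (not study data): raw score of 23 with 12 years of education.
score = adjust_moca(raw_score=23, years_of_education=12)      # -> 24
print(screens_positive(score, cutoff=26))  # True: positive at the 26-point cutoff
print(screens_positive(score, cutoff=24))  # False: negative at the 24-point cutoff
```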
Table 1:
Task | Alabama Rural Veterans: Mean (SD) | Alabama Rural Veterans: Mean % of Points Possible | Canada (Nasreddine et al. 2005): Mean (SD) | Canada (Nasreddine et al. 2005): Mean % of Points Possible |
---|---|---|---|---|
Trails (1 pt)* | - | - | 0.87 (0.34) | 87% |
Cube (1 pt) | - | - | 0.71 (0.46) | 71% |
Clock (3 pts) | - | - | 2.65 (0.65) | 88% |
Naming (3 pts) | 2.78 (0.50) | 93% | 2.66 (0.36) | 87% |
**Memory (4 pts)** | 2.39 (1.72) | 60% | 3.73 (1.27) | 93% |
Digit Span (2 pts) | 1.63 (0.57) | 82% | 1.82 (0.44) | 91% |
Letter A (1pt) | 0.91 (0.29) | 91% | 0.97 (0.18) | 97% |
**Serial 7 (3 pts)** | 2.26 (0.92) | 75% | 2.89 (0.41) | 96% |
**Sentence rep (2 pts)** | 1.59 (0.62) | 80% | 1.83 (0.37) | 92% |
**Fluency F (1 pt)** | 0.73 (0.45) | 73% | 0.87 (0.34) | 87% |
Abstraction (2 pts) | 1.74 (0.59) | 87% | 1.83 (0.43) | 92% |
Orientation (6 pts) | 5.87 (0.71) | 98% | 5.99 (0.11) | 99% |
Total (Educ. Adjusted) | 23.66 (4.20) | 78% | 27.27 (2.20) | 91% |
Note. Item-level scores were not available for the Visuospatial tasks in the Alabama Veteran Rural Health Initiative Study since data were entered in accordance with the fields represented on the Original Version 7.1 published by Nasreddine et al. 2005. Domains with discrepancies of more than 10 percent between groups are bolded and discussed as potentially meaningful.
The 12-item Short Form Health Survey (SF-12; Pickard et al. 1999; Ware, Kosinski, and Keller 1996) assessed subjective physical and mental health. This widely used measure has established psychometric properties. Test-retest reliability has been observed at r = 0.89 for the Physical Component Summary and r = 0.76 for the Mental Component Summary. Relative validity estimates for the physical component ranged from 0.43-0.93; estimates for the mental component ranged from 0.60-1.07.
RESULTS
Participants
Data for 168 veterans who completed the MoCA were analyzed. Participant age ranged from 21 to 85, with a mean age of 55.6 years (SD = 14.4). The majority were men (92.9 percent), though 12 (7.1 percent) female veterans also participated. Self-identified race/ethnicity revealed 58.6 percent White/Caucasian, 40.9 percent Black/African American, 1 percent Hispanic (n = 2), and 0.5 percent Asian (n = 1). Sixty-three percent of the participants were married, and 53 percent reported formal education past high school. One in five (22 percent) reported having no health insurance or other coverage and 32 percent reported no income or household income less than or equal to $20,000. Regarding military deployment history, 54.4 percent reported being deployed one or more times during their military career, with 15.5 percent of veteran participants endorsing combat experience.
MoCA Administration and Scoring Accuracy by Non-Experts
Administration and scoring audits were conducted on 100 percent of completed MoCAs across the two study sites. Results revealed that 114 (67.9 percent) of the 168 MoCAs reviewed were scored correctly by the VCOHWs in the field; the remaining 54 (32.1 percent) required adjustments to the scoring. Similar rates of administration/scoring errors were observed across study sites (i.e. 29 errors at the BVAMC and 24 errors at the TVAMC), and errors were evenly distributed across VCOHWs (i.e. individuals with a high school diploma or bachelor's degree were not notably different from those with more education). Importantly, the majority of errors required a post-audit downward adjustment of scores (n = 36, or 66.7 percent of errors), primarily as a result of too much credit being given on tasks assessing the Visual/Spatial domain (i.e. scoring of the clock drawing and cube tasks). Other themes that appeared in scoring errors included: (1) inconsistent application of the educational adjustment, particularly for individuals with a GED, and (2) miscalculation when summing the total score. Errors reflecting potential carelessness in scoring (i.e. recording the wrong number of points in response to partially correct answers at the domain level) were minimal. Adjustments in scores ranged from 1-3 points, with 1 being the modal number of points changed. Evidence of errors in administration was minimal, occurring in fewer than 10 of the 168 tests (5.4 percent).
MoCA Performance
Participant scores on the MoCA ranged from 8 to 30, with a mean of 23.6 (SD = 4.33). When the suggested cutoff score of 26 was applied to the rural Alabama veteran sample, a disproportionately high number of veterans (57.5 percent) "screened positive" for probable mild cognitive impairment. In other words, more than half of the veterans in this community-dwelling sample screened positive for a neurocognitive disorder after educational adjustments had been applied. Our results were most similar to those of the Luis et al. (2009) southeastern United States sample (collected largely in northern Florida and southeastern Georgia), which reported a mean score of 25.9 (SD = 1.8) in healthy controls; by comparison, our sample's mean was more than 2 points lower. Using the Luis et al. (2009) modified cutoff of 24 yielded a 39.5 percent positive screen rate for possible mild cognitive impairment, still a higher than expected percentage. To further explore these trends, task-level data from this sample are compared to the original Nasreddine et al. (2005) normative data in Table 1. Item-level performance data revealed the greatest discrepancies in tasks associated with memory performance, verbal fluency, serial 7 calculations, and sentence repetition.
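At the sample level, the cutoff comparison reduces to counting education-adjusted scores that fall below each threshold, which is how the contrast between the two positive-screen rates arises. The brief sketch below illustrates the computation with made-up scores, not the AVRHI data.

```python
import numpy as np

# Hypothetical education-adjusted MoCA scores (not the AVRHI data).
scores = np.array([30, 29, 27, 26, 25, 24, 23, 22, 20, 18])

def positive_screen_rate(scores: np.ndarray, cutoff: int) -> float:
    """Proportion of scores falling below the 'normal' range (cutoff through 30)."""
    return float(np.mean(scores < cutoff))

print(positive_screen_rate(scores, cutoff=26))  # 0.6 -> more positives at the 26 cutoff
print(positive_screen_rate(scores, cutoff=24))  # 0.4 -> fewer positives at the 24 cutoff
```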
Impact of Age, Education, and Race on MoCA Scores
Correlation analyses revealed that age was negatively correlated with overall performance (r = −.48, p < .001). Age was also negatively correlated with five of six cognitive domains: visuospatial/executive abilities, naming, delayed recall, orientation, and attention. Abstract thinking was not significantly related to age. An ANOVA predicting the total education-adjusted MoCA score from subjective health, race, age, and education revealed a significant model [F(4, 158) = 17.48, p < .0001], such that older age (t = −7.63, p < .001), identifying as Black or African American (t = −2.51, p = .01), and less education (t = 3.25, p = .001) significantly predicted lower MoCA scores. Subjective health was not significant.
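For readers who wish to reproduce this type of model on their own data, a minimal sketch using Python and statsmodels is shown below. The file name and column names (moca_total, subjective_health, race, age, education_years) are assumptions for illustration; they do not reflect the study's actual analysis code or variable coding.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per veteran, with the education-adjusted
# MoCA total and the four predictors used in the model.
df = pd.read_csv("moca_baseline.csv")  # illustrative file name

# Linear model predicting the education-adjusted MoCA total from subjective
# health, race (categorical), age, and years of education.
model = smf.ols(
    "moca_total ~ subjective_health + C(race) + age + education_years",
    data=df,
).fit()

print(model.summary())  # overall model F test plus per-predictor t tests
```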
Next, three multivariate tests using MANOVA were conducted to further examine associations among age, race, education, and the collection of domain scores (see Table 2). Results revealed significant omnibus tests for age [F (7, 152) = 10.31, p < 0.01; Wilks’ Lambda = 0.68] and education [F (7, 152) = 3.76, p < 0.01; Wilks’ Lambda = 0.85], which suggests that the domain level results can be meaningfully interpreted. Race was not a significant predictor of the collection of MoCA domain scores. Examining the between-subjects effects for age revealed a significant effect on the domain scores of visuospatial/executive abilities (F (1, 158) = 38.35, p < 0.01), naming (F (1, 158) = 11.88, p < 0.01), delayed recall (F (1, 158) = 29.09, p < 0.01), orientation (F (1, 158) = 5.85, p = 0.02), and attention (F (1, 158) = 23.69, p < 0.01), such that as age increased, domain scores decreased; abstract thinking and language were not predicted by age. Regarding education, there was a significant difference in the domain scores for visuospatial/executive abilities (F (1, 158) = 12.80, p < 0.01), naming (F (1, 158) = 4.20, p = 0.04), delayed recall (F (1, 158) = 3.93, p = 0.05), attention (F (1, 158) = 14.72, p < 0.01), and language (F (1, 158) = 6.04, p = 0.02); higher education predicted higher domain scores; abstract thinking and orientation were not predicted by education.
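The multivariate step can be sketched in the same way; statsmodels provides a MANOVA interface whose omnibus tests (including Wilks' lambda) parallel those reported here. Again, the file and column names are illustrative assumptions rather than the study's analysis code.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("moca_baseline.csv")  # same hypothetical file as above

# Seven domain scores as the multivariate outcome, with age, years of
# education, and race as predictors (illustrative column names).
mv = MANOVA.from_formula(
    "visuospatial + naming + attention + language + abstraction"
    " + delayed_recall + orientation ~ age + education_years + C(race)",
    data=df,
)
print(mv.mv_test())  # Wilks' lambda and related multivariate tests per term
```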
Table 2:
Domain | Race | Education | Age |
---|---|---|---|
Visuospatial / Executive | 9.59 (.002) | 12.80 (<.001) | 38.35 (<.001) |
Naming | 0.17 (.681) | 4.20 (.042) | 11.88 (.001) |
Attention | 2.49 (.117) | 14.72 (<.001) | 23.69 (<.001) |
Language | 1.76 (.187) | 6.04 (.015) | 2.68 (.104) |
Abstraction | 1.86 (.174) | 2.05 (.155) | 0.467 (.495) |
Delayed Recall | 1.12 (.292) | 3.93 (.049) | 29.09 (<.001) |
Orientation | 0.08 (.772) | 0.36 (.551) | 5.85 (.017) |
Note. Values are reported as F (p value). A fourth model predicting domain scores from subjective health was not significant and is not depicted. Bolded items represent significance at p < 0.05.
DISCUSSION
The current study is the first to examine MoCA administration and scoring by trained VCOHWs in rural community settings rather than by clinical or research-prepared staff. We found that the MoCA can be administered and scored effectively by trained examiners with a variety of educational backgrounds (e.g. high school diploma, bachelor's degree), but that supervision by more experienced clinicians is necessary to maintain fidelity. Rather than incorrectly identifying deficits where none existed, VCOHWs were more likely to give credit on subjective items such as the cube and clock drawing tasks. This observation is consistent with one prior study that found that less experienced registered nurse administrators tended to err in the direction of giving too much credit when scoring the Mini-Mental State Examination (Koder and Klahr 2010). Though only one-point downward adjustments were typically needed, this pattern increases the likelihood of a Type II error, that is, inaccurately determining that someone is more intact than their true performance indicates. Strict adherence to the scoring criteria is important for administrators to keep in mind because being too "generous" with scoring could produce false negatives on a screener designed to be a first-line, quick assessment of underlying cognitive decline that warrants further evaluation and possible referral. However, since non-clinical staff are unlikely to be used for screening in the process of diagnosis, this specific risk seems outweighed by the accessibility gained by using VCOHWs and other members of the rural community to complete the assessments. Therefore, these findings may offer preliminary support for other paraprofessionals in community agencies (e.g. community health advocates, patient navigators) and/or research assistants (e.g. undergraduate students) to utilize screening tools like the MoCA when written administration and scoring criteria are well established.
Building on the work by Koder and Klahr (2010) with registered nurses and the MMSE, several recommendations are offered to increase confidence in reliable MoCA administration and scoring by non-experts: (1) review all administered tests for calculation errors and for inconsistent or incorrect application of the education adjustment; (2) ensure responses are recorded so that scoring can be "checked" once back in the office; (3) retrain or recalibrate scoring and administration with an experienced clinician at regular intervals (e.g. every 2-3 months) to prevent drift; and (4) create a one-page scoring and administration reference sheet for easy access in the field. Finally, with non-clinicians it may be particularly important to explain during the initial training that withholding points during scoring is not "unkind." Several of our VCOHWs benefited from reassurance that participants could make several mistakes and still fall within the normal range, and that missing any given item is not in itself indicative of cognitive decline. A quick introduction to the measure development process (e.g. how cognitive screening tasks are selected to differentiate levels of ability) may also help provide context for seemingly "subjective" decisions.
The current study also expands the growing body of work utilizing the MoCA in diverse samples (e.g. Carson et al. 2018). This exclusively rural sample has a higher representation of African American participants than the standardization sample, a higher percentage of men, and more individuals with lower education. While Carson and colleagues' work included culturally and educationally heterogeneous samples, four of the nine analyzed studies used non-English-speaking samples. The current study offers data from a group not yet represented, which may help clinicians and researchers interpret findings with similar populations in the rural United States, a health and mental health provider shortage area (Merwin et al. 2003; Wang and Luo 2005).
In this study, as in others (Luis et al. 2009; Malek-Ahmadi et al. 2015), age was negatively correlated with overall performance: the older participants were, the lower their total MoCA scores. This was also true for all domains besides abstract thinking. Race and subjective health were not predictors of performance on the MoCA domains. Rather, outside of age, education appears to play a role in how participants scored on both total scores and certain domains, in that higher education predicted higher scores in every domain aside from abstract thinking and orientation. Perhaps for this particular population the quality of early-life education plays more of a role than overall education level. Sisco et al. (2014) found that among African Americans, poorer educational quality was associated with lower baseline cognitive level and greater negative cognitive change over time. This is a potential factor in the current study that may warrant further investigation by rural social scientists. Moreover, this finding should inform training initiatives for veteran community outreach workers in rural settings in order to improve access to accurate cognitive screening in underserved areas.
This study highlights the importance of establishing norms for rural areas that extend beyond the original Canadian normative sample. Participants in this study tended to score well below the suggested cutoff score of 26 proposed by Nasreddine et al. (2005) to determine the presence of cognitive decline. This is especially important in rural areas, where healthcare resources are limited and a readily available, free screener may be the first-line or even the only assessment tool available for determining a patient's cognitive status. Julayanont et al. (2015) have acknowledged that education and literacy levels affect the original MoCA's ability to detect mild cognitive impairment and, as a result, have developed an alternate version called the Montreal Cognitive Assessment-Basic (MoCA-B). The MoCA-B replaces complex executive functioning and literacy-dependent items with simpler tasks (e.g. a simplified trails task, fruit naming as the semantic fluency task, a visuoperception task identifying images in an overlapping drawing, and animal naming with more detail to facilitate recognition). However, norms for this simplified version are not yet available and await further study.
One limitation of this study is that other cognitive measures were not obtained; thus, true validation and psychometric work were not possible. Additionally, because these individuals were not in the VHA electronic health record, confirmation of diagnoses was also not possible. The use of data from a larger study also limits the conclusions we can draw about examiner fidelity, since the study was not designed with that research question in mind. Future translational research could answer this question with a prospective design by systematically comparing adherence to MoCA administration/scoring guidelines across examiner groups of interest (undergraduate research assistants, nurses, paraprofessionals, providers, etc.).
CONCLUSION
While screening measures – even in the hands of clinical professionals – are not indicated for the diagnosis of a minor or major neurocognitive disorder, the information gleaned from a screening tool like the MoCA can be invaluable particularly in rural communities and other health and mental health provider shortage areas. Confidence in the administration, scoring, and interpretation of performance data shapes the characterization of the study sample and appropriate next steps in the case of clinical evaluations. For applied projects conducted in rural areas, like the state-wide AVRHI enrollment study, poor performance on the MoCA can be used to prompt a higher level of enrollment and scheduling support to ensure that the individual is able to access and effectively utilize the healthcare system. As with published findings on other screeners (e.g. mental health) in this population, more research is needed to fully understand the scope of cognitive impairment in rural military populations and ultimately the impact on functional and health-related outcomes.
ACKNOWLEDGMENTS
We are grateful for the support of the VA Office of Rural Health (funding), Lawrence Biro, EdD (former VA VISN 7 Director), Alan Tyler, MS, MPA, FACHE (former Tuscaloosa VA Medical Center Director), Rica Lewis-Payton, MHA, FACHE (former Birmingham VA Medical Center Director), and VA Research Services at the Tuscaloosa and Birmingham VA Medical Centers. A full list of participating Alabama Veteran Rural Health Initiative (AVRHI) investigators and institutions is published elsewhere (Hilgeman et al. 2014).
FUNDING
The VA Office of Rural Health funded staff salaries and provided resources for the outreach activities described in this study; however, the sponsor did not have a role in study design, collection, analysis, interpretation of data, preparation of this report, or the decision to submit the paper for publication. In addition, the first author’s contributions were supported by Career Development Awards from VA Rehabilitation Research & Development Service (IK1 RX000791, IK2 RX001824, Hilgeman, PI). The contents do not represent the views of the U.S. Department of Veterans Affairs or the United States Government.
DISCLOSURE STATEMENT
No potential conflict of interest was reported by the authors.
Contributor Information
Michelle M. Hilgeman, Tuscaloosa VA Medical Center, Alabama Research Institute on Aging and the Department of Psychology at The University of Alabama, and The University of Alabama School of Medicine.
Eugenia M. Boozer, Tuscaloosa VA Medical Center and Florida Institute of Technology.
A. Lynn Snow, Tuscaloosa VA Medical Center and Alabama Research Institute on Aging and the Department of Psychology at The University of Alabama.
Rebecca S. Allen, Alabama Research Institute on Aging and the Department of Psychology at The University of Alabama.
Lori L. Davis, Tuscaloosa VA Medical Center and The University of Alabama School of Medicine.
REFERENCES
- Adamis D, Helmi L, Fitzpatrick O, Meager D, and McCarthy G. 2016. "Agreement and Equation between Mini Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) in an Old Age Psychiatry Outpatient Clinic Population." European Psychiatry 33:S184. doi: 10.1016/j.eurpsy.2016.01.402
- Allen Rebecca S., Guadagno Rosanna E., Parmelee Patricia, Minney Jessica A., Hilgeman Michelle M., Tabb Kroshona D., McNeal Sandre F., Houston Thomas, Kertesz Stefan, and Davis Lori L. 2013. "Internet Connectivity Among Rural Alabama Veterans: Baseline Findings from the Alabama Veterans Rural Health Initiative Project." Rural and Remote Health 13(1):2138–2147. Available at: http://www.rrh.org.au
- Carson Nicole, Leach Larry, and Murphy Kelly J. 2018. "A Re-examination of Montreal Cognitive Assessment (MoCA) Cutoff Scores." International Journal of Geriatric Psychiatry 33(2):379–388. doi: 10.1002/gps.4756
- Davis Lori L., Kertesz Stefan G., Mahaney-Price Ann F., Martin Michelle Y., Tabb Kroshona D., Pettey Kristin M., McNeal Sandre F., Granstaff U. Shanette, Hamner Karl, Powell M. Paige, Hilgeman Michelle M., Snow A. Lynn, Stanton Marietta, Parmelee Patricia, Litaker Mark S., and Hawn Mary T. 2011. "Alabama Veterans Rural Health Initiative: A Preliminary Evaluation of Unmet Health Care Needs." Journal of Rural Social Sciences 26(3):14–31.
- Folstein Marshal F., Folstein Susan E., and McHugh Paul R. 1975. "'Mini-Mental State': A Practical Method for Grading the Cognitive State of Patients for the Clinician." Journal of Psychiatric Research 12(3):189–198.
- Freitas Sandra, Simões Mario R., Alves Lara, and Santana Isabel. 2012. "Montreal Cognitive Assessment: Influence of Sociodemographic and Health Variables." Archives of Clinical Neuropsychology 27(2):165–175. doi: 10.1093/arclin/acr116
- Hilgeman Michelle M., Mahaney-Price Ann F., Stanton Marietta P., McNeal Sandre F., Pettey Kristin M., Tabb Kroshona D., Litaker Mark S., Parmelee Patricia, Hamner Karl, Martin Michelle Y., Hawn Mary T., Kertesz Stefan G., Davis Lori L., and the Alabama Veterans Rural Health Initiative Steering Committee. 2014. "Alabama Veterans Rural Health Initiative: A Pilot Study of Enhanced Community Outreach in Rural Areas." Journal of Rural Health 30(2):153–163. doi: 10.1111/jrh.12054
- Hu Jian-bo, Zhou Wei-hua, Hu Shao-hua, Huang Man-li, Wei Ning, Qi Honglie, Huang Jin-wen, and Xu Yi. 2013. "Cross-cultural Difference and Validation of the Chinese Version of Montreal Cognitive Assessment in Older Adults Residing in Eastern China: Preliminary Findings." Archives of Gerontology and Geriatrics 56(1):38–43.
- Julayanont Parunyou, Tangwongchai Sookjaroen, Hemrungrojn Solaphat, Tunvirachaisakul Chavit, Phanthumchinda Kammant, Hongsawat Juntanee, Suwichanarakul Panida, Thanasirorat Saowaluck, and Nasreddine Ziad S. 2015. "The Montreal Cognitive Assessment—Basic: A Screening Tool for Mild Cognitive Impairment in Illiterate and Low-Educated Elderly Adults." Journal of the American Geriatrics Society 63(12):2550–2554.
- Koder Deborah-Anne, and Klahr Amanda. 2010. "Training Nurses in Cognitive Assessment: Uses and Misuses of the Mini-Mental State Examination." Educational Gerontology 36(10-11):827–833. doi: 10.1080/03601277.2010.485027
- Lee Jun-Young, Lee Dong Woo, Cho Seong-Jin, Na Duk L., Jeon Hong Jin, Kim Shin-Kyum, Lee You Ra, Youn Jung-Hae, Kwon Miseon, Lee Jae-Hong, and Cho Maeng Je. 2008. "Brief Screening for Mild Cognitive Impairment in Elderly Outpatient Clinic: Validation of the Korean Version of the Montreal Cognitive Assessment." Journal of Geriatric Psychiatry and Neurology 21(2):104–110. doi: 10.1177/0891988708316855
- Luis Cheryl A., Keegan Andrew P., and Mullan Michael. 2009. "Cross Validation of the Montreal Cognitive Assessment in Community Dwelling Older Adults Residing in the Southeastern US." International Journal of Geriatric Psychiatry 24(2):197–201. doi: 10.1002/gps.2101
- Malek-Ahmadi Michael, Powell Jessica J., Belden Christi M., O'Connor Kathy, Evans Linda, Coon David W., and Nieri Walter. 2015. "Age- and Education-Adjusted Normative Data for the Montreal Cognitive Assessment (MoCA) in Older Adults Age 70–99." Aging, Neuropsychology, and Cognition 22(6):755–761. doi: 10.1080/13825585.2015.1041449
- Mayeda Elizabeth Rose, Glymour Maria M., Quesenberry Charles P., and Whitmer Rachel A. 2016. "Inequalities in Dementia Incidence Between Six Racial and Ethnic Groups Over 14 Years." Alzheimer's & Dementia 12(3):216–224. doi: 10.1016/j.jalz.2015.12.007
- Merwin Elizabeth, Hinton Ivora, Dembling Bruce, and Stern Steven. 2003. "Shortages of Rural Mental Health Professionals." Archives of Psychiatric Nursing 17(1):42–51. doi: 10.1053/apnu.2003.1
- Nasreddine Ziad S., Phillips Natalie A., Bédirian Valerie, Charbonneau Simon, Whitehead Victor, Collin Isabelle, Cummings Jeffrey L., and Chertkow Howard. 2005. "The Montreal Cognitive Assessment (MoCA): A Brief Screening Tool for Mild Cognitive Impairment." Journal of the American Geriatrics Society 53(4):695–699. doi: 10.1111/j.1532-5415.2005.53221.x
- Newman John C. 2015. "Copyright and Bedside Cognitive Testing: Why We Need Alternatives to the Mini-Mental State Examination." JAMA Internal Medicine 175(9):1459–1460. doi: 10.1001/jamainternmed.2015.2159
- O'Driscoll Ciarán, and Shaikh Madiha. 2017. "Cross-Cultural Applicability of the Montreal Cognitive Assessment (MoCA): A Systematic Review." Journal of Alzheimer's Disease 58(3):789–801. doi: 10.3233/JAD-161042
- Pickard A. Simon, Johnson Jeffrey A., Penn Andrew, Lau Francis, and Noseworthy Tom. 1999. "Replicability of SF-36 Summary Scores by the SF-12 in Stroke Patients." Stroke 30(6):1213–1217. doi: 10.1161/01.STR.30.6.1213
- Roalf David R., Moberg Paul J., Xie Sharon X., Wolk David A., Moelter Stephen T., and Arnold Steven E. 2013. "Comparative Accuracies of Two Common Screening Instruments for Classification to Alzheimer's Disease, Mild Cognitive Impairment, and Healthy Aging." Alzheimer's and Dementia 9(5):529–537. doi: 10.1016/j.jalz.2012.10.001
- Rossetti Heidi C., Lacritz Laura H., Hynan Linda S., Cullum C. Munro, Van Wright Aaron, and Weiner Myron F. 2017. "Montreal Cognitive Assessment Performance among Community-Dwelling African Americans." Archives of Clinical Neuropsychology 32(2):238–244. doi: 10.1093/arclin/acw095
- Sisco Shannon, Gross Alden L., Shih Regina A., Sachs Bonnie C., Glymour M. Maria, Bangen Katherine J., Benitez Andreana, Skinner Jeannine, Schneider Brooke C., and Manly Jennifer J. 2014. "The Role of Early-life Educational Quality and Literacy in Explaining Racial Disparities in Cognition in Late Life." Journals of Gerontology, Series B: Psychological Sciences and Social Sciences 70(4):557–567. doi: 10.1093/geronb/gbt133
- Smith Tasha, Gildeh Nadia, and Holmes Clive. 2007. "The Montreal Cognitive Assessment: Validity and Utility in a Memory Clinic Setting." Canadian Journal of Psychiatry 52(5):329–332. doi: 10.1177/070674370705200508
- Wang Fahui, and Luo Wei. 2005. "Assessing Spatial and Nonspatial Factors for Healthcare Access: Towards an Integrated Approach to Defining Health Professional Shortage Areas." Health & Place 11(2):131–146. doi: 10.1016/j.healthplace.2004.02.003
- Ware John E., Kosinski Mark, and Keller Susan D. 1996. "A 12-Item Short-form Health Survey: Construction of Scales and Preliminary Tests of Reliability and Validity." Medical Care 34(3):220–233. doi: 10.1097/00005650-199603000-00003