Author manuscript; available in PMC: 2016 May 17.
Published in final edited form as: Neurology. 2006 May 9;66(9):1367–1372. doi: 10.1212/01.wnl.0000210527.13661.d1

Cognitive performance predicts treatment decisional abilities in mild to moderate dementia

RJ Gurrera 1, J Moye 1, MJ Karel 1, AR Azar 1, JC Armesto 1
PMCID: PMC4869070  NIHMSID: NIHMS784920  PMID: 16682669

Abstract

Objective

To examine the contribution of neuropsychological test performance to treatment decision-making capacity in community volunteers with mild to moderate dementia.

Methods

The authors recruited volunteers (44 men, 44 women) with mild to moderate dementia from the community. Subjects completed a battery of 11 neuropsychological tests that assessed auditory and visual attention, logical memory, language, and executive function. To measure decision-making capacity, the authors administered the Capacity to Consent to Treatment Interview, the Hopemont Capacity Assessment Interview, and the MacArthur Competence Assessment Tool–Treatment. Each of these instruments individually scores four decisional abilities underlying capacity: understanding, appreciation, reasoning, and expression of choice. The authors used principal components analysis to generate component scores for each ability across instruments, and to extract principal components for neuropsychological performance.

Results

Multiple linear regression analyses demonstrated that neuropsychological performance significantly predicted all four abilities. Specifically, it predicted 77.8% of the common variance for understanding, 39.4% for reasoning, 24.6% for appreciation, and 10.2% for expression of choice. Except for reasoning and appreciation, neuropsychological predictor (β) profiles were unique for each ability.

Conclusions

Neuropsychological performance substantially and differentially predicted capacity for treatment decisions in individuals with mild to moderate dementia. Relationships between elemental cognitive function and decisional capacity may differ in individuals whose decisional capacity is impaired by other disorders, such as mental illness.


Capacity for treatment decisions hinges on four legal standards, or abilities1–4: understanding (comprehension of diagnostic and treatment information), appreciation (personalization of information through integration with one’s values, beliefs, and expectations), reasoning (evaluation of treatment alternatives in light of potential consequences for everyday life), and expression of choice (communication of a treatment decision). This taxonomy corresponds broadly to cognitive models for making treatment decisions.5,6 Each ability is necessary for an intact decision-making capacity.7

Capacity evaluations are invariably consequential for both patient and treaters, but assessment procedures and outcomes are inconsistent.8,9 Reasons include clinicians’ misconceptions10,11 and the intrusion of personal values12–15 or professional bias4,16 into the assessment process. Not surprisingly, physician proxies do not accurately predict patients’ treatment preferences.17,18 Recognizing the inherent limitations of conventional capacity assessments, federal practice guidelines for psychologists19 recommend supplementation with specific instruments. Three such instruments are the Capacity to Consent to Treatment Interview (CCTI),20 the Hopemont Capacity Assessment Interview (HCAI),21 and the MacArthur Competence Assessment Tool–Treatment (MacCAT-T).22 While these instruments advance efforts to improve objectivity in capacity assessments, they yield only mixed results for convergent validity with respect to individual treatment decisional abilities.23

It is logical to examine the neuropsychological correlates of decisional abilities for ways in which to refine their assessment,8,19,24 and preliminary studies are encouraging with respect to the feasibility of this approach.8,25,26 The authors evaluated the contribution of neuropsychological task performance to decisional abilities in community volunteers with mild to moderate dementia. They hypothesized that each ability would be significantly predicted by a distinct subset of neuropsychological tasks, and that understanding would be most, and expression of choice least, strongly associated with neuropsychological performance measures.

Methods

Subjects

The sample consisted of 44 men and 44 women. All subjects had intact primary attentional ability (defined by at least low average Digit Span combined score, equal to a standard score > 6). Mean ± SD age was 74.9 ± 6.2 years, and mean educational achievement was 13.9 ± 3.1 grade levels.

Recruitment procedure

Potential subjects were recruited from the community using fliers distributed in hospital waiting rooms, senior centers, and senior housing; and advertisements placed in local newspapers, council of aging newsletters, and an Alzheimer association newsletter. A total of 290 individuals called to express interest in the study, either on behalf of themselves or their partner. To exclude individuals with psychiatric conditions that might interfere with cognition, all potential participants were screened with the Geriatric Depression Scale Short Form27 (GDS) and the Brief Symptom Inventory28 (BSI). Sixteen (5.5%) callers with a GDS raw score >10 (of 15) or a BSI T score >70 were excluded.

Potential study participants (n = 274) were next screened with the Telephone Interview for Cognitive Status29 (TICS-M), which assesses orientation, 10-word memory, naming, nonverbal praxis, attention, and calculation. The TICS-M was modified to include assessment of delayed recall of the 10-word list (total possible score = 50). Callers with a score less than 30, or those with a score between 30 and 39 who expressed concern about their memory, were considered for the dementia group. Callers with a score ≥40 were tentatively assigned to a nondemented comparison group.
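To make the triage thresholds above concrete, the following minimal Python sketch encodes the screening rules as described. The function and variable names are hypothetical, introduced only for illustration; callers scoring 30–39 without memory concern are shown as unassigned because the text does not specify their disposition.

```python
def triage_caller(gds_raw: int, bsi_t: float, tics_m: int, memory_concern: bool) -> str:
    """Illustrative screening triage following the thresholds described above.

    gds_raw: Geriatric Depression Scale Short Form raw score (0-15)
    bsi_t: Brief Symptom Inventory T score
    tics_m: modified TICS score (maximum 50)
    memory_concern: whether the caller expressed concern about his or her memory
    """
    if gds_raw > 10 or bsi_t > 70:
        return "excluded (psychiatric screen)"
    if tics_m < 30 or (30 <= tics_m <= 39 and memory_concern):
        return "considered for dementia group"
    if tics_m >= 40:
        return "tentatively assigned to comparison group"
    return "unassigned"  # score 30-39 without memory concern; disposition not stated in the text
```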

Informed consent procedure

Information about the study was disclosed in simple direct language, both orally and in written format, to maximize comprehension.30 Participants were informed that they could discontinue participation at any time. Written informed consent was obtained after a complete description of the study had been provided. Only one subject had a guardian; in that case both subject and guardian completed the written informed consent procedure. Subjects were compensated financially for their time. The institutional Human Studies Subcommittee approved this study.

Diagnostic procedure for dementia group

Potential dementia group subjects (n = 154) next completed the 94-item Dementia Diagnostic Screening Questionnaire31 (DDSQ), with the help of a caregiver if needed, and submitted medical records. The DDSQ asks about subjective memory difficulties and risk factors for specific dementia subtypes. Consensus (R.G. and J.M.) clinical diagnoses of DSM-IV dementia were made on the basis of TICS-M scores, DDSQ responses, and medical records. This procedure excluded 34 (22.1%) callers for whom a clinical dementia diagnosis could not be confirmed. In addition, 28 (18.2%) callers declined to participate further, two were too cognitively impaired to complete neuropsychological testing, one died before completing the study, and another was excluded for administrative reasons.

Forty-eight of the remaining 88 dementia subjects had TICS-M scores of 31 or greater, and 40 subjects had TICS-M scores of 30 or less. These cut-off scores have been shown to distinguish mild and moderate dementia,32 and corresponded to 2.0 standard deviations below the comparison group mean. Individuals were included without regard to dementia subtype so that the sample would reflect a typical community population. Information from the DDSQ was used to subtype this group as follows: vascular dementia, 25 (28.4%); probable Alzheimer dementia, 3 (3.4%); possible Alzheimer dementia, 31 (35.2%); PD, 1 (1.1%); traumatic brain injury, 1 (1.1%); and multiple etiologies (i.e., two or more of: vascular dementia, possible or probable Alzheimer dementia, PD, traumatic brain injury, alcohol dementia, or “other”), 27 (30.7%).

Screening procedure for comparison group

Potential subjects for the healthy comparison group (n = 120) completed the 37-item Health Screening Questionnaire33 to exclude individuals with health problems that could cause cognitive impairment (e.g., “Have you ever had a stroke or TIA?”). Seventeen (14.2%) individuals were excluded for health reasons. Another 13 (10.8%) individuals declined further participation and two were excluded for administrative reasons. Thus, the comparison group consisted of 88 healthy volunteers.

Decision capacity assessment instruments

Three instruments were chosen for their prominence in the field: the MacArthur Competence Assessment Tool–Treatment34 (MacCAT-T); the Hopemont Capacity Assessment Interview21 (HCAI); and the Capacity to Consent to Treatment Interview35 (CCTI). All three utilize hypothetical vignettes or actual diagnosis and treatment information, followed by a series of probing questions, to generate a score for each decisional ability.

The CCTI is based on two clinical vignettes (a neoplasm condition and a cardiac condition) presented orally and in writing at a fifth- to sixth-grade reading level. Inter-rater reliability on this instrument is high (r = 0.83 for interval scales, and r > 0.96 for categorical scales).35 The HCAI consists of two clinical vignettes (treatment of an eye infection and administration of cardiopulmonary resuscitation) that are presented after general concepts of choice, risk, and benefit have been reviewed. Pilot data (n = 17) suggest that inter-rater agreement is high (r = 0.93). The MacCAT-T utilizes a semistructured interview to guide the clinician through an assessment of capacity to make an actual treatment decision. For research purposes, a standardized vignette was developed36 that involves choosing between amputation and surgical management of a non-healing toe ulcer. Inter-rater reliability is similar to that for CCTI and HCAI (r ≥ 0.87). Administration order was counterbalanced to avoid order effects.

Neuropsychological testing

Participants completed a neuropsychological battery consisting of 11 tests: Wechsler Memory Scale, third edition37 (Digit Span forward and backward and Logical Memory I and II subtests), Wechsler Adult Intelligence Scale, third edition38 (Vocabulary subtest), Trails A and B39, Visual Search and Attention Test40 (VSAT), Boston Naming Test41 (BNT), Controlled Oral Word Association Test42 (COWA or FAS), and the Mazes Test43 (Large Print Version), which is time-limited. Vocabulary performance was scored as 2 = fully correct, 1 = partially correct. Summed results for digits forward and backward, and VSAT total score, were used in the analyses.

Testing was done at various locations (e.g., medical center, senior center, home) based on subject preference. Test sessions lasted approximately 120 minutes. Participants were given at least one break and were offered additional breaks throughout testing to minimize fatigue.

Statistical analyses

Principal components analysis (PCA) was applied to dementia group ability scores across measures. This approach yielded a more stable and valid measure of each ability (i.e., one less subject to the idiosyncrasies of each instrument)23 while reducing the number of dependent variables (from 12 to 4). PCA was also applied to the dementia group neuropsychological performance data for data reduction, because the number of measures was large relative to the sample size and the measures were highly intercorrelated. Neuropsychological principal components were rotated using Equamax criteria (SPSS 12.0). A series of stepwise linear regressions was conducted with scores for each decisional ability principal component as the dependent variable and neuropsychological component scores as independent variables. Only components with eigenvalues > 1 were considered. The probability of F to enter was 0.05, and the probability of F to remove was 0.10. Significance levels are two-tailed. SPSS 12.0 was used for all statistical analyses.
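As a rough illustration of this pipeline, the sketch below applies the eigenvalue > 1 (Kaiser) retention rule and a forward-entry/backward-removal stepwise regression with the same F-to-enter and F-to-remove probabilities. It is a minimal Python approximation, not the authors’ SPSS procedure: the DataFrame inputs are hypothetical placeholders, and the Equamax rotation applied to the neuropsychological components is omitted for brevity.

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler


def kaiser_pca_scores(df: pd.DataFrame) -> pd.DataFrame:
    """PCA on standardized variables, retaining components with eigenvalue > 1.

    Standardizing first means the eigenvalue > 1 rule operates on (approximately)
    correlation-matrix eigenvalues. Rotation is omitted in this sketch.
    """
    z = StandardScaler().fit_transform(df)
    pca = PCA().fit(z)
    keep = pca.explained_variance_ > 1.0
    scores = pca.transform(z)[:, keep]
    return pd.DataFrame(scores, index=df.index,
                        columns=[f"PC{i + 1}" for i in range(keep.sum())])


def stepwise_ols(y: pd.Series, X: pd.DataFrame, p_enter=0.05, p_remove=0.10):
    """Forward-entry / backward-removal stepwise OLS (SPSS-style defaults)."""
    selected = []
    while True:
        changed = False
        # Forward step: enter the candidate with the smallest p-value below p_enter.
        candidates = [c for c in X.columns if c not in selected]
        if candidates:
            pvals = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().pvalues[c]
                     for c in candidates}
            best = min(pvals, key=pvals.get)
            if pvals[best] < p_enter:
                selected.append(best)
                changed = True
        # Backward step: remove any entered predictor whose p-value exceeds p_remove.
        if selected:
            fit = sm.OLS(y, sm.add_constant(X[selected])).fit()
            worst = fit.pvalues.drop("const").idxmax()
            if fit.pvalues[worst] > p_remove:
                selected.remove(worst)
                changed = True
        if not changed:
            # Final fitted model (None if no predictor met the entry criterion).
            return sm.OLS(y, sm.add_constant(X[selected])).fit() if selected else None
```

In this sketch, the component scores derived from the neuropsychological battery would play the role of X and each decisional ability component score the role of y; the rotation step and SPSS’s exact F statistics are simplifications.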

Results

Ability scores for the dementia group are provided in table 1. Understanding scores across the three instruments showed the widest, and expression of choice the narrowest, range. Understanding scores yielded a single component (eigenvalue 2.397) that accounted for 79.9% of total variance; the range of loadings was 0.891 to 0.898. Appreciation scores produced a single component (1.348, 44.9% of total variance, loadings 0.631 to 0.704). Reasoning scores gave rise to a single component (1.669, 55.6% of total variance, loadings 0.706 to 0.804). Expression of choice scores produced two components. The first (1.338, 44.6% of total variance) contained positive loadings for CCTI (0.855), MacCAT-T (0.525), and HCAI (0.575) choice. The second (1.070, 35.7% of total variance) contained the higher positive loading for MacCAT-T choice (0.755); the larger, but negative, loading for HCAI choice (−0.707); and no loading for CCTI choice (0.013). This second component was not considered further.

Table 1.

Mean decisional ability scores in dementia group

Decisional ability        MacCAT-T,* mean (SD) [range]    HCAI, mean (SD) [range]    CCTI, mean (SD) [range]
Understanding             16.95 (4.10) [4–23]             24.22 (3.32) [11–28]       46.02 (13.74) [7–70]
Appreciation              3.80 (0.53) [1–4]               1.66 (0.52) [0–2]          4.98 (1.58) [1–8]
Reasoning                 6.68 (1.30) [3–8]               5.10 (1.28) [0–6]          3.91 (1.92) [0–8]
Expression of choice      1.97 (0.18) [1–2]               5.80 (0.55) [4–6]          5.74 (0.80) [2–6]
* MacCAT-T subscale scores are reported as actual scores rather than subscale score divided by the number of subscale items.

MacCAT-T = MacArthur Competence Assessment Tool–Treatment; HCAI = Hopemont Capacity Assessment Interview; CCTI = Capacity to Consent to Treatment Interview.

Neuropsychological performance data for the dementia and comparison groups are summarized in table 2. Three components (eigenvalues 4.586, 1.399, 1.170) accounted for 45.9%, 14.0%, and 11.7% of total variance in the dementia group. Component 1 included immediate (0.913) and delayed (0.926) logical memory, and the Boston Naming Test (0.600). Component 2 included Trails A (−0.798) and B (−0.676), mazes (0.846), and total VSAT (0.607). Component 3 included the digits composite score (0.734), FAS Test (0.836), and vocabulary (0.722). Content analysis suggested that component 1 represented verbal retrieval; component 2 represented executive logical control or problem solving (planning, inhibition, flexibility); and component 3 represented general knowledge and executive motivational control (initiation, sustaining).

Table 2.

Neuropsychological performance with and without dementia

Neuropsychological test Dementia group, mean (SD) Comparison group, mean (SD) t p df*
Digits (number of correct trials [forward + backward]) 17.1 (4.2) 18.4 (4.1) 2.04 0.043 174
FAS Test (total distinct words within 60 seconds) 35.3 (12.5) 40.2 (11.8) 2.69 0.008 174
Logical Memory, Immediate (story units recalled) 33.6 (12.2) 41.5 (8.7) 4.91 <0.0005 157.2
Logical Memory, Delayed (story units recalled) 18.0 (10.0) 25.7 (7.0) 5.97 <0.0005 156.3
Trail Making, Part A (seconds) 52.2 (42.3) 39.1 (13.1) −2.76 0.007 103.6
Trail Making, Part B (seconds) 149.2 (82.7) 89.8 (33.1) −6.23 <0.0005 114.2
WAIS Vocabulary (total correct definitions) 41.0 (12.8) 47.7 (9.3) 3.98 <0.0005 159.1
Boston Naming Test (total non-cued correct responses) 49.6 (10.7) 55.8 (3.5) 5.10 <0.0005 105.6
Mazes (total correctly completed mazes) 19.4 (4.8) 22.2 (3.3) 4.58 <0.0005 155.9
VSAT (total correct targets within 60 seconds) 156.8 (52.2) 183.8 (35.0) 4.04 <0.0005 151.9
Age, y 75.3 (6.2) 72.2 (6.5) −3.21 0.002 174
Education (grades completed) 13.9 (3.0) 14.0 (2.7) 0.33 0.739 174
* Degrees of freedom less than 174 indicate that variances were unequal by Levene’s test.

The FAS Test is also known as the Controlled Oral Word Association (COWA) Test.

WAIS = Wechsler Adult Intelligence Scale; VSAT = Visual Search and Attention Test.

Stepwise linear regression (table 3) revealed that understanding in the dementia group was significantly predicted by all three neuropsychological components. Component 1 accounted for 57.3%, component 3 for an additional 12.3%, and component 2 for an additional 8.2% of understanding total variance. For appreciation, component 1 accounted for 11.3%, component 2 for an additional 8.0%, and component 3 for an additional 5.3% of total variance. Reasoning, like understanding, was predicted mainly by component 1 (19.9% of total variance), with components 2 and 3 accounting for an additional 11.6% and 7.9% of total variance. The main choice component was predicted by neuropsychological components 1 and 3 in equal measure (5.2% and 5.0% of total variance), but was not related to neuropsychological component 2. Overall, neuropsychological performance accounted for 77.9% of understanding, 24.6% of appreciation, 39.5% of reasoning, and 10.2% of choice variance.

Table 3.

Stepwise linear regression results for ability components

Understanding
 Regression: sum of squares 67.75, df 3, mean square 22.58, F 98.52, p 0.000, R 0.882, R² 0.779
 Residual: sum of squares 19.25, df 84, mean square 0.23
 Predictors (β, p): Verbal retrieval [1] 0.757, 0.000; Knowledge/motivation [3] 0.351, 0.000; Problem solving [2] 0.287, 0.000

Appreciation
 Regression: sum of squares 21.42, df 3, mean square 7.14, F 9.14, p 0.000, R 0.496, R² 0.246
 Residual: sum of squares 65.58, df 84, mean square 0.78
 Predictors (β, p): Verbal retrieval [1] 0.337, 0.001; Problem solving [2] 0.283, 0.004; Knowledge/motivation [3] 0.230, 0.017

Reasoning
 Regression: sum of squares 34.37, df 3, mean square 11.46, F 18.29, p 0.000, R 0.629, R² 0.395
 Residual: sum of squares 52.63, df 84, mean square 0.63
 Predictors (β, p): Verbal retrieval [1] 0.447, 0.000; Problem solving [2] 0.341, 0.000; Knowledge/motivation [3] 0.282, 0.001

Choice
 Regression: sum of squares 8.89, df 2, mean square 4.44, F 4.84, p 0.010, R 0.320, R² 0.102
 Residual: sum of squares 78.11, df 85, mean square 0.92
 Predictors (β, p): Verbal retrieval [1] 0.228, 0.029; Knowledge/motivation [3] 0.224, 0.032
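The entry-order variance increments reported above (e.g., 57.3%, then an additional 12.3%, then an additional 8.2% for understanding) are simply the changes in R² as each retained component enters the stepwise model. A minimal sketch of that bookkeeping, reusing the hypothetical component-score data frames from the earlier example (the column names below are illustrative only):

```python
import statsmodels.api as sm


def incremental_r2(y, X, entry_order):
    """Change in R-squared as predictors are entered in the given order,
    e.g. entry_order = ["verbal_retrieval", "knowledge_motivation", "problem_solving"].
    """
    steps, previous_r2 = [], 0.0
    for k in range(1, len(entry_order) + 1):
        r2 = sm.OLS(y, sm.add_constant(X[entry_order[:k]])).fit().rsquared
        steps.append((entry_order[k - 1], r2 - previous_r2))  # increment contributed at this step
        previous_r2 = r2
    return steps
```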

For most capacity components, predictor (β) profiles were distinct. Component 1 (verbal retrieval) dominated the profile for understanding, followed by components 3 (knowledge/motivation) and 2 (problem solving). Component 1 also made the largest contribution to reasoning, but it was much less dominant relative to the other components, and, in contrast to understanding, the contribution of component 2 was greater than that of component 3. The neuropsychological profile for appreciation closely paralleled that for reasoning, but indicated a consistently smaller contribution from neuropsychological performance. Finally, choice was unrelated to component 2, and had nearly identical contributions from components 1 and 3. Understanding was most, and choice least, predicted by neuropsychological performance.

Discussion

The principal finding of this study is that performance on a neuropsychological test battery significantly predicted each treatment decisional ability in a sample of community volunteers with mild to moderate dementia of diverse etiologies. Notably, the neuropsychological predictors examined in this study explained 78% of the common variance in understanding, 39.5% in reasoning, and almost 25% in appreciation. In contrast to previous studies, neuropsychological task performance significantly predicted all four decisional abilities, and it also accounted for larger portions of variance for each ability. The use of principal component scores, which captured the common variance across instruments for each ability and excluded idiosyncratic variance associated with each instrument, may account for these differences.

Except for reasoning and appreciation, neuropsychological performance predictor profiles for each ability were distinct. The distinctness of the predictor profiles provides support for the construct validity of the current decisional capacity model, which considers understanding, appreciation, reasoning, and expression of choice to be discrete elements of decision-making capacity. The similarity of predictor profiles for appreciation and reasoning suggests they may utilize substantially equivalent or overlapping cognitive mechanisms. However, Grisso and Appelbaum found that impairments in appreciation and reasoning were relatively independent of one another in a heterogeneous patient sample,44 implying that each of these abilities depends upon more or less discrete cognitive processes. This inconsistency may reflect greater homogeneity of the present sample. Thus, impairments in appreciation and reasoning may evolve in tandem in patients with mild to moderate dementia, but be more differentially affected by other disorders, such as psychiatric illness. For example, appreciation may be relatively more impaired in patients with certain delusional beliefs. This serves as a reminder that individual factors beyond those reliably indexed by neuropsychological task performance contribute to variance in decisional abilities, and that neuropsychological assessment is not a substitute for clinical assessment.

Expression of choice produced two principal components. The first had substantial positive loadings from all three instruments and was weakly predicted by verbal retrieval and knowledge/motivation components. The second component was not interpretable because two loadings were in opposite directions and orthogonal to the third. The consistently narrow range of scores for this ability, relative to the others, probably contributed to the low predictability. In practice, most community-dwelling patients with mild to moderate dementia will be able to express a treatment choice, even if their ability to understand, appreciate, or reason is inadequate to support the choice.

This study is limited in a number of ways. Although the sample size was considerably larger than those reported in previous studies, it may not have been large enough to accurately represent the factor structure of neuropsychological performance in the population of community-dwelling individuals with mild to moderate dementia. Similarly, although our neuropsychological battery was fairly comprehensive compared to previous studies of treatment decision capacity, our findings are restricted to the neuropsychological tests employed. An even more extensive neuropsychological battery, or one using different tests, may yield different results. Subjects’ responses to hypothetical treatment scenarios may differ from their responses in real treatment situations involving a current and active medical condition. Finally, the etiologically diverse nature of the dementia group limits generalizability of these findings to specific types of dementia, as etiologically distinct dementias may demonstrate unique patterns of impairment. However, this diversity is also a strength of the present study, because the relationships demonstrated here are not likely limited to any dementia subtype.

Verbal retrieval was the strongest single predictor of each decisional ability. This may reflect the fact that all of these instruments present information in an exclusively verbal format, and so draw heavily upon verbal information processing mechanisms, including retrieval; or it may be that decision-making is an intrinsically verbal process. Future studies should examine the contribution of other verbal information processing elements to decisional capacity, and explore the feasibility and effectiveness of alternative information presentation formats.

Neuropsychological components predicted much more of the variance in understanding than of the variance in the other abilities. The strong relationship between understanding and neuropsychological test scores is consistent with other research45,46 showing that understanding is more strongly related to neuropsychological tests than are other decisional abilities, and it indicates that understanding may be the most cognitively mediated consent task. This result may also reflect a relatively greater internal consistency of the understanding construct within and across instruments, or the choice of neuropsychological tasks employed here, or both. Specifically, the mental processes represented by existing understanding constructs may draw upon verbal retrieval more heavily than other ability constructs, because verbal retrieval accounted for a much larger portion of the shared variance with understanding than did other neuropsychological components. Not surprisingly, the tasks utilized by these capacity instruments to operationalize understanding resemble those used to test logical memory, which was a main element of the verbal retrieval component. Subsequent studies should employ a broader range of neuropsychological tasks, and explore whether there are subcomponents of reasoning and appreciation that relate more strongly to neuropsychological performance. It is also possible that the greater range of scores on understanding, compared with the ranges observed for other abilities, made it easier to detect a relationship between this ability and neuropsychological performance. The second and third neuropsychological components represent composites of cognitive processes usually viewed as relatively independent; whether these results accurately depict a replicable data structure in the reference population, or reflect idiosyncratic features of our study sample (such as dementia subtypes), awaits further study.

The present findings suggest that neuropsychological assessment may improve the validity and reliability of capacity evaluations by elucidating the cognitive processes essential to each decisional ability. A detailed understanding of the neuropsychological underpinning for decisional abilities may facilitate the crafting of more refined instruments that minimize individual and subjective evaluator factors. In addition, such research could lead to modifications of information presentation formats that compensate more effectively for deficits in memory, auditory and visual attention, and language. The unique relationships between the cognitive processes indexed here and capacity also provide a basis for testable hypotheses regarding how various disease processes may differentially affect decisional abilities.

Acknowledgments

Supported by NIMH R29 MH57104 (J.M., PI) and Department of Veterans Affairs medical research funds.

The authors thank Thomas Grisso, PhD, for his assistance in the early stages of this project; Daniel Marson, PhD, for consultation regarding use of the CCTI; and Barry Edelstein, PhD, for consultation regarding use of the HCAI.

Footnotes

Disclosure: The authors report no conflicts of interest.

References

1. Tepper A, Elwork A. Competency to consent to treatment as a psychological construct. Law Hum Behav. 1984;8:205–223. doi: 10.1007/BF01044693.
2. Drane JF. The many faces of competency. Hastings Cent Rep. 1985;15:17–21.
3. Appelbaum PS, Grisso T. Assessing patients’ capacities to consent to treatment. N Engl J Med. 1988;319:1635–1638. doi: 10.1056/NEJM198812223192504.
4. Roth LH, Meisel A, Lidz CW. Tests of competency to consent to treatment. Am J Psychiatry. 1977;134:279–284. doi: 10.1176/ajp.134.3.279.
5. Moye J. Theoretical frameworks for competency assessments in cognitively impaired elderly. J Aging Stud. 1996;10:27–42.
6. Marson DC, Harrell LE. Neurocognitive changes associated with loss of capacity to consent to medical treatment in patients with Alzheimer’s disease. In: Park DC, Morrell RW, Shifren K, editors. Processing of medical information in aging patients: cognitive and human factors perspectives. Mahwah, NJ: Lawrence Erlbaum; 1999. pp. 109–126.
7. Welie SPK. Criteria for patient decision making (in)competence: a review of and commentary on some empirical approaches. Med Health Care Philos. 2001;4:139–151. doi: 10.1023/a:1011493817051.
8. Marson DC, Schmitt FA, Ingram KK, Harrell LE. Determining the competency of Alzheimer patients to consent to treatment and research. Alzheimer Dis Assoc Disord. 1994;8(suppl 4):5–18.
9. Marson DC, McInturff B, Hawkins L, Bartolucci A, Harrell LE. Consistency of physician judgments of capacity to consent in mild Alzheimer’s disease. J Am Geriatr Soc. 1997;45:453–457. doi: 10.1111/j.1532-5415.1997.tb05170.x.
10. Ganzini L, Volicer L, Nelson W, Derse A. Pitfalls in the assessment of decision-making capacity. Psychosomatics. 2003;44:237–243. doi: 10.1176/appi.psy.44.3.237.
11. Markson LJ, Kern DC, Annas GJ, Glantz LH. Physician assessment of patient competence. J Am Geriatr Soc. 1994;42:1074–1080. doi: 10.1111/j.1532-5415.1994.tb06212.x.
12. Macklin R. The geriatric patient: ethical issues in care and treatment. In: Mappes T, Zembaty JS, editors. Biomedical ethics. 2nd ed. New York: McGraw-Hill; 1986. pp. 162–167.
13. Starr TJ, Pearlman RA, Uhlmann RF. Quality of life and resuscitation decisions in elderly patients. J Gen Intern Med. 1986;1:373–379. doi: 10.1007/BF02596420.
14. Uhlmann RF, Pearlman RA. Perceived quality of life and preferences for life-sustaining treatment in older adults. Arch Intern Med. 1991;151:495–497.
15. Clemens E, Hayes HE. Assessing and balancing elder risk, safety, and autonomy: decision making practices of elder care workers. Home Health Care Serv Q. 1997;16:3–20. doi: 10.1300/J027v16n03_02.
16. Masand PS, Bouckoms AJ, Fischel SV, Calabrese LV, Stern TA. A prospective multicenter study of competency evaluations by psychiatric consultation services. Psychosomatics. 1998;39:55–60. doi: 10.1016/S0033-3182(98)71381-7.
17. Seckler AB, Meier DE, Mulvihill M, Cammer Paris BE. Substituted judgment: how accurate are proxy predictions? Ann Intern Med. 1991;115:92–98. doi: 10.7326/0003-4819-115-2-92.
18. Suhl J, Simons P, Reedy T, Garrick T. Myth of substituted judgment: surrogate decision making regarding life support is unreliable. Arch Intern Med. 1994;154:90–96. doi: 10.1001/archinte.154.1.90.
19. Department of Veterans Affairs. Clinical assessment for competency determination: a practice guideline for psychologists. Milwaukee: Department of Veterans Affairs National Center for Cost Containment; 1997.
20. Marson DC, Ingram KK, Cody HA, Harrell LE. Assessing the competency of patients with Alzheimer’s disease under different legal standards. Arch Neurol. 1995;52:949–954. doi: 10.1001/archneur.1995.00540340029010.
21. Edelstein B. Hopemont Capacity Assessment Interview manual and scoring guide. Morgantown, WV: West Virginia University; 1999.
22. Grisso T, Appelbaum P. MacArthur Competence Assessment Tool for Treatment (MacCAT-T). Sarasota, FL: Professional Resource Press; 1998.
23. Moye J, Karel M, Azar A, Gurrera R. Hopes and cautions for instrument-based evaluation of consent capacity: results of a construct validity study of three instruments. Ethics Law Aging Rev. 2004;10:39–61.
24. Kim SYH, Karlawish JHT, Caine ED. Current state of research on decision-making competence of cognitively impaired elderly persons. Am J Geriatr Psychiatry. 2002;10:151–165.
25. Holzer JC, Gansler DA, Moczynski NP, Folstein MF. Cognitive functions in the informed consent evaluation process: a pilot study. J Am Acad Psychiatry Law. 1997;25:531–540.
26. Marson DC, Chatterjee A, Ingram KK, Harrell LE. Toward a neurologic model of competency: cognitive predictors of capacity to consent in Alzheimer’s disease using three different legal standards. Neurology. 1996;46:666–672. doi: 10.1212/wnl.46.3.666.
27. Sheikh JI, Yesavage JA. Geriatric Depression Scale: recent evidence and development of a shorter version. Clin Gerontol. 1986;5:165–173.
28. Derogatis LR. Brief Symptom Inventory: administration, scoring and procedures manual. Minneapolis: National Computer Systems; 1993.
29. Brandt J, Welsh KA, Breitner JC, Folstein MF, Helms M, Christian JC. Hereditary influences on cognitive functioning in older men: a study of 4000 twin pairs. Arch Neurol. 1993;50:599–603. doi: 10.1001/archneur.1993.00540060039014.
30. Dunn LB, Jeste DV. Enhancing informed consent for research and treatment. Neuropsychopharmacology. 2001;24:595–607. doi: 10.1016/S0893-133X(00)00218-9.
31. Rogers RL, Meyer JS. Computerized history and self-assessment questionnaire for diagnostic screening among patients with dementia. J Am Geriatr Soc. 1988;36:13–21. doi: 10.1111/j.1532-5415.1988.tb03428.x.
32. Welsh KA, Breitner JCS, Magruder-Habib KM. Detection of dementia in the elderly using the telephone interview for cognitive status. Neuropsychiatry Neuropsychol Behav Neurol. 1993;6:103–110.
33. Christensen KJ, Moye J, Armson RR, Kern TM. Health screening and random recruitment for cognitive aging research. Psychol Aging. 1992;7:204–208. doi: 10.1037//0882-7974.7.2.204.
34. Grisso T, Appelbaum PS. MacArthur Competence Assessment Tool–Treatment (MacCAT-T) Manual. Worcester, MA: University of Massachusetts; 1996.
35. Marson DC, Ingram KK, Cody HA, Harrell LE. Assessing the competency of patients with Alzheimer’s disease under different legal standards: a prototype instrument. Arch Neurol. 1995;52:949–954. doi: 10.1001/archneur.1995.00540340029010.
36. Moye J, Karel MJ, Azar AR, Gurrera RJ. Capacity to consent to treatment: empirical comparison of three instruments in older adults with and without dementia. Gerontologist. 2004;44:166–175. doi: 10.1093/geront/44.2.166.
37. Wechsler DA. Wechsler Memory Scale–III. New York: Psychological Corporation; 1997.
38. Wechsler DA. Wechsler Adult Intelligence Scale–III manual. New York: Psychological Corporation; 1997.
39. Army Individual Test Battery. Washington, DC: War Department, Adjutant General’s Office; 1944.
40. Trenerry MR, Crosson B, DeBoe J, Leber WR. Visual Search and Attention Test. Odessa, FL: Psychological Assessment Resources; 1990.
41. Kaplan EF, Goodglass H, Weintraub S. The Boston Naming Test. Boston: E. Kaplan & H. Goodglass; 1978.
42. Spreen O, Benton AL. Neurosensory Center Comprehensive Examination for Aphasia. Victoria, BC: Department of Psychology, University of Victoria; 1977.
43. Christensen KJ, Multhaup KS, Norstrom S, Voss KA. Cognitive battery for dementia: development and measurement characteristics. Psychol Assess. 1991;3:168–174.
44. Grisso T, Appelbaum PS. Comparison of standards for assessing patients’ capacities to make treatment decisions. Am J Psychiatry. 1995;152:1033–1037. doi: 10.1176/ajp.152.7.1033.
45. Marson DC, Cody HA, Ingram KK, Harrell LE. Neuropsychological predictors of competency in Alzheimer’s disease using a rational reasons legal standard. Arch Neurol. 1995;52:955–959. doi: 10.1001/archneur.1995.00540340035011.
46. Dymek MP, Atchison P, Harrell L, Marson DC. Competency to consent to medical treatment in cognitively impaired patients with Parkinson’s disease. Neurology. 2001;56:17–24. doi: 10.1212/wnl.56.1.17.
