Abstract
Judgment is an important aspect of cognitive and real-world functioning that is commonly assessed during neuropsychological evaluations. This study utilized a brief, online survey to examine neuropsychologists’ practices and perspectives regarding available judgment instruments. Participants (n=290, 17% response rate) were randomly selected members of the International Neuropsychological Society and the National Academy of Neuropsychology. Respondents rank-ordered the following issues that should be incorporated into assessments of judgment (from most to least important): safety, ability to perform activities of daily living, and problem solving/decision making about medical, financial, social/ethical, and legal matters. A majority of respondents reported that they “often” or “always” assessed judgment when evaluating patients with traumatic brain injury (89%), dementia (87%), and psychiatric disorders (70%). Surprisingly, the top-ranked instruments were not tests of judgment per se, and included the WAIS-III Comprehension, Wisconsin Card Sorting Test, and WAIS-III Similarities. Further, 61% of respondents were slightly confident, and only 23% were very confident, in their ability to assess a patient’s judgment skills with their current tests. The overwhelming majority (87%) of respondents perceived a need for improved measures. Overall results indicate use of varied techniques by neuropsychologists to evaluate judgment and suggest the need for additional tests of this cognitive domain.
Keywords: judgment, judgment tests, neuropsychological assessment, survey
INTRODUCTION
Survey research has tracked various aspects of neuropsychological practice from the 1980s onward. Investigators have focused on a range of topics, including professional activities (e.g., work setting, referral sources, patient populations), sociodemographic characteristics of practitioners, education and training, fees and salary ranges, journal preferences, use of technicians, physicians’ satisfaction with neuropsychological services, and neuropsychologists’ ethical beliefs and behaviors (see Rabin, Barr, & Burton, 2005; Sweet, Nelson, & Moberg, 2006; and Temple, Carvalho, & Tremont, 2006). Assessment issues also have been characterized including approaches to battery construction, cognitive domains assessed, and length of time allocated to testing, scoring, and interpretation/report writing (see Rabin et al., 2005). In terms of basic test usage, researchers have reported on the most commonly used instruments (Butler, Retzlaff, & Vanderploeg, 1991; Camara, Nathan, & Puente, 2000; Rabin et al., 2005), as well as those utilized for specific purposes such as forensic evaluations (Lees-Haley et al., 1996) or within the broad cognitive areas (i.e., memory, attention, executive functioning; Rabin et al., 2005). Recent surveys by Slick et al. (2004) and Sharland and Gfeller (2007) have focused on measures used to assess effort, malingering, and/or response bias. Thus, a development in neuropsychological survey work appears to be the identification of instruments used within specific cognitive and/or functional domains. Despite this trend, research has yet to report on the utilization of tests designed to assess judgment ability, a common presenting problem in patients referred for neuropsychological evaluation.
Judgment can be defined as the capacity to make sound decisions after careful consideration of available information, possible solutions, likely outcomes, and contextual factors.1 From a neuropsychological perspective, judgment falls under the domain of executive functioning (Woods, Patterson, & Whitehouse, 2000) and includes a cognitive appraisal process (i.e., deciding what to do in a situation) and the behavioral follow-through (i.e., carrying out an effective/safe behavior; Rabin et al., 2007; Thornton et al., 2007). Judgment is an important aspect of executive functioning that is regularly assessed during neuropsychological evaluations with varied patient populations. For example, loss of judgment ability is a common consequence and diagnostic feature of the dementing process, as executive cognitive functions that permit complex, goal-directed use of existing knowledge progressively fail (Duke & Kaszniak, 2000; Karlawish et al., 2005; Knopman et al., 2001; LaFleche & Albert, 1995; Marson & Harrell, 1999; Rabin et al., 2007). Judgment also may be compromised in individuals with chronic psychiatric illnesses such as schizophrenia, schizoaffective disorder, and bipolar disorder (Bearden, Hoffman, & Cannon, 2001; Rempfer et al., 2003; Semkovska et al., 2004), and these patients may manifest diminished insight into their cognitive and functional deficits (Flashman, 2002). There are known deficits in aspects of executive functioning, including judgment and problem solving in adults and children who have sustained traumatic brain injuries (TBI), and researchers are actively working to devise cognitive rehabilitation strategies for these patients (Bamdad, Ryan, & Warden, 2003; Busch et al., 2005; Gioia, 2004; Levin & Hanten, 2005; McDonald, Flashman, & Saykin, 2002).
When formally assessed, knowledge about a patient's judgment ability can inform decisions about diagnosis, functional and cognitive competence, and treatment (Bertrand & Willis, 1999; Karlawish et al., 2005; Kim, Karlawish, & Caine, 2002; Willis et al., 1998). For example, patients with schizophrenia who manifest judgment and problem-solving deficits are known to have difficulty in vocational, community, or independent living settings (Revheim et al., 2006); following careful assessment and remediation of executive deficits, patients may show improvement on externally valid measures of judgment and problem solving (Medalia, Revheim, & Casey, 2001; 2002). Additionally, dementia patients with judgment deficits may persist in unsafe behaviors such as driving or using the stove, and may be at risk for medication mismanagement, poor nutrition, and financial abuse (Lai & Karlawish, 2007). Based on information derived from a neuropsychological evaluation, patients and their caregivers can be educated about the consequences of impaired judgment and the relationship of observed symptoms to the disease process. With this knowledge, caregivers may be better prepared to assume new responsibilities within the family system or provide the appropriate level of assistance to reduce the likelihood of negative outcomes, while maximizing independence (Duke & Kaszniak, 2000). Another important use of judgment tests is as an outcome variable in trials of cognitive-enhancing interventions, for which improvements in patients' judgment skills are expected and must be demonstrated objectively (Lai & Karlawish, 2007).
Despite the significance of this cognitive domain, a review of the literature revealed only three standardized tests of judgment for use with adult/older adult populations: (1) Judgment Questionnaire subtest of the Neurobehavioral Cognitive Status Exam (NCSE JQ; Northern California Neurobehavioral Group, Inc., 1988), (2) Judgment/Daily Living subtest of the Neuropsychological Assessment Battery (NAB JDG; Stern & White, 2003), and (3) the Problem Solving Subscale of the Independent Living Scales (Loeb, 1996). These instruments are all part of larger test batteries (which may limit their use by some neuropsychologists) and some have drawbacks, particularly when utilized with older adults (see Rabin et al., 2007).2 In addition to specific tests of judgment, practitioners frequently use clinical interviews, subjective rating scales, informant reports, chart review, and cognitive tests that fall under the general domain of executive functioning (e.g., planning and problem-solving tasks) to gather information and make inferences about patients’ judgment capacity (Lai & Karlawish, 2007). Furthermore, several tests tap closely related constructs such as everyday decision making, social problem solving, practical intelligence, and aspects of competency (for review see Marsiske & Margrett, 2006).3 Many of these tests, however, were developed for research purposes and may not be familiar or readily available to neuropsychologists. Additionally, many of these instruments lack published information about their psychometric properties and utility when included as part of a clinical assessment battery.
The current study surveyed neuropsychologists’ practices and perspectives regarding the assessment of judgment. Important study goals were to determine how often judgment is assessed during neuropsychological evaluations and with which patient populations, identify commonly used instruments, and assess the perceived need for additional/improved measures. We also sought to clarify understanding of the term “judgment ability” and the cognitive and behavioral processes subsumed under this heading. By exploring these issues through survey work, we aimed to elucidate current trends in the assessment of judgment and raise issues that might lead to improved evaluative methods.
METHOD
Potential Participants and Procedure
Potential participants were randomly selected members of the International Neuropsychological Society (INS) and the National Academy of Neuropsychology (NAN). Prior to conducting the study, the authors obtained the e-mail addresses of current members from the NAN and INS membership offices, and study procedures were approved by the Dartmouth Medical School Institutional Review Board. INS and NAN members possessing doctoral degrees (i.e., PhD, PsyD, EdD) and residing in the United States (US) or Canada were selected for inclusion. Approximately 38% of INS and NAN members (n=1983) received an invitation to complete the online survey; approximately 13% (n=263) of the e-mails were returned as undeliverable.
Potential participants were asked to complete a brief (5–10 minute) Web-based survey examining the practices and perspectives of neuropsychologists regarding their use of instruments to assess judgment skills. Those wishing to participate were instructed to follow the link to a Web address (www.hostedsurvey.com/judgment).4 Informed consent information was provided in a description of the study that preceded the questionnaire. Respondents were told about the minimal foreseeable risks (e.g., loss of time) and issues of confidentiality (i.e., survey responses would be downloaded to a secured data file site and no identifiers would be attached to responses). In order to enhance the response rate, potential participants received a reminder e-mail approximately two weeks after the initial contact. Although no monetary incentive was offered, participants were informed that the researchers would donate 30 cents to the Alzheimer's Association for each completed questionnaire. Participants were offered the opportunity to receive a summary of findings by contacting one of the researchers.
Questionnaire
The questionnaire contained 17 items, with a combination of open- and closed-ended questions divided into two main sections. In the first part of the questionnaire (items 1–9), respondents provided basic demographic and practice-related information including gender, age, degree type and field, years practicing, percentage of time devoted to various professional activities, average number of neuropsychological assessments performed each month, primary work settings, and percentage of professional time spent with individuals in various age ranges. In the second part of the questionnaire (items 10–17), respondents read a description/definition of judgment ability and listed any additional factors they consider this construct to involve. Participants also ranked the salient issues (provided by the researchers) that an assessment of judgment “should incorporate” and reported the frequency with which they assess judgment with specific clinical populations.
Participants next rated the frequency with which they use various techniques to assess judgment (e.g., clinical interview, subjective rating scales, neuropsychological instruments). Subsequently, participants listed the specific tests they typically administer to assess a patient's judgment ability. Respondents then completed a confidence rating based on their utilization of the instruments just listed: “Using the instruments listed (in the previous question), how confident would you feel in your ability to assess a patient's everyday judgment skills?” The confidence rating was made along a Likert scale that ranged from “not at all confident” to “very confident.” Participants then listed any additional judgment tests they were aware of or had encountered (but not necessarily used) in their neuropsychological training or practice. A final “yes/no” question asked whether respondents believe there is a need for additional/improved standardized measures of judgment. At the end of the questionnaire, space was provided for any additional comments. To assess content and clarity, the survey was administered to several neuropsychologists known to the authors and later modified based on the feedback received.
RESULTS
Response Rate and Organizational Affiliations
Links to the questionnaires were e-mailed in late June 2005; those completed and returned within eight weeks of the initial e-mailing were included in the analyses. We aimed to collect data from a geographically diverse sample with the goal of representing broad-based practices and tools being used in the assessment of judgment skills. Of the total number of active e-mail addresses to which invitations were sent (n=1720), 290 potential participants completed the survey with usable data (response rate=16.9%). We discarded responses from participants whose highest earned degree was at the bachelor's level. At the completion of data collection, responses were downloaded directly into a statistical software program (SPSS 12.1 for Windows). Confidentiality was ensured by the survey company, and our e-mail list was discarded after the initial mailing. Any identifying information (e.g., comments written) received with the questionnaire was deleted to ensure anonymity of response.
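For clarity, the reported response rate can be restated as a simple calculation using the figures given above (1983 invitations sent, 263 returned as undeliverable):

\[
\text{response rate} = \frac{290}{1983 - 263} = \frac{290}{1720} \approx 16.9\%
\]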
Relevant Demographic and Practice-Related Information
The average age of respondents was 44.0 years (SD=10.4, range 28 to 75), and the proportions of male and female respondents were roughly equivalent (46.6% and 53.4%, respectively) [χ2(1, n=290)=.46, ns]. The percentage of respondents holding PhDs was highest (86.2%), followed by PsyDs (8.6%) and EdDs (1.0%). The remaining 4% indicated holding “other” types of advanced degrees. The majority of respondents received their degrees in clinical psychology (64.5%), followed by clinical neuropsychology (15.2%), counseling psychology (7.6%), “other” (5.2%), biopsychology/neuroscience (4.1%), and school psychology (3.4%). Respondents reported professionally practicing neuropsychology for an average of 10.9 years (SD=8.5, range 1 to 42). The average respondent spent 53.0% of his or her professional time conducting neuropsychological assessments, followed by research/teaching (24.2%), psychotherapy (10.0%), “other activities” (8.4%), and rehabilitation/cognitive remediation (4.4%).
Respondents reported spending the greatest percentage of professional time with older adults, aged 60 and over (30.7%), followed by adults, aged 36–59 (26.2%), young adults, aged 19–35 (19.5%), children, under age 12 (13.0%), and adolescents, aged 12–18 (10.6%). Most respondents (72.1%) reported performing an average of 1–15 neuropsychological assessments per month, while others (20.0% and 4.5%) performed 16–30 and over 30 assessments per month, respectively. The remaining 3.4% of respondents were not currently practicing neuropsychology but were included in the analyses due to their involvement in research/teaching and their previous assessment experience. Many respondents were employed in medical hospitals (48.6%) and private or group practices (44.5%), while others worked in rehabilitation facilities (14.5%), psychiatric hospitals (9.0%), and VA Medical Centers (8.6%); employment in community mental health centers (4.1%), college/university counseling centers (2.1%), business/industry (<1%), and “other” settings (9.0%) was less common. Thirty-one percent of respondents reported carrying out their neuropsychological work in more than one setting.
Assessment of Judgment Ability
Respondents read a brief definition/description of judgment (developed by the researchers based on clinical experience and review of the literature) and were asked to provide feedback about any additional factors they consider this ability to involve. The definition read as follows:
Judgment ability relies upon many cognitive processes, including aspects of executive functioning, problem solving skills, decision making, and practical knowledge. Judgment includes both a cognitive appraisal process (determining what to do in a situation) and the behavioral follow-through (engaging in the adaptive/safe behavior). There also are social/emotional factors that impact one’s judgment skills (e.g., perspective taking, responding appropriately to environmental or social feedback).
Fifty-one percent of respondents agreed with our statement and/or did not offer any comments. The remaining 49.0% of respondents provided feedback, primarily in the form of additional processes they thought should be included in a definition of judgment. Participants’ responses generally fell into one of three categories: (1) executive functioning (e.g., impulse control, cognitive flexibility, insight); (2) other cognitive processes and personal variables (e.g., memory, language, personality traits); and (3) environmental and cultural variables (personal experience, environmental supports, cultural mores). An abbreviated list of participants’ responses (grouped by category) is presented in the Appendix, and a complete, unedited list of responses is available upon request.
Respondents were asked to rank order the issues that an assessment of a patient’s everyday judgment ability “should” incorporate. Six responses plus an “other” option were provided, and respondents were asked to rank each item on a scale ranging from 1 to 7 (with lower numbers indicating a higher priority). Participants were able to provide their own responses in the “other” category. Results were as follows: (a) safety, (b) ability to perform activities of daily living (ADLs) adequately, (c) medical/health decision making, (d) financial decision making, (e) social/ethical problem solving, (f) legal decision making, and (g) “other” issues. Within the “other” category, the most common responses were: the ability to work/perform employment functions, the ability to operate a motor vehicle, academic performance/educational success, the ability to be a caregiver, and insight/awareness of deficit. With regard to the frequency of assessing judgment, approximately 67% and 57% of respondents indicated that they “always” assess judgment during evaluations with dementia and traumatic brain injury patients, respectively (see Table 1). A lower percentage of respondents (40% and 30%) reported that they “always” assess judgment during evaluations with adult psychiatric and “other” patients, respectively. The most frequently used technique to assess judgment was a clinical interview with the patient, followed by formal neuropsychological tests, interview with significant others, and subjective rating scales of relevant behaviors (e.g., ADL scales); less commonly endorsed were direct observation and “other” techniques (see Table 2).
TABLE 1. Frequency with which respondents assess judgment with specific clinical populations
Population | Always (%) | Often (%) | Sometimes (%) | Rarely (%) | Never (%) |
---|---|---|---|---|---|
Dementia | 66.6 | 20.3 | 1.7 | 0.7 | 10.7 |
Traumatic brain injury | 56.9 | 31.7 | 6.2 | 1.7 | 3.5 |
Adult psychiatric | 39.6 | 29.3 | 13.8 | 3.8 | 13.5 |
“Other” patients | 30.0 | 40.0 | 25.5 | 3.5 | 1.0 |
Note. Values are the percentage of respondents endorsing the various response options (N = 290).
TABLE 2. Frequency with which respondents use various techniques to assess judgment
Technique | Always (%) | Often (%) | Sometimes (%) | Rarely (%) | Never (%) |
---|---|---|---|---|---|
Clinical interview with patient | 77.6 | 17.9 | 4.2 | 0.3 | 0 |
Formal neuropsychological tests | 48.3 | 34.1 | 11.0 | 3.5 | 3.1 |
Interview with informant/significant other(s) | 30.0 | 51.7 | 15.9 | 2.1 | 0.3 |
Subjective rating scales of relevant behaviors | 11.4 | 29.3 | 36.9 | 15.2 | 7.2 |
Direct observation of patient in environment | 9.3 | 12.8 | 23.1 | 33.1 | 21.7 |
“Other” techniques | 6.2 | 16.6 | 23.8 | 19.3 | 34.1 |
Note. Values are the percentage of respondents endorsing the various response options (N = 290).
Table 3 presents a rank-ordered list of the top 20 instruments used to assess judgment (a complete list of responses is available upon request). There were a total of 1079 responses, reflecting approximately 185 unique instruments. The Comprehension subtest of the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III; Wechsler, 1997) was the most frequently reported instrument and was endorsed by 39.0% of respondents (representing 10.5% of responses). This was followed by the Wisconsin Card Sorting Test (WCST; Heaton et al., 1993) and the Similarities subtest of the WAIS-III, which were endorsed by 35.5% and 19.3% of respondents (representing 9.5% and 5.2% of responses), respectively. After reporting their frequently used instruments, respondents completed a confidence rating. As shown in Table 4, the majority (60.8%) of respondents indicated that they were “slightly confident” in their ability to assess a patient's everyday judgment skills using the instruments previously listed. When questioned about the need for additional/improved measures of judgment, the vast majority (87.2%) replied “yes,” while only 2.5% replied “no”; additionally, 10.3% of respondents indicated that they were “not sure” about this need.
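As a worked illustration of the two denominators used in the preceding paragraph and in Table 3 (290 respondents; 1079 total test responses), the WAIS-III Comprehension figures are:

\[
\frac{113}{290} \approx 39.0\% \text{ of respondents}, \qquad \frac{113}{1079} \approx 10.5\% \text{ of responses}
\]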
TABLE 3. Most frequently reported instruments used to assess judgment
Rank | Instrument | N | % of respondents |
---|---|---|---|
1 | Wechsler Adult Intelligence Scale-III (WAIS-III), Comprehension Subtest1 | 113 | 39.0 |
2 | Wisconsin Card Sorting Test (WCST) | 103 | 35.5 |
3 | Wechsler Adult Intelligence Scale-III (WAIS-III), Similarities Subtest1 | 56 | 19.3 |
4 | Delis-Kaplan Executive Function System (D-KEFS) | 44 | 15.2 |
4 | Judgment Questionnaire of the Neurobehavioral Cognitive Status Exam | 44 | 15.2 |
6 | Booklet Category Test | 38 | 13.1 |
7 | Independent Living Scales (ILS) | 36 | 12.4 |
8 | Trail Making Test (TMT) | 34 | 11.7 |
9 | Wechsler Adult Intelligence Scale-III, Picture Arrangement Subtest | 20 | 6.9 |
10 | Minnesota Multiphasic Personality Inventory-Second Edition (MMPI-2) | 18 | 6.2 |
10 | Judgment/Daily Living subtest of the Neuropsychological Assessment Battery | 18 | 6.2 |
10 | Vineland Adaptive Behavior Scales | 18 | 6.2 |
13 | Mini-Mental State Examination (MMSE) | 15 | 5.2 |
13 | Tower of London Test (TOL) | 15 | 5.2 |
15 | Activities of Daily Living Scale (ADL) | 14 | 4.8 |
15 | Behavior Rating Inventory of Executive Function (BRIEF) | 14 | 4.8 |
15 | Dementia Rating Scale-2 (DRS-2) | 14 | 4.8 |
18 | California Verbal Learning Test-II (CVLT-II) | 12 | 4.1 |
18 | Frontal Systems Behavior Scale (FrSBe) | 12 | 4.1 |
20 | Wechsler Adult Intelligence Scale-III (WAIS-III), Matrix Reasoning Subtest | 11 | 3.8 |
20 | Proverbs Test | 11 | 3.8 |
20 | Verbal Fluency Test (COWA/FAS) | 11 | 3.8 |
Note. Total number of respondents = 290.
1. Results include a minority of responses for which the Wechsler Intelligence Scale for Children-IV (WISC-IV; Psychological Corporation, 2003) was the identified test.
This table does not include the responses “clinical interview” and “informant/collateral interview,” which were endorsed by 11% and 5.9% of respondents, respectively; refer to Table 2 for information about the reported utilization of these techniques in assessments of judgment.
TABLE 4. Respondents' confidence in their ability to assess patients' everyday judgment skills
Level of Confidence | % of respondents |
---|---|
Not at all confident | 1.4 |
Slightly not confident | 5.5 |
Neutral | 8.9 |
Slightly confident | 60.8 |
Very confident | 23.4 |
Note. Total number of responses = 290.
DISCUSSION
To our knowledge, this was the first study to survey neuropsychologists' beliefs and practices related to the assessment of judgment. Participants included 290 members of the INS and/or NAN (17% response rate) who regularly performed neuropsychological assessments in various settings (most commonly medical hospitals and private or group practices). Results revealed the relevance of the construct of judgment to neuropsychologists. The vast majority of respondents indicated that they assess judgment at least “often” with different clinical populations, particularly patients with dementia or TBI. Further, respondents generally agreed that judgment should be assessed via a combination of approaches, particularly clinical interviews with the patient, neuropsychological tests, and informant interviews. With regard to the issues that warrant inclusion in a judgment evaluation, respondents ranked the following responses, from most to least important: safety, performance of ADLs, and real-world decision making about medical/health issues, finances, social/ethical problems, and legal matters.
Having established that the assessment of judgment is a component of many neuropsychological evaluations, we next inquired about commonly used instruments. Respondents reported use of approximately 185 different instruments, though the majority appeared to utilize the same small group of tools. Specifically, neuropsychologists tend to rely on popular measures of executive functioning or closely related areas (i.e., understanding of social rules and conventions, novel problem solving/cognitive flexibility, and verbal abstraction) as proxies for assessing judgment. The top three measures were the WAIS-III Comprehension, WCST, and WAIS-III Similarities, which were endorsed by 39%, 36%, and 19% of respondents, respectively. By contrast, the only instruments identified in the literature as having been developed specifically to assess adults' judgment ability (i.e., the NCSE JQ, ILS, and NAB JDG) were endorsed by just 15%, 12%, and 6% of respondents, respectively.
There could be many reasons for the low usage rates of tests specifically designed to assess judgment, including neuropsychologists’ lack of familiarity with these measures, their poor psychometric properties or perceived clinical utility, or the prohibitive cost or inconvenience of using subtests of a larger battery. Future research might inquire directly about practitioners’ rationales for test selection, including the degree to which specific judgment tests are useful in assessing various clinical populations. Research could also evaluate the predictive validity of judgment tests or their ability to add meaningful information beyond that obtained by clinical interview and/or collateral report (i.e., incremental validity). Because neuropsychologists’ statements about judgment skills can impact patients’ lives, including decisions about diagnosis, treatment, and future living/work arrangements, achieving a consensus about which tests are most valuable represents an important future direction.
The majority of respondents reported feeling only “slightly” confident in their ability to assess a patient’s judgment skills using their current methods, and 87% perceived a need for additional/improved measures. This is not surprising in light of the fact that the top three instruments were not developed to assess “judgment” per se, and instead deal with general problem solving or knowledge of basic safety and hygiene issues. Future research should focus on the development of new judgment tests to meet the needs of practicing neuropsychologists. A difficulty inherent to this process, however, is the complex nature of the construct itself. Numerous functions are involved in the execution of good judgment, including generating strategies to approach a problem, identifying and prioritizing goals, initiating action, shifting between ideas, evaluating potential consequences of plans, inhibiting inappropriate responses, and monitoring the ongoing effectiveness of a chosen solution. As noted by many respondents, judgment may rely on additional cognitive processes including memory, language, basic attention, perception, and visuospatial skills. Factors such as social and emotional functioning, personality, culture, educational background, experience, and context also can combine to influence judgment ability. Thus, researchers attempting to improve upon existing tests face many challenges including defining what is meant by “judgment ability,” identifying the salient aspects of judgment to assess (e.g., independent living, driving), and creating items that are ecologically relevant across contexts and populations.
Another way to improve upon current assessments is to submit existing instruments to more rigorous evaluation. Respondents in this study were asked to list the judgment tests they utilized, as well as those they had encountered in their training or practice. Responses revealed the presence of numerous instruments and techniques, many of which were mentioned by fewer than 1% of respondents, suggesting that they are not well known to the average practitioner. Included among these measures were various performance-based tests (including some of the everyday problem-solving tests listed in footnote 3), self- and informant-rating scales, functional assessment questionnaires, and behavioral observation methods; a complete list of these instruments is available upon request. These lesser-known tools may prove valuable to neuropsychologists, but they must first demonstrate adequate validity and utility and receive attention in the clinical or research literature.
A few study limitations warrant mention. First, given concerns about respondent burden, we asked respondents to provide a single, overall confidence rating for the judgment test(s) they utilized. In retrospect, however, individualized test ratings would have provided valuable information about clinicians' views of specific tests relative to one another. Another limitation was the low response rate, which could have resulted from several factors. For one, we utilized a Web-based survey because our target population of neuropsychologists was expected to use the Internet frequently; some studies, however, have found lower response rates for Web-based surveys when compared to mailed surveys (Archer et al., 2006; Cook, Heath, & Thompson, 2000). Additionally, our survey was e-mailed during the summer months, which may have been an inconvenient time of year for potential respondents. Finally, the topic itself may have appealed only to a subset of neuropsychologists (i.e., those who routinely assess judgment, those working with adult populations), limiting the generalizability of our findings. Indeed, the average respondent reported spending the majority of his or her professional time with older adults and adults, and comparatively less time with younger individuals. Despite the low response rate, however, the demographic characteristics of our participants were similar to those reported in other recent surveys (Rabin et al., 2005; Sweet et al., 2006), suggesting that our sample is representative of neuropsychologists engaged in professional practice in the U.S. and Canada.5
CONCLUSIONS
Study findings revealed that neuropsychologists report being actively engaged in the evaluation of judgment with various clinical populations and tend to use popular tests of executive functioning instead of those specifically designed to assess judgment. Additionally, most respondents acknowledged a need for additional measures; developing such measures, however, is fraught with challenges given the complexity of the construct. In the future, we hope to see the emergence of improved tests that will enable researchers and clinicians to assess judgment with greater accuracy. To achieve the best evaluative outcomes, these instruments likely will be combined with information derived from companion test forms (that inquire about relevant behaviors from the perspective of caregivers or other professionals) and clinical interviews with patients and their informants. Our study findings may serve as a starting point for these endeavors by generating discussion about the definition of judgment ability and the strengths and weaknesses associated with current assessment techniques. Our data may also encourage neuropsychologists to compare their judgment tools with those reported by respondents in this survey and to examine their rationales for instrument selection, which may lead to more informed test selection decisions.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the International Neuropsychological Society and National Academy of Neuropsychology members who completed this online survey. We also would like to thank Eric Borgos, Don Brodale, and Heather Pixley for their contributions. Supported, in part, by grants from the National Institute on Aging (R01 AG19771) and the Alzheimer’s Association (IIRG-9–1653 sponsored by the Hedco Foundation). Portions of these findings were presented at the 34th annual International Neuropsychological Society conference (Borgos et al., 2006).
APPENDIX
Additional Factors Respondents Consider Judgment Ability to Involve
Executive Functioning
Impulse control
Response inhibition
Ability to delay gratification
Self-monitoring
Cognitive flexibility
Consideration of alternatives
Evaluation of outcomes
Working memory
Insight into neurocognitive or physical deficits/awareness of deficit
Abstract reasoning/abstraction
Ability to judge (have insight into) consequences for action
Understanding of cause and effect
Cost benefit analysis/differential weighing of possible benefits and risks
Initiative
Planning
Adaptability to change
Prioritizing
Reality testing
Ability to respond to external and internal feedback
Multi-tasking
Sequencing
Metacognition
Ability to modulate affective expression
Other Cognitive Processes and Personal Variables
Basic attention
Initial recognition that a situation exists to which one should react
Concentration
Memory (encoding, retention, prospective, verbal, nonverbal, autobiographic, etc.)
General fund of knowledge
Knowledge of appropriate behaviors
Language/verbal skills (both expressive and receptive)
Processing speed
Timing of responses
Perception
Intellect
Visuospatial skills
Appreciation for visual details in one’s environment
Capacity for self-reflection, self-confidence, and self-appraisal
Personality and temperament
Mood
Psychiatric status
Mental status
Clear sensorium
General cognitive functioning
Empathy and perspective taking/ability to interpret others’ emotional states
Ability to act and react under stress
Maturity, ego integrity
Common sense
Ability to learn from experience
Level of education
Environmental/Cultural Variables
Experience, familiarity with problem at hand
Cultural factors and mores
Role modeling from others
Availability of environmental supports and decision making aids (e.g., family members, data on outcomes, consultants)
Moral or religious values, “right” versus “wrong,” an underlying belief system
Footnotes
1. It is important to note that judgment is intimately linked with the processes of problem solving and decision making; in fact, these terms are often used interchangeably in the neuropsychological literature. Our conceptualization of judgment is more of an evaluative process–the act of settling on a decision/solution after going through the stages of active problem solving. From this perspective, judgment can be thought of as one of the last stages of problem solving (preceding such steps as monitoring actual outcomes and adjusting strategies). Thus, stating that someone has “bad judgment” typically means that the person has made a poor decision after consideration of the information/context available to him or her.
2. For example, Woods, Patterson, & Whitehouse (2000) and Rabin et al. (2007) evaluated the utility of the NCSE JQ and found significant content and statistical problems, including the insensitivity of this measure to impaired judgment in patients with Alzheimer's disease and/or mild cognitive impairment. In addition, the 10-item NAB JDG (Stern & White, 2003) deals predominantly with basic safety and hygiene issues rather than higher-level judgment dilemmas.
3. Researchers have used a variety of instruments to assess these complex, multidimensional constructs, including the Predicaments Task (Channon & Crawford, 1999), Reflective Judgment Dilemmas (Kajanne, 2003), Practical Problems Test (Denney & Pearce, 1989), Everyday Cognition Battery (Allaire & Marsiske, 1999), Everyday Problems Test or Everyday Problem-Solving Test (Artistico, Cervone, & Pezzuti, 2003; Thornton et al., 2007; Willis & Marsiske, 1993), Everyday Problem Solving Inventory (Cornelius & Caspi, 1987), Everyday Problems Test for Cognitively Challenged Elderly (Willis, 1993; Willis et al., 1998), and the Direct Assessment of Functional Status (Lowenstein et al., 1989). Instruments used to assess competence to consent to medical treatment or research include the MacArthur Competency Assessment Tool for Treatment (Grisso, 1998) and the Assessment of the Capacity for Everyday Decision Making (Karlawish, 2008); also see Fitten, Lusky, & Hamann, 1990; Lai & Karlawish, 2007; and Vellinga et al., 2004.
4. Various Web-based companies were considered, and ultimately we selected HostedSurvey (www.hostedsurvey.com), which has experience hosting academic research surveys and provides a confidential data collection process. Other benefits included protection against multiple submissions from a single respondent and from missing data or data entry mistakes (because responses are downloaded directly into a text file).
5. Our decision to include only members residing within the U.S. and Canada was based on practical considerations (e.g., language barrier, ease of acquiring e-mail addresses). Admittedly, our findings may have included additional instruments and/or novel assessment approaches had we surveyed a geographically broader group of practitioners, and this represents a direction for future research.
Contributor Information
Laura A. Rabin, Brooklyn College and the Graduate Center of the City University of New York, Department of Psychology, Brooklyn, New York, and Neuropsychology Program, Department of Psychiatry, Dartmouth Medical School, Hanover, New Hampshire
Marlana J. Borgos, Neuropsychology Program, Department of Psychiatry, Dartmouth Medical School, Hanover, New Hampshire
Andrew J. Saykin, Center for Neuroimaging, Department of Radiology, Indiana University School of Medicine, Indianapolis, Indiana, and Neuropsychology Program, Department of Psychiatry, Dartmouth Medical School, Hanover, New Hampshire
REFERENCES
- Allaire JC, Marsiske M. Everyday cognition: Age and intellectual ability correlates. Psychology and Aging. 1999;14:627–644. doi: 10.1037//0882-7974.14.4.627.
- Archer RP, Buffington-Vollum JK, Stredny RV, Handel RW. A survey of psychological test use patterns among forensic psychologists. Journal of Personality Assessment. 2006;87:84–94. doi: 10.1207/s15327752jpa8701_07.
- Artistico D, Cervone D, Pezzuti L. Perceived self-efficacy and everyday problem solving among young and older adults. Psychology and Aging. 2003;18:68–79. doi: 10.1037/0882-7974.18.1.68.
- Bamdad MJ, Ryan LM, Warden DL. Functional assessment of executive abilities following traumatic brain injury. Brain Injury. 2003;17:1011–1020. doi: 10.1080/0269905031000110553.
- Bearden CE, Hoffman KM, Cannon TD. The neuropsychology and neuroanatomy of bipolar affective disorder: A critical review. Bipolar Disorders. 2001;3:106–150. doi: 10.1034/j.1399-5618.2001.030302.x.
- Bertrand RM, Willis SL. Everyday problem solving in Alzheimer's patients: A comparison of subjective and objective assessments. Aging & Mental Health. 1999;3:281–293.
- Borgos MJ, Rabin LA, Pixley HS, Saykin AJ. Practices and perspectives regarding the assessment of judgment skills: A survey of clinical neuropsychologists. Proceedings of the 34th Annual Meeting of the International Neuropsychological Society; 2006.
- Busch RM, McBride A, Curtiss G, Vanderploeg RD. The components of executive functioning in traumatic brain injury. Journal of Clinical and Experimental Neuropsychology. 2005;27:1022–1032. doi: 10.1080/13803390490919263.
- Butler M, Retzlaff P, Vanderploeg R. Neuropsychological test usage. Professional Psychology: Research and Practice. 1991;22:510–512.
- Camara WJ, Nathan JS, Puente AE. Psychological test usage: Implications in professional psychology. Professional Psychology: Research and Practice. 2000;31:141–154.
- Channon S, Crawford S. Problem-solving in real-life-type situations: The effects of anterior and posterior lesions on performance. Neuropsychologia. 1999;37:757–770. doi: 10.1016/s0028-3932(98)00138-9.
- Cook C, Heath F, Thompson RL. A meta-analysis of response rates in Web- or Internet-based surveys. Educational and Psychological Measurement. 2000;60:821–836.
- Cornelius SW, Caspi A. Everyday problem solving in adulthood and old age. Psychology and Aging. 1987;2:144–153. doi: 10.1037//0882-7974.2.2.144.
- Denney NW, Pearce KA. A developmental study of practical problem solving in adults. Psychology and Aging. 1989;4:438–442. doi: 10.1037//0882-7974.4.4.438.
- Duke LM, Kaszniak AW. Executive control functions in degenerative dementias: A comparative review. Neuropsychology Review. 2000;10:75–99. doi: 10.1023/a:1009096603879.
- Fitten LJ, Lusky R, Hamann C. Assessing treatment decision-making capacity in elderly nursing home residents. Journal of the American Geriatrics Society. 1990;38:1097–1104. doi: 10.1111/j.1532-5415.1990.tb01372.x.
- Flashman LA. Disorders of awareness in neuropsychiatric syndromes: An update. Current Psychiatry Reports. 2002;4:346–353. doi: 10.1007/s11920-002-0082-x.
- Gioia GG. Ecological assessment of executive function in traumatic brain injury. Developmental Neuropsychology. 2004;25:135–158. doi: 10.1080/87565641.2004.9651925.
- Grisso T, Appelbaum PS. The MacArthur competence assessment tool-treatment. Professional Resources Press; Sarasota, FL: 1998.
- Heaton RK, Chelune GJ, Talley JL, Kay CG, Curtiss G. Wisconsin card sorting test manual. Psychological Assessment Resources; Odessa, FL: 1993.
- Kajanne A. Structure and content: The relationship between reflective judgment and laypeople's viewpoints. Journal of Adult Development. 2003;10:173–189.
- Karlawish J. Measuring decision-making capacity in cognitively impaired individuals. Neurosignals. 2008;16:91–98. doi: 10.1159/000109763.
- Karlawish JHT, Casarett DJ, James BD, Xie SX, Kim SYH. The ability of persons with Alzheimer disease (AD) to make a decision about taking an AD treatment. Neurology. 2005;64:1514–1519. doi: 10.1212/01.WNL.0000160000.01742.9D.
- Kim SYH, Karlawish JHT, Caine ED. Current state of research on decision-making competence of cognitively impaired elderly persons. American Journal of Geriatric Psychiatry. 2002;10:151–165.
- Knopman DS, DeKosky ST, Cummings JL, Chui H, Corey-Bloom J, Relkin N, Small GW, Miller B, Stevens JC. Practice parameter: Diagnosis of dementia (an evidence-based review): Report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology. 2001;56:1143–1153. doi: 10.1212/wnl.56.9.1143.
- LaFleche G, Albert MS. Executive function deficits in mild Alzheimer's disease. Neuropsychology. 1995;9:313–320.
- Lai JM, Karlawish J. Assessing the capacity to make everyday decisions: A guide for clinicians and an agenda for future research. American Journal of Geriatric Psychiatry. 2007;15:101–111. doi: 10.1097/01.JGP.0000239246.10056.2e.
- Lees-Haley PR, Smith HH, Williams CW, Dunn JT. Forensic neuropsychological test usage: An empirical study. Archives of Clinical Neuropsychology. 1996;11:45–51.
- Levin HS, Hanten G. Executive functions after traumatic brain injury in children. Pediatric Neurology. 2005;33:79–93. doi: 10.1016/j.pediatrneurol.2005.02.002.
- Loeb PA. Independent living scales (ILS) manual. Psychological Corporation; San Antonio, TX: 1996.
- Lowenstein D, Amigo E, Duara R, Guterman A, Hurwitz D, Berkowitz N, et al. A new scale for the assessment of functional status in Alzheimer's disease and related disorders. Journal of Gerontology: Psychological Sciences. 1989;44:114–121. doi: 10.1093/geronj/44.4.p114.
- Marson D, Harrell L. Executive dysfunction and loss of capacity to consent to medical treatment in patients with Alzheimer's disease. Seminars in Clinical Neuropsychiatry. 1999;4:41–49. doi: 10.1053/SCNP00400041.
- Marsiske M, Margrett JA. Everyday problem solving and decision making. In: Birren JE, Warner Schaie K, editors. Handbook of the Psychology of Aging. 6th ed. Academic Press; New York: 2006. pp. 315–342.
- McDonald BC, Flashman LA, Saykin AJ. Executive dysfunction following traumatic brain injury: Neural substrates and treatment strategies. NeuroRehabilitation. 2002;17:333–344.
- Medalia A, Revheim N, Casey M. The remediation of problem-solving skills in schizophrenia. Schizophrenia Bulletin. 2001;27:259–267. doi: 10.1093/oxfordjournals.schbul.a006872.
- Medalia A, Revheim N, Casey M. Remediation of problem-solving skills in schizophrenia: Evidence of a persistent effect. Schizophrenia Research. 2002;57:165–171. doi: 10.1016/s0920-9964(01)00293-6.
- Northern California Neurobehavioral Group, Inc. Manual for the neurobehavioral cognitive status exam. Author; Fairfax, CA: 1988.
- Rabin LA, Barr WB, Burton LA. Assessment practices of clinical neuropsychologists in the United States and Canada: A survey of INS, NAN, and APA division 40 members. Archives of Clinical Neuropsychology. 2005;20:33–65. doi: 10.1016/j.acn.2004.02.005.
- Rabin LA, Borgos MJ, Saykin AJ, Wishart HA, Crane PK, Nutter-Upham KE, Flashman LA. Judgment in older adults: Development and psychometric evaluation of the test of practical judgment (TOP-J). Journal of Clinical and Experimental Neuropsychology. 2007;29:752–767. doi: 10.1080/13825580601025908.
- Rempfer MV, Hamera EK, Brown CE, Cromwell RL. The relations between cognition and the independent living skill of shopping in people with schizophrenia. Psychiatry Research. 2003;117:103–112. doi: 10.1016/s0165-1781(02)00318-9.
- Revheim N, Schechter I, Kim D, Silipo G, Allingham B, Butler P, Javitt DC. Neurocognitive and symptom correlates of daily problem-solving skills in schizophrenia. Schizophrenia Research. 2006;83:237–245. doi: 10.1016/j.schres.2005.12.849.
- Semkovska M, Bedard M-A, Godbout L, Limoge F, Stip E. Assessment of executive dysfunction during activities of daily living in schizophrenia. Schizophrenia Research. 2004;69:289–300. doi: 10.1016/j.schres.2003.07.005.
- Sharland MJ, Gfeller JD. A survey of neuropsychologists' beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology. 2007;22:213–223. doi: 10.1016/j.acn.2006.12.004.
- Slick DJ, Tan JE, Strauss EH, Hultsch DF. Detecting malingering: A survey of experts' practices. Archives of Clinical Neuropsychology. 2004;19:465–473. doi: 10.1016/j.acn.2003.04.001.
- Stern RA, White T. Neuropsychological assessment battery: Administration, scoring, and interpretation manual. Psychological Assessment Resources; Lutz, FL: 2003.
- Sweet JJ, Nelson NW, Moberg PJ. The TCN/AACN 2005 “salary survey”: Professional practices, beliefs, and incomes of U.S. neuropsychologists. The Clinical Neuropsychologist. 2006;20:325–364. doi: 10.1080/13854040600760488.
- Temple RO, Carvalho J, Tremont G. A national survey of physicians' use of and satisfaction with neuropsychological services. Archives of Clinical Neuropsychology. 2006;21:371–382. doi: 10.1016/j.acn.2006.05.002.
- Thornton WL, Deria S, Gelb S, Shapiro RJ, Hill A. Neuropsychological mediators of the links among age, chronic illness, and everyday problem solving. Psychology and Aging. 2007;22:470–481. doi: 10.1037/0882-7974.22.3.470.
- Vellinga A, Smit JH, van Leeuwen E, van Tilburg W, Jonker C. Competence to consent to treatment of geriatric patients: Judgments of physicians, family members, and the vignette method. International Journal of Geriatric Psychiatry. 2004;19:645–654. doi: 10.1002/gps.1139.
- Wechsler D. Wechsler adult intelligence scale-third edition: Administration and scoring manual. The Psychological Corporation; San Antonio, TX: 1997.
- Wechsler D. WISC-IV administration and interpretive manual. The Psychological Corporation; San Antonio, TX: 2003.
- Willis SL. Test manual for the everyday problems test for cognitively challenged elderly. The Pennsylvania State University; University Park, PA: 1993.
- Willis SL, Allen-Burge R, Dolan MM, Bertrand RM, Yesavage J, Taylor JL. Everyday problem solving among individuals with Alzheimer's disease. The Gerontologist. 1998;38:569–577. doi: 10.1093/geront/38.5.569.
- Willis SL, Marsiske M. Manual for the everyday problems test. The Pennsylvania State University; University Park, PA: 1993.
- Woods DC, Patterson MB, Whitehouse PJ. Utility of the judgment questionnaire subtest of the neurobehavioral cognitive status examination in the evaluation of individuals with Alzheimer's disease. Clinical Gerontologist. 2000;21:49–66.