Author manuscript; published in final edited form as: J Clin Exp Neuropsychol. 2013;35(2):160–166. doi: 10.1080/13803395.2012.760535

Judgment of Line Orientation: An Examination of Eight Short Forms

Robert J Spencer 1,*, Carrington R Wendell 1,2, Paul P Giggey 1, Stephen L Seliger 3,4, Leslie I Katzel 4,5, Shari R Waldstein 1,4,5
PMCID: PMC3668441  NIHMSID: NIHMS432943  PMID: 23350928

Abstract

The Judgment of Line Orientation (JLO) test is a commonly used measure of visuospatial perception. Because of its length, several short forms have appeared in the literature. We examined the internal consistency of the JLO and eight of its published short forms among 128 undergraduates, 203 healthy older adults, and 55 chronic kidney disease patients. The full test demonstrated good reliability for traditional neuropsychological assessment, but the majority of short forms were adequate only for screening purposes, where greater measurement error is typically permitted in exchange for brevity. In contrast, a recently developed short form based upon item response theory (Calamia, Markon, Denburg, & Tranel, 2011) demonstrated promise as a stand-alone measure.

Keywords: Judgment of Line Orientation, short forms, reliability, psychometrics, visuospatial function


Judgment of Line Orientation (JLO; Benton, Hamsher, Varney, & Spreen, 1983), a 30-item test of visuospatial perception, is commonly administered in neuropsychological assessment. Whereas many tests of visuospatial functioning involve constructional-motor demands, such as copying (e.g., the Rey Complex Figure Test; Meyers & Meyers, 1995) or assembling blocks (e.g., Block Design from the Wechsler scales; Wechsler, 1987), the JLO is purely visual. In practice, the information provided by the JLO is occasionally offset by an administration time that can approach 15 minutes and that can be frustrating, particularly for older examinees (Strauss, Sherman, & Spreen, 2006). Brief cognitive test batteries and research protocols are often limited by time, making instruments like the JLO inefficient for routine use. Several authors have suggested that shortened versions of the JLO can provide clinicians and researchers with an estimate of visuospatial functioning. Because shortening a test compromises reliability, this project examined the reliability of the full JLO and of the various short forms that have appeared in the literature.

Table 1 lists published reliability coefficients for the JLO and its short forms. The reliability of the standard forms of the JLO has generally been strong, with Cronbach's alpha coefficients ranging from 0.84 to 0.90 in mixed neurologic and psychiatric samples (Benton, Hamsher, Varney, & Spreen, 1978; Qualls, Bliwise, & Stringer, 2000; Vanderploeg, LaLone, Greblo, & Schinka, 1997; Winegarden, Yates, Moses, Benton, & Faustman, 1998; Woodard et al., 1996). Various short forms of the JLO, which consist of between 10 and 20 items drawn from the standard JLO, are less reliable, with Cronbach's alpha and split-half reliability coefficients ranging from 0.61 to 0.82 (Mount, Hogg, & Johnstone, 2002; Qualls et al., 2000; Vanderploeg et al., 1997; Winegarden et al., 1998; Woodard et al., 1996). Many of the short forms appear acceptable in clinical samples, but these psychometric properties do not necessarily transfer to healthy persons or to those with subtle impairments, in whom more accurate performance generally produces less variability in scores and, consequently, lower reliability coefficients. For example, Woodard et al. (1998) found an odd-even correlation of 0.55 among healthy older adults. Although shortened versions of the JLO may reduce administration time with relatively healthy adults, the cost to reliability may be too great.
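As context for comparing the split-half entries (marked # in Table 1) with the alpha entries (marked @), a half-length correlation can be projected onto full-test length with the Spearman-Brown formula. This is a standard psychometric identity offered here only as interpretive background, not a computation reported by the cited studies; applied to the odd-even correlation of .55 noted above, it yields:

\[
r_{\text{full}} = \frac{2\,r_{\text{half}}}{1 + r_{\text{half}}} = \frac{2(.55)}{1 + .55} \approx .71 .
\]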

Table 1.

Published Internal Consistency Data for the Full Versions of the Judgment of Line Orientation (JLO) and its Short Forms

Source | Sample | JLO Version | M (SD) | Reliability
Benton et al. (1978) | 144 general medical and neurologic patients, 49% male, age range = 16–78 years | Full Form H (n=40) | NR | .94 #
 | | Full Form V (n=124) | NR | .89 #
Calamia et al. (2011) | 524 neurological patients with focal brain lesions, 53% male, mean age = 47 (16) / 82 healthy elderly, 45% male, mean age = 73.3 (6.8) | IRT-based Short Form | N/A | N/A
Mount et al. (2002) | 72 traumatic brain injury patients, 66.7% male, mean age = 33.8 (10.8) | Odd Form V | 11.2 (2.6) | .71 #
 | | Even Form V | 11.3 (2.3) |
Qualls et al. (2000) | 100 mixed neurologic patients, 66% male, mean age = 54.7 (16.2) | Full Form V | NR | .90 @
 | | Short Form "Q" | NR | .82 @
 | | Short Form "S" | NR | .81 @
Vanderploeg et al. (1997) | 81 neurologic and psychiatric patients, 99% male, mean age = 48.3 (17.9) | Full Form V (n=60) | 21.1 (6.0) | .87 @
 | | Odd Form V (n=60) | 20.5 (6.3) | .76 @, .81 #
 | | Even Form V (n=60) | 21.8 (6.3) | .77 @
Winegarden et al. (1998) | 230 neurologic and psychiatric patients, 97% male, mean age = 48.9 (14.4) | Full Form V | NR | .84 @
 | | Items 11–30 | NR | .80 @
 | | Items 1–20 | NR | .75 @
 | | Items 1–10 | NR | .61 @
Woodard et al. (1996) | 386 neurologic and psychiatric patients, 48% male, mean age = 52.2 (18.3) | Full Form V | 20.9 (5.4) | .85 @
 | | Odd Form V (doubled) | 20.9 (5.9) | .72 @
 | | Even Form V (doubled) | 20.9 (6.2) | .75 @
Woodard et al. (1998) | 82 healthy older adults, 78% female, mean age = 65.8 (6.7) | Odd Form V | 11.6 (2.4) | .55 #
 | | Even Form V | 11.5 (2.1) |

Note: Papers are identified by first author. Abbreviations: IRT = item response theory; NR = not reported; N/A = not applicable. # = split-half correlation; @ = Cronbach's alpha coefficient.

Of interest, an innovative JLO short form was recently developed by Calamia, Markon, Denburg, and Tranel (2011) using item response theory (IRT), a method that has been successfully applied to other neuropsychological tests (Graves, Bezeau, Fogarty, & Blair, 2004). Based upon data from 524 neurological patients, items were reordered according to difficulty estimates, and basal and ceiling rules were then applied to reduce administration time. An average of 20.4 items was administered, and the mean difference between short and full form scores was 0.60 points. Further, when a clinical cut-off score was applied, classification rates of impairment differed for only 3% of participants. At a minimum, this preliminary evidence suggests that the IRT-based form may offer advantages over other short forms. To our knowledge, Calamia et al.'s findings have yet to be independently replicated.

The present study thus examined the internal consistency of the JLO and seven of its short forms and, where feasible, their classification accuracy, across three samples: undergraduate students, healthy older adults, and older adults with chronic kidney disease. We also examined Calamia et al.'s (2011) IRT-based short form in terms of (a) time savings and (b) classification success relative to the full JLO. These analyses permitted a critical examination of JLO short forms across multiple samples.

Method

Participants

The present study used data from three samples: undergraduate students, cognitively healthy older adults, and older adults with chronic kidney disease, a condition associated with cognitive impairment (Seliger et al., 2004).

Sample A: Sample A consisted of 128 undergraduate students [80% male; mean age = 21.5 years (SD = 4.4); education = 14.3 years (SD = 1.9)] enrolled in a study of memory and executive functioning. Participants were free of major medical, neurological, and psychiatric disease.

Sample B: Sample B consisted of 203 stroke- and dementia-free, community-dwelling older adults [56% male; mean age = 66.3 years (SD = 6.9); mean education = 16.3 years (SD = 2.8)] enrolled in a study of cardiovascular risk factors, brain, and cognition. Participants were free of major medical (except mild to moderate hypertension), neurological, and psychiatric disease. Participants were initially screened for dementia with the Mini Mental State Examination (Folstein, Folstein, & McHugh, 1975) at the Geriatric Assessment Clinic at the Baltimore Veterans Affairs Medical Center.

Sample C: Sample C consisted of 55 stroke- and dementia-free, community-dwelling older adults [91% male; mean age = 71.2 years (SD = 8.1); mean education = 12.9 years (SD = 2.8)] enrolled in a study of chronic kidney disease, brain, and cognition. Participants were free of neurological disease and serious mental illness. No patients required dialysis or transplantation.

Materials

The JLO is presented in a flip-book format in which two lines appear on the top page and a standard fan-shaped array of 11 lines appears on the bottom page. Examinees must identify the two lines from the bottom page that match the angles of the two lines on the top page. The two standard versions of the JLO, Forms H and V, contain the same test items presented in different sequences.

Eight short forms of JLO Form V, ranging from 10 to 20 items, were identified from the literature (see Table 1). These versions included the odd items, the even items, items 1–10, items 1–20, items 11–30, Form Q (items 2, 6, 7, 9, 12, 16, 17, 19, 20, 21, 22, 24, 26, 28, 30), and Form S (items 1, 3, 4, 5, 8, 10, 11, 13, 14, 15, 18, 23, 25, 27, 29). The latter two short forms were constructed by Qualls et al. (2000), who endeavored to create short forms of equivalent length and difficulty. Lastly, the short form based upon Calamia et al.'s (2011) findings was generated. Calamia and colleagues utilized IRT to reorder items according to item difficulty, designated item 16 as the start item, and then applied 6-item basal and ceiling rules. We applied these rules directly to the three samples in the present study. Because the 6-item basal and ceiling rules resulted in a mean of 20.4 items, a decrease of only 9.6 items from the full 30-item form, we also applied 3-item basal and ceiling rules to examine the costs and benefits of a shorter discontinuation rule.
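To make the discontinuation logic concrete, the following is a minimal Python sketch of one way a k-item basal/ceiling rule can be applied post hoc to full-form item data, as was done in the present analyses. The function name, the crediting scheme (items easier than the basal run scored as correct, items harder than the ceiling run scored as incorrect), and the example response vector are illustrative assumptions; this is not Calamia et al.'s (2011) scoring code.

```python
def apply_basal_ceiling(responses, start, k):
    """Score a full-form item record under a basal/ceiling discontinuation rule.

    responses : list of 0/1 item scores, ordered from easiest to hardest
                (an assumption about the data layout).
    start     : 0-based index of the designated start item.
    k         : run length used for both the basal and the ceiling rule.

    Returns (estimated_total_score, number_of_items_administered).
    """
    administered = []

    # Forward from the start item until k consecutive failures (ceiling rule).
    consec_fail = 0
    for i in range(start, len(responses)):
        administered.append(i)
        consec_fail = consec_fail + 1 if responses[i] == 0 else 0
        if consec_fail == k:
            break

    # Basal rule: if the first k administered items were all passed, credit every
    # easier item; otherwise work backward until k consecutive passes are found.
    first_k = [responses[i] for i in administered[:k]]
    basal_credit = start
    if not (len(first_k) == k and all(first_k)):
        consec_pass = 0
        basal_credit = 0
        for i in range(start - 1, -1, -1):
            administered.append(i)
            consec_pass = consec_pass + 1 if responses[i] == 1 else 0
            if consec_pass == k:
                basal_credit = i  # items 0 .. i-1 are credited as passed
                break

    # Items above the ceiling count as 0; items below the basal count as 1.
    observed = sum(responses[i] for i in administered)
    return basal_credit + observed, len(administered)


# Hypothetical 30-item record ordered from easiest to hardest (1 = correct).
resp = [1] * 17 + [1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
score, n_administered = apply_basal_ceiling(resp, start=15, k=6)
print(score, n_administered)  # for this record: 21 points from 20 administered items
```

Applying a rule of this kind to existing full-form item data is what allows a basal/ceiling short form to be evaluated without a separate administration.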

Procedure

Participants in all three samples were administered batteries of neurocognitive instruments, including the JLO. Samples A and B were tested at the University of Maryland, Baltimore County, whereas sample C was tested at the General Clinical Research Center at the University of Maryland Medical Center in Baltimore, Maryland.

All statistical analyses were performed using SPSS, version 17.0. Cronbach's alpha coefficients were used to examine the internal consistency of seven of the eight short forms. The composition of the eighth form (Calamia et al., 2011), with variable numbers of items administered, precluded examination of internal consistency in this way. Alpha coefficients were interpreted as follows: alpha ≥ 0.9 = "very high," 0.9 > alpha ≥ 0.8 = "high," 0.8 > alpha ≥ 0.7 = "adequate," 0.7 > alpha ≥ 0.6 = "marginal," and 0.6 > alpha ≥ 0.5 = "low" (Strauss et al., 2006). Of note, an internal consistency coefficient is probably an optimistic estimate of a test's reliability. Although the above guidelines were applied to avoid being overly conservative in our interpretations, Nunnally and Bernstein (1994) provide a sound rationale for using 0.9 as a "bare minimum" and 0.95 as "the desirable standard" (p. 265). Pearson correlations were computed to investigate the split-half reliability of odd vs. even items and of Form "Q" vs. Form "S" items, as well as the strength of associations between each short JLO form and the full form. T-tests and item-level examination of the percentage of correct responses were used to compare item difficulty across test forms. Descriptive statistics, including frequencies, were used to examine the classification success (i.e., impaired vs. non-impaired) of the short forms relative to classification according to the full JLO. We applied a cut-off score of ≥21 as unimpaired, consistent with common usage (Lezak, Howieson, & Loring, 2004; Strauss et al., 2006) and with Calamia et al.'s (2011) original publication. Misclassification rates were compared using McNemar's test for correlated proportions. Lastly, for descriptive purposes, Pearson correlations and one-way analyses of variance (ANOVA) were used to examine JLO performance differences across sex, age, and level of education.
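The analyses reported here were run in SPSS. Purely to illustrate the quantities involved, the following is a minimal Python sketch of Cronbach's alpha and a continuity-corrected McNemar test on paired misclassification indicators; the data, variable names, and the doubled odd/even scores are hypothetical examples, not the study's code.

```python
import numpy as np
from scipy.stats import chi2

def cronbach_alpha(items):
    """items: 2-D array (examinees x items) of 0/1 scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def mcnemar(miss_a, miss_b):
    """Continuity-corrected McNemar test on paired 0/1 misclassification flags."""
    miss_a, miss_b = np.asarray(miss_a), np.asarray(miss_b)
    b = int(np.sum((miss_a == 1) & (miss_b == 0)))  # discordant pairs
    c = int(np.sum((miss_a == 0) & (miss_b == 1)))
    if b + c == 0:
        return 0.0, 1.0
    stat = max(abs(b - c) - 1, 0) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

# Hypothetical data: 100 examinees x 30 dichotomous items (random, so alpha ~ 0).
rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(100, 30))
print(round(cronbach_alpha(items), 2))

# Misclassification relative to the full form at the conventional cut-off
# (a total score of 21 or more is treated as unimpaired).
full = items.sum(axis=1)
odd = items[:, ::2].sum(axis=1) * 2    # doubled 15-item odd form
even = items[:, 1::2].sum(axis=1) * 2  # doubled 15-item even form
miss_odd = (odd < 21) != (full < 21)
miss_even = (even < 21) != (full < 21)
print(mcnemar(miss_odd, miss_even))
```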

Results

Table 2 displays the means and reliability coefficients for the full JLO and its short forms. The alpha coefficient for the full JLO was 0.84 in sample A and 0.81 in samples B and C. The internal consistency coefficients of the short forms ranged from 0.60 to 0.77. The odd-even split-half correlations ranged from r = 0.72 to 0.75, and the Form "S" vs. Form "Q" split-half correlations ranged from r = 0.67 to 0.76 across the three samples.

Table 2.

Descriptive and Reliability Data in Young Adults, Older Adults, and Patients with Chronic Kidney Disease (CKD)

Within each sample, cell entries are M (SD), Cronbach's alpha, and mean inter-item correlation.

JLO Form | Sample A: Young Adults (n=128) | Sample B: Older Adults (n=203) | Sample C: CKD Patients (n=55)
Full Form | 23.8 (4.9), .84, .15 | 24.3 (4.3), .81, .13 | 18.7 (5.4), .81, .13
1–10 | 8.9 (1.7), .74, .24 | 9.1 (1.3), .60, .16 | 7.4 (2.0), .64, .16
1–20 | 17.2 (2.9), .77, .16 | 17.5 (2.5), .70, .13 | 14.1 (3.6), .75, .14
11–30 | 14.8 (3.5), .75, .13 | 15.2 (3.5), .77, .14 | 11.3 (3.9), .76, .14
Form "Q" | 11.9 (2.5), .67, .13 | 12.3 (2.2), .65, .12 | 9.5 (2.8), .66, .11
Form "S" | 11.9 (2.7), .74, .17 | 12.0 (2.4), .69, .13 | 9.2 (3.0), .72, .15
Odd | 11.9 (2.8), .76, .18 | 12.1 (2.4), .68, .12 | 9.3 (2.9), .69, .14
Even | 11.8 (2.4), .65, .12 | 12.3 (2.3), .65, .12 | 9.3 (2.8), .65, .11
IRT-based (basal/ceiling of 6) | 24.2 (5.1), N/A, N/A | 24.7 (4.5), N/A, N/A | 18.6 (6.3), N/A, N/A
IRT-based (basal/ceiling of 3) | 23.9 (5.9), N/A, N/A | 24.3 (5.3), N/A, N/A | 18.3 (6.9), N/A, N/A
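As a rough check on the entries in Table 2, the standardized (Spearman-Brown) form of coefficient alpha relates the number of items k and the mean inter-item correlation; for the full form in Sample A it reproduces the reported coefficient, although the tabled values are raw-score alphas, so the agreement is only approximate:

\[
\alpha_{\text{std}} = \frac{k\,\bar r}{1 + (k - 1)\,\bar r} = \frac{30(.15)}{1 + 29(.15)} \approx .84 .
\]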

T-tests revealed no significant differences in difficulty among the four 15-item short forms (i.e., odd items, even items, Form "Q," and Form "S"). The other short forms, however, were not of equal difficulty, owing to the pattern of ascending item difficulty across the full version. For example, participants responded correctly to items 24 and 27 less than 50% of the time, whereas items 4, 6, and 11 were answered correctly by over 95% of participants. As a result, the short forms consisting of items 1–10 and items 1–20 were easier than both the standard test and the short form consisting of the final 20 items (items 11–30).

For the Calamia et al. (2011) IRT-based form, in which items were ordered according to difficulty level and basal/ceiling rules of 6 were applied, the average number of items administered was 20.5 (SD = 5.2, range = 12–30) across all three samples, a substantial time savings relative to the full 30-item form. One-way ANOVA with post hoc Tukey HSD comparisons showed that sample C required significantly more items (M = 23.4, SD = 5.5) than samples A (M = 20.1, SD = 5.0) or B (M = 20.1, SD = 5.1), F(2, 383) = 10.1, p < .01. When basal/ceiling rules of 3 were applied instead, an average of only 14.8 items (SD = 3.8) was required.

The Calamia short form correlated at 0.98 with the full JLO, higher than any other short form examined (range = .806–.966). Table 3 summarizes the correlations between each JLO short form and the full form, as well as misclassification rates for the relevant short forms (i.e., those with available or calculable impairment cut-points) across all three samples. The Calamia short form demonstrated the lowest misclassification rate: classifications of participants as impaired vs. unimpaired (using a cut-off score of ≥21 as unimpaired) across all three samples differed in only 2.8% of cases, an estimate nearly identical to that reported by Calamia and colleagues (3%). Specifically, 99 participants were classified as impaired according to the full JLO, and 88 of them were also classified as impaired using Calamia's method. Conversely, all 287 participants designated as unimpaired on the full JLO were also designated as unimpaired by the Calamia short form. Within individual samples, 4, 5, and 2 participants had differing classifications in samples A, B, and C, respectively. When basal/ceiling rules were reduced from 6 to 3, the misclassification rate rose substantially, to 8.8%. McNemar's tests for correlated proportions indicated that the misclassification rate of the Calamia short form was significantly lower than that of the form with a basal/ceiling rule of 3 (p < .01) and that of items 11–30 (p = .02), which had the next lowest misclassification rate.
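These counts imply the reported rate directly: 99 − 88 = 11 of the 386 participants were classified differently by the full form and the basal/ceiling-of-6 short form, and none of the 287 unimpaired cases were discordant, so

\[
\frac{(99 - 88) + 0}{386} = \frac{11}{386} \approx 0.028 = 2.8\%.
\]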

Table 3.

Comparison of JLO Short Forms and JLO Full Form V: Correlations and Misclassification Rates across All Samples

Within each sample, cell entries are the correlation with the full JLO and the misclassification rate (%).

JLO Short Form | Sample A: Young Adults (n=128) | Sample B: Older Adults (n=203) | Sample C: CKD Patients (n=55) | Total Sample (n=386)
1–10 * | .850, N/A | .710, N/A | .790, N/A | .806, N/A
1–20 * | .935, N/A | .893, N/A | .942, N/A | .929, N/A
11–30 | .969, 7.8 | .965, 6.4 | .949, 5.5 | .966, 6.7
Form "Q" | .934, 9.4 | .915, 9.9 | .906, 16.4 | .930, 10.6
Form "S" | .944, 8.6 | .929, 10.8 | .918, 10.9 | .940, 10.1
Odd | .945, 11.7 | .931, 10.3 | .939, 7.3 | .944, 10.4
Even | .926, 6.3 | .922, 7.4 | .931, 12.7 | .936, 7.8
IRT-based (basal/ceiling of 6) | .978, 3.1 | .983, 2.5 | .976, 3.8 | .982, 2.8
IRT-based (basal/ceiling of 3) | .922, 7.0 | .916, 7.9 | .870, 16.4 | .919, 8.8
* No appropriate cut-off for calculation of misclassification rate due to (a) absence of published cut-offs and (b) discrepant difficulty of these short forms relative to others (thereby precluding application of a simple multiplicative conversion).

For the 20-item form (items 11–30), a cut-off of ≥13 as unimpaired (Winegarden et al., 1998) was used to calculate the misclassification rate. For the 15-item forms (odd, even, "Q," and "S"), scores were doubled prior to application of the cut-off of ≥21 as unimpaired.

Associations between JLO performance and demographic characteristics

ANOVAs demonstrated that males obtained higher scores than females on the standard 30-item JLO in samples A [M = 25.9 (3.6) vs. 23.2 (5.0); F = 6.23, p = .01] and B [M = 25.5 (3.7) vs. 22.9 (4.6); F = 19.80, p < .001]. Because sample C contained only five women, an adequate test of the effect of sex on JLO performance was not possible in that sample. Performance was not significantly correlated with age among men (r = −0.05, ns) or women (r = −0.06, ns) across the healthy participants of samples A and B, nor among the participants of sample C (r = −0.18, ns). JLO performance was correlated with years of education in samples B (r = .24, p = .001) and C (r = .43, p = .001) but not in sample A, presumably because of the restricted range of education among the undergraduates.

Discussion

The JLO is a widely used test of visuospatial perception that has been criticized for excessive administration time (Strauss et al., 2006). Results from the current study indicate that the standard 30-item JLO has "high" reliability among diverse samples (Strauss et al., 2006). Practitioners desiring shorter administration times have several options for shortened versions of the JLO. Findings from the current study strongly suggest that adopting an item response theory approach, as was initially validated by Calamia and colleagues (2011), yields a short form that is psychometrically superior to other published short forms. The present study also found associations between JLO performance and demographic characteristics to be highly similar to those in the published literature. In general terms, men performed better than women, education was positively related to performance, and age was unrelated to performance.

Prolonged test administration, particularly when impairment is obvious early in the process, can be tedious and laborious for examiners and cognitively taxing, and occasionally distressing, for examinees. Short forms of standard neuropsychological tests are attractive to the degree that they shorten administration time while minimizing the loss of useful information. The IRT-based approach of Calamia and colleagues (2011) produced a short form that sacrificed little psychometric information. In a test like the JLO, an IRT approach takes advantage of the fact that test items are not of equivalent difficulty. Unlike other short forms, which predetermine which items to omit, an IRT approach omits items based on examinee performance. Accordingly, this approach maximizes efficiency by omitting the least informative items for a given examinee. Among the short forms examined, the Calamia form had the largest correlation with the standard JLO and the lowest misclassification rate for "impaired" scores. Findings from the samples examined in the current study closely resemble those obtained by Calamia and colleagues: the IRT approach reduced administration time by nearly one-third, produced a score that correlated .98 with the full JLO, and led to concordant classification decisions for 97% of individuals. The other short forms of the JLO had lower correlations with the full JLO and produced discrepant decisions in at least 7.8% of cases.

All short forms of the JLO should also be examined for test-retest reliability. In clinical and research settings, repeated testing is often performed to track changes in functioning over time. Test-retest data, including information on expected performance changes over repeated administrations, are needed to understand the basic psychometric properties of these forms and to facilitate clinical decision-making.

Additional research is also needed to examine the reliability and validity of the Calamia method when administered in its reordered form. Neither the current examinees nor those of Calamia et al. were administered the reordered JLO (i.e., reordering was applied post hoc to full-form data). Further, it is assumed that administering practice items is sufficient to introduce the test and that briefer administration time is preferable to examinees. Research is needed to test these assumptions, as it is possible that easy early items help participants become comfortable with the task and boost their confidence. Overall, although each of the JLO short forms considered herein may be sufficient for screening purposes, the Calamia IRT-based short form demonstrates the most promise as a stand-alone measure within neuropsychological batteries.

Acknowledgments

Funding: This work was supported by National Institutes of Health (NIH) grants [grant numbers R29 AG15112, 5RO1 AG015112, K24 AG00930, K23 DK063079]; a Veterans Affairs Merit Grant; and the Department of Veterans Affairs Baltimore Geriatric Research Education and Clinical Center.

References

  1. Benton AL, Hamsher K, Varney N, Spreen O. Visuospatial judgment: A clinical test. Archives of Neurology. 1978;35:364–367. doi: 10.1001/archneur.1978.00500300038006.
  2. Benton AL, Hamsher K, Varney N, Spreen O. Contributions to Neuropsychological Assessment: A Clinical Manual. New York: Oxford University Press; 1983.
  3. Calamia M, Markon K, Denburg NL, Tranel D. Developing a short form of Benton's Judgment of Line Orientation Test: An item response theory approach. The Clinical Neuropsychologist. 2011;25:670–684. doi: 10.1080/13854046.2011.564209.
  4. Folstein MF, Folstein SE, McHugh PR. Mini-mental state: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research. 1975;12:189–198. doi: 10.1016/0022-3956(75)90026-6.
  5. Graves RE, Bezeau SC, Fogarty J, Blair R. Boston Naming Test short forms: A comparison of previous forms with new item response theory based forms. Journal of Clinical and Experimental Neuropsychology. 2004;26:891–902. doi: 10.1080/13803390490510716.
  6. Lezak MD, Howieson DB, Loring DW. Neuropsychological Assessment. 4th ed. New York: Oxford University Press; 2004.
  7. Lindgren SD, Benton AL. Developmental patterns of visuospatial judgment. Journal of Pediatric Psychology. 1980;5:217–225. doi: 10.1093/jpepsy/5.2.217.
  8. Mitrushina M. Cognitive screening methods. In: Grant I, Adams KM, editors. Neuropsychological Assessment of Neuropsychiatric and Neuromedical Disorders. 3rd ed. New York: Oxford University Press; 2009. pp. 101–126.
  9. Montse A, Pere V, Carme J, Francesc V, Eduardo T. Visuospatial deficits in Parkinson's disease assessed by Judgment of Line Orientation Test: Error analyses and practice effects. Journal of Clinical and Experimental Neuropsychology. 2001;23:592–598. doi: 10.1076/jcen.23.5.592.1248.
  10. Mount DL, Hogg JR, Johnstone B. Applicability of the 15-item versions of the Judgment of Line Orientation Test for individuals with traumatic brain injury. Brain Injury. 2002;16:1051–1055. doi: 10.1080/02699050210154259.
  11. Nunnally J, Bernstein IH. Psychometric Theory. 3rd ed. New York: McGraw-Hill; 1994.
  12. Qualls CE, Bliwise NG, Stringer AY. Short forms of the Benton Judgment of Line Orientation: Development and psychometric properties. Archives of Clinical Neuropsychology. 2000;15:159–163.
  13. Seliger SL, Siscovick DS, Stehman-Breen CO, Gillen DL, Fitzpatrick A, Bleyer A, et al. Moderate renal impairment and risk of dementia among older adults: The Cardiovascular Health Cognition Study. Journal of the American Society of Nephrology. 2004;15:1904–1911. doi: 10.1097/01.asn.0000131529.60019.fa.
  14. Strauss E, Sherman EMS, Spreen O. A Compendium of Neuropsychological Tests: Administration, Norms, and Commentary. New York: Oxford University Press; 2006.
  15. Vanderploeg RA, LaLone LV, Greblo P, Schinka JA. Odd-even short forms of the Judgment of Line Orientation test. Applied Neuropsychology. 1997;4:244–246. doi: 10.1207/s15324826an0404_6.
  16. Wechsler D. Manual for the Wechsler Memory Scale-Revised. San Antonio, TX: The Psychological Corporation; 1987.
  17. Winegarden BJ, Yates BL, Moses JA, Benton AL, Faustman WO. Development of an optimally reliable short form for Judgment of Line Orientation. The Clinical Neuropsychologist. 1998;12:311–314.
  18. Woodard JL, Benedict RH, Roberts VJ, Goldstein FC, Kinner KM, Capruso DX, Clark AN. Short-form alternatives to the Judgment of Line Orientation Test. Journal of Clinical and Experimental Neuropsychology. 1996;18:898–904. doi: 10.1080/01688639608408311.
  19. Woodard JL, Benedict RH, Salthouse TA, Toth JP, Zgaljardic DJ, Hancock HE. Normative data for equivalent, parallel forms of the Judgment of Line Orientation test. Journal of Clinical and Experimental Neuropsychology. 1998;20:457–462. doi: 10.1076/jcen.20.4.457.1473.
