Abstract
Objective.
This study compared the relationship between computer experience and performance on computerized cognitive tests and a traditional paper-and-pencil cognitive test in a sample of older adults (N = 634).
Method.
Participants completed computer experience and computer attitudes questionnaires, three computerized cognitive tests (Useful Field of View (UFOV) Test, Road Sign Test, and Stroop task) and a paper-and-pencil cognitive measure (Trail Making Test). Multivariate analysis of covariance was used to examine differences in cognitive performance across the four measures between those with and without computer experience after adjusting for confounding variables.
Results.
Computer experience had a significant main effect on all cognitive measures, but the effect sizes were similar across the computerized and paper-and-pencil formats. After controlling for computer attitudes, the relationship between computer experience and UFOV performance was fully attenuated.
Discussion.
Findings suggest that computer experience is not uniquely related to performance on computerized cognitive measures compared with paper-and-pencil measures. The full attenuation of the relationship between computer experience and UFOV by computer attitudes may imply that motivational factors are more influential on UFOV performance than computer experience. Our findings support the hypothesis that computer use is related to cognitive performance and that this relationship is not stronger for computerized cognitive measures. Implications and directions for future research are provided.
Key Words: Computer attitudes, Computerized cognitive testing, Computer experience, Older adults.
Introduction
Concurrent with the aging of the population is an increasing reliance on computers and related technologies in society. Historically, the use of computers has been lower among older adults, but they are now one of the fastest growing populations of computer users, and this trend will undoubtedly continue as the “baby-boomers” age (Hart, Chaparro, & Halcomb, 2008; Newburger, 2001). Thus, examining the complex relationship between older adults and computer use is a particularly relevant research area. Two main areas of research in this relatively new field are relevant to the scope of this article. The first, more general, area involves research examining differences in cognitive performance between older computer users and nonusers. The second, more specific, area involves examining the potential confounding effect of previous computer experience on computerized cognitive tests in particular, which has been studied to a lesser extent.
Research examining the relationship between computer use/experience and cognitive performance among older adults has widely used traditional paper-and-pencil measures as outcomes. The theoretical framework underlying this research typically involves concepts such as the “Use it or Lose it” theory (Hultsch, Hertzog, Small, & Dixon, 1999; Salthouse, 1991) and the theory of “Cognitive Reserve” (Scarmeas & Stern, 2003; Stern, 2002), which posit that individuals who engage in cognitively stimulating activities (e.g., using the computer in some capacity) are more likely to maintain optimal cognitive functioning and better able to tolerate underlying brain pathology into later life than those who do not engage in such enriching activities. However, the findings of this research are inconsistent, with some studies finding a positive relationship between computer use (and other mentally stimulating activities) and cognitive performance (Ball et al., 2002; Verghese et al., 2003), whereas others have not (Salthouse, Berish, & Miles, 2002). One recent study found, in a sample of young and older adults, that those with more computer experience performed better on traditional paper-and-pencil cognitive measures administered via the telephone, particularly those requiring executive functions (Tun & Lachman, 2010). The directionality of this relationship is not clear; it may be that individuals who are inherently more cognitively “fit” are more likely to use the computer, thus yielding higher performance, or it could be that computer use provides mental stimulation and thus improves cognitive functioning. An important consideration with regard to the second explanation is the assumption that simply using a computer provides mental stimulation. This issue is elaborated further in the Discussion.
Slegers, van Boxtel, and Jolles (2009) examined the effects of a computer and internet intervention on cognitive abilities in a sample of older adults with no prior computer experience. Results revealed no subjective or objective cognitive benefit from the intervention. These results support the hypothesis that older adults with better cognitive abilities tend to use the computer more (due to socioeconomic status/occupation, better computer attitudes, or some other factor) and do not support the hypothesis that computer use helps improve or maintain cognitive abilities. Altogether, these findings suggest that computer use may be related to cognitive performance in older adults; however, whether those with higher cognitive abilities are simply more likely to use the computer, or computer use helps maintain or improve cognitive abilities, remains unclear and likely depends on the types of activities being performed on the computer.
As computer programming has become easier and more “user-friendly,” researchers have begun to employ computer technology in cognitive testing at an increasing rate (Wild, Howieson, Webbe, Seelye, & Kaye, 2008). Although integrating computers into cognitive aging research offers potential benefits (e.g., objectivity, accurate timing of stimuli), it is still a relatively new technique in need of evaluation (Wild et al., 2008). An inherent limitation of computerized cognitive testing in older adult populations is the lack of psychometric data (i.e., normative, reliability, and validity data) compared with traditional paper-and-pencil measures. Furthermore, for those computerized cognitive tests that have been adapted from paper-and-pencil versions (e.g., the Stroop Task and the Wisconsin Card Sorting Test), there is no consistent agreement on their equivalency, with some studies finding equivalence (Collerton et al., 2007; Fortuny & Heaton, 1996; Wagner & Trentini, 2009) and others not (Feldstein et al., 1999; Hinkin, Castellon, Hardy, Granholm, & Siegle, 1999; Steinmetz, Brunner, Loarer, & Houssemand, 2010; Tien et al., 1996), suggesting that computer experience (or lack thereof) may affect performance (McDonald, 2002), especially among older adults.
In contrast to Tun and Lachman’s aforementioned study examining the relationship between computer use and telephone-administered paper-and-pencil cognitive measures, Iverson, Brooks, Ashton, Johnson, and Gualtieri (2009) compared computerized cognitive test performance between middle-aged individuals with “some” and “frequent” computer use. They found significant differences between the groups, with frequent computer users performing better on measures of psychomotor speed, reaction time, complex attention, and cognitive flexibility. Although this study used a relatively small convenience sample of middle-aged adults, these findings indicate that the relationship between computer experience and computerized cognitive test performance should be further analyzed in larger samples of older adults.
Although computer use is increasing among older adults, studies have shown that older adults have poorer computer attitudes and less computer experience than their middle-aged and younger counterparts (Czaja & Sharit, 1998). Studies have suggested that these differences are likely due to cohort effects, with compositional or socioeconomic factors (i.e., education, race, gender, employment, and income) explaining more of the variance in older adults’ lower computer experience than age alone (Cutler, Hendricks, & Guyer, 2003). Thus, trends in computer use among older adults are changing, and as younger cohorts age, they will likely maintain their computer use into older age, so that the patterns seen in future older cohorts will resemble those of middle-aged adults today. Nonetheless, the lack of experience and poorer attitudes toward computers in current older cohorts remain a concern, as they could manifest in poorer computerized cognitive test performance.
Purpose
The purpose of the current exploratory study was to further examine computer experience and its relationship to cognitive performance among older adults, specifically differences in cognitive test performance between older adults with and without computer experience. The first aim was to examine differences between computer users and nonusers in computerized cognitive test performance on processing speed, reaction time, and executive functioning tasks, and the second aim was to compare these differences with performance on a paper-and-pencil measure (the Trail Making Test), as no studies have conducted such a comparison to the authors’ knowledge. The justification for using the Trail Making Test as the paper-and-pencil dependent variable was twofold. First, the test measures visual attention and executive function (Reitan, 1958; Stern & Prohaska, 1996), making it comparable in content to our computerized cognitive measures. Second, the Trail Making Test is scored as the time needed to complete the measure, making it comparable in scoring to our computerized measures, which also yield time-based scores.
We hypothesized that if the relationship between computer experience and performance were similar for the computerized and paper-and-pencil measures, this would support the findings of Tun and Lachman, suggesting that individuals who frequently use the computer tend to have higher cognitive test performance regardless of format (computerized vs. paper-and-pencil). If the relationship between computer experience and the computerized cognitive measures were stronger than that with the paper-and-pencil measure, this would imply that computerization itself affects performance. The third aim was to examine whether computer attitudes would attenuate any of the potential relationships between computer experience and computerized cognitive test performance.
Method
Participants and Procedure
This study used baseline data from the Staying Keen in Later Life study (Edwards, Wadley et al., 2005). Eight hundred and ninety-seven community-dwelling older adults were recruited from Western Kentucky University and the University of Alabama at Birmingham using newspaper advertisements, flyers at community organizations, letters, and word-of-mouth referral among participants. Inclusion criteria were adequate vision, hearing, and relatively intact cognition (see Screening Measures for more information). After screening, the sample consisted of 634 eligible older adults. Participants completed a 2.5-hr baseline assessment of demographic and health measures, cognitive measures, and questionnaires measuring computer experience and computer attitudes.
Measures
Screening measures.—The following measures were administered to determine eligibility for the study; eligibility required grossly intact mental functioning and adequate vision and hearing. These measures were not used in the analyses.
Mini-Mental State Examination.—The Mini-Mental State Examination (MMSE) is a brief measure of overall cognitive status. Scores range from 0 to 30, with higher scores indicating better cognitive functioning (Folstein, Folstein, & McHugh, 1975). A score of 23 or higher is representative of generally intact cognitive functioning and was thus used as the screening cutoff in this study.
Far visual acuity.—Corrected far visual acuity was assessed using the Snellen eye chart. Inclusion criteria for the study included a Snellen far visual acuity score of 20/80 or better in order to screen for the effects of poor vision on cognitive performance.
Contrast sensitivity.—Contrast sensitivity was assessed using the Pelli–Robson contrast sensitivity chart. In order to be eligible for the current study, participants were required to have a score of 1.35 log10 or better.
Hearing.—Hearing thresholds were examined using the GSI-17 Audiometer. A threshold of 40 dB or better (with hearing aids if applicable) was required in order to be eligible for the study.
Covariates.—The following measures were administered to gather demographic and health information in order to control for any potential confounds. These measures included self-reported measures of demographics, health, and depressive symptoms.
Demographic questionnaire.—Participants were administered this questionnaire in order to acquire baseline demographic information such as age, gender, education, and race. Gender was coded dichotomously, with 1 = man and 0 = woman. Education was coded on a continuous scale ranging from 1 (1st grade) to 20 (doctoral degree). Race was coded dichotomously (1 = Caucasian, 2 = African American).
Health questionnaire.—Participants were asked to report the presence of a variety of medical conditions. Each health condition was dichotomized for use in the analyses (0 = did not have the condition, 1 = had the condition). Only those conditions that were significantly correlated with cognitive test performance were used in subsequent analyses (i.e., diabetes, hypertension, and cataracts).
Depression scale.—Participants were administered the Center for Epidemiological Studies Depression Scale (CES-D). The CES-D consists of 20 items addressing a variety of depressive symptoms (e.g., inability to sleep). Participants were asked to indicate the frequency at which they experienced each symptom over the past week. Scores can range from 0 to 60, with higher scores indicative of more depressive symptomatology (Radloff, 1977). A score of 16 or higher is used to classify someone as depressed (Katz et al., 1996). Internal consistency is quite high for this measure (Cronbach’s α = 0.88; Clark, Mahoney, Clark, & Eriksen, 2002).
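As an illustration only, a minimal sketch of this scoring scheme, assuming the standard CES-D response coding (0–3 per item) and the conventional reverse-keying of the four positively worded items:

```python
# Minimal sketch of CES-D scoring as described above. Assumes each of
# the 20 responses is coded 0-3 (frequency over the past week) and that
# the four positively worded items (4, 8, 12, 16) are reverse-scored.
REVERSED_ITEMS = {4, 8, 12, 16}  # 1-based item numbers

def score_cesd(responses: list[int]) -> tuple[int, bool]:
    """Return the total score (0-60) and a depression flag (>= 16 cutoff)."""
    assert len(responses) == 20 and all(0 <= r <= 3 for r in responses)
    total = sum(3 - r if item in REVERSED_ITEMS else r
                for item, r in enumerate(responses, start=1))
    return total, total >= 16
```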
Computer Questionnaires
Computer Experience Questionnaire.—Participants were given a self-report questionnaire measuring computer experience. At the start of the questionnaire, those who reported that they had never used a computer did not continue and were given a score of zero. Those who indicated that they had used a computer before were questioned further about the frequency and duration of their computer use and their overall breadth of knowledge and proficiency with computers (e.g., checking the computer operations with which they were proficient). Scores range on a continuous scale from 0 to 45, with higher scores reflecting greater self-reported computer use and proficiency (Czaja & Sharit, 1998). In order to examine group differences on this variable, this measure was dichotomized (0 = no computer experience, 1 = any computer experience).
Computer Attitudes Questionnaire.—Attitudes toward computers were assessed using a 35-item self-report measure with items addressing comfort, efficacy, control, gender equality, dehumanization, utility, and interest. Items were scored on a five-point Likert scale, with reverse scoring for negatively worded items. A composite score was calculated, with possible scores ranging from 35 to 175; higher scores indicate a more positive attitude toward computers (Jay & Willis, 1992). In the current study, internal consistency was quite high for this measure (Cronbach’s α = 0.96).
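A minimal sketch of this composite scoring follows; the set of reverse-keyed item numbers below is purely illustrative, as the actual item key belongs to Jay and Willis (1992) and is not reproduced here:

```python
# Sketch of the attitudes composite: 35 Likert items coded 1-5,
# negatively worded items reverse-scored (6 - response), then summed to
# a 35-175 composite. NEGATIVELY_WORDED is hypothetical, not the
# published item key.
NEGATIVELY_WORDED = {2, 5, 9}  # hypothetical 1-based item numbers

def score_attitudes(responses: list[int]) -> int:
    """Sum 35 Likert responses (1-5), reverse-scoring negative items;
    higher composites (max 175) reflect more positive attitudes."""
    assert len(responses) == 35 and all(1 <= r <= 5 for r in responses)
    return sum(6 - r if item in NEGATIVELY_WORDED else r
               for item, r in enumerate(responses, start=1))
```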
Cognitive Measures
The Useful Field of View Test.—The Useful Field of View (UFOV) test is a computerized measure of visual processing speed and attention (Edwards, Vance et al., 2005). The version used in this study was administered with a touch screen and included four increasingly complex subtests. In each subtest, stimuli were presented at varying durations (17–500 ms), and performance was tracked to home in on the briefest duration at which participants could accurately process visual information 75% of the time. Scores are reported in milliseconds; thus, lower scores (i.e., fewer milliseconds) indicate faster processing speed. For this study, a total score was used, computed as the sum of the four subtest scores. The internal consistency between the four subtests in the current study was adequate (Cronbach’s α = 0.68). The total score for this measure has good test–retest reliability (r = .74–.88; Edwards, Vance et al., 2005).
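A minimal sketch of the total-score computation described above (the adaptive presentation procedure itself is proprietary to the test and is not reproduced):

```python
# Each subtest yields the briefest display duration (ms) at which the
# participant processed the stimuli correctly 75% of the time, bounded
# by the 17-500 ms presentation range. The total is the sum of the four
# subtest thresholds, so lower totals reflect faster processing.
def ufov_total(subtest_thresholds_ms: list[float]) -> float:
    assert len(subtest_thresholds_ms) == 4
    return sum(min(max(t, 17.0), 500.0) for t in subtest_thresholds_ms)
```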
Road Sign Test.—The Road Sign Test (RST) is a computerized measure of complex reaction time (Ball et al., 2002). Participants were instructed to respond to various target signs (bicycle, pedestrian, left arrow, and right arrow) using the computer mouse (clicking or moving the mouse left or right). These targets are presented randomly, and in motion among nontarget signs, requiring participants to visually scan the signs and react when one of the target signs appears. There are 24 trials; 12 include 3 signs presented together and 12 include 6 signs, causing increased distraction. The final scores for this test include an average reaction time (seconds) for the three and six sign trials, with lower scores reflective of faster reaction time. The internal consistency between these two trials was high in the current study (Cronbach’s α = 0.92). The test–retest reliability for this measure is modest, at r = .56 (Ball et al., 2002).
The Stroop task (Trenerry, Crosson, DeBoe, & Leber, 1989).—A computerized version of the original Stroop test was used to measure the executive functions of impulse control and inhibition. This test includes three tasks. In the first task (color patches), participants were shown 36 colored blocks on a computer screen and asked to verbally identify the color of each. Similarly, in the second task (color words), participants were asked to read 36 color names (e.g., red, blue) shown on a computer screen. Finally, for the interference task, participants were shown 36 color names printed in a conflicting ink color and asked to identify the color of the ink (e.g., for the word “red” printed in blue ink, the correct response is blue). For all tasks, participants were timed by the tester to determine the number of seconds needed to identify all the stimuli, and for the interference condition, the number of uncorrected errors was recorded. The “Stroop Effect” was calculated as the difference between the interference and color patch task times, plus a correction factor derived by dividing the interference condition time by 36 and multiplying the result by the total number of uncorrected errors. Lower scores indicate better executive functioning. Average test–retest reliability for the Stroop task is quite high (r = .84; Dikmen, Heaton, Grant, & Temkin, 1999).
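A worked sketch of this error-corrected score (the example times are hypothetical):

```python
# Error-corrected Stroop score as described above: the interference-
# minus-color-patch time difference, plus a penalty of
# (interference time / 36 items) per uncorrected error. Lower is better.
def stroop_score(interference_s: float, color_patch_s: float,
                 uncorrected_errors: int) -> float:
    correction = (interference_s / 36) * uncorrected_errors
    return (interference_s - color_patch_s) + correction

# Hypothetical example: 55 s interference, 25 s color patches, and 2
# uncorrected errors -> (55 - 25) + (55 / 36) * 2 = 33.06 (approx.).
```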
Trail Making Test (Reitan, 1958; Stern & Prohaska, 1996).—The Trail Making Test is a widely used neuropsychological measure of visuomotor tracking, attention, perceptual motor speed, and executive functioning. It is a reliable measure with good sensitivity to age-related declines in performance (Lezak, 1995). Specifically, Trails A is a measure of attention and visuomotor tracking, whereas Trails B is a measure of the executive function of mental set flexibility. Performance was measured by the time (seconds) taken to complete each task. Total scores for this study were calculated by adding the Trails A and B times together. Lower scores are indicative of better cognitive functioning. Average test–retest reliability for the Trail Making Test is quite high (r = .84; Dikmen et al., 1999).
Data Analysis
All analyses were conducted using SPSS, version 20. Groups were formed using a dichotomy created from the computer experience questionnaire (1 = computer user; 0 = computer nonuser). Preliminary analyses included examining the data for influential outliers, multicollinearity, and missing data. To examine differences between the computer user and nonuser groups on demographic and health variables, computer attitudes, the three computerized cognitive measures (i.e., UFOV, RST, and Stroop), and the paper-and-pencil cognitive measure (i.e., Trail Making Test), t-test and chi-square analyses were conducted. To examine the differences in cognitive performance between computer users and nonusers, a multivariate analysis of covariance (MANCOVA) was conducted with the three computerized cognitive tests and the paper-and-pencil measure entered as the dependent variables.
Correlations were conducted among the variables in order to examine the relationships between the variables of interest and to determine which demographic and health variables to include as covariates. To examine the third aim of this study, the aforementioned MANCOVA was conducted again with the inclusion of the Computer Attitudes Questionnaire as a covariate in order to determine whether this variable would attenuate the effect of computer experience.
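The analyses reported here were run in SPSS; purely as a hedged illustration, an equivalent MANCOVA could be specified in Python with statsmodels (the data file and column names below are hypothetical, not taken from the study):

```python
# Sketch of an equivalent MANCOVA in Python using statsmodels; the
# study's actual analyses were conducted in SPSS, version 20.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("skill_baseline.csv")  # hypothetical data file

# Four cognitive outcomes on the left-hand side; computer experience
# (0/1) plus the demographic/health covariates on the right. mv_test()
# reports Wilks' lambda (among other statistics) for each term.
model = MANOVA.from_formula(
    "ufov + rst + stroop + trails ~ experience + age + gender + race"
    " + education + cesd + diabetes + hypertension + cataracts",
    data=df,
)
print(model.mv_test())
```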
Results
Inspection of the correlation matrix indicated no substantial multicollinearity among the independent variables (no r > .70). The following variables had missing data: one data point was missing for the Computer Attitudes Questionnaire; two were missing for the RST, Stroop, Trail Making Test, and presence of diabetes; three were missing for the UFOV and presence of hypertension; four were missing for computer experience; five were missing for the CES-D; and six were missing for presence of cataracts. All missing data points were determined to be missing at random. Subsequent analyses employed pairwise deletion, except for the MANCOVA, which employed listwise deletion. Three variables (Stroop, RST, and Trail Making Test) had influential outliers; these values were replaced with a value equivalent to 3 SD above or below the mean of each respective variable.
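A minimal sketch of this outlier treatment (variable and data-frame names hypothetical; the study's analyses were conducted in SPSS):

```python
import pandas as pd

def clip_to_3sd(s: pd.Series) -> pd.Series:
    """Replace values beyond 3 SD of the mean with the boundary value
    (mean +/- 3 SD), as described in the text."""
    mu, sd = s.mean(), s.std()
    return s.clip(lower=mu - 3 * sd, upper=mu + 3 * sd)

# Hypothetical usage on the three affected variables:
# for col in ["stroop", "rst", "trails"]:
#     df[col] = clip_to_3sd(df[col])
```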
Results of chi-square and t-test analyses revealed significant differences between computer users and nonusers in age, proportion of Caucasians, frequency of cataracts, education, MMSE score, computer attitudes, and performance on the UFOV test, RST, Stroop, and Trail Making Test. There were no group differences in proportion of men, CES-D score, or frequency of diabetes or hypertension (Table 1).
Table 1.
| Variable | Computer experience: Yes (N = 433) | Computer experience: No (N = 197) | p value |
|---|---|---|---|
| Age, M (SD) | 72.14 (4.98) | 75.09 (6.02) | <.001 |
| Men, n (%) | 177 (40.89) | 78 (39.59) | .761 |
| Caucasian, n (%) | 405 (93.53) | 169 (85.79) | .002 |
| Education, M (SD) | 14.50 (2.64) | 13.17 (2.64) | <.001 |
| CES-D score, M (SD) | 7.29 (6.77) | 7.56 (6.30) | .664 |
| MMSE score, M (SD) | 28.61 (1.33) | 27.89 (1.62) | <.001 |
| Diabetes, n (%) | 56 (12.93) | 27 (13.78) | .772 |
| Hypertension, n (%) | 211 (48.73) | 100 (51.28) | .554 |
| Cataracts, n (%) | 213 (49.42) | 114 (58.76) | .030 |
| Computer attitudes, M (SD) | 133.95 (13.93) | 122.03 (12.56) | <.001 |
| Useful Field of View test, M (SD) | 804.88 (242.34) | 939.00 (292.94) | <.001 |
| Road Sign Test, M (SD) | 1.77 (0.47) | 2.14 (0.78) | <.001 |
| Stroop, M (SD) | 28.70 (12.10) | 38.67 (18.51) | <.001 |
| Trail Making Test, M (SD) | 143.96 (64.14) | 192.71 (97.47) | <.001 |
Notes. CES-D = Center for Epidemiological Studies Depression Scale; MMSE = Mini-Mental State Examination. Values are M (SD) for continuous variables and n (%) for dichotomous variables; for race, all others were African American.
Correlations revealed that older age, African American race, fewer years of education, presence of diabetes and hypertension, being a computer nonuser, and poorer computer attitudes were related to poorer performance on the UFOV test, RST, Stroop, and Trail Making Test. Additionally, being a woman was related to poorer RST performance, and higher scores on the CES-D were related to poorer performance on the UFOV test, RST, and Trails. Presence of cataracts was associated with poorer performance on the UFOV test and RST only. Correlations between computer experience and all other variables revealed that being younger, being Caucasian, having more years of education, not having cataracts, and having better computer attitudes were associated with being a computer user. CES-D score was related to computer attitudes but not to computer experience. All significant correlations were at p < .05 (Table 2).
Table 2.
| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Age | 1.00 | | | | | | | | | | | | | |
| 2. Genderᵃ | .03 | 1.00 | | | | | | | | | | | | |
| 3. Raceᵃ | −.13** | −.15** | 1.00 | | | | | | | | | | | |
| 4. Education | .04 | .15** | −.13** | 1.00 | | | | | | | | | | |
| 5. CES-D | −.06 | −.14** | .15** | −.13** | 1.00 | | | | | | | | | |
| 6. Diabetesᵃ | −.03 | .01 | .12** | .02 | .08* | 1.00 | | | | | | | | |
| 7. Hypertensionᵃ | .08* | −.08* | .11** | −.07 | .08 | .15** | 1.00 | | | | | | | |
| 8. Cataractsᵃ | .34** | −.08 | −.08 | −.02 | .05 | .03 | .09* | 1.00 | | | | | | |
| 9. Experienceᵃ | −.23** | .01 | −.13** | .23** | −.03 | −.01 | −.02 | −.09* | 1.00 | | | | | |
| 10. Attitudes | −.14** | .11** | −.02 | .24** | −.19** | −.09* | −.10* | −.12** | .38** | 1.00 | | | | |
| 11. UFOV | .41** | −.02 | .17** | −.14** | .14** | .11** | .09* | .17** | −.22** | −.21** | 1.00 | | | |
| 12. RST | .28** | −.18** | .25** | −.20** | .19** | .14** | .16** | .10* | −.28** | −.20** | .57** | 1.00 | | |
| 13. Stroop | .22** | .07 | .24** | −.24** | .08 | .10* | .11* | .04 | −.26** | −.18** | .45** | .42** | 1.00 | |
| 14. Trails | .22** | −.02 | .23** | −.21** | .13** | .12** | .17** | .08 | −.28** | −.19** | .52** | .48** | .48** | 1.00 |
Note. CES-D = Center for Epidemiological Studies Depression Scale; Experience = Computer Experience; Attitudes = Computer Attitudes; UFOV = Useful Field of View Test; RST = Road Sign Test; Trails = Trail Making Test.
ᵃNonparametric correlations.
*p < .05 **p < .01.
Based on the correlations, the MANCOVA included the following covariates: age, gender, race, education, CES-D, diabetes, hypertension, and cataracts. Computer experience was the independent variable of interest, and all four cognitive measures were entered as dependent variables. As these four measures were moderately correlated (Table 2), the requirements for MANCOVA were satisfied. Although Box’s M test indicated significantly different variance–covariance matrices between groups (p < .001), this test is highly sensitive with large samples; furthermore, results remained significant when the MANCOVA was conducted with α set at 0.01 and 0.001. The one-way MANCOVA revealed a significant multivariate main effect of computer experience, Wilks’ λ = 0.946, F(4, 591) = 8.499, p < .001, partial η² = 0.054. Significant univariate main effects of computer experience, adjusting for covariates, were found for the UFOV test, the RST, Stroop, and the Trail Making Test (Table 3). For the UFOV test and RST, significant covariates included age, race, education, CES-D, and diabetes. For Stroop, significant covariates included age, gender, race, and education. Finally, for the Trail Making Test, significant covariates included age, race, education, CES-D, diabetes, and hypertension. Table 3 reports F, p, and partial η² values for each predictor, and Table 4 reports the covariate-adjusted means for each cognitive measure.
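For readers less familiar with this effect size, partial η² reflects the proportion of variance in an outcome uniquely attributable to a given effect; for a two-group multivariate effect such as the one above, it reduces to 1 − Wilks’ λ:

$$
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}, \qquad
\eta_p^2\,(\text{multivariate, two groups}) = 1 - \lambda = 1 - 0.946 = 0.054.
$$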
Table 3.
| Dependent variable | Source of variance | F | p value | Partial η² |
|---|---|---|---|---|
| UFOV | Age** | 116.52 | <.001 | 0.164 |
| | Gender | 1.74 | .188 | 0.003 |
| | Race** | 45.58 | <.001 | 0.071 |
| | Education** | 8.97 | .003 | 0.015 |
| | Depression** | 11.29 | .001 | 0.019 |
| | Diabetes* | 6.37 | .012 | 0.011 |
| | Hypertension | 0.16 | .690 | 0.000 |
| | Cataracts | 2.13 | .145 | 0.004 |
| | Computer experience* | 6.27 | .013 | 0.010 |
| RST | Age** | 61.26 | <.001 | 0.093 |
| | Gender | 2.19 | .140 | 0.004 |
| | Race** | 45.84 | <.001 | 0.072 |
| | Education** | 13.97 | <.001 | 0.023 |
| | Depression** | 14.34 | <.001 | 0.024 |
| | Diabetes** | 9.31 | .002 | 0.015 |
| | Hypertension | 3.12 | .075 | 0.005 |
| | Cataracts | 3.18 | .075 | 0.005 |
| | Computer experience** | 18.07 | <.001 | 0.030 |
| Stroop | Age** | 30.05 | <.001 | 0.048 |
| | Gender** | 12.42 | <.001 | 0.020 |
| | Race** | 47.05 | <.001 | 0.073 |
| | Education** | 27.67 | <.001 | 0.045 |
| | Depression | 1.64 | .200 | 0.003 |
| | Diabetes | 3.35 | .068 | 0.006 |
| | Hypertension | 0.83 | .364 | 0.001 |
| | Cataracts | 0.24 | .622 | 0.000 |
| | Computer experience** | 17.46 | <.001 | 0.029 |
| Trail Making Test | Age** | 24.65 | <.001 | 0.040 |
| | Gender | 2.02 | .156 | 0.003 |
| | Race** | 55.40 | <.001 | 0.085 |
| | Education** | 10.90 | .001 | 0.018 |
| | Depression* | 5.69 | .017 | 0.009 |
| | Diabetes* | 4.52 | .034 | 0.008 |
| | Hypertension* | 6.53 | .011 | 0.011 |
| | Cataracts | 0.26 | .610 | 0.000 |
| | Computer experience** | 19.03 | <.001 | 0.031 |
Notes. Depression = CES-D score; UFOV = Useful Field of View Test; RST = Road Sign Test. Covariates included age, gender, race, education, depression, diabetes, hypertension, and cataracts. Independent variable = computer experience (yes/no).
*p < .05; **p < .01.
Table 4.
| Measure | Computer experience: Yes (N = 419), M (SE) | Computer experience: No (N = 185), M (SE) |
|---|---|---|
| Useful Field of View Test* | 826.62 (11.04) | 878.92 (17.06) |
| Road Sign Test** | 1.81 (0.03) | 2.02 (0.04) |
| Stroop** | 29.89 (0.66) | 35.07 (1.01) |
| Trail Making Test** | 148.73 (3.41) | 176.87 (5.26) |
Notes. M = Mean; SE = Standard Error; covariates included age, gender, race, education, depression, diabetes, hypertension, and cataracts.
*p < .05; **p < .01.
The final aim was to investigate whether computer attitudes would attenuate, or account for, any relationship between computer experience and the cognitive measures after accounting for other significant covariates. When controlling for computer attitudes, the only change in the results was that the effect of computer experience on the UFOV test was attenuated and no longer significant. Additional analyses were performed to examine whether there was an interaction between age and computer experience on cognitive performance and to determine whether excluding participants with low MMSE scores would change the MANCOVA results. There was no age by experience interaction, and when those participants with an MMSE score of 25 or less were excluded (n = 35, 5.52%), the initial results remained, indicating that suboptimal MMSE performance did not influence the findings.
Discussion
Several findings from this study concur with the literature. Our results confirm that computer users were, on average, younger, more likely to be Caucasian, more educated, and had more positive computer attitudes and better cognitive performance. Furthermore, as expected, being older, being African American, having fewer years of education, and having diabetes and hypertension were related to poorer cognitive performance, consistent with the literature suggesting that age (Birren & Schaie, 2001), race (Zsembik & Peek, 2001), education (Inouye, Albert, Mohs, Sun, & Berkman, 1993), and medical comorbidities (Kilander, Nyman, Boberg, Hansson, & Lithell, 1998; Zelinski, Crimmins, Reynolds, & Seeman, 1998) are related to cognitive functioning. The finding that female gender was related to poorer RST performance but had no relation to the other cognitive tests is in line with the inconsistent gender findings in the cognitive aging literature (Mekarski, Cutmore, & Suboski, 1996; Read et al., 2006; Stewart, Zelinski, & Wallace, 2000). The finding that more depressive symptoms were related to poorer performance on the UFOV test, RST, and Trail Making Test but not the Stroop task was unexpected; however, studies have suggested that the construct of apathy specifically, rather than depressive symptoms in general, may be more influential on executive tasks such as Stroop that rely on frontal lobe functioning (Feil, Razani, Boone, & Lesser, 2003). It is also important to note that this sample had relatively few individuals with severe reported depressive symptomatology (n = 85, 13.51% with CES-D ≥ 16), which could have affected these findings. The finding that the presence of cataracts was associated only with poorer performance on the UFOV test and RST may reflect the visually demanding nature of these tests, whose stimuli are constantly changing on a computer screen. The consistent relationship of education to computer use and to the cognitive assessments again speaks to the strong potential confounding effect of socioeconomic status on computer use and cognitive performance; unfortunately, the current study did not have data on socioeconomic status or income. Finally, the finding that depressive symptomatology was related to computer attitudes but not to computer experience may imply that poor computer attitudes are at least partially attributable to a generally poor mood rather than to attitudes toward computers specifically. In other words, one may score low on the Computer Attitudes Questionnaire partly because of poor attitudes in general stemming from depressive symptoms.
The MANCOVA results showed that, after controlling for potential demographic, mental, and physical health confounds, computer experience still had a significant effect on cognitive test performance for both the computerized measures and the paper-and-pencil measure. The similar effect sizes across test formats indicate that, in this study, computer experience did not have a stronger effect on computerized cognitive measures. This implies that performance on computerized cognitive measures is not differentially influenced by whether one has prior computer experience. The absence of a significant age by experience interaction suggests that the effect of computer experience on cognitive test performance is not a function of age. The results of the third aim analysis, controlling for computer attitudes, suggest that computer attitudes may be more influential on UFOV test performance than computer experience alone. This may imply that for this measure, attitudinal or motivational factors are more related to performance, such that someone with poor attitudes toward computers may be less motivated to perform the test correctly.
Our overall results are consistent with prior findings suggesting that individuals with more computer use and experience tend to perform better on cognitive measures, regardless of format. The question remains whether older adults (or adults in general) who have higher cognitive abilities are more likely to use the computer or whether using the computer helps maintain or improve cognitive abilities. An important consideration with regard to the second explanation is the assumption that simply using a computer provides mental stimulation. Whether computer use is mentally stimulating likely comes down to the types of activities being performed on the computer: are they challenging and relatively novel, or are they more passive in terms of cognitive difficulty? Clearly, this is a large area of research that remains to be investigated and is beyond the scope of the current study. The most salient finding of this study is the similarity of the effect of computer experience on paper-and-pencil and computerized cognitive test performance, which to our knowledge has not previously been examined in the literature. This finding is promising, as it implies that computerization of cognitive tests does not negatively affect performance among older adults. In other words, computerized cognitive tests are likely assessing cognition accurately, rather than computer familiarity or experience.
Although researchers and clinicians should be aware of this potential confound when administering and interpreting these computerized measures in older adults, our results do not suggest that computerized measures are affected by computer experience. Nonetheless, efforts to implement and promote computer/technology use among older adults are needed, as computers are playing an increasing role in day-to-day activities within society and social interactions (Sum, Mathews, Hughes, & Campbell, 2008).
Strengths and Limitations
One limitation is that the data are from participants who were tested nearly 10 years ago; thus, results may not generalize to the current population of older adults due to cohort effects. Furthermore, this is an overwhelmingly white sample from a relatively small region of the country; thus, results may not generalize to other demographic groups. Also, few older adults in this sample were older than 85 years, making it difficult to draw inferences about the effect of computer experience on the oldest-old. Additionally, computer experience was measured by a self-report questionnaire, which may not be as accurate as performance-based assessments of computer proficiency that gather more extensive information about the types of activities performed on the computer for leisure and occupational purposes. The use of cross-sectional data also precludes inferences about causation. Finally, although the questions on the Computer Attitudes Questionnaire are generally broad (e.g., “I know that if I worked hard to learn about computers, I could do well”; “I don’t care to know more about computers”), it is an older measure and would benefit from updated questions (such as attitudes toward specific types of computer activities). Strengths of this study include its large sample size and the use of psychometrically sound measures. Perhaps the most significant strength is that this research question had not yet been examined; thus, the results of this study are novel and will hopefully be readdressed and further examined in future research.
Directions for Future Research
Longitudinal and prospective studies are greatly needed to examine the causal mechanisms in the relationship between cognitive performance and computer experience. Also, as previously mentioned, a more sensitive and objective measure of computer experience should perhaps be used, as older adults have been reported to underestimate their actual computer abilities (Marquié, Jourdan-Boddaert, & Huet, 2002). Future research should also examine the influence of socioeconomic status and culture, as well as the predictive utility of computer experience for other computerized cognitive tests. Furthermore, in order for computerized tests to be used as assessment tools, new age- and possibly education-adjusted norms should be developed specifically for these measures. Given that computer technology is becoming more common in the cognitive assessment of older adults and in day-to-day activities, older adults’ distinct limitations and capabilities, along with their preferences, attitudes, and needs regarding computer-related tasks, should be considered when designing computer interfaces for older adults (Rogers & Fisk, 2010). Additionally, as recent research has shown that home-based personal computers and other intelligent systems are a feasible method of assessing cognitive change among older adults without requiring them to leave home, future research should examine this growing area of assessment (Kaye et al., 2011).
Funding
This work was supported by the National Institutes of Health (R01AG005739). Drs Ross, Vance, and Ball are supported by the Edward R. Roybal Center for Translational Research on Aging and Mobility, NIA 2 P30 AG022838.
Conflict of Interest
Karlene Ball owns stock in the Visual Awareness Research Group and Posit Science, Inc., the companies that market the Useful Field of View Test and speed of processing training software now called Insight, and she serves as a member of the Posit Science Scientific Advisory Board.
References
- Ball K., Berch D. B., Helmers K. F., Jobe J. B., Leveck M. D., Marsiske M., Willis S. L. (2002). Effects of cognitive training interventions with older adults: A randomized controlled trial. The Journal of the American Medical Association, 288, 2271–2281. doi:10.1001/jama.288.18.2271
- Birren J. E., Schaie K. W. (Eds.). (2001). Handbook of the psychology of aging (5th ed.). San Diego, CA: Academic Press.
- Clark C. H., Mahoney J. S., Clark D. J., Eriksen L. R. (2002). Screening for depression in a hepatitis C population: The reliability and validity of the Center for Epidemiologic Studies Depression Scale. Journal of Advanced Nursing, 40(3), 361–369. doi:10.1046/j.1365-2648.2002.02378.x
- Collerton J., Collerton D., Yasumichi A., Barrass K., Eccles M., Jagger C., Kirkwood T. (2007). A comparison of computerized and pencil-and-paper tasks in assessing cognitive function in community-dwelling older people in the Newcastle 85+ study. Journal of the American Geriatrics Society, 55, 1630–1635. doi:10.1111/j.1532-5415.2007.01379.x
- Cutler S. J., Hendricks J., Guyer A. (2003). Age differences in home computer availability and use. The Journals of Gerontology, Series B: Social Sciences, 58(5), 271–280. doi:10.1093/geronb/58.5.S271
- Czaja S. J., Sharit J. (1998). Age differences in attitudes toward computers. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 53(5), 329–340. doi:10.1093/geronb/53B.5.P329
- Dikmen S. S., Heaton R. K., Grant I., Temkin N. R. (1999). Test–retest reliability and practice effects of the Expanded Halstead–Reitan neuropsychological test battery. Journal of the International Neuropsychological Society, 5, 346–356. doi:10.1017/S1355617799544056
- Edwards J. D., Vance D. E., Wadley V. G., Cissel G. M., Roenker D. L., Ball K. K. (2005). Reliability and validity of Useful Field of View Test scores as administered by a personal computer. Journal of Clinical and Experimental Neuropsychology, 27, 529–543. doi:10.1080/13803390490515432
- Edwards J. D., Wadley V. G., Vance D. E., Wood K., Roenker D. L., Ball K. K. (2005). The impact of speed of processing training on cognitive and everyday performance. Aging & Mental Health, 9(3), 262–271. doi:10.1080/13607860412331336788
- Feil D., Razani J., Boone K., Lesser I. (2003). Apathy and cognitive performance in older adults with depression. International Journal of Geriatric Psychiatry, 18(6), 479–485. doi:10.1002/gps.869
- Feldstein S. N., Keller F. R., Portman R. E., Durham R. L., Klebe K. J., Davis H. P. (1999). A comparison of computerized and standard versions of the Wisconsin Card Sorting Test. The Clinical Neuropsychologist, 13(3), 303–313. doi:10.1076/clin.13.3.303.1744
- Folstein M. F., Folstein S. E., McHugh P. R. (1975). Mini-mental state: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12(3), 189–198. doi:10.1016/0022-3956(75)90026-6
- Fortuny L. A., Heaton R. K. (1996). Standard versus computerized administration of the Wisconsin Card Sorting Test. The Clinical Neuropsychologist, 10(4), 419–424. doi:10.1080/13854049608406702
- Hart T., Chaparro B., Halcomb C. (2008). Evaluating websites for older adults: Adherence to senior-friendly guidelines and end-user performance. Behaviour & Information Technology, 27(3), 191–199. doi:10.1080/01449290600802031
- Hinkin C. H., Castellon S. A., Hardy D. J., Granholm E., Siegle G. (1999). Computerized and traditional Stroop Task dysfunction in HIV-1 infection. Neuropsychology, 13(2), 306–316. doi:10.1037/0894-4105.13.2.306
- Hultsch D. F., Hertzog C., Small B. J., Dixon R. A. (1999). Use it or lose it: Engaged lifestyle as a buffer of cognitive decline in aging? Psychology & Aging, 14(2), 245–263. doi:10.1037/0882-7974.14.2.245
- Inouye S. K., Albert M. S., Mohs R., Sun K., Berkman L. F. (1993). Cognitive performance in a high-functioning community-dwelling elderly population. The Journals of Gerontology: Medical Sciences, 48(4), M146–M151. doi:10.1093/geronj/48.4.M146
- Iverson G. L., Brooks B. L., Ashton V. L., Johnson L. G., Gualtieri C. T. (2009). Does familiarity with computers affect computerized neuropsychological test performance? Journal of Clinical and Experimental Neuropsychology, 31(5), 594–604. doi:10.1080/13803390802372125
- Jay G. M., Willis S. L. (1992). Influence of direct computer experience on older adults’ attitudes toward computers. Journal of Gerontology: Psychological Sciences, 47(4), P250–P257. doi:10.1093/geronj/47.4.P250
- Katz M. H., Douglas J. M., Jr., Bolan G. A., Marx R., Sweat M., Park M. S., Buchbinder S. P. (1996). Depression and use of mental health services among HIV-infected men. AIDS Care, 8(4), 433–442. doi:10.1080/09540129650125623
- Kaye J. A., Maxwell S. A., Mattek N., Hayes T. L., Dodge H., Pavel M., Zitzelberger T. A. (2011). Intelligent systems for assessing aging changes: Home-based, unobtrusive, and continuous assessment of aging. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 66(1), 180–190. doi:10.1093/geronb/gbq095
- Kilander L., Nyman H., Boberg M., Hansson L., Lithell H. (1998). Hypertension is related to cognitive impairment: A 20-year follow-up of 999 men. Hypertension, 31, 780–786. doi:10.1161/01.HYP.31.3.780
- Lezak M. D. (1995). Neuropsychological assessment (3rd ed.). New York: Oxford University Press.
- Marquié J. C., Jourdan-Boddaert L., Huet N. (2002). Do older adults underestimate their actual computer knowledge? Behaviour & Information Technology, 21(4), 273–280.
- McDonald A. S. (2002). The impact of individual differences on the equivalence of computer-based and paper-and-pencil educational assessments. Computers & Education, 39, 299–312. doi:10.1016/S0360-1315(02)00032-5
- Mekarski J. E., Cutmore T. R., Suboski W. (1996). Gender differences during the processing of the Stroop task. Perceptual & Motor Skills, 83(2), 563–568.
- Newburger E. C. (2001). Home computers and Internet use in the United States: August 2000 (Current Population Reports, Series P23-207). Washington, DC: US Census Bureau.
- Radloff L. S. (1977). The CES-D scale: A self-report depression scale for research in the general population. Applied Psychological Measurement, 1, 385–401. doi:10.1177/014662167700100306
- Read S., Pedersen N. L., Gatz M., Berg S., Vuoksimaa E., Malmberg B., McClearn G. E. (2006). Sex differences after all those years? Heritability of cognitive abilities in old age. The Journals of Gerontology, Series B: Psychological Sciences, 61(3), 137–143.
- Reitan R. M. (1958). Trail Making Test: Manual for administration, scoring, and interpretation. Indianapolis: Indiana University Medical Center.
- Rogers W. A., Fisk A. D. (2010). Toward a psychological science of advanced technology design for older adults. The Journals of Gerontology, Series B: Psychological Sciences, 65(6), 645–653. doi:10.1093/geronb/gbq065
- Salthouse T. A. (1991). Theoretical perspectives on cognitive aging. Hillsdale, NJ: Erlbaum.
- Salthouse T. A., Berish D. E., Miles J. D. (2002). The role of cognitive stimulation on the relations between age and cognitive functioning. Psychology & Aging, 17(4), 548–557. doi:10.1037/0882-7974.17.4.548
- Scarmeas N., Stern Y. (2003). Cognitive reserve and lifestyle. Journal of Clinical & Experimental Neuropsychology, 25(5), 625–633. doi:10.1076/jcen.25.5.625.14576
- Slegers K., van Boxtel M. P. J., Jolles J. (2009). Effects of computer training and internet usage on cognitive abilities in older adults: A randomized controlled study. Aging, Clinical & Experimental Research, 21, 43–54.
- Steinmetz J. P., Brunner M., Loarer E., Houssemand C. (2010). Incomplete psychometric equivalence of scores obtained on the manual and the computer version of the Wisconsin Card Sorting Test? Psychological Assessment, 22(1), 199–202. doi:10.1037/a0017661
- Stern R. A., Prohaska M. L. (1996). Neuropsychological evaluation of executive functioning. Review of Psychiatry, 15, 243–266.
- Stern Y. (2002). What is cognitive reserve? Theory and research application of the reserve concept. Journal of the International Neuropsychological Society, 8, 448–460. doi:10.1017/S1355617701020240
- Stewart S. T., Zelinski E. M., Wallace R. B. (2000). Age, medical conditions, and gender as interactive predictors of cognitive performance: The effects of selective survival. The Journals of Gerontology: Psychological Sciences, 55(6), 381–383. doi:10.1093/geronb/55.6.P381
- Sum S., Mathews R. M., Hughes I., Campbell A. (2008). Internet use and loneliness in older adults. CyberPsychology & Behavior, 11(2), 208–211. doi:10.1089/cpb.2007.0010
- Tien A. Y., Spevack T. V., Jones D. W., Pearlson G. D., Schlaepfer T. E., Strauss M. E. (1996). Computerized Wisconsin Card Sorting Test: Comparison with manual administration. Kaohsiung Journal of Medical Sciences, 12, 479–485.
- Trenerry M. R., Crosson B., DeBoe J., Leber W. R. (1989). Stroop Neuropsychological Screening Test. Lutz, FL: Psychological Assessment Resources.
- Tun P. A., Lachman M. E. (2010). The association between computer use and cognition across adulthood: Use it so you won’t lose it? Psychology & Aging, 25(3), 560–568. doi:10.1037/a0019543
- Verghese J., Lipton R. B., Katz M. J., Hall C. B., Derby C. A., Kuslansky G., Buschke H. (2003). Leisure activities and the risk of dementia in the elderly. New England Journal of Medicine, 348, 2508–2516. doi:10.1056/NEJMoa022252
- Wagner G. P., Trentini C. M. (2009). Assessing executive functions in older adults: A comparison between the manual and the computer-based versions of the Wisconsin Card Sorting Test. Psychology & Neuroscience, 2(2), 195–198. doi:10.3922/j.psns.2009.2.011
- Wild K., Howieson D., Webbe F., Seelye A., Kaye J. (2008). Status of computerized cognitive testing in aging: A systematic review. Alzheimer’s & Dementia, 4, 428–437. doi:10.1016/j.jalz.2008.07.003
- Zelinski E. M., Crimmins E., Reynolds S., Seeman T. (1998). Do medical conditions affect cognition in older adults? Health Psychology, 17(6), 504–512. doi:10.1037/0278-6133.17.6.504
- Zsembik B. A., Peek M. K. (2001). Race differences in cognitive functioning among older adults. The Journals of Gerontology: Psychological Sciences, 56(5), 266–274. doi:10.1093/geronb/56.5.S266