Abstract
Purpose of the Study:
Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users.
Design and Methods:
To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ.
Results:
The CPQ demonstrated excellent reliability (Cronbach’s α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions.
Implications:
The CPQ and CPQ-12 are useful tools for gauging computer proficiency for training and research purposes, even among older adults with low computer proficiency.
Key Words: Technology, Education and training, Independence, Survey design
Computers and the Internet have the potential to help seniors (65+) maintain independence and enrich their daily lives. These technologies can connect them with distant family members, provide information about community and national resources, allow them to shop and conduct banking transactions from home, and support prospective memory as cognitive abilities decline (Charness & Boot, 2009). Technological solutions to prolong independence are of growing relevance as many nations face population aging, a trend in which the proportion of the population composed of older adults is rising dramatically (Lutz, Sanderson, & Scherbov, 2008). However, despite the potential benefits associated with computer and Internet use, seniors often lag behind in the adoption of technology (Czaja et al., 2006b). For example, recent data from the Pew Internet and American Life Project indicate that 47% of surveyed individuals over the age of 64 still did not use the Internet (compared with 3% of 18–29 year olds; Zickuhr & Madden, 2012), with this lack of uptake being more pronounced among those who are of lower socioeconomic status, less educated, or disabled.
Attitudinal and cognitive factors have been identified as important barriers to technology adoption among seniors. To the extent that older adults perceive technology as not being beneficial and as difficult to use, they do not adopt it (Melenhorst & Bouwhuis, 2004; Melenhorst, Rogers, & Bouwhuis, 2006; Mitzner et al., 2010). Poor technology design that does not take into account normative age-related changes in ability likely contributes to these perceptions (Fisk, Rogers, Charness, Czaja, & Sharit, 2009). Furthermore, Czaja and colleagues (2006b) determined that cognitive abilities (specifically fluid and crystallized intelligence), as well as computer anxiety and self-efficacy, were strong predictors of technology use. Each of these factors represents a barrier to be overcome or bypassed for seniors to fully reap the benefits of new technology.
Lack of adequate training is an additional barrier with respect to technology adoption, with seniors indicating a greater willingness to use technology when training is provided (Rogers, Cabrera, Walker, Gilbert, & Fisk, 1996), and many seniors wishing they had more technology training (Mitzner et al., 2008). Facilitating efficient and effective training is an important goal to ensure that seniors can benefit from technology, and understanding initial levels of technology proficiency is an important prerequisite to achieving this goal. Compared with one-on-one technology training, group or classroom training is often ideal in terms of efficiency (and is the format offered by many senior centers, retirement communities, and assisted living facilities). However, in a series of focus group studies conducted by Mitzner and colleagues (2008), seniors indicated a strong preference for one-on-one or solitary training over group training, partly out of fear that individuals with lower proficiency would require the instructor to cover already known material (resulting in boredom and wasted time). Conversely, a class containing individuals with a range of computer proficiencies might progress too quickly for individuals with extremely low proficiency. In fact, recent data from a study evaluating a community-based computer and Internet class designed for older adults indicated that an important area of needed course refinement was a method for matching instruction to the initial skill level of course participants (Czaja, Lee, Branham, & Remis, 2012).
Being able to assess initial levels of proficiency quickly and accurately allows classes to be organized with students of similar proficiencies, facilitates interactions among the trainees, and enhances trainee satisfaction with the training experience. Even in the case of individual training, knowing initial proficiency is important to ensure that a solid understanding of basic skills is present before more advanced skills are taught and to ensure that time is not wasted training what is already known. All of these issues are ultimately related to degree of learning success (Czaja & Sharit, 2012).
To achieve these goals with respect to computer and Internet training, we created the Computer Proficiency Questionnaire (CPQ) to assess the computer proficiency of seniors with a range of abilities, from noncomputer users to frequent computer and Internet users. The need for a new scale arose from the fact that existing surveys of computer proficiency did not meet our research and training needs, specifically with respect to an ongoing study focused on training minimally proficient seniors to use computers. For example, questionnaires initially considered were deemed inappropriate because they were developed with a focus on existing computer users, included relatively technical proficiencies without direct relevance for functional independence (e.g., programming, database management, and computer terminology), or were not specifically developed and validated with an older adult sample (Bradlow, Hoch, & Hutchinson, 2002; Kay, 1993). Others were developed with data from older adults who had substantial (multiple years on average) computer experience (Arning & Ziefle, 2008). Still other instruments designed for older adults focused largely on declarative knowledge rather than the ability to perform computer tasks (Sengpiel & Dittberner, 2008), and others focused on frequency of use rather than proficiency (Czaja et al., 2006a).
To address these limitations, we developed the CPQ to assess proficiency across six different subscales: (a) Computer basics (6 questions), (b) printing (5 questions), (c) communication (9 questions), (d) Internet (7 questions), (e) scheduling software (3 questions), and (f) multimedia use (3 questions). The CPQ was developed in the context of a randomized clinical trial designed to assess the impact of computer and Internet access and training on the well-being of seniors with minimal computer experience. The computer system deployed in this trial was designed to support domains of computer activities predicted to improve quality of life and social support, and the questions of the CPQ were developed to assess proficiency within these same domains. More specifically, the CPQ was designed to assess proficiency with respect to potential computer-based activities to support social engagement/support (communication), information gathering (Internet), prospective memory (printing and scheduling software), and cognitive stimulation (multimedia use). In addition to these domains, the CPQ assesses basic ability to interact with a computer using various input devices (computer basics).
We aimed to create an instrument that would provide a detailed measure of proficiency, identify specific areas of needed training, and be easier to administer and substantially less time consuming than having older adults demonstrate proficiency in each of these areas. To this end, the 33 questions of the full CPQ are presented in a matrix format with identical response options/anchors for each question, allowing users to complete the questionnaire quickly. Here we present evidence that the CPQ is a reliable and valid measure of proficiency. However, 33 questions may still take too long to administer for some training and research purposes, so we also present a short version of the CPQ (CPQ-12) with properties similar to those of the full CPQ. The CPQ and CPQ-12 can be used as tools to measure proficiency for training and research purposes. They might have other applications as well, such as helping individuals decide what kind of software to purchase for a parent or selecting where to begin when training someone on a complex piece of software.
Method
Participant Recruitment
After initial pilot testing with older adults to ensure the CPQ was easily understood (age 65+, N = 15), the CPQ was administered to a large sample of older adults (65+, N = 352). Participants were recruited from the Atlanta, Miami, and Tallahassee regions of the Center for Research and Education on Aging and Technology Enhancement (CREATE). This sample included a subset of participants specifically selected for their minimal computer experience (low experience users, N = 276), who were recruited as part of the PRISM clinical trial examining the effect of a computer intervention on well-being (ClinicalTrials.gov identifier: NCT01497613); their data represent baseline measurements taken before the intervention was administered. Low experience users were recruited through advertisements (newspaper, radio, and television), direct mailings, and connections with community agencies serving older adults. All recruitment materials specified minimal computer experience (no computer or Internet experience within the past 3 months) as a prerequisite for participation. Participants were disqualified if they reported having a working computer at home or if they reported using e-mail or the Internet frequently for the past 3 months. Other participants had multiple years of computer experience (high experience users, N = 76). These participants were primarily recruited by identifying individuals in our participant databases who had provided an e-mail address and agreed to be contacted for future studies. An e-mail was sent to these participants with a link to an identically formatted online version of the paper CPQ. We expected only older adults with high computer experience to (a) have an e-mail account and (b) complete an online survey. These two samples (low and high experience users) were recruited to generate a range of responses and to allow the validity of the CPQ to be tested.
Participant Computer Experience
These recruitment methods generated two samples vastly different in terms of computer experience. Of the low experience group, 47% reported not having any computer experience at all, compared with 0% of the high experience group. All members of the high experience group reported using computers for 5 or more years. In the low experience group, there was a subgroup of participants who reported some computer experience spanning 5 years or more (19%); however, their experience was not recent, with 50% reporting not having used a computer within the past year. Their experience was also limited in scope, with 44% of participants within this subgroup reporting no Internet experience and 58% reporting never having used e-mail or having used it only once in the past year. This is in stark contrast to the high experience group, in which 97% of participants reported using a computer in the past week. All high experience users used the Internet, and 93% reported using e-mail frequently in the past year, with the rest reporting occasional e-mail use.
Participant Demographics
Demographics of each of these groups are listed in Table 1. Not surprisingly, given the statistics on aging and computer/Internet use (Zickuhr & Madden, 2012), the low experience group was on average 4 years older than the high experience group (F(1,350) = 20.21, p < .001). The low experience group also included proportionally more women (χ²(1) = 24.46, p < .001) and was more racially and ethnically diverse. Potential implications of these differences are discussed in the Results section.
Table 1.
Demographic Characteristics of Low and High Computer Experience Groups
| | Low experience | High experience |
|---|---|---|
| N | 276 | 76 |
| Mean age | 76 (7) | 72 (5) |
| Proportion female | 0.79 | 0.50 |
| Proportion Hispanic/Latino | 0.09 | 0.03 |
| Proportion non-Caucasian | 0.39 | 0.11 |
Note: SD within parentheses.
CPQ Administration
The CPQ was designed to include the six previously mentioned subscales, with each subscale consisting of 3–9 questions (Supplementary Material). The CPQ asks seniors to rate their ability to perform a number of computer-related tasks (e.g., I can: Find information about local community resources on the Internet; I can: Use a computer to play games) on a 5-point scale (1 = Never tried, 2 = Not at all, 3 = Not very easily, 4 = Somewhat easily, 5 = Very easily). Low experience users completed a paper version of the CPQ in their homes, whereas high experience users completed a similarly formatted online version of the survey distributed by e-mail. All protocols were IRB approved, and participants provided informed consent before participating.
Results
Reliability of the CPQ
Cronbach’s α was used to assess the reliability of the CPQ and the degree to which questions tapped a single underlying construct. Overall, the scale demonstrated excellent reliability (Cronbach’s α = .98), with subscale reliabilities ranging from .86 to .97 (computer basics: .91; printing: .94; communication: .95; Internet: .97; scheduling: .96; multimedia: .86).
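To make the computation concrete, the following sketch shows one standard way to compute Cronbach's α from an item-response matrix in Python. The data are synthetic stand-ins (not the study's responses), and NumPy is assumed.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic example: 100 respondents, 6 items on the CPQ's 1-5 scale,
# driven by a shared latent proficiency so the items are internally consistent.
rng = np.random.default_rng(0)
latent = rng.normal(0, 1, size=(100, 1))
items = np.clip(np.round(3 + latent + rng.normal(0, 0.5, size=(100, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```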
Scoring of the CPQ
We scored the CPQ by computing the average response for the questions in each subscale and then summing these averages to produce a total CPQ score. This method allowed us to compute subscales that represent domains of computer activity predicted to be important for supporting independence (social engagement, information seeking, prospective memory support, and cognitive stimulation). In addition, it enabled us to weight each of these domains equally, because each subscale contains a different number of questions. Table 2 lists average scores for each subscale and the scale as a whole for low experience and high experience older adults. These scores provide benchmarks for gauging an individual's level of proficiency from his or her CPQ score.
Table 2.
Mean Subscale and Total CPQ Scores for Low and High Experience Computer User Groups, Both for the Full CPQ and the 12-Item Short-Form Scale (CPQ-12)
| | Low experience (N = 276) | High experience (N = 76) |
|---|---|---|
| Full CPQ | | |
| Total | 10.00 (4.29) | 24.35 (4.24) |
| Computer basics | 2.53 (1.18) | 4.59 (0.43) |
| Printing | 1.88 (1.16) | 4.52 (0.71) |
| Communication | 1.50 (0.79) | 4.02 (0.68) |
| Internet | 1.42 (0.78) | 4.51 (0.67) |
| Scheduling | 1.21 (0.72) | 3.04 (1.74) |
| Multimedia | 1.46 (0.83) | 3.69 (1.35) |
| CPQ-12 | | |
| Total | 10.79 (4.97) | 25.67 (3.84) |
| Computer basics | 3.13 (1.51) | 4.91 (0.22) |
| Printing | 1.70 (1.19) | 4.39 (1.01) |
| Communication | 1.92 (1.37) | 4.97 (0.24) |
| Internet | 1.50 (1.06) | 4.56 (0.88) |
| Scheduling | 1.23 (0.75) | 3.18 (1.82) |
| Multimedia | 1.31 (0.79) | 3.66 (1.55) |
Note: CPQ = Computer Proficiency Questionnaire; SD within parentheses.
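As a concrete illustration of the scoring rule described above, the sketch below averages responses within each subscale and sums the six averages into a total. The item-to-subscale index mapping is hypothetical (chosen only to mirror the subscale sizes); by construction, totals range from 6 (all "Never tried") to 30 (all "Very easily"), consistent with the group means in Table 2.

```python
import numpy as np

# Hypothetical item-to-subscale index mapping; only the subscale sizes
# (6, 5, 9, 7, 3, 3) follow the paper -- the ordering of items is assumed.
SUBSCALES = {
    "basics": range(0, 6),
    "printing": range(6, 11),
    "communication": range(11, 20),
    "internet": range(20, 27),
    "scheduling": range(27, 30),
    "multimedia": range(30, 33),
}

def score_cpq(responses: np.ndarray) -> dict:
    """Average the 1-5 responses within each subscale, then sum the six
    subscale averages into a total score (possible range: 6-30)."""
    scores = {name: float(responses[list(idx)].mean())
              for name, idx in SUBSCALES.items()}
    scores["total"] = sum(scores.values())
    return scores

# Example: a respondent who answered "Never tried" (1) to every question
print(score_cpq(np.ones(33)))  # each subscale = 1.0, total = 6.0
```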
Validity of CPQ
We assessed the validity of the CPQ by examining the relationship between CPQ scores and general technology use and experience. Our rationale was that if the CPQ taps computer and Internet proficiency, proficiency should be correlated with the use of and experience with other similar technologies. The Technology Experience Questionnaire (TEQ; Czaja et al., 2006a) was also administered to all participants. The TEQ measures previous experience with a variety of digital devices, including cell phones, automated ticket kiosks, video games, and MP3 players. TEQ scores take into account the number of technologies an individual used and how frequently he or she used each technology over the past year on a scale from 1 to 5 (1 = Not sure what it is, 2 = Not used, 3 = Used once, 4 = Used occasionally, 5 = Used frequently). Consistent with our prediction, total CPQ score was highly correlated with general technology experience, calculated as the sum of experience ratings across all technologies (r(350) = .72, p < .001).
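A minimal sketch of this correlational check, using synthetic stand-in data and SciPy (the original analyses were run in SPSS, so the library choice here is ours):

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the two measures: a shared latent "tech-savviness"
# drives both, roughly reproducing a strong positive correlation.
rng = np.random.default_rng(1)
n = 352
savvy = rng.normal(0, 1, n)
cpq_total = 18 + 6 * savvy + rng.normal(0, 4, n)   # CPQ totals (6-30 scale)
teq_sum = 40 + 10 * savvy + rng.normal(0, 6, n)    # summed TEQ experience ratings

r, p = stats.pearsonr(cpq_total, teq_sum)
print(f"r({n - 2}) = {r:.2f}, p = {p:.3g}")        # df = N - 2, as in r(350)
```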
Experience with relatively common technologies, or with technologies that did not involve a computer, was unrelated to CPQ scores (e.g., there was no significant relationship between total CPQ or subscale scores and microwave use, telephone use, or remote car starter use; −.05 < r < .07). Consistent with the prediction that computer proficiency would be associated with the use of more advanced technologies, total CPQ score was only weakly correlated with basic cell phone use (r(350) = .13, p < .05) but more strongly related to smart phone use (r(350) = .46, p < .001). Although all CPQ subscales tended to predict computer-related technology experience fairly well, there were differences in the strength of these correlations. For example, the subscale most strongly correlated with e-mail experience was the communication subscale (r(350) = .86, p < .001), and the subscale most strongly correlated with computer-based video game experience was the multimedia subscale (r(350) = .60, p < .001). Multimedia subscale scores were less related to video game experience with a console game system (r(350) = .25, p < .001) or a handheld game system (r(350) = .26, p < .001). In general, these results suggest that CPQ scores are significantly related to technology experience (not just computer experience) but are more strongly related to the use of advanced technology and to experience with technology tasks that involve a computer. Together, these findings support the CPQ as a valid measure of computer proficiency.
As a further test of the relationship between computer proficiency and computer experience, we also examined the relationship between CPQ scores and age. In the U.S. population, computer experience is lower among older adults (Zickuhr & Madden, 2012). As a consequence of this relationship, we would also expect computer proficiency to have a negative relationship with age. We found that CPQ score was significantly correlated with participant age (r(350) = −.29, p < .001; Table 3): older individuals were less computer proficient.
Table 3.
Correlations Between Technology Experience, Age, CPQ (Full Version), and CPQ-12 (Short Form)
| | Technology experience | Age | CPQ |
|---|---|---|---|
| Age | −.23** | ||
| CPQ | .72** | −.29** | |
| CPQ-12 | .69** | −.29** | .99** |
Note: CPQ = Computer Proficiency Questionnaire.
**Correlation is significant at the .01 level (two-tailed). N = 352.
Next, we further examined the validity of the CPQ by conducting a discriminant analysis (SPSS v. 19) using CPQ score to predict group membership (low vs. high experience). This analysis would support the validity of the CPQ by demonstrating that CPQ scores can accurately classify participants as belonging to either the low or the high experience group. If, however, classification accuracy did not differ substantially from chance (50%), this would be evidence against the validity of the measure. Supporting the validity of the CPQ, this analysis demonstrated that 95% of high experience and 94% of low experience users could be correctly classified using CPQ scores alone.
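The logic of this classification analysis can be sketched with scikit-learn's linear discriminant analysis on synthetic scores drawn to loosely match the group means and standard deviations in Table 2; this illustrates the technique rather than reproducing the study's SPSS analysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic CPQ totals loosely matching Table 2's group means and SDs.
rng = np.random.default_rng(2)
low = rng.normal(10.0, 4.3, 276)                # low experience group
high = rng.normal(24.4, 4.2, 76)                # high experience group
X = np.concatenate([low, high]).reshape(-1, 1)  # single predictor: CPQ total
y = np.array([0] * 276 + [1] * 76)              # 0 = low, 1 = high experience

pred = LinearDiscriminantAnalysis().fit(X, y).predict(X)
for label, name in [(0, "low"), (1, "high")]:
    accuracy = (pred[y == label] == label).mean()
    print(f"{name} experience correctly classified: {accuracy:.0%}")
```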
Short-Form CPQ—The CPQ-12
Our aim was to create a measure of computer proficiency that was easy to administer and less time consuming than having older adults demonstrate their skill. The matrix nature of the survey was intended to allow participants to move from question to question rapidly. However, answering 33 questions might be mentally fatiguing or tedious and take some individuals considerable time to complete. To address this concern, we created a short-form CPQ measure (CPQ-12) containing only 12 items.
To shorten the CPQ to 12 items, we first conducted a factor analysis of the questions of each subscale of the CPQ in isolation. This analysis revealed one factor per subscale. We then retained the two questions per subscale that loaded most highly onto the single factor underlying that subscale. Total CPQ-12 scores and subscale scores were computed in the same fashion as for the full CPQ. As expected, CPQ and CPQ-12 scores were highly correlated, and CPQ-12 scores were almost as strongly related to age and technology experience as full CPQ scores (Table 3). Reliability of the CPQ-12 was excellent (Cronbach's α = .95; all subscales > .89), and a discriminant analysis found that total CPQ-12 scores could correctly classify 95% of high experience and 91% of low experience computer users. Questions retained for the CPQ-12 are marked with an asterisk in the Supplementary Material. For the purpose of quickly assessing general proficiency, we recommend the CPQ-12, but the full CPQ can provide more specific information related to an individual's training needs.
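The item-selection step can be illustrated as follows: fit a one-factor model to each subscale's items and keep the two items with the highest absolute loadings. The sketch below applies scikit-learn's FactorAnalysis to synthetic data for a hypothetical nine-item subscale; it mirrors the procedure, not the study's actual software or data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def top_two_items(subscale_responses: np.ndarray) -> list:
    """Fit a one-factor model to one subscale's items and return the
    indices of the two items loading most strongly on that factor."""
    fa = FactorAnalysis(n_components=1).fit(subscale_responses)
    loadings = np.abs(fa.components_[0])
    return list(np.argsort(loadings)[::-1][:2])

# Synthetic nine-item subscale: items share one latent factor, but with
# varying strength, so some items load more highly than others.
rng = np.random.default_rng(3)
latent = rng.normal(0, 1, (352, 1))
strengths = np.linspace(0.4, 1.2, 9)
items = latent * strengths + rng.normal(0, 0.5, (352, 9))
print(top_two_items(items))  # typically the two strongest items, [8, 7]
```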
Examining the Impact of Non–Computer Related Group Differences
The high and low experience groups differed in terms of age and gender composition: the low experience group was on average 4 years older than the high experience group and contained more women. We wanted to ensure that the reliability and predictive power of the CPQ were not an artifact of these demographic differences between groups. As evidence against this explanation, within each group (low and high experience) the CPQ score was still predicted by technology experience (high experience: r(74) = .57, p < .001; low experience: r(274) = .49, p < .001) and age (high experience: r(74) = −.26, p < .05; low experience: r(274) = −.17, p < .01). Additionally, given the significant demographic differences between groups, we would expect age and gender to be somewhat predictive of group membership. A discriminant analysis with age and gender entered as predictors together classified participants as belonging to the low or high experience group with 71% accuracy. However, as previously discussed, CPQ scores much more accurately predicted group membership (full CPQ: 94%, CPQ-12: 92%). Thus, the validity of the CPQ cannot easily be explained by differences between the groups other than differences in computer experience.
Factor Structure of the CPQ
Finally, we aimed to characterize the full CPQ by examining its factor structure. A principal components analysis of low and high experience users' data together, using varimax rotation, indicated that three factors accounted for 75% of the variance (Table 4). Questions assessing Internet and e-mail proficiency loaded highly onto Factor 1; calendaring questions and, to a lesser extent, certain aspects of communication (e.g., blogging, instant messaging, and chat room use) loaded onto Factor 2; and basic computer proficiency loaded onto Factor 3. The separation of basic computer proficiency from more advanced proficiency with specific types of software is consistent with the organization of the CPQ and its subscales. The analysis also suggests that separate CPQ subscales may in some cases be measuring aspects of a more general proficiency (e.g., the communication and Internet subscales together might both be measuring the more general ability to navigate and communicate online).
Table 4.
Principal Components Analysis of Low and High Experience User Data
| | Factor 1 | Factor 2 | Factor 3 |
|---|---|---|---|
| 1. Send the same e-mail to multiple people at the same time | 0.77 | | |
| 2. Use search engines (e.g., Google) | 0.77 | | |
| 3. Find information about local community resources on the Internet | 0.77 | | |
| 4. Store e-mail addresses in an e-mail address book or contact list | 0.77 | | |
| 5. View pictures sent by e-mail | 0.77 | | |
| 6. Make purchases on the Internet | 0.77 | | |
| 7. Find information about my hobbies and interests on the Internet | 0.76 | | |
| 8. Send e-mails | 0.76 | | |
| 9. Open e-mails | 0.74 | | |
| 10. Bookmark web sites to find them again later (e.g., make favorites) | 0.71 | | |
| 11. Read the news on the Internet | 0.70 | | |
| 12. Set up alerts to remind me of events and appointments | | 0.82 | |
| 13. Check the date and time of upcoming and prior appointments | | 0.78 | |
| 14. Use a computer to enter events and appointments into a calendar | | 0.77 | |
| 15. Chat using instant messaging | | 0.76 | |
| 16. Chat using internet chat rooms | | 0.74 | |
| 17. Post messages to the Internet (e.g., to blogs, Facebook, Twitter, online forums) | | 0.71 | |
| 18. Turn a computer on and off | | | 0.90 |
| 19. Use a computer keyboard to type | | | 0.87 |
| 20. Use a mouse | | | 0.87 |
Note: For succinctness, only loadings > 0.70 are shown. N = 352.
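For readers who wish to replicate this kind of analysis outside SPSS, the sketch below computes principal-component loadings from a correlation matrix and applies a standard varimax rotation in NumPy; the data are synthetic, with three built-in factors echoing the reported structure.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Rotate an (items x factors) loading matrix to maximize the
    varimax criterion (simple structure)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p)
        )
        R = u @ vt
        if s.sum() < var * (1 + tol):   # criterion stopped increasing
            break
        var = s.sum()
    return loadings @ R

def pca_loadings(data, n_factors=3):
    """Principal components loadings: eigenvectors of the correlation
    matrix scaled by the square roots of their eigenvalues."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]
    return eigvecs[:, order] * np.sqrt(eigvals[order])

# Synthetic responses with three latent factors (11 items per factor).
rng = np.random.default_rng(4)
latents = rng.normal(0, 1, (352, 3))
mixing = np.kron(np.eye(3), np.ones((11, 1)))   # (33 items x 3 factors)
data = latents @ mixing.T + rng.normal(0, 0.6, (352, 33))

rotated = varimax(pca_loadings(data))
print(np.round(rotated[:5], 2))   # first five items' loadings on 3 factors
```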
Discussion
A lack of computer and Internet skills can have negative consequences for the everyday activities of seniors. For example, as bank, medical, and increasingly government records go "paperless," people without these skills will experience greater difficulty accessing important information and may even be required to pay more to do so, as is currently the case when buying airline tickets. Seniors without computer and Internet proficiency will also be disadvantaged because these technologies have the potential to enhance independence by facilitating access to community and national resources, supporting prospective memory, and enabling social connections and access to health resources.
In addition to cognitive and attitudinal barriers, lack of proper training is an important barrier to technology adoption and use. Learning is greatly facilitated when training takes into account an individual's base level of proficiency. Our newly developed CPQ can be used as a tool to assess computer and Internet proficiency to facilitate both individual and group training. The tool can also be used to help select software/applications for seniors.
Compared with previous computer measures in the literature (Arning & Ziefle, 2008; Bradlow et al., 2002; Czaja et al., 2006a; Kay, 1993; Sengpiel & Dittberner, 2008), the CPQ has a number of desirable properties. First, it is relatively short (particularly the CPQ-12) and easy to administer, either on paper or online. Second, its subscales cover a broad range of computer/Internet activities that have the potential to support independence. Third, it has demonstrated reliability and validity and is appropriate for seniors with a wide range of proficiencies, from complete nonusers to extremely proficient seniors. Finally, it can easily distinguish individuals with differing levels of computer proficiency. In addition to providing insight into how to customize training to match the proficiency of individuals, the CPQ might also serve as an outcome measure to assess the effectiveness of such training.
Although the CPQ has many desirable properties, this study has some limitations that need to be acknowledged. First, we compared a (relatively smaller) group of highly proficient individuals to a group with extremely low proficiency. The smaller sample of participants in the high experience group might make estimates of this group's mean CPQ score and subscores less reliable. Furthermore, it is unclear at this point how well the CPQ can make finer distinctions between groups that are more similar. The CPQ-12, given its short length, might be used to collect a larger, national (or international) sample to better characterize the proficiency of the older adult population at large. Second, all participants in this study were English speaking. The CPQ also needs to be evaluated with other ethnic/cultural groups given the increasing diversity of the older adult population (Administration on Aging, 2012). Third, proficiency as measured by the CPQ is self-reported, and self-perceptions of one's own proficiency may not map veridically onto actual proficiency. The advantage of self-report is that it is quicker and easier than having participants demonstrate proficiency. Moreover, self-perceptions of proficiency may be most relevant for determining where to commence training efforts or identifying the need for additional training. A final limitation of this study is that administration format (online vs. paper survey) differed between groups. Although we do not believe this had much effect on responses, it cannot entirely be ruled out (perhaps the act of completing the survey online using a computer primed feelings of greater or lesser proficiency in the high experience group).
The CPQ has the potential to serve as a valuable tool to gauge proficiency overall, but especially proficiency at computer tasks that have the potential to prolong functional independence. These are computer activities that provide cognitive enrichment (games and entertainment), social support (communication and e-mail), prospective memory support (calendaring and printing), and access to local and national resources (Internet).
Supplementary Material
Supplementary material can be found at: http://gerontologist.oxfordjournals.org.
Acknowledgments
We gratefully acknowledge support from the National Institute on Aging, NIA 3 PO1 AG017211, Project CREATE III—Center for Research and Education on Aging and Technology Enhancement (www.create-center.org).
References
- Arning K., Ziefle M. (2008). Development and validation of a computer expertise questionnaire for older adults. Behaviour & Information Technology, 27, 89–93. 10.1080/01449290701760633
- Bradlow E. T., Hoch S. J., Hutchinson J. W. (2002). An assessment of basic computer proficiency among active internet users: Test construction, calibration, antecedents and consequences. Journal of Educational and Behavioral Statistics, 27, 237–253. 10.3102/10769986027003237
- Charness N., Boot W. R. (2009). Aging and information technology use: Potential and barriers. Current Directions in Psychological Science, 18, 253–258. 10.1111/j.1467-8721.2009.01647.x
- Czaja S. J., Charness N., Dijkstra K., Fisk A. D., Rogers W. A., Sharit J. (2006a). Computer and Technology Experience Questionnaire. CREATE Technical Report CREATE-2006-03. Retrieved from http://create-center.gatech.edu/publications_db/report%203%20ver1.3.pdf
- Czaja S. J., Charness N., Fisk A. D., Hertzog C., Nair S. N., Rogers W. A., Sharit J. (2006b). Factors predicting the use of technology: Findings from the Center for Research and Education on Aging and Technology Enhancement (CREATE). Psychology and Aging, 21, 333. 10.1037/0882-7974.21.2.333
- Czaja S. J., Lee C. C., Branham J., Remis P. (2012). OASIS connections: Results from an evaluation study. The Gerontologist, 52, 712–721. 10.1093/geront/gns004
- Czaja S. J., Sharit J. (2012). Designing training and instructional programs for older adults. Boca Raton, FL: CRC Press.
- Fisk A. D., Rogers W. A., Charness N., Czaja S. J., Sharit J. (2009). Designing for older adults: Principles and creative human factors approaches (2nd ed.). Boca Raton, FL: CRC Press.
- Kay R. H. (1993). A practical research tool for assessing ability to use computers: The computer ability survey (CAS). Journal of Research on Computing in Education, 26, 16–27.
- Lutz W., Sanderson W., Scherbov S. (2008). The coming acceleration of global population ageing. Nature, 451, 716–719. 10.1038/nature06516
- Melenhorst A. S., Bouwhuis D. G. (2004). When do older adults consider the internet? An exploratory study of benefit perception. Gerontechnology, 3, 89–101. 10.4017/gt.2004.03.02.004.00
- Melenhorst A. S., Rogers W. A., Bouwhuis D. G. (2006). Older adults’ motivated choice for technological innovation: Evidence for benefit-driven selectivity. Psychology and Aging, 21, 190. 10.1037/0882-7974.21.1.190
- Mitzner T. L., Boron J. B., Fausset C. B., Adams A. E., Charness N., Czaja S. J., Dijkstra K., Fisk A. D., Rogers W. A., Sharit J. (2010). Older adults talk technology: Technology usage and attitudes. Computers in Human Behavior, 26, 1710–1721. 10.1016/j.chb.2010.06.020
- Mitzner T. L., Fausset C. B., Boron J. B., Adams A. E., Dijkstra K., Lee C. C., Rogers W. A., Fisk A. D. (2008, September). Older adults’ training preferences for learning to use technology. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 52, No. 26, pp. 2047–2051). Santa Monica, CA: HFES. 10.1177/154193120805202603
- Rogers W. A., Cabrera E. F., Walker N., Gilbert D. K., Fisk A. D. (1996). A survey of automatic teller machine usage across the adult life span. Human Factors, 38, 156–166. 10.1518/001872096778940723
- Sengpiel M., Dittberner D. (2008). The computer literacy scale (CLS) for older adults—development and validation. In Mensch & Computer (pp. 7–16). Munich, Germany: Oldenbourg Verlag.
- Zickuhr K., Madden M. (2012). Older adults and internet use. Pew Internet & American Life Project. Retrieved from http://pewinternet.org/Reports/2012/Older-adults-and-internet-use.aspx