Author manuscript; available in PMC: 2008 Nov 20.
Published in final edited form as: J Gerontol B Psychol Sci Soc Sci. 2007 Jun;62(SPEC):85–96. doi: 10.1093/geronb/62.special_issue_1.85

A Multilevel Modeling Approach to Examining Individual Differences in Skill Acquisition for a Computer-Based Task

Sankaran N Nair 1, Sara J Czaja 1, Joseph Sharit 2
PMCID: PMC2585421  NIHMSID: NIHMS74337  PMID: 17565169

Abstract

This article explores the role of age, cognitive abilities, prior experience, and knowledge in skill acquisition for a computer-based simulated customer service task. Fifty-two participants aged 50–80 performed the task over 4 consecutive days following training. They also completed a battery that assessed prior computer experience and cognitive abilities. The data indicated that overall quality and efficiency of performance improved with practice. The predictors of initial level of performance and rate of change in performance varied according to the performance parameter assessed. Age and fluid intelligence predicted initial level and rate of improvement in overall quality, whereas crystallized intelligence and age predicted initial e-mail processing time, and crystallized intelligence predicted rate of change in e-mail processing time over days. We discuss the implications of these findings for the design of intervention strategies.


The existing literature on aging and cognition indicates that many component cognitive abilities associated with the fluid aspect of intelligence (which generally represents the processing and reasoning aspects of intelligence; Horn & Cattell, 1966), such as processing speed, working memory, and reasoning, show decline with age, especially under conditions of complexity or when a task represents an unfamiliar cognitive domain (e.g., Park, 1999). In contrast, knowledge, or what is referred to as crystallized intelligence, remains relatively stable or increases throughout the life span, at least until about age 70 (Schaie, 1996). Numerous studies indicate age differences on measures of component cognitive abilities that reflect the fluid aspects of intelligence and on the performance of tasks that draw on these abilities (e.g., Fisk & Rogers, 1991). Numerous studies also report age-related differences in skill acquisition. These studies generally indicate that older adults learn new skills more slowly than younger adults and do not reach the same levels of performance (e.g., Charness & Campbell, 1988; Jenkins & Hoyer, 2000; Rogers, Fisk, & Hertzog, 1994; Salthouse, 1994).

An important issue is the relevance of these findings to everyday tasks in real-world settings. Within this context there is limited information on how individual differences in component cognitive abilities, or other factors such as prior experience or knowledge, influence skill acquisition or learning. Most research on skill acquisition has involved laboratory tasks in controlled environments. Examining the factors that affect learning is especially important today, given the ubiquitous use of technology in most settings.

The issue of learning and skill acquisition for technology-based tasks is particularly relevant to older adults as compared to younger adults. One reason is that older adults are likely to have had less exposure to, and experience with, technology; thus, technology represents an unfamiliar cognitive domain. Also, because technology is not static, people will continually confront the need to learn new systems or activities at multiple points during their lives. Several studies (e.g., Charness, Kelley, Bosman, & Mottram, 2001; Czaja & Sharit, 1998; Czaja, Sharit, Ownby, Roth, & Nair, 2001) have shown that cognitive abilities (such as working memory, attention, and spatial abilities) are important predictors of performance on technology-based tasks.

However, research has also shown that prior experience and domain-specific knowledge are important to learning and to the performance of everyday activities and technology-based tasks. For example, investigators have shown that domain knowledge—knowledge that is specific to a particular task or task domain—contributes to success in the performance of chess (Chase & Simon, 1973), bridge (Engle & Bukstel, 1978), and other cognitively demanding tasks. Hambrick and Engle (2002) found that prior knowledge about baseball had a strong facilitative effect on memory performance for a baseball recall task independent of working memory and age. In fact, domain knowledge had a much stronger effect on memory performance than working memory. Beier and Ackerman (2005) found that prior knowledge was an important predictor of learning new information about cardiovascular disease and xerography. In a series of studies, we also found (e.g., Czaja & Sharit, 1998; Czaja et al., 2001) that prior experience with computers, in addition to cognitive abilities, was an important predictor of performance on subsequent computer-based tasks. Furthermore, Charness and colleagues (2001) found that breadth of experience with computer software was a strong positive predictor of learning a word-processing application.

These findings suggest that prior experience and knowledge can facilitate the process of learning and that, to the extent that older adults have prior experience or knowledge of a particular domain, age differences in skill acquisition should be reduced. However, other theorists (e.g., Posner & McLeod, 1982) suggest that although prior experience or knowledge will facilitate skill acquisition, it will not diminish the influence of cognitive abilities on learning. That is, abilities will have an influence on new learning even after controlling for individual differences in prior experience or knowledge. This view is largely consistent with investment theory (Horn & Cattell, 1967), which postulates that higher levels of fluid and crystallized intelligence enable an individual to learn new knowledge and skills. Although researchers originally postulated this theory to explain learning in children, scientists can apply it to situations involving skill acquisition with older adult populations, given that skill acquisition represents a dynamic process between acquired cognitive abilities and the learning of new material.

In a previous study evaluating an e-mail-based customer service task (Sharit et al., 2004), we found age differences in task performance such that the younger participants (aged 50–65 years) performed better than the older participants (aged 66–80 years). However, the results overall demonstrated performance improvements for both age groups with practice. One aim of this reanalysis of data from that study was to identify individual characteristics (prior computer or Internet experience, cognitive abilities, and age) related to skill acquisition for this task. In this regard, we hypothesized that age would have a negative influence on skill acquisition and that, to a large extent, these age-related differences in performance would be explained by differences in cognitive abilities. We also hypothesized that both prior experience with computers or the Internet and abilities would have a positive influence on skill acquisition, such that people with more experience and higher abilities would demonstrate more learning. However, consistent with an investment theory framework, we also hypothesized that although prior computer or Internet experience would facilitate learning, cognitive abilities would have a stronger influence on skill acquisition.

An additional aim of the study was to examine the extent to which the relationships between prior experience, abilities, and learning depended on the particular outcome measure used for assessing performance. Our prior work and that of others (e.g., Beier & Ackerman, 2005; Czaja & Sharit, 1998; Czaja et al., 2001) suggested that the relationship between experience, abilities, and performance varies according to the demands of the task. To this end, we hypothesized that fluid intelligence would be a strong predictor of the speed component of performance, whereas prior experience with computers or the Internet and crystallized intelligence would have a strong influence on aspects of performance related to knowledge of search strategies.

Overall, the objective of this study was to contribute to researchers' knowledge of factors that influence learning among older adults for ecologically valid tasks and to further elucidate the role of prior experience and abilities in learning. From a practical perspective, understanding factors that influence skill acquisition is important to the development of training programs and task aids.

Methods

Sample

We recruited a total of 52 participants from the local community, including 21 men and 31 women ranging in age from 50 to 80 years (M = 64.92 years, SD = 8.10). We required that all of the participants be English speaking and literate and have experience with computers. We paid participants $150 for completing the entire protocol.

We collapsed data on education into three levels: high school or less, some college or a college degree, and some graduate school or a graduate degree. All but 4 participants had at least some college education. With respect to occupational status, 29 of the participants were retired; the remainder were working full or part time. There was no difference between those aged 50–65 and those aged 66 or older in occupational status, χ2(1, n = 52) = 2.92, p > .05.

Task Description

The task used in this study was a simulation of a job performed by customer service representatives of a fictitious company called Media Products, an online store that sells various computer-related products. The participant's task was to open e-mails from an e-mail inbox window and respond to the customers' queries. Essentially, the task entailed reading the e-mail, mapping its contents into the appropriate locations within the company's database, and selecting the correct items of information from these locations.

Participants performed the task during two 2-hour work sessions on each of 4 consecutive days. Each work session contained 40 e-mails, and we instructed the participants to process as many of these e-mails as possible.

Setting and Equipment

Training and task performance took place at workstations consisting of a desk, an adjustable chair, and a personal computer with a 19-in. monitor. Each participant used the same workstation for the duration of the study. All of the workstations were located in a single room designed to represent an office area.

Procedures

On 2 days preceding the 4 days of task performance, each participant completed a battery of measures that included a demographic and health questionnaire, a self-efficacy questionnaire, a technology and computer experience questionnaire, a computer attitude questionnaire, a computer anxiety questionnaire, measures of hearing and vision, and 21 standardized measures of component cognitive abilities (see Appendix A; Czaja et al., 2006). Following the completion of these instruments, the participants received training on the task in groups of 5 or 6. We first gave participants an orientation to the computer, which was followed by formal training on the customer service task. A trained facilitator guided the training process, which took about 2 hr. All of the participants in the group then performed six practice e-mails completely on their own, which was followed by a review session that provided the opportunity to address both general and specific questions. Sharit and colleagues (2004) describe the sample, task, and procedures in more detail.

Measures

Cognitive abilities

For this study, we used three measures to assess crystallized intelligence: (a) Shipley Vocabulary Test (Shipley, 1986), (b) Multidimensional Aptitude Battery (Jackson, 1998), and (c) Information Subscale of the Wechsler Adult Intelligence Scale–III (Wechsler, 1997). We used six measures to assess fluid intelligence: (a) Alphabet Span (Craik, 1986), (b) Computation Span (Salthouse & Babcock, 1991), (c) Letter Sets (Ekstrom, French, Harman, & Dermen, 1976), (d) Paper Folding (Ekstrom et al., 1976), (e) Stroop Color Word Association Test (Golden, 1978), and (f) Trail Making Test (Forms A and B; Reitan, 1958).

We based the formation of the composites on the results of factor analysis for the 21 component ability measures performed with a structural equation model, estimated with AMOS 5.0 (Arbuckle, 2003), that included 1,110 people with complete data on the cognitive measures (see Czaja et al., 2006). We should note that we computed the composites with unit weighting of standardized variables that loaded uniquely on each factor. Cronbach's alphas for the composite measures of fluid and crystallized intelligence were .83 and .84, respectively.

Prior experience with computers or the Internet

Knowledge of computers included assessment of length of time using computers, proficiency with various computer input devices (e.g., mouse, voice input), proficiency with a variety of computer operations (e.g., deleting and transferring files), and proficiency with a variety of computer applications (e.g., computer graphics, e-mail). We based proficiency with the Internet on frequency of use, intensity of use per week, duration of time using the Internet, and a variety of categories of use (e.g., shopping, communication). We computed a weighted average for the two variables in order to control for differences in number of items and computed a z score for all subsequent analyses.

Task performance measures

Each e-mail required one or more selections of information. For an e-mail to be considered "correct," the information selected by the participant had to exactly match the required selections for that e-mail; that is, the participant could neither omit required information nor select additional information.

For this article, we report three measures of performance. The first measure, rate of correct e-mails, reflects overall output quality (speed and accuracy) and was defined as the total number of correct e-mails processed per hour. The second measure, e-mail processing time, was the time to read an e-mail and search for the required information in the database. Thus, this measure reflects only the time a participant was actively engaged with a particular e-mail, not time spent in between e-mails ("slack time"). In the analyses we used the mean value across all completely correct e-mails processed per day. The third measure, navigational efficiency, was the proportion of additional navigational moves taken for an e-mail relative to the minimum number of moves needed for that e-mail; thus, higher values indicate less efficiency. We computed this measure for only those e-mails processed correctly.
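To illustrate the navigational efficiency measure (the numbers here are invented, not from the study): a participant who takes 12 moves on an e-mail whose database search requires a minimum of 8 moves would score (12 − 8)/8 = 0.5.

```python
# Navigational efficiency: additional moves relative to the minimum needed
# for an e-mail; higher values indicate less efficient search.
# Illustrative numbers only, not data from the study.
def nav_efficiency(moves_taken, min_moves):
    return (moves_taken - min_moves) / min_moves

print(nav_efficiency(12, 8))  # 0.5: 50% more moves than strictly necessary
print(nav_efficiency(8, 8))   # 0.0: perfectly efficient navigation
```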

Results

The objectives of this study were to evaluate the patterns of change in performance on a computer-based task over the course of 4 days and to relate interindividual differences in these patterns to the predictor variables of age, cognitive abilities (fluid and crystallized intelligence), and prior experience with computers or the Internet. We used multilevel modeling (MLM) with the SPSS Mixed procedure (Peugh & Enders, 2005) for these analyses. The method of estimation was restricted maximum likelihood, which is less biased than full maximum likelihood with small sample sizes. We chose an MLM approach because it enables estimation of an individual growth function and assessment of predictors of that function, and hence evaluation of predictors of individual differences in growth.

Researchers have widely used MLM to analyze hierarchical and clustered data. Longitudinal data from repeated measures designs are hierarchical in nature and thus can be represented as MLM growth models. These growth models are typically formulated as two-level models wherein individual growth is represented at Level 1 and variation in growth parameters is represented at Level 2 (Singer, 1998). Raudenbush and Bryk (2002) refer to these models, respectively, as the repeated-observations model (i.e., the within-person model) and the person-level model (i.e., the between-person model).
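To make the two-level structure concrete, the following sketch simulates data with this shape and fits it in two stages: a line per person (Level 1), then a regression of the person-level intercepts and slopes on a predictor (Level 2). All numbers are invented, and this two-stage shortcut only approximates a true multilevel model, which, as in the analyses reported here, estimates both levels jointly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data mirroring the design: 52 participants, 4 days coded 0-3
n, days = 52, np.arange(4.0)
age_z = rng.normal(size=n)                                # centered age (illustrative)
true_int = 2.4 - 0.3 * age_z + rng.normal(0, 0.5, n)      # Level 2: initial status
true_slope = 0.2 + 0.05 * age_z + rng.normal(0, 0.05, n)  # Level 2: rate of change
y = true_int[:, None] + true_slope[:, None] * days + rng.normal(0, 0.15, (n, 4))

# Level 1: an intercept and slope per person (two-stage shortcut via
# per-person OLS; a true MLM estimates both levels jointly, with REML)
slopes, intercepts = np.polyfit(days, y.T, 1)

# Level 2: regress the person-level growth parameters on the predictor
X = np.column_stack([np.ones(n), age_z])
b_int = np.linalg.lstsq(X, intercepts, rcond=None)[0]
b_slope = np.linalg.lstsq(X, slopes, rcond=None)[0]
print(b_int, b_slope)  # approximately recovers the generating coefficients
```

The two-stage approach treats the Level 1 estimates as known; a joint REML fit additionally weights them by their precision, which is what the MLM analyses here do.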

For the analysis of longitudinal data, MLM techniques have several advantages over repeated measures analyses of variance (RM-ANOVAs). In its usual implementation, RM-ANOVA requires the assumption of compound symmetry of covariances. If this assumption is violated, the weaker assumption of sphericity must hold, which requires equality of the variances of the differences between any two observation levels. Violation of sphericity requires either a correction to the degrees of freedom or the use of a multivariate approach. However, these approaches may be overly conservative, and thus significant effects may not be detected. MLM does not rely on the assumption of sphericity; the covariance structure is modeled directly from the data as a separate step during the analysis (Max & Onghena, 1999).

Researchers obtain estimates of the variance components by using the maximum likelihood or restricted maximum likelihood methods. These methods are designed to deal with correlated data, and, therefore, sphericity is not an issue. Another disadvantage of RM-ANOVA is the overestimation of the standard error (Quené & van den Bergh, 2004). Also, in order to retain a participant missing a single observation, RM-ANOVA must impute the missing observation, which can cause an underestimation of the error term and thus suboptimal F tests.

In our model, at Level 1, we estimated each person's growth (or change) with a linear model containing an intercept, which estimated performance at the beginning of the observation period, and a slope, which indicated the rate of change in performance across the time periods. The Level 2 submodel relates individual differences in initial status and rate of change to individual characteristics. Thus, by restricting individual differences to their individual change trajectories at Level 1, we could examine interindividual differences in change in terms of their predictors (Chapman, Hesketh, & Kistler, 2002; Singer & Willett, 2003). In this case, the outcome variable was performance on the database search task post training, as measured on each of the 4 days by three outcome variables, and the predictors of individual differences in change were age, prior computer or Internet experience, and cognitive abilities (fluid and crystallized intelligence). For the purposes of these analyses, we computed z scores for prior computer or Internet experience and for fluid and crystallized intelligence. Age was mean centered. We centered time at the initial day (the 4 days were coded 0, 1, 2, and 3). By centering the predictors and setting the first day to 0, we scaled the intercept to represent average initial status (posttraining performance), thereby facilitating interpretation. Table 1 presents the correlations between the predictor variables and their correlations with the outcome measures. Table 2 presents the descriptive statistics for the outcome measures and predictor variables.
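In the notation of Singer and Willett (2003), this model can be written as follows; the predictor abbreviations (EXP for prior computer or Internet experience, Gf and Gc for fluid and crystallized intelligence) are our shorthand, not labels from the original analyses:

```latex
% Level 1 (within person): performance of person i on day j
Y_{ij} = \pi_{0i} + \pi_{1i}\,\mathrm{DAY}_{ij} + \varepsilon_{ij}

% Level 2 (between persons): growth parameters as functions of the predictors
\pi_{0i} = \gamma_{00} + \gamma_{01}\,\mathrm{AGE}_i + \gamma_{02}\,\mathrm{EXP}_i
         + \gamma_{03}\,\mathrm{Gf}_i + \gamma_{04}\,\mathrm{Gc}_i + \zeta_{0i}
\pi_{1i} = \gamma_{10} + \gamma_{11}\,\mathrm{AGE}_i + \gamma_{12}\,\mathrm{EXP}_i
         + \gamma_{13}\,\mathrm{Gf}_i + \gamma_{14}\,\mathrm{Gc}_i + \zeta_{1i}
```

With DAY coded 0 for the first day and the predictors centered, \(\gamma_{00}\) is average posttraining performance and \(\gamma_{10}\) is the average rate of change.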

Table 1.

Correlations Between Predictor and Outcome Variables

Variable                        1        2        3        4        5        6
1. PC/Internet experience       —
2. Crystallized intelligence    .171     —
3. Fluid intelligence           .409**   .487**   —
4. Age                          −.306*   −.070    −.275*   —
5. Rate of correct e-mails      .454**   .489**   .678**   −.520**  —
6. E-mail processing time       −.395**  −.468**  −.511**  .540**   −.788**  —
7. Navigational efficiency      −.475**  −.324*   −.573**  .330*    −.637**  .661**

Notes: PC = personal computer. *p ≤ .05; **p ≤ .01.

Table 2.

Descriptive Statistics on Predictor and Outcome Variables

Variable M SD Range
Predictor variables
 Age 64.92 8.10 50–79
 Domain knowledge
   PC knowledge 15.48 5.33 5–27
   WWW knowledge 15.25 6.15 0–23
 Crystallized intelligence
   Shipley Vocabulary 35.46 3.80 17–40
   MAB Information 26.25 6.90 11–37
   WAIS Information 22.29 3.30 11–28
 Fluid intelligence
   Paper Folding (correct) 7.10 3.59 1–17
   Letter Sets (correct) 16.52 5.92 2–27
   Computation Span (simple span) 33.65 13.62 10–65
   Alphabet Span (simple span) 39.42 9.68 20–65
   Stroop Test (color–word score) 36.60 8.45 20–61
   Trails (Trails B – Trails A) 69.13 129.27 5–961
Outcome variables
 Rate of correct e-mails (per hr)
   Day 1 12.60 6.74 .48–28.01
   Day 2 16.15 7.72 .73–32.89
   Day 3 18.68 8.18 1.44–36.43
   Day 4 20.18 8.33 4.18–39.17
 E-mail processing time (min)
   Day 1 2.76 1.04 1.36–6.00
   Day 2 2.08 0.63 .91–4.48
   Day 3 1.93 0.58 .91–3.85
   Day 4 1.81 0.61 .96–4.25
 Navigational efficiency
   Day 1 1.10 0.54 .32–3.56
   Day 2 0.86 0.43 .21–1.96
   Day 3 0.67 0.39 .19–2.06
   Day 4 0.58 0.35 .13–1.65

Note: SD = standard deviation; PC = personal computer; WWW = World Wide Web; MAB = Multidimensional Aptitude Battery; WAIS = Wechsler Adult Intelligence Scale.

Model Construction

We constructed separate sets of models for each of the three outcome variables. Initially, at Level 1, the outcome variable was expressed as a linear function of days in the study, yielding an estimate of the intercept and slope for each individual. At Level 2, we examined interindividual differences in the intercept (initial status, post training) and slope (rate of change) as a function of four predictor variables: age, prior computer or Internet experience, and fluid and crystallized intelligence.

For each outcome measure, the analyses conducted included the unconditional means model, the unconditional growth model (intercepts only), and the unconditional growth model (intercepts and slopes). Tables 3, 4, and 5 (Models A, B, and C) summarize the results of these analyses. We constructed the unconditional means model (Model A) in order to partition the variation in the outcome variable into within-person (Level 1 residual) and between-person (Level 2 residual) components. There were no predictors, and we did not examine change over time. Based on this partition of the variance, we computed intraclass correlation coefficients for each of the outcome variables. The intraclass correlation coefficient values, which represent the proportion of total variation in the outcome variables that can be attributed to individual differences, were ρ = .759 for rate of correct e-mails, ρ = .519 for e-mail processing time, and ρ = .395 for navigational efficiency. The two unconditional growth models (Models B and C) represent the introduction of task day into the Level 1 submodel. These models represent each individual as a change trajectory instead of by his or her mean, as in the unconditional means model. Model B allows individuals' change trajectories to vary in their intercepts only, whereas Model C allows individuals to vary in both intercept and slope. Because Model B is nested within Model C, which adds the random slope component, we could compare the two models through their deviance (−2 log likelihood) statistics to determine whether there were significant interindividual differences in rate of change in the outcome measure. We also utilized Models A and C to determine within-person variation in the outcome measures over time (R2ε), to determine whether there was a significant change in the outcome measures over days, and to compute the pseudo R-square statistics for the remaining models (discussed later).
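The intraclass correlation from the unconditional means model is simply the Level 2 (between-person) variance divided by the total (between- plus within-person) variance. Recomputing it from the rounded Model A variance components in Tables 3, 4, and 5 reproduces the reported values to rounding error:

```python
# Intraclass correlation from the unconditional means model (Model A):
# between-person variance over total variance
def icc(between_var, within_var):
    return between_var / (between_var + within_var)

# Model A variance components from Tables 3, 4, and 5 (rounded table entries)
print(round(icc(0.367, 0.117), 2))  # 0.76, rate of correct e-mails (cf. .759)
print(round(icc(0.351, 0.325), 2))  # 0.52, e-mail processing time (cf. .519)
print(round(icc(0.089, 0.137), 2))  # 0.39, navigational efficiency (cf. .395)
```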
We computed pseudo R-square statistics consistent with the approach outlined by Singer and Willett (2003), and Appendix B presents the relevant formulae.
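The basic idea behind these statistics is the proportional reduction in a variance component when predictors (or the time variable) are added. Two of the reported values can be recomputed from the rounded Table 3 entries:

```python
# Pseudo R-square as the proportional reduction in a variance component
# when a model term is added (Singer & Willett, 2003)
def pseudo_r2(var_baseline, var_with_predictors):
    return (var_baseline - var_with_predictors) / var_baseline

# Table 3: initial-status variance drops from 0.642 (Model C) to 0.516
# when prior experience is added (Model D)
print(round(pseudo_r2(0.642, 0.516), 3))  # 0.196, the R2_0 reported for Model D

# Table 3: Level 1 residual variance drops from 0.117 (Model A) to 0.025
# once task day enters the model (Model C)
print(round(pseudo_r2(0.117, 0.025), 2))  # 0.79, cf. the reported R2_eps of .783
```

Small discrepancies from the reported values reflect rounding of the published variance components.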

Table 3.

Prediction of Initial Status and Rate of Change in Rate of Correct E-Mails

Model A Model B Model C Model D Model E Model F Model G
Variable Estimate t Estimate t Estimate t Estimate t Estimate t Estimate t Estimate t
Fixed effects
 Initial status
   Intercept 2.655 30.4*** 2.355 26.2*** 2.355 20.9*** 2.355 23.3*** 2.355 29.3*** 2.355 32.5*** 2.355 28.3***
   Prior experience 0.367 3.6*** 0.183 2.1* 0.122 1.5
   Crystallized intelligence 0.180 1.9 0.204 2.4* 0.173 1.8
   Fluid intelligence 0.377 3.7*** 0.314 3.4** 0.456 4.7***
   Age −0.037 −3.5***
 Rate of change
   Average 0.200 14.4*** 0.200 10.1*** 0.200 10.9*** 0.200 12.2*** 0.200 12.7*** 0.200 12.0***
   Prior experience −0.055 −3.0** −0.031 −1.7 −0.022 −1.3
   Crystallized intelligence −0.029 −1.5 −0.032 −1.8 −0.028 −1.5
   Fluid intelligence −0.049 −2.4* −0.040 −2.0* −0.062 −3.2**
   Age 0.005 2.2*
Variance components Wald Z Wald Z Wald Z Wald Z Wald Z Wald Z
 Level 1 residual variance 0.117 0.050 8.8*** 0.025 7.2*** 0.025 7.2*** 0.025 7.2*** 0.025 7.2*** 0.025 7.2***
 Level 2 variances
   Initial status 0.367 0.383 4.9*** 0.642 4.9*** 0.516 4.8*** 0.319 4.6*** 0.254 4.5*** 0.342 4.7***
   Rate of change 0.015 3.7*** 0.012 3.5*** 0.009 3.0** 0.008 2.9** 0.009 3.1**
Pseudo R-square statistics
R2yy 0.105 0.105 0.282 0.558 0.646 0.523
R20 0.196 0.503 0.468
R21 0.182 0.414 0.379

Notes: R2yy is the proportion of the variance in the dependent variable explained by the model; R20 is the proportion of the between-person variation in initial status explained by the model; R21 is the proportion of the between-person variation in rate of change explained by the model.

*p ≤ .05; **p ≤ .01; ***p ≤ .001.

Table 4.

Prediction of Initial Status and Rate of Change in E-Mail Processing Time

Model A Model B Model C Model D Model E Model F Model G
Variable Estimate t Estimate t Estimate t Estimate t Estimate t Estimate t Estimate t
Fixed effects
 Initial status
   Intercept 2.144 23.5*** 2.592 26.1*** 2.592 20.8*** 2.592 22.6*** 2.592 25.5*** 2.592 28.5*** 2.592 24.6***
   Prior experience −0.368 −3.2** −0.244 −2.2* −0.165 −1.6
   Crystallized intelligence −0.312 −2.6* −0.342 −3.2** −0.302 −2.5*
   Fluid intelligence −0.176 −1.4 −0.095 −0.8 −0.281 −2.3*
   Age 0.044 3.6***
 Rate of change
   Average −0.299 −11.4*** −0.299 −9.1*** −0.299 −9.5*** −0.299 −9.9*** −0.299 −10.1*** −0.299 −9.7***
   Prior experience 0.074 2.3* 0.061 1.8 0.050 1.5
   Crystallized intelligence 0.078 2.2* 0.083 2.4* 0.076 2.1*
   Fluid intelligence −0.001 0.0 −0.013 −0.4 0.025 0.7
   Age −0.007 −1.6
Variance components Wald Z Wald Z Wald Z Wald Z Wald Z Wald Z
 Level 1 residual variance 0.325 0.178 8.8*** 0.128 7.2*** 0.128 7.2*** 0.128 7.2*** 0.128 7.2*** 0.128 7.2***
 Level 2 variances
   Initial status 0.351 0.388 4.5*** 0.720 4.5*** 0.596 4.3*** 0.448 4.1*** 0.340 3.8*** 0.489 4.2***
   Rate of change 0.030 2.6*** 0.026 2.4* 0.022 2.1* 0.020 2.0* 0.024 2.3*
Pseudo R-square statistics
R2yy 0.167 0.167 0.276 0.422 0.524 0.387
R20 0.173 0.378 0.321
R21 0.149 0.283 0.207

Notes: R2yy is the proportion of the variance in the dependent variable explained by the model; R20 is the proportion of the between-person variation in initial status explained by the model; R21 is the proportion of the between-person variation in rate of change explained by the model.

*p ≤ .05; **p ≤ .01; ***p ≤ .001.

Table 5.

Prediction of Initial Status in Navigational Efficiency

Model A Model B Model C Model D Model E Model F Model G
Variable Estimate t Estimate t Estimate t Estimate t Estimate t Estimate t Estimate t
Fixed effects
 Initial status
   Intercept 0.802 16.4*** 1.065 19.0*** 1.065 16.2*** 1.065 20.8*** 1.065 22.4*** 1.064 22.4*** 1.065 21.7***
   Prior experience −0.166 −3.8*** −0.102 −2.4* −0.091 −2.1* −0.021 −0.4
   Crystallized intelligence −0.025 −0.5 −0.029 −0.6
   Fluid intelligence −0.148 −3.0** −0.137 −2.8** −0.091 −4.1***
   Age 0.006 1.2
 Rate of change
   Average −0.175 −9.6*** −0.175 −8.5*** −0.175 −9.6*** −0.175 −9.6*** −0.175 −9.6*** −0.175 −9.6***
Variance components Wald Z Wald Z Wald Z Wald Z Wald Z Wald Z
 Level 1 residual variance 0.137 0.087 8.8*** 0.075 7.2*** 0.087 8.8*** 0.087 8.8*** 0.086 8.8*** 0.087 8.8***
 Level 2 variances
   Initial status 0.089 0.102 4.1*** 0.171 3.8*** 0.076 3.9*** 0.057 3.5*** 0.056 3.5*** 0.064 3.7***
   Rate of change 0.007 1.5
Pseudo R-square statistics
R2yy 0.171 0.171 0.293 0.388 0.396 0.350
R20 0.254 0.441 0.368
R21

Notes: R2yy is the proportion of the variance in the dependent variable explained by the model; R20 is the proportion of the between-person variation in initial status explained by the model; R21 is the proportion of the between-person variation in rate of change explained by the model.

*p ≤ .05; **p ≤ .01; ***p ≤ .001.

The results from Models B and C also indicated whether there were significant interindividual differences in rate of change in performance over the subsequent days; Tables 3, 4, and 5 (Models A, B, and C) summarize these results. We examined the difference in the deviance statistics (−2 log likelihood) between Models B and C by using the chi-square statistic. We also examined the Level 2 variance components by using the Wald Z statistic. The data indicated significant individual differences in rate of change over time for rate of correct e-mails, χ2(1) = 90.50, p < .001, and e-mail processing time, χ2(1) = 19.23, p < .001. The difference in the deviance statistic for navigational efficiency, χ2(1) = 8.34, p < .01, did not reach the more stringent level considered meaningful for this test (Singer & Willett, 2003), implying that the addition of predictors to the model would not reliably explain interindividual differences in rates of change. Although the results indicated no interindividual differences in rate of change in navigational efficiency, we retained the multilevel approach for consistency in comparing results across measures.
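The p values for these deviance differences can be checked against a chi-square distribution with 1 degree of freedom; for a single degree of freedom the upper-tail probability reduces to the complementary error function. (Strictly, testing a variance component at the boundary of its parameter space makes this naive reference distribution conservative, a caveat Singer and Willett, 2003, discuss.)

```python
import math

# Upper-tail p value for a -2 log likelihood (deviance) difference evaluated
# against chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2))
def deviance_p(diff):
    return math.erfc(math.sqrt(diff / 2.0))

print(deviance_p(90.50) < .001)    # rate of correct e-mails: True
print(deviance_p(19.23) < .001)    # e-mail processing time: True
print(round(deviance_p(8.34), 3))  # navigational efficiency: ~.004
```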

To examine the influence of the predictor variables on initial performance and rate of change in performance, we added prior computer or Internet experience to the unconditional growth model as the first step (Model D), and we added the crystallized and fluid composites as the second step (Model E). To examine the influence of age after accounting for differences in prior experience and abilities, we then added age as a third step (Model F). Tables 3, 4, and 5 (Models D, E, and F) summarize these models. In addition, to examine the relative dominance of abilities in skill acquisition, we replicated these analyses except that we entered cognitive abilities as the first step (Model G) and prior experience as the second step. Tables 3, 4, and 5 (Model G) summarize this model. Finally, we computed a separate model (Model H) with age as the only predictor to examine the extent to which differences in prior experience and abilities explained age-related differences in performance (Table 6).

Table 6.

Model H (Age as the Only Level 2 Predictor)

Rate of Correct E-Mails Processing Time Navigational Efficiency



Variable Estimate t Estimate t Estimate t
Fixed effects
 Initial status
   Intercept 2.355 24.0*** 2.592 23.8*** 1.065 19.7***
   Age −0.051 −4.2*** 0.056 4.1*** 0.014 2.4*
 Rate of change
   Average 0.200 10.9*** −0.299 −9.4***
   Age 0.007 3.1** −0.008 −2.1*
Variance components Wald Z Wald Z Wald Z
 Level 1 residual variance 0.025 7.2*** 0.128 7.2*** 0.087 8.8***
 Level 2 variances
   Initial status 0.482 4.8*** 0.527 4.3*** 0.091 4.0***
   Rate of change 0.012 3.5*** 0.027 2.4*
Pseudo R-square statistics
R2yy 0.332 0.355 0.228
R20 0.348 0.267 0.106
R21 0.192 0.121

Notes: R2yy is the proportion of the variance in the dependent variable explained by the model; R20 is the proportion of the between-person variation in initial status explained by the model; R21 is the proportion of the between-person variation in rate of change explained by the model.

*p ≤ .05; **p ≤ .01; ***p ≤ .001.

Rate of Correct E-Mails

We subjected rate of correct e-mails to a natural log transformation to achieve linearity. Age, fluid intelligence, and crystallized intelligence predicted individual differences in initial performance (post training) for rate of correct e-mails (Table 3, Model F). Generally, increased age was associated with lower initial performance, and people with lower fluid and crystallized intelligence had lower initial performance on this measure. Age was the strongest predictor of individual differences in initial status for this measure, followed by fluid intelligence (Table 3). Age and fluid intelligence predicted individual differences in rate of change over days, such that the rate of change, or improvement in performance, was greater for older adults (Figure 1) and for those with lower fluid intelligence. The R2ε value computed from Models A and C was .783, indicating that days on the task explained 78% of the within-person variance. Inclusion of the predictor variables accounted for 60% of the between-person variance in initial performance and 48% of the between-person variance in rate of change in performance over time.

Figure 1.

Prior experience with computers or the Internet accounted for 20% of the between-person variance in initial performance and 18% of the between-person differences in rate of change in performance (Table 3). The inclusion of the cognitive ability measures explained an additional 30% of the between-person variance in initial performance and an additional 23% of the interindividual variance in rate of change in performance. Adding age to the model explained an additional 10% of the individual differences in initial performance and an additional 7% of the individual differences in rate of change in performance. Also, when we added abilities to the model, prior computer or Internet experience was no longer significant. In fact, when we added abilities as the first predictors to the model (Model G), they accounted for 47% of the between-person variance in initial status for this variable and 38% of the between-person variance in rate of change in performance. Also, inclusion of prior computer or Internet experience after accounting for individual differences in abilities explained only 3% of the variance in initial performance and 3% of the variance in change over days (Model G). Comparing the results of Models F and H indicated that individual differences in prior computer or Internet experience and cognitive abilities explained about 59% of the age-related variance in initial performance, and individual differences in these variables accounted for about 64% of the age-related variance in rate of change in performance.

E-Mail Processing Time

The final model (Model F) indicated that age and crystallized intelligence predicted individual differences in initial performance for e-mail processing time (Table 4). Generally, people who had higher crystallized intelligence took less time initially to read the e-mails and search the databases for the correct information. In addition, increased age was associated with longer processing times. Age was the strongest predictor of individual differences in initial performance. In terms of rate of change in performance, generally individuals with lower crystallized intelligence exhibited greater improvement in processing time over days (Figure 2). The R2ε for this variable was .606, indicating that task experience explained about 61% of the intra-individual change in e-mail processing time. For this variable, the predictor variables explained 53% of the between-person variance in initial performance and 34% of the between-person variance in rate of change in performance over time.

Figure 2.

Improvement in e-mail processing time (min) over days as a function of crystallized intelligence (lines represent predictions for persons with crystallized intelligence 1 SD below the mean and 1 SD above the mean).

Similar to rate of correct e-mails, individual differences in cognitive abilities explained the greatest amount of individual differences in initial performance and rate of change in performance across the 4 days. When we added prior computer or Internet experience as the first predictor in the model, it accounted for 17% of the between-person variance in initial processing time and 15% of the between-person variance in rate of change in processing time. Inclusion of the cognitive abilities in the model explained an additional 21% of individual differences in initial processing time and 13% of individual differences in rate of change in processing time. Adding age to the model accounted for an additional 15% of the between-person variance in initial processing time and an additional 6% of the between-person variance in rate of change in processing time. When we added cognitive abilities as the first predictors in the model, they explained 32% of the between-person variance in initial processing time and 21% of the between-person variance in rate of change in processing time. Inclusion of prior experience in the model after accounting for individual differences in abilities explained only 6% of the between-person variance in initial performance and 7% of the between-person variance in change in performance over days. Comparing the results of Models F and H indicated that individual differences in prior computer or Internet experience and cognitive abilities explained about 42% of the age-related variance in initial performance, and individual differences in these variables accounted for about 54% of the age-related variance in rate of change in performance.

Navigational Efficiency

Given that there were no reliable individual differences in rate of change in navigational efficiency as a function of the four predictor variables, we continued with the random-intercepts-only model (Model B) and examined the impact of the predictors on initial performance only for this variable. As shown in Table 5, prior computer or Internet experience and fluid intelligence predicted initial individual differences in navigational efficiency (Model F). Taken together, the predictor variables accounted for 45% of the between-person variance in initial navigational efficiency. The strongest predictor of initial performance for this variable was fluid intelligence.

When we added prior experience as the first predictor to the model, it accounted for 25% of the between-person variance in initial navigational efficiency. The inclusion of the ability measures accounted for an additional 19% of the between-person variance. Adding age to the model produced no appreciable change in the between-person variance in initial status (Table 5). When we added abilities as the initial step, they accounted for 37% of the variance in initial status; prior experience accounted for only an additional 7% of the variance. Also, the data indicated that individual differences in the other predictor variables accounted for 66% of the age-related variance in initial performance.

Discussion

The objectives of this study were to examine factors that predict skill acquisition for an e-mail-based customer service task. We used MLM to examine predictors of individual differences in initial performance (post training) as well as in the rate of change in performance over 4 days. Use of an MLM approach expanded our previous findings regarding this task (Sharit et al., 2004), as it enabled us to chart trajectories of change in performance over time and to identify individual characteristics related to change. An additional advantage of this approach is that we were able to examine both the within-person and between-person variation in skill acquisition. Finally, use of this technique allowed us to avoid some of the restrictive assumptions associated with conventional approaches (e.g., RM-ANOVA), such as homogeneity of variance and sphericity.

Overall, the data indicated that, for all participants, performance improved with task experience. This finding is encouraging, as it suggests that older adults can learn to perform computer-based tasks commonly found in work environments. The data also indicated that rates of improvement in performance were higher for older adults and for those with lower abilities, which suggests that participants with higher initial performance had less opportunity for growth. These findings lend support to the literature, which generally indicates age differences in skill acquisition (e.g., Jenkins & Hoyer, 2000), and underscore the importance of providing older adults with sufficient practice on a newly acquired task and opportunities for extended training (e.g., Sterns, 1986).

However, we also found variation between predictors of initial performance of the task and predictors of rate of change in performance. Specifically, we were able to gain insights into the relative roles of fluid and crystallized intelligence in the skill acquisition process. In particular, for the measure of rate of correct e-mails (our measure of overall output), age, fluid intelligence, and crystallized intelligence predicted initial performance, whereas age and fluid intelligence predicted rate of change in performance. This finding suggests that one's knowledge base is important to initial comprehension of a task, but that further improvements in performance are more dependent on fluid abilities.

Our findings also indicated that prior experience with computers and the Internet only accounted for a small amount of the between-person variance in initial performance and rate of change in performance after accounting for differences in cognitive abilities. These findings support an investment theory framework (Horn & Cattell, 1967), which would suggest that the proportion of variance accounted for by abilities after partialing out the effects of prior knowledge is greater than the portion of variance accounted for by prior knowledge after partialing out the effects of abilities (Beier & Ackerman, 2005). That is, individuals with higher crystallized and fluid intelligence are more likely to acquire knowledge in other domains. These findings also support recent data that indicate that individuals with higher cognitive abilities are more likely to adopt new technologies (Czaja et al., 2006). Most likely this is because adoption of new technologies typically requires learning new skills. It is well established that cognitive abilities are important to new learning. Our findings also lend support to the “rich get richer” hypothesis, which suggests that people with higher cognitive abilities are more likely to be successful at knowledge acquisition. This is consistent with the findings of Hambrick and Engle (2002), which indicated that participants with higher levels of working memory capacity derived greater benefit from domain knowledge than did participants with lower levels of working memory capacity.

However, our results are not totally comparable to those of Beier and Ackerman (2005) or Hambrick and Engle (2002), as we assessed prior computer or Internet experience only as a proxy of domain knowledge as opposed to a direct knowledge test. In addition, our measure was self-report, and the limitations of self-report measures are fairly well established. For example, it may have been difficult for our participants to assess their actual level of proficiency with computer operations. Our findings may also be due to the nature of the domain examined: computers and the Internet. It is well established that an age-related digital divide exists, and thus older people are less likely than their younger counterparts to have a breadth of experience with computer technology (Pew Internet and American Life Project, 2005).

Another important finding of this study was that the relative importance of the predictors depended on the outcome measure. Specifically, fluid intelligence and age were important predictors of overall output, which was a measure that reflected both speed and accuracy. This is not surprising, given that executing a correct response involved identifying relevant text in the query, recalling which database contained the needed information, and performing computer operations such as scrolling. Our measure of fluid intelligence encompassed cognitive abilities that were relevant to these task demands.

In contrast, crystallized intelligence was an important predictor of initial e-mail processing time. Again, this is not surprising, as this measure included time to read the e-mail and search the database for the correct information. Clearly, the ability to understand word meaning was important to this aspect of performance, and, as noted, our measure of crystallized intelligence included a measure of vocabulary and verbal comprehension. Thus, consistent with our findings regarding rate of correct e-mails, prior knowledge was important to initial comprehension. However, for this measure crystallized intelligence (but not fluid intelligence) was also associated with rates of change in performance such that people with lower crystallized intelligence exhibited greater improvements in performance over time. Finally, although fluid intelligence and prior computer or Internet experience predicted individual differences in the initial level of navigational efficiency, there were no individual differences in rate of change for this measure. This confirms findings indicating that practice and consistent mapping of associations result in improvements in skilled performance (e.g., Fisk, Cooper, Hertzog, Anderson-Garlach, & Lee, 1995). In this case, our participants learned to map the contents of the information contained within an e-mail query to the database that contained the needed information. The content of the database did not vary.

Overall, our findings indicate that the relationships among individual differences in factors such as age, prior experience, and basic abilities and learning are complex for real-world tasks, for which performance is multifaceted and defined according to a variety of parameters. This is consistent with the views of other investigators (e.g., Beier & Ackerman, 2005; Diehl, Willis, & Schaie, 1995) and suggests that learning and performance of complex tasks depend on cognitive abilities as well as knowledge and prior experience.

The findings also have implications for the design of training programs and performance aids. One of the purposes of providing performance support aids is to minimize the cognitive burden of the task as might be reflected by perceptual, memory, and response-time demands. Our findings suggest that we need to first carefully examine the performance objectives of a task (i.e., the critical outcome measures) before designing training programs and implementing aiding systems. For example, our findings indicate that for this task, having some basic knowledge of computers and the Internet was important to initial basic navigational skills. Furthermore, extended practice resulted in improvements in performance efficiency. However, having prior experience and practice on the task may not be sufficient to achieve optimal performance, given the link between abilities and output quality. In this case, performance may have further improved with the provision of some type of a memory aid, such as a graphical depiction of the menu system or the highlighting of relevant text in the e-mail queries. Analogous to the part-task/whole-task distinction in training, we may also want to consider layering training for tasks governed by multiple performance objectives.

There are, of course, limitations to the current study. One is the relatively small sample size, which could have limited statistical power and the precision of the parameter estimates. However, despite the small sample, we were able to account for a significant portion of the variance in performance. A second limitation is that we used a proxy measure of domain knowledge: a self-report measure of computer and Internet experience rather than an objective knowledge test. A third limitation is that most participants did not appear to reach asymptotic levels of performance, and thus further improvements may have occurred with additional practice. Despite these limitations, this study contributes to researchers' knowledge of skill acquisition for complex, real-world tasks.

Acknowledgments

This research was supported by the Center for Research and Education on Aging and Technology Enhancement, funded through Grant P01AG17211-0252 from the National Institute on Aging of the National Institutes of Health. We also acknowledge the National Institute of Occupational Safety and Health, in partnership with the National Institute on Aging, for providing additional support for this project. In addition, we would like to thank Chin Chin Lee, Tamer El-Attar, and Christopher Hertzog for their assistance with the preparation of this article.

Appendix A

Detailed Description of Individual Measures

Measure Form of Administration Description
Alphabet Span (Craik, 1986) Group Participants are presented with a series of words. They are then required to write down the words in alphabetical order.
Attitudes Towards Computer Questionnaire (Jay & Willis, 1992) Group A 35-item scale that assesses eight dimensions of attitudes towards computers (comfort, efficacy, gender equality, control, interest, dehumanization, and utility).
California Verbal Learning Test–Delayed (Delis, Kramer, Kaplan, & Ober, 1987) Group Participants are asked to recall as many words as they remember from the original list after a 20-min delay.
California Verbal Learning Test–Immediate (Delis et al., 1987) Group Participants are presented with a list of words and asked to recall as many words as they remember. This is repeated for five trials.
Computation Span (Salthouse & Babcock, 1991) Group Participants are asked to solve arithmetic problems that are presented orally while simultaneously trying to remember the solution for each of the problems.
Computer Anxiety (Loyd & Gressard, 1984) Group A 10-item scale that assesses general anxiety and comfort toward computers.
Choice Reaction Time (Wilkie, Eisdorfer, Morgan, Loewenstein, & Szapocznik, 1990) Individual (computer based) Participants are required to respond to a stimulus, which appears on the screen (solid square), with their right or left hand, depending on the location of the stimulus. There are a total of 60 trials.
Cube Comparison Test (Ekstrom, French, Harman, & Dermen, 1976) Group A measure of spatial orientation wherein participants are required to identify whether two cubes are the same or different.
Digit Span (Wechsler, 1981) Individual Consists of two parts. In the first part, pairs of random-number sequences are read aloud, and the participants' task is to repeat each sequence. In the second part, participants are presented with a series of digits and asked to recall them in reverse order.
Digit Symbol Recall Test (Wechsler, 1981) Group After the standard Digit Symbol Substitution Test, participants are asked to recall the digit–symbol pairs from memory.
Digit Symbol Substitution Test (Wechsler, 1981) Group Participants are presented with a series of rows that pairs digits (1–9) with a nonsense symbol and are then required to fill in symbols below a row of digits.
Inference Test (Ekstrom et al., 1976) Group Participants are required to draw conclusions from information presented in statements.
Letter Sets (Ekstrom et al., 1976) Group Participants are required to determine which of four sets of letters is unrelated to the others.
Meaningful Memory (Institute for Personality and Ability Testing, 1982) Group Initially, participants are given a list of things to study. Following 20 min, participants are asked to pick from a list a word that means the same or about the same as the word that was described in the first list.
Multidimensional Aptitude Battery (Jackson, 1998) Group Participants are required to answer questions related to knowledge of diverse topics.
Nelson–Denney Reading Comprehension (Brown, Fischo, & Hanna, 1993) Group Participants are required to answer questions regarding the meaning of seven reading passages.
Number Comparison (Ekstrom et al., 1976) Group Participants are required to inspect pairs of multidigit numbers and then indicate whether the pairs are the same or different.
Paper Folding (Ekstrom et al., 1976) Group A measure of spatial visualization wherein participants are asked to visualize the folding and unfolding of pieces of paper. They are required to identify the figure being folded.
Self-Efficacy (Rodin & McAvay, 1992) Group An 8-item paper-and-pencil questionnaire that assesses general self-efficacy.
Shipley Vocabulary (Shipley, 1986) Group Participants are asked to circle the word that has the same meaning or most nearly the same meaning as a referent word.
Simple Reaction Time (Wilkie et al., 1990) Individual (computer based) Participants use their dominant hand to press the keyboard when the stimulus (blue box) appears.
Stroop Color Word Association Test (Golden, 1978) Individual Paper-and-pencil test. Participants are required to name the ink color in which color words are printed, ignoring the words themselves.
Technology and Computer/World Wide Web Experience Questionnaire (CREATE) Group A 14-item paper-and-pencil questionnaire that assesses use of technology and use/breadth of experience with computer technology and the World Wide Web.
Trail Making Test (Forms A and B; Reitan, 1958) Individual In Part A, participants draw lines to connect consecutively numbered circles. In Part B, participants draw lines to connect alternating numbered and lettered circles.
Information Subscale of the Wechsler Adult Intelligence Scale–III (Wechsler, 1997) Group Participants are required to write down responses to questions about factual information that deals with general knowledge about common events, objects, places, and people.

Appendix B

We can write the Level 1 submodel as follows:

Yij=π0i+π1i(dayij)+εij, (1)

where Yij represents the outcome measure for participant i on day j, π0i and π1i represent the intercept and slope for participant i, and εij represents the residual.

We can write the final Level 2 submodel as follows:

π0i=γ00+γ01(dki)+γ02(Gci)+γ03(Gfi)+γ04(agei)+ζ0i, (2)
π1i=γ10+γ11(dki)+γ12(Gci)+γ13(Gfi)+γ14(agei)+ζ1i, (3)

where γ00 represents the average performance during the initial day; γ01, γ02, γ03, and γ04 represent the coefficients of the predictor variables (domain knowledge, crystallized intelligence, fluid intelligence, and age); γ10 represents the average linear growth trajectory; and γ11, γ12, γ13, and γ14 represent the expected change in the growth trajectory based on the predictor variables. ζ0i and ζ1i are the individual deviations in initial status and slope.

Substituting the Level 2 submodel into the Level 1 submodel yields the following:

Yij=γ00+γ01(dki)+γ02(Gci)+γ03(Gfi)+γ04(agei)+γ10(dayij)+γ11(dki)(dayij)+γ12(Gci)(dayij)+γ13(Gfi)(dayij)+γ14(agei)(dayij)+ζ0i+ζ1i(dayij)+εij (4)
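
The composite model in Equation 4 is simply a linear predictor in which the person-level variables shift both the intercept and the slope of the growth trajectory. The following minimal sketch illustrates the fixed-effects part of that prediction; all coefficient values are hypothetical placeholders chosen for illustration, not estimates from this study.

```python
# Fixed-effects prediction from the composite growth model (Equation 4).
# Coefficient values below are hypothetical, not estimates from the study.

def predicted_outcome(day, dk, gc, gf, age, gammas):
    """Return the model-implied outcome for one participant on a given day.

    day    -- days of task experience (0 = initial status)
    dk     -- domain knowledge (prior computer/Internet experience)
    gc     -- crystallized intelligence (Gc)
    gf     -- fluid intelligence (Gf)
    age    -- participant age (typically centered before fitting)
    gammas -- dict of Level 2 fixed-effect coefficients
    """
    intercept = (gammas["g00"] + gammas["g01"] * dk + gammas["g02"] * gc
                 + gammas["g03"] * gf + gammas["g04"] * age)
    slope = (gammas["g10"] + gammas["g11"] * dk + gammas["g12"] * gc
             + gammas["g13"] * gf + gammas["g14"] * age)
    return intercept + slope * day

# Hypothetical coefficients (illustration only)
coefs = {"g00": 2.0, "g01": 0.1, "g02": 0.05, "g03": 0.2, "g04": -0.05,
         "g10": 0.2, "g11": 0.0, "g12": -0.01, "g13": -0.02, "g14": 0.007}

# Predicted 4-day trajectory for a participant at the predictor means
# (all centered predictors = 0) except domain knowledge = 1
trajectory = [predicted_outcome(d, 1.0, 0.0, 0.0, 0.0, coefs) for d in range(4)]
```

Plotting such trajectories for persons 1 SD above and below the mean on a predictor is how Figures 1 and 2 are typically generated.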

Formulae from Singer and Willett (2003) used to calculate intraclass correlation and pseudo R-square statistics are as follows:

ρ = σ₀² / (σ₀² + σε²), (5)
Pseudo-Rε² = [σ̂ε²(unconditional means) − σ̂ε²(unconditional growth)] / σ̂ε²(unconditional means), (6)
Pseudo-Rζ² = [σ̂ζ²(unconditional growth) − σ̂ζ²(subsequent model)] / σ̂ζ²(unconditional growth). (7)
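
These statistics reduce to simple arithmetic on the estimated variance components. The sketch below shows the computation for Equations 5–7; the variance values are hypothetical, chosen only to be on the same scale as those reported in the tables.

```python
# Intraclass correlation (Equation 5) and pseudo R-square statistics
# (Equations 6-7), computed from hypothetical variance-component estimates.

def icc(sigma0_sq, sigma_eps_sq):
    """Equation 5: proportion of total outcome variance that lies
    between persons rather than within persons."""
    return sigma0_sq / (sigma0_sq + sigma_eps_sq)

def pseudo_r2(baseline_var, model_var):
    """Equations 6-7: proportional reduction in a variance component
    when moving from a baseline model to a richer model."""
    return (baseline_var - model_var) / baseline_var

# Hypothetical estimates (illustration only)
sigma0_sq = 0.48      # Level 2 initial-status variance
sigma_eps_sq = 0.12   # Level 1 residual variance, unconditional means model

rho = icc(sigma0_sq, sigma_eps_sq)
# Residual variance shrinking from 0.12 to 0.026 after adding day
r2_eps = pseudo_r2(0.12, 0.026)
```

The same `pseudo_r2` computation applies to the Level 2 variances (initial status and slope) when predictors are added to the model.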

References

  1. Beier ME, Ackerman PL. Age, ability and the role of prior knowledge on the acquisition of new domain knowledge: Promising results in a real-world learning environment. Psychology and Aging. 2005;20:341–355. doi: 10.1037/0882-7974.20.2.341.
  2. Chapman RS, Hesketh LJ, Kistler DJ. Predicting longitudinal change in language production and comprehension in individuals with Down syndrome: Hierarchical linear modeling. Journal of Speech and Hearing Research. 2002;45:902–915. doi: 10.1044/1092-4388(2002/073).
  3. Charness N, Campbell JID. Acquiring skill at mental calculation in adulthood: A task decomposition. Journal of Experimental Psychology. 1988;117:115–129.
  4. Charness N, Kelley CL, Bosman EA, Mottram M. Word processing training and retraining: Effects of adult age, experience, and interface. Psychology and Aging. 2001;16:110–127. doi: 10.1037/0882-7974.16.1.110.
  5. Chase WG, Simon HA. The mind's eye in chess. In: Chase WG, editor. Visual information processing. New York: Academic Press; 1973. pp. 215–281.
  6. Craik FIM. A functional account of age differences in memory. In: Klix F, Hagendorf H, editors. Human memory and cognitive abilities. Amsterdam, The Netherlands: Elsevier; 1986. pp. 409–422.
  7. Czaja SJ, Charness N, Fisk AD, Hertzog C, Nair SN, Rogers W, et al. Factors predicting the use of technology: Findings from the Center for Research and Education on Aging and Technology Enhancement (CREATE). Psychology and Aging. 2006;21:333–352. doi: 10.1037/0882-7974.21.2.333.
  8. Czaja SJ, Sharit J. Ability–performance relationships as a function of age and task experience for a data entry task. Journal of Experimental Psychology: Applied. 1998;4:332–351.
  9. Czaja SJ, Sharit J, Ownby R, Roth D, Nair S. Examining age differences in performance of a complex information search and retrieval task. Psychology and Aging. 2001;16:564–579. doi: 10.1037/0882-7974.16.4.564.
  10. Delis DC, Kramer JH, Kaplan E, Ober BA. California Verbal Learning Test: Adult version. San Antonio, TX: The Psychological Corporation; 1987.
  11. Diehl M, Willis SL, Schaie KW. Everyday problem solving in older adults: Observational assessment and cognitive correlates. Psychology and Aging. 1995;10:478–491. doi: 10.1037/0882-7974.10.3.478.
  12. Ekstrom RB, French JW, Harman HH, Dermen D. Number Comparison Test: Manual for kit of factor-referenced cognitive tests. Princeton, NJ: Educational Testing Service; 1976.
  13. Engle RW, Bukstel LH. Memory processes among bridge players of differing expertise. American Journal of Psychology. 1978;91:673–689.
  14. Fisk AD, Cooper BP, Hertzog C, Anderson-Garlach MM, Lee MD. Understanding performance and learning in consistent memory search: An age-related perspective. Psychology and Aging. 1995;10:255–268. doi: 10.1037/0882-7974.10.2.255.
  15. Fisk AD, Rogers W. Toward an understanding of age-related memory and visual search effects. Journal of Experimental Psychology: General. 1991;120:131–149. doi: 10.1037/0096-3445.120.2.131.
  16. Golden CJ. Stroop color and word test: A manual for clinical and experimental uses. Wood Dale, IL: Stoelting; 1978.
  17. Hambrick DZ, Engle RW. Effects of domain knowledge, working memory capacity, and age on cognitive performance: An investigation of the Knowledge Is Power Hypotheses. Cognitive Psychology. 2002;44:339–387. doi: 10.1006/cogp.2001.0769.
  18. Horn JL, Cattell RB. Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology. 1966;57:210–220. doi: 10.1037/h0023816.
  19. Horn JL, Cattell RB. Age differences in fluid and crystallized intelligence. Acta Psychologica. 1967;26:107–129. doi: 10.1016/0001-6918(67)90011-x.
  20. Jackson DN. MAB-II: Multidimensional Aptitude Battery. Port Huron, MI: Sigma Assessment Systems; 1998.
  21. Jenkins L, Hoyer WJ. Instance-based automaticity and aging: Acquisition, reacquisition, and long-term retention. Psychology and Aging. 2000;15:551–565. doi: 10.1037/0882-7974.15.3.551.
  22. Loyd BH, Gressard C. Reliability and factorial validity of computer attitude scales. Educational and Psychological Measurement. 1984;44:501–505.
  23. Max L, Onghena J. Some issues in the statistical analysis of completely randomized and repeated measures designs for speech, language, and hearing research. Journal of Speech, Language and Hearing Research. 1999;42:261–270. doi: 10.1044/jslhr.4202.261.
  24. Peugh JL, Enders CK. Using the SPSS mixed procedure to fit cross-sectional and longitudinal multilevel models. Educational and Psychological Measurement. 2005;65:717–741.
  25. Pew Internet and American Life Project. The mainstreaming of online life. 2005 Retrieved June 20, 2005, from http://www.pewintenet.org/pdfs/Intenet_Status_2005.pdf.
  26. Posner MI, McLeod P. Information processing models: In search of elementary operations. Annual Review of Psychology. 1982;33:477–514. doi: 10.1146/annurev.ps.33.020182.002401.
  27. Quené H, van den Bergh H. On multi-level modeling of data from repeated measures designs: A tutorial. Speech Communication. 2004;43(12):103–121.
  28. Raudenbush SW, Bryk AS. Hierarchical linear models: Applications and data analysis methods. 2nd ed. Newbury Park, CA: Sage; 2002.
  29. Reitan RM. Validity of the Trail Making Test as an indication of organic brain damage. Perceptual and Motor Skills. 1958;9:271–276.
  30. Rogers WA, Fisk AD, Hertzog C. Do ability-performance relationships differentiate age and practice effects in visual search? Journal of Experimental Psychology: Learning, Memory, and Cognition. 1994;20:710–738. doi: 10.1037/0278-7393.20.3.710.
  31. Salthouse TA. Aging associations: Influence of speed on adult aging differences in associative learning. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1994;20:1486–1503. doi: 10.1037/0278-7393.20.6.1486.
  32. Salthouse TA, Babcock RL. Decomposing adult age differences in working memory. Developmental Psychology. 1991;27:763–776.
  33. Schaie KW. Intellectual development in adulthood: The Seattle Longitudinal Study. New York: Cambridge University Press; 1996.
  34. Sharit J, Czaja SJ, Hernandez M, Yang Y, Perdomo D, Lewis J, et al. An evaluation of performance by older persons on a simulated telecommuting task. Journal of Gerontology: Psychological Sciences. 2004;59B:P305–P316. doi: 10.1093/geronb/59.6.p305.
  35. Shipley WC. Shipley Institute of Living Scale. Los Angeles: Western Psychological Services; 1986.
  36. Singer JD. Using SAS PROC MIXED to fit multilevel models, hierarchical models, and individual growth models. Journal of Educational and Behavioral Statistics. 1998;23:323–355.
  37. Singer JD, Willett JB. Applied longitudinal data analysis: Modeling change and event occurrence. New York: Oxford University Press; 2003.
  38. Sterns H. Training and retraining adults and older adult workers. In: Birren J, Robinson P, Livingston J, editors. Age, health and employment. Englewood Cliffs, NJ: Prentice-Hall; 1986. pp. 93–113.
  39. Wechsler D. Wechsler Adult Intelligence Scale–Revised Manual. New York: The Psychological Corporation; 1981.
  40. Wechsler D. Wechsler Adult Intelligence Scale III Administration and Scoring Manual. San Antonio, TX: The Psychological Corporation; 1997.
  41. Wilkie FK, Eisdorfer C, Morgan R, Loewenstein DA, Szapocznik J. Cognition in early HIV infection. Archives of Neurology. 1990;47:433–440. doi: 10.1001/archneur.1990.00530040085022.
