International Journal of MS Care. 2020 Mar-Apr;22(2):67–74. doi: 10.7224/1537-2073.2018-108

iCAMS

Assessing the Reliability of a Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS) Tablet Application

Meghan Beier, Kevin Alschuler, Dagmar Amtmann, Abbey Hughes, Renee Madathil, Dawn Ehde

Abstract

Background:

This study aimed to develop a Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS) tablet application, “iCAMS,” and examine equivalency between the original paper-based and the tablet-based assessments.

Methods:

This study enrolled 100 participants with physician-confirmed multiple sclerosis (MS). Interrater reliability, parallel forms reliability, and concurrent validity were evaluated by incorporating two test administrators in each session: one scoring participant responses with the original paper assessments and the other with iCAMS. Although the participant was exposed to the material only once, responses were recorded on both administration methods. In addition to the standard test procedures, each research assistant used a stopwatch to measure the amount of time required to administer and score each version of BICAMS.

Results:

Pearson correlation coefficients (r) revealed strong and significant correlations for all three tests. Excellent agreement was observed between iCAMS and paper versions of the BICAMS tests, with all intraclass correlation coefficients exceeding 0.93. The scores from all the cognitive tests were not statistically significantly different, indicating no proportional bias. Including scoring, administration of the iCAMS application saved approximately 10 minutes over the paper version.

Conclusions:

Preliminary findings suggest that the tablet application iCAMS is a reliable and fast method for administering BICAMS.

Keywords: Clinical neurology examination, Memory, Multiple sclerosis (MS), Neuropsychological Assessment, Reliability


Up to 65% of people with multiple sclerosis (MS) develop cognitive impairment.1–4 Although there is considerable variability in the range and severity of cognitive symptoms, the most common include slowed processing speed and difficulty with visual and verbal learning, especially impaired acquisition of new information.4–7 Neurocognitive dysfunction is a primary factor predicting loss of employment and lowered quality of life.8–10 Effective detection of cognitive difficulties is critical to prompting initiation of pharmacologic or behavioral interventions, which may then lead to improved treatment and long-term outcomes.11,12 Patient-reported assessment is commonly used by medical staff to screen for cognitive dysfunction. Unfortunately, research in MS suggests that self-reported (or perceived) cognitive function is more strongly associated with subjective fatigue and emotional distress (eg, depression, anxiety) than with objective neuropsychological findings.13,14 Thus, although patient self-report is important for understanding the day-to-day experience, objective tests represent the best way to diagnose and track cognitive change over extended periods.

Historically, and in many current practices, objective cognitive assessment relies on 1) comprehensive neuropsychological evaluations conducted during separate medical encounters, 2) brief mental status screening tools (eg, the Mini-Mental State Examination [MMSE]15) administered during a routine neurology or primary care appointment, or 3) patient self-administered computerized tests.16

Unfortunately, comprehensive evaluations are expensive, time-consuming, and vulnerable to patient fatigue, and they may not be readily accessible to patients or clinical providers; and brief screeners are not sensitive to the impairments often seen in MS.17 Recent advancements have attempted to reduce the disadvantages of each of these approaches through the validation of MS population–tailored batteries, such as the Minimal Assessment of Cognitive Function in MS (MACFIMS)2 and the Brief Repeatable Battery of Neuropsychological Tests for MS18; however, similar to their comprehensive predecessors, these brief batteries still require a separate visit with a neuropsychologist for administration and interpretation. Computerized batteries or screeners (eg, CogEval: Processing Speed Test)19 eliminate many of the time and administration barriers; unfortunately, self-administered computerized assessments are still not reliable for assessing memory acquisition or free recall, tasks that require the patient to provide verbal responses.16

In an effort to bring MS-specific cognitive screening into the multidisciplinary medical setting, a multinational committee of MS cognition experts convened in 2010 to recommend an assessment that could be 1) used internationally, 2) completed in 15 minutes or less, 3) completed without specific equipment, 4) used over time via alternate forms, 5) administered by medical professionals without specific training in neuropsychology, and 6) used to assess the domains commonly affected in MS. The resulting truncated battery is referred to as the Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS).20 The BICAMS battery includes three well-known neuropsychological instruments. The oral Symbol Digit Modalities Test (SDMT)21 is an easy-to-administer 90-second test of processing speed that has high sensitivity to cognitive dysfunction in MS.22 The other two instruments capture deficits in acquisition (ie, initial learning) of verbal and visual memory: the five learning trials of the California Verbal Learning Test, Second Edition (CVLT-II),23 and the three learning trials of the Brief Visuospatial Memory Test–Revised (BVMT-R).24 The BICAMS tool demonstrates strong psychometric properties (sensitivity, 94%; specificity, 86%) and can detect cognitive dysfunction when performance on at least one of the three subtests is impaired.25 It also has an advantage over self-administered computerized tests and batteries by assessing memory acquisition.16

Although BICAMS is a major advancement, it still presents barriers to clinical implementation. In particular, the evaluation still demands time and resources through purchasing and managing multiple paper-based assessment materials (eg, score sheet packets, stimulus booklets), manual scoring of each test, transferring scores from the testing materials to the patient’s medical record, and interpreting test scores. Moreover, these neuropsychological tests are not inherently intuitive. They take time to learn, score, and interpret, and relying on nonpsychologist providers for administration and scoring may not be ideal for this measure. Thus, the aim of the present study was to develop a BICAMS tablet computer application (app): “iCAMS.” This tool overcomes the barriers of traditional paper assessments by using technology to increase efficiency and ease of administration. It also addresses the gaps in self-administered computerized assessments by assessing memory acquisition. The goal of this study was to examine equivalency between the two administration procedures (paper-based and tablet-based). We hypothesized that 1) the two versions of BICAMS (traditional paper-administered BICAMS vs tablet-administered iCAMS) would be equivalent and 2) the tablet app would take significantly less time to administer and score than the paper BICAMS.

Methods

Participants

Participants with a physician-confirmed diagnosis of MS were recruited from the University of Washington MS Center. On presenting for an MS Center clinic appointment, patients were provided with an institutional review board–approved brief survey that assessed interest in research participation. The completed form was transferred to a research coordinator and/or two research assistants. Research staff used the form to preliminarily assess eligibility. Potential participants who seemed to meet the inclusion criteria were approached, and an appointment was scheduled to complete the one-time evaluation. To be consistent with the original validation and construction of norms for the neuropsychological measures included in BICAMS, participants were required to be aged 18 to 79 years and able to read and write in English. Individuals also had to be able to draw the BVMT-R shapes using a pencil (for paper administration) or a finger (for tablet administration). If participants could hold a writing utensil to complete initial questionnaires or if they were able to point with a single finger, they were considered eligible. All the procedures were approved by the University of Washington institutional review board, and participants provided written consent before enrolling in the study.

Procedure

iCAMS Development

The paper neuropsychological measures were transferred to tablet form with written permission from Western Psychological Services (Torrance, CA) and Psychological Assessment Resources (Lutz, FL), copyright owners of the SDMT and the BVMT-R, respectively. The principal investigator (M.B.) and institution were unable to reach an agreement with Pearson (London, UK) for use of the copyrighted CVLT-II learning trials in the tablet app. Thus, for the present study, the CVLT-II learning trials were replaced with those from a similar auditory verbal learning test, the Rey Auditory Verbal Learning Test (RAVLT). There are two primary differences between the two tests: the CVLT-II consists of a 16-item word list semantically grouped into four categories, whereas the RAVLT consists of a 15-item word list with no semantic grouping. The RAVLT is in the public domain, and no permissions were required for its use.26 To replicate previous studies confirming that the RAVLT learning trials are a comparable alternative measure,27 we administered the paper-based CVLT-II learning trials to every participant. Next, using standard normative data corrected for age and sex, the total raw scores for the RAVLT and CVLT-II learning trials were converted to their respective standard scores. The RAVLT z scores were transformed to T scores. The two standard scores were then compared for statistically significant differences.
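As an illustration of this conversion step, the sketch below applies the standard linear transformation from z to T scores (T = 10z + 50) and then runs a paired t test on the two standard scores. It is a minimal sketch only: the normative mean and SD, and the data, are invented placeholders, whereas the study used published, demographically corrected norms.

```python
# Minimal sketch of the RAVLT vs CVLT-II standard-score comparison, assuming
# hypothetical normative values and simulated data.
import numpy as np
from scipy import stats

def raw_to_z(raw, norm_mean, norm_sd):
    """Convert raw learning-trial totals to z scores against a normative mean/SD."""
    return (raw - norm_mean) / norm_sd

def z_to_t(z):
    """Standard linear transformation from z to T scores: T = 10z + 50."""
    return 10.0 * z + 50.0

# Hypothetical paired data for 100 participants.
rng = np.random.default_rng(0)
ravlt_raw = rng.normal(47, 10, size=100)  # RAVLT learning-trial raw totals
cvlt_t = rng.normal(50, 10, size=100)     # CVLT-II totals already expressed as T scores

# Placeholder norms; the study used age- and sex-corrected published norms.
ravlt_t = z_to_t(raw_to_z(ravlt_raw, norm_mean=45.0, norm_sd=9.0))

# Paired t test for a systematic difference between the two standard scores.
t_stat, p_value = stats.ttest_rel(ravlt_t, cvlt_t)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```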

The principal investigator (M.B.) worked closely with Versa Computing (San Diego, CA) to build an app that adapted the original paper tests for tablet-based administration. Generation 4 iPads (Apple Inc, Cupertino, CA) were used for this study. With a screen size of 9.5 × 7.31 inches, they were only slightly smaller than the typical 8.5 × 11–inch paper forms. The app includes step-by-step administration instructions and automatic scoring. The resulting app functions as follows: the opening iCAMS screen requires input of the participant demographic information necessary for standardized scoring (ie, age, education, race/ethnicity). Directly following are the three cognitive subtests in the standardized order: SDMT; RAVLT learning trials; and BVMT-R learning trials, Form 1. After completion of the tests, iCAMS automatically calculates raw scores, z scores, and percentiles. Scoring used manual-based normative data for the BVMT-R,24 RAVLT,28 and SDMT.21
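To make the automatic-scoring step concrete, here is a minimal sketch of a raw-to-z-to-percentile conversion. The normative table, its age bands, and the function names are hypothetical stand-ins; the actual app used the normative data published in each test manual.

```python
# Hypothetical sketch of automatic scoring: raw total -> z score -> percentile.
from scipy.stats import norm

# Invented normative entries keyed by (test, age band): (mean, SD).
NORMS = {
    ("SDMT", "40-49"): (52.0, 9.0),
    ("RAVLT", "40-49"): (46.0, 9.5),
    ("BVMT-R", "40-49"): (24.0, 6.0),
}

def score(test: str, age_band: str, raw: float) -> tuple[float, float]:
    """Convert a raw subtest total to a z score and a normal-curve percentile."""
    mean, sd = NORMS[(test, age_band)]
    z = (raw - mean) / sd
    percentile = norm.cdf(z) * 100.0  # area under the standard normal curve
    return z, percentile

z, pct = score("SDMT", "40-49", raw=49)
print(f"z = {z:.2f}, percentile = {pct:.1f}")
```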

Study Design

Study personnel collected demographic data (age, sex, race/ethnicity) and disease-related variables (MS type, date of diagnosis, comorbid medical conditions, medication) via medical record review or a short demographic questionnaire for variables not regularly charted (eg, educational level). After enrollment and written consent, each participant attended one appointment approximately 30 to 45 minutes in length. During the appointment, each participant was randomly assigned to either paper- or iCAMS-led administration.

Using standardized administration procedures, the RAVLT learning trials and the SDMT were administered orally. The paper and iPad tests were administered by research assistants trained in testing procedures by the principal investigator (M.B.). Test administrators recorded participant answers either by writing them on paper or by selecting the appropriate options (finger press) on the tablet screen. For the SDMT, participants were provided with the paper stimulus. Answers were recorded on paper as well as on the tablet, with a single tap for a correct answer or a double tap for an incorrect answer. For the RAVLT, the word list was displayed on screen, and as the patient verbally recalled words, a single tap on each word recorded a correct response. Tapping on a word more than once allowed the administrator to tally the total number of repeated answers, and tapping on an "other" button allowed for the tally of nontarget words.
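The recording logic just described reduces to a small tally structure. The class below is a hypothetical reconstruction for illustration, not the app's actual code: the first tap on a list word marks it correct, further taps on the same word count as repetitions, and taps on "other" count as nontarget (intrusion) responses.

```python
# Hypothetical sketch of the RAVLT response-recording logic described above.
from collections import Counter

class TrialRecorder:
    def __init__(self, word_list: list[str]):
        self.word_list = set(word_list)
        self.taps = Counter()   # taps per list word
        self.intrusions = 0     # taps on the "other" button

    def tap(self, word: str) -> None:
        if word in self.word_list:
            self.taps[word] += 1
        else:
            self.intrusions += 1  # nontarget word

    @property
    def correct(self) -> int:
        # Each list word counts once toward the trial score, however often tapped.
        return len(self.taps)

    @property
    def repetitions(self) -> int:
        return sum(n - 1 for n in self.taps.values())

rec = TrialRecorder(["drum", "curtain", "bell"])
for w in ["drum", "bell", "drum", "apple"]:
    rec.tap(w)
print(rec.correct, rec.repetitions, rec.intrusions)  # 2 1 1
```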

For the BVMT-R learning trials, participants were asked to view the stimuli in a paper booklet or on the tablet screen; all other learning trial administration procedures were maintained per standardized instructions. The administrators were instructed to hold both the tablet and the stimulus book according to standardized procedures. For the tablet, once the learning trial began, the stimuli remained on screen for 10 seconds and then automatically switched to the drawing screen. The iCAMS drawing screen mimics the paper response form: the participant was asked to draw all six figures as accurately as possible and in the correct location on the screen. Participants were given the choice of using either a stylus or a finger to draw the remembered geometric shapes; individuals with poor upper extremity motor control, or those for whom using a writing implement was difficult, often chose to use a finger. If participants made a mistake while drawing, they were instructed to place an "X" through the unwanted figure and recreate the remembered figure directly adjacent, so that scoring could account for all qualitative data. If needed and instructed by the administrator (not needed for this study), the whole screen could be wiped by tapping a "clear" button at the bottom of the screen. After the third and final trial, the administrator was walked through scoring with a dual presentation of each trial's drawing and the manual's scoring algorithm. The tablet app tallied the total learning score after each trial was recorded.

Interrater reliability and parallel forms reliability were evaluated by incorporating two test administrators in each session—one scoring participant responses with the paper BICAMS and the other with iCAMS. This varied only with the BVMT-R learning trials. Half of the participants drew responses on paper, the other half on the tablet screen. Both administrators viewed and scored the available drawings on their assigned instrument (paper or tablet). Both administrators recorded participant responses simultaneously, but only one led the session. Half of the testing sessions were led by the paper administrator and half by the iCAMS administrator. Each participant was randomly assigned to either paper-led or iCAMS-led administration. Concurrent validity was assessed using the same study design. Although the participant was exposed to the material only once, the responses were recorded on both administration methods. In addition to the standard test procedures, each research assistant used a stopwatch to measure the amount of time required to administer and score each version of BICAMS. After completing the three subtests (SDMT, RAVLT learning trials, and BVMT-R learning trials), each participant was administered the standard paper CVLT-II learning trials.

Measures

The SDMT is a commonly used measure of processing speed. The test requires the participant to quickly pair geometric designs to one of nine numbers (based on a provided key) for 90 seconds.2,13,29 The measure is commonly used in MS, with the oral version recommended because the written version may be confounded by upper extremity motor weakness.13,20,30

The RAVLT is a verbal learning and memory test. Individuals are read a list of 15 words and then are asked to verbally produce as many words as they can recall. This process is repeated five times. The final score is calculated by totaling the number of correct words over all five trials. Although not part of the traditional BICAMS, the RAVLT has been used in previous studies to assess verbal learning and memory in persons with MS.31–34 Only the RAVLT learning trials were included in this study.

The BVMT-R is a visual, nonverbal test of learning and memory. Only the learning trials of the BVMT-R are incorporated into BICAMS. Individuals being tested are asked to study a figure with six geometrical designs for 10 seconds. The figure is removed and the participant is asked to draw as many of the geometrical designs as they can remember, placing them in the correct location on the page. The three learning trials are scored based on accuracy: 1 point for correct shape and 1 point for correct location. A total score is derived from summing the total number of points across all three learning trials.24 The BVMT-R has six different forms. Form 1 was used in this study.

The CVLT-II is a verbal learning and memory test frequently used in individuals with MS.2 The BICAMS battery includes only the first five learning trials of this measure. Individuals are read a list of 16 words that are grouped into four semantic categories. Participants are required to verbally produce as many words as they can recall. This process is repeated five times. The final score is calculated by totaling the number of correct words across all five trials.20
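The raw-score rules for the three subtests reduce to simple sums; a minimal sketch under invented inputs follows (the tests themselves are administered and scored per their manuals).

```python
# Minimal sketch of the raw-score rules described in the Measures section.
def sdmt_raw(correct_pairings: int) -> int:
    """Oral SDMT: number of correct symbol-number pairings in 90 seconds."""
    return correct_pairings

def verbal_learning_raw(words_per_trial: list[int]) -> int:
    """RAVLT (5 trials, 15 words) or CVLT-II (5 trials, 16 words):
    total correct words summed across the learning trials."""
    return sum(words_per_trial)

def bvmtr_raw(trials: list[list[tuple[int, int]]]) -> int:
    """BVMT-R learning trials: per figure, 1 point for accurate shape and
    1 point for correct location (6 figures x 3 trials, maximum 36)."""
    return sum(shape + location for trial in trials for shape, location in trial)

print(verbal_learning_raw([5, 8, 10, 12, 13]))          # 48
print(bvmtr_raw([[(1, 1), (1, 0)], [(1, 1), (1, 1)]]))  # 7 (truncated example)
```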

Statistical Analysis

Sample size calculations for this study were based on estimation of the intraclass correlation coefficient, ICC(3,1),35 and account for the two modes of administration of BICAMS (one paper, one tablet) to the same individuals. Using power analysis calculations described by Walter and colleagues,35 a minimum of 20 individuals was needed to test the equivalency of scores between the two BICAMS versions (α = 0.05, β = 0.2, ρ1 = 0.90, ρ0 = 0.70, and n = 2). Although a sample of 20 participants was sufficient for detecting a difference between the two modes of administration, we anticipated running secondary analyses and wanted to ensure adequate power if data were lost due to administrator or technology errors. For these two reasons, we enrolled 100 participants in the study.
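For readers who want to reproduce this calculation, the sketch below implements the commonly cited Walter and colleagues (1998) approximation. Treating α as one- versus two-sided changes the constant, so the exact minimum depends on those conventions; with the settings shown, it lands in the same range as the authors' reported minimum of 20.

```python
# Approximate number of subjects needed to distinguish an ICC of rho1 from a
# null value rho0, with n ratings per subject, per Walter, Eliasziw & Donner
# (1998). A sketch under stated assumptions, not the authors' exact code.
import math
from scipy.stats import norm

def walter_sample_size(rho0: float, rho1: float, n: int = 2,
                       alpha: float = 0.05, beta: float = 0.20) -> int:
    theta = lambda rho: (1 + (n - 1) * rho) / (1 - rho)  # Fisher-type transform
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)         # one-sided alpha assumed
    k = 1 + (2 * n * z ** 2) / ((n - 1) * math.log(theta(rho1) / theta(rho0)) ** 2)
    return math.ceil(k)

# Study parameters: rho0 = 0.70, rho1 = 0.90, two modes of administration.
print(walter_sample_size(0.70, 0.90, n=2))  # ~18 with a one-sided alpha
```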

Regarding missing data, if there was not a score for comparison (ie, the paper or tablet score was missing), the single score without a pair was not included in the final comparison analysis. Clinical and demographic characterization of the sample was assessed via descriptive and frequency statistics. Differences in demographic characteristics between the paper- and tablet-led groups were assessed using t tests and χ2 analyses (Table 1). Multiple methods were used to compare the BICAMS and iCAMS scores (a minimal sketch of these analyses follows Table 1). Pearson correlation coefficients (r) were calculated to assess the concurrent validity between paper and app; a linear relationship of ±0.3 was considered weak, ±0.5 moderate, and ±0.7 strong. Intraclass correlations were used to examine agreement between the scores from the paper-based BICAMS and tablet-based iCAMS groups; intraclass correlation coefficient values greater than 0.9 were considered excellent reliability and values between 0.75 and 0.9 good reliability.36 A Bland-Altman plot was constructed for each cognitive measure to determine whether there were any systematic differences between the two measures and to identify potential outliers. Outliers were also assessed by mean difference scores; mean difference scores at or close to zero indicate that variability is due to analytical imprecision rather than to a difference attributable to either mode of administration.37 Regression was performed to determine whether there was significant proportional bias around the mean difference line. To compare efficiency, t tests were used to examine the difference in administration time between paper and iCAMS, as well as to compare scores on the RAVLT learning trials with those of the CVLT-II. A lack of statistical significance suggests that any difference in scores can be explained by random variation.

Table 1.

Demographic characteristics of participants stratified by test administration

Characteristic Total sample (N = 100) Paper-led group (n = 50) iCAMS-led group (n = 50) Statistical test P value
Age, y 46.43 ± 12.98 45.31 ± 12.34 47.18 ± 13.51 t = −0.72 .47
Education, y 15.48 ± 2.46 15.51 ± 2.45 15.41 ± 2.5 t = 0.2 .84
Time since MS diagnosis, y 10.68 ± 8.33 10.46 ± 7.54 10.92 ± 9.18 t = −0.27 .79
Sex
 Female 74 (76) 38 (39) 36 (37)
 Male 24 (24) 11 (11) 13 (13) χ2 = 0.22 .64
Type of MS
 Relapsing-remitting 77 (77) 42 (42) 35 (35)
 Other 23 (23) 8 (8) 15 (15) χ2 = 0.77 .10

Note: Values are given as mean ± SD or number (percentage) unless otherwise indicated.

Abbreviation: MS, multiple sclerosis.
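The following is a minimal sketch of the agreement analyses described above: Pearson r for concurrent validity, ICC(3,1) (two-way model, consistency, single measure) for agreement, and a Bland-Altman regression check for proportional bias. The paired score arrays are invented for illustration.

```python
# Sketch of the paper-vs-tablet agreement analyses with simulated data.
import numpy as np
from scipy import stats

def icc_3_1(x: np.ndarray, y: np.ndarray) -> float:
    """ICC(3,1) from a two-way ANOVA on an (n subjects x 2 raters) table."""
    data = np.column_stack([x, y])
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1, keepdims=True)
    col_means = data.mean(axis=0, keepdims=True)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between-subjects MS
    mse = ((data - row_means - col_means + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

rng = np.random.default_rng(1)
paper = rng.normal(49, 12, size=100)
tablet = paper + rng.normal(0, 1.5, size=100)  # near-identical tablet scores

r, p = stats.pearsonr(paper, tablet)  # concurrent validity
icc = icc_3_1(paper, tablet)          # agreement

# Bland-Altman: regress differences on means; a slope significantly
# different from zero indicates proportional bias.
diffs, means = tablet - paper, (tablet + paper) / 2
fit = stats.linregress(means, diffs)
print(f"r = {r:.3f}, ICC = {icc:.3f}, bias slope p = {fit.pvalue:.2f}")
```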

Results

Study Participants

Clinical and demographic characterization of the sample was assessed using descriptive and frequency statistics. Consistent with previous MS research, the sample of 100 participants was highly educated (mean ± SD, 15.48 ± 2.46 years), mostly female (74%), with 78% having the relapsing-remitting form of MS. The mean ± SD participant age was 46.43 ± 12.98 years and disease duration (time since diagnosis) was 10.68 ± 8.33 years. There was no statistically significant difference in demographic characteristics between the paper- and tablet-led groups.

Comparison of Paper BICAMS and iCAMS

Scores

Pearson correlation coefficients (r) between paper- and app-based tests are presented in Table 2. Strong and significant correlations were observed for all three tests. Excellent agreement was observed between iCAMS and paper versions of BICAMS, with all intraclass correlation coefficients exceeding 0.93 (Table 2). The scores from all cognitive tests were not statistically significantly different, indicating no proportional bias. Finally, mean ± SD differences in the total raw scores between paper and tablet were all nearly zero (SDMT: −0.55 ± 1.4; RAVLT: −0.13 ± 1.6; and BVMT-R: −0.16 ± 2.48), indicating that any variance in scores was likely due to normal analytical imprecision (Figures S1, S2, and S3, published in the online version of this article at ijmsc.org).

Table 2.

Association between paper- and iCAMS-led administration of SDMT, RAVLT learning trials, and BVMT-R learning trials

Test Paper score iCAMS score Pearson r ICC (95% CI)
Oral SDMT (raw) 49.27 ± 12.09 48.73 ± 12.07 0.993a 0.996 (0.993–0.998)
RAVLT learning trials (raw) 47.41 ± 9.54 47.28 ± 9.7 0.987a 0.993 (0.99–0.996)
BVMT-R learning trials (raw) 23.01 ± 7.87 22.94 ± 7.43 0.949a 0.973 (0.96–0.982)

Note: Values are given as mean ± SD unless otherwise indicated. Abbreviations: BVMT-R, Brief Visuospatial Memory Test–Revised; ICC, intraclass correlation coefficient; RAVLT, Rey Auditory Verbal Learning Test; SDMT, Symbol Digit Modalities Test.

aP < .0001.

Administration Time

There was a significant difference in mean ± SD total administration time between the paper-based BICAMS (22.79 ± 7.01 minutes) and the iCAMS app (13.61 ± 3.048 minutes) (t99 = 16.51, P < .001, 95% CI = 8.08 to 10.28). Due to automatic scoring, administration of the iCAMS app saved approximately 10 minutes. Using the iCAMS app, administration and scoring of all three cognitive tests takes less than 15 minutes.

Comparison of RAVLT and CVLT-II Learning Trials

There was no significant difference in mean ± SD scores between the RAVLT learning trials (47.48 ± 12.87) and those of the CVLT-II (49.06 ± 12.88) (t98 = −1.39, P = .17, 95% CI = −3.85 to 0.68).

Discussion

The purpose of the present study was to address barriers presented by traditional paper-and-pencil neuropsychological assessments and screeners, as well as those inherent in self-administered computerized assessments, by creating a tablet-based BICAMS tool (iCAMS) that could allow a more efficient mode of administration for assessing cognitive impairment in individuals with MS. The results demonstrated concurrent validity and reliability between the original paper-based neuropsychological tests and the new tablet-based iCAMS, with iCAMS taking approximately 10 minutes (40%) less time than BICAMS, largely because of automatic scoring.

It is well established that cognitive impairment is a common and often disabling symptom of MS. Given that self-report is an unreliable way to screen or assess for cognitive symptoms, fast, efficient, and accessible objective measures are needed. Although original (ie, paper-based) versions of BICAMS and MACFIMS are brief, reliable, and valid for use in MS, administration and scoring of these assessments can be time-consuming and require specialized training. A growing number of studies have used automated, computerized cognitive tests, such as CogState,38 NeuroTrax,39 Cognitive Drug Research Battery,40 Automated Neuropsychological Assessment Metrics,41,42 NIH Toolbox,43 the Processing Speed Test,19 and the Cambridge Neuropsychological Test Automated Battery.44 Although these measures have important strengths (eg, efficient administration and automated scoring), many of the automated computerized tests have not been fully validated for use in MS. The Processing Speed Test, although valid in MS, does not assess verbal or visual learning. The present findings support iCAMS as a tool that capitalizes on the strengths of these existing measures while also expanding on these measures to reliably assess the multiple cognitive domains affected in MS.

Tablet- and app-based technologies offer potential solutions to many of the barriers to routine administration that the paper-based BICAMS presents. The iCAMS app eliminates the need for paper record forms (and thus storage) at a time when hospitals and clinics are transitioning away from paper-based records. The iCAMS app simplifies and standardizes test administration for nonneuropsychologists by providing automatic prompts and written instructions for the administrator to follow. The iCAMS app also saves clinic time by automatically calculating scores based on standardized normative data. Finally, it allows for assessment of learning (ie, memory acquisition) through the ability of patients to provide verbal responses to a technician, an advantage over self-administered computerized assessments.16 Importantly, iCAMS takes approximately the same amount of time to administer as other cognitive screening tools that are briefer than BICAMS (eg, MMSE) but offers improved sensitivity and specificity to MS-related cognitive impairments.

Seventy-five percent of medical residents use a tablet daily for clinical responsibilities.45 Tablets are being implemented in emergency departments,46 for clinic self-report questionnaires,47,48 and to screen for dementia in older adults.19,49,50 Tablet-based screening for patient-reported outcomes and cognitive assessment is also occurring in MS.51 Therefore, it is imperative that validated instruments be accessible in a technological world.

As with any study, there are several limitations. First, test-retest reliability of iCAMS was not assessed. Test-retest reliability has been well established for paper versions of the SDMT,13,30 CVLT-II,52,53 RAVLT,54 and BVMT-R.53 It was determined that, for this preliminary study, incorporating this aim would complicate the design and could introduce learning effects that might influence the primary aim's statistical results. Future studies will examine test-retest reliability of iCAMS. Second, we were unable to secure an agreement to use the CVLT-II in the iCAMS tablet app. Therefore, iCAMS is not identical to the established BICAMS tool. A German BICAMS validation study substituted the RAVLT for the CVLT-II, which provides precedent for this change.55 In addition, similar to other studies conducted in healthy controls and in people with MS or traumatic brain injury, we did not find a significant difference between the CVLT-II learning trials and those of the RAVLT in the present study.27,56,57 Future studies will need to assess the psychometric properties of the combined iCAMS assessment (SDMT, RAVLT learning trials, and BVMT-R learning trials) against a full neuropsychological assessment to confirm that it is as effective as BICAMS in screening for cognitive impairment in MS. Third, replication of these findings in a more diverse MS sample is needed to generalize the reliability of the iCAMS tool. Fourth, although this tablet app is a step toward eliminating the need for paper-based records, in its present form it still requires the paper SDMT stimulus. In a future version of iCAMS we hope to incorporate voice recognition to eliminate all need for paper forms, making the app fully self-contained; this would necessitate a replication study to ensure reliability. Finally, the present study focused exclusively on the administration and scoring of iCAMS relative to those of BICAMS. The study did not examine aspects of clinical application, such as how screening results may be delivered to patients. Based on the equivalence of the measures, we assume that they would be used similarly to paper-based measures; however, we believe that the more efficient nature of iCAMS and its potentially better scoring accuracy could make this screening approach more appealing.

In summary, the present findings suggest that the novel iCAMS app is comparable with the paper-based BICAMS, with no significant differences in results between the paper- and tablet-based measures. Furthermore, iCAMS is significantly more efficient, saving the administrator approximately 40% of administration and scoring time and eliminating the need for paper-based record forms. iCAMS is a promising tool that will facilitate the use of an established and recommended cognitive battery for MS in clinical settings.

PRACTICE POINTS

  • iCAMS is a tablet-based application used to assess cognition in MS. It is comparable with the paper-based Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS), with no significant differences in results between the paper- and tablet-based measures. Furthermore, iCAMS is significantly more efficient, saving the administrator approximately 40% of administration and scoring time and eliminating the need for paper-based record forms.

  • iCAMS takes approximately the same amount of time to administer as other neurologic cognitive screening tools (eg, the Mini-Mental State Examination) but offers improved sensitivity and specificity to MS-related cognitive impairments.

  • iCAMS simplifies and standardizes test administration of the BICAMS cognitive tests for nonneuropsychologists by providing automatic prompts and written instructions for the administrator to follow.

  • iCAMS allows for assessment of memory acquisition, an advantage over self-administered computerized assessments.

Acknowledgments

The authors thank the individuals who participated in this study. We also thank Katie Rutter, the study’s research assistant, whose efforts made this project possible.

Financial Disclosures

The authors declare no conflicts of interest.

Funding/Support

This study was developed under a grant from the Consortium of Multiple Sclerosis Centers (CMSC) and was also supported, in part, by the National Multiple Sclerosis Society (grant MB 0008).

Prior Presentation

This study was presented, in part, in poster form at the Annual Meeting of the CMSC, May 2015, Indianapolis, Indiana, and at the Sixth Cooperative Meeting of the CMSC and the Americas Committee for Treatment and Research in Multiple Sclerosis (ACTRIMS), May 2014, Dallas, Texas.

Test Materials

Adapted and reproduced by special permission of the publisher, Psychological Assessment Resources Inc, Lutz, Florida, from the Brief Visuospatial Memory Test–Revised by Ralph H.B. Benedict, PhD, © 1988, 1995, 1996, 1997 by PAR, Inc. Further reproduction is prohibited without permission of PAR, Inc. An additional license agreement was obtained from Western Psychological Services for tablet adaptation of the Symbol Digit Modalities Test in this study; they hold the copyright and further reproduction is prohibited without permission.

References

  1. Olazarán J, Cruz I, Benito-León J, Morales JM, Duque P, Rivera-Navarro J. Cognitive dysfunction in multiple sclerosis: methods and prevalence from the GEDMA Study. Eur Neurol. 2009;61:87–93. doi: 10.1159/000177940.
  2. Benedict RHB, Cookfair D, Gavett R, et al. Validity of the Minimal Assessment of Cognitive Function in Multiple Sclerosis (MACFIMS). J Int Neuropsychol Soc. 2006;12:549–558. doi: 10.1017/s1355617706060723.
  3. Rao SM, Leo GJ, Bernardin L, Unverzagt F. Cognitive dysfunction in multiple sclerosis, I: frequency, patterns, and prediction. Neurology. 1991;41:685–691. doi: 10.1212/wnl.41.5.685.
  4. Chiaravalloti ND, DeLuca J. Cognitive impairment in multiple sclerosis. Lancet Neurol. 2008;7:1139–1151. doi: 10.1016/S1474-4422(08)70259-X.
  5. DeLuca J, Barbieri-Berger S, Johnson SK. The nature of memory impairments in multiple sclerosis: acquisition versus retrieval. J Clin Exp Neuropsychol. 1994;16:183–189. doi: 10.1080/01688639408402629.
  6. DeLuca J, Gaudino EA, Diamond BJ, Christodoulou C, Engel RA. Acquisition and storage deficits in multiple sclerosis. J Clin Exp Neuropsychol. 1998;20:376–390. doi: 10.1076/jcen.20.3.376.819.
  7. Thornton AE, Raz N, Tucke KA. Memory in multiple sclerosis: contextual encoding deficits. J Int Neuropsychol Soc. 2002;8:395–409. doi: 10.1017/s1355617702813200.
  8. Ruet A, Deloire M, Hamel D, Ouallet J-C, Petry K, Brochet B. Cognitive impairment, health-related quality of life and vocational status at early stages of multiple sclerosis: a 7-year longitudinal study. J Neurol. 2013;260:776–784. doi: 10.1007/s00415-012-6705-1.
  9. Morrow SA, Drake A, Zivadinov R, Munschauer F, Weinstock-Guttman B, Benedict RHB. Predicting loss of employment over three years in multiple sclerosis: clinically meaningful cognitive decline. Clin Neuropsychol. 2010;24:1131–1145. doi: 10.1080/13854046.2010.511272.
  10. Rao SM, Leo GJ, Ellington L, Nauertz T, Bernardin L, Unverzagt F. Cognitive dysfunction in multiple sclerosis, II: impact on employment and social functioning. Neurology. 1991;41:692–696. doi: 10.1212/wnl.41.5.692.
  11. Patti F, Leone C, D'Amico E. Treatment options of cognitive impairment in multiple sclerosis. Neurol Sci. 2010;31(suppl 2):S265–S269. doi: 10.1007/s10072-010-0438-7.
  12. Amato MP, Langdon D, Montalban X, et al. Treatment of cognitive impairment in multiple sclerosis: position paper. J Neurol. 2013;260:1452–1468. doi: 10.1007/s00415-012-6678-0.
  13. Benedict RHB, Duquin JA, Jurgensen S, et al. Repeated assessment of neuropsychological deficits in multiple sclerosis using the Symbol Digit Modalities Test and the MS Neuropsychological Screening Questionnaire. Mult Scler. 2008;14:940–946. doi: 10.1177/1352458508090923.
  14. Julian L, Merluzzi NM, Mohr DC. The relationship among depression, subjective cognitive impairment, and neuropsychological performance in multiple sclerosis. Mult Scler. 2007;13:81–86. doi: 10.1177/1352458506070255.
  15. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state": a practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12:189–198. doi: 10.1016/0022-3956(75)90026-6.
  16. Lapshin H, O'Connor P, Lanctôt KL, Feinstein A. Computerized cognitive testing for patients with multiple sclerosis. Mult Scler Relat Disord. 2012;1:196–201. doi: 10.1016/j.msard.2012.05.001.
  17. Aupperle RL, Beatty WW, Shelton Fde N, Gontkovsky ST. Three screening batteries to detect cognitive impairment in multiple sclerosis. Mult Scler. 2002;8:382–389. doi: 10.1191/1352458502ms832oa.
  18. Bever CT, Grattan L, Panitch HS, Johnson KP. The Brief Repeatable Battery of Neuropsychological Tests for Multiple Sclerosis: a preliminary serial study. Mult Scler. 1995;1:165–169. doi: 10.1177/135245859500100306.
  19. Rao SM, Losinski G, Mourany L, et al. Processing speed test: validation of a self-administered, iPad®-based tool for screening cognitive dysfunction in a clinic setting. Mult Scler. 2017;23:1929–1937. doi: 10.1177/1352458516688955.
  20. Langdon DW, Amato MP, Boringa J, et al. Recommendations for a Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS). Mult Scler. 2012;18:891–898. doi: 10.1177/1352458511431076.
  21. Smith A. Symbol Digit Modalities Test (SDMT). Los Angeles, CA: Western Psychological Services; 1982.
  22. Van Schependom J, D'hooghe MB, Cleynhens K, et al. The Symbol Digit Modalities Test as sentinel test for cognitive impairment in multiple sclerosis. Eur J Neurol. 2014;21:1219–1225, e71–e72. doi: 10.1111/ene.12463.
  23. Delis DC, Kramer JH, Kaplan E, Ober BA. California Verbal Learning Test, Second Edition (CVLT-II). London, UK: Pearson; 2000.
  24. Benedict RHB. Brief Visuospatial Memory Test–Revised. Lutz, FL: Psychological Assessment Resources Inc; 1997.
  25. Dusankova JB, Kalincik T, Havrdova E, Benedict RHB. Cross cultural validation of the Minimal Assessment of Cognitive Function in Multiple Sclerosis (MACFIMS) and the Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS). Clin Neuropsychol. 2012;26:1186–1200. doi: 10.1080/13854046.2012.725101.
  26. IMPACT: International Mission for Prognosis and Analysis of Clinical Trials in TBI. Neuropsychological impairment. http://www.tbi-impact.org/cde/neuroimp.html. Accessed July 31, 2017.
  27. Crossen JR, Wiens AN. Comparison of the Auditory-Verbal Learning Test (AVLT) and California Verbal Learning Test (CVLT) in a sample of normal subjects. J Clin Exp Neuropsychol. 1994;16:190–194. doi: 10.1080/01688639408402630.
  28. Schmidt M. Rey Auditory Verbal Learning Test: A Handbook. Western Psychological Services; 1996.
  29. Benedict RHB, Smerbeck A, Parikh R, Rodgers J, Cadavid D, Erlanger D. Reliability and equivalence of alternate forms for the Symbol Digit Modalities Test: implications for multiple sclerosis clinical trials. Mult Scler. 2012;18:1320–1325. doi: 10.1177/1352458511435717.
  30. Brochet B, Deloire MSA, Bonnet M, et al. Should SDMT substitute for PASAT in MSFC? a 5-year longitudinal study. Mult Scler. 2008;14:1242–1249. doi: 10.1177/1352458508094398.
  31. Minden SL, Moes EJ, Orav J, Kaplan E, Reich P. Memory impairment in multiple sclerosis. J Clin Exp Neuropsychol. 1990;12:566–586. doi: 10.1080/01688639008401002.
  32. Yu HJ, Christodoulou C, Bhise V, et al. Multiple white matter tract abnormalities underlie cognitive impairment in RRMS. Neuroimage. 2012;59:3713–3722. doi: 10.1016/j.neuroimage.2011.10.053.
  33. Krupp LB, Christodoulou C, Melville P, et al. Multicenter randomized clinical trial of donepezil for memory impairment in multiple sclerosis. Neurology. 2011;76:1500–1507. doi: 10.1212/WNL.0b013e318218107a.
  34. Godoy JF, Perez M, Sanchez-Barrera MB, Muela JA, Mari-Beffa P, Puente A. Recency effect in multiple sclerosis. Appl Neuropsychol. 1996;3:93–96. doi: 10.1207/s15324826an0302_9.
  35. Walter SD, Eliasziw M, Donner A. Sample size and optimal designs for reliability studies. Stat Med. 1998;17:101–110. doi: 10.1002/(sici)1097-0258(19980115)17:1<101::aid-sim727>3.0.co;2-e.
  36. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15:155–163. doi: 10.1016/j.jcm.2016.02.012.
  37. Giavarina D. Understanding Bland Altman analysis. Biochem Med (Zagreb). 2015;25:141–151. doi: 10.11613/BM.2015.015.
  38. Falleti MG, Maruff P, Collie A, Darby DG. Practice effects associated with the repeated assessment of cognitive function using the CogState battery at 10-minute, one week and one month test-retest intervals. J Clin Exp Neuropsychol. 2006;28:1095–1112. doi: 10.1080/13803390500205718.
  39. Golan D, Wilken J, Doniger GM, et al. Validity of a multi-domain computerized cognitive assessment battery for patients with multiple sclerosis. Mult Scler Relat Disord. 2019;30:154–162. doi: 10.1016/j.msard.2019.01.051.
  40. Lavrencic LM, Richardson C, Harrison SL, et al. Is there a link between cognitive reserve and cognitive function in the oldest-old? J Gerontol A Biol Sci Med Sci. 2018;73:499–505. doi: 10.1093/gerona/glx140.
  41. Kane RL, Roebuck-Spencer T, Short P, Kabat M, Wilken J. Identifying and monitoring cognitive deficits in clinical populations using Automated Neuropsychological Assessment Metrics (ANAM) tests. Arch Clin Neuropsychol. 2007;22(suppl 1):S115–S126. doi: 10.1016/j.acn.2006.10.006.
  42. Settle JR, Robinson SA, Kane R, Maloni HW, Wallin MT. Remote cognitive assessments for patients with multiple sclerosis: a feasibility study. Mult Scler. 2015;21:1072–1079. doi: 10.1177/1352458514559296.
  43. Carlozzi NE, Goodnight S, Casaletto KB, et al. Validation of the NIH Toolbox in individuals with neurologic disorders. Arch Clin Neuropsychol. 2017;32:555–573. doi: 10.1093/arclin/acx020.
  44. Cotter J, Vithanage N, Colville S, et al. Investigating domain-specific cognitive impairment among patients with multiple sclerosis using touchscreen cognitive testing in routine clinical care. Front Neurol. 2018;9:331. doi: 10.3389/fneur.2018.00331.
  45. Patel BK, Chapman CG, Luo N, Woodruff JN, Arora VM. Impact of mobile tablet computers on internal medicine resident efficiency. Arch Intern Med. 2012;172:436–438. doi: 10.1001/archinternmed.2012.45.
  46. Horng S, Goss FR, Chen RS, Nathanson LA. Prospective pilot study of a tablet computer in an emergency department. Int J Med Inform. 2012;81:314–319. doi: 10.1016/j.ijmedinf.2011.12.007.
  47. Holzner B, Giesinger JM, Pinggera J, et al. The Computer-based Health Evaluation Software (CHES): a software for electronic patient-reported outcome monitoring. BMC Med Inform Decis Mak. 2012;12:126. doi: 10.1186/1472-6947-12-126.
  48. Dy CJ, Schmicker T, Tran Q, Chadwick B, Daluiski A. The use of a tablet computer to complete the DASH questionnaire. J Hand Surg Am. 2012;37:2589–2594. doi: 10.1016/j.jhsa.2012.09.010.
  49. Clionsky M. iPad screening for dementia holds great promise. J Alzheimers Dis. 2012;2(2):e112.
  50. Onoda K, Hamano T, Nabika Y, et al. Validation of a new mass screening tool for cognitive impairment: Cognitive Assessment for Dementia, iPad version. Clin Interv Aging. 2013;8:353–360. doi: 10.2147/CIA.S42342.
  51. Miller D, Mowry E, Planchon S, de Moore C, Bermel R. Association between Neuro-QoL scale scores and employment status in MS PATHS (Multiple Sclerosis Partners Advancing Technology and Health Solutions) patients. Qual Life Res. 2018;27:S54.
  52. Woods SP, Delis DC, Scott JC, Kramer JH, Holdnack JA. The California Verbal Learning Test–second edition: test-retest reliability, practice effects, and reliable change indices for the standard and alternate forms. Arch Clin Neuropsychol. 2006;21:413–420. doi: 10.1016/j.acn.2006.06.002.
  53. Benedict RHB. Effects of using same- versus alternate-form memory tests during short-interval repeated assessments in multiple sclerosis. J Int Neuropsychol Soc. 2005;11:727–736. doi: 10.1017/S1355617705050782.
  54. Delaney RC, Prevey ML, Cramer J, Mattson RH; VA Epilepsy Cooperative Study #264 Research Group. Test-retest comparability and control subject data for the Rey-Auditory Verbal Learning Test and Rey-Osterrieth/Taylor Complex Figures. Arch Clin Neuropsychol. 1992;7:523–528.
  55. Filser M, Schreiber H, Pöttgen J, Ullrich S, Lang M, Penner IK. The Brief International Cognitive Assessment in Multiple Sclerosis (BICAMS): results from the German validation study. J Neurol. 2018;265:2587–2593. doi: 10.1007/s00415-018-9034-1.
  56. Beier M, Hughes AJ, Williams MW, Gromisch ES. Brief and cost-effective tool for assessing verbal learning in multiple sclerosis: comparison of the Rey Auditory Verbal Learning Test (RAVLT) to the California Verbal Learning Test–II (CVLT-II). J Neurol Sci. 2019;400:104–109. doi: 10.1016/j.jns.2019.03.016.
  57. Stallings G, Boake C, Sherer M. Comparison of the California Verbal Learning Test and the Rey Auditory Verbal Learning Test in head-injured patients. J Clin Exp Neuropsychol. 1995;17:706–712. doi: 10.1080/01688639508405160.
