ABSTRACT
Background
Cognitive impairment is one of the most common and debilitating symptoms of relapsing–remitting multiple sclerosis (RRMS). Digital cognitive biomarkers require less time and fewer resources and are rapidly gaining popularity in clinical settings. We examined the longitudinal trajectory of the iPad‐based Processing Speed Test (PST) and predictors of PST scores.
Methods
We prospectively enrolled RRMS patients between 2017 and 2021 across six Australian MS centres. Longitudinal data were analysed with linear mixed-effects models and latent class mixed models. We then examined whether latent class membership predicted a confirmed decrease in correct PST responses.
Results
We recruited a total of 1093 participants, of whom 724 had complete baseline data, with a median follow-up duration of 2 years. At a population level, the PST trajectory was stable. A small practice effect was present up to the 4th visit. Age, baseline disability, T2 lesion volume, male sex and depression were associated with fewer correct PST responses, whilst years of education and full/part-time employment were associated with more correct PST responses.
We identified four latent class trajectories of PST. The worst latent class was typified by low baseline PST and lack of a practice effect. Being in the worst latent class was associated with a greater hazard of time to sustained 5% decrease in PST (HR 2.84, 95% CI 1.16–6.94, p = 0.02).
Conclusion
Worse baseline cognitive performance and lack of a practice effect predicted future cognitive decline in RRMS.
Keywords: cognition, digital biomarkers, latent class, multiple sclerosis, processing speed test
1. Introduction
Multiple sclerosis (MS) is the most prevalent chronic inflammatory demyelinating neurological disease, affecting 2.8 million people worldwide [1]. Whilst the physical manifestations of MS have been well-studied, cognitive impairment is a common, debilitating, and under-recognised aspect of the disease. It affects up to 75% of people with MS (pwMS) and is independently associated with underemployment, poorer quality of life, driving impairment and social isolation [2, 3, 4, 5, 6]. As a result, the National MS Society recommends baseline and annual screening for cognitive impairment in pwMS [7].
However, practical implementation of these recommendations is difficult. There is no standardised, easily administered screening tool appropriate for a routine clinical setting with limited appointment time and staff resources. The Symbol Digit Modalities Test (SDMT) is well-validated in MS and relatively quick to perform, and was therefore suggested by Kalb and colleagues as the test of choice [7]. However, its use incurs licensing fees and requires staff time for administration and scoring. As a result, it is rarely used in routine clinical practice.
The Processing Speed Test (PST), an iPad-based adaptation of the SDMT, has been introduced as an alternative. It has demonstrated excellent test–retest reliability and ecological and discriminative validity in cross-sectional studies [8, 9, 10, 11, 12]. However, in the absence of longitudinal data, we are unable to predict who is at risk of future cognitive decline. This has significant treatment implications, as early intervention may slow cognitive decline and increase cognitive reserve [13, 14]. Accordingly, we examined the trajectories of PST scores in a cohort of pwMS with mild-to-moderate disability and sought to identify predictors of PST scores.
2. Methods
2.1. Study Design
We recruited adult MS participants between December 2017 and March 2021 from six tertiary MS clinics in Australia. All participants were also enrolled in the MSBase registry, an international, observational cohort of pwMS [15]. Inclusion criteria were a confirmed diagnosis of relapsing–remitting MS (RRMS) or a diagnosis of clinically isolated syndrome with MRI lesions meeting the Paty A or Paty B criteria, age > 18 years, and an Expanded Disability Status Scale (EDSS) score of less than 4 [16]. There were 28 minor deviations from the inclusion criteria in which the EDSS was 4 or above, mostly due to delays in the baseline EDSS assessment caused by COVID-19 restrictions on in-person clinic attendance. These participants were allowed to continue in the study. For MSBase, all patients with MS who were attending a participating site and had provided informed consent were eligible. All assessments (EDSS, patient-reported outcomes (PROs), Multiple Sclerosis Performance Test (MSPT)) were performed during routine clinic visits (approximately six-monthly). Clinical data were extracted from the MSBase registry. This study was approved by the Melbourne Health Human Research Ethics Committee and the University of Tasmania Human Research Ethics Committee. All participants provided informed consent prior to any data collection.
2.2. Processing Speed Test
The PST is an iPad-based cognitive test administered as part of the MSPT. The iPad displays a symbol key containing nine symbols with corresponding numbers. After a practice test, participants are presented with rows of 15 symbols and instructed to select the corresponding numbers. Once a row is completed, a new row of symbols is presented. The total duration of the test is 2 min. The total number of correct responses in each test was used as the primary outcome for this assessment.
2.3. Patient Reported Outcomes (PROs)
We utilised the Patient Health Questionnaire (PHQ-9) as a measure of depression. The PHQ-9 is a 9-item measure of depressive symptoms over the past 2 weeks, with each item scored from 0 to 3 [17].
Higher PHQ-9 scores indicate greater depressive symptoms, and the PHQ-9 has been validated in pwMS [18]. PROs were administered electronically using an iPad as previously described [19].
2.4. Statistical Analysis
Baseline characteristics of the population were reported as median (interquartile range) or mean (standard deviation) for continuous variables and number (percentage) for discrete variables.
We used a spaghetti plot with a fitted line to visualise the longitudinal trajectories of the PST. We examined the mean change between visits to identify the magnitude and persistence of the practice effect. We then used multivariable linear mixed-effects models [20] with PST as the dependent variable and baseline covariates of age, disease duration (years), sex, EDSS, T2 lesion volume, T2 lesion number, 2-year relapse count, education (years), employment (unemployed, part-time, full-time), PHQ-9 score and follow-up duration. In a further model, we added interaction terms between each covariate and follow-up duration. A random intercept at the individual level was used to account for heterogeneity across patients.
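As an illustration only, a model of this form could be specified in R with the lmerTest package [20]; the data frame and variable names below (pst_long, pst_correct, id, and so on) are hypothetical and are not taken from the study code.

```r
library(lmerTest)  # linear mixed-effects models with p-values [20]

# Hypothetical long-format data: one row per visit, baseline covariates
# repeated within each participant ('id'), follow-up time in years.
fit_lmm <- lmer(
  pst_correct ~ age + disease_duration + sex + edss + t2_volume + t2_count +
    relapse_2yr + education_years + employment + phq9 + followup_years +
    (1 | id),          # random intercept per participant
  data = pst_long
)
summary(fit_lmm)       # fixed-effect estimates; confint(fit_lmm) gives 95% CIs
```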
We then examined the proportion of participants with a 5% worsening from the baseline assessment sustained over 150 days. This cut-off was chosen because, in the original studies assessing change in SDMT for pwMS with relapses versus pwMS without relapses, the within-group percentage change was as low as 2.2% [21]. We postulated that in our cohort of pwMS with mild to moderate disability, primarily treated with high-efficacy disease-modifying therapy (DMT), a lower cut-off could increase sensitivity for the detection of clinically significant change [22]. We also conducted a sensitivity analysis assessing a 4-point decrease in PST from the baseline assessment [21, 23]. This cut-off has previously been validated as clinically meaningful for the SDMT [24].
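The exact operationalisation of "sustained" is not detailed above; the sketch below shows one plausible reading, in which a decrease of at least 5% from baseline must be confirmed at a visit at least 150 days later, with all intervening scores also below the threshold. All names (has_sustained_drop, pst_long, day) are illustrative assumptions.

```r
library(dplyr)

# Illustrative helper (one possible operationalisation, not the study code):
# TRUE if the score falls >= 5% below baseline and stays below that threshold
# up to and including a confirming visit at least 150 days later.
has_sustained_drop <- function(score, day, pct = 0.05, window = 150) {
  threshold <- score[1] * (1 - pct)          # baseline is the first visit
  below <- score <= threshold
  for (i in which(below)) {
    confirm <- which(day >= day[i] + window) # visits >= 150 days after visit i
    if (length(confirm) > 0 && all(below[i:min(confirm)])) return(TRUE)
  }
  FALSE
}

events <- pst_long %>%
  arrange(id, day) %>%
  group_by(id) %>%
  summarise(sustained_5pct = has_sustained_drop(pst_correct, day))
```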
A latent class mixed model (LCMM) was used to identify trajectories of the digital cognitive biomarker [25]. Initially, we explored the relationship between the underlying latent process and the outcome. We fitted single-class LCMMs with different link functions, including linear, beta cumulative distribution function, or quadratic I-splines with a varying number of knots (3–5), placed either equidistantly or at quantiles. We selected the most appropriate link function by comparing the Bayesian Information Criterion (BIC) [26]. We next fitted models with varying numbers of latent classes (1–4) using the selected link function. To avoid selecting overfitted models based on BIC alone, we also considered the clinical plausibility of the trajectories and only retained models in which each class contained at least 5% of the study cohort.
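A sketch of this workflow using the lcmm package [25] is given below; the variable names and the random-effects structure (random intercept only) are assumptions for illustration rather than the study's exact specification.

```r
library(lcmm)

# Step 1: single-class models with candidate link functions, compared by BIC [26].
# 'id' is assumed to be a numeric participant identifier, 'years' time since baseline.
m_lin <- lcmm(pst_correct ~ years, random = ~ 1, subject = "id",
              ng = 1, link = "linear", data = pst_long)
m_bet <- lcmm(pst_correct ~ years, random = ~ 1, subject = "id",
              ng = 1, link = "beta", data = pst_long)
m_spl <- lcmm(pst_correct ~ years, random = ~ 1, subject = "id",
              ng = 1, link = "3-quant-splines", data = pst_long)  # I-splines, knots at quantiles
summarytable(m_lin, m_bet, m_spl, which = c("loglik", "npm", "BIC"))

# Step 2: refit with 2-4 latent classes using the selected link function;
# 'mixture' lets the time trend differ between classes, and the single-class
# fit supplies initial values.
m4 <- lcmm(pst_correct ~ years, mixture = ~ years, random = ~ 1,
           subject = "id", ng = 4, link = "3-quant-splines",
           B = m_spl, data = pst_long)
postprob(m4)   # class sizes and mean posterior classification probabilities
```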
We examined the posterior probability of classification into each trajectory. A posterior probability of > 70% was considered acceptable, and the proportions of individuals classified into each trajectory are reported [26].
A p < 0.05 was considered statistically significant. All statistical analyses were performed using R 4.1.2.
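The time-to-event comparisons reported in Sections 3.3–3.6 could be specified along the following lines; the use of the survival package, and all variable names, are assumptions made for illustration, as the manuscript does not state the implementation used.

```r
library(survival)

# Hypothetical one-row-per-participant data frame 'pst_events' with:
#   time_days    - time to sustained 5% PST decrease or censoring
#   event        - 1 = sustained decrease observed, 0 = censored
#   latent_class - factor with class 1 as the reference level
fit_cox <- coxph(Surv(time_days, event) ~ latent_class, data = pst_events)
summary(fit_cox)   # hazard ratios and 95% CIs relative to class 1 (cf. Table 4)

# Cumulative hazard curves by latent class (cf. Figure 4)
km <- survfit(Surv(time_days, event) ~ latent_class, data = pst_events)
plot(km, fun = "cumhaz", col = 1:4,
     xlab = "Days from baseline", ylab = "Cumulative hazard")
```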
3. Results
3.1. Participant Characteristics
Participant characteristics are presented in Table 1. In total, 1093 participants provided consent. Of these, 918 had baseline MSPT data, and 724 had complete questionnaire and MRI data for inclusion in the mixed models. The median number of study visits was three, with a median follow-up duration of 728 days (IQR 264–1099; the full distribution of visit numbers is shown in Table A1).
TABLE 1.
Participant characteristics at baseline (n = 724 unless otherwise specified).
| Characteristic | Median (IQR) or n (%) |
|---|---|
| Age (years) | 40.3 (33.4–47.9) |
| Disease duration (years) | 7.5 (3.7–11.2) |
| Female (n, %) | 549 (75.8) |
| EDSS | 1.5 (1–2) |
| T2 lesion number | 38 (19–74) |
| T2 lesion volume (mm3) | 3074 (1187–7086) |
| 2‐year relapse count b | 0 (0–1) |
| Years of education completed | 14 (12–16) |
| PST at baseline a | 55 (48–61) |
| PHQ‐9 score (range 0–27) | 5 (2–9) |
Abbreviations: EDSS, Expanded Disability Status Scale; IQR, interquartile range; PHQ, Patient Health Questionnaire; PST, Processing Speed Test.
a Number of correct responses.
b Prior to baseline.
3.2. Longitudinal Analysis
Figure 1 shows the spaghetti plot of PST scores over time with a line of best fit. At a population level, PST scores stayed relatively stable over time. A practice effect was evident up to the 4th visit (Table 2 and Figure 1).
FIGURE 1.

Spaghetti plot of PST trajectory over time with line of best fit.
TABLE 2.
Mean change in PST up to the 6th visit.
| Change between visits | Mean PST score (at preceding visit) | Mean PST change between visits | Total participant number |
|---|---|---|---|
| Visit 1 to 2 | 54.63 | 0.75 | 597 |
| Visit 2 to 3 | 55.87 | 0.83 | 430 |
| Visit 3 to 4 | 56.59 | 1.06 | 277 |
| Visit 4 to 5 | 58.47 | −0.35 | 154 |
| Visit 5 to 6 | 58.69 | −0.50 | 70 |
In the final linear mixed-effects model (Table 3), age, baseline EDSS, T2 lesion volume, male sex, and PHQ-9 score were negatively associated with PST, whilst years of education and full/part-time employment status were positively associated with PST. PST scores increased by 0.58 points per year of follow-up, which we interpreted as a practice effect. None of the interaction terms between covariates and time were significant (data not shown).
TABLE 3.
Predictors of PST scores in linear mixed‐effects model (n = 724).
| Independent variables | Beta coefficient | 95% CI | p |
|---|---|---|---|
| Age | −0.39 | −0.46 to −0.33 | < 0.01 |
| Disease duration, years | −0.01 | −0.13 to 0.10 | 0.84 |
| EDSS | −0.66 | −1.29 to −0.03 | < 0.05 |
| T2 lesion volume (per 1000 mm3) | −0.25 | −0.36 to −0.14 | < 0.01 |
| T2 lesion number | 0.00 | −0.01 to 0.02 | 0.56 |
| 2‐year relapse count b | −0.55 | −1.39 to 0.28 | 0.20 |
| Male sex | −3.54 | −4.94 to −2.14 | < 0.01 |
| Education years | 0.65 | 0.44 to 0.86 | < 0.01 |
| Part‐time employment a | 1.88 | 0.19 to 3.58 | 0.03 |
| Full‐time employment a | 1.84 | 0.28 to 3.40 | 0.02 |
| PHQ‐9 score | −0.13 | −0.25 to −0.02 | 0.03 |
| Follow‐up duration, years | 0.58 | 0.40 to 0.77 | < 0.01 |
Note: All covariates were baseline measures. Results with p < 0.05 are in boldface.
Abbreviations: CI, confidence interval; EDSS, Expanded Disability Status Scale; PHQ, Patient Health Questionnaire.
a Compared to unemployed/retired.
b Prior to baseline.
3.3. Sustained 4‐Point Drop in PST
We then analysed those who had a confirmed 4-point drop in PST. Of the 430 participants with sufficient follow-up (a minimum of three visits), 40 had a confirmed 4-point drop. Figure 2 shows the cumulative hazard of time to a confirmed 4-point drop in PST.
FIGURE 2.

Cumulative hazard of time to sustained 4‐point PST decrease.
3.4. Latent Class Analysis
The link function that provided the best fit to the longitudinal PST scores was a nonlinear link function of three quadratic I-splines with knots placed at quantiles. Of the 430 people with sufficient follow-up, 77 (17.9%) were in class 1, 191 (44.4%) in class 2, 119 (27.7%) in class 3 and 43 (10.0%) in class 4. Figure 3 shows the scatterplot of the PST scores with overlying lines of mean PST scores per latent class. Class 1 started with the highest baseline PST score, and a slight improvement was noted over time. Classes 2 and 3 started with intermediate PST scores and largely maintained them over time; however, after 3 years a gradual decrease was noted in class 3. Finally, class 4 started with the lowest baseline PST scores, which remained relatively stable. Notably, a practice effect was lacking in this group. It should be noted that, due to the small numbers in class 4, the apparent increase towards the end of the follow-up period was driven by outliers.
FIGURE 3.

PST trajectories defined by latent class analysis.
3.5. Latent Classes Predict Sustained Change in PST
We hypothesised that a sustained 4-point change would be more difficult to achieve if the baseline PST score was low. For example, in class 4 the mean PST score was 40, whereas in class 1 the mean PST score was 65, so a 4-point drop represents a nearly 10% change for class 4 (4/40) but only an approximately 6% change for class 1 (4/65). Therefore, we performed the primary analysis with a sustained 5% change in PST scores from baseline instead of a 4-point change.
With this cut-off, 54 of the 430 participants with sufficient follow-up had a sustained 5% decrease in their PST. We found that class 4 had an increased hazard of time to sustained 5% PST decrease (HR 2.84, 95% CI 1.16 to 6.94, p = 0.02) (Table 4 and Figure 4). No other class was significantly associated with this outcome.
TABLE 4.
Hazard ratio and 95% CI for latent class trajectories and time to sustained 5% change in PST, compared to class 1 (reference).
| Latent class group | Hazard ratio | 95% CI | p |
|---|---|---|---|
| Class 2 | 0.87 | 0.39 to 2.00 | 0.75 |
| Class 3 | 1.25 | 0.54 to 2.93 | 0.60 |
| Class 4 | 2.84 | 1.16 to 6.94 | 0.02 |
Note: Results with p < 0.05 are in boldface.
FIGURE 4.

Cumulative hazard of time to sustained 5% PST decrease. Kaplan–Meier curves were applied to show the cumulative hazard of time to sustained 5% PST decrease in latent classes 1–4.
3.6. Sensitivity Analyses
3.6.1. 4‐Point Sustained PST Decrease
We then analysed whether latent class membership predicted a sustained 4‐point decrease in PST. There was no significant relationship between latent class membership and 4‐point confirmed PST change (Table 5 and Figure 5).
TABLE 5.
Hazard ratio and 95% CI for latent class groupings and time to sustained 4-point decrease in PST, compared to class 1 (reference).
| Latent class group | Hazard ratio | 95% CI | p |
|---|---|---|---|
| Class 2 | 0.67 | 0.28 to 1.59 | 0.36 |
| Class 3 | 0.91 | 0.37 to 2.22 | 0.83 |
| Class 4 | 1.32 | 0.46 to 3.80 | 0.61 |
FIGURE 5.

Cumulative hazard of time to sustained 4‐point PST decrease, stratified by latent classes. Kaplan–Meier curves were applied to show the cumulative hazard of time to sustained 4‐point PST decrease in latent classes 1–4.
4. Discussion
In this large observational study investigating a digital cognitive biomarker in pwMS with mild to moderate disability, we observed that processing speed remained stable over 4 years' follow-up at a population level. We found that age, baseline EDSS, T2 lesion volume, male sex and PHQ-9 score were associated with lower PST scores, whilst years of education and full/part-time employment were associated with higher PST scores. A small practice effect was noted up to the 4th visit despite the long intervals between tests. We identified four trajectories of PST over follow-up. The worst performing group was notable for having the lowest baseline PST scores and a lack of practice effect, and had a greater hazard of time to sustained 5% decrease in PST. No other trajectory was associated with risk of a sustained 5% decrease in PST, and no trajectory significantly predicted risk of a sustained 4-point PST decrease. Our study illustrates that pwMS who have low baseline cognitive performance and no practice effect should be considered at greater risk of future cognitive decline, and early intervention should be considered to maximise cognitive reserve.
In this cohort of pwMS with mild to moderate disability, the worst latent class (class 4) predicted a sustained 5% PST change. This is a novel finding. Given the stability of latent classes over time, this signifies that baseline cognitive performance is an important predictor of subsequent cognitive decline. This class was also notable for a lack of practice effect. A practice effect may reflect cognitive reserve that reduces the risk of subsequent cognitive decline, although this is difficult to delineate, as initial improvement from a practice effect self-selects for a decreased likelihood of a sustained decrease in PST. Overall, this represents a promising method for selecting pwMS at greater risk of deterioration for future progressive MS trials.
Our findings also show that for pwMS with minimal disability, a percentage-based measure is more sensitive to change than a uniform 4-point measure. In a prior longitudinal trajectory study on the digital MSReactor platform, a percentage-based cut-off was also used [22]. In the original studies assessing change in SDMT for pwMS with relapses versus controls, the within-group percentage change was as low as 2.2% [21]. In those with relapses and cognitive impairment, there was a mean within-group change of 6.2%. Furthermore, a 6% change in SDMT has previously been shown to predict loss of employment over 3 years [23]. It should also be noted that most patients in the aforementioned studies were on low-efficacy DMT and had more severe disability. Our findings suggest that for pwMS with milder disability and access to high-efficacy DMT, a lower cut-off may increase sensitivity for the detection of significant change. Future longitudinal studies are needed to establish definitively that a 5% change is clinically meaningful.
At a population level, digital cognitive performance remained stable over time. Even within latent classes, deterioration was not seen until after the 4th visit, and even then mean scores did not drop below baseline PST values. This is similar to a recent study in a real-world cohort, which showed improvement in the PST over time [27]. Prior studies with the SDMT have shown conflicting results: some report deterioration over time [28], whilst a recent meta-analysis and a study over 11 years have reported relative stability over time, similar to our findings [29, 30]. It should be noted that, given the lower level of disability in our cohort, significant cognitive decline would not be expected at a population level.
Our findings that age, baseline disability (as assessed by EDSS), male sex, education, employment and PHQ-9 score predict PST performance are similar to those of prior studies [27, 31, 32, 33, 34, 35]. It should be noted that the association with employment could be bidirectional: whilst it is intuitive that declining cognition may lead to reduced employment, maintaining mental stimulation at work may protect against cognitive decline [36].
Our findings help to identify pwMS who are at greater risk of future cognitive deterioration. Early recognition and intervention should be considered in these patients. Confounding factors such as sleep disorders, pain, fatigue, anxiety, depression and polypharmacy should be identified and addressed [37]. Whilst evidence is still lacking as to the benefit of one specific DMT compared with another, there is emerging evidence that early high-efficacy DMT use may slow cognitive decline [38]. Finally, exercise and cognitive rehabilitation should be encouraged as ways to slow cognitive decline [37].
Limitations of our study include the exclusion of progressive phenotypes and pwMS with severe disability, which limits the generalisability of our findings. Due to the COVID-19 pandemic, in-person clinic visits were significantly restricted, leading to longer than usual intervals between visits [39, 40, 41]. The number of follow-up visits was relatively low, which affects the posterior probability of latent class trajectory membership; more frequent follow-up would have allowed for better posterior probabilities. Furthermore, a practice effect was evident up to the 4th visit; given the relative infrequency of the iPad testing, this meant that practice effects persisted into the 2nd year of the study. A higher frequency of testing, particularly in the initial stages of the study, may alleviate this issue [42, 43, 44]. The Neuro-QOL, a patient-reported quality of life measure included in some MSPT versions, was not used in this study. Future studies of the MSPT may consider comparing the cognitive domains of the Neuro-QOL against the PST.
5. Conclusion
In this cohort of mild to moderate RRMS, PST scores remained stable over time at a population level. Latent class modelling identified a trajectory with low baseline PST and a lack of practice effect that predicted sustained 5% PST decrease. Our study supports digital cognitive biomarkers and latent class modelling as screening tools to identify those who may benefit from early intervention for maximising cognitive function. Future studies are needed to explore whether a 5% PST decrease predicts clinically meaningful cognitive decline.
Author Contributions
Y.F., H.B., A.v.d.W., D.M., M.G., C.Z., K.B., P.D., J.v.B., R.H., D.D., To.K., Tr.K., B.T., M.B. and J.L.S. conceived the study and methodology. J.L.S., M.B., B.T., To.K., Tr.K., D.D., H.B. and A.v.d.W. contributed to data preparation. Y.F., M.G. and C.Z. performed statistical analysis and data extraction. Y.F. wrote the original draft of the manuscript. M.B. and C.W. contributed to the imaging data. M.G., D.M., C.Z., K.B., H.B. and A.v.d.W. supervised the project. S.S.Y. provided domain-related feedback. H.B. and A.v.d.W. were considered co-senior authors and contributed equally. All authors provided critical feedback and intellectual input throughout the study, revised the initial draft, and contributed to the final version of the submitted manuscript.
Conflicts of Interest
Yi Chao Foong received travel compensation and speaker's honoraria from Biogen. He also receives research funding support from the National Health and Medical Research Council, Multiple Sclerosis Research Australia and the Australian and New Zealand Association of Neurologists; Melissa Gresle is currently working on observational studies funded by Biogen and Roche; Daniel Merlo has received honoraria from Novartis; Katherine Buzzard has received honoraria for presentations and/or educational support from Biogen, Sanofi Genzyme, Merck, Roche, Alexion and Teva. She serves on medical advisory boards for Merck and Biogen; Jeannette Lechner-Scott received travel compensation from Novartis, Biogen, Roche and Merck. Her institution receives honoraria for talks and advisory board commitments as well as research grants from Biogen, Merck, Roche, TEVA and Novartis; Michael Barnett served on scientific advisory boards for Biogen, Novartis and Genzyme and has received conference travel support from Biogen and Novartis. He serves on steering committees for trials conducted by Novartis. His institution has received research support from Biogen, Merck and Novartis; Chenyu Wang has no competing interests; Trevor Kilpatrick receives support from Novartis in the form of consultancy fees, honoraria for giving lectures and funding for a pre-clinical Investigator-Initiated Study (IIS); David Darby was a founder and shareholder in Cogstate Ltd., but has not been involved in this company since 2011. He is a consultant to UBrain, Brazil. His company CereScape Ltd. receives a stipend to maintain the msreactor.com website; Tomas Kalincik served on scientific advisory boards for the MS International Federation and World Health Organisation, BMS, Roche, Janssen, Sanofi Genzyme, Novartis, Merck and Biogen, served on the steering committee for the Brain Atrophy Initiative by Sanofi Genzyme, received conference travel support and/or speaker honoraria from WebMD Global, Eisai, Novartis, Biogen, Roche, Sanofi-Genzyme, Teva, BioCSL and Merck, and received research or educational event support from Biogen, Novartis, Genzyme, Roche, Celgene and Merck; Bruce Taylor received funding for travel and speaker honoraria from Bayer Schering Pharma, CSL Australia, Biogen and Novartis, and has served on advisory boards for Biogen, Novartis, Roche and CSL Australia; Robert Hyde was an employee of Biogen and holds stock/stock options in Biogen; Johan van Beek is an employee of Biogen; Pamela Dobay is a full-time employee of Biogen and owns Biogen stock; Steve Simpson-Yap has no competing interests; Anneke van der Walt served on advisory boards for Novartis, Biogen, Merck, Roche and NervGen. She received unrestricted research grants from Novartis, Biogen, Merck and Roche. She is currently a co-principal investigator on a co-sponsored observational study with Roche, evaluating a Roche-developed smartphone app, Floodlight-MS. She has received speaker's honoraria and travel support from Novartis, Roche, Biogen and Merck. She serves as the Chief Operating Officer of the MSBase Foundation (not for profit). Her primary research support is from the National Health and Medical Research Council of Australia and MS Research Australia; Helmut Butzkueven's institution has received compensation for advisory boards or lecture fees from Novartis, Biogen, Merck, UCB Pharma and Roche.
His institutions receive research funding from Novartis, Biogen, Merck, Roche, The National Health and Medical Research Council of Australia, The Medical Research Future Fund (Australia), Monash Partners, the Trish MS Foundation, The Pennycook Foundation, and MS Australia. He receives personal compensation as the Managing Director of the MSBase Foundation and from the Oxford Health Policy Forum Brain Health Initiative.
Acknowledgements
This work was supported by an investigator‐initiated study grant from Biogen. YF would like to acknowledge funding support from National Health and Medical Research Council (NHMRC), MS Australia, AVANT Foundation and the Australia and New Zealand Association of Neurologists (ANZAN). Open access publishing facilitated by Monash University, as part of the Wiley ‐ Monash University agreement via the Council of Australian University Librarians.
Appendix A.
TABLE A1.
Total number of visits by participant count and inter‐visit intervals.
| Total number of visits | Participant count | Median days from prior visit |
|---|---|---|
| 1 | 127 | N/A |
| 2 | 167 | 218 |
| 3 | 153 | 236 |
| 4 | 123 | 350 |
| 5 | 84 | 330 |
| 6 | 50 | 207 |
| 7 | 14 | 231 |
| 8 | 6 | N/A |
Funding: This work was supported by Australia and New Zealand Association of Neurologists (ANZAN), Biogen, MS Australia, National Health and Medical Research Council (NHMRC), AVANT Foundation.
Helmut Butzkueven and Anneke van der Walt contributed equally as senior authors.
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
- 1. Walton C., King R., Rechtman L., et al., “Rising Prevalence of Multiple Sclerosis Worldwide: Insights From the Atlas of MS,” Multiple Sclerosis Journal 26 (2020): 1816–1821. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2. Benedict R. H., Amato M. P., DeLuca J., and Geurts J. J., “Cognitive Impairment in Multiple Sclerosis: Clinical Management, MRI, and Therapeutic Avenues,” Lancet Neurology 19 (2020): 860–871. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3. Krasniuk S., Classen S., Morrow S. A., et al., “Clinical Predictors of Driving Simulator Performance in Drivers With Multiple Sclerosis,” Multiple Sclerosis Journal 27 (2021): 2085–2092, 10.1177/1352458521992507. [DOI] [PubMed] [Google Scholar]
- 4. Weber E., Goverover Y., and DeLuca J., “Beyond Cognitive Dysfunction: Relevance of Ecological Validity of Neuropsychological Tests in Multiple Sclerosis,” Multiple Sclerosis Journal 25 (2019): 1412–1419, 10.1177/1352458519860318. [DOI] [PubMed] [Google Scholar]
- 5. Glanz B. I., Healy B. C., Rintell D. J., et al., “The Association Between Cognitive Impairment and Quality of Life in Patients With Early Multiple Sclerosis,” Journal of the Neurological Sciences 290 (2010): 75–79, 10.1016/j.jns.2009.11.004. [DOI] [PubMed] [Google Scholar]
- 6. Goverover Y., Strober L., Chiaravalloti N., and DeLuca J., “Factors That Moderate Activity Limitation and Participation Restriction in People With Multiple Sclerosis,” American Journal of Occupational Therapy 69 (2015): 1–9. [DOI] [PubMed] [Google Scholar]
- 7. Kalb R., Beier M., Benedict R. H., et al., “Recommendations for Cognitive Screening and Management in Multiple Sclerosis Care,” Multiple Sclerosis Journal 24 (2018): 1665–1680, 10.1177/1352458518803785. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8. Foong Y. C., Merlo D., Gresle M., et al., “Longitudinal Trajectories of Digital Upper Limb Biomarkers for Multiple Sclerosis,” European Journal of Neurology 32 (2025): e70000. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9. Yam C., Merlo D., Stankovich J., et al., “The MSReactor Computerized Cognitive Battery Correlates With the Processing Speed Test in Relapsing‐Remitting Multiple Sclerosis,” Multiple Sclerosis and Related Disorders 43 (2020): 102212, 10.1016/j.msard.2020.102212. [DOI] [PubMed] [Google Scholar]
- 10. Foong Y. C., Merlo D., Gresle M., et al., “Patient‐Determined Disease Steps Is Not Interchangeable With the Expanded Disease Status Scale in Mild to Moderate Multiple Sclerosis,” European Journal of Neurology 31, no. 1 (2023): e16046. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11. Rao S. M., Losinski G., Mourany L., et al., “Processing Speed Test: Validation of a Self‐Administered, iPad®‐Based Tool for Screening Cognitive Dysfunction in a Clinic Setting,” Multiple Sclerosis Journal 23 (2017): 1929–1937. [DOI] [PubMed] [Google Scholar]
- 12. Macaron G., Baldassari L. E., Nakamura K., et al., “Cognitive Processing Speed in Multiple Sclerosis Clinical Practice: Association With Patient‐Reported Outcomes, Employment and Magnetic Resonance Imaging Metrics,” European Journal of Neurology 27 (2020): 1238–1249, 10.1111/ene.14239. [DOI] [PubMed] [Google Scholar]
- 13. Grant J. G., Rapport L. J., Darling R., et al., “Cognitive Enrichment and Education Quality Moderate Cognitive Dysfunction in Black and White Adults With Multiple Sclerosis,” Multiple Sclerosis and Related Disorders 78 (2023): 104916, 10.1016/j.msard.2023.104916. [DOI] [PubMed] [Google Scholar]
- 14. Sumowski J. F., Rocca M. A., Leavitt V. M., et al., “Brain Reserve and Cognitive Reserve in Multiple Sclerosis: What you've Got and How You Use It,” Neurology 80 (2013): 2186–2193. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. Butzkueven H., Chapman J., Cristiano E., et al., “MSBase: An International, Online Registry and Platform for Collaborative Outcomes Research in Multiple Sclerosis,” Multiple Sclerosis Journal 12 (2006): 769–774. [DOI] [PubMed] [Google Scholar]
- 16. Barkhof F., Filippi M., Miller D. H., et al., “Comparison of MRI Criteria at First Presentation to Predict Conversion to Clinically Definite Multiple Sclerosis,” Brain: A Journal of Neurology 120 (1997): 2059–2069. [DOI] [PubMed] [Google Scholar]
- 17. Kroenke K., Spitzer R. L., and Williams J. B., “The PHQ‐9: Validity of a Brief Depression Severity Measure,” Journal of General Internal Medicine 16 (2001): 606–613. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18. Patrick S. and Connick P., “Psychometric Properties of the PHQ‐9 Depression Scale in People With Multiple Sclerosis: A Systematic Review,” PLoS One 14 (2019): e0197943. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19. Merlo D., Kalincik T., Zhu C., et al., “Subjective Versus Objective Performance in People With Multiple Sclerosis Using the MSReactor Computerised Cognitive Tests,” Multiple Sclerosis and Related Disorders 58 (2022): 103393. [DOI] [PubMed] [Google Scholar]
- 20. Kuznetsova A., Brockhoff P. B., and Christensen R. H., “lmerTest Package: Tests in Linear Mixed Effects Models,” Journal of Statistical Software 82 (2017): 1–26. [Google Scholar]
- 21. Morrow S., Jurgensen S., Forrestal F., Munchauer F. E., and Benedict R. H., “Effects of Acute Relapses on Neuropsychological Status in Multiple Sclerosis Patients,” Journal of Neurology 258 (2011): 1603–1608. [DOI] [PubMed] [Google Scholar]
- 22. Merlo D., Stankovich J., Bai C., et al., “Association Between Cognitive Trajectories and Disability Progression in Patients With Relapsing‐Remitting Multiple Sclerosis,” Neurology 97 (2021): e2020, 10.1212/WNL.0000000000012850. [DOI] [PubMed] [Google Scholar]
- 23. Morrow S. A., Drake A., Zivadinov R., et al., “Predicting Loss of Employment Over Three Years in Multiple Sclerosis: Clinically Meaningful Cognitive Decline,” Clinical Neuropsychologist 24 (2010): 1131–1145. [DOI] [PubMed] [Google Scholar]
- 24. Benedict R. H. B., DeLuca J., Phillips G., et al., “Validity of the Symbol Digit Modalities Test as a Cognition Performance Outcome Measure for Multiple Sclerosis,” Multiple Sclerosis Journal 23 (2017): 721–733, 10.1177/1352458517690821. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25. Proust‐Lima C., Philipps V., and Liquet B., “Estimation of Extended Mixed Models Using Latent Classes and Latent Processes: The R Package Lcmm,” Journal of Statistical Software 78 (2017): 1–56, 10.18637/jss.v078.i02. [DOI] [Google Scholar]
- 26. Lennon H., Kelly S., Sperrin M., et al., “Framework to Construct and Interpret Latent Class Trajectory Modelling,” BMJ Open 8 (2018): e020683, 10.1136/bmjopen-2017-020683. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27. Aboseif A., Amin M., Bena J., et al., “Association Between Disease‐Modifying Therapy and Information Processing Speed in Multiple Sclerosis,” International Journal of MS Care 26 (2024): 91–97, 10.7224/1537-2073.2023-010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28. Glanz B. I., Healy B. C., Hviid L. E., Chitnis T., and Weiner H. L., “Cognitive Deterioration in Patients With Early Multiple Sclerosis: A 5‐Year Study,” Journal of Neurology, Neurosurgery & Psychiatry 83 (2012): 38–43. [DOI] [PubMed] [Google Scholar]
- 29. Ezegbe C., Zarghami A., van der Mei I., et al., “Instruments Measuring Change in Cognitive Function in Multiple Sclerosis: A Systematic Review,” Brain and Behavior: A Cognitive Neuroscience Perspective 13 (2023): e3009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30. Longinetti E., Englund S., Burman J., et al., “Trajectories of Cognitive Processing Speed and Physical Disability Over 11 Years Following Initiation of a First Multiple Sclerosis Disease‐Modulating Therapy,” Journal of Neurology, Neurosurgery & Psychiatry 95, no. 2 (2023): 134–141. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31. Jacobsen C., Zivadinov R., Myhr K. M., et al., “Brain Atrophy and Clinical Characteristics Predicting SDMT Performance in Multiple Sclerosis: A 10‐Year Follow‐Up Study,” Multiple Sclerosis Journal—Experimental, Translational and Clinical 7 (2021): 2055217321992394, 10.1177/2055217321992394. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32. Kiely K. M., Butterworth P., Watson N., and Wooden M., “The Symbol Digit Modalities Test: Normative Data From a Large Nationally Representative Sample of Australians,” Archives of Clinical Neuropsychology 29 (2014): 767–775, 10.1093/arclin/acu055. [DOI] [PubMed] [Google Scholar]
- 33. Marrie R. A., Patel R., Bernstein C. N., et al., “Anxiety and Depression Affect Performance on the Symbol Digit Modalities Test Over Time in MS and Other Immune Disorders,” Multiple Sclerosis 27 (2021): 1284–1292, 10.1177/1352458520961534. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34. Strober L. B., Bruce J. M., Arnett P. A., et al., “A Much Needed Metric: Defining Reliable and Statistically Meaningful Change of the Oral Version Symbol Digit Modalities Test (SDMT),” Multiple Sclerosis and Related Disorders 57 (2022): 103405, 10.1016/j.msard.2021.103405. [DOI] [PubMed] [Google Scholar]
- 35. van Ballegooijen H., van der Hiele K., Enzinger C., de Voer G., and Visser L. H., “The Longitudinal Relationship Between Fatigue, Depression, Anxiety, Disability, and Adherence With Cognitive Status in Patients With Early Multiple Sclerosis Treated With Interferon Beta‐1a,” eNeurologicalSci 28 (2022): 100409, 10.1016/j.ensci.2022.100409. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. Hussenoeder F. S., Riedel‐Heller S. G., Conrad I., and Rodriguez F. S., “Concepts of Mental Demands at Work That Protect Against Cognitive Decline and Dementia: A Systematic Review,” American Journal of Health Promotion 33 (2019): 1200–1208, 10.1177/0890117119861309. [DOI] [PubMed] [Google Scholar]
- 37. Lechner‐Scott J., Agland S., Allan M., et al., “Managing Cognitive Impairment and Its Impact in Multiple Sclerosis: An Australian Multidisciplinary Perspective,” Multiple Sclerosis and Related Disorders 79 (2023): 104952, 10.1016/j.msard.2023.104952. [DOI] [PubMed] [Google Scholar]
- 38. Labiano‐Fontcuberta A., Costa‐Frossard L., Sainz de la Maza S., et al., “The Effect of Timing of High‐Efficacy Therapy on Processing Speed Performance in Multiple Sclerosis,” Multiple Sclerosis and Related Disorders 64 (2022): 103959, 10.1016/j.msard.2022.103959. [DOI] [PubMed] [Google Scholar]
- 39. Lal A. P., Foong Y. C., Sanfilippo P. G., et al., “A Multi‐Centre Longitudinal Study Analysing Multiple Sclerosis Disease‐Modifying Therapy Prescribing Patterns During the COVID‐19 Pandemic,” Journal of Neurology 271 (2024): 5813–5824, 10.1007/s00415-024-12518-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40. Foong Y. C., Green M., Zargari A., et al., “Mobile Phones as a Potential Vehicle of Infection in a Hospital Setting,” Journal of Occupational and Environmental Hygiene 12 (2015): D232–D235. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41. Cobo‐Calvo A., Zabalza A., Río J., et al., “Impact of COVID‐19 Pandemic on Frequency of Clinical Visits, Performance of MRI Studies, and Therapeutic Choices in a Multiple Sclerosis Referral Centre,” Journal of Neurology 269 (2022): 1764–1772, 10.1007/s00415-021-10958-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42. van der Walt A., Butzkueven H., Shin R. K., et al., “Developing a Digital Solution for Remote Assessment in Multiple Sclerosis: From Concept to Software as a Medical Device,” Brain Sciences 11 (2021): 1247. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43. Foong Y. C., Bridge F., Merlo D., et al., “Smartphone Monitoring of Cognition in People With Multiple Sclerosis: A Systematic Review,” Multiple Sclerosis and Related Disorders 73 (2023): 104674, 10.1016/j.msard.2023.104674. [DOI] [PubMed] [Google Scholar]
- 44. Woelfle T., Pless S., Wiencierz A., et al., “Practice Effects of Mobile Tests of Cognition, Dexterity, and Mobility on Patients With Multiple Sclerosis: Data Analysis of a Smartphone‐Based Observational Study,” Journal of Medical Internet Research 23 (2021): e30394, 10.2196/30394. [DOI] [PMC free article] [PubMed] [Google Scholar]