Digital Health. 2022 Nov 4; 8: 20552076221135387. doi: 10.1177/20552076221135387

Feasibility of a smartphone app to monitor patient reported outcomes in multiple sclerosis: The haMSter interventional trial

Patrick Altmann 1, Markus Ponleitner 1, Tobias Monschein 1, Nik Krajnc 1, Gudrun Zulehner 1, Tobias Zrzavy 1, Fritz Leutmezer 1, Paulus Stefan Rommer 1, Barbara Kornek 1, Thomas Berger 1, Gabriel Bsteh 1
PMCID: PMC9638697  PMID: 36353697

Abstract

Background

Monitoring of patient outcomes in multiple sclerosis (MS) is fundamental for individualized treatment decisions. So far, these decisions have been motivated by conventional outcomes, i.e., relapses or clinical disability supported by radiological disease activity. Complementing this concept, patient reported outcomes (PROs) assess individual health-related quality of life, among other constructs. Their inclusion in clinical routine, however, has been challenging as assessing them requires resources of time and personnel.

Objective

This interventional feasibility study investigated the haMSter app, a mobile health solution for remote and longitudinal monitoring of PROs in a sample of people with MS (pwMS).

Methods

The core feature of haMSter is the provision of three PRO questionnaires relevant to MS (anxiety/depression, MS-related quality of life, and fatigue) that patients can fill out once a month. For this feasibility trial, we invited 50 volunteers to use the haMSter app over six months and to attend a haMSter study visit. This consultation concluded the study and gave participants the opportunity to discuss their graphically plotted PRO results with their treating physician.

Results

The main outcome was overall patient adherence to monthly completion of the PRO questionnaires, which remained high up to 4 months (98%) and declined thereafter (month 5: 83%; month 6: 66%). Exploratory outcomes included patient satisfaction as estimated by the Telemedicine Perception Questionnaire (TMPQ, 17–85 points). The mean TMPQ score was 64 (95% CI: 62–66) points, indicating a high degree of approval. Ancillary analyses included subgroup comparisons of participants with particularly high or low satisfaction, and of upper extremity disability as a potential obstacle to utility or acceptance. We found no distinct characteristics separating participants with high or low satisfaction.

Conclusions

In this first feasibility trial, the haMSter app for longitudinal PRO monitoring was well received in terms of adherence and satisfaction.

ClinicalTrials.gov identifier: NCT04555863.

Keywords: remote patient monitoring, telemedicine, m-health, multiple sclerosis, patient reported outcomes

Introduction

At a global level, 2.5 million people are estimated to live with multiple sclerosis (MS).1 MS onset typically peaks between 20 and 40 years of age, with those affected finding themselves in the midst of higher education, career planning, or entry into parenthood.2,3 At this pivotal moment, neurologists treating MS play a vital role in monitoring their patients’ health and informing them of treatment options. This therapeutic process can benefit immensely from taking patient reported outcomes (PROs) into consideration. PROs, among other psychosocial measures, collect information on a broad range of topics that can indicate an individual's perception of their symptoms. PROs can help cultivate awareness of many aspects of a person's life, including health-related quality of life (HRQoL).4,5 In clinical research, PROs can help to understand treatment effects on a subjective level, be it in randomized controlled trials or real-world studies. Regulatory authorities have moved to encourage or even require PRO data, recognizing that conventional disease outcomes in MS (e.g., mere physical disability, clinical relapses or radiographic features) do not fully account for the personal impact that MS may have on an individual.6,7 However, assessing PROs in routine care is challenging, as doing so requires significant resources of time, personnel and, ultimately, costs.8 Hence, convenient, i.e., timesaving or easily accessible, methods that allow PROs to be monitored in clinical practice are needed.

The term mobile health (m-health) broadly refers to the exchange of health information via mobile devices, and to all strategies orchestrating this exchange.9 Previously, efforts have been made to deploy m-health in monitoring physical disability in MS. As an example, one smartphone app uses sensor-based technology to assess surrogate markers for dexterity and mobility in a longitudinal manner.10,11 The Multiple Sclerosis Performance Test (MSPT), a tablet application and hence another representative of m-health for MS, has been attracting broad attention. It circumvents the problem of rater variability while offering a comprehensive battery of neuroperformance testing.12,13 Another earlier study tested adherence to an online self-assessment tool in over 300 people with MS (pwMS) operating via web-based interactions. Results of that study showed an overall adherence to two PRO questionnaires for HRQoL of about 50% over two years.14,15 Each of the innovations mentioned above illustrates the value that m-health could already add to MS treatment. Nevertheless, our analysis of current MS-related m-health implementations generated two observations that suggest further opportunities for improvement. Our first observation revolves around studies that assess and compare adherence rates to PROs in MS longitudinally: we noticed that, so far, most studies have not been embedded into any clinical context, much less a person-to-person patient-doctor consultation. Second, although a multitude of applications document several MS-related digital biomarkers for physical disability, there seems to be less attention to PROs related to HRQoL (i.e., fatigue or affective disorders) in m-health applications for MS. As mentioned above, PROs constitute highly significant biomarkers for MS, and as such require cost- and time-efficient measurement to enable implementation in clinical routine. This raises the question of how m-health can be utilized to monitor an individual's HRQoL in a clinical setting involving routine patient-doctor interactions.

Therefore, this interventional feasibility trial tested the utility of a new smartphone application named haMSter for remote and longitudinal monitoring of PROs. The main outcomes of our study were adherence and patient satisfaction.

Methods

Ethics review, patient consent and trial registration

The ethics review board at the Medical University of Vienna approved this study (EK1798/2019). We obtained written informed consent from all study participants and followed guidelines set by the Declaration of Helsinki. We adhered to STROBE guidelines for reporting on this observational study.16 This trial is registered with ClinicalTrials.gov (identifier: NCT04555863).

Study protocol and the haMSter smartphone app

The detailed study protocol and an overview of the haMSter app, along with information about its inception, are described elsewhere.8 The haMSter app was implemented as an m-health tool to monitor select PROs for people with MS. We chose the name haMSter because we wanted our app's name to meet certain requirements: (i) the name should not expose the user's diagnosis, (ii) yet the term “MS” should still be included, and (iii) we looked for a name which, generally speaking, raises pleasant associations or memories (e.g., summer vacations, childhood memories, animals). In short, the haMSter app offers, among other functions, the haMStercare feature, which provides three PRO questionnaires that patients can fill out once a month: (i) the HADS17 for anxiety and depression (Hospital Anxiety and Depression Scale), (ii) the MSIS18 for MS-related quality of life (Multiple Sclerosis Impact Scale, reporting two subscales for physical and psychological impairment), and (iii) the FSMC19 for MS-related fatigue (Fatigue Scale for Motor and Cognitive Fatigue, with a motor and a cognitive subscale). The app was conceived so that, per time point, patients fill out either all PROs or none. This feasibility trial focuses on adherence to this PRO-related function. We programmed the app to run on smart devices operating on Android version 8.0.0 or above (Google International LLC) and Apple's iOS version 5.0.0 or higher (Apple Inc.).
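To make the monthly data structure concrete, the following is a minimal, hypothetical sketch in Python (not the app's actual code, which is not reproduced here) of one monthly PRO set with the all-or-none completion rule described above; the field names and score-range comments are illustrative assumptions.

```python
# Hypothetical sketch (not the actual haMSter implementation): a minimal data
# model for one monthly set of PRO questionnaires as described in the text.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class MonthlyProSet:
    """One time point: all three questionnaires are filled out, or none."""
    completed_on: Optional[date]              # None if the set was skipped this month
    hads_anxiety: Optional[int] = None        # HADS subscale, 0-21
    hads_depression: Optional[int] = None     # HADS subscale, 0-21
    msis_physical: Optional[int] = None       # MSIS physical subscale score
    msis_psychological: Optional[int] = None  # MSIS psychological subscale score
    fsmc_motor: Optional[int] = None          # FSMC motor subscale, 10-50
    fsmc_cognitive: Optional[int] = None      # FSMC cognitive subscale, 10-50

    def is_complete(self) -> bool:
        """The app enforces all-or-none completion per time point."""
        scores = [self.hads_anxiety, self.hads_depression,
                  self.msis_physical, self.msis_psychological,
                  self.fsmc_motor, self.fsmc_cognitive]
        return self.completed_on is not None and all(s is not None for s in scores)
```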

Trial design and participants

This is a single-center observational study. We enrolled 50 patients at the MS outpatient clinic of the Department of Neurology at the Medical University of Vienna, Austria. All patients fulfilling the current McDonald diagnostic criteria for MS were eligible to participate.20 During a routine outpatient consultation, recruiting neurologists informed all their patients about the possibility to participate in a study that would allow them to use a smartphone application that monitors their HRQoL by means of questionnaires regarding aspects of their mental health (HADS), MS-related quality of life (MSIS) and MS-related fatigue (FSMC). We showed screenshots of the haMSter app to potential participants and explained that they would be asked to fill out said questionnaires once a month, which would take approximately 15–30 min.8 The whole study period would span 6 months. All patients who expressed an interest in enrollment received detailed instructions on how to use the app. We offered inclusion irrespective of patient characteristics or disease parameters such as gender, age, disease phenotype, disease duration or treatment. Exclusion criteria were obvious language barriers or technical obstacles such as smart devices that would not run the haMSter app. There was no change in protocol at any point in this study. At baseline, the treating neurologist documented the participants’ age, gender, disease duration, disease phenotype (relapsing or progressive MS),21 clinical disability (Expanded Disability Status Scale, EDSS),22 number of relapses over the past 12 months, and disease-modifying treatment upon enrollment (DMT, categorized as highly effective [cladribine, fingolimod, natalizumab, ocrelizumab, rituximab] or moderately effective [dimethyl fumarate, glatiramer acetate, interferon beta preparations, teriflunomide]).

Interventions and outcome measures

Upon consent, we informed participants about the study protocol, which consisted of five steps. First, the regular (baseline) visit at our MS outpatient clinic, at which we offered study participation and assessed baseline disease characteristics (see above). Second, patients would receive a code to download the haMSter app. Third, the study period of six months, during which patients could freely use the haMSter app while being reminded once every 30 days to fill out the PRO questionnaires. This reminder was implemented as an automatic push notification sent every 30 days reading “please feed your haMSter”. Fourth, the haMSter study visit. This study visit was arranged as a regular on-site appointment, additionally involving an open discussion between the patient and the treating neurologist about the PRO results. For this purpose, we provided a Bluetooth printer in the exam room that allowed participants to connect their phone and print out the haMSter scoring sheet. This scoring sheet can be viewed as a graphical output chart which plots the results of each PRO questionnaire over time as bar graphs. In that way, the observer can estimate longitudinal changes in PRO scores (for a detailed description and example, please refer to the study protocol8). We chose this setting as we wanted to discuss the PRO results based on a visual presentation. At the same time, we felt that a discussion between patient and physician conducted while both simultaneously look at a small smartphone screen could intrude on personal (physical) space. Another reason why we chose to print said scoring sheet was a purely practical one: the different bar graphs do not fit on a single smartphone screen, but they do fit on a regular letter-sized page. In the future, these scoring sheets could easily be filed in a patient chart if needed. As there are currently no guidelines on how to discuss PROs as part of clinical care, we chose the following approach: the main goal was for both patient and physician to look at the bar charts illustrating the test results and to initiate an open discussion of what conclusions they might draw from what they see. We predetermined two aspects to be discussed at every haMSter visit: (i) general trends in PRO scores (improvement vs. worsening), and (ii) consideration of life events that could explain a change in PRO scores. It is important to highlight that we openly informed our patients that this discussion was not governed by any guideline that would be considered evidence-based medicine. This haMSter visit was the foundation for our exploratory outcome measures, as the patients’ experience when discussing their PRO results should be reflected in our feedback questionnaires. Fifth, after this consultation, we invited participants to provide feedback on their experience with the haMSter app. We used the Telemedicine Perception Questionnaire (TMPQ), a 17-item survey through which patients can rate their experience with a telehealth application. On the TMPQ, participants can award 1–5 points per question, resulting in a total score of 17–85, with higher scores indicating greater satisfaction.23,24 Furthermore, we asked patients to rate specific aspects of the haMSter app using Likert-scale questions. These questions are referenced in Figure 5. Finally, at the end of our survey, we offered our patients the opportunity to give written open statements about what they did or did not like about the app.
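For illustration only, the sketch below shows how such a letter-sized scoring sheet could be rendered as one bar graph per PRO subscale over the six monthly time points; the plotted values, layout and file name are hypothetical assumptions and do not reproduce the app's actual rendering code.

```python
# Illustrative sketch: how a haMSter-style scoring sheet could be rendered,
# assuming one exported subscore per questionnaire per month (hypothetical data).
import matplotlib.pyplot as plt

months = ["M1", "M2", "M3", "M4", "M5", "M6"]
scores = {
    "HADS anxiety":       [5, 5, 6, 4, 4, 3],
    "HADS depression":    [1, 2, 1, 1, 0, 1],
    "MSIS physical":      [9, 10, 8, 12, 11, 9],
    "MSIS psychological": [19, 18, 20, 17, 15, 14],
    "FSMC motor":         [20, 22, 21, 19, 18, 18],
    "FSMC cognitive":     [16, 15, 17, 16, 14, 13],
}

# One bar chart per subscale, stacked vertically so the whole sheet fits a page.
fig, axes = plt.subplots(len(scores), 1, figsize=(8.5, 11), sharex=True)
for ax, (label, values) in zip(axes, scores.items()):
    ax.bar(months, values)
    ax.set_ylabel(label, rotation=0, ha="right", va="center", fontsize=8)
fig.suptitle("haMSter scoring sheet (illustrative)")
fig.tight_layout()
fig.savefig("scoring_sheet.pdf")  # a letter-sized sheet that can be printed
```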

Figure 5. Patient perspectives from this study (n = 47).

The primary outcome was adherence to the haMSter app, with a result of 50% or greater assumed to be clinically meaningful. Overall adherence was calculated as the total number of questionnaires actually filled out by all participants divided by the maximum possible number of questionnaires. The haMSter app implemented the questionnaires in such a way that each question had to be answered; therefore, submission of incomplete surveys was not possible. The maximum number of filled-out questionnaires in the per-protocol analysis was 282 (1 set of questionnaires per month multiplied by the study period of 6 months multiplied by the number of patients in the final analysis: 1 × 6 × 47 = 282). Exploratory outcome measures included patient perception based on mean TMPQ values and further analyses aiming to identify a subgroup of patients particularly pleased with the haMSter app. Ancillary analyses were set to investigate satisfaction with the app for persons with upper extremity (UE) disability. We defined UE impairment based on the EDSS scoring system, which generates a total score ranging from 0 (no disability) to 10 points (death), with scores in between indicating the degree of neurological disability.22 For the purpose of this subgroup analysis, we identified patients who received a score of 2 or above on the pyramidal or cerebellar functional system subsections.
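As a worked example of the calculations just described (the numbers come from this trial; the function and variable names are our own, illustrative choices), the adherence denominator, the overall adherence rate and the UE impairment rule can be expressed as follows:

```python
# Worked example of the adherence denominator and the UE impairment rule
# described above; numbers are taken from this trial, names are illustrative.

def has_ue_impairment(pyramidal_fs: int, cerebellar_fs: int) -> bool:
    """UE disability as defined for the subgroup analysis: a score of 2 or
    above on the pyramidal or the cerebellar EDSS functional system."""
    return pyramidal_fs >= 2 or cerebellar_fs >= 2

sets_per_month = 1        # one set of HADS + MSIS + FSMC per month
study_months = 6
patients_analyzed = 47    # per-protocol population

max_sets = sets_per_month * study_months * patients_analyzed   # 1 x 6 x 47 = 282
completed_sets = 257      # total completed sets, as reported in the Results

overall_adherence = completed_sets / max_sets
print(f"Maximum sets: {max_sets}, overall adherence: {overall_adherence:.0%}")  # ~91%
```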

We aimed to report as much of the users’ experience as possible. Therefore, we devoted the last section of our analyses to patient ratings and comments.

Considerations of data protection and privacy

We described the data storage process within the haMSter app in the study protocol.8 In short, data is stored exclusively offline in the smartphone's local storage, bearing in mind that data would be lost if participants lost their phone or accidentally deleted the app. We educated prospective participants on privacy guidelines in the context of haMSter app use as part of the informed consent form. Patients consented to have their relevant study information (i.e., PRO scores) included in their regular patient records. We analyzed our data in a pseudonymized manner through a study identification number (study ID), which we assigned to our participants upon enrollment. We provided patients with an opportunity to contact the trial's principal investigator if they encountered any technical issues.

Sample size consideration and statistical analyses

This is a pilot trial testing the feasibility of a new method. Sample size was determined based on the pre-specified assumption that an adherence of >50% would be relevant, at 80% power and an alpha error of 5%. We based this assumption on research guidelines on relevant measures for medication adherence.25 We estimated that the dropout rate would not exceed 10%. We performed statistical analysis using SPSS 26.0 (SPSS Inc, Chicago, IL, USA). Continuous variables are described by the mean (±standard deviation, SD) or the median value (range) as appropriate, depending on the presence of normal distribution (assessed by the Kolmogorov-Smirnov test with Lilliefors correction). Group comparisons were calculated by independent analyses of variance (ANOVA), Kruskal-Wallis test or chi-square test as appropriate. Correlation analyses were conducted with Spearman's correlation coefficient (rs). Change of questionnaire completion percentage over the study period was calculated by the chi-square test for trend. Dependence of changes in PRO scores on patient baseline characteristics was tested in multivariable general linear models of repeated measures. A two-sided p value of <0.05 was considered statistically significant. We performed our final analysis in a per-protocol manner.
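The analyses were run in SPSS 26.0; purely as an illustrative, non-authoritative equivalent, the same kinds of tests could be scripted in Python with SciPy and statsmodels, as sketched below (the data here are synthetic stand-ins, not trial data).

```python
# Illustrative Python equivalent of some analyses named above (SPSS was used in
# the trial itself). Synthetic stand-in data, not patient data.
import numpy as np
from scipy.stats import spearmanr, kruskal
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng(0)                 # reproducible synthetic data
age = rng.normal(35, 9, size=47)               # e.g., age in years
sets_completed = rng.integers(3, 7, size=47)   # e.g., completed PRO sets (3-6)

# Normality check (Kolmogorov-Smirnov test with Lilliefors correction)
ks_stat, ks_p = lilliefors(age, dist="norm")

# Correlation between a baseline characteristic and adherence (Spearman's rs)
rs, p_rs = spearmanr(age, sets_completed)

# Example group comparison (e.g., adherence across three DMT categories)
groups = np.array_split(sets_completed, 3)
h_stat, p_kw = kruskal(*groups)

print(f"Lilliefors p = {ks_p:.3f}, Spearman rs = {rs:.2f} (p = {p_rs:.3f}), "
      f"Kruskal-Wallis p = {p_kw:.3f}")
```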

Results

Patient characteristics

Figure 1 demonstrates the patient flow. We asked 53 patients to take part in this study. Three patients declined participation during the informed consent process, giving the following reasons: (i) not possessing a smart device to run the haMSter app on (n = 1), (ii) the smartphone's operating system being too outdated to run the app (n = 1), or (iii) not being interested in PRO research (n = 1). Three participants were excluded over the course of the study because they (i) had accidentally deleted the app (n = 1), (ii) had lost their smartphone (n = 1), or (iii) were lost to follow-up (n = 1). Recruitment was open from April 2020 (first patient in) until November 2020 (last patient in). The haMSter visits that marked the end of this study took place from October 2020 (first patient out) through June 2021 (last patient out). The trial ended with data analysis in September 2021. Table 1 lists clinical and sociodemographic information and baseline PRO results from the 47 patients who completed this study and were included in the final analysis. The mean (SD) age in our cohort was 35 (9) years, 43 (91%) participants had relapsing MS and 27 (57%) were female.

Figure 1. STROBE flow diagram for the haMSter trial. STROBE16 flow diagram showing participant flow through each stage of the haMSter feasibility trial.

Table 1.

Demographic and clinical characteristics for this study cohort.

Parameter Category Study cohort
Participants analyzed1 number 47 (100%)
Age2 years 35 (9)
Gender1 female 27 (57%)
male 20 (43%)
Disease phenotype1 relapsing MS 43 (91%)
progressive MS 4 (9%)
Number of relapses2 last 12 months 0.68 (0.86)
Disease duration3 years 6.7 (0–30)
Disease modifying treatment for MS1 moderately effective 19 (40%)
highly effective 22 (47%)
no treatment 6 (13%)
Family status1 single 13 (28%)
relationship 18 (38%)
married 16 (34%)
Number of children3 number 0 (0–5)
Education1 9 years of schooling 13 (28%)
secondary schooling 14 (30%)
college degree 20 (42%)
Occupation type1 mainly seated 27 (58%)
physically demanding 4 (9%)
mixed seated/moving 11 (23%)
inability to work or retired 5 (11%)
EDSS at baseline3 1 (0–6.5)
HADS anxiety at baseline3 5 (0–13)
HADS depression at baseline3 1 (0–10)
MSIS physical at baseline3 9 (0–63)
MSIS psychological at baseline3 19 (0–58)
FSMC cognitive at baseline3 16 (10–44)
FSMC motor at baseline3 20 (10–44)
1 absolute number (percentage)
2 mean (standard deviation)
3 median (range)

EDSS: Expanded Disability Status Scale, FSMC: Fatigue Scale for Motor and Cognitive Fatigue, HADS: Hospital Anxiety and Depression Scale, MS: multiple sclerosis, MSIS: Multiple Sclerosis Impact Scale.

Primary outcome measure: adherence

Figure 2 shows the percentage of duly completed questionnaires over the course of this study. As mentioned before, we included three types of PRO measures: HADS, MSIS and FSMC (i.e., “one set”). Patients were asked to fill out one set of these three PROs once a month over a period of 6 months. Over the first three months, all 47 participants completed all three sets of the PRO questionnaires (i.e., 100% adherence). At 4 months, adherence diminished slightly to 98%, dropping to 83% at 5 months and 66% at 6 months. Overall, participants filled out 257 of a possible 282 questionnaire sets, which constitutes an overall adherence rate of 91%. The median (range) number of completed sets per participant was 6 (3–6) of a possible 6 over the study period of 6 months. We found some patient characteristics that influenced adherence. In our sample, higher age (rs = -0.27, p = 0.035), longer disease duration (rs = -0.32, p = 0.031) and more pronounced cognitive fatigue on the FSMC (rs = -0.43, p = 0.003) moderately correlated with lower adherence (Table 2).

Figure 2. Total number of filled out PRO questionnaires over the course of the haMSter trial. PRO: patient reported outcome.

Table 2.

Correlations between the number of filled out questionnaires and patient characteristics.

Patient characteristic rs p-value
EDSS −0.27 0.066
Age −0.31 0.035*
Disease duration −0.32 0.031*
DMT −0.12 0.424
Relationship status 0.02 0.911
Education 0.02 0.882
Disease course 0.08 0.869
Job description −0.38 0.801
HADS anxiety −0.21 0.163
HADS depression −0.20 0.182
MSIS physical −0.09 0.565
MSIS psychological −0.19 0.210
FSMC cognitive −0.43 0.003*
FSMC motor −0.28 0.053
TMPQ 0.06 0.689

rs: Spearman correlation coefficient, *: significant under the assumption of α<0.05, DMT: Disease modifying treatment, EDSS: Expanded Disability Status Scale, FSMC: Fatigue Scale for Motor and Cognitive Fatigue, HADS: Hospital Anxiety and Depression Scale, MS: multiple sclerosis, MSIS: Multiple Sclerosis Impact Scale, TMPQ: Telemedicine Perception Questionnaire

Exploratory outcome measure: patient satisfaction

The mean TMPQ score in the whole cohort was 64 (SD 5.9) points. We found no correlation between patient satisfaction (based on TMPQ scores) and individual patient characteristics (Table 3). This suggests that, in our sample, we could not predict satisfaction with the haMSter app based on disease parameters such as neurological disability or HRQoL (MS-related quality of life, affective disorders or fatigue). Furthermore, we investigated whether there were parameters associated with lower/higher satisfaction with the haMSter app. Comparing patients in the highest quartile of satisfaction with the lowest quartile, we found no significant difference in the distribution of baseline characteristics or PROs between the groups.
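For readers who want to reproduce this kind of quartile comparison on their own data, a hedged sketch is given below; the table, column names and example values are assumptions for illustration, and the Kruskal-Wallis test (one of the tests named in the Methods) is used for the two-group comparison.

```python
# Illustrative sketch of a satisfaction-quartile comparison, assuming a
# per-patient table with TMPQ scores and a baseline characteristic (here EDSS).
import pandas as pd
from scipy.stats import kruskal

df = pd.DataFrame({  # hypothetical example values, not trial data
    "tmpq": [58, 60, 71, 64, 67, 55, 70, 73, 62, 66, 59, 69],
    "edss": [1.0, 2.0, 0.0, 1.5, 3.0, 1.0, 2.5, 0.0, 1.0, 4.0, 1.5, 2.0],
})

q1, q3 = df["tmpq"].quantile([0.25, 0.75])
lowest = df[df["tmpq"] <= q1]    # lowest quartile of satisfaction
highest = df[df["tmpq"] >= q3]   # highest quartile of satisfaction

# Compare the distribution of a baseline characteristic between the subgroups.
stat, p = kruskal(lowest["edss"], highest["edss"])
print(f"EDSS, lowest vs. highest satisfaction quartile: p = {p:.3f}")
```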

Table 3.

Correlations between satisfaction with the haMSter app and patient characteristics.

Disease parameter rs p-value
EDSS −0.05 0.775
Age −0.12 0.427
Disease duration 0.06 0.690
HADS anxiety 0.12 0.416
HADS depression −0.08 0.588
MSIS physical −0.04 0.788
MSIS psychological 0.11 0.452
FSMC cognitive 0.03 0.843
FSMC motor −0.04 0.843

rs: Spearman correlation coefficient, EDSS: Expanded Disability Status Scale, FSMC: Fatigue Scale for Motor and Cognitive Fatigue, HADS: Hospital Anxiety and Depression Scale, MS: multiple sclerosis, MSIS: Multiple Sclerosis Impact Scale.

Individual change in PROs, example scoring sheet and investigation of potential systematic bias

In the future, the haMSter app may be used to monitor individual changes in PROs for pwMS. To demonstrate this possible output, Figure 3 illustrates longitudinal PRO scores for each individual participant in this study. We aimed to establish that there is no systematic bias inherent to the haMSter app, meaning that we investigated whether individual changes in PRO scores depended on disease severity or other baseline characteristics such as gender, age, disability, medication, the number of relapses and relationship or educational status. To that end, a multivariable general linear model of repeated measures showed no significant impact of these baseline characteristics on changes in PRO scores. In order to highlight the importance of discussing individual HRQoL, we present in Figure 4 two example scoring sheets from two female patients participating in this study. Both patients had accumulated the same extent of physical disability (EDSS rating of 6.0 points) at baseline, meaning they were reliant on a persistent walking aid. Their MS-related quality of life, in contrast, as demonstrated on the MSIS, could hardly have been further apart. This is an example of how haMSter can reveal aspects of disability that may be missed if PROs are not part of routine care. When we discussed these results with these two patients, we were surprised to learn about their different biopsychosocial burden of MS while living with the same amount of disability. The patient with better PRO scores had a more favorable job situation, greater support from her family and access to physical and occupational therapy. Even though this assumption is based on a single observation, we believe it is a relevant example of how haMSter can aid in the pursuit of a new narrative for holistic disease monitoring.

Figure 3. Spaghetti plots illustrating changes in PROs over the course of this study. FSMC: Fatigue Scale for Motor and Cognitive Fatigue, HADS: Hospital Anxiety and Depression Scale, MSIS: Multiple Sclerosis Impact Scale. Thick dots represent the median scores, whiskers the interquartile range.

Figure 4. Example scoring sheet showing MSIS results from two female patients participating in the haMSter trial experiencing the same level of disability (EDSS 6.0). EDSS: Expanded Disability Status Scale, MSIS: Multiple Sclerosis Impact Scale.

Ancillary analyses

We predefined upper extremity (UE) disability as a potential obstacle to satisfaction with the haMSter app. To investigate UE disability as a potential confounder, we performed group comparisons between patients with impaired and unimpaired hand function. We identified nine (19.1%) participants with impaired hand function. Mean TMPQ scores did not significantly differ between patients with and without upper extremity disability (64 [SD 6.1] vs. 65 [5.1], p = 0.195). In addition to this subgroup analysis, we handed out Likert-scale questions to participants specifically addressing the haMSter app. Results from this inquiry are presented as pie charts in Figure 5. The majority of patients agreed (pooled for “strongly agree” and “agree”) that the haMSter app helped advance their communication with their physician (74%) and improve their care (70%). Furthermore, haMSter users in our sample stated a benefit from discussing the haMSter results (81%, pooled for “strongly agree” and “agree”). Most of our patients would have liked to continue using the app and were satisfied with their overall experience of using the haMSter app (80% and 84%, respectively). Most of our participants (94%) would recommend it to another person with MS. For most patients, data collection took 30 min or less each month.

Open comments from participants

As the final part of the satisfaction questionnaire, we invited patients to give an open statement about what they did or did not enjoy about the haMSter app. These comments may be helpful for the implementation of this m-health application into clinical practice or when planning similar studies. Table 4 lists all these statements, summarized in terms of similar and overlapping ideas. Of the 47 patients analyzed, three did not give an answer. Overall, the summary of open statements in our sample corresponds well to the results from the survey discussed earlier (Figure 5). The majority of comments related to haMSter helping participants to discover changes in their health and improving communication with their physician. Surprisingly, seventeen comments touched on haMSter raising awareness of symptoms, while putting this in a positive context (“makes me think about my symptoms” and “helps detect symptoms”).

Table 4.

Summary of open statements from patients participating in the haMSter trial.

haMSter helped me discover changes in aspects of my health over time (n = 18).
haMSter improved communication between patient and physician as symptoms can be portrayed better (n = 14).
I appreciated haMSter for making me think about my symptoms (n = 9).
haMSter helped me detect symptoms that might otherwise be missed (n = 8).
I thought the reminder for taking my medication was very useful (n = 3).
I did not see much benefit in using haMSter as I have only mild disease (n = 3).
The haMSter app is easy to use in my daily life (n = 2).

Harms or unintended effects

We asked patients at the end of the haMSter visit if they had felt uncomfortable with the app in terms of (i) feeling disturbed by the PRO questionnaires, (ii) discovering new or persisting symptoms or (iii) feeling helpless because there was no live support with the app. There were no such reports or concerns voiced in this study.

Discussion

A progressive and chronic condition, MS can be considered a prime example of a disease that affects HRQoL throughout adulthood.26 In the clinical context, there is a striking discrepancy between acknowledging impaired HRQoL as a burden for pwMS and adopting the monitoring of PROs into practice. Undoubtedly, the medical community is asked to allocate resources toward deciding how the HRQoL of pwMS can be monitored and, subsequently, inform treatment decisions.8 It is against this background that m-health has been instrumental in facilitating longitudinal and remote patient monitoring.10–13 This interventional feasibility trial introduces the haMSter smartphone application as a tool for monitoring PROs in pwMS.8 All in all, the validated outcome questionnaires on affective disorders, MS-related quality of life and fatigue were well received by the participants of this study.

Over time, our cohort demonstrated a decline in adherence to the haMSter smartphone application. Up to an observation period of 4 months, adherence was nearly 100%, dropping to 83% at the 5-month and 66% at the 6-month mark. Another study investigating adherence to a web-based PRO assessment reported comparable results in terms of declining adherence.15 Unfortunately, reasons for the drop in adherence were not collected in our survey. It is conceivable that participants may have been less inclined to use the haMSter app as they were approaching their next appointment at the 6-month mark. Overall, we identified several patient factors that moderately influenced adherence in our study: individuals of older age were less inclined to maintain adherence (rs = −0.27). A similar effect was found for longer disease duration (rs = −0.32). Greater cognitive fatigue as demonstrated on the FSMC constituted another possible barrier to adherence (rs = −0.43). With regard to patient satisfaction, the mean TMPQ score of our participants was 64 out of a maximum of 85 points. This result compares well to another study that tested a web chat triage system for generic health problems (81% vs. 75% satisfaction rate).27 Subgroup analyses did not reveal any disease characteristics that would predict satisfaction. We would have expected satisfaction to be skewed towards younger age and unfavorable disease outcomes in terms of disability. However, this study was not designed to detect such nuances or individual preferences. It is important to acknowledge that, in order to ensure meaningful monitoring of individual HRQoL, the changes measured need to be independent of a priori conditions that would introduce bias. In this context, multivariable analyses did not show a significant impact of patient characteristics at baseline on changes in PRO scores per individual patient over time. We believe these results demonstrate that the haMSter app is able to detect actual changes in HRQoL without interference from patient or disease parameters. This is highly relevant, as MS remains a variable disease presenting a variety of symptoms and limiting the capacity for prognosis.

One of the limitations of our study design lies within its sampling method, namely the recruitment of an unselected real-life cohort of pwMS. In this context, we made several efforts to reduce bias. First, as this study mostly reports results in a descriptive manner, we do not suspect analysis bias. Further, we assume that selection bias is at the low end of the spectrum, considering that virtually any patient was eligible to participate, and we addressed this further by reporting reasons for patients to decline participation. This is evidenced by the balanced demographic and clinical characteristics of participants, which are representative of an MS population. However, we should mention that, since we chose to recruit our participants without restrictive eligibility criteria, our cohort characteristics were somewhat skewed toward a more benign phenotype: the median EDSS was 1.0 (range: 0–6.5) and 91% of patients had relapsing MS. As this was a feasibility trial, we believe our main outcome measures still hold. Three patients declined inclusion due to a lack of technological resources, and three patients dropped out over the course of this study. In this context, it appears noteworthy that two of these dropouts might have been avoidable, as they were related to technological mishaps (one patient deleted the haMSter app and another lost their phone). However, these limitations should not diminish the significance of our results. We deliberately chose outcome measures that capture the experience of pwMS. Undoubtedly, including the treating physician's experience with this app might have provided valuable additional information. Nevertheless, we decided to center our study around the immediate haMSter user's point of view, attempting to gain insight into the needs and perspectives of those affected. Furthermore, our study protocol did not include an elaborate qualitative analysis of participating patients’ and doctors’ experiences with the haMSter visit per se. This constitutes a shortcoming of our study, particularly as this is still an important gap in the literature. Future studies testing the app, e.g., in a multicenter setting, should use a mixed-methods approach. In addition, it should be pointed out that this study was carried out in an academic tertiary care center and that neither the app's conception nor the study design were influenced by a third party. We should further mention that all app data was stored exclusively offline on the participants’ phones. Unfortunately, this entailed the exclusion of participants who lost their phones during the study. Nonetheless, we chose to handle the data this way to prioritize privacy and to avoid the security risks of online data storage. We realize that future generations of the haMSter app must address this issue. On the whole, the insights gained in this trial inspire a picture of haMSter as a platform embedded in clinical routine in the future: using this app, pwMS could document PROs personally important to them or to their treating physician. In turn, these PROs might spark a long-term dialogue about HRQoL between those affected and their caregivers.

In terms of promoting m-health in the field of MS and using it to improve patient care, we believe the haMSter app has the potential to turn heads. Over the past few years, strategies for remote monitoring have attracted considerable attention. Current applications for pwMS offer solutions for the screening of physical symptoms, rehabilitation, education and disease management.28 These days, biosensing and wearable technologies constitute a large segment and have even sparked commercial interest.29,30 However, these technologies have been largely limited to physical symptoms. The haMSter app is unique in that it ascertains symptoms that might be missed if not explicitly examined. Such silent symptoms often reside within a blind spot outside the scope of routine care. Yet, it is known that these symptoms can significantly impact HRQoL.31,32 Our results illustrate that haMSter is capable of uncovering these symptoms. Our participants appreciated haMSter for exhibiting changes in their health that might have slipped their own attention, while simultaneously improving communication with their treating physician.

In summary, this feasibility trial of the haMSter smartphone application reveals multifaceted aspects of adherence and satisfaction with a new method for monitoring PROs in pwMS. Adherence was 98% at 4 months, dropping to 83% after 5 months and 66% after 6 months. Patients appreciated haMSter for enabling them to discover changes in their health over time, improving communication with their physician and helping them detect symptoms that might otherwise have been missed. Consequently, we have reason to believe that testing haMSter's concept in other chronic diseases could yield promising results as well.

Acknowledgements

The authors would like to thank Mr Werner Hinterberger and Mr Karl Schauenstein (both: CAKE Communications Ltd, Vienna) for counselling and coding the haMSter app. Furthermore, we would like to thank Ms. Nicole Kirbisch (Medical University of Vienna) for her administrative help and scheduling.

Footnotes

Contributorship: PA: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project Administration, Writing – Original Draft, Writing – Review & Editing; MP: Data curation; Investigation, Project Administration, Writing – Review & Editing; TM: Data curation; Investigation, Writing – Review & Editing; NK: Visualization, Writing – Review & Editing; GZ: Data curation; Investigation, Writing – Review & Editing; TZ: Data curation; Investigation, Writing – Review & Editing; FL: Conceptualization, Data curation, Investigation, Methodology, Resources, Supervision, Writing – Review & Editing; BK: Data curation; Investigation, Writing – Review & Editing; TB: Conceptualization, Data curation, Methodology, Resources, Supervision, Writing – Review & Editing, GB: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Writing – Original Draft, Writing – Review & Editing.

Availability of data and materials: Data is available from the corresponding author upon reasonable request and after approval from the ethics review board at the Medical University of Vienna.

Declaration of conflicting interests: Patrick Altmann: has participated in meetings sponsored by, received speaker honoraria or travel funding from Biogen, Merck, Roche, Sanofi-Genzyme and Teva, and received honoraria for consulting from Biogen. He received a research grant from Quanterix International and was awarded a sponsorship from Biogen, Merck, Sanofi-Genzyme, Roche, and Teva to programme the haMSter app.

Markus Ponleitner: nothing to declare.

Tobias Monschein: has participated in meetings sponsored by or received travel funding from Biogen, Merck, Novartis, Roche, Sanofi-Genzyme and Teva.

Gudrun Zulehner: has participated in meetings sponsored by or received travel funding from Biogen, Merck, Novartis, Roche, Sanofi-Genzyme and Teva. She received speaker honoraria from Biogen.

Nik Krajnc: has participated in meetings sponsored by, received speaker honoraria or travel funding from Merck, Novartis and Roche, and holds a grant for a Multiple Sclerosis Clinical Training Fellowship Programme from the European Committee for Treatment and Research in Multiple Sclerosis (ECTRIMS).

Tobias Zrzavy: has participated in meetings sponsored by or received travel funding from Biogen, Merck, Novartis, Roche, Sanofi-Genzyme and Teva.

Fritz Leutmezer: has participated in meetings sponsored by, received speaker honoraria or travel funding from Actelion, Almirall, Biogen, Celgene, MedDay, Merck, Novartis, Roche, Sanofi-Genzyme and Teva, and received honoraria for consulting Biogen, Celgene, Merck, Novartis, Roche, Sanofi-Genzyme and Teva.

Paulus Stefan Rommer: has received honoraria for consultancy/speaking from AbbVie, Almirall, Alexion, Biogen, Merck, Novartis, Roche, Sandoz, Sanofi Genzyme, and has received research grants from Amicus, Biogen, Merck, Roche.

Barbara Kornek: has received speaking honoraria or travel support from Biogen, Celgene, Merck, Novartis, Roche, Sanofi-Genzyme, and Teva and gives advice to Biogen, Celgene, Johnson & Johnson, Merck, Novartis, Roche and Sanofi-Genzyme.

Thomas Berger: has participated in meetings sponsored by and received honoraria (lectures, advisory boards, consultations) from pharmaceutical companies marketing treatments for MS: Allergan, Bayer, Biogen, Bionorica, Celgene/BMDS, GSK, Janssen-Cilag, MedDay, Merck, Novartis, Octapharma, Roche, Sandoz, Sanofi-Genzyme, Teva. His institution has received financial support in the past 12 months by unrestricted research grants (Biogen, Bayer, Celgene/BMS, Merck, Novartis, Sanofi Aventis, Teva) and for participation in clinical trials in multiple sclerosis sponsored by Alexion, Bayer, Biogen, Celgene/BMS, Merck, Novartis, Octapharma, Roche, Sanofi-Genzyme, Teva.

Gabriel Bsteh: has participated in meetings sponsored by, received speaker honoraria or travel funding from Biogen, Celgene/BMS, Lilly, Merck, Novartis, Roche, Sanofi-Genzyme and Teva, and received honoraria for consulting Biogen, Celgene/BMS, Novartis, Roche, Sanofi-Genzyme and Teva. He has received unrestricted research grants from Celgene/BMS and Novartis.

The ethics review board at the Medical University of Vienna approved this study (EK1798/2019).

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was funded by the Medical University of Vienna.

Guarantor: Patrick Altmann

ORCID iD: Patrick Altmann https://orcid.org/0000-0002-2983-3693

References

1. Dobson R, Giovannoni G. Multiple sclerosis - a review. Eur J Neurol 2019; 26: 27–40.
2. Oh J, Vidal-Jordana A, Montalban X. Multiple sclerosis: clinical aspects. Curr Opin Neurol 2018; 31: 752–759.
3. Kobelt G, Thompson A, Berg J, et al. New insights into the burden and costs of multiple sclerosis in Europe. Mult Scler 2017; 23: 1123–1136.
4. Nowinski CJ, Miller DM, Cella D. Evolution of patient-reported outcomes and their role in multiple sclerosis clinical trials. Neurotherapeutics 2017; 14: 934–944.
5. Khurana V, Sharma H, Afroz N, et al. Patient-reported outcomes in multiple sclerosis: a systematic comparison of available measures. Eur J Neurol 2017; 24: 1099–1107.
6. The Lancet Neurology. Patient-reported outcomes in the spotlight. Lancet Neurol 2019; 18: 981.
7. D'Amico E, Haase R, Ziemssen T. Review: patient-reported outcomes in multiple sclerosis care. Mult Scler Relat Disord 2019; 33: 61–66.
8. Altmann P, Hinterberger W, Leutmezer F, et al. The smartphone app haMSter for tracking patient-reported outcomes in people with multiple sclerosis: protocol for a pilot study. JMIR Res Protoc 2021; 10: e25011.
9. Bashshur R, Shannon G, Krupinski E, et al. The taxonomy of telemedicine. Telemed J E Health 2011; 17: 484–494.
10. Midaglia L, Mulero P, Montalban X, et al. Adherence and satisfaction of smartphone- and smartwatch-based remote active testing and passive monitoring in people with multiple sclerosis: nonrandomized interventional feasibility study. J Med Internet Res 2019; 21: e14863.
11. Montalban X, Graves J, Midaglia L, et al. A smartphone sensor-based digital outcome assessment of multiple sclerosis. Mult Scler 2021; 28: 654–664.
12. Rao SM, Galioto R, Sokolowski M, et al. Multiple sclerosis performance test: validation of self-administered neuroperformance modules. Eur J Neurol 2020; 27: 878–886.
13. Rudick RA, Miller D, Bethoux F, et al. The multiple sclerosis performance test (MSPT): an iPad-based disability assessment tool. J Visualized Exp 2014: e51318. doi: 10.3791/51318.
14. Jongen PJ, Heerings M, Lemmens WA, et al. A prospective web-based patient-centred interactive study of long-term disabilities, disabilities perception and health-related quality of life in patients with multiple sclerosis in the Netherlands: the Dutch multiple sclerosis study protocol. BMC Neurol 2015; 15: 28.
15. Jongen PJ, Kremer IEH, Hristodorova E, et al. Adherence to web-based self-assessments in long-term direct-to-patient research: two-year study of multiple sclerosis patients. J Med Internet Res 2017; 19: e249.
16. von Elm E, Altman DG, Egger M, et al. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 2007; 370: 1453–1457.
17. Zigmond AS, Snaith RP. The hospital anxiety and depression scale. Acta Psychiatr Scand 1983; 67: 361–370.
18. Schäffler N, Schönberg P, Stephan J, et al. Comparison of patient-reported outcome measures in multiple sclerosis. Acta Neurol Scand 2013; 128: 114–121.
19. Penner IK, Raselli C, Stöcklin M, et al. The fatigue scale for motor and cognitive functions (FSMC): validation of a new instrument to assess multiple sclerosis-related fatigue. Mult Scler 2009; 15: 1509–1517.
20. Thompson AJ, Banwell BL, Barkhof F, et al. Diagnosis of multiple sclerosis: 2017 revisions of the McDonald criteria. Lancet Neurol 2018; 17: 162–173.
21. Lublin FD, Reingold SC, Cohen JA, et al. Defining the clinical course of multiple sclerosis: the 2013 revisions. Neurology 2014; 83: 278–286.
22. Kurtzke JF. Rating neurologic impairment in multiple sclerosis: an expanded disability status scale (EDSS). Neurology 1983; 33: 1444–1452.
23. Altmann P, Ivkic D, Ponleitner M, et al. Individual perception of telehealth: validation of a German translation of the telemedicine perception questionnaire and a derived short version. Int J Environ Res Public Health 2022; 19: 902.
24. Demiris G, Speedie S, Finkelstein S. A questionnaire for the assessment of patients’ impressions of the risks and benefits of home telecare. J Telemed Telecare 2000; 6: 278–284.
25. GBD 2016 Multiple Sclerosis Collaborators. Global, regional, and national burden of multiple sclerosis 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol 2019; 18: 269–285.
26. GBD 2016 Multiple Sclerosis Collaborators. Global, regional, and national burden of multiple sclerosis 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol 2019; 18: 269–285.
27. Eminovic N, Wyatt JC, Tarpey AM, et al. First evaluation of the NHS direct online clinical enquiry service: a nurse-led web chat triage service for the public. J Med Internet Res 2004; 6: 17.
28. Marziniak M, Brichetto G, Feys P, et al. The use of digital and remote communication technologies as a tool for multiple sclerosis management: narrative review. JMIR Rehabil Assist Technol 2018; 5: e5.
29. Graves JS, Montalban X. Biosensors to monitor MS activity. Mult Scler 2020; 26: 605–608.
30. Yousef A, Jonzzon S, Suleiman L, et al. Biosensing in multiple sclerosis. Expert Rev Med Devices 2017; 14: 901–912.
31. Lechner-Scott J, Waubant E, Levy M, et al. Silent symptoms of multiple sclerosis. Mult Scler Relat Disord 2019; 36: 101453.
32. Altmann P, Leutmezer F, Leithner K, et al. Predisposing factors for sexual dysfunction in multiple sclerosis. Front Neurol 2021; 12: 618370.
