Digital Health. 2024 Apr 9;10:20552076241238133. doi: 10.1177/20552076241238133

Feasibility and usability of remote monitoring in Alzheimer's disease

Marijn Muurling 1,2, Casper de Boer 1,2, Chris Hinds 3, Alankar Atreya 3, Aiden Doherty 3, Vasilis Alepopoulos 4, Jelena Curcic 5, Anna-Katharine Brem 6,7, Pauline Conde 6, Sajini Kuruppu 6, Xavier Morató 8, Valentina Saletti 9, Samantha Galluzzi 9, Estefania Vilarino Luis 10, Sandra Cardoso 11, Tina Stukelj 12, Milica Gregorič Kramberger 12,13, Dora Roik 14, Ivan Koychev 15, Ann-Cecilie Hopøy 16, Emilia Schwertner 13,17, Mara Gkioka 18, Dag Aarsland 6,16, Pieter Jelle Visser 1,2,13,19; the RADAR-AD consortium 20
PMCID: PMC11005503  PMID: 38601188

Abstract

Introduction

Remote monitoring technologies (RMTs) can measure cognitive and functional decline objectively at home, and offer opportunities to measure passively and continuously, possibly improving sensitivity and reducing participant burden in clinical trials. However, there is skepticism that age and cognitive or functional impairment may render participants unable or unwilling to comply with complex RMT protocols. We therefore assessed the feasibility and usability of a complex RMT protocol in all syndromic stages of Alzheimer's disease and in healthy control participants.

Methods

For 8 weeks, participants (N = 229) used two activity trackers, two interactive apps with either daily or weekly cognitive tasks, and optionally a wearable camera. A subset of participants (N = 45) took part in a 4-week sub-study using fixed at-home sensors, a wearable EEG sleep headband and a driving performance device. Feasibility was assessed by evaluating compliance and drop-out rates. Usability was assessed by problem rates (e.g., difficulty understanding instructions, discomfort, forgetting to use the RMT, or technical problems) as discussed during bi-weekly semi-structured interviews.

Results

Most problems were found for the active apps and EEG sleep headband. Problem rates increased and compliance rates decreased with disease severity, but the study remained feasible.

Conclusions

This study shows that a highly complex RMT protocol is feasible, even in a mild-to-moderate AD population, encouraging other researchers to use RMTs in their study designs. We recommend evaluating the design of individual devices carefully before finalizing study protocols, considering RMTs which allow for real-time compliance monitoring, and engaging the partners of study participants in the research.

Keywords: Alzheimer's disease, dementia, digital health, eHealth, neurology, remote patient monitoring, wearables, apps, remote clinical trials

Introduction

The use of remote monitoring technologies (RMTs), such as smartphone applications, wearables or at-home sensors, is growing rapidly worldwide 1 and they are widely represented in our everyday life. 2 RMTs are also becoming increasingly important in clinical research, especially since the Coronavirus Disease 2019 (COVID-19) pandemic, during which participants and patients could not visit healthcare facilities for assessments. 3 Aside from their ability to collect data remotely, other potential benefits of RMTs over standard clinical tests are their real-world, continuous, and objective nature. These benefits are especially helpful in diseases with a slow progression, such as multiple sclerosis, 4 with recurrent episodes, such as depression, 5 or with a long preclinical phase, such as Alzheimer's disease (AD). 6 Despite these benefits, there is skepticism that age, cognitive or functional impairment, or other factors such as unfamiliarity with technology, may render participants unable or unwilling to comply with RMT protocols. This study addresses a gap in the current literature: how people with AD experience RMT assessments.

AD is a progressive neurodegenerative disease, meaning that clinical symptoms of cognitive and functional impairment worsen over time, to the point where someone with AD loses their independence. Cognitive and functional decline are therefore important endpoints in clinical trials, typically measured using pen-and-paper questionnaires or tests such as the Clinical Dementia Rating, 7 which was used, for example, in the clinical trials of the recently approved drugs aducanumab and lecanemab.8,9 Because of the properties of RMTs, it is expected that they can measure cognition and function more reliably and continuously and are more sensitive to change in the early stages of AD. Previous studies have mainly examined the feasibility and usability of assistive technologies10–15 or activity trackers 16 in people with dementia and caregivers, or of digital assessments in cognitively normal participants only. 17 Information on the feasibility and usability of RMT protocols that combine multiple sensors to measure functioning both passively and actively across all stages of AD is missing. This information is vital for researchers designing future studies using RMTs.

The aim of the current study was to assess the feasibility and usability of a protocol using multiple RMTs. Feasibility was assessed using compliance and drop-out rates, and usability using bi-weekly semi-structured interviews. This was done across RMTs, in all syndromic stages of AD and in healthy control participants.

Methods

The RADAR-AD study

The project Remote Assessment of Disease and Relapse – Alzheimer's Disease (RADAR-AD) 18 is a cross-sectional observational study that aims to find and validate RMTs for assessing cognitive and functional decline in multiple stages of AD. The study consists of two tiers: tier 1 is the main study and includes smartphone apps and wearables, while tier 2 is a sub-study and includes at-home sensors (Table 1). Tier 1 is an 8-week RMT protocol asking participants to wear two activity trackers and, optionally, a wearable camera, and to use three smartphone apps, among which an active app with a daily schedule of 4–8 tasks per day and a parallel schedule for caregivers. Tier 2 is an optional 4-week sub-study protocol using fixed at-home sensors, a wearable EEG sleep device, and a device to monitor driving behavior (Table 2). RADAR-AD is therefore in a unique position to assess the usability and feasibility of different RMTs in different stages of AD. The rationale for the assessed functional domains and their related RMTs is discussed by Owens et al. 19 and Muurling et al. 18 The wearable camera was only approved for use in Amsterdam, Brescia, Bucharest, Geneva, Ljubljana, London, Oxford, Stavanger, and Thessaloniki. 20 The wearable camera provides background information on which activities are done during the day, using established algorithms for annotation of the photos,21,22 while the wearables only provide information on how active participants are. Participants from the sites in Oxford, London, Stavanger and Amsterdam were asked to take part in the optional tier 2 sub-study, which could be done either in parallel with tier 1 or after the end of the tier 1 protocol. The developers of the Altoida and Mezurio apps and of the wearable camera were involved in the RADAR-AD consortium, while the other developers were not.

Table 1.

Protocol characteristics of the main and sub-study of the RADAR-AD project.

Tier 1 (main study) Tier 2 (sub-study)
Remote monitoring technologies Wearables and smartphone apps At-home sensors
Sample size N = 229 N = 45
Duration 8 weeks 4 weeks

Table 2.

Remote monitoring technologies used in the RADAR-AD study.

RMT Description Passive/active Participant instructions Notes
Tier 1 – Study participant
Altoida Augmented reality app to measure spatial navigation and memory 23 . Active Complete task once per week (15 min). Only for participants who were able to do the task independently during a baseline test in the clinic. Mild-to-moderate AD participants were not tested.
Axivity AX3 Wrist-worn activity tracker to measure sleep and activities 24 . Passive After 4 weeks, switch to wearing the second device provided by the researchers. Can be worn at all times, even during swimming. Compliance was not monitored.
Fitbit Charge 3 Wrist-worn activity tracker to measure sleep, heart rate and number of steps 25 . Passive Charge once per week. Take off when swimming. Can be worn at all times, except for during swimming. Compliance was monitored real-time, and researcher contacted participant if no data was coming in for >2 days in a row.
Mezurio Smartphone app with daily cognitive tasks and questionnaires on mood and sleep 26 . Active Complete new task 2–3 times per day (5–10 min per day). Study partner also installed Mezurio on their own phone and received their own schedule with daily questionnaires (no cognitive tasks).
Wearable camera Neck-worn camera that takes a photo every 15 s to capture context and background information on which activities are actually done during the day 21 . Passive Wear 3 times, each time on 2 consecutive self-chosen days. Optional, not available for participants in Spain, Sweden, Germany and Portugal, due to rejection by the ethical committees at those sites 20 .
Tier 1 – Study partner
Mezurio Smartphone app with daily questionnaires on mood and sleep of the participant 26 . Active Complete new questionnaires twice per day (5 min per day).
Amsterdam-iADL Tablet/PC app with weekly questionnaires on the daily life functioning of the participant. Active Complete new questionnaire once per week (10 min per week).
Tier 2
CANedge Car logger attached to OBD port of a car to monitor driving behavior 27 . Passive Attach when participant is driving. Only for participants who drive a car (except electric cars or cars older than 2008)
Dreem EEG headband to measure brain wave activity, heart rate and breathing rate during the night 28 . Passive Wear every night. Start recording when going to sleep, and stop recording when waking up. Compliance was monitored real-time and researcher contacted participant if no data or bad quality data was coming in for >2 days in a row.
Fibaro In-home sensors consisting of 3 motion sensors, 2 door sensors and 5 wall plugs to monitor in-home activity patterns 29 Passive Do not move the sensors.

Note. An active RMT means that specific tasks performed by the participant were required to collect data. The Axivity could only store 4 weeks of data, and therefore had to be switched to a new device after 4 weeks. OBD: on board diagnostics; EEG: electroencephalogram.

Participants

Healthy control participants and participants with AD were enrolled in the RADAR-AD study and divided into four study groups based on the Mini-Mental State Examination (MMSE), 30 Clinical Dementia Rating (CDR) 7 and amyloid status, measured using either cerebrospinal fluid or a positron emission tomography scan (Table 3). If MMSE and CDR were conflicting, the CDR defined the study group, and the group was confirmed using the results of neuropsychological tests and questionnaires. Inclusion criteria were being older than 50 years of age, owning a smartphone, having a study partner available, and being able to read and communicate in the local language. Exclusion criteria were the presence of an additional neurological or psychiatric disease, or any other kind of disorder that may affect activities of daily living, mobility or social interactions. An additional inclusion criterion for tier 2 was a stable Wi-Fi connection in the participant's home. Participants and their study partners gave written informed consent before the start of the study. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008. The protocol was approved by the local Ethics Committee in each separate country. 20

Table 3.

Inclusion criteria per study group.

Study group MMSE CDR Amyloid N
Healthy control > 27 0 Negative 70
Preclinical AD > 26 0 Positive 36
Prodromal AD > 23 0.5 Positive 65
Mild-to-moderate AD > 17 > 0.5 Positive (or APOE e4 carrier) 56

Note. Amyloid status was measured using cerebrospinal fluid or an amyloid PET scan. MMSE: Mini-mental state examination; CDR: clinical dementia rating.
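For illustration, the group-assignment rule in Table 3 can be written as a simple decision function. This is a minimal sketch with hypothetical inputs, not the study's actual screening code; as described above, conflicts between MMSE and CDR were resolved using the CDR and confirmed with neuropsychological tests and questionnaires.

```r
# Minimal sketch of the Table 3 assignment rule (hypothetical inputs; the study
# resolved MMSE/CDR conflicts using the CDR plus confirmatory testing).
assign_group <- function(mmse, cdr, amyloid_positive, apoe_e4 = FALSE) {
  if (mmse > 27 && cdr == 0   && !amyloid_positive) return("Healthy control")
  if (mmse > 26 && cdr == 0   &&  amyloid_positive) return("Preclinical AD")
  if (mmse > 23 && cdr == 0.5 &&  amyloid_positive) return("Prodromal AD")
  if (mmse > 17 && cdr >  0.5 && (amyloid_positive || apoe_e4)) return("Mild-to-moderate AD")
  NA_character_  # does not meet the criteria of any study group
}

assign_group(mmse = 25, cdr = 0.5, amyloid_positive = TRUE)  # "Prodromal AD"
```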

Study procedures

The study started with a baseline visit to the clinic, during which a short physical examination and a neuropsychological examination were performed, together with questionnaires about sleep, social functioning and activities of daily living. The smartphone apps were installed on the participant's phone, the devices were handed out, and the device usage was explained. Participants received a study manual, which summarized the instructions and explained how the apps and devices worked (Supplement 1 and 2). After the baseline visit, the participants wore two activity trackers (Axivity AX3 and Fitbit Charge 3), used the Mezurio app daily (see Supplement 3 for the task schedule and Lancaster et al. 31 for a detailed description of the tasks), used the Altoida app weekly, and optionally used the wearable camera on 6 self-chosen days. Every two weeks, a local researcher called the participant to ask about usability and solve any potential technical issues. Data from the Fitbit were monitored in real time, and participants were contacted if no data was coming in for more than two days in a row. After 8 weeks, a close-out visit was scheduled either by phone or in person, during which the same questions as during the bi-weekly phone calls were asked. All apps were then uninstalled from the phone and the devices were returned to the researcher.

If participants agreed to participate in tier 2, a home visit was scheduled to install the sensors. During the home visit, the Fibaro in-home sensors and CANedge car logger were installed, and use of the Dreem headband was explained. Data completeness of Dreem was monitored in real-time during the 4-week data collection. Participants were contacted when no data or bad quality data was coming in for more than two days in a row.

Outcomes

Feasibility was assessed using compliance and drop-out rates. Compliance was assessed as the percentage of wear time for Axivity and Fitbit, the number of days on which the device was used for the wearable camera and Dreem, and compliance and commitment for Mezurio and Altoida. For the Axivity and Fitbit wearables, wear time was calculated in hours using the manufacturer's algorithm, meaning that 100% wear time corresponded to a participant wearing the device 24 h per day for 56 days. For the Dreem device, compliance was calculated in number of nights, meaning that 100% compliance corresponded to a participant using the Dreem device for 28 nights. A night was counted as a night when there was at least 1 h of data. For the Mezurio and Altoida apps, compliance was defined as the task completion percentage, calculated as the number of completed tasks divided by the number of possible tasks (i.e., how reliably did they do what was asked?). Commitment was the study duration completion percentage, calculated as the total duration of participation in days divided by the total possible duration of 8 weeks (i.e., how far through the study protocol did they get?). No compliance rates were calculated for the passive sensors Fibaro and CANedge. A drop-out was defined as someone who withdrew from the study before the data collection period ended (8 weeks in tier 1 or 4 weeks in tier 2). A participant could stop using one of the RMTs without withdrawing from the complete study; for example, when the Altoida app did not work properly on the participant's phone, the participant could continue with the other RMTs without this being counted as a drop-out. Data from participants who dropped out of the study were still included in the results.
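As a worked illustration of these definitions, the sketch below computes each rate for one hypothetical participant. The function names and example numbers (e.g., 112 possible Mezurio tasks over 8 weeks) are assumptions for illustration, not values from the study pipeline.

```r
# Minimal sketch of the compliance definitions above (hypothetical inputs,
# not the RADAR-AD pipeline). All rates are expressed as percentages.

# Wearables (Axivity, Fitbit): 100% = worn 24 h/day for the 56-day tier 1 period
wear_time_pct <- function(hours_worn) 100 * hours_worn / (24 * 56)

# Dreem: 100% = 28 nights, where a night counts only if it holds at least 1 h of data
dreem_compliance_pct <- function(hours_per_night) 100 * sum(hours_per_night >= 1) / 28

# Active apps (Mezurio, Altoida):
# compliance = completed tasks / possible tasks ("did they do what was asked?")
# commitment = days in study / 56 possible days ("how far did they get?")
app_compliance_pct <- function(tasks_done, tasks_possible) 100 * tasks_done / tasks_possible
commitment_pct     <- function(days_in_study) 100 * days_in_study / (8 * 7)

# Example: 1100 h of Fitbit wear, 20 valid headband nights of 7 h each,
# 90 of 112 Mezurio tasks completed, and 50 days in the study
wear_time_pct(1100)              # ~81.8%
dreem_compliance_pct(rep(7, 20)) # ~71.4%
app_compliance_pct(90, 112)      # ~80.4%
commitment_pct(50)               # ~89.3%
```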

Usability was addressed using the bi-weekly interview data. The bi-weekly calls and close-out visit consisted of semi-structured interviews (Appendix 1), meaning that a set of predefined questions was asked per RMT. Additional open questions were asked when the researcher felt this was needed. Questions addressed topics such as difficulty understanding the instructions, discomfort, technical issues or frustration with the length of time the app or device had to be used (Appendix 1). The answers were given on a Likert-type scale and treated as continuous variables: never = 0, sometimes = 1, often = 2, always = 3. The answers per RMT were summed and divided by the number of questions, so that a total score of 0 represented no problems on any question/topic, and a score of 3 represented problems occurring always on all questions/topics. To avoid biases due to missed phone calls or drop-outs, the RMT scores from all phone calls were averaged. RMTs were also categorized into three RMT categories: active apps (Mezurio and Altoida), wearables (Fitbit, Axivity, wearable camera, and Dreem), and passive sensors (Fibaro and CANedge). Problem rates per RMT category were calculated as the average problem rate over the RMTs in that category.
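As an illustration of this scoring, the sketch below codes the Likert answers, averages them within one call per RMT, and then averages over calls. The data layout is a hypothetical example, not the study's analysis code.

```r
# Minimal sketch of the problem-rate scoring (hypothetical data layout).
# Answers are coded never = 0, sometimes = 1, often = 2, always = 3, averaged
# over the questions of one RMT within a call, and then averaged over all calls
# so that missed calls or drop-outs do not bias the score.
score_answer <- function(x) {
  c(never = 0, sometimes = 1, often = 2, always = 3)[x]
}

problem_rate <- function(interviews) {
  # 'interviews' is a list with one character vector of answers per phone call
  per_call <- vapply(interviews, function(ans) mean(score_answer(ans)), numeric(1))
  mean(per_call)  # 0 = never any problem, 3 = always problems with every topic
}

# Example: two bi-weekly calls about the Fitbit (5 topics each)
calls <- list(
  c("never", "never", "sometimes", "never", "never"),   # call mean 0.2
  c("never", "often", "sometimes", "never", "never")    # call mean 0.6
)
problem_rate(calls)  # 0.4
```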

Statistical analyses

Groups were compared on demographics using ANOVA or chi-square tests, as appropriate. For each study group separately, the problem rates per RMT were compared using linear models (one model per study group), with the problem rates from the bi-weekly calls as the dependent variable and the RMT as the independent variable. Group comparisons per RMT were also conducted using linear models (one model per RMT), with the problem rates from the bi-weekly calls as the dependent variable and the study group as the independent variable. All models were corrected for age, sex, and years of education. Statistical analyses were performed in R (version 4.2.1). A p < 0.05 was considered significant.
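The two model families can be summarized as below. This is a minimal sketch assuming a hypothetical long-format data frame (one problem rate per participant per RMT), not the original analysis script.

```r
# Minimal sketch of the linear models described above. 'df' is a hypothetical
# long-format data frame with columns: problem_rate, rmt, study_group, age,
# sex, education_years (one row per participant per RMT).

# Per RMT: do problem rates differ between study groups,
# adjusting for age, sex and years of education?
fit_group <- lm(problem_rate ~ study_group + age + sex + education_years,
                data = subset(df, rmt == "Fitbit"))
summary(fit_group)

# Per study group: do problem rates differ between RMTs?
fit_rmt <- lm(problem_rate ~ rmt + age + sex + education_years,
              data = subset(df, study_group == "Prodromal AD"))
summary(fit_rmt)
```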

Results

We included 229 participants (Table 4) in the four pre-defined study groups. Groups did not differ in age (p = 0.09), sex (p = 0.22) or years of education (p = 0.15). A total of n = 23 participants took part in the main study and the sub-study in parallel.

Table 4.

Demographics per study group.

Healthy control Preclinical AD Prodromal AD Mild-to-moderate AD P-value
Tier 1
N 69 39 65 56
Age 67 (8) 71 (6) 70 (8) 70 (9) 0.09
Male, n (%) 31 (45%) 16 (41%) 38 (59%) 31 (55%) 0.22
Years of education 14 (4) 16 (3) 15 (5) 14 (4) 0.15
MMSE 29 (1) 29 (1) 27 (2) 22 (3) <0.001
Site, n (%)
- Amsterdam 20 (29.0%) 14 (35.9%) 12 (18.5%) 15 (26.8%)
- London 5 (7.2%) 0 (0.0%) 4 (6.2%) 8 (14.3%)
- Oxford 6 (8.7%) 3 (7.7%) 2 (3.1%) 0 (0.0%)
- Stockholm 0 (0.0%) 3 (7.7%) 7 (10.8%) 1 (1.8%)
- Thessaloniki 6 (8.7%) 0 (0.0%) 6 (9.2%) 8 (14.3%)
- Bucharest 0 (0.0%) 1 (2.6%) 1 (1.5%) 0 (0.0%)
- Ljubljana 4 (5.8%) 2 (5.1%) 2 (3.1%) 2 (3.6%)
- Lisbon 7 (10.1%) 0 (0.0%) 8 (12.3%) 5 (8.9%)
- Brescia 6 (8.7%) 0 (0.0%) 4 (6.2%) 1 (1.8%)
- Geneva 2 (2.9%) 5 (12.8%) 3 (4.6%) 2 (3.6%)
- Mannheim 0 (0.0%) 1 (2.6%) 5 (7.7%) 3 (5.4%)
- Barcelona 8 (11.6%) 3 (7.7%) 6 (9.2%) 3 (5.4%)
- Stavanger 5 (7.2%) 7 (17.9%) 5 (7.7%) 8 (14.3%)
Tier 2
N 11 12 10 12
Age 72 (7) 69 (5) 69 (8) 66 (8) 0.26
Male, n (%) 8 (73%) 4 (33%) 8 (80%) 9 (75%) 0.07
Years of education 15 (4) 16 (3) 17 (3) 16 (4) 0.78
MMSE 30 (1) 29 (1) 28 (1) 22 (3) <0.001
Site, n (%)
- Amsterdam 7 (64%) 7 (58%) 7 (70%) 9 (75%)
- London 0 (0%) 0 (0%) 0 (0%) 0 (0%)
- Oxford 2 (18%) 1 (8%) 0 (0%) 0 (0%)
- Stavanger 2 (18%) 4 (33%) 3 (30%) 3 (25%)

Note. Numbers show mean (SD) unless specified otherwise. The p-values are from ANOVA or chi-square test when appropriate. MMSE: Mini-Mental State Examination.

Feasibility - compliance rates

Compliance rates per study group were overall high across RMTs (Table 5). Although differences were small, wear time of the wearables, Mezurio compliance and the number of participants with Altoida data seemed to decrease with disease severity. The Fibaro sensors were not installed for some tier 2 participants for the following reasons: technical reasons (n = 2), the participant and/or partner did not want the sensors (n = 4), or someone other than the participant in the home had dementia (n = 1). The Dreem headband was not used by 1 participant, because this participant had concerns about their sleep. The CANedge device was not installed in the car because the participant did not drive due to cognitive problems (n = 5), had no driving license (n = 1), did not own a car (n = 1), did not drive for other reasons (n = 1), or for technical reasons (n = 12). For n = 4 participants, the CANedge was installed but did not produce any data. These technical issues were mainly due to different car types: cars older than 2008 and electric vehicles were not compatible.

Table 5.

Compliance rates per remote monitoring technology per study group.

Healthy control Preclinical AD Prodromal AD Mild-to-moderate AD P-value
Tier 1
Total N 69 39 65 56
Wearable camera
 Participants, n (%) 28 (57%) 18 (60%) 10 (29%) 15 (35%)
 Wear time, hours 14 (10–15) 15 (13–18) 16 (14–22) 14 (6–18) 0.55
Axivity
 Participants, n (%) 60 (87%) 32 (82%) 56 (86%) 44 (79%)
 Wear time, % 57 (49–98) 95 (50–99) 85 (50–99) 52 (45–95) 0.21
Fitbit
 Participants, n (%) 69 (100%) 39 (100%) 59 (91%) 53 (95%)
 Wear time, % 92 (84–97) 94 (85–96) 84 (67–96) 83 (43–93) 0.01
Mezurio
 Participants, n (%) 69 (100%) 37 (95%) 60 (92%) 47 (84%)
 Commitment, % 100 (92–100) 100 (95–100) 97 (92–100) 97 (65–100) N/A
 Compliance, % 92 (81–96) 94 (81–97) 87 (78–94) 83 (58–92) N/A
Altoida
 Participants, n (%) 41 (59%) 19 (49%) 22 (34%) N/A
 Compliance, % 75 (50–112) 75 (50–100) 63 (38–88) N/A 0.09
Tier 2
Total N 11 12 10 12
Fibaro
 Participants, n (%) 8 (73%) 10 (83%) 9 (90%) 11 (92%)
Dreem
 Participants, n (%) 11 (100%) 11 (92%) 10 (100%) 11 (92%)
 Compliance, % 93 (88–93) 93 (64–102) 89 (83–93) 84 (62–93) 0.65
CANedge
 Participants, n (%) 4 (36%) 7 (58%) 4 (40%) 6 (50%)

Note. Number of participants are given as n (%), while compliance or wear time is given as median (Q1-Q3). The participant n's show the number of participants for which data was available. Data from participants that dropped out of the study is still included in the results. Wearable camera percentages are not from the total sample, but from the number of participants that were asked to wear a camera (thus excluding the sites in Barcelona, Lisbon, Brescia, Mannheim and Stockholm). 100% wear time for the Axivity and Fitbit means 24 h a day for 56 days, 100% compliance of Altoida tasks means 8 at-home tasks completed (once per week), 100% compliance for Dreem means 28 nights with a night meaning at least one hour of data. The p-value shows the result from the ANOVA to test group differences for wear time and compliance.

Feasibility - drop-outs

The total number of drop-outs was 17 (7.5%) for tier 1 (Table 6). The ‘other’ drop-out reasons included concerns about data privacy (n = 1), forgetting to wear the watches and use the apps (n = 1), finding that participation reminded them too much of their cognitive decline (n = 1), and finding participation too stressful (n = 2).

Table 6.

Number of drop-outs per study group for the main study (tier 1) and their characteristics.

HC PreAD ProAD MildAD
n (% from total sample) 3 (4%) 4 (11%) 4 (6%) 6 (11%)
Age 66 (6) 73 (5) 67 (7) 69 (7)
Male, n(%) 1 (33) 2 (50) 1 (25) 4 (67)
Years of education 11 (3) 16 (1) 14 (2) 14 (2)
MMSE 28 (1) 29 (1) 26 (1) 25 (4)
Study duration in days 15 (5) 17 (11) 13 (8) 27 (12)
Main drop-out reason, n
- Discomfort from wearing one or both wristbands 3 0 1 2
- Frustrated about daily Mezurio tasks 0 1 0 1
- Medical reason 0 1 0 0
- Study partner wanted or needed to stop with the study 0 0 1 2
- Other 0 2 2 1

Note. Numbers show mean (SD) unless specified otherwise. HC: healthy control; PreAD: Preclinical AD; ProAD: Prodromal AD; MildAD: Mild-to-moderate AD; MMSE: Mini-Mental State Examination.

For tier 2, there were 2 (4.4%) drop-outs. One mild-to-moderate AD participant used only the Dreem headband and experienced discomfort from it, and one preclinical AD participant mentioned that participation was too stressful and gave the feeling of being under surveillance because of the Fibaro sensors.

Usability

Problem rates per study group are presented in Figure 1 and Table 7, and in more detail per RMT in Figure 2. Statistical differences are discussed here, while complete outcomes of the statistical models are included in Appendix 2. Overall, Altoida and Dreem showed the highest problem rates for all groups. For the healthy control group, most problems were observed with the Altoida app, followed by Dreem and the wearable camera. For the preclinical AD group, again most problems were seen for Altoida, followed by Dreem, wearable camera, and Mezurio (Appendix 2, Supplemental Table S1). For the prodromal AD group, most problems were seen for Altoida and Dreem, followed by Mezurio. Mild-to-moderate AD participants did not use the Altoida app, and therefore most problems for the mild-to-moderate AD group were seen for Dreem, followed by Mezurio.

Figure 1.

Indications of problems with the different remote monitoring technologies (RMTs). 0 means no problems, with higher scores indicating more problems. HC: healthy control; PreAD: preclinical AD; ProAD: prodromal AD; MildAD: mild-to-moderate AD; N: number of participants that answered questions regarding that RMT during at least one phone call.

Table 7.

Problem rates for all study groups.

All HC PreAD ProAD MildAD
Fitbit 0.19 (0.32) 0.13 (0.21) 0.12 (0.13) 0.20 (0.41) 0.31 (0.38)
Axivity 0.21 (0.32) 0.17 (0.24) 0.18 (0.15) 0.24 (0.45) 0.25 (0.31)
Camera 0.28 (0.41) 0.24 (0.33) 0.32 (0.14) 0.30 (0.73) 0.29 (0.36)
Mezurio 0.31 (0.41) 0.20 (0.26) 0.31 (0.32) 0.39 (0.53) 0.37 (0.45)
Altoida 0.60 (0.68) 0.46 (0.62) 0.74 (0.63) 0.73 (0.78) N/A
Dreem 0.49 (0.43) 0.27 (0.23) 0.58 (0.61) 0.59 (0.23) 0.52 (0.47)
Fibaro 0.11 (0.25) 0.07 (0.12) 0.20 (0.41) 0.06 (0.12) 0.09 (0.22)
CANedge 0.01 (0.05) 0.05 (0.08) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)

Note. All problem rates, with 0 meaning no problems, and higher scores indicating more problems. Numbers show mean (SD). HC: healthy control; PreAD: preclinical AD; ProAD: prodromal AD; MildAD: mild-to-moderate AD.

Figure 2.

Problems per remote monitoring technology per study group. The horizontal axis shows the percentage of participants giving a certain answer to the corresponding question/topic. The data are means over all calls together, to avoid biases due to drop-outs or missed phone calls. The data presented here were not analyzed statistically.

Compared to healthy controls, preclinical AD participants had more problems with Axivity, Fitbit, Mezurio, Altoida, and the wearable camera; prodromal AD participants had more problems with Mezurio, Altoida and the Dreem device; and the mild-to-moderate AD group had more problems with Axivity, Fitbit, and Mezurio (Figure 1, Appendix 2, Supplemental Table S2). Compared to both healthy controls and preclinical AD participants, mild-to-moderate AD participants experienced more problems with Fitbit. These problems were mostly related to discomfort wearing the watch and difficulty understanding the watch instructions. No group differences were seen for Fibaro and CANedge.

Discussion

Our aim was to assess the feasibility and usability of an 8-week RMT protocol in participants in different stages of AD. Compliance rates declined with disease severity but remained high, and the complex protocol was feasible in all groups. Usability was overall positive, although more problems were reported when more interaction with the RMT was needed.

Feasibility

Recruitment targets were met for both tier 1 and tier 2, with a low drop-out rate in all study groups (<7.5% overall). This shows that an RMT protocol with multiple apps and wearables is feasible in an AD population. The different recruitment rates per study site can be explained by the enrollment period coinciding with the COVID-19 pandemic: healthcare facilities had to close for different periods of time in different countries. For tier 2, the low recruitment rate in the United Kingdom (n = 3) was unexpected, even though all tier 1 participants from the London and Oxford sites were asked to participate in tier 2 as well, suggesting that willingness to take part in this type of research still differs between countries. Unfortunately, no information is available on why participants declined to participate in tier 2. Overall compliance was high, despite the higher age of our study population, confirming the findings of a large meta-analysis 32 showing that older age increases adherence. It is important to note that the intensive interaction between researcher and participant (i.e., bi-weekly phone calls), between study partner and participant, and the real-time monitoring of several RMTs contributed to higher engagement and commitment in our study.

Usability

In general, fewer problems were reported when less active interaction with the RMT was required, which confirms the findings of a previous study showing that the most accepted and usable devices in a dementia population were passive devices. 33 Participants reported more problems with active RMTs than with wearables, and more with wearables than with passive sensors. Although these findings were consistent with prior research, 33 when examined in more detail, we found more substantial differences between devices within these categories than between the categories themselves. For example, the Altoida app and the Dreem headband both had substantially higher problem rates than the Mezurio active app, even though Mezurio required by far the most interaction of any RMT in the study. It may therefore be more sensible to evaluate the design of each RMT individually rather than make assumptions about acceptability based on simple categories such as active versus passive.

Problems increased with disease stage, especially for the Mezurio app and Fitbit. We noticed that the help of the study partner was crucial in keeping the cognitively impaired participants engaged with the RMTs, as was also found in another study testing an assistive system prototype. 13 The absence of a study partner might also explain why some participants from the prodromal and mild-to-moderate AD groups dropped out. The mild-to-moderate AD group showed more difficulties with the two activity trackers compared with all other groups, mainly because of discomfort wearing the watch. This was also reflected in the lower wear time of the activity trackers in this group. During the interviews, participants often mentioned that they would be happy to take the watches off. The Axivity wristband was most frequently implicated in this respect, as it was made from a stiff material that was unpleasant for many participants. A previous study using wrist-worn activity trackers showed that using activity trackers in a dementia population was feasible, but that study used shorter measuring periods. 16 Future studies with longer measuring periods should therefore take into account that activity trackers can be burdensome in a dementia population. On the other hand, n = 8 participants (2 from each study group) mentioned that they liked the Fitbit so much that they wanted to buy one themselves after the study was finished.

A concern raised by members of the RADAR-AD patient advisory board before the study started was possible stigmatization because of wearing the RMTs. 34 Fortunately, none of the participants mentioned experiencing unpleasant situations with other people, although one participant mentioned not wearing the devices at work to avoid stigma. The watches were sometimes noticed by others, especially the Axivity, as this device has no screen and looks different from a usual watch. Only one participant mentioned that other people commented on the at-home sensors. The camera, however, led to some discomfort for several participants because it photographs others without their awareness. For example, none of the Geneva site participants agreed to wear the camera, due to confidentiality and privacy concerns. In the end, none of the participants experienced problems with people who did not want to be photographed. Still, this device was noticed the most by other people, and sometimes an explanation of the device had to be given, or participants were asked to turn off the camera or cover the lens. Some participants mentioned that they only wanted to wear the camera when there were no other people around, or only indoors. In the mild-to-moderate AD group, study partners often had to remind participants to turn off the camera while doing private activities, such as using the toilet, and to turn the camera back on after finishing that activity. At the end of the study, participants and their partners were therefore given the opportunity to go through their photos and delete photos if they wished.

Recommendations for future RMT protocols

The key message for future research is that a complex RMT protocol is feasible, even in a mild-to-moderate AD dementia population. We recommend that future RMT studies carefully consider the needs of the study before choosing an RMT, that RMTs are included that can be monitored in real time, and that consideration is given to mandating the participation of a study partner.

Firstly, to determine the needs of a study, the user perspective needs to be considered. Passive sensors are less problematic than wearables, which are in turn less bothersome than active apps. Especially in longer trials, or when the possibility of giving (research) support during a trial is low, RMTs requiring less active interaction from the user are therefore preferable. However, researchers need to balance many additional factors, including the quality and value of the signal from each RMT, the cost of the RMT, the cost of the study team required to set it up, the target population, and the scale of the study. 19 This study showed that all types of RMTs can successfully collect data, although cognitive and functional impairment of participants introduced challenges. This makes it important to evaluate each RMT individually on its own merits and on its fit with the study's needs. Secondly, RMTs whose data can be monitored in real time are preferable to sensors that only collect data offline. For example, in the current study, the wear time of the real-time monitored Fitbit activity tracker was higher than that of the Axivity, from which the data could only be downloaded after the 8-week data collection period. Any technical problems with data collection or study team errors leading to data loss could only be detected after the data collection period and could therefore not be resolved in time. For Fitbit, real-time feedback was also provided to the participant, which might have increased adherence and helped researchers to detect when something went wrong. A last recommendation is the involvement of a study partner, particularly for cognitively impaired participants. The study partner can help with using the apps and with turning on and charging the devices, which might be too difficult for participants in the later stages of AD.

Strengths and limitations

One of the strengths of this study was the wide variety of RMTs included, from active to passive, from continuous to periodic, and from at-home sensors to body-worn sensors to smartphone apps. This variety made it possible to compare different types of RMTs in people living with different levels of functional impairment. Another strength was the representation of well-phenotyped participants in all stages of AD. Moreover, the large number of sites across Europe increases the generalizability of the study. However, some limitations were noticed as well. Firstly, one of the inclusion criteria was smartphone ownership. This criterion could have led to a selection bias towards participants who were already more ‘tech savvy’ and more highly educated than people who do not own a smartphone. Although the use of smartphones is rising, differences due to age and geographical location are still present. 2 Within RADAR-AD, for some sites it was difficult to find older participants with cognitive impairment who owned a smartphone. Therefore, we decided that participants could also take part without a smartphone and only use the activity trackers and at-home sensors. This decision might have reduced the selection bias. Secondly, the perspective of the study partner was not assessed, even though the study partner also installed the Mezurio app and answered a questionnaire twice daily. Moreover, several people could not participate in the study because they did not have a study partner available, especially in the cognitively normal groups, which might have introduced a selection bias. Thirdly, the study was relatively short: clinical trials usually last over 18 months, while the duration of this study was only 2 months. Longer measuring periods could have led to different results in terms of both compliance rates and reported problems. After 8 weeks, n = 27 (12%) participants mentioned that they were happy that the study was finished. On the other hand, the protocol was highly demanding due to the large number of RMTs, which might have led to lower compliance rates than if only one RMT had been offered. Fourthly, the study started after the start of the COVID-19 pandemic, which could have benefitted our results in two ways: older people and people with dementia used more technology during the pandemic to stay in touch with their friends and relatives, 35 which could have made them more familiar with using their smartphone than before, and people had more time to do the daily tasks because many other activities were restricted. It is encouraging that a study with a highly complex protocol such as RADAR-AD met its recruitment targets despite the pandemic. Fifthly, the questions asked during the bi-weekly interviews were different for each RMT. This might have influenced the results and therefore makes it difficult to interpret the comparisons between RMTs. For example, a question about technical issues was asked only for the smartphone apps. This issue was mitigated by averaging the answers to the questions, so that each RMT problem rate was represented on a similar scale, independent of the number of questions. Lastly, although we looked at compliance, we did not take data quality into account. It is possible that someone with a high wear time of the sleep headband has poor-quality data that cannot be used for analyses.

Conclusion

This study shows that a high-intensity protocol of active apps, wearables and passive sensors is feasible, even in a mild-to-moderate AD population, encouraging other researchers to use RMTs in their study designs. As impairment increases, the problem rates for active apps and wearables also increase, but generally not to the point of making the RMTs infeasible. We recommend evaluating the design of individual devices carefully before finalizing study protocols, considering RMTs which allow for real-time compliance monitoring, and engaging the partners of study participants in the research.

Supplemental Material

sj-pdf-1-dhj-10.1177_20552076241238133 – Supplemental material for "Feasibility and usability of remote monitoring in Alzheimer's disease" (Muurling et al., the RADAR-AD consortium) in DIGITAL HEALTH

sj-pdf-2-dhj-10.1177_20552076241238133 – Supplemental material for "Feasibility and usability of remote monitoring in Alzheimer's disease" (Muurling et al., the RADAR-AD consortium) in DIGITAL HEALTH

sj-pdf-3-dhj-10.1177_20552076241238133 – Supplemental material for "Feasibility and usability of remote monitoring in Alzheimer's disease" (Muurling et al., the RADAR-AD consortium) in DIGITAL HEALTH

Acknowledgements

The authors thank all RADAR-AD participants and clinical sites for their contribution. We thank all past and present RADAR-AD consortium members for their contribution to the project (in alphabetical order): Dag Aarsland, Halil Agin, Vasilis Alepopoulos, Alankar Atreya, Sudipta Bhattacharya, Virginie Biou, Joris Borgdorff, Anna-Katharine Brem, Neva Coello, Pauline Conde, Nick Cummins, Jelena Curcic, Casper de Boer, Yoanna de Geus, Paul de Vries, Ana Diaz, Richard Dobson, Aidan Doherty, Andre Durudas, Gul Erdemli, Amos Folarin, Suzanne Foy, Holger Froehlich, Jean Georges, Dianne Gove, Margarita Grammatikopoulou, Kristin Hannesdottir, Robbert Harms, Mohammad Hattab, Keyvan Hedayati, Chris Hinds, Adam Huffman, Dzmitry Kaliukhovich, Irene Kanter-Schlifke, Ivan Koychev, Rouba Kozak, Julia Kurps, Sajini Kuruppu, Claire Lancaster, Robert Latzman, Ioulietta Lazarou, Manuel Lentzen, Federica Lucivero, Florencia Lulita, Nivethika Mahasivam, Nikolay Manyakov, Emilio Merlo Pich, Peyman Mohtashami, Marijn Muurling, Vaibhav Narayan, Vera Nies, Spiros Nikolopoulos, Andrew Owens, Marjon Pasmooij, Dorota Religa, Gaetano Scebba, Emilia Schwertner, Rohini Sen, Niraj Shanbhag, Laura Smith, Meemansa Sood, Thanos Stavropoulos, Pieter Stolk, Ioannis Tarnanas, Srinivasan Vairavan, Nick van Damme, Natasja van Velthogen, Herman Verheij, Pieter Jelle Visser, Bert Wagner, Gayle Wittenberg, and Yuhao Wu.

APPENDIX 1

Questions asked during the bi-weekly calls

  • - Fitbit: Did you experience any of these problems when wearing the Fitbit Charge 3?
    • Forgetting to wear the watch
    • People's reaction to me wearing the watch
    • Discomfort wearing the watch
    • Difficulty understanding the watch instructions
    • Frustration with the length of time I had to wear the watch
  • - Axivity: Did you experience any of these problems when wearing the Axivity AX3?
    • Forgetting to wear the watch
    • People's reaction to me wearing the watch
    • Discomfort wearing the watch
    • Difficulty understanding the watch instructions
    • Frustration with the length of time I had to wear the watch
  • - Mezurio: Did you experience any of these problems when using the Mezurio app?
    • Remembering to use the app
    • The length of time it took to complete the daily task in the app
    • Completing all tasks
    • Difficulty understanding the instructions of the task
    • Technical issues preventing me from completing the task
  • - Altoida: Did you experience any of these problems when using the Altoida app?
    • Remembering to use the app
    • The length of time it took to complete the weekly task in the app
    • Completing all tasks
    • Difficulty understanding the instructions of the task
    • Technical issues preventing me from completing the task
  • - Camera: Did you experience any of these problems when wearing the camera?
    • Forgetting to wear the camera
    • People's reaction to me wearing the camera
    • Discomfort wearing the camera
    • Problems with the camera slipping/moving about
    • Difficulty understanding the camera instructions
    • Frustration with the length of time I had to wear the camera
    • Being uncertain whether the camera was turned on
  • - Dreem headband: Did you experience any of these problems when wearing the Dreem device?
    • Forgetting to wear the device
    • Discomfort wearing the device
    • Difficulty understanding the device instructions
    • Frustration with the length of time I had to wear the device
    • Do you have any comments about your experience of wearing the Dreem device?
  • - Fibaro: Did you experience any of these problems with Fibaro?
    • People's reaction to the sensors
    • Discomfort, having the feeling of being ‘watched’
    • Frustration with the length of time I had to use the sensors
    • Do you have any comments about your experience of using the Fibaro sensors?
  • - CANedge: Did you experience any of these problems when using the CANedge device?
    • Forgetting to use the device
    • People's reaction to me using the device
    • Discomfort using the device, having the feeling of being watched while driving
    • Frustration with the length of time I had to use the device
    • Do you have any comments about your experience with the CANedge device?

Appendix 2

Table S1.

Differences in problem rates between RMTs per study group.

Healthy control Preclinical AD Prodromal AD Mild-to-moderate AD
Fitbit vs Mezurio 0.06(0.04), p = 0.1042 0.14(0.05), p = 0.006** 0.12(0.05), p = 0.01* 0.04(0.05), p = 0.41
Fitbit vs Axivity 0.03(0.04), p = 0.3773 0.05(0.05), p = 0.27 0.02(0.05), p = 0.67 −0.04(0.05), p = 0.44
Fitbit vs Camera 0.09(0.04), p = 0.05 0.16(0.06), p = 0.009** 0.04(0.08), p = 0.60 −0.02(0.06), p = 0.78
Fitbit vs Altoida 0.20(0.04), p < 0.001*** 0.39(0.06), p < 0.001*** 0.31(0.06), p < 0.001*** N/A
Fitbit vs Dreem 0.13(0.07), p = 0.047* 0.30(0.07), p < 0.001*** 0.31(0.09), p = 0.001** 0.15(0.08), p = 0.06
Fitbit vs Fibaro −0.02(0.07), p = 0.77 0.05(0.08), p = 0.50 −0.09(0.10), p = 0.34 −0.16(0.08), p = 0.06
Fitbit vs CANedge −0.04(0.08), p = 0.60 −0.10(0.09), p = 0.27 −0.13(0.14), p = 0.37 −0.24(0.11), p = 0.03*
Mezurio vs Axivity −0.03(0.04), p = 0.46 −0.08(0.05), p = 0.09 −0.10(0.05), p = 0.04* −0.08(0.05), p = 0.12
Mezurio vs Camera 0.03(0.04), p = 0.51 0.02(0.06), p = 0.70 −0.08(0.08), p = 0.28 −0.06(0.06), p = 0.36
Mezurio vs Altoida 0.15(0.04), p < 0.001*** 0.26(0.06), p < 0.001*** 0.18(0.06), p = 0.002** N/A
Mezurio vs Dreem 0.08(0.07), p = 0.25 0.17(0.07), p = 0.02* 0.19(0.09), p = 0.047* 0.11(0.08), p = 0.18
Mezurio vs Fibaro −0.08(0.07), p = 0.29 −0.08(0.08), p = 0.26 −0.22(0.10), p = 0.028* −0.20(0.08), p = 0.018*
Mezurio vs CANedge −0.10(0.08), p = 0.23 −0.24(0.09), p = 0.01* −0.25(0.14), p = 0.08 −0.28(0.11), p = 0.0095**
Axivity vs Camera 0.06(0.04), p = 0.21 0.11(0.06), p = 0.08 0.02(0.08), p = 0.80 0.02(0.06), p = 0.75
Axivity vs Altoida 0.17(0.04), p < 0.001*** 0.34(0.06), p < 0.001*** 0.29(0.06), p < 0.001*** N/A
Axivity vs Dreem 0.10(0.07), p = 0.13 0.25(0.07), p < 0.001*** 0.29(0.09), p = 0.002** 0.19(0.08), p = 0.02*
Axivity vs Fibaro −0.05(0.07), p = 0.47 0.00(0.08), p = 0.97 −0.11(0.10), p = 0.25 −0.12(0.08), p = 0.14
Axivity vs CANedge −0.07(0.08), p = 0.37 −0.16(0.09), p = 0.09 −0.15(0.14), p = 0.30 −0.21(0.11), p = 0.06
Camera vs Altoida 0.12(0.05), p = 0.01* 0.23(0.07), p < 0.001*** 0.27(0.08), p = 0.002** N/A
Camera vs Dreem 0.05(0.07), p = 0.51 0.14(0.08), p = 0.08 0.27(0.11), p = 0.02* 0.17(0.09), p = 0.06
Camera vs Fibaro −0.11(0.08), p = 0.17 −0.11(0.08), p = 0.19 −0.13(0.11), p = 0.25 −0.14(0.09), p = 0.13
Camera vs CANedge −0.13(0.09), p = 0.13 −0.26(0.10), p = 0.009** −0.17(0.16), p = 0.28 −0.22(0.11), p = 0.05
Altoida vs Dreem −0.07(0.07), p = 0.32 −0.09(0.08), p = 0.24 0.00(0.10), p = 0.96 N/A
Altoida vs Fibaro −0.23(0.08), p = 0.003** −0.34(0.08), p < 0.001*** −0.40(0.10), p < 0.001*** N/A
Altoida vs CANedge −0.25(0.08), p = 0.004** −0.49(0.10), p < 0.001*** −0.44(0.15), p = 0.003** N/A
Dreem vs Fibaro −0.16(0.09), p = 0.09 −0.25(0.09), p = 0.007** −0.41(0.13), p = 0.002** −0.31(0.10), p = 0.003**
Dreem vs CANedge −0.18(0.10), p = 0.08 −0.40(0.11), p < 0.001*** −0.44(0.16), p = 0.007** −0.39(0.12), p = 0.002**
Fibaro vs CANedge −0.02(0.10), p = 0.84 −0.15(0.11), p = 0.16 −0.04(0.17), p = 0.83 −0.08(0.13), p = 0.51

Note. Numbers show beta (SE), p-value, without correction for multiple testing. N's for each RMT can be found in Figure 1. * indicates p < 0.05, ** indicates p < 0.01, *** indicates p < 0.001.

Table S2.

Differences in problem rates between study groups per RMT

HC vs PreAD: Axivity 0.05(0.02), p = 0.03*; Fitbit 0.00(0.02), p = 0.81; Mezurio 0.09(0.03), p = 0.001**; Altoida 0.15(0.05), p = 0.005**; Camera 0.09(0.04), p = 0.03*; Dreem 0.18(0.10), p = 0.08; Fibaro 0.07(0.06), p = 0.22; CANedge −0.04(0.03), p = 0.13
HC vs ProAD: Axivity 0.03(0.02), p = 0.12; Fitbit 0.03(0.02), p = 0.06; Mezurio 0.09(0.02), p < 0.001***; Altoida 0.14(0.05), p = 0.003**; Camera −0.01(0.04), p = 0.78; Dreem 0.23(0.09), p = 0.01*; Fibaro −0.03(0.05), p = 0.61; CANedge −0.02(0.03), p = 0.47
HC vs MildAD: Axivity 0.06(0.02), p = 0.003**; Fitbit 0.11(0.02), p < 0.001***; Mezurio 0.10(0.02), p < 0.001***; Altoida N/A; Camera 0.05(0.04), p = 0.17; Dreem 0.13(0.09), p = 0.16; Fibaro 0.01(0.05), p = 0.87; CANedge −0.02(0.03), p = 0.48
PreAD vs ProAD: Axivity −0.02(0.02), p = 0.44; Fitbit 0.03(0.02), p = 0.17; Mezurio 0.01(0.03), p = 0.84; Altoida −0.01(0.06), p = 0.92; Camera −0.10(0.05), p = 0.04*; Dreem 0.06(0.10), p = 0.57; Fibaro −0.10(0.06), p = 0.08; CANedge 0.02(0.03), p = 0.61
PreAD vs MildAD: Axivity 0.01(0.02), p = 0.60; Fitbit 0.10(0.02), p < 0.001***; Mezurio 0.02(0.03), p = 0.58; Altoida N/A; Camera −0.03(0.04), p = 0.43; Dreem −0.05(0.10), p = 0.62; Fibaro −0.06(0.05), p = 0.22; CANedge 0.02(0.03), p = 0.48
ProAD vs MildAD: Axivity 0.03(0.02), p = 0.14; Fitbit 0.07(0.02), p < 0.001***; Mezurio 0.01(0.03), p = 0.67; Altoida N/A; Camera 0.06(0.04), p = 0.14; Dreem −0.10(0.09), p = 0.25; Fibaro 0.04(0.05), p = 0.47; CANedge 0.00(0.03), p = 0.92

Note. Numbers show beta (SE), p-value, without correction for multiple testing. N's for each RMT can be found in Figure 1. * indicates p < 0.05, ** indicates p < 0.01, *** indicates p < 0.001.

Footnotes

JC is an employee and shareholder of Novartis. AD is supported by the Wellcome Trust [223100/Z/21/Z]. Research of Alzheimer center Amsterdam is part of the neurodegeneration research program of Amsterdam Neuroscience. Alzheimer Center Amsterdam is supported by Stichting Alzheimer Nederland and Stichting Steun Alzheimercentrum Amsterdam. DA has received research support and/or honoraria from Astra-Zeneca, H. Lundbeck, Novartis Pharmaceuticals, Biogen, and GE Health, and served as paid consultant for H. Lundbeck, Eisai, Heptares, Mentis Cura, and Roche Diagnostics. IK declares support for this work through the National Institute of Health Research (personal award and Oxford Health Biomedical Research Centre) and the Medical Research Council (Dementias Platform UK grant), and is a paid medical advisor for digital healthcare technology companies (Five Lives SAS and Cognetivity Ltd). All other authors declare that there is no conflict of interest.

Ethical approval: Each local ethics committee approved the study separately: Medisch Ethische Toetsingscommissie VUmc (2019.518), Drug Research Ethics Committee (CEIm) of Universitat International de Catalunya (MED-FACE-2020-07), Comitato Etico IRCCS Centro San Giovanni di Dio – Fatebenefratelli di Brescia, Commission cantonale d'éthique de la recherché (2022-00002), Comissão de Ética do Centro Académico de Medicina de Lisboa (388/19), London – West London & GTAC (Gene Therapy Advisory Committee) Research Ethics Committee (20/LO/0183), Ethics Committee II of the Ruprecht-Karls-University of Heidelberg (Medical Faculty Mannheim) (2020-508N), Regionale komiteer for medisinsk og helsefaglig forskningsetikk (98842), Swedish Ethical Review Authority (2020-03497), Ethics Committee of Medical Faculty of Aristotle University of Thessaloniki and Ethics Committee of Alzheimer Hellas (198/2018 AI).

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The RADAR-AD project has received funding from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement No 806999. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation programme and EFPIA and Software AG. See www.imi.europa.eu for more details. This communication reflects the views of the RADAR-AD consortium and neither IMI nor the European Union and EFPIA are liable for any use that may be made of the information contained herein. SG declares support for this work through the Italian Ministry of Health (Ricerca Corrente).

Guarantor: CB

Contributorship: MM did the statistical analyses and wrote the first draft. CB, CH, DA, and PJV were involved in protocol development, and supervising the project. MM, CH, AA, AD, VA, JC, AKB, and PC were involved in data processing, feature extraction, and data analysis. MM, CB, SK, XMA, VS, SG, EVL, SC, TS, MGK, DR, IV, ACH, ES, and MG were involved in gaining ethical approval, patient recruitment, and data collection. All authors reviewed and edited the manuscript and approved the final version of the manuscript.

ORCID iD: Marijn Muurling https://orcid.org/0000-0001-9397-4602

Supplemental material: Supplemental material for this article is available online.

References

1. Pew Research Center. Smartphone Ownership Is Growing Rapidly Around the World, but Not Always Equally, https://www.pewresearch.org/global/2019/02/05/smartphone-ownership-is-growing-rapidly-around-the-world-but-not-always-equally/ (2019).
2. CBS. ICT-gebruik van huishoudens en personen [ICT use by households and individuals], https://longreads.cbs.nl/ict-kennis-en-economie-2020/ict-gebruik-van-huishoudens-en-personen/ (2020, accessed 22 December 2022).
3. Izmailova ES, Ellis R, Benko C. Remote monitoring in clinical trials during the COVID-19 pandemic. Clin Transl Sci 2020; 13: 838–841.
4. De Angelis M, Lavorgna L, Carotenuto A, et al. Digital technology in clinical trials for multiple sclerosis: systematic review. J Clin Med 2021; 10: 2328.
5. Matcham F, Barattieri di San Pietro C, Bulgari V, et al. Remote assessment of disease and relapse in major depressive disorder (RADAR-MDD): a multi-centre prospective cohort study protocol. BMC Psychiatry 2019; 19: 1–11.
6. Cohen S, Cummings J, Knox S, et al. Clinical trial endpoints and their clinical meaningfulness in early stages of Alzheimer's disease. J Prev Alzheimers Dis 2022; 9: 507–522.
7. Morris JC. The Clinical Dementia Rating (CDR): current version and scoring rules. Neurology 1993; 43: 2412–2414.
8. Budd Haeberlein S, Aisen P, Barkhof F, et al. Two randomized phase 3 studies of aducanumab in early Alzheimer's disease. J Prev Alzheimers Dis 2022; 9: 197–210.
9. van Dyck CH, Swanson CJ, Aisen P, et al. Lecanemab in early Alzheimer's disease. N Engl J Med 2023; 388: 9–21.
10. Evans N, Boyd H, Harris N, et al. The experience of using prompting technology from the perspective of people with dementia and their primary carers. Aging Ment Health 2021; 25: 1433–1441.
11. Goodall G, André L, Taraldsen K, et al. Supporting identity and relationships amongst people with dementia through the use of technology: a qualitative interview study. Int J Qual Stud Health Well-being 2021; 16: 1920349.
12. Holthe T, Halvorsrud L, Karterud D, et al. Usability and acceptability of technology for community-dwelling older adults with mild cognitive impairment and dementia: a systematic literature review. Clin Interv Aging 2018; 13: 863.
13. König T, Pigliautile M, Águila O, et al. User experience and acceptance of a device assisting persons with dementia in daily life: a multicenter field study. Aging Clin Exp Res 2022; 34: 869–879.
14. Malmgren Fänge A, Carlsson G, Chiatti C, et al. Using sensor-based technology for safety and independence: the experiences of people with dementia and their families. Scand J Caring Sci 2020; 34: 648–657.
15. Sriram V, Jenkinson C, Peters M. Informal carers' experience of assistive technology use in dementia care at home: a systematic review. BMC Geriatr 2019; 19: 1–25.
16. Farina N, Sherlock G, Thomas S, et al. Acceptability and feasibility of wearing activity monitors in community-dwelling older adults with dementia. Int J Geriatr Psychiatry 2019; 34: 617–624.
17. Berron D, Ziegler G, Vieweg P, et al. Feasibility of digital memory assessments in an unsupervised and remote study setting. Front Digit Health 2022; 4: 892997.
18. Muurling M, de Boer C, Kozak R, et al. Remote monitoring technologies in Alzheimer's disease: design of the RADAR-AD study. Alzheimers Res Ther 2021; 13: 1–13.
19. Owens AP, Hinds C, Manyakov NV, et al. Selecting remote measurement technologies to optimize assessment of function in early Alzheimer's disease: a case study. Front Psychiatry 2020; 11: 582207.
20. Muurling M, Pasmooij AM, Koychev I, et al. Ethical challenges of using remote monitoring technologies for clinical research: a case study of the role of local research ethics committees in the RADAR-AD study. PLoS One 2023; 18: e0285807.
21. Doherty AR, Hodges SE, King AC, et al. Wearable cameras in health: the state of the art and future possibilities. Am J Prev Med 2013; 44: 320–323.
22. Doherty AR, Kelly P, Kerr J, et al. Using wearable cameras to categorise type and context of accelerometer-identified episodes of physical activity. Int J Behav Nutr Phys Act 2013; 10: 22.
23. Meier IB, Buegler M, Harms R, et al. Using a digital neuro signature to measure longitudinal individual-level change in Alzheimer's disease: the Altoida large cohort study. NPJ Digit Med 2021; 4: 1–9.
24. Doherty AR, Jackson D, Hammerla N, et al. Large scale population assessment of physical activity using wrist worn accelerometers: the UK Biobank study. PLoS One 2017; 12: e0169649.
25. Feehan LM, Geldman J, Sayre EC, et al. Accuracy of Fitbit devices: systematic review and narrative syntheses of quantitative data. JMIR Mhealth Uhealth 2018; 6: e10527.
26. Lancaster C, Koychev I, Blane J, et al. The Mezurio smartphone application: evaluating the feasibility of frequent digital cognitive assessment in the PREVENT dementia study. medRxiv 2019: 19005124.
27. CSS Electronics. CANedge1, https://www.csselectronics.com/products/can-logger-sd-canedge1 (2023).
28. Arnal PJ, Thorey V, Ballard ME, et al. The Dreem headband as an alternative to polysomnography for EEG signal acquisition and sleep staging. bioRxiv 2019: 662734.
29. Fibar Group S.A. www.fibaro.com (2023).
30. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state": a practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res 1975; 12: 189–198.
31. Lancaster C, Koychev I, Blane J, et al. Evaluating the feasibility of frequent cognitive assessment using the Mezurio smartphone app: observational and interview study in adults with elevated dementia risk. JMIR Mhealth Uhealth 2020; 8: e16142.
32. Pratap A, Neto EC, Snyder P, et al. Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants. NPJ Digit Med 2020; 3: 21.
33. Riikonen M, Mäkelä K, Perälä S. Safety and monitoring technologies for the homes of people with dementia. Gerontechnology 2010; 9: 32–45.
34. Stavropoulos TG, Lazarou I, Diaz A, et al. Wearable devices for assessing function in Alzheimer's disease: a European public involvement activity about the features and preferences of patients and caregivers. Front Aging Neurosci 2021; 13: 643135.
35. Chirico I, Giebel C, Lion K, et al. Use of technology by people with dementia and informal carers during COVID-19: a cross-country comparison. Int J Geriatr Psychiatry 2022; 37: 1–10.


Supplementary Materials

Supplemental material (sj-pdf-1-dhj-10.1177_20552076241238133, sj-pdf-2-dhj-10.1177_20552076241238133 and sj-pdf-3-dhj-10.1177_20552076241238133) for "Feasibility and usability of remote monitoring in Alzheimer's disease" by Marijn Muurling, Casper de Boer, Chris Hinds, Alankar Atreya, Aiden Doherty, Vasilis Alepopoulos, Jelena Curcic, Anna-Katharine Brem, Pauline Conde, Sajini Kuruppu, Xavier Morató, Valentina Saletti, Samantha Galluzzi, Estefania Vilarino Luis, Sandra Cardoso, Tina Stukelj, Milica Gregorič Kramberger, Dora Roik, Ivan Koychev, Ann-Cecilie Hopøy, Emilia Schwertner, Mara Gkioka, Dag Aarsland, Pieter Jelle Visser and the RADAR-AD consortium, in DIGITAL HEALTH.

