Abstract
Objective
To assess the feasibility of using smartphones to longitudinally collect objective behavior measures and establish the extent to which they can predict gold-standard depression severity in patients with ischemic stroke and transient ischemic attack (IS/TIA) symptoms.
Patients and Methods
Participants with IS/TIA symptoms were monitored in real-world settings using the Beiwe application for 8 or more weeks between March 1, 2024, and November 15, 2024. Depression symptoms were tracked via weekly Patient Health Questionnaire (PHQ)-8 surveys and monthly personnel-administered Montgomery–Åsberg Depression Rating Scale (MADRS) assessments; objective behavior was captured as weekly averages of smartphone sensor measures. Repeated measures correlation was used to assess associations between PHQ-8 scores and objective behavior measures, and linear mixed models were used to investigate how closely smartphone data predicted MADRS scores.
Results
Among enrolled participants (n=54), 35 completed the study (64.8%). PHQ-8 scores were associated with distance from home (r=0.173), time spent at home (r=−0.147) and PHQ-8 administration duration (r=0.151). Using demographic data and the most recent PHQ-8 scores, average root-mean-squared error for depression severity prediction across models was 1.64 with only PHQ-8 scores, 1.49 also including accelerometer and GPS data, and 1.36 also including PHQ-8 administration duration.
Conclusion
Smartphone sensors captured objective behavior measures in patients with IS/TIA. In predictive models, the accuracy of depression severity scores improved as measures from additional smartphone sensors were included. Future research should validate this decentralized, exploratory approach in a larger cohort. Our work is a step toward showing that real-world monitoring with active and passive data may triage patients with IS/TIA for efficient depression screening and provide digital mobility and response time endpoints.
Post-stroke depression (PSD) is an established risk factor for poor outcomes.1 Although one-third of stroke survivors develop PSD, screening for PSD is not routine.2 While PSD symptoms often improve in the first few months, most patients with PSD experience recurrent depression in subsequent years.3 Diagnosis is further complicated by characteristic post-stroke fatigue and neurologic deficits, which can also hinder longitudinal monitoring and follow-up.4
The recent reclassification of stroke in the International Classification of Diseases, 11th edition, as a neurologic disorder, rather than a circulatory one, has sparked interest in studying populations with overlapping behavioral symptoms, including related diagnoses of transient ischemic attack (TIA); yet, the range of symptoms associated with individual PSD cases complicates the designation of a tool for PSD screening.4, 5, 6 Despite the introduction of several such tools, they are rarely used in practice.7, 8, 9 Historically, the identification of PSD has relied on observation by clinical staff with specialized training in stroke-related psychiatric conditions; however, questions from the Patient Health Questionnaire (PHQ) have emerged as reliable longitudinal measures of PSD, helping triage patients for diagnostic assessment.10, 11, 12, 13
Commonly used in clinical practice, self-reported surveys, like the PHQ, reflect patient self-perception, and objectively measured data from wearable devices may augment or supplement these surveys.14 Powered by digital sensors in activity trackers and other devices, novel data streams of passive measurements are actively being applied to exploratory biomarker research in neurology.15 Research evaluating measures from digital sensors to better phenotype and predict disease trajectories in neurologic conditions is on the rise, and opportunities exist to validate digital solutions in routine clinical practice.16 Considering that smartphone ownership is nearly ubiquitous, modern smartphones may serve as practical remote monitoring devices, capturing high-quality data from built-in sensors.17 To date, a growing number of studies have demonstrated the feasibility of using wearable accelerometers to assess post-stroke function outside of clinical settings, but only a few have studied PSD, primarily focused on ischemic stroke (IS).17, 18, 19, 20
One such study by Bui et al21 required participants (n=212; 90% mild-moderate IS cases) to report activity and mood surveys via a smartphone application 5 times per day over a 2-week period. Of the 95% of participants who completed the study, most completed surveys at home (70%).22 Lau et al23 combined real-world smartphone and wearable accelerometer monitoring to deliver 1 week of surveys, 8 times per day, to patients with stroke (n=40; 70% IS cases), finding that accelerometer-derived physical activity (PA) was associated with self-reported positive affect and, therefore, may be a behavioral proxy for mood.21 The results of Ashizawa et al24 expand on the potential role of objectively measured PA as a proxy for mood. In a cohort of patients with minor IS (n=76), they applied a hybrid monitoring model, using a belt-worn accelerometer to measure in-hospital PA and delivering the Geriatric Depression Scale-15 survey to participants 3 months after discharge. They found direct and inverse relationships between depression severity and sedentary and light PA, respectively.24 Błaszcz et al25 also used a hybrid study design, instructing participants (n=21) to wear an accelerometer on their hips for 6 weeks and complete in-person Hospital Anxiety and Depression Scale assessments. In contrast to the results of Lau et al23 and Ashizawa et al,24 they found that although PA increased each week after hospital discharge, no change in depression was observed from baseline to study conclusion. Although these studies employed dissimilar methodologies, the results suggest that frequent assessment of mood in patients with stroke is feasible outside of clinical environments for short time periods, yet the potential for objectively measured behaviors, such as PA, to serve as proxies for mood is unclear.
While traditional forms of remote stroke care, such as tele-stroke visits, are well established, little is known about applying novel wearable sensors, such as the global positioning system (GPS), to stroke care.26 More importantly, the translational potential of extracting useful information from these sensors, such as via the creation of digital study endpoints, remains largely unexplored. To date, no research on stroke or TIA has evaluated the potential of objectively measured data from real-world settings to augment clinical workflows via longitudinal monitoring, such as by predicting gold-standard depression severity scores.
The objectives of this pilot study were to (1) evaluate the feasibility of collecting objective behavior measures from smartphone accelerometer, GPS, and screen touch sensors and (2) assess the accuracy of those measures at predicting depression severity in patients with IS/TIA symptoms. Knowing that the longest comparable study design collected 6 weeks of wearable accelerometer data, we sought to extend the literature to assess the feasibility of collecting data over an 8-week study.24 The scientific rationale underpinning this work is that self-reported mood is closely associated with functional ability in patients with stroke.27
Patients and Methods
Study Design and Population
This study used a prospective cohort design to recruit adult participants (18 years and older) recently admitted to Mayo Clinic Hospital in Arizona with IS/TIA symptoms. Patients were screened for eligibility using electronic health records (EHRs) and recruited via email after discharge between March 1 and August 31, 2024. We recruited a cohort of participants with IS/TIA symptoms at hospital admission as well as those participating in sequelae monitoring with a previous IS/TIA diagnosis. Discharge diagnoses were confirmed by imaging results based on diagnostic criteria. Participants had to own a smartphone and were included if they had one of the following discharge diagnoses: TIA, IS, or other transient stroke-like symptoms. Those with a previous dementia diagnosis were excluded. All participants provided informed consent digitally and were registered remotely. The Mayo Clinic Institutional Review Board approved the study procedures (22-009345).
Given that the impact of IS on functional status is, on average, less severe with shorter recovery times than that of hemorrhagic stroke, we focused recruitment on the former cohort, ensuring that mobility data would be generated for analysis.28 Additionally, we included patients with TIA because TIA is often studied in combination with IS in the literature, due to its shared etiology, and does not permanently impair mobility.29, 30, 31, 32 Patients with suspected IS/TIA at hospital admission but whose discharge diagnosis was a non-stroke cerebrovascular syndrome were also included. Considering the lack of comparable studies and knowledge about how patients would receive such a study design, this study was launched as a proof-of-concept, offering a starting point for future predictive mood monitoring in cerebrovascular disease departments.
Data Collection
Demographic data were obtained from EHRs. Participants downloaded the Health Insurance Portability and Accountability Act (HIPAA)–compliant Beiwe application at baseline. Passive accelerometer, GPS, and screen touch sensor data were recorded along with remote self-entry of the PHQ-8 survey (score 0-24) using the application. Beiwe is an open-source smartphone platform for digital phenotyping, compatible with both iOS and Android operating systems, that includes HIPAA-compliant data storage and the Forest library of data processing packages, which generates measures from sensor streams (Table 1).33 Study data were collected for 8 weeks, with the platform capturing up to 90 days of data to accommodate patient delays in scheduling Montgomery–Åsberg Depression Rating Scale (MADRS) assessments, as 7 days of passive data prior to a MADRS assessment were required for the predictive models (Supplemental Figure 1, available online at https://www.mcpdigitalhealth.org/).
Table 1.
Predictors Used in Linear Mixed Models
Baseline predictors | Active data predictors | Passive data predictors |
---|---|---|
Age | Antecedent PHQ-8 survey score | GPS (number of significant locations visited, significant location entropy, time spent at home, distance traveled, maximum diameter of travel, maximum travel distance from home, radius of gyration, average flight length, SD of flight length, average flight duration, SD of flight duration, total time spent in pause/stationary, average stationary/pause time, SD of stationary/pause time, and hours per day without GPS data) |
Sex | | Accelerometer (cadence, walking time, number of steps, and hours per day without accelerometer data) |
Diagnostic category | | Touch screen (time spent completing antecedent PHQ-8 survey, or PHQ-8 administration duration) |
Previous depression diagnosis | | |
The application sent a notification with the survey to participant phones at baseline and once per week (Supplemental Figure 2, available online at https://www.mcpdigitalhealth.org/). Because continuous data streaming drains smartphone batteries and is therefore not standard in wearable device studies, sampling frequencies for the smartphone sensors were set based on previous studies using the application.17 Accelerometer data were streamed in cycles of 10 seconds on/10 seconds off, and GPS data in cycles of 1 minute on/10 minutes off.
Every 30 days, participants completed a virtual interview, during which personnel administered a MADRS assessment (score 0-60). Participants whose smartphone stopped streaming data or who were no longer responsive were classified as lost to follow-up. The application was uninstalled upon study completion.
Smartphone Sensor Data Processing
Accelerometer data were cleaned using the Oak algorithm, previously validated on 20 public datasets, in the open-source Forest Python library. The GPS data were processed using the Jasmine algorithm in the Forest library. This algorithm applies a validated quality control feature, filtering out participants with suboptimal data, including location coordinates that fail to meet a 50 m horizontal accuracy threshold.33,34 Information about each GPS measure derived and its processing protocol is available in Supplemental Table 1 (available online at https://www.mcpdigitalhealth.org/). A summary table of data completeness by sensor for each participant is in Supplemental Table 2 (available online at https://www.mcpdigitalhealth.org/).
Statistical Analyses
First, we evaluated the feasibility of using smartphone sensor data to capture objective measures of behavior over time. Applying the approach by Wu et al,35 we obtained correlations for repeated measures between weekly average smartphone sensor measures and PHQ-8 survey scores. Repeated measures correlation determines common within-subject associations using analysis of covariance.35 All sensor measures were the average of 7 days prior to a survey.
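For illustration only (this is not the study's code, and the variable names are hypothetical), the repeated measures correlation can be sketched via the common formulation in which between-subject differences are removed by centering each participant's values before correlating:

```python
import numpy as np

def rm_corr(subjects, x, y):
    """Repeated measures correlation: Pearson r of within-subject
    centered values (equivalent to the common-slope ANCOVA r)."""
    subjects, x, y = map(np.asarray, (subjects, x, y))
    xc = np.empty(len(x))
    yc = np.empty(len(y))
    for s in np.unique(subjects):
        m = subjects == s
        xc[m] = x[m] - x[m].mean()  # remove each subject's own baseline
        yc[m] = y[m] - y[m].mean()
    return np.corrcoef(xc, yc)[0, 1]

# Synthetic example: each subject has a different baseline, but a
# shared positive within-subject association between x and y.
rng = np.random.default_rng(0)
subj = np.repeat(np.arange(10), 8)        # 10 participants, 8 weekly surveys
x = rng.normal(size=80)                   # e.g., weekly distance from home
y = 0.5 * x + rng.normal(0, 0.2, 80) + np.repeat(rng.normal(0, 5, 10), 8)
print(round(rm_corr(subj, x, y), 2))
```

The full ANCOVA-based method additionally yields degrees-of-freedom-adjusted p-values and CIs, as reported in Table 3; the centering sketch recovers the same coefficient.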
Then, we assessed the potential of smartphone sensor measures to predict gold-standard depression severity scores using linear mixed models. Linear mixed models are uniquely able to use longitudinal study data with clusters of missing data, handling different numbers of repeated measures across participants and uneven time periods between repeated measures. Using a modified version of the approach that Pellegrini et al36 applied to psychiatric patients, we developed 5 models, each with 2 variations (with and without demographic data), for predicting MADRS scores. Baseline characteristics along with study compliance metrics were obtained. Because the passive measures correlated with one another, principal component analysis (PCA) was used to obtain a principal component (PC) predictor (PC1) from weekly averages across measures. An antecedent survey score was defined as the most recent weekly PHQ-8 score submitted within 7 days before a MADRS assessment. The missingness variables for the accelerometer and GPS sensors described earlier were also included in the PCA.
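As a hedged sketch of the PC1 construction (not the study's code; the matrix shape and measure count are assumed), a first principal component can be extracted from z-scored weekly averages via singular value decomposition:

```python
import numpy as np

def first_principal_component(X):
    """Return PC1 scores from a (weeks x measures) matrix:
    z-score each column, then project onto the top principal axis."""
    X = np.asarray(X, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # SVD of the standardized matrix; rows of Vt are the principal axes.
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[0]

# Toy example: 3 correlated mobility measures over 12 weeks collapse
# largely onto one component.
rng = np.random.default_rng(1)
base = rng.normal(size=12)
X = np.column_stack([base + rng.normal(0, 0.1, 12) for _ in range(3)])
pc1 = first_principal_component(X)
print(pc1.shape)  # (12,)
```

Using PC1 in place of the full set of correlated passive measures keeps the mixed models parsimonious given the small cohort.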
Before fitting the model, any instance with 1 or more missing predictor values was excluded. Leave-one-subject-out cross-validation was performed for each participant, with the model fitted on data from all other participants; this required that all participants complete 2 or more MADRS assessments. MADRS scores for the excluded participant were then predicted with the fitted model. Model 1 used only the antecedent survey score to predict MADRS scores; model 2 used only passive data (PC1 from weekly averages); model 3 used both the survey score and PC1; model 4 applied the predictors from model 3 plus the PHQ-8 administration duration (screen touch sensor) predictor; and model 5 used no predictors. Tests confirming that linear mixed model assumptions were met are detailed in Supplemental Appendix 1 (available online at https://www.mcpdigitalhealth.org/).
To assess the fit of each model, the root-mean-squared error (RMSE) was calculated for each participant by squaring the error between the predicted and actual MADRS scores for each assessment, averaging the squared errors, and taking the square root; participant-level RMSE values were then averaged across participants. For a given model, lower RMSE values correspond to better model fit.
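The leave-one-subject-out evaluation and participant-averaged RMSE can be sketched as follows. For simplicity, this illustration substitutes ordinary least squares for the linear mixed model and uses invented toy data; it shows only the cross-validation and error-aggregation logic:

```python
import numpy as np

def loso_rmse(subjects, X, y):
    """Leave-one-subject-out CV: fit on all other participants, predict
    the held-out participant's scores, and return RMSE averaged across
    participants (lower = better fit)."""
    subjects = np.asarray(subjects)
    X, y = np.asarray(X, float), np.asarray(y, float)
    Xd = np.column_stack([np.ones(len(y)), X])  # add intercept column
    rmses = []
    for s in np.unique(subjects):
        train, test = subjects != s, subjects == s
        beta, *_ = np.linalg.lstsq(Xd[train], y[train], rcond=None)
        err = Xd[test] @ beta - y[test]
        rmses.append(np.sqrt(np.mean(err ** 2)))  # per-participant RMSE
    return float(np.mean(rmses))

# Toy data: MADRS loosely tracks the antecedent PHQ-8 score.
rng = np.random.default_rng(2)
subj = np.repeat(np.arange(8), 3)             # 8 participants, 3 assessments
phq8 = rng.integers(0, 15, size=24).astype(float)
madrs = 2.0 * phq8 + rng.normal(0, 1.5, 24)
print(round(loso_rmse(subj, phq8[:, None], madrs), 2))
```

Holding out entire participants, rather than individual assessments, prevents a participant's own data from leaking into the model that predicts their scores.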
Sensitivity Analysis
We compared the antecedent smartphone survey scores with staff-administered MADRS scores, using Spearman's rank-order correlation coefficient across MADRS timepoints. In addition, we repeated the model development process using only accelerometer data to compare single and multisensor approaches.
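Spearman's rank-order correlation, used in the sensitivity analysis above, is simply a Pearson correlation computed on tie-averaged ranks; a minimal sketch with hypothetical score vectors (not study data):

```python
import numpy as np

def spearman_r(a, b):
    """Spearman rank-order correlation: Pearson r on average ranks."""
    def ranks(v):
        v = np.asarray(v, float)
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):      # average ranks over tied values
            m = v == val
            r[m] = r[m].mean()
        return r
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

# Monotone but nonlinear agreement still yields a perfect Spearman r.
phq8 = np.array([2, 5, 1, 9, 7, 3])
madrs = phq8 ** 2  # any monotone transform preserves the ranks
print(round(spearman_r(phq8, madrs), 3))  # → 1.0
```

Because it depends only on ranks, Spearman's coefficient suits ordinal survey scores whose relationship with MADRS need not be linear.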
Results
Participant Characteristics
Of the 54 participants enrolled (Figure), 44 downloaded the application (81.5%). Of the 38 participants who completed the initial MADRS assessment, 35 completed the study. Study participants were primarily men (59.1%; n=26), middle-aged (mean age, 57 years), White (88.6%; n=39), and iPhone users (77.3%; n=34). Only 7.0% of participants reported Hispanic ethnicity (n=3). At baseline, most participants had 1 or more of the following diagnoses: heart disease (65.9%; n=27), diabetes (24.4%; n=10), anxiety (22.0%; n=9), and depression (26.8%; n=11). More than 35% had experienced multiple IS/TIAs. At the time of hospital admission or most recent sequelae monitoring visit, participants experienced a wide range of stroke-related symptoms, with one-sided weakness (20.5%; n=9), dizziness (15.9%; n=7), and numbness (15.9%; n=7) among the most common (Table 2).
Figure.
Consort diagram of study enrollment and retention. IS, ischemic stroke; MADRS, Montgomery–Åsberg Depression Rating Scale; TIA, transient ischemic attack.
Table 2.
Baseline Demographic Characteristics for Participants
Characteristic | Participants (n = 44) |
---|---|
Sex | |
Male | 59.1 (26) |
Female | 40.9 (18) |
Age (y) | |
Mean (SD) | 57 |
Minimum, Q1, Q3, maximum | 23, 49, 68, 83 |
Discharge diagnosis | |
TIA | 22.7 (10) |
Ischemic stroke | 22.7 (10) |
Other transient stroke-like symptoms | 34.1 (15) |
Previous ischemic stroke/TIA only | 20.5 (9) |
Race | |
White | 88.6 (39) |
Native American | 2.3 (1) |
Did not disclose | 5.8 (3) |
Asian | 2.3 (1) |
Hispanic ethnicity | 7.0 (3) |
Smartphone operating system | |
iOS | 77.3 (34) |
Android | 22.7 (10) |
Baseline PHQ-8, mean ± SD | |
TIA | 5 ± 4 |
Ischemic stroke | 3 ± 3 |
Other transient stroke-like symptoms | 7 ± 6 |
Previous ischemic stroke/TIA only | 7 ± 4 |
Heart disease | 65.9 (27) |
Diabetes | 24.4 (10) |
Multiple strokes | 36.6 (15) |
Depression | 26.8 (11) |
Anxiety | 22.0 (9) |
IS/TIA symptoms | |
Vision loss | 9.1 (4) |
Numbness | 15.9 (7) |
Headache/migraine | 13.6 (6) |
Tremor | 2.3 (1) |
Aphasia | 13.6 (6) |
Facial droop | 11.4 (5) |
Memory loss | 13.6 (6) |
Dizziness | 15.9 (7) |
One-sided weakness | 20.5 (9) |
Confusion | 2.3 (1) |
Loss of consciousness | 2.3 (1) |
Values are % (n) unless specified.
Survey completion rates over time were moderately high, with 73% of participants completing 8 or more weeks of PHQ-8 surveys (Supplemental Figure 3, available online at https://www.mcpdigitalhealth.org/). Individual PHQ-8 trajectories were highly variable (Supplemental Figure 4, available online at https://www.mcpdigitalhealth.org/). Two participants presented at the hospital for stroke symptoms during the study (Supplemental Table 3, available online at https://www.mcpdigitalhealth.org/). Additional information about post-stroke disabilities experienced by participants is available (Supplemental Appendix 2).
Feasibility of Capturing Behavior via Smartphone Sensors
Before analysis, 1 participant was excluded because of sampling below the quality check threshold for the Forest package. Using repeated measures correlation, we assessed relationships between smartphone sensor measures and survey scores (Table 3). PHQ-8 scores correlated positively with PHQ-8 administration duration (r=0.151) and distance from home (r=0.173) and negatively with time spent at home (r=−0.147). Relationships between individual PHQ-8 behaviors and passive measures are explained in detail in Supplemental Appendix 3 (available online at https://www.mcpdigitalhealth.org/).
Table 3.
Repeated Measures Correlations for PHQ-8 Survey Score and Smartphone Measures
Smartphone sensor measure | Correlation coefficient (r) | P | 95% CI |
---|---|---|---|
PHQ-8 survey administration duration | 0.151 | 0.004∗∗ | (0.048-0.251) |
Distance diameter | 0.013 | 0.85 | (−0.115 to 0.140) |
Distance from home | 0.173 | 0.008∗∗ | (0.057-0.294) |
Distance traveled | 0.014 | 0.83 | (−0.114 to 0.141) |
Flight distance, average | 0.008 | 0.90 | (−0.119 to 0.135) |
Flight distance, SD | 0.021 | 0.75 | (−0.106 to 0.148) |
Flight duration, average | −0.066 | 0.31 | (−0.192 to 0.062) |
Flight duration, SD | −0.094 | 0.15 | (−0.219 to 0.034) |
Time spent at home | −0.147 | 0.024∗ | (−0.269 to −0.020) |
Gyration radius | 0.006 | 0.92 | (−0.121 to 0.134) |
Significant location count | −0.055 | 0.40 | (−0.181 to 0.073) |
Significant location entropy | −0.070 | 0.28 | (−0.196 to 0.058) |
Pause time | −0.081 | 0.21 | (−0.207 to 0.047) |
Total flight time | −0.054 | 0.41 | (−0.180 to 0.074) |
Pause duration, average | −0.014 | 0.83 | (−0.142 to 0.113) |
Pause duration, SD | 0.025 | 0.70 | (−0.103 to 0.152) |
Walking time | −0.026 | 0.65 | (−0.137 to 0.085) |
Steps | −0.028 | 0.62 | (−0.139 to 0.083) |
Cadence | −0.024 | 0.67 | (−0.135 to 0.087) |
Depression Severity Prediction Models
After excluding 15 participants with missing data for 1 or more measures, primarily missing GPS measures, PCA generated PC1, the passive data predictor used in the models (Supplemental Table 4, available online at https://www.mcpdigitalhealth.org/). RMSE values were lower for the model variation including demographic variables (variation 2) than for the variation without them, indicating better fit (Table 4). The lowest RMSE was 1.36 (model 4 with demographic data), which included accelerometer, GPS, and screen sensor data along with the antecedent survey score. While variation 1 showed no clear trends, adding passive smartphone data to variation 2's model 1 (survey only) yielded a modest RMSE improvement of 0.15. Also in variation 2, MADRS score prediction accuracy improved by a further 0.13 with the addition of a measure from the smartphone screen sensor: PHQ-8 administration duration.
Table 4.
RMSE Predicting MADRS Scores Using Models 1-5 With 2 Variations: No Demographic Characteristics and With Demographic Characteristics
Model | Variation 1 (no demographic characteristics) | Variation 2 (demographic characteristics) |
---|---|---|
Model 1 (PHQ-8 score) | 1.70 | 1.64 |
Model 2 (Accelerometer + GPS) | 1.70 | 1.56 |
Model 3 (Accelerometer + GPS + PHQ-8 score) | 1.61 | 1.49 |
Model 4 (Accelerometer + GPS + PHQ-8 score + PHQ-8 survey administration duration) | 1.60 | 1.36 |
Model 5 (only participant effect) | 1.73 | 1.65 |
Sensitivity Analysis
In aggregate, antecedent PHQ-8 scores moderately correlated with personnel-administered MADRS scores (r=0.667), a finding supported by literature (Supplemental Figures 5 and 6a-c, available online at https://www.mcpdigitalhealth.org/). For the accelerometer-only MADRS prediction models, no trends were observed (Supplemental Table 5, available online at https://www.mcpdigitalhealth.org/).
Discussion
To the best of our knowledge, this is the first study to investigate the application of smartphone monitoring with GPS and accelerometer sensors in a population with IS/TIA.37 It offers feasibility data for smartphone-based monitoring in patients with IS/TIA. We demonstrated that longitudinal measures derived from multiple smartphone sensors may capture objective measures of behavior and that those measures appear to predict depression severity in patients with IS/TIA. Some of those measures also correlate with self-reported mood, offering a first step for future researchers to further develop digital endpoints from smartphone measures. Though its results must be interpreted as preliminary, this type of monitoring program could one day use objective measures as proxies to detect clinically meaningful aberrations in mood in patients who otherwise would not receive in-person screening or complete surveys after hospital discharge, but such an analysis is beyond the scope of this study.
Compared to the model using only survey responses, the greatest improvement in model fit was found when measures from 3 smartphone sensors (accelerometer, GPS, and screen touch) were included, improving RMSE by 0.28. This preliminary finding highlights the potential for multimodal sensors combined with self-reported surveys to offer a more comprehensive observation of patients in future real-world studies. The improved prediction performance of passive data extends other proof-of-concept work, including the 1-week wearable accelerometer study on patients with mild stroke by Lau et al23 and the 12-month Beiwe study with nurses by Yi et al,38 which found that self-reported mood and daily activities correlate strongly with accelerometer measures.39,40 In contrast to the study by Pellegrini et al,36 which evaluated the performance of GPS and accelerometer data in predicting MADRS scores in a transdiagnostic population of psychiatric patients, we found that passive data did improve MADRS score prediction in an IS/TIA cohort; however, the range of PHQ-8 and MADRS scores in our cohort was smaller than theirs, as reflected in the lower RMSE values for our models.36
A key strength of this study is that it shows the potential for smartphone sensors to collect passive and self-report data related to mood in an IS/TIA cohort for a minimum of 8 weeks. Also, our sensitivity analysis, which examined smartphone accelerometer data alone, showed improved model fit when GPS and screen touch sensor data were added, creating an early-stage rationale for future researchers to use multiple sensors to capture behavior in patients with IS/TIA. Another strength of this study is the inclusion of participants with stroke-related disabilities, although TIA symptoms had resolved by study enrollment or during participation. Though previous work has demonstrated the feasibility of smartphone monitoring in memory loss, migraine, and tremor cohorts, our findings extend these results to patients with stroke-related disabilities and other transient neurologic symptoms, such as numbness, dizziness, and one-sided loss of function.41, 42, 43, 44
While this work should be considered a starting point for future research, it is subject to numerous limitations. First, the pilot study design has a small sample size, although one comparable with other smartphone studies. Second, passive measures in this analysis were not processed to distinguish weekday from weekend averages. Next, participants most likely did not carry their phones constantly; as such, differences in participant data could be partly driven by phone use patterns. Also, behaviors assessed via self-report are limited by the short set of PHQ-8 questions. As is common in real-world studies, data were missing due to missed surveys and sick days, but GPS data in particular should be interpreted with extra caution because of the longer off cycles required to prevent excessive battery drain.17,45 Although the Forest package applies validated imputation methods to approximate sensor data points for off cycles, average measures may be biased when the proportion of collected data is low (Supplemental Table 2).36 Given that previous research found GPS and accelerometer data missingness levels predictive of symptoms in a cohort of patients with schizophrenia, researchers should explore similar hypotheses in IS/TIA cohorts.32 Also, GPS measures were the primary source of missing measures in the week before a MADRS assessment, potentially biasing the results by excluding patients with older smartphones and unreliable cellular data networks. In addition to enrolling larger cohorts, future research should explore how to design study protocols that reduce data missingness, perhaps by testing whether alternative GPS sensors, such as those in the Apple Watch, capture more complete data in some cohorts with IS/TIA.46
Considering participant demographic characteristics, these results may not generalize to non-White patients. As most participants had an iOS smartphone, data may be biased in ways that went unnoticed. Beyond issues associated with wireless and cellular access in rural areas of Arizona, participant travel for vacation or work to different cities may also have introduced unforeseen variations in data. Although not a limitation, we chose to average each measure over 7 days in our analyses, as is common in the growing set of studies in this space; however, other time periods may be more useful for predicting scores.37 Lastly, it is important to consider that the improvement in model fit offered by the inclusion of passive measures, compared to self-reported scores alone, is small (RMSE decrease of 0.28) and may be more useful if applied to predicting scores during weeks of lapsed survey compliance rather than for screening purposes. A larger sample population is necessary to investigate the optimal role of remote monitoring with smartphones in this context and to validate the significant correlates of mood (distance from home, time spent at home, and PHQ-8 administration duration) as digital mobility and response time endpoints.
Conclusion
Our findings suggest that real-world data from smartphone sensors combined with self-reported data can (1) capture objective behavior measures and (2) approximate gold-standard depression screening scores in patients with IS/TIA. Repeated smartphone sensor measures, including distance from home, time spent at home, and PHQ-8 administration duration, correlated significantly with self-reported mood. Passive GPS, accelerometer, and screen sensor data slightly improved the prediction of gold-standard depression severity scores compared to models using fewer sensors or only patient-reported surveys. Our preliminary work lays a foundation for using mobility, spatial, and mobile device screen data to investigate environmental and behavioral factors influencing mood in patients with IS/TIA. Future research should involve larger cohorts to identify which measures are most meaningful for mood prediction, to compare the effectiveness of different combinations of wearable sensors and devices, and to establish clinically useful thresholds for screening patients with depression symptoms at different timepoints in recovery.
Potential Competing Interests
Given their role as Editorial Board Members, Drs Erickson and Demaerschalk had no involvement in the peer-review of this article and have no access to information regarding its peer-review.
Ethics Statement
The institutional review board at Mayo Clinic approved the study (22-009345). This study was conducted in compliance with the Declaration of Helsinki regarding ethics principles for medical research involving human subjects. Patients were informed of their right to object to the use of their data for the present study and gave their informed consent.
Footnotes
Grant Support: This research was funded by Mayo Clinic (no grant number), a P.E.O. Scholar Award, and a Kosciuszko Foundation Graduate Award (Sofia Dembia Scholarship). Stephanie Zawada, PhD, MS, was supported by a predoctoral fellowship in health outcomes/value assessment from the PhRMA Foundation and a National Center for Advancing Translational Sciences TL1 training grant 5TL1TR002380-05. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Data previously presented: A preliminary report of this work was previously published on medRxiv on January 25, 2025 (https://www.medrxiv.org/content/10.1101/2025.01.23.25320624v1). These data were accepted for presentation at the American Academy of Neurology 2025 Conference in San Diego, CA.
Supplemental material can be found online at https://www.mcpdigitalhealth.org/. Supplemental material attached to journal articles has not been edited, and the authors take responsibility for the accuracy of all data.
Supplemental Online Material
References
- 1. Nannetti L., Paci M., Pasquini J., Lombardi B., Taiti P.G. Motor and functional recovery in patients with post-stroke depression. Disabil Rehabil. 2005;27(4):170–175. doi: 10.1080/09638280400009378.
- 2. Towfighi A., Ovbiagele B., El Husseini N., et al. Poststroke depression: a scientific statement for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2017;48(2):e30–e43. doi: 10.1161/STR.0000000000000113.
- 3. Liu L., Marshall I.J., Pei R., et al. Natural history of depression up to 18 years after stroke: a population-based South London Stroke Register study. Lancet Reg Health Eur. 2024;40. doi: 10.1016/j.lanepe.2024.100882.
- 4. Zawada S.J., Ganjizadeh A., Conte G.M., Demaerschalk B.M., Erickson B.J. Exploring remote monitoring of poststroke mood with digital sensors by assessment of depression phenotypes and accelerometer data in UK Biobank: cross-sectional analysis. JMIR Neurotech. 2025;4. doi: 10.2196/56679.
- 5. Shakir R., Davis S., Norrving B., et al. Revising the ICD: stroke is a brain disease. Lancet. 2016;388(10059):2475–2476. doi: 10.1016/S0140-6736(16)31850-5.
- 6. Everson-Rose S.A., Roetker N.S., Lutsey P.L., et al. Chronic stress, depressive symptoms, anger, hostility, and risk of stroke and transient ischemic attack in the Multi-Ethnic Study of Atherosclerosis. Stroke. 2014;45(8):2318–2323. doi: 10.1161/STROKEAHA.114.004815.
- 7. Ojagbemi A., Owolabi M., Akinyemi J., Ovbiagele B. Criterion validity of the “HRQOLISP-E”: a new context-specific screening tool for poststroke depression. Behav Neurol. 2017;2017. doi: 10.1155/2017/6515769.
- 8. van Dijk M.J., de Man-van Ginkel J.M., Hafsteinsdóttir T.B., Schuurmans M.J. Identifying depression post-stroke in patients with aphasia: a systematic review of the reliability, validity and feasibility of available instruments. Clin Rehabil. 2016;30(8):795–810. doi: 10.1177/0269215515599665.
- 9. Stuckart I., Siepmann T., Hartmann C., et al. Sertraline for functional recovery after acute ischemic stroke: a prospective observational study. Front Neurol. 2021;12. doi: 10.3389/fneur.2021.734170.
- 10. Ferreira I.S., Pinto C.B., Saleh Velez F.G., Leffa D.T., Vulcano de Toledo Piza P., Fregni F. Recruitment challenges in stroke neurorecovery clinical trials. Contemp Clin Trials Commun. 2019;15. doi: 10.1016/j.conctc.2019.100404.
- 11. Lavu V.K., Mohamed R.A., Huang R., et al. Evaluation and treatment of depression in stroke patients: a systematic review. Cureus. 2022;14(8). doi: 10.7759/cureus.28137.
- 12. Conroy S.K., Brownlowe K.B., McAllister T.W. Depression comorbid with stroke, traumatic brain injury, Parkinson’s disease, and multiple sclerosis: diagnosis and treatment. Focus (Am Psychiatr Publ). 2020;18(2):150–161. doi: 10.1176/appi.focus.20200004.
- 13. Dong L., Williams L.S., Briceno E., Morgenstern L.B., Lisabeth L.D. Longitudinal assessment of depression during the first year after stroke: dimensionality and measurement invariance. J Psychosom Res. 2022;153. doi: 10.1016/j.jpsychores.2021.110689.
- 14. Dogan E., Sander C., Wagner X., Hegerl U., Kohls E. Smartphone-based monitoring of objective and subjective data in affective disorders: where are we and where are we going? Systematic review. J Med Internet Res. 2017;19(7). doi: 10.2196/jmir.7006.
- 15. Youn B.Y., Ko Y., Moon S., Lee J., Ko S.G., Kim J.Y. Digital biomarkers for neuromuscular disorders: a systematic scoping review. Diagnostics (Basel). 2021;11(7):1275. doi: 10.3390/diagnostics11071275.
- 16. Cobb B., Karpowicz R.J., Jr., Cousin C., et al. Clinical applications of digital biomarkers in multiple sclerosis: a systematic literature review (P5-6.013). Neurology. 2024;102(17 suppl 1). doi: 10.1212/wnl.0000000000205303.
- 17. Zawada S.J., Ganjizadeh A., Hagen C.E., Demaerschalk B.M., Erickson B.J. Feasibility of observing cerebrovascular disease phenotypes with smartphone monitoring: study design considerations for real-world studies. Sensors. 2024;24(11):3595. doi: 10.3390/s24113595.
- 18. Smith M.J., Pellegrini M., Major B., et al. Improving physical movement during stroke rehabilitation: investigating associations between sleep measured by wearable actigraphy technology, fatigue, and key biomarkers. J Neuroeng Rehabil. 2024;21(1):84. doi: 10.1186/s12984-024-01380-3.
- 19. Rintala A., Kossi O., Bonnechère B., Evers L., Printemps E., Feys P. Mobile health applications for improving physical function, physical activity, and quality of life in stroke survivors: a systematic review. Disabil Rehabil. 2023;45(24):4001–4015. doi: 10.1080/09638288.2022.2140844.
- 20. Straeten F.A., van Zyl S., Maus B., et al. EXERTION: a pilot trial on the effect of aerobic, smartwatch-controlled exercise on stroke recovery: effects on motor function, structural repair, cognition, mental well-being, and the immune system. Neurol Res Pract. 2023;5:18. doi: 10.1186/s42466-023-00244-w.
- 21. Bui Q., Kaufman K.J., Munsell E.G., et al. Smartphone assessment uncovers real-time relationships between depressed mood and daily functional behaviors after stroke. J Telemed Telecare. 2024;30(5):871–884. doi: 10.1177/1357633X22110006.
- 22. Uchida H., Hiragaki Y., Nishi Y., et al. An iPad application-based intervention for improving post-stroke depression symptoms in a convalescent rehabilitation ward: a pilot randomized controlled clinical trial protocol. Internet Interv. 2020;21. doi: 10.1016/j.invent.2020.100340.
- 23. Lau S.C.L., Connor L.T., Baum C.M. Motivation, physical activity, and affect in community-dwelling stroke survivors: an ambulatory assessment approach. Ann Behav Med. 2023;57(4):334–343. doi: 10.1093/abm/kaac065.
- 24. Ashizawa R., Honda H., Yoshizawa K., Kameyama Y., Yoshimoto Y. Association between physical activity levels and depressive symptoms in patients with minor ischemic stroke. J Stroke Cerebrovasc Dis. 2022;31(9). doi: 10.1016/j.jstrokecerebrovasdis.2022.106641.
- 25. Błaszcz M., Prucnal N., Wrześniewski K., et al. Physical activity, psychological and functional outcomes in non-ambulatory stroke patients during rehabilitation—a pilot study. J Clin Med. 2022;11(24):7260. doi: 10.3390/jcm11247260.
- 26. Silva R.S.D., Silva S.T.D., Cardoso D.C.R., et al. Psychometric properties of wearable technologies to assess post-stroke gait parameters: a systematic review. Gait Posture. 2024;113:543–552. doi: 10.1016/j.gaitpost.2024.08.004.
- 27. Forster S.D., Gauggel S., Petershofer A., Völzke V., Mainz V. Ecological momentary assessment in patients with an acquired brain injury: a pilot study on compliance and fluctuations. Front Neurol. 2020;11:115. doi: 10.3389/fneur.2020.00115.
- 28. Girgenti S., Lu J., Marsh E. Longitudinal outcomes of ischemic versus hemorrhagic stroke: differences may impact future trial design. J Stroke Cerebrovasc Dis. 2024;33(11). doi: 10.1016/j.jstrokecerebrovasdis.2024.107952.
- 29. Singh N., Marko M., Ospel J.M., Goyal M., Almekhlafi M. The risk of stroke and TIA in nonstenotic carotid plaques: a systematic review and meta-analysis. AJNR Am J Neuroradiol. 2020;41(8):1453–1459. doi: 10.3174/ajnr.A6613.
- 30. Boulanger M., Béjot Y., Rothwell P.M., Touzé E. Long-term risk of myocardial infarction compared to recurrent stroke after transient ischemic attack and ischemic stroke: systematic review and meta-analysis. J Am Heart Assoc. 2018;7(2). doi: 10.1161/JAHA.117.007267.
- 31. Bhatia R., Sharma G., Patel C., et al. Coronary artery disease in patients with ischemic stroke and TIA. J Stroke Cerebrovasc Dis. 2019;28(12). doi: 10.1016/j.jstrokecerebrovasdis.2019.104400.
- 32. Torous J., Staples P., Barnett I., Sandoval L.R., Keshavan M., Onnela J.P. Characterizing the clinical relevance of digital phenotyping data quality with applications to a cohort with schizophrenia. NPJ Digit Med. 2018;1:15. doi: 10.1038/s41746-018-0022-8.
- 33. Liu G., Onnela J.P. Bidirectional imputation of spatial GPS trajectories with missingness using sparse online Gaussian Process. J Am Med Inform Assoc. 2021;28(8):1777–1784. doi: 10.1093/jamia/ocab069.
- 34. Kiang M.V., Chen J.T., Krieger N., et al. Sociodemographic characteristics of missing data in digital phenotyping. Sci Rep. 2021;11(1). doi: 10.1038/s41598-021-94516-7.
- 35. Wu T., Sherman G., Giorgi S., et al. Smartphone sensor data estimate alcohol craving in a cohort of patients with alcohol-associated liver disease and alcohol use disorder. Hepatol Commun. 2023;7(12). doi: 10.1097/HC9.0000000000000329.
- 36. Pellegrini A.M., Huang E.J., Staples P.C., et al. Estimating longitudinal depressive symptoms from smartphone data in a transdiagnostic cohort. Brain Behav. 2022;12(2). doi: 10.1002/brb3.2077.
- 37. Leaning I.E., Ikani N., Savage H.S., et al. From smartphone data to clinically relevant predictions: a systematic review of digital phenotyping methods in depression. Neurosci Biobehav Rev. 2024;158. doi: 10.1016/j.neubiorev.2024.105541.
- 38. Yi L., Hart J., Straczkiewicz M., et al. Measuring environmental and behavioral drivers of chronic diseases using smartphone-based digital phenotyping: intensive longitudinal observational mHealth substudy embedded in 2 prospective cohorts of adults. JMIR Public Health Surveill. 2024;10. doi: 10.2196/55170.
- 39. Loosen A.M., Kato A., Gu X. Revisiting the role of computational neuroimaging in the era of integrative neuroscience. Neuropsychopharmacology. 2024;50(1):103–113. doi: 10.1038/s41386-024-01946-8.
- 40. Triana A.M., Saramäki J., Glerean E., Hayward N.M.E.A. Neuroscience meets behavior: a systematic literature review on magnetic resonance imaging of the brain combined with real-world digital phenotyping. Hum Brain Mapp. 2024;45(4). doi: 10.1002/hbm.26620.
- 41. Hackett K., Xu S., McKniff M., Paglia L., Barnett I., Giovannetti T. Mobility-based smartphone digital phenotypes for unobtrusively capturing everyday cognition, mood, and community life-space in older adults: feasibility, acceptability, and preliminary validity study. JMIR Hum Factors. 2024;11. doi: 10.2196/59974.
- 42. Ó Breasail M., Biswas B., Smith M.D., et al. Wearable GPS and accelerometer technologies for monitoring mobility and physical activity in neurodegenerative disorders: a systematic review. Sensors (Basel). 2021;21(24):8261. doi: 10.3390/s21248261.
- 43. Kuosmanen E., Wolling F., Vega J., et al. Smartphone-based monitoring of Parkinson disease: quasi-experimental study to quantify hand tremor severity and medication effectiveness. JMIR Mhealth Uhealth. 2020;8(11). doi: 10.2196/21543.
- 44. Stojchevska M., Van Der Donckt J., Vandenbussche N., et al. Uncovering the potential of smartphones for behavior monitoring during migraine follow-up. BMC Med Inform Decis Mak. 2025;25(1):88. doi: 10.1186/s12911-025-02916-w.
- 45. Fleming T.R. Addressing missing data in clinical trials. Ann Intern Med. 2011;154(2):113–117. doi: 10.7326/0003-4819-154-2-201101180-00010.
- 46. Toon E., Davey M.J., Hollis S.L., Nixon G.M., Horne R.S., Biggs S.N. Comparison of commercial wrist-based and smartphone accelerometers, actigraphy, and PSG in a clinical cohort of children and adolescents. J Clin Sleep Med. 2016;12(3):343–350. doi: 10.5664/jcsm.5580.