Annals of Clinical and Translational Neurology
2025 Jun 9;12(8):1556–1565. doi: 10.1002/acn3.70082

Actionable Wearables Data for the Neurology Clinic: A Proof‐of‐Concept Tool

Nicolette Miller 1, Ebenezer Chinedu‐Eneh 1, Jaeleene Wijangco 1, Kyra Henderson 1, Nikki Sisodia 1, Narender Sara 1, Jennifer Reihm 1, Shane Poole 1, Jim Rowson 1, Chu‐Yueh Guo 1, Jeffrey M Gelfand 1, Valerie J Block 1, Riley Bove 1
PMCID: PMC12343321  PMID: 40491260

ABSTRACT

Objective

Wearable devices can monitor key health and fitness domains. In multiple sclerosis (MS), monitoring step count and sleep is feasible, valid, and offers a holistic glimpse of patient functioning and worsening. However, data generated from wearables are typically unavailable at the point of care. We describe the design, development, and evaluation of a proof‐of‐concept tool that delivers patient‐generated wearables data in accessible, interpretable, and actionable formats to an outpatient Neurology clinic.

Methods

Through a process of human‐centered design, stakeholders were engaged to create a technological solution to access and display wearables data via a launch from the electronic health record (EHR). Designs were informed by patient‐centered observational research and clinician interviews to ensure user alignment and validity. Qualitative and quantitative evaluations (Health ITUES framework) as well as live observations of use assessed functionality, ease of use, and clinical integration.

Results

During development, 25 clinicians and four engineers across eight clinical settings provided feedback. The proof‐of‐concept wearable selected was Fitbit, which was connected via a custom solution to the EHR dashboard (BRIDGE). In the final validation, clinician satisfaction was high: mean scores > 4/5 for all performance and ease of use components. Live visualizations during clinical encounters highlighted a range of potential uses: detecting progression by step count, setting motivating and realistic activity goals despite limitations, referring to rehabilitation after activity drops, and measuring effects of interventions on sleep quality.

Interpretation

Integrating wearables data into a clinical workflow for neurology is feasible and highlights novel areas to support shared decision‐making and data‐driven, personalized care.

Keywords: fitbit, sleep, steps/activity, visualizations, wearables

1. Introduction

1.1. Background

Wearable technologies have emerged as a low‐cost and informative approach to generating digital biomarkers for neurological diseases. These devices are increasingly prevalent and accessible, monitoring important health and fitness metrics that provide vital insights into a patient's daily life. Multiple sclerosis (MS), a condition characterized by its unpredictable course and varied symptoms, is an ideal candidate for this approach. The discrepancy between subjective patient experiences and objective clinical measures in MS care is a microcosm of the broader trend of depersonalization in healthcare. Addressing this gap is crucial for effective MS management and can serve as a model for other complex diseases.

Monitoring step count and sleep is feasible, valid, and provides insights on real‐world performance in MS research [1, 2]. For neurological care, however, these wearables generally operate outside the purview of health systems, and clinicians lack routine access to wearable data that might inform clinical care. Successfully delivering wearable data into the clinical workflow in accessible, interpretable, and actionable formats, integrated with the patient's other health data available in the electronic health record (EHR), has the potential to support data‐driven, personalized care.

The current manuscript describes the design, development, and evaluation of one approach to bridging this gap in neurological care, an application that can access and visualize patient‐generated data in the outpatient MS clinic. To accomplish this, the study used an established framework for human‐centered design (HCD) that engaged stakeholders in a collaborative, iterative approach to the design process emphasizing users' needs [3]. Using a low‐cost wearable device (Fitbit) as a proof of concept, the project evaluated usability and sought feedback to inform refinements to data interpretability and actionability. The broader aim was to explore whether continuously monitored health indicators, such as step count and sleep, could contribute to a comprehensive health profile for each MS patient and enhance shared decision‐making around optimizing activity and well‐being.

2. Methods

2.1. Phases of Human‐Centered Design: Overview

A tool aimed at contextualizing wearable data at the point of care must be designed to prioritize its intended users (i.e., patients and their care teams). Therefore, HCD was utilized as an established approach to digital tool design: a process that starts with identifying the needs of all stakeholders involved in the system that the technology hopes to change, continues with iterative feedback, and finally accounts for how the outcomes of the digital intervention compare with the intended goals [3, 4] (Figure 1).

FIGURE 1.


Human‐centered design process: The phases of development and user engagement. Human‐centered design places the people who use a product at the center of all activities; in this case: Multiple Sclerosis clinicians. This process has iterations through discovery, design, development and delivery.

The current research activities took place within the University of California, San Francisco (UCSF) health system, which serves a large and diverse population of individuals with MS. The clinical research team included an MS neurologist (RB), an MS physical therapist (VB) and a medical student (EC), and the developer team (NM, JR, SW and NS). This team was selected to ensure expertise in the challenges faced by patients with MS, knowledge of wearable technology, and an understanding of how digital health solutions could address these issues. Patient activity was recorded remotely by Fitbit devices over the course of the study.

In Phase I (discover), MS stakeholders and the technical team were engaged to identify the necessary technological solutions, including the demonstration wearable device and its data requirements. In Phase II (design), the health metric visualizations were conceptualized. In Phase III (develop), the visualizations were built, and an initial round of evaluations with MS clinicians gauged likability. In Phase IV (deliver), after ensuring that the tool was satisfactory to clinicians, the visualizations were piloted in clinical encounters. The visualizations then went through Phases III and IV twice more, iterating designs based on stakeholder feedback to ensure that they achieved the scoring targets for navigation, ease of use, and usefulness within clinical practice (Figure 1).

2.2. Phase I: (Discover) Initial Technical Discussion

2.2.1. Overview

The initial brainstorming phase focused on the technological solutions required to bring wearables data to the point of care. This phase included interviews with three senior MS clinicians who were independent of the current project, as well as experts in epidemiology, cardiology, orthopedics, and internal medicine (N = 10 total). While the team also considered leveraging existing architecture and approaches from other health systems, it was determined that no point‐of‐care, EHR‐embedded wearables solution for neurological care existed and that such an integration would need to be built de novo.

  1. Wearable device: The team considered different commercially available wearable devices to determine which would be most feasible for this proof‐of‐concept study. Devices considered included Fitbit, Garmin, Apple Watch, and the Oura ring. Key considerations included: cost‐effectiveness, ease of distribution to patients, ability to obtain wearable health data from the device, and what data would be insightful for MS clinicians to aid in patient care. A priori, devices calculating step count and sleep were selected given their broad relevance in neurological conditions and general health [1, 2, 5, 6, 7, 8, 9, 10, 11].

  2. Wearables data access: A second design solution was needed to access and store the wearables data and display them in our application at the point of care. This tool would need to provide the ability to add a patient, obtain their permission to connect to their wearable account, and store and export their data. The team considered commercial wearables aggregators, such as HumanAPI; MyChart, which connects only to Apple Health data; and a locally developed solution. Building and maintaining a locally developed application requires extensive technical capabilities: the tool must handle sensitive patient data, provide a comprehensive platform to connect with the wearable device, and store the collected data securely. Regardless of which aggregator is selected, it must seamlessly connect to the point‐of‐care display, BRIDGE [12], which is already integrated with the EHR. Costs, project timelines, wearables accessed, and ease of integration with both the point‐of‐care display and the UCSF regulatory and security environment were considered.

  3. Point of care display: The third component was a point‐of‐care dashboard to display the wearables data once accessed and ingested. Here, BRIDGE was chosen a priori. BRIDGE is a technologically scalable, institutionally approved, cross‐disease modular precision medicine platform developed through extensive HCD. BRIDGE launches from within a patient's encounter from the Epic EHR at UCSF using industry‐standard integration, delivering a near‐seamless one‐click experience for clinicians. This tool accesses and visualizes patient data from varied clinical and research sources. A disease‐specific version of BRIDGE is live within multiple clinics at UCSF, including the MS clinic.

2.3. Phase II: (Design) Stakeholder Engagement and Conceptualization

2.3.1. Ideation and Concept Development

Here, both the visualization formats and the key metrics from the health data were initially designed. Key visualization considerations included size, color schemes, types of metrics to be displayed, placement strategies, and visualization techniques. The primary objective was to craft an intuitive and informative design capable of succinctly conveying critical health metrics to clinicians in a user‐friendly manner, and to include key features shown from the scientific literature to be relevant. Continuous remote monitoring of step count provides a sensitive, real‐world measure of ambulatory activity in MS, capturing changes that may go undetected by traditional clinic‐based assessments. Our team has shown that lower daily step counts have been associated with a fourfold increase in the odds of disability worsening 1 year later [2]. Additionally, higher step counts were correlated with greater spinal cord gray matter area (a surrogate for disability), suggesting a link between activity levels and neuroanatomical integrity [13]. Deepening our understanding of remote ambulatory behavior, our group and others have shown that minute‐by‐minute step count analysis can unearth distinct daily activity patterns, offering a more nuanced understanding of mobility impairments and their impact on daily function [5, 14].

  • Stakeholder interviews and feedback collection: Feedback on initial impressions of the conceptual designs was solicited from a cohort of nine neurologists with a range of clinical environments and levels of familiarity with wearables research. Structured interviews were conducted, followed by an anonymous electronic survey, to yield a nuanced understanding of the dashboard's effectiveness, user experience, and potential areas for enhancement. The survey followed the Health‐IT usability model (Health ITUEM) to evaluate key features likely to lead to digital tool use throughout user engagement [4, 15, 16, 17]. Specifically, a Health IT Usability Evaluation Scale (Health ITUES) gauged clinicians' assessment of the usability, usefulness, ease of use, likability, understandability, and actionability of the visualization mock‐ups. Each domain was scored on a Likert scale ranging from 1 (not at all ideal) to 9 (ideal).

  • Data analysis and insight generation: The lead authors analyzed interview responses to distill insights and themes. The quantitative survey responses were reviewed, and descriptive statistics were performed to assess clinicians' initial reactions to the visualization concepts. These combined insights were instrumental in guiding the subsequent design iterations.
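The descriptive‐statistics step above can be sketched as follows; the Likert responses are hypothetical examples on the 9‐point scale, not actual study data.

```python
from statistics import mean, stdev

# Hypothetical Likert responses (1 = not at all ideal, 9 = ideal)
# from nine clinicians for two example Health ITUES domains.
responses = {
    "usefulness": [7, 8, 6, 7, 9, 7, 8, 6, 7],
    "ease_of_use": [6, 7, 7, 8, 6, 9, 7, 7, 8],
}

# Report the mean and sample standard deviation per domain,
# the summary format used throughout the study's tables.
for domain, scores in responses.items():
    print(f"{domain}: mean {mean(scores):.1f} (SD {stdev(scores):.1f})")
```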

2.4. Phase III: (Develop) Development and First Iterative Testing

2.4.1. Prototype Development

Feedback from stakeholders gathered during Phase II was translated into concrete design improvements that were subsequently used to guide the development of prototype health visualizations within the BRIDGE application.

  • Clinician testing and interaction: This initial prototype was then presented to 10 MS clinicians in varying phases of training. Utilizing anonymized patient data from clinic patients unknown to the clinician, we facilitated a 20‐min interactive session. Neurologists engaged with the health visualization widgets during this session and provided immediate feedback on their usability and clinical relevance.

  • Post‐interaction survey: Following the interactive session, neurologists were asked to complete a follow‐up survey using a Likert scale to critically evaluate various dimensions of the visualizations' design and functionality. Responses were then analyzed to inform further refinements of the visual displays.

2.5. Phase IV: (Deliver) Refinement and Deployment

2.5.1. Final Adjustments and Implementation

Building on the feedback received in Phase III, additional modifications to the health visualizations were performed. This final phase focused on precisely calibrating the visual displays in preparation for their integration into the BRIDGE dashboard, ensuring optimal functionality and user experience in real‐world clinical settings.

  • Deployment and user testing: The visualizations were launched in the MS clinic on the MS BRIDGE dashboard and utilized within the context of usual clinical care as well as within the context of an ongoing clinical trial focused on falls [18] (NCT05837949—NIH number: R01LM013396), for which Fitbit data were collected and available at the point of care via BRIDGE for study monitoring. The team continued to iterate designs based on clinician use and feedback following each design change.

  • Final round of interviews: A final round of interviews was conducted to ensure that the technological features and designs met user needs when real data were presented live. Here, the heart rate visualization was again displayed and evaluated. To support generalizability and external validity, clinicians across various disciplines were invited to participate. Visualizations were again presented live and subsequently scored by clinicians using the Health ITUES framework and a Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree) to assess operational functionality, ease of use, and integration into clinical practice. Switching to a 5‐point scale from the 9‐point scale used in prior phases reduced response granularity and provided more definitive scoring.

2.6. Approvals and Consent

Evaluation of the platform and data collection were approved by the UCSF Institutional Review Board (IRB #18–26,148). Electronically signed informed consent was obtained from clinicians and patients prior to participation.

3. Results

3.1. Phase I: (Discover) Initial Technical Decisions

The discovery phase confirmed the lack of existing, clinically available tools displaying patient wearable health metrics for neurological care. This was combined with robust clinician interest in bringing wearables data to the point of care. It was also informed by the team's extensive experience longitudinally validating wearables as biomarkers in MS and engaging patient stakeholders around understanding the ideal use of their data for clinical action. Prior feedback from patients revealed a preference for low‐burden devices, the desire to discuss their data with their clinicians, and the desire to set personalized goals based on their level of functioning [2, 18, 19].

  1. Wearable device: Fitbit was selected as the proof‐of‐concept wearable device. This was chosen because of the extensive experience of the research team [1, 2, 5, 6, 20] and others [14, 21, 22, 23, 24, 25, 26] in generating benchmarks for ambulatory activity for patients with MS and because of its widespread commercial availability, relatively low cost, and range of metrics available. The Fitbit API allows developers to interact with Fitbit data in their own applications, products, and services.

  2. Wearables data types: Three key data types were selected: physical activity (step count/active minutes), heart rate variability, and sleep. These were selected by an expert clinical panel as the most feasible to visualize and represent, as well as being broadly accepted health indicators, including for chronic conditions. Physical activity levels can be assessed through step counts and active minutes; active minutes are calculated by Fitbit using metabolic equivalents (METs) and heart rate data to estimate exercise intensity. Higher activity is indicated by closer clustering of step and active‐minute bars reaching above the goal line, and lower activity by more dispersed patterns of step and active‐minute bars below the goal line. Previous research indicates specific step count thresholds for an individual's level of neurological impairment, outside which clinicians could investigate or intervene to support preventative care. A specific need identified was to visualize patients' data within the parameters of others in their range of disability. Recent evidence also highlights the associations between sleep quality/duration and heart rate variability as important factors in people with MS, with potential clinical relevance for providing a more comprehensive assessment of disease severity and enabling earlier, tailored interventions based on smartwatch‐derived metrics [11, 27].

  3. Wearables Data Access: Gather: The decision was made to develop UCSF‐Gather, a standalone open‐source application created to gather and store data from Fitbit. This de novo solution was selected over other data aggregators because of its lower cost, lower institutional barriers, and greater flexibility in integrating with the other technical tools utilized (BRIDGE). As shown in Figure 2, the patient consents to UCSF‐Gather accessing their Fitbit data. Once access is granted, Gather can retrieve, manipulate, store, and visualize the patient's data.
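The idea of benchmarking a patient's step counts against others in their range of disability, noted under item 2 above, can be illustrated with a minimal sketch. The EDSS bands and reference ranges below are hypothetical placeholders, not the thresholds used in the tool.

```python
# Illustrative (not actual) reference ranges of mean daily steps
# for hypothetical EDSS bands.
REFERENCE_RANGES = {
    "0-3.5": (5000, 12000),
    "4.0-5.5": (3000, 7000),
    "6.0-6.5": (1500, 4500),
}

def flag_low_days(daily_steps, edss_band):
    """Return indices of days falling below the band's lower benchmark."""
    low, _high = REFERENCE_RANGES[edss_band]
    return [i for i, steps in enumerate(daily_steps) if steps < low]

# A hypothetical week of step counts for a patient in the 4.0-5.5 band.
week = [4200, 3900, 1200, 900, 4100, 3800, 4000]
print(flag_low_days(week, "4.0-5.5"))  # days 2 and 3 fall below 3000 steps
```

Flagged days could then be surfaced visually, prompting the clinician to investigate or intervene.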

FIGURE 2.


Overview of the technological approach utilized to launch patients' Fitbit data with the point‐of‐care visualization. Fitbit tracks the patient's activity. (1) Patient consents, then UCSF‐Gather requests access to the patient's data. (2) Patient is redirected to Fitbit for authentication, and the returned token is stored in UCSF‐Gather. (3) A scheduled background task pulls patient data from the Fitbit API into UCSF‐Gather. (4) The data are formatted and stored. (5) BRIDGE ingests the patient's data from UCSF‐Gather. (6) BRIDGE is launched with a single click in Epic through SMART‐on‐FHIR.
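Steps (2) and (3) of the flow above can be sketched as request‐building helpers. This is a simplified illustration against Fitbit's public Web API (OAuth 2.0 token endpoint and daily step time series), not the actual UCSF‐Gather code; real token handling would also send the app's client credentials.

```python
# OAuth 2.0 token endpoint in Fitbit's public Web API.
TOKEN_URL = "https://api.fitbit.com/oauth2/token"

def token_refresh_payload(refresh_token: str) -> dict:
    # Body for the refresh-token grant used to keep access alive
    # between scheduled background pulls (credentials omitted here).
    return {"grant_type": "refresh_token", "refresh_token": refresh_token}

def daily_steps_url(date: str) -> str:
    # Daily step time series for the authorized user ("-"), one day.
    return f"https://api.fitbit.com/1/user/-/activities/steps/date/{date}/1d.json"

print(daily_steps_url("2024-06-01"))
```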

3.2. Phase II: (Design) Stakeholder Engagement and Conceptualization

The wearables health visualization mockups used in Phases II and III of the study are depicted in Figure 3. The qualitative and quantitative feedback solicited from nine neurologists yielded a nuanced understanding of the dashboard's effectiveness, user experience, and potential areas for enhancement.

  • Quantitative survey insights: Overall, quantitative feedback was positive, with mean scores of 7.0 on a 9‐point scale across all queried metrics (Table S1). Notably, drawbacks received a low rating (2.1, SD 1.1), which is favorable, as the goal was to minimize drawbacks.

  • Qualitative user feedback: Most clinicians were enthusiastic about the dashboard's capability to visualize long‐term trends and to integrate seamlessly within the clinical encounter (i.e., accessible with one click from the Epic EHR). Specific areas for optimization were the need for improved visual and data clarity (ease of use), an explanation of how health metrics were calculated (e.g., formula for sleep latency), and how to visualize time periods where the device was not used or charged (interpretability). Since the Heart Rate visualization was not rated as likely to be informative or to influence clinical care, the decision was made to build it for future use but to exclude it from future rounds of feedback.

FIGURE 3.


Initial wearables visualization mockups.

3.3. Phase III: (Develop) Development and First Iterative Testing

The wearables visualizations were developed within the BRIDGE dashboard using Python and D3.js, and Fitbit data were ingested from Gather into BRIDGE. During Phase III testing, clinicians could evaluate the tool with real patient data and identify further areas of feedback not possible with static mock‐ups. The same mixed qualitative and quantitative approach as in Phase II was utilized, with a cohort of 10 clinicians, each specializing in MS outpatient care, providing feedback on the Activity and Sleep visualizations separately.
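A minimal sketch of the Gather‐to‐BRIDGE ingestion step, reshaping stored daily records into the series a D3.js time‐series chart consumes; the record fields and names here are illustrative assumptions, not the production schema.

```python
import json

# Hypothetical shape of daily records stored by the aggregator.
raw = [
    {"dateTime": "2024-06-01", "steps": "4212", "active_minutes": "23"},
    {"dateTime": "2024-06-02", "steps": "3987", "active_minutes": "18"},
]

def to_chart_series(records):
    # Normalize strings to numbers and rename keys into the flat
    # {date, steps, activeMinutes} objects a D3 chart can bind to.
    return [
        {"date": r["dateTime"], "steps": int(r["steps"]),
         "activeMinutes": int(r["active_minutes"])}
        for r in records
    ]

print(json.dumps(to_chart_series(raw)))
```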

On the quantitative survey, overall, the Activity and Sleep visualizations scored at least 7.0 on a 9‐point scale for most domains assessed (Table S2). Quantitative and qualitative feedback primarily addressed user interface and user experience changes to assist with ease of use. Suggestions included the ability to expand the graphs to make them larger and easier to read, as well as adding a date range picker to expand how much patient data could be viewed at one time. Specific suggestions on improving the Activity visualization were minimal, and for the Sleep visualization, suggestions were made to flip the axes and use more distinct colors to improve readability.

3.4. Phase IV: (Deliver) Refinement and Deployment

Ad hoc qualitative feedback was solicited for approximately 50 encounters for patients participating in the ongoing clinical trial focused on falls [18] (NCT05837949—NIH R01LM013396). Suggested modifications related to fine‐tuning definitions and thresholding of data, as well as accounting for missing data.

3.4.1. Quantitative Usability Scores

In the final round of interviews, to inform generalizability and external validity, clinicians across various disciplines participated (MS, movement disorders, obstetrics, neuromuscular, oncology, cardiology, rehabilitation, and population health). General feedback from the MS clinicians was positive, noting improvements made based upon previous feedback and expressing enthusiasm for the ability to view their patients' health data in the EHR. Overall, all three visualizations scored above 4.0 for navigation and ease of understanding, two key dimensions of usability (Table 1). Two features were noteworthy. First, the Activity visualization was scored lower overall for “easy to understand,” likely reflecting the complexity of a graph displaying two sets of metrics (step count and active minutes). While a suggested approach was to split the data into two separate graphs, after more qualitative probing, the study team determined that keeping the data in one figure was more parsimonious and that brief one‐on‐one training with individual clinicians (2–5 min) could improve understandability substantially. Second, clinicians expressed some apprehension about the usefulness of the visualizations in their practice, reflecting a general lack of familiarity with, and training in, how to use patients' wearables data at the point of care.

TABLE 1.

Final phase IV scoring of the visualizations for usefulness and functionality by MS (n = 5) and non‐MS (n = 5) clinicians. Responses were scored on a Likert scale, ranging from 1 (strongly disagree) to 5 (strongly agree).

Outcome                             All mean (SD)   Non‐MS mean (SD)   MS mean (SD)
Activity    Navigation              4.5 (1.0)       4.8 (0.4)          4.2 (1.3)
            Easy to understand      4.0 (0.9)       4.2 (0.4)          3.8 (1.3)
            Useful in my practice   3.8 (1.2)       4.0 (0.7)          3.6 (1.7)
Sleep       Navigation              4.5 (0.7)       4.4 (0.9)          4.6 (0.5)
            Easy to understand      4.4 (0.7)       4.4 (0.5)          4.4 (0.9)
            Useful in my practice   3.8 (0.9)       3.8 (1.1)          3.8 (0.8)
Heart rate  Navigation              4.7 (0.5)       4.8 (0.4)          4.6 (0.5)
            Easy to understand      4.6 (0.5)       4.6 (0.5)          4.6 (0.5)
            Useful in my practice   3.0 (1.2)       3.2 (0.8)          2.8 (1.5)

3.4.2. Example Visualization

This feedback led the team to generate Figure 4, which illustrates key examples and aids dissemination and implementation. Screenshots of informative patient‐generated data are presented. The visualization under (A) shows the Activity graph, which includes the daily step count and active minutes for a 57‐year‐old woman with progressive MS. This patient walks significantly more steps per day than other individuals at her EDSS level (orange line). During the summer, the patient experienced a personal setback that led to a dramatic decline in walking. After identifying the decline, the patient and clinician decided on a new plan for physical activity on weekends, when she walked fewer steps than during the week. Over the coming months, she was able to increase her activity level and step count steadily, even in the setting of cane dependence and progressive disease.

FIGURE 4.


Patient wearables visualizations designed for the MS point of care, including (A) Steps/Activity and (B) Sleep. (A) Steps/Activity visualizes the patient's daily step count (black line) and active minutes (blue bars) over time. (B) Sleep visualizes the patient's time asleep (blue) and awakenings (orange). Two patients are shown, denoted by the blue numbers. Key features are denoted by green numbers and are described in Box 1.

The visualizations under (B) depict Sleep data from two individuals. The graphs show the patients' time asleep and awakenings. The patient in Sleep 1 has lower sleep efficiency and more sporadic sleep patterns; the patient in Sleep 2 has more regular sleep patterns and higher sleep efficiency. Both patients show frequent awakenings during the night, providing an opportunity to discuss reasons for sleep interruptions at this granular level.
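The sleep efficiency metric referenced here can be computed from wearable‐derived totals. The sketch below uses one common definition, the percentage of time in bed spent asleep, with hypothetical values; the tool's exact formula may differ.

```python
def sleep_efficiency(minutes_asleep: int, minutes_in_bed: int) -> float:
    """Percentage of time in bed spent asleep (one common definition)."""
    return round(100 * minutes_asleep / minutes_in_bed, 1)

# Hypothetical nights for the two patients in Figure 4B.
print(sleep_efficiency(300, 420))  # sporadic sleeper: 71.4
print(sleep_efficiency(420, 460))  # regular sleeper: 91.3
```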

BOX 1. Patient wearables visualizations: Key features. The numbers correspond to the green numbers in Figure 4.
1 Each visualization has a date‐range picker, which allows the user to customize the timeline for the data viewed. For example, the user can choose to view only the last month of data or the last 6 months of data
2 A table summarizes the patient's average daily step count and number of active minutes, showing the patient's daily means, range, and goal
3 Legend for the Steps/Activity graph. The graph shows the patient's step count, “Active minutes”, and EDSS reference zone, along with the patient's step and active‐minute goals
4 The left y‐axis shows step count, and the x‐axis shows date
5 The right y‐axis shows active minutes
6 A summary table for the patient's sleep trends, showing the patient's mean hours of night sleep, mean awakenings, and sleep efficiency
7 Legend for the Sleep graph. The graph shows the patient's time asleep and time spent awake in bed
3.4.3. Qualitative Insights

Many uses for the wearables were elicited from patients and clinicians during the observation period. Participants with MS also expressed satisfaction with using the Fitbit to monitor their activity and the insights generated by bringing the data to the point of care for discussion with their clinicians. They specifically noted greater awareness of activity and sleep levels, the benefit of having real‐life data to inform discussions with their clinicians about daily habits, as well as the ability to set specific goals.

  • Examples of how clinicians and patients might use Activity visualizations include monitoring step count over time as a sign of progression, congratulating patients on activity efforts despite their limitations, responding to a drop in activity after a fall with a referral to specialized rehabilitation, and setting realistic step count goals that patients find motivating. “Having the Activity visualization in the EHR is a motivating feature to engage patients about the importance of activity and to appreciate efforts they may be putting in sometimes despite disease” (Neurologist, mid‐career).

  • The benefits of visualizing sleep metrics include helping to characterize difficulties with sleep latency and maintenance, relate the timing of symptomatic medications to wake‐up times, and set sleep hygiene goals. Wearables data also provided key objective corollaries during specific scenarios, such as a period of prolonged unresponsiveness. “The Sleep visualization is really helpful when evaluating my patients' fatigue, which is a common complaint” (Neurologist, trainee).

  • Quotes from patients revealed similar feedback. “The activity and the sleep visualizations helped in managing my movements and the quality of sleep I am getting. The information did make me aware of what I am doing as well as what I am not doing” (patient, male, 70s). “I believe I'm more confident now in managing my MS. Wearing the Fitbit is a good reminder to keep moving and to get adequate sleep” (patient, female, 50s).

  • General feedback from the other departments was also positive, with all clinicians expressing interest in having these visualizations live in their clinics' BRIDGE dashboards as well. Some provided suggestions on how to adjust the visualizations to make them more useful in their clinical context. “The Activity widget would be useful for when a patient is making a medication change. We can see if the patient is able to become more active and what the time course of that was” (Neurologist, Movement Disorders, early career). “To improve the Heart Rate visualization for pregnancy, a suggestion would be to look at maternal heart rate and fetal heart rate” (Obstetrician, mid‐career). “Sleep is underrated for a patient's health; this is a nice quick snapshot for patients' sleep/health overall” (Neurologist, Neuromuscular disorders, early career). Another clinician suggested that the sleep visualization could be used to visualize REM Sleep Behavior Disorder in Parkinson's disease.

4. Discussion

Wearables provide a wealth of information relevant to the function of neurological populations, such as information about their activity, sleep, and fitness levels. While extensive research has supported the potential of wearables as digital biomarkers for research and clinical use, harnessing this information for actionable change is stymied by clinicians' lack of access to the data in an actionable, interpretable, usable format. The current paper presents a method for accessing and delivering wearables data relevant to neurological care at the point of care and demonstrates good usability and likability, as well as an initial range of insights about the possible use and effectiveness of the visualizations. We are not aware of other systems bringing patient wearable data into the EHR or integrating these with typical clinical data for neurological care.

Prior research on how clinicians and patients might act together based on patient‐generated wearables data has focused on reviewing patients' wearables data and setting goals for them [19]. But to evaluate and test the true clinical impact, the data must be available clinically, permitting, in essence, in silico trials. Achieving this, that is, accessing and delivering wearables data to the clinical context, requires technological solutions that span obtaining patient permission to access the data, technically accessing the data through APIs, processing the data into visual formats, and delivering these within the clinical workflow.

For the current proof‐of‐concept tool, our choices prioritized the users: a simple, consumer‐grade wearable (Fitbit) for patients, and visuals delivered directly into the EHR for seamless, 1‐click clinician access. Several wearables aggregators are available, but none were fully integrated into the EHR. Ultimately, for this proof of concept, accessibility was selected over scalability, since access was a prerequisite to understanding use and, therefore, to making a case for scaling the tool. Prioritizing clinician convenience, the team opted to build its own solution (UCSF‐Gather) that would integrate with our SMART on FHIR application (BRIDGE). In the future, should health systems adopt such aggregators as plug‐ins to their EHRs, additional data sources could be accessed, visualized, and utilized for clinical care. Indeed, while we utilized Fitbit as a proof of concept, the visualizations could be repurposed for various other wearable devices that track standard metrics such as Steps and Sleep. Although connecting UCSF‐Gather to another wearable device would be conceptually straightforward, it would still require a significant amount of engineering time: each device has unique access mechanisms, endpoints, and data formats, necessitating restructuring of the aggregator code to ensure accurate data connection, formatting, and storage.
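Why adding a device requires aggregator rework can be shown with a minimal adapter sketch: each vendor returns differently shaped payloads, so the aggregator must map every source onto one internal schema before visualization. The vendor names and payload fields below are invented for illustration and match no real vendor API.

```python
# Hypothetical adapter layer illustrating device-specific data formats.
# Vendor names and payload fields are invented; no real API is implied.

def normalize(vendor: str, payload: dict) -> dict:
    """Map a vendor-specific daily summary onto a common internal record,
    so downstream storage and visualization code sees one schema."""
    if vendor == "vendor_a":
        # e.g., a vendor that returns {"dateTime": ..., "value": "5123"}
        return {"date": payload["dateTime"], "steps": int(payload["value"])}
    if vendor == "vendor_b":
        # e.g., a vendor that nests counts under a "metrics" object
        return {"date": payload["day"],
                "steps": payload["metrics"]["stepCount"]}
    raise ValueError(f"no adapter for vendor: {vendor}")


if __name__ == "__main__":
    a = normalize("vendor_a", {"dateTime": "2024-01-01", "value": "5123"})
    b = normalize("vendor_b", {"day": "2024-01-01",
                               "metrics": {"stepCount": 5123}})
    assert a == b  # both collapse to the same internal record
```

Supporting a new device means writing one more branch like these, plus the authentication and endpoint plumbing the sketch omits, which is where the engineering time goes.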

Despite the encouraging feedback, the current study has limitations. We sacrificed scalability in the proof‐of‐concept phase, and the application backend (UCSF‐Gather) will require modifications to accept data from larger numbers (thousands) of patients. As described above, we chose accessibility at the point of care over scalability, so that we could understand the potential impact of these visualizations before scaling them. While the integrations were built on 10 years of observational studies using the Fitbit, the commercial technological space continues to evolve, and modifications will be required. For instance, Fitbit Inc. was purchased by Google, leading to changes that require researchers to adapt to new functionalities, although core tracking features remain intact. The current visualizations, adapted to the MS space and integrated into the UCSF‐specific BRIDGE platform, can be utilized in other clinical settings and with other wearable devices, and could be integrated into other point‐of‐care solutions in other health systems, such as MS‐SHARE [28].

The proof‐of‐concept technological solution presented here marks a significant advance in our ability to access and utilize patient‐generated data to support clinical goals such as activity goal setting, treatment monitoring, and personalized care, and points overall to the feasibility and importance of leveraging patient‐generated data for patients' own care. Insights from ongoing trials into how the data are used, and the impact of their use, will guide the refinement and scalability of the technological solutions that enable them, ensuring their effective integration into patient care.

Author Contributions

Nicolette Miller and Ebenezer Chinedu‐Eneh were responsible for the acquisition and analysis of data and drafted and revised the manuscript. Jaeleene Wijangco, Kyra Henderson, Nikki Sisodia, Narender Sara, Jennifer Reihm, Shane Poole, Jim Rowson, Chu‐Yueh Guo, Jeffrey M. Gelfand and Valerie J. Block were responsible for the acquisition and analysis of data and revised the manuscript. Riley Bove conceptualized and supervised the study and drafted and revised the manuscript.

Disclosure

N.M., E.C.‐E., K.H., N.S., J.W., N.S., J.R., S.P., J.R.: No funding to disclose. C.‐Y.G. provided medical consulting for TG Therapeutics. J.M.G. reports research support to UCSF from Hoffman LaRoche and Vigil Neurosciences for clinical trials, and consulting for Arialys and Ventyx Bio. V.J.B. is funded by the National Multiple Sclerosis Society Career Transition Award. R.B. reports research awards by the NMSS Harry Weaver Award, NIH, DOD, NSF, as well as Biogen, Eli Lilly, Novartis and Roche Genentech. She has received personal fees for consulting from Alexion, Amgen, Cadenza, EMD Serono, Genzyme Sanofi, and TG Therapeutics.

Conflicts of Interest

The authors declare no conflicts of interest.

Supporting information

Data S1.

ACN3-12-1556-s001.docx (16.6KB, docx)

Acknowledgments

This study was funded by the NIH‐NLM (R01LM013396; PI: Bove). R.B. is funded by the National Multiple Sclerosis Society Harry Weaver Scholar Award. The authors wish to thank the patient participants and clinicians who participated in the design and validation process.

Funding: This work was supported by U.S. National Library of Medicine, R01LM013396.

Funding Statement

This work was funded by U.S. National Library of Medicine grant R01LM013396.

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

References

  • 1. Block V. J., Zhao C., Hollenbach J. A., et al., “Validation of a Consumer‐Grade Activity Monitor for Continuous Daily Activity Monitoring in Individuals With Multiple Sclerosis,” Multiple Sclerosis Journal—Experimental, Translational and Clinical 5 (2019): 2055217319888660, 10.1177/2055217319888660. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Block V. J., Bove R., Zhao C., et al., “Association of Continuous Assessment of Step Count by Remote Monitoring With Disability Progression Among Adults With Multiple Sclerosis,” JAMA Network Open 2 (2019): e190570, 10.1001/jamanetworkopen.2019.0570. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Melles M., Albayrak A., and Goossens R., “Innovating Health Care: Key Characteristics of Human‐Centered Design,” International Journal for Quality in Health Care 33, no. Supplement_1 (2021): 37–44, 10.1093/intqhc/mzaa127. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Brown E. G., Schleimer E., Bledsoe I. O., et al., “Enhancing Clinical Information Display to Improve Patient Encounters: Human‐Centered Design and Evaluation of the Parkinson Disease‐BRIDGE Platform,” JMIR Human Factors 9, no. 2 (2022): e33967, 10.2196/33967. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Block V. J., Waliman M., Xie Z., et al., “Making Every Step Count: Minute‐by‐Minute Characterization of Step Counts Augments Remote Activity Monitoring in People With Multiple Sclerosis,” Frontiers in Neurology 13 (2022): 860008, 10.3389/fneur.2022.860008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6. Block V. J., Lizée A., Crabtree‐Hartman E., et al., “Continuous Daily Assessment of Multiple Sclerosis Disability Using Remote Step Count Monitoring,” Journal of Neurology 264, no. 2 (2017): 316–326, 10.1007/s00415-016-8334-6. Epub 2016 Nov 28. PMID: 27896433; PMCID: PMC5292081. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Lai B., Sasaki J. E., Jeng B., et al., “Accuracy and Precision of Three Consumer‐Grade Motion Sensors During Overground and Treadmill Walking in People With Parkinson Disease: Cross‐Sectional Comparative Study,” JMIR Rehabilitation and Assistive Technologies 7, no. 1 (2020): e14059, 10.2196/14059. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Norris M., Anderson R., Motl R. W., et al., “Minimum Number of Days Required for a Reliable Estimate of Daily Step Count and Energy Expenditure, in People With MS Who Walk Unaided,” Gait & Posture 53 (2017): 201–206, 10.1016/j.gaitpost.2017.02.005. Epub 2017 Feb 8. PMID: 28199925. [DOI] [PubMed] [Google Scholar]
  • 9. Dlugonski D., Pilutti L. A., Sandroff B. M., et al., “Steps Per Day Among Persons With Multiple Sclerosis: Variation by Demographic, Clinical, and Device Characteristics,” Archives of Physical Medicine and Rehabilitation 94, no. 8 (2013): 1534–1539, 10.1016/j.apmr.2012.12.014. Epub 2013 Feb 15. PMID: 23419331. [DOI] [PubMed] [Google Scholar]
  • 10. Motl R. W., Snook E. M., and Agiovlasitis S., “Does an Accelerometer Accurately Measure Steps Taken Under Controlled Conditions in Adults With Mild Multiple Sclerosis?,” Disability and Health Journal 4, no. 1 (2011): 52–57, 10.1016/j.dhjo.2010.02.003. Epub 2010 Apr 3. PMID: 21168808. [DOI] [PubMed] [Google Scholar]
  • 11. Woelfle T., Pless S., Reyes Ó., et al., “Smartwatch‐Derived Sleep and Heart Rate Measures Complement Step Counts in Explaining Established Metrics of MS Severity,” Multiple Sclerosis and Related Disorders 80 (2023): 105104, 10.1016/j.msard.2023.105104. Epub 2023 Oct 24. PMID: 37913676. [DOI] [PubMed] [Google Scholar]
  • 12. Bove R., Schleimer E., Sukhanov P., et al., “Building a Precision Medicine Delivery Platform for Clinics: The University of California, San Francisco, BRIDGE Experience,” Journal of Medical Internet Research 24, no. 2 (2022): e34560, 10.2196/34560. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Block V. J., Cheng S., Juwono J., et al., “Association of Daily Physical Activity With Brain Volumes and Cervical Spinal Cord Areas in Multiple Sclerosis,” Multiple Sclerosis 29, no. 3 (2023): 363–373, 10.1177/13524585221143726. Epub 2022 Dec 27. PMID: 36573559; PMCID: PMC9972237. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Keller J. L., Tian F., Fitzgerald K. C., et al., “Using Real‐World Accelerometry‐Derived Diurnal Patterns of Physical Activity to Evaluate Disability in Multiple Sclerosis,” Journal of Rehabilitation and Assistive Technologies Engineering 9 (2022): 20556683211067362, 10.1177/20556683211067362. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15. Brown W. 3rd, Yen P. Y., Rojas M., and Schnall R., “Assessment of the Health IT Usability Evaluation Model (Health‐ITUEM) for Evaluating Mobile Health (mHealth) Technology,” Journal of Biomedical Informatics 46, no. 6 (2013): 1080–1087, 10.1016/j.jbi.2013.08.001. Epub 2013 Aug 23. PMID: 23973872; PMCID: PMC3844064. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Göttgens I. and Oertelt‐Prigione S., “The Application of Human‐Centered Design Approaches in Health Research and Innovation: A Narrative Review of Current Practices,” JMIR mHealth and uHealth 9, no. 12 (2021): e28102, 10.2196/28102. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Mathews S. C., McShea M. J., Hanley C. L., et al., “Digital Health: A Path to Validation,” npj Digital Medicine 2, no. 1 (2019), 10.1038/s41746-019-0111-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Block V. J., Koshal K., Wijangco J., et al., “A Closed‐Loop Falls Monitoring and Prevention App for Multiple Sclerosis Clinical Practice: Human‐Centered Design of the Multiple Sclerosis Falls InsightTrack,” JMIR Human Factors 11 (2024): e49331, 10.2196/49331. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Block V. J., Gopal A., Rowles W., et al., “CoachMS, an Innovative Closed‐Loop, Interdisciplinary Platform to Monitor and Proactively Treat MS Symptoms: A Pilot Study,” Multiple Sclerosis Journal – Experimental, Translational and Clinical 7, no. 1 (2021): 2055217321988937, 10.1177/2055217321988937. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Block V. J., Pitsch E. A., Gopal A., et al., “Identifying Falls Remotely in People With Multiple Sclerosis,” Journal of Neurology 269, no. 4 (2021): 1889–1898, 10.1007/s00415-021-10743-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Lavelle G., Norris M., Flemming J., et al., “Validity and Acceptability of Wearable Devices for Monitoring Step‐Count and Activity Minutes Among People With Multiple Sclerosis,” Frontiers in Rehabilitation Sciences 2 (2022), 10.3389/fresc.2021.737384. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Sagawa Y., Watelain E., Moulin T., and Decavel P., “Physical Activity during Weekdays and Weekends in Persons with Multiple Sclerosis,” Sensors 21, no. 11 (2021): 3617, 10.3390/s21113617. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Alharbi M., Bauman A., Neubeck L., and Gallagher R., “Validation of Fitbit‐Flex as a Measure of Free‐Living Physical Activity in a Community‐Based Phase III Cardiac Rehabilitation Population,” European Journal of Preventive Cardiology 23, no. 14 (2016): 1476–1485, 10.1177/2047487316634883. [DOI] [PubMed] [Google Scholar]
  • 24. Diaz K. M., Krupka D. J., Chang M. J., et al., “Fitbit®: An Accurate and Reliable Device for Wireless Physical Activity Tracking,” International Journal of Cardiology 185 (2015): 138–140, 10.1016/j.ijcard.2015.03.038. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Takacs J., Pollock C. L., Guenther J. R., et al., “Validation of the Fitbit One Activity Monitor Device During Treadmill Walking,” Journal of Science and Medicine in Sport 17, no. 5 (2014): 496–500, 10.1016/j.jsams.2013.10.241. Epub 2013 Oct 31. PMID: 24268570. [DOI] [PubMed] [Google Scholar]
  • 26. Nelson B. W. and Allen N. B., “Accuracy of Consumer Wearable Heart Rate Measurement During an Ecologically Valid 24‐Hour Period: Intra individual Validation Study,” JMIR mHealth and uHealth 7, no. 3 (2019): e10828, 10.2196/10828. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Abedalaziz W., Al‐Sharman A., Aburub A., et al., “The Relationship Between Sleep Quality and Gait in People With Multiple Sclerosis: A Pilot Study,” Hong Kong Physiotherapy Journal 44, no. 01 (2023): 11–19, 10.1142/s1013702523500129. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Bove R., Bruce C. A., Lunders C. K., et al., “Electronic Health Record Technology Designed for the Clinical Encounter,” Neurology Clinical Practice 11, no. 4 (2021): 318–326, 10.1212/cpj.0000000000000986. [DOI] [PMC free article] [PubMed] [Google Scholar]


Articles from Annals of Clinical and Translational Neurology are provided here courtesy of Wiley