Abstract
Continuously operating wearables offer detailed insight into chronic health conditions and have the potential to reshape diagnostic and screening tools. However, the energy demands and large datasets created by constant monitoring over weeks to months are difficult or impossible to integrate into existing clinical practice, limiting the utility of this device class. Machine learning offers the opportunity to condense these large datasets into streamlined, digestible trends with the potential for significant clinical impact, although off-device inference requires advanced network infrastructure and substantial power availability for radios. Here, we introduce a device framework that integrates artificial intelligence with clinical-grade biosignal acquisition at the edge, performing on-device inference with clinical-grade fidelity over extended durations with no interaction required by the wearer. We utilize this framework to perform gait-based frailty assessment during in vivo trials (N1 = 16) with results that match gold standard diagnostic tools. Clinical utility, model stability, and on-device inference are validated through in vivo trials (N2 = 14) and ten-day-long extended wear experiments, demonstrating continuous operation without wearer intervention and autonomous longitudinal analysis of high sampling rate biosignals.
Subject terms: Geriatrics, Biomedical engineering, Hardware and infrastructure, Machine learning
Energy demand and intensive computation limit the use of machine learning on-device for wearables. Here, the authors deploy edge AI in a wearable form factor to provide clinical-grade gait-based frailty assessment over weeks with no interaction required from the wearer at any point.
Introduction
Often confounded with natural aging and disability, frailty syndrome is a clinically recognizable predisposition to negative health outcomes, including falls, hospitalization, and mortality1–3. Defined by decreases in biologic reserves, exacerbated sarcopenia, and diminished strength2, frailty poses significant risk to a patient, with the 3-year mortality of frail patients 6× higher than that of healthy patients1. Early intervention, however, yields significant improvement in clinical outcomes for patients at risk for frailty or pre-frailty2, highlighting the need to advance existing diagnostic capacity and accessibility.
The current clinical standard for frailty diagnosis, illustrated in Fig. 1a, is reactive rather than preventive, with diagnoses often occurring only after an incidental fall or hospitalization1,4. From there, consulting with a clinician typically results in scheduling one or more follow-up visits and in-clinic frailty assessments, requiring time, reimbursement, and the capacity to repeatedly transport a potentially frail patient to a clinic. Although no single gold standard exists, the primary approach for frailty assessment is the Fried Frailty Phenotype (FFP)1,3, which broadly considers five metrics: exhaustion, slowness, weakness, low activity, and unexpected weight loss. Exhaustion, low activity, and unexpected weight loss are all subjective, self-reported metrics, whereas weakness is typically evaluated using a grip strength test and slowness through various mobility/gait tests, with research actively examining the efficacy of different testing modalities3. Alternative approaches for frailty assessment utilize gait analysis through instrumented walkways, force plates, vision-based systems, and wearable technologies3,5–7.
Fig. 1. Continuously operating wearable AI for on-device frailty assessment.
a Illustration showing the reactive current clinical standard for diagnosing frailty. [Graphics derived from stock assets licensed under commercial Royalty-Free License from Noun Project, Inc., and under an Extended License from andi/stock.adobe.com.] b Implementation of wearable performing on-device inference from IMU recorded gait, wireless far-field charging of the wearable device, and Bluetooth Low Energy data output from wearable. ML Machine Learning, AI Artificial Intelligence. [Graphics derived from stock assets licensed under commercial Royalty-Free License from Noun Project, Inc., and under Extended Licenses from elenabsl/stock.adobe.com].
Gait-based frailty analysis traditionally relies upon assessing speed, variability, and step length during habitual walking3,7,8. Short clinical assessments, therefore, do not provide an accurate representation of potentially pathological gait for numerous reasons, such as patient comfort and intentionality and differences in walking surfaces6, e.g., flat, vinyl flooring in laboratory and medical settings versus uneven and textured materials elsewhere. Continuous monitoring during daily living is therefore critical for establishing ground truth in gait-based frailty determination6. While continuously operating devices provide significantly richer datasets and opportunities for valuable health insight or intervention9,10, the large volumes of data they produce, especially over weeks or months, pose a notable hindrance to integration into existing clinical practice, as it is impractical for a physician to meaningfully analyze large datasets in a timely manner. Conventional statistical approaches to gait-based frailty assessment, such as examining stance time, step length, or frequency components11, are difficult to automate and tailor to individuals12,13. These large datasets therefore provide a strong use case for machine learning (ML)-based analysis, which can rapidly and accurately identify features that may be difficult for humans to recognize in a timely manner14, especially when repeated over longer time scales. ML approaches also capture complex, non-linear interactions that can be missed by conventional approaches, scale well, and offer the flexibility to learn from data13.
Performing on-device ML-based data analysis significantly condenses these large datasets, transforming weeks of continuous analysis into easily digested trends that can be transmitted with minimal bandwidth and data management requirements, shifting point-of-care from hospitals and clinics to homes through telehealth, reducing financial burdens on patients and decreasing strain on healthcare systems.
By embedding edge AI into a continuously operating wearable device, we introduce a method for gait-based frailty assessment during habitual walking over extended durations, shown in Fig. 1b. The biosymbiotic edge AI device (BEAD) utilizes biosymbiotic15–20 design to comfortably enable autonomous gait acquisition via inertial measurement unit (IMU) over weeks, requiring no interaction by the wearer at any point, including during recharging and data collection. On-device continuous step isolation and inference reduce the amount of data transmitted by the device by nearly 99% (8 bytes for a timestamped step inference vs 436 bytes of raw data per step, see “Continuous gait monitoring for frailty detection” “Results”), conserving battery life with a 21% decrease in average power consumption (see “Continuously operating frailty assessment” “Results”), enhancing data privacy by minimizing data management requirements, and providing quantifiable long-term metrics with an overall accuracy greater than 90% when classifying between healthy and pre-frail steps (see “On-device machine learning for gait-based frailty assessment” “Results”). In vivo studies with healthy and pre-frail subjects aged 65 and older demonstrate gait measurement fidelity on par with current clinical gold standard devices (see “Continuously operating frailty assessment” “Results”), and showcase the BEAD’s ability to perform real-time (<330 ms from raw data to inference result, “On-device machine learning for gait-based frailty assessment” “Results”), exclusively on-device inference for continuous frailty status determination.
Results
Gold standard validation
Wearable technologies for gait-based frailty assessment have significant advantages over instrumented walkways, force plates, and vision-based systems, namely portability and ease of data collection5,7,21. Although technically portable, walkways and force plates remain fixed in one location during testing and require setup and monitoring, typically in a laboratory or medical setting, precluding their use for habitual monitoring. Vision-based systems are costly, may compromise patient privacy or comfort, and eventually require manual data evaluation by trained personnel. Therefore, wearable technologies capable of chronic operation22,23, e.g., biosymbiotic electronics, are critical for establishing a modernized standard for frailty determination. Biosymbiotic technology utilizes soft, conformal 3D-printed mesh bodies, far-field power harvesting, and autonomous ultralow-power electronics to create sensing platforms that operate continuously without any interaction required from the wearer15–20.
To demonstrate the high sensing fidelity of the BEAD introduced in this work, gold standard clinical-grade gait assessment devices are compared against biosymbiotic wearable devices, shown in Fig. 2A, during in vivo trials. Notable is the brick-and-strap architecture of the gold standard (Biosensics, LEGSys), which results in significant inertia-induced accelerometer/gyroscope noise that is not present in the biosymbiotic devices’ gait recordings15 due to their distributed, conformal form factor and low-mass sensor design.
Fig. 2. Gold standard validation.
A Photograph of subject during 60-s continuous walk. Labels indicate devices used for gait collection, gold standard gait analysis device (BioSensics, LEGSys), and biosymbiotic wearable devices15. B Comparison of gait parameters associated with walking extracted from gold standard and biosymbiotic devices across healthy (n = 5) and pre-frail (n = 7) populations. Box plots show the 25th, 50th (median), and 75th percentile, whiskers range to non-outlier minimum and maximum. Outliers computed as 1.5 interquartile range. Significance was determined using ordinary unpaired two-tailed t-test (ns not significant, p* < 0.05). P values by order of appearance from left to right: Continuous walk trunk: 0.9387, 0.6524; 0.8523, 0.9439; 0.3002, 0.6383; 0.4511, 0.9147. Continuous walk leg: 0.8857, 0.9401; 0.9138, 0.2483; 0.6351, 0.9341; 0.9545, 0.3487. M.S. Velocity Mid-swing velocity, Dps degrees per second, Biosym. Biosymbiotic. Background red/green shading indicates device placement (A) used to generate metrics. C Photograph of subject during timed 5STS test. Circle indicates placement of left upper leg devices. D Data from gold standard device and biosymbiotic device across healthy (n = 5) and prefrail populations (n = 6, one subject did not complete STS and their data are not included) during 5STS assessment. Box plots show the 25th, 50th (median), and 75th percentile, whiskers range down/up to the lowest/highest points. Significance was determined using ordinary unpaired two-tailed t-test (ns not significant, p* < 0.05). P values by order of appearance from left to right: 0.8085, 0.9130; 0.5745, 0.9967; 0.4477, 0.0789. Background blue shading indicates device placement (C) used to generate metrics. STS Sit-to-stand.
Gait parameters from the continuous 60-s walk test during in vivo trials (n = 12 subjects, 5 healthy, 7 pre-frail) are shown in Fig. 2B. Step/stride time and step/stride variability are gait parameters typically assessed when performing frailty assessment in clinic, and can reliably distinguish frailty severity3,5. No statistically significant differences are noted between data collected by gold standard devices versus biosymbiotic devices across all assessed gait metrics, see Supplementary Table S1. The data agree with current gait-based frailty assessment tools5: notably, step/stride variability is increased in pre-frail versus healthy subjects, and left/right thigh mid-swing velocities are higher in healthy versus pre-frail subjects8. Data and extracted metrics from continuous walking demonstrate continuous, high-fidelity, clinical-grade gait assessment in line with existing literature.
Shown in Fig. 2C is a photo of a subject during 5 sit-to-stand (5STS) testing, with a circle highlighting the upper leg placement used to extract the 5STS metrics shown in Fig. 2D (n = 11 subjects, one subject did not complete the test). Consistent with Fig. 2B, there is no statistically significant difference in data collected by the gold standard devices versus biosymbiotic devices. A distinct separation between data from healthy and pre-frail subjects during 5STS is in line with current literature, namely increased sit/stand transition times and increased time required to complete the 5STS cycles24. Timed Up-and-Go (TUG) results, shown in Supplementary Fig. 1, further demonstrate continuous functionality and clinical-grade fidelity during high acceleration daily activities; however, TUG tests do not reliably identify pre-frail versus healthy individuals25,26, and the data collected are used solely to calculate minimum walking speed for FFP slowness determination.
Results from in vivo studies demonstrate clear and consistent differences between healthy and pre-frail subjects. Additionally, the data recorded by the gold-standard clinical grade LEGSys system and the biosymbiotic wearable devices show high consistency and concurrency at all placements with no statistically significant differences in data collected by both device types across all assessed gait parameters, demonstrating the biosymbiotic wearable device’s clinical grade gait assessment ability.
Continuously operating frailty assessment
Clinical frailty analysis, outlined above, requires trained personnel, specialized equipment, and a defined experimental arena, making diagnostics cumbersome and localized to well-equipped care facilities. Shifting diagnosis into an ambulant setting would provide a substantial step forward in diagnostic efficiency; however, several key hurdles remain unaddressed: transmitting raw data at the sampling rates required27 (100–500+ Hz) for this application is prohibitively energy expensive, managing a constantly connected data processing device is impractical for most lifestyles, and significant datasets would be created even by short recordings: 12 kB/min (assuming 100 Hz and 16-bit single-axis resolution). ML inference exclusively on the wearable device (versus transmitting raw/pre-processed data off-device for inference) poses a significant opportunity for optimizing battery life over longer recording durations, safeguarding patient privacy, and minimizing data management requirements28.
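The data-rate arithmetic above can be sketched in a few lines (sampling rate and resolution are taken from the text; the daily walking duration is an illustrative assumption):

```python
# Raw-data volume for continuously streaming one IMU axis off-device,
# using the values from the text: 100 Hz sampling, 16-bit resolution.
SAMPLE_RATE_HZ = 100
BYTES_PER_SAMPLE = 2  # 16-bit resolution

bytes_per_minute = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * 60
kb_per_minute = bytes_per_minute / 1000  # 12 kB/min, as stated

# Illustrative assumption: two hours of walking per day for one month.
mb_per_month = bytes_per_minute * 120 * 30 / 1e6  # ~43 MB per axis
```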
Current generation wearable devices face issues with subject interaction through charging requirements, technology management of smartphones or specialized devices, and secure placement and skin attachment, with patient discomfort and irritation often arising from rigid device bodies, abrasive textile or non-breathable plastic attachment straps, and adhesives which need frequent replacement23,29. To avoid these common issues, BEADs employ the biosymbiotic device architecture, which ensures lightweight comfort and the ability to operate continuously for weeks without input from the wearer15,20. A photograph of a BEAD with subsystems labeled is shown in Fig. 3a. Biosymbiotic devices embed soft electronics in lightweight, breathable, water- and sweatproof, patient-tailored (using 3D scans, photogrammetry, or direct measurements) polymer meshes that are rapidly 3D-printed using FDA-approved skin-safe thermoplastic polyurethane (TPU) filament15. The thin, low-mass form factor of the device (maximum height 6 mm, weight 15 g with a 30 mAh LiPo battery) and breathable mesh allow for nearly imperceptible wear without slippage, adhesives, or irritation15,19. A functional block diagram, a labeled component diagram, and an electrical schematic are included in Fig. 3b and Supplementary Figs. 2 and 3, respectively. Onboard wireless far-field power harvesting using commercially available, FCC-approved power casting systems enables wireless recharging at a distance, such as at a work desk or during sleep, without wearer interaction15,16. These power casting systems plug into any wall outlet and require only a quick initial setup for continuous operation: they are simply plugged in and rotated to face the BEAD, providing wireless power up to 2 m away15,16.
Fig. 3. AI-embedded continuously operating device for frailty detection.
a Left: Photograph of wearable device on subject’s leg. Labels indicate various subsystems. LDO Low dropout voltage regulator, BLE Bluetooth Low Energy, IMU Inertial Measurement Unit. b Functional block diagram illustrating the subsystems enabling BEAD functionality over extended durations. LiPo Lithium-polymer, PWR Power, CPU Central Processing Unit. c Plot of current draw for a device that performs inference exclusively on device (top) and for a device that transmits recorded data off device for inference (bottom). Total energy consumption over entire snippet duration (14 s) is shown (right, above each plot). Shaded regions indicate different operating conditions (blue = device sleep, purple = BLE pairing & data offloading, green = step data collection, and yellow = on-device inference). mJ milliJoule, mA milliamp.
Although BLE consumes low power relative to other communication modalities23,28, it uses considerable power during transmission and receiving events (up to 30+ mW depending on chipset) relative to the power consumption of the microcontroller core during typical operation (<8 mW active, <0.25 mW sleep/idle), see Supplementary Fig. 4. The BEAD’s onboard IMU (Bosch, BMI270) embeds a proprietary decision tree model that continuously evaluates accelerometer signals to detect motion and perform walking/running activity recognition, generating hardware-level interrupts on activity change. Using this interrupt allows the wearable device to enter a low-power mode that consumes one-tenth the power while maintaining quick wake timing (<35 ms from low-power standby to normal operation, Supplementary Fig. 5) to collect real-time gait data immediately once the wearer begins walking, with no additional effort or interaction required to begin data logging.
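As a back-of-the-envelope sketch of why the activity-gated low-power mode matters, the time-weighted average power of a duty-cycled core can be estimated as follows (the active and sleep figures are the upper bounds quoted above; the 10% active duty cycle is an illustrative assumption):

```python
def average_power_mw(p_active_mw: float, p_sleep_mw: float, duty_active: float) -> float:
    """Time-weighted average power for a device that is active for a
    fraction `duty_active` of the time and asleep otherwise."""
    return p_active_mw * duty_active + p_sleep_mw * (1.0 - duty_active)

# Upper bounds from the text: <8 mW active, <0.25 mW sleep/idle.
# Illustrative assumption: the wearer walks ~10% of the day.
avg_mw = average_power_mw(8.0, 0.25, 0.10)  # ~1 mW average
```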
Figure 3c illustrates current consumption for a BEAD performing on-device inference versus transmitting raw step data for off-device analysis. Integrating over the entire 14 s duration reveals a 21% decrease in energy consumption (19.68 mJ versus 24.97 mJ, respectively) for single-step inference. Shown in Supplementary Fig. 6, system energy consumption depends on walking speed, with the IMU’s motion detection hardware interrupt triggering more reliably at higher step rates. In response to the interrupt, the microcontroller activates the IMU’s gyroscope, which has high power demands relative to the low-power idle state of the BEAD. Walking speeds below 2.89 kph (1.8 mph) did not reliably trigger the IMU’s activity detection during this test; thus, no changes in system current consumption are noted for 0.97 and 1.93 kph (0.6 mph and 1.2 mph, respectively). At or above 3.86 kph (2.4 mph), the gyroscope is enabled practically continuously and current consumption plateaus. By employing more conservative inference approaches, e.g., limiting the number of steps classified within a given duration, one can maintain low energy consumption. Since the BEAD is worn for extended durations and can store inference results (8 bytes/step) when external logging or processing devices are not available, rate-limiting inference is not as prohibitive as it would be during data collection from short, dedicated, in-clinic sessions. Results from nearly 27,000 events can be stored on-device, enabling 7–14 days of operation (depending on daily step count) without the need to activate BLE, providing exceptional flexibility in firmware design and increased reliability without the need for large onboard memory. Analyzed data, i.e., timestamped inferences, can be offloaded via an internet-connected device such as a smartphone, aggregated into total steps classified as frail versus healthy, and deposited directly into electronic health records (EHRs), shown in Supplementary Fig. 7A.
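The on-device storage arithmetic works out as follows (event size and capacity are from the text; the daily step counts are illustrative assumptions spanning low- and high-activity wearers):

```python
BYTES_PER_EVENT = 8     # one timestamped step inference (from the text)
MAX_EVENTS = 27_000     # approximate on-device event capacity (from the text)

def days_of_storage(steps_per_day: int) -> float:
    """Days of inference results storable before a BLE offload is needed."""
    return MAX_EVENTS / steps_per_day

storage_bytes = BYTES_PER_EVENT * MAX_EVENTS  # ~216 kB of log space
days_low_activity = days_of_storage(2_000)    # ~13.5 days
days_high_activity = days_of_storage(3_800)   # ~7.1 days
```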
Unlike with traditional brick-and-strap wearable devices, a patient is not required to continuously interact with the BEAD to perform data collection, see Supplementary Fig. 7B for the control flow that enables autonomous data collection. Patients also do not need to remove the device for charging or when bathing. Only a brief initial setup of the power caster is required for indefinite collection of high-fidelity biosignals, enhancing accessibility, user experience, and improving overall patient compliance10,15.
On-device machine learning for gait-based frailty assessment
The significance of this device compared to existing wearable technologies for frailty detection3,5,7,12,27,30–36, see Supplementary Table S2, is demonstrated by its ability to autonomously and efficiently evaluate a patient’s gait in real time without any external infrastructure.
Angular velocity from biosymbiotic devices located on the upper thigh, sample data shown in Fig. 4a (left) and device placement shown in Fig. 2C, is normalized (Fig. 4a, middle) and used to create the training dataset (Fig. 4a, right). This placement and modality are chosen because of their reduced sensitivity to intersubject variance, i.e., the data are less noisy relative to shank placement. Since biosymbiotic devices have low mass relative to current gold standard gait analysis devices, the data from on-device IMUs contain features which may be occluded by the inertia of brick-and-strap clinical standard devices15.
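The per-step preprocessing of Fig. 4a can be sketched as follows; the exact normalization used on-device is not specified, so this assumes linear resampling to the fixed 218-point step length followed by peak-amplitude scaling:

```python
import numpy as np

STEP_LEN = 218  # fixed per-step length used by the transform (from the text)

def normalize_step(angular_velocity: np.ndarray) -> np.ndarray:
    """Resample one isolated step to STEP_LEN points and scale to [-1, 1].

    Hypothetical sketch: linear resampling plus peak-amplitude
    normalization, standing in for the unspecified on-device pipeline.
    """
    x_old = np.linspace(0.0, 1.0, len(angular_velocity))
    x_new = np.linspace(0.0, 1.0, STEP_LEN)
    resampled = np.interp(x_new, x_old, angular_velocity)
    peak = np.max(np.abs(resampled))
    return resampled / peak if peak > 0 else resampled

# Example: a synthetic single-peak step recorded with an arbitrary length.
raw_step = 250.0 * np.sin(np.linspace(0.0, np.pi, 180))  # deg/s
step = normalize_step(raw_step)
```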
Fig. 4. Edge AI implementation.
a Illustration of training dataset creation with step isolation, normalization, and aggregation. Left thigh angular velocity recorded by on-device gyroscope from all subjects was combined into a dataset (n = 2057). FP Floating-point. b Illustration of workflow for deploying ML-based analysis on-device. [Graphics derived from stock assets licensed under commercial Royalty-Free License from Noun Project, Inc.] c Comparison of accuracy of several ML-based time series classification approaches. RF + MIRO: Random Forest classifier using 2-dimensional (time & angular velocity) MINIROCKET transform. Optimized RF + MIRO: Random Forest classifier using 2-dimensional (time & angular velocity) MINIROCKET transform with reduced kernel and dilation counts. 3-, 5-, 7-, 9-KNN: 1-dimensional (angular velocity) K-nearest-neighbors, validation set accuracy presented as mean +/− SD with Nearest Neighbors = 3, 5, 7, 9. Ridge Reg. + ROCKET: Ridge regression classifier using 1-dimensional (angular velocity) ROCKET transform. n = 4 validation set accuracies for 3-, 5-, 7-, 9- KNN, otherwise n = 1 validation set accuracy per approach. d Plot showing comparison of characteristic single-step left thigh angular velocity data during 60s-continuous walk for subjects classified as healthy vs pre-frail. e Confusion matrix for final classification pipeline using RF classifier and optimized 2-D MINIROCKET transform.
Illustrated in Fig. 4b, a classifier model trained on a workstation is readily quantized and optimized into memory-efficient C code for deployment on resource-constrained embedded devices. Once integrated with the wearable device, inference can be performed on-device using incoming sensor data in real time, in this case allowing angular velocity data from individual steps to be extracted, preprocessed, and evaluated by the microcontroller (µC) with minimal power consumption and memory footprint.
In this work, ROCKET37 and MINIROCKET38 are used for feature extraction from time series containing 1-dimensional data (angular velocity, 218 points per step) and 2-dimensional data (angular velocity, 218 points per step, with timestamps for each point, 218 × 2 = 436 points per step). One significant advantage of MINIROCKET over ROCKET is that it can be deployed fully deterministically, meaning transform biases and kernels can be pre-calculated by the external workstation and stored in non-volatile memory on the microcontroller, reducing both computation time and complexity with little impact on model accuracy. Assessing several common machine learning approaches, including a ridge regression classifier, K-Nearest-Neighbors classifiers with 3-, 5-, 7-, and 9-nearest neighbors, and a Random Forest (RF) classifier, the class of the evaluation subset is predicted with accuracies around 70%, 75%, and 95%, respectively; results are shown in Fig. 4c. Accuracy evaluation of a ridge regression classifier using a ROCKET transform is included, although its validation accuracy is significantly worse than the other approaches, likely due to overfitting the relatively small training dataset (N = 2057 steps from 12 unique subjects) and the imbalance in sample number between classes (5 healthy versus 7 pre-frail).
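The classifier comparison can be illustrated end-to-end on synthetic data. This is a sketch, not the paper's pipeline: it uses a simplified ROCKET-style transform (random convolutions pooled with the proportion-of-positive-values statistic) and synthetic two-class step waveforms in place of the real dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def rocket_features(X: np.ndarray, n_kernels: int = 100) -> np.ndarray:
    """Simplified ROCKET-style features: random convolutional kernels
    pooled by the proportion of positive values (PPV)."""
    feats = np.empty((X.shape[0], n_kernels))
    for k in range(n_kernels):
        kernel = rng.normal(size=rng.choice([7, 9, 11]))
        bias = rng.normal()
        for i, series in enumerate(X):
            feats[i, k] = np.mean(np.convolve(series, kernel, mode="valid") + bias > 0)
    return feats

# Synthetic stand-ins: "healthy" steps are a clean single peak; "pre-frail"
# steps are lower-amplitude with an added oscillatory component.
t = np.linspace(0.0, np.pi, 218)
healthy = np.sin(t) + rng.normal(0.0, 0.05, (60, 218))
prefrail = 0.7 * np.sin(t) + 0.2 * np.sin(5 * t) + rng.normal(0.0, 0.05, (60, 218))
X = rocket_features(np.vstack([healthy, prefrail]))
y = np.array([0] * 60 + [1] * 60)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

accuracies = {
    type(clf).__name__: clf.fit(Xtr, ytr).score(Xte, yte)
    for clf in (RidgeClassifier(), KNeighborsClassifier(5), RandomForestClassifier(random_state=0))
}
```

On this easily separable synthetic data all three classifiers score highly; the point is the pipeline shape (transform, then an ordinary tabular classifier), which is what makes the ROCKET family attractive for embedded deployment.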
Deployment of a RF classifier with a fully deterministic MINIROCKET transformer provides the highest average classification accuracy among the ML approaches surveyed, ~96.2%; however, time per inference and memory footprint are both high, averaging 3 s per inference and occupying over 80 kB of non-volatile memory for a 1-dimensional transformer. The high time per inference precludes real-time operation, causing a significant number of missed steps while the microcontroller is busy processing the transform, and the large memory footprint significantly limits the number of microcontrollers that can fit the final transformer. It may also increase overall system design complexity as off-µC non-volatile memory becomes necessary. Shown in Supplementary Fig. 8, the MINIROCKET transform can be optimized for memory footprint and computational time by reducing the number of convolutional kernels from 10,000 (actually 9996) to 420, and the maximum number of dilations per kernel from 32 to 16, achieving nearly tenfold reductions in both transform time per step (~330 ms vs 3 s) and memory footprint (8 kB of weights per transform dimension vs 80 kB) while maintaining high model accuracy (91% for optimized MINIROCKET vs 96% for default parameters). Model optimization indicates that around 500 kernels are necessary to achieve reliable performance and accuracy in this case. Although the MINIROCKET algorithm requires a multiple of 84 kernels, performance at 504 kernels versus 420 kernels is not sufficiently better to merit the higher computational cost and RAM requirement. Decreasing computation time per step to 330 ms enables real-time continuous per-step inference and uninterrupted device functionality, and lowers overall system power consumption since the ultralow-power microcontroller can sleep between subject steps.
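The kernel-count constraint mentioned above follows from MINIROCKET's fixed base set of 84 kernels; a small helper makes the rounding explicit (a sketch of the constraint, not the authors' tooling):

```python
BASE_KERNELS = 84  # MINIROCKET draws from a fixed set of 84 base kernels

def nearest_valid_kernel_count(requested: int) -> int:
    """Round a requested kernel count down to the nearest multiple of 84
    (minimum one multiple), e.g., a request for 10,000 yields 9,996."""
    return max(requested // BASE_KERNELS, 1) * BASE_KERNELS

default_kernels = nearest_valid_kernel_count(10_000)  # 9996 (84 x 119)
optimized_kernels = nearest_valid_kernel_count(420)   # 420 (84 x 5)
next_step_up = optimized_kernels + BASE_KERNELS       # 504 (84 x 6)
```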
Figure 4d contains characteristic steps from subjects who met FFP criteria for either healthy or pre-frail classification, showing small variations which are common amongst each class. The differences between the two classes are subtle, making clinical assessment of an extended study impractical to do by hand and impossible to integrate into existing clinical practice. The issue is further complicated by high waveform variance across subjects, with pre-frail individuals exhibiting additional intrasubject gait variance relative to healthy subjects, see Supplementary Fig. 9.
Shown in Fig. 4e, the RF classifier with compressed MINIROCKET transform retains a high overall accuracy of 91.33% (with a false-negative rate of 8.05% and false-positive rate of 6.35%), as compared to 96.2% before parameter optimization. Given the significant imbalance between class sample n, accuracy does not necessarily provide an absolute metric of classification performance. The f1-scores of the classification pipeline are therefore assessed and shown in Supplementary Table S3, with scores of 0.92 and 0.90 when predicting healthy and pre-frail, respectively, indicating excellent precision, recall, and ability to distinguish between classes. Figure S10 contains learning curves generated using two different cross-validation schemes: stratified, fivefold (Supplementary Fig. 10A) and leave-one-group-out (Supplementary Fig. 10B), showing the model’s ability to learn and generalize as the training set increases in size, although it is limited by the number of pre-frail subjects in the training set.
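Accuracy and per-class F1 can be derived directly from a confusion matrix; the sketch below uses an illustrative matrix (not the paper's exact validation counts) to show the computation:

```python
import numpy as np

def metrics_from_confusion(cm: np.ndarray):
    """Overall accuracy and per-class F1 from a 2x2 confusion matrix
    (rows = true class, columns = predicted class)."""
    accuracy = np.trace(cm) / cm.sum()
    f1 = {}
    for c, name in enumerate(("healthy", "pre-frail")):
        tp = cm[c, c]
        precision = tp / cm[:, c].sum()
        recall = tp / cm[c, :].sum()
        f1[name] = 2 * precision * recall / (precision + recall)
    return accuracy, f1

# Illustrative counts only; class sizes mimic the healthy/pre-frail imbalance.
cm = np.array([[295, 20],
               [ 23, 177]])
accuracy, f1_scores = metrics_from_confusion(cm)
```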
Ultimately, these results demonstrate the ability to deploy a performant, memory-, time-, and power-optimized classification pipeline on ultralow-power devices without interfering with designed device operation. They also reveal subject-specific overfitting that limits generalizability. While minor improvements may be gained through hyperparameter tuning or with a more advanced classification algorithm, clinical deployment will require substantially larger, more diverse training cohorts. Given local population dynamics (Tucson, AZ: 92,990 adults aged 65+)39, the general incidence of frailty in adults aged 65 and older (~10–15%)40,41, and the ability of the local university-healthcare network to reach, recruit, and retain longitudinal study subjects, fewer than 200 subjects are potentially recruitable in a large city. Considering that recruitment rates of non-oncological clinical trials average as low as 0.85 subjects recruited per site per month42,43, statistically significant subject recruitment would require considerable resources, necessitating years-long, multi-institution, multi-site collaboration. Such a study would set the foundation for establishing and translating a new clinical tool.
Continuous gait monitoring for frailty detection
At this time, there are no technologies capable of automated frailty detection, i.e., continuous operation, gait monitoring, and automated classification, either on- or off-device, which is critical for assessment in the absence of trained personnel27. The device introduced in this work, showcased in Fig. 5a, provides an approach for home-based frailty diagnostics in a low-profile, unobtrusive form factor that is easily hidden under clothing, shifting point of care away from medical clinics and enhancing current telehealth offerings.
Fig. 5. Continuous gait monitoring for frailty detection.
a Photograph demonstrating the low profile of BEADs that allows them to be worn discretely under clothing. b Proposed approach for home-based frailty diagnostics and preventive care. [Graphics derived from stock assets licensed under commercial Royalty-Free License from Noun Project, Inc.] c Plots showing results of step inference during 60 s continuous walk from a healthy (left) subject and a pre-frail (right) subject, with blue shaded regions highlighting an individual step from each subject. d Battery voltage and inference results from ten-day continuous wear on healthy subject. Yellow-orange shaded regions indicate periods of far-field charging.
This proposed clinical use, illustrated in Fig. 5b, can be offered preventively rather than post-hospitalization, as is often the case with the current approach, improving patient outcomes44,45. An initial prediagnostic meeting with a telehealth provider results in a device and a commercially available wireless power caster being shipped directly to the patient. Since the BEAD requires no interaction from the patient beyond initial placement of the power caster at a desk or bedside, is recharged wirelessly, and can be worn while bathing, the patient would simply wear the device on their thigh and continue with their normal routine. During habitual gait, i.e., step activity recognized by the BEAD’s onboard IMU, the device wakes up, records continuous angular velocity of the leg, isolates individual steps, and performs inference, aggregating the results over time and transmitting them off-device when possible, potentially integrating them directly into EHRs/electronic patient health information28 or transmitting them directly to the telehealth provider. After several weeks, or given an early, clear trend in the inference results, a follow-up appointment with a healthcare provider enables informed advanced diagnostics and treatment plans.
Validation of on-device inference is performed with a second round of in vivo trials (N2 = 14 subjects: 11 healthy, 3 pre-frail). The BEAD monitors gait continuously using real-time angular velocity from devices placed on the upper thigh, extracting individual steps and performing inference, with results from 60 s continuous walk testing plotted in Fig. 5c. Characteristic angular velocity data from a healthy subject is plotted in Fig. 5c (left), with an inset of a single step shown (second left). A well-defined peak with little oscillation is visible. Characteristic data from a pre-frail subject is shown in Fig. 5c (second right), with a step highlighted in the inset (right). This step is diminished in magnitude and exhibits a wider, more oscillatory, and unstable waveform relative to the healthy subject’s step. The inference results are obtained after adjusting the confidence threshold required for the model to predict one class versus the other, see Supplementary Fig. 11. A threshold of 70% confidence resulted in zero false positives or false negatives for either subject, highlighting the reliability of the model. On-device inference does not capture every step (approximately 70–80% are captured); however, this is sufficient over days to weeks, given that the current clinical approach assesses only the steps taken during very limited on-location continuous walk testing.
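The confidence-threshold adjustment described above can be sketched as follows. This is an illustrative numpy reimplementation, not the device firmware; the function name and the convention of leaving low-confidence steps undecided are our own assumptions:

```python
import numpy as np

def thresholded_labels(p_prefrail, threshold=0.70):
    """Label steps pre-frail (1), healthy (0), or undecided (-1).

    p_prefrail: per-step model confidence for the pre-frail class.
    A class is assigned only when its confidence meets the threshold;
    otherwise the step is left undecided (illustrative convention).
    """
    p = np.asarray(p_prefrail, dtype=float)
    labels = np.full(p.shape, -1, dtype=int)
    labels[p >= threshold] = 1           # confidently pre-frail
    labels[p <= 1.0 - threshold] = 0     # confidently healthy
    return labels

probs = [0.95, 0.10, 0.60, 0.25, 0.85]
print(thresholded_labels(probs).tolist())  # [1, 0, -1, 0, 1]
```

Raising the threshold trades coverage (more undecided steps) for reliability, which is acceptable here because the temporal analysis aggregates over many steps.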
To demonstrate feasibility of chronic operation, a ten-day extended wear experiment is performed. During the experiment, a BEAD worn by a healthy subject monitors their gait continuously, storing only timestamped step inference results in nonvolatile memory on-device until the data can be transferred off. In this typical use case, the BLE packet rate can be decreased to conserve power without affecting sensing fidelity, since all processing and inference occur on-device without BLE. System performance is likewise unaffected by the decreased packet rate, since only small packets of timestamped inference data are offloaded, and only when a logging device connects over BLE. Figure 5d plots continuous battery life from this experiment, with far-field charging shown as orange sections. Numerous healthy step predictions are logged daily, with fewer than ten false positives detected during the ten-day experiment. The experiment shows continuous high-fidelity digital gait assessment unaffected by state of charge (Supplementary Fig. 12), reliable recharging, and long-term model stability throughout, with a false positive rate of only ~8.6%. Combined, these results demonstrate that the power of per-step inference is realized through temporal ML analysis, i.e., statistical analysis of per-step inferences recorded from routine life over weeks to months. Cross-sectional analysis, i.e., statistical analysis of per-step inference results from short 60-s walks in labs or clinics once a month, is less representative and less accurate than temporal analysis46. Extracted through temporal ML analysis, the diagnostic indicator is the percentage of steps classified as frail (or pre-frail) (PSCF) relative to steps classified as healthy over the study duration. These ratios can be easily integrated into EHRs and read by a clinician at a glance, providing insight into a patient’s status.
In cases where a patient’s PSCF is not decisive, deeper analysis can generate longitudinal trend reports to enhance understanding. Since each inference result is timestamped, one can extract potentially insightful metrics such as the number of frail steps per day and explore trends by time of day, day of the week, or from day-to-day and week-to-week, providing a complete profile of when diagnosis occurs. Combining clinical history, patient lifestyle, and ML-based symptom analysis drives precision medicine and personalized care, likely improving patient outcomes and reducing healthcare cost47.
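As a sketch of the temporal analysis described above, the snippet below computes PSCF and a per-day pre-frail step count from a timestamped inference log. The log format and all values are invented for illustration; the on-device storage format is not specified here:

```python
from collections import Counter
from datetime import datetime

# Hypothetical offloaded log: (ISO timestamp, predicted class) pairs;
# 1 = step classified pre-frail, 0 = step classified healthy.
log = [
    ("2025-03-01T08:12:05", 0), ("2025-03-01T08:12:06", 1),
    ("2025-03-01T18:40:10", 0), ("2025-03-02T09:01:33", 1),
    ("2025-03-02T09:01:34", 1), ("2025-03-02T19:15:00", 0),
]

# PSCF: percentage of steps classified pre-frail over the study duration.
pscf = 100.0 * sum(c for _, c in log) / len(log)
print(f"PSCF = {pscf:.1f}%")  # PSCF = 50.0%

# Longitudinal trend: pre-frail steps per day, recoverable because
# every inference result carries a timestamp.
per_day = Counter()
for ts, c in log:
    if c == 1:
        per_day[datetime.fromisoformat(ts).date().isoformat()] += 1
print(dict(per_day))  # {'2025-03-01': 1, '2025-03-02': 2}
```

The same grouping generalizes to hour-of-day or week-over-week bins for the trend reports described in the text.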
Discussion
Habitual gait is a significant predictor of frailty status and age-related mental and physical decline3,48, but remains underexplored due to technological limitations. The current standard for diagnosing frailty syndrome is reactive, resource intensive, and diagnoses are often only made after serious injury or hospitalization1,4. As an alternative to the current standard, gait-based frailty analysis holds significant promise for modernizing frailty diagnostics and improving diagnostic accessibility, especially for patients who cannot readily meet with a clinician or receive transport to a clinic for evaluation.
Existing approaches for gait assessment utilize costly sensing equipment and techniques which are not readily adapted to home-based diagnostics, especially in communities that lack infrastructure, and current clinical-grade gait assessment devices face tradeoffs between battery life/size and inertia-induced noise, which degrades sensing fidelity. Furthermore, their limited battery capacities require consistent management of battery life, making at home diagnostics difficult or impossible.
Biosymbiotic wearable technologies with on-device ML offer substantial advantages over existing wearable devices, allowing patients in low-resource settings to receive clinical-grade frailty assessment. The soft, breathable, lightweight form factor that defines biosymbiotic architecture is adhesive-free and tailored to conform to each patient for maximum comfort, allowing for weeks-long wear without irritation. Far-field power harvesting enables continuous operation without removal for charging (which is typically required by conventional brick-and-strap wearable devices), critical for potentially frail individuals with limited mobility. Exclusively on-device inference removes the requirement for constant connectivity, which is often limited or unavailable, especially in rural communities; eliminates the need for storing large amounts of data on devices or in costly cloud infrastructure; and allows for straightforward integration into EHRs. Most importantly, this technology also removes the need for trained staff and specialized facilities, which make current diagnostics expensive and difficult to disseminate.
By reducing diagnostic barriers, these devices present compelling use cases in preventive medicine. For instance, continuous ECG monitors identify over 40% of arrhythmias only after day 3 of continuous recording49, allowing clinicians to intervene before sudden cardiac events50 in cases that would otherwise be missed by short clinical assessments. This same principle could extend to ML-integrated biosymbiotic devices for frailty assessment in at-risk individuals, potentially reducing fall-related hospitalizations through early detection and intervention. In this case, BEADs offer a significant advantage over current approaches, which cannot provide scalable frailty diagnostics, let alone continuous monitoring.
Realizing this clinical potential will require user acceptability studies and a multi-institution, multi-site clinical trial with a significantly larger, diverse cohort. Longitudinal in-home tracking of frailty progression will also be essential to create a generalizable, clinically relevant model. The limited sample size and subject-specific overfitting observed in this study highlight the need for these larger-scale validation efforts before clinical deployment. Nevertheless, the approach demonstrated here has the potential to overcome current operational, clinical, user, and payer challenges by enabling seamless diagnostics with minimal interruption of daily routine and clinical practice.
Methods
In vivo gold standard validation
All human subject studies were approved by a University of Arizona Institutional Review Board, protocol #2005648028, and were performed indoors in a hospital setting. All subjects provided both written and verbal consent to the experiments. For comparison against the gold standard/training data collection (N1 = 16), subjects aged 65 and older with no history of sensorimotor disorders or conditions (Parkinson’s disease, multiple sclerosis, stroke, recent hip, leg, or foot surgery, etc.) were recruited, and their frailty status was assessed according to the FFP and the frailty determination protocol described below. Subjects must have been able to walk for 60 s unassisted to participate in the study.
Subjects responded to multiple questionnaires, including the Patient Health Questionnaire-9 (PHQ-9), a Mini-Mental State Examination (MMSE), the Barthel Index for Activities of Daily Living (ADL), the Falls Efficacy Scale-International (FES-I), and a questionnaire asking about their average and recent levels of activity, with breakdowns by activity and frequency, their average and recent levels of exhaustion, and if they had lost ten or more pounds unintentionally within the past year.
Subjects were then asked to perform a grip strength test where they squeezed a hand dynamometer 3× with each hand. A member of the study team recorded results from each trial. Finally, for gait analysis, initial cohort subjects were outfitted with five gold standard gait analysis devices (BioSensics, LEGSys), each co-located with a biosymbiotic wearable device with a 6-axis IMU (Bosch, BMI270), on their left and right shanks, left and right thighs, and trunk. All ten devices recorded continuous accelerometer and gyroscope data. Subjects then performed a series of movement tasks commonly used in gait-based frailty assessment:
1. 60-s continuous walk: Subjects walked at their own pace, unassisted, indoors on level ground for sixty seconds. Step count was extracted from the angular velocity data of each thigh-located device (one gold-standard LEGSys device and one biosymbiotic device per thigh, four devices total). Step count extraction was automated using a MATLAB script and validated by a member of the study team; the script isolated, per device, all peaks in the angular velocity data with magnitude of at least 50 degrees per second and inter-peak spacing of at least 1 s.
2. TUG: Subjects began seated, stood up, walked around a cone in the center of the room 3 meters away, and sat back down. Their total time from start to re-seating was recorded by a member of the study team. Walking speed was determined from this test by manual analysis of the angular velocity data of the same four thigh-located devices: the distance covered during the walk (3 m from chair to cone + 3 m from cone to chair) was divided by the time from toe-off of the first step after standing up to heel-strike of the last step before turning around to sit down. These times were extracted by manual inspection of the angular velocity waveforms.
3. STS: Subjects began seated and were asked to stand up and sit back down without using their hands. This was repeated five times, with total time recorded by a member of the study team directly observing the subjects.
Four subjects did not complete grip strength testing and their frailty status could not be determined; thus, their data are not included in our analysis.
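The peak-based step-counting criteria from the continuous walk task (peaks of at least 50 degrees per second, spaced at least 1 s apart) can be mirrored in Python. This is an illustrative equivalent of the MATLAB script, not the original code, run here on a synthetic angular velocity trace:

```python
import numpy as np
from scipy.signal import find_peaks

FS = 200  # Hz, the device's gyroscope sampling rate

def count_steps(angular_velocity, fs=FS):
    """Count steps as peaks >= 50 deg/s with >= 1 s inter-peak spacing,
    mirroring the criteria described in the text."""
    peaks, _ = find_peaks(np.asarray(angular_velocity),
                          height=50.0, distance=fs)  # distance in samples
    return len(peaks)

# Synthetic 10 s trace: four swing-phase peaks ~1.5 s apart plus noise.
t = np.arange(0, 10, 1 / FS)
sig = np.zeros_like(t)
for t0 in (1.0, 2.5, 4.0, 5.5):
    sig += 120.0 * np.exp(-((t - t0) ** 2) / (2 * 0.05 ** 2))
sig += np.random.default_rng(0).normal(0, 2.0, t.shape)
print(count_steps(sig))  # 4
```

The minimum spacing suppresses double-counting of the oscillatory sub-peaks visible in pre-frail waveforms, while the magnitude threshold rejects non-gait motion.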
Frailty determination protocol
Frailty status was determined as follows: TUG, STS, and daily activity survey data collected from each subject were input into a custom Microsoft Access database preloaded with normative and threshold data derived from the National Heart, Lung, and Blood Institute’s Cardiovascular Health Study (CHS). The database automatically processed these metrics using established CHS criteria, including gait speed, grip strength, and reported activity levels, to classify subjects as healthy, pre-frail, or frail according to the FFP. None of the subjects recruited met the classification for frailty.
Wearable electronics fabrication
All wearable devices were hand assembled using externally manufactured 2-layer flexible printed circuit boards (flex PCBs). The flex PCBs were manufactured in a panelized design with the following layer stackup: a polyimide substrate (25 µm) between two layers of electroless nickel immersion gold-finished copper (18 µm/layer, 12 µm Cu + 6 µm Au each). Polyimide layers (27.5 µm/layer) above the finished copper on both sides served as solder mask. Devices were depaneled using UV (355 nm) laser ablation (LPKF, ProtoLaser U4) and cleaned by hand in isopropanol (IPA). Surface-mount components were placed and reflowed manually with temperature-stable solder paste (Chip Quik, TS391LT). After assembly, devices were cleaned once more with IPA.
Wearable mesh fabrication
Flexible mesh designs were drawn in two sizes (small/medium and large/extra-large). Mesh drawings were exported from 2D CAD (Autodesk, AutoCAD) and imported into 3D modeling software (Dassault Systèmes, Solidworks) for extrusion. Stereolithography (STL) files generated from the 3D model were imported into 3D slicing software (Bambu Lab, Bambu Studio) to generate machine code for a fused deposition modeling 3D printer (Bambu Lab, X1-C). A TPU filament (NinjaTek, NinjaFlex) was printed at 100 mm/s and 240 °C with a bed temperature of 45 °C. After printing, segmented sections of the mesh structure were joined by melting TPU material together at junctions to form the completed linear structures.
Wearable device electronics were embedded into the 3D-printed TPU mesh. After embedding, the electronics were covered in a transparent, flexible, UV-curable resin (3DMaterials, SuperFlex) dyed white (Alumilite, Resin Dye) to provide wear resistance and protect them from the external environment. The resin-covered electronics were cured with a UV lamp (24 W) for ten minutes.
Wearable device power consumption characterization
A BEAD was programmed with firmware to perform on-device inference. The device was worn on the thigh of a member of the study team. Using an STLINK-V3PWR and STM32CubeMonitor-Power, the BEAD was powered on, with the STLINK sampling current consumption at 10 kHz with Vout at 1.9 V.
After initial boot, the BEAD entered sleep mode. The study team member then took steps in place until the onboard IMU’s step detection triggered the µC to begin step data collection, which took place over ten seconds. The device autonomously performed inference and then returned to sleep mode. Data collection was terminated.
The BEAD was then reflashed with firmware that transmitted raw angular velocity data over BLE instead of performing on-device inference. The setup was performed as above, with the study team member additionally initiating a connection with the BEAD using an iPhone 14 Pro Max and LightBlue app before stepping in place to trigger data collection. The BEAD transmitted step data off device before returning to sleep. Data collection was terminated.
Training dataset creation
Angular velocity data from devices placed on the upper thigh were isolated from each original cohort subject’s continuous 60-s walk dataset. Subject data were separated into individual steps and normalized using an automated MATLAB (r2024b) script, with all single steps aggregated into the training dataset (n = 2057 steps). The individual step snippets were manually reviewed to remove any erroneously tagged peaks or step boundary detection errors. The dataset was split 70–30 for training and validation. Labels were generated per subject according to their FFP status determined from the in vivo studies.
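A minimal sketch of the per-step normalization and 70–30 split described above, assuming simple min-max scaling and a random shuffle (the actual MATLAB pipeline may differ in details such as stratified sampling):

```python
import numpy as np

def normalize_step(step):
    """Min-max normalize a single-step snippet to [0, 1] in magnitude.
    Time is implicitly normalized by the fixed snippet length."""
    step = np.asarray(step, dtype=float)
    lo, hi = step.min(), step.max()
    return (step - lo) / (hi - lo)

def split_70_30(X, y, seed=0):
    """Shuffle and split a step dataset 70-30 into train/validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(0.7 * len(X))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]

# Toy stand-in for the step dataset: 10 snippets of 218 samples each.
raw = np.random.default_rng(1).normal(size=(10, 218))
steps = np.array([normalize_step(s) for s in raw])
labels = np.array([0] * 7 + [1] * 3)   # 0 = healthy, 1 = pre-frail
Xtr, ytr, Xva, yva = split_70_30(steps, labels)
print(Xtr.shape, Xva.shape)  # (7, 218) (3, 218)
```

The 218-sample snippet length matches the on-device step window described in the Methods.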
Model creation and deployment
Single-step angular velocity data were isolated from continuous walk segments from 12/16 subjects in the initial cohort (4 subjects did not complete the grip strength evaluation, so their frailty status could not be determined). The single-step dataset was normalized in time and magnitude and split 70–30 into training and validation sets.
Using Keras 3.11.3 and Python version 3.10.0, a MINIROCKET transform was fitted to the training data. The biases and kernels for the transformer (k = 420 kernels, 16 max dilations per kernel) were exported as arrays and hardcoded into the wearable device’s firmware. A time-series RF classifier was trained off-device on a desktop workstation using the transformed training dataset and validated using the transformed test set. During training, fivefold stratified cross-validation was employed and model parameters adjusted accordingly to mitigate model overfitting on training data and improve generalizability. The trained model was quantized using the open-source artificial neural network library Keras and converted into Open Neural Network Exchange (version 1.17.0) format, floating-point layers were compressed/optimized using STMicroelectronics’ X-Cube-AI version 9.0.0 toolset, and the model was deployed onto a microcontroller. To mitigate the effect of class imbalance on the model, balanced class weights and randomly shuffled, stratified sampling of the training dataset were used during model training. Class weights were set to be inversely proportional to the frequency of each class in the training set. This imposed a greater penalty on misclassification of the minority class (pre-frail) and encouraged better recall and F1-score.
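The inverse-frequency class weighting described above can be illustrated as follows. The MINIROCKET-transformed features are stubbed with random arrays of illustrative width, and the counts are invented; scikit-learn's class_weight='balanced' computes the same weights internally:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for MINIROCKET-transformed step features (shape illustrative:
# 200 steps, one feature per kernel of the 420-kernel transform).
X = rng.normal(size=(200, 420))
y = np.array([0] * 160 + [1] * 40)   # imbalanced: healthy vs pre-frail

# Inverse-frequency class weights: w_c = n_samples / (n_classes * n_c),
# penalizing misclassification of the minority (pre-frail) class more.
classes, counts = np.unique(y, return_counts=True)
weights = {int(c): len(y) / (len(classes) * int(n))
           for c, n in zip(classes, counts)}
print(weights)  # {0: 0.625, 1: 2.5}

# class_weight='balanced' applies the same formula during tree training.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0).fit(X, y)
print(clf.score(X, y))
```

With weights 0.625 vs 2.5, each pre-frail misclassification costs four times as much as a healthy one, matching the 160:40 imbalance.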
Optimization of MINIROCKET for resource constrained devices
Using Python version 3.10.0, default parameters of a MINIROCKET transform were adjusted to minimize compute time and RAM and flash memory footprint. Predictive performance was evaluated using randomly shuffled training and test sets as during model creation. Final parameters (420 kernels, 16 max dilations per kernel) were chosen to maintain average accuracy above 90% across shuffled test sets, ensuring high generalizability while mitigating overfitting to the training set.
On-device inference
When the BEAD’s onboard IMU recognizes walking/running activity (through its proprietary embedded decision tree model analyzing its accelerometer signals), the IMU sends a hardware-level interrupt to the onboard microcontroller to wake it up. The microcontroller then enables the IMU’s gyroscope and buffers ten seconds of 16-bit angular velocity data sampled at 200 Hz.
Individual step isolation is then performed by finding peaks in the angular velocity buffer (local maxima with at least 50 degrees per second angular velocity and ~1.1 s inter-peak spacing, parameters determined from the training dataset) and copying 218-sample step snippets centered at the index of each peak to a new buffer.
The sample times and angular velocity from a step are normalized to [0, 1] and transformed using the minimized, embedded MINIROCKET to extract features. The features are then passed to the on-device random forest classifier, with the output class timestamped and stored in on-device non-volatile memory. All steps detected in the buffer are analyzed sequentially. During this time, the IMU’s gyroscope remains active and continues to collect samples. Once the microcontroller has processed all steps, it retrieves any samples from the gyroscope and continues feature extraction and inference until the IMU detects that the wearer has stopped moving. The microcontroller then performs any last inferences, disables the IMU’s gyroscope to save power, and enters a low power sleep until motion is detected again.
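The fixed-length snippet copying from the step-isolation stage can be sketched in Python. The firmware itself runs on the microcontroller in C; this numpy version, including the choice to skip peaks too close to the buffer edges, is an illustrative assumption:

```python
import numpy as np

SNIPPET = 218  # samples per step snippet, per the firmware description

def extract_snippets(buffer, peak_indices, length=SNIPPET):
    """Copy fixed-length snippets centered on each detected peak,
    skipping peaks too close to the buffer edges for a full window."""
    half = length // 2
    snippets = []
    for p in peak_indices:
        start, stop = p - half, p - half + length
        if start >= 0 and stop <= len(buffer):
            snippets.append(np.asarray(buffer[start:stop]))
    return np.array(snippets)

buf = np.arange(2000, dtype=float)            # 10 s buffer at 200 Hz
snips = extract_snippets(buf, [40, 500, 1200, 1990])
print(snips.shape)  # (2, 218): edge peaks at 40 and 1990 are skipped
```

Each retained snippet is centered on its peak, so the subsequent normalization and MINIROCKET transform see a consistently aligned window.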
Walking speed versus current consumption
Current consumption by a BEAD was monitored using a source measurement unit (SMU) (STMicroelectronics, STLINK-V3PWR) and STM32CubeMonitor-Power software version 1.2.1. The output voltage of the SMU was adjusted to 1.9 V, the 1.8 V operating voltage of the BEAD plus overhead for the dropout voltage of the LDO (low-dropout regulator). The SMU recorded sourced current at a 50 kHz sampling rate. A member of the study team wore a BEAD above the knee and stood adjacent to a treadmill (ASUNA, 8730). The treadmill speed was set to 0.6 mph, and the study team member stepped onto the treadmill and walked at speed for 60 s. The team member then stepped off the treadmill for ten seconds while the speed was adjusted to 1.2 mph, then stepped back on and walked at speed for 60 s. This procedure was repeated for 1.8, 2.4, and 3.0 mph.
On-device inference was triggered automatically by the BEAD IMU’s activity detection, with results logged to on-device non-volatile memory. For off-device inference, the same setup was used with a BLE connection established before the initial 0.6 mph test. During off-device inference measurements, the BEAD transmitted continuous raw angular velocity data off-device to a laptop.
IMU data fidelity versus system voltage
The same SMU, BEAD, and treadmill setup was used as during the walking speed versus current consumption test. A member of the study team wore the BEAD above the knee while walking on the treadmill at 1.8 mph. Continuous gait data rate was measured while the output voltage of the SMU powering the wearable was adjusted to 1.8, 3.0, 3.3, and 3.6 V.
In vivo device validation
A second cohort of subjects (N2 = 14) was recruited for in vivo device validation, subject to the same selection criteria and methodology as the gold standard validation cohort. All subjects provided both written and verbal consent to the experiments. Subjects in the second cohort wore only one device, on the thigh of their preference. The device performed continuous angular velocity recording and on-device step isolation/inference. Data and inference results collected during the 60 s continuous walk test were isolated and used to generate the plots in Fig. 5c.
Ten-day continuous wear trials
All on-body device characterization experiments were approved by a University of Arizona Institutional Review Board, study #STUDY00005692. Subjects provided both written and verbal consent to the experiments. Using thigh measurements taken from a member of the study team, a BEAD was assembled and flashed with firmware for on-device inference. The BLE advertising interval was modified from the default 400 ms to 10,024 ms to conserve power, since regular BLE connections are not necessary. The device was worn on the thigh by a member of the study team for ten days.
During the test, the device remained in a low power mode until the IMU accelerometer detected steps. The IMU then raised a hardware interrupt to wake the µC. The µC then enabled the IMU’s gyroscope and recorded angular velocity data for ten seconds while the subject took steps, sleeping between data ready interrupts from the IMU. After ten seconds, the data was processed to identify and extract steps, which were transformed using MINIROCKET and fed to the on-device RF classifier. Inference results and timestamps were stored in nonvolatile memory on the device. If the wearer remained in motion, data collection and processing continued, otherwise, the device returned to low power mode.
A commercial 915 MHz power caster (Powercast, TX91501B) was placed under the study team member’s office desk. The wearable was recharged during the work week while the team member was at their desk. The wearable device was not recharged during the weekend (days 3 and 4) or when the team member was out of office (day 6). No step predictions were logged on day 3, as the wearer performed a vigorous non-running sport during most of the day and was more sedentary in the evening. Aggregated inference data were offloaded over BLE at regular intervals. The BEAD battery voltage was also measured at these times with a multimeter (AstroAI, AM33B), timestamped, and recorded.
Statistical analysis
All statistical analyses were carried out in MATLAB (R2024b). Gold standard validation used continuous gait data from co-located biosymbiotic devices and gold standard LEGSys devices during a single session per subject (N1 = 16) in a medical setting. Gait metrics were derived for each subject and device type. An ordinary unpaired two-tailed t-test was performed to determine statistical significance between derived gait metrics from each device type. Effect size was calculated using ordinary Cohen’s d (mean difference between the two groups divided by the pooled standard deviation). All bar charts are shown as mean +/− standard deviation. Statistical significance is denoted as *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.
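The unpaired two-tailed t-test and ordinary Cohen's d described above can be reproduced with scipy; the analysis was performed in MATLAB, and the two groups below are invented illustrative values, not study data:

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Ordinary Cohen's d: mean difference over the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Illustrative gait metric (e.g., step count) from the two device types.
bead   = [58, 60, 59, 61, 60, 58]
legsys = [59, 60, 60, 61, 59, 58]

t, p = stats.ttest_ind(bead, legsys)   # ordinary unpaired two-tailed t-test
print(round(cohens_d(bead, legsys), 3), p > 0.05)  # -0.147 True
```

A small |d| with p > 0.05, as in this toy example, is the pattern one would expect when the biosymbiotic and gold standard devices agree.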
Ethics
Every experiment involving animals, human participants, or clinical samples has been carried out following a protocol approved by an ethical commission. Each participant gave informed written consent. Consent to publish personally identifiable information was not requested from participants, and no such information has been shared. Participants were compensated $25 in cash at the end of their study session.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Supplementary information
Description of Additional Supplementary Files
Supplementary Movie 1. Real-time in vivo healthy and prefrail gait monitoring.
Acknowledgements
We thank Woody March-Steinman at the University of Arizona Department of Applied Mathematics, Professor Marat I. Latypov at the University of Arizona Materials Science & Engineering Department, Gustavo Almeida and Qusai Almustafa at the University of Arizona Sensor Lab, and Professor Nima Toosizadeh for their expertise. This work was supported by the Technology and Research Initiative Fund (TRIF) (P.G.), by the Flinn Foundation (P.G.), and by the NIH Infection and Inflammation as Drivers of Aging (IIDA) training grant 1T32AG058503-04 (K.K.).
Author contributions
Conceptualization & Methodologies: T.S., K.A.K., P.G. Software/Firmware: K.A.K. Hardware design: T.S., K.A.K., P.G. Investigation: T.S., R.T., K.A.K., J.K. In vivo trials: T.S., R.T., K.A.K., A.B. Visualization: T.S., R.T., K.A.K., J.K. Funding: P.G. Administration & Supervision: P.G. Writing—original draft: All authors. Writing—review & editing: K.A.K., P.G. Data analysis: All authors contributed.
Peer review
Peer review information
Nature Communications thanks Hubin Zhao and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Data availability
All data supporting the findings of this study are available within the article and its supplementary files. Any additional requests for information can be directed to and will be fulfilled by the corresponding author. Datasets used to generate the figures (with relevant code) are available on ReData (https://redata.arizona.edu/), the University of Arizona’s official public data archive: 10.25422/azu.data.29614193.
Code availability
All custom data collection code and data processing code (CC BY 4.0 License) will be available with datasets (where relevant) and without access restriction on ReData (https://redata.arizona.edu/), the University of Arizona’s official public data archive: 10.25422/azu.data.29614193.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
The online version contains supplementary material available at 10.1038/s41467-025-67728-y.
References
- 1. Fried, L. P. et al. Frailty in older adults: evidence for a phenotype. J. Gerontol. A Biol. Sci. Med. Sci. 56, M146–M157 (2001).
- 2. Chen, X., Mao, G. & Leng, S. X. Frailty syndrome: an overview. Clin. Interv. Aging 9, 433–441 (2014).
- 3. Schwenk, M. et al. Frailty and technology: a systematic review of gait analysis in those with frailty. Gerontology 60, 79–89 (2014).
- 4. Landré, B. et al. Association between hospitalization and change of frailty status in the Gazel cohort. J. Nutr. Health Aging 23, 466–473 (2019).
- 5. Ritt, M. et al. High-technology based gait assessment in frail people: associations between spatio-temporal and three-dimensional gait characteristics with frailty status across four different frailty measures. J. Nutr. Health Aging 21, 346–353 (2017).
- 6. Salchow-Hömmen, C. et al. Review—emerging portable technologies for gait analysis in neurological disorders. Front. Hum. Neurosci. 16, 768575 (2022).
- 7. Vavasour, G., Giggins, O. M., Doyle, J. & Kelly, D. How wearable sensors have been utilised to evaluate frailty in older adults: a systematic review. J. NeuroEngineering Rehabil. 18, 112 (2021).
- 8. Thiede, R. et al. Gait and balance assessments as early indicators of frailty in patients with known peripheral artery disease. Clin. Biomech. 32, 1–7 (2016).
- 9. Zhou, Z. et al. Edge intelligence: paving the last mile of artificial intelligence with edge computing. Proc. IEEE 107, 1738–1762 (2019).
- 10. Stuart, T., Hanna, J. & Gutruf, P. Wearable devices for continuous monitoring of biosignals: challenges and opportunities. APL Bioeng. 6, 021502 (2022).
- 11. Hommen, J. M. et al. Movement patterns during gait initiation in older adults with various stages of frailty: a biomechanical analysis. Eur. Rev. Aging Phys. Act. 21, 1 (2024).
- 12. Lien, W.-C. et al. Inertial sensor-based gait classification for frailty status in older adults: a cross-sectional study. Comput. Struct. Biotechnol. J. 28, 199–210 (2025).
- 13. Rajula, H. S. R., Verlato, G., Manchia, M., Antonucci, N. & Fanos, V. Comparison of conventional statistical methods with machine learning in medicine: diagnosis, drug development, and treatment. Medicina 56, 455 (2020).
- 14. Korteling, J. E. (Hans), Van De Boer-Visschedijk, G. C., Blankendaal, R. A. M., Boonekamp, R. C. & Eikelboom, A. R. Human- versus artificial intelligence. Front. Artif. Intell. 4, 622364 (2021).
- 15. Stuart, T. et al. Biosymbiotic, personalized, and digitally manufactured wireless devices for indefinite collection of high-fidelity biosignals. Sci. Adv. 7, eabj3269 (2021).
- 16. Stuart, T. et al. Context-aware electromagnetic design for continuously wearable biosymbiotic devices. Biosens. Bioelectron. 228, 115218 (2023).
- 17. Stuart, T. et al. Biosymbiotic platform for chronic long-range monitoring of biosignals in limited resource settings. Proc. Natl. Acad. Sci. USA 120, 2017 (2023).
- 18. Tyree, A. et al. Biosymbiotic haptic feedback - sustained long term human machine interfaces. Biosens. Bioelectron. 261, 116432 (2024).
- 19. Clausen, D. et al. Chronic biosymbiotic electrophysiology. Adv. Funct. Mater. 35, 2407086 (2025).
- 20. Kasper, K. A. et al. Continuous operation of battery-free implants enables advanced fracture recovery monitoring. Sci. Adv. 10.1126/sciadv.adt7488 (2025).
- 21. Aminian, K., Najafi, B., Büla, C., Leyvraz, P.-F. & Robert, P. Spatio-temporal parameters of gait measured by an ambulatory system using miniature gyroscopes. J. Biomech. 35, 689–699 (2002).
- 22. Gutruf, P. Towards a digitally connected body for holistic and continuous health insight. Commun. Mater. 5, 1–6 (2024).
- 23. Bhatia, A., Kasper, K. A. & Gutruf, P. Continuous biosignal acquisition beyond the limit of epidermal turnover. Mater. Horiz. 10.1039/d5mh00758e (2025).
- 24. Shukla, B., Bassement, J., Vijay, V., Yadav, S. & Hewson, D. Instrumented analysis of the sit-to-stand movement for geriatric screening: a systematic review. Bioengineering 7, 139 (2020).
- 25. Savva, G. M. et al. Using timed up-and-go to identify frail members of the older population. J. Gerontol. A Biol. Sci. Med. Sci. 68, 441–446 (2013).
- 26. Greene, B. R., Doheny, E. P., O’Halloran, A. & Anne Kenny, R. Frailty status can be accurately assessed using inertial sensors and the TUG test. Age Ageing 43, 406–411 (2014).
- 27. Lin, S. et al. A review of gait analysis using gyroscopes and inertial measurement units. Sensors 25, 3481 (2025).
- 28. Bhatia, A. et al. Wireless battery-free and fully implantable organ interfaces. Chem. Rev. 10.1021/acs.chemrev.3c00425 (2024).
- 29. Lu, T. et al. Biocompatible and long-term monitoring strategies of wearable, ingestible and implantable biosensors: reform the next generation healthcare. Sensors 23, 2991 (2023).
- 30. Chan, L. L. Y., Choi, T. C. M., Lord, S. R. & Brodie, M. A. Development and large-scale validation of the watch walk wrist-worn digital gait biomarkers. Sci. Rep. 12, 16211 (2022).
- 31. Zhang, Y. et al. Can wearable devices and machine learning techniques be used for recognizing and segmenting modified physical performance test items? IEEE Trans. Neural Syst. Rehabil. Eng. 30, 1776–1785 (2022).
- 32. Minici, D. et al. Towards automated assessment of frailty status using a wrist-worn device. IEEE J. Biomed. Health Inform. 26, 1013–1022 (2022).
- 33. García-Villamil, G., Neira-Álvarez, M., Huertas-Hoyas, E., Ramón-Jiménez, A. & Rodríguez-Sánchez, C. A pilot study to validate a wearable inertial sensor for gait assessment in older adults with falls. Sensors 21, 4334 (2021).
- 34. Razjouyan, J. et al. Wearable sensors and the assessment of frailty among vulnerable older adults: an observational cohort study. Sensors 18, 1336 (2018).
- 35. Schwenk, M. et al. Wearable sensor-based in-home assessment of gait, balance, and physical activity for discrimination of frailty status: baseline results of the Arizona Frailty Cohort Study. Gerontology 61, 258–267 (2015).
- 36.Homes, R. et al. Comparison of a wearable accelerometer/gyroscopic, portable gait analysis system (LEGSYS+TM) to the laboratory standard of static motion capture camera analysis. Sensors23, 537 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Dempster, A., Petitjean, F. & Webb, G. I. ROCKET: exceptionally fast and accurate time series classification using random convolutional kernels. Data Min. Knowl. Discov.34, 1454–1495 (2020). [Google Scholar]
- 38.Dempster, A., Schmidt, D. F. & Webb, G. I. MINIROCKET: a very fast (Almost) deterministic transform for time series classification. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 248–257 10.1145/3447548.3467231 (2021).
- 39.U.S. Census Bureau. Tucson, AZ - Census Profile data. Census Reporterhttps://censusreporter.org/profiles/16000US0477000-tucson-az/ (2023).
- 40.Bandeen-Roche, K. et al. Frailty in older adults: a nationally representative profile in the United States. J. Gerontol. A. Biol. Sci. Med. Sci.70, 1427–1434 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Kim, D. H. & Rockwood, K. Frailty in older adults. N. Engl. J. Med.391, 538–548 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Idnay, B. et al. Uncovering key clinical trial features influencing recruitment. J. Clin. Transl. Sci.7, e199 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Basu, S. B. Benchmarking recruitment rates for phase III trials. Nat. Rev. Drug Discov.23, 887–888 (2024). [DOI] [PubMed] [Google Scholar]
- 44.Liu, C. K. & Fielding, R. A. Exercise as an intervention for frailty. Clin. Geriatr. Med.27, 101–110 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Guan, Y. et al. Effectiveness of interventions to improve frailty among community-dwelled older adults: a systematic review. Arch. Gerontol. Geriatr. 10.1016/j.archger.2025.105946 (2025). [DOI] [PubMed]
- 46.Li, Q., Campan, A., Ren, A. & Eid, W. E. Automating and improving cardiovascular disease prediction using machine learning and EMR data features from a regional healthcare system. Int. J. Med. Inf.163, 104786 (2022). [DOI] [PubMed] [Google Scholar]
- 47.Johnson, K. B. et al. Precision medicine, AI, and the future of personalized health care. Clin. Transl. Sci.14, 86–93 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Bortone, I. et al. How gait influences frailty models and health-related outcomes in clinical-based and population-based studies: a systematic review. J. Cachexia Sarcopenia Muscle12, 274–297 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Lim, Y. L. et al. Seven-day Holter monitoring detects more significant arrhythmias than 24-hour and 3-day monitoring. Eur. Heart J.44, ehac779.014 (2023). [Google Scholar]
- 50.Galli, A. et al. Holter monitoring and loop recorders: from research to clinical practice. Arrhythmia Electrophysiol. Rev.5, 136 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
Associated Data
Supplementary Materials
Description of Additional Supplementary Files
Supplementary Movie 1. Real-time in vivo healthy and prefrail gait monitoring.
Data Availability Statement
All data supporting the findings of this study are available within the article and its supplementary files. Any additional requests for information can be directed to and will be fulfilled by the corresponding author. Datasets used to generate the figures (with relevant code) are available on ReData (https://redata.arizona.edu/), the University of Arizona’s official public data archive: 10.25422/azu.data.29614193.
All custom data collection code and data processing code (CC BY 4.0 License) will be available with datasets (where relevant) and without access restriction on ReData (https://redata.arizona.edu/), the University of Arizona’s official public data archive: 10.25422/azu.data.29614193.