Abstract
Background
The COVID-19 pandemic highlighted the need for early detection of viral infections in symptomatic and asymptomatic individuals to allow for timely clinical management and public health interventions.
Methods
Twenty healthy adults were challenged with an influenza A (H3N2) virus and prospectively monitored from 7 days before through 10 days after inoculation, using wearable electrocardiogram and physical activity sensors. This framework allowed for responses to be accurately referenced to the infection event. For each participant, we trained a semisupervised multivariable anomaly detection model on data acquired before inoculation and used it to classify the postinoculation dataset.
Results
Inoculation with this challenge virus was well tolerated, with an infection rate of 85%. With the model classification threshold set so that no alarms were triggered during the 170 recorded healthy days, the algorithm correctly identified 16 of 17 (94%) positive presymptomatic and asymptomatic individuals, on average 58 hours after inoculation and 23 hours before symptom onset.
Conclusions
The data processing and modeling methodology show promise for the early detection of respiratory illness. The detection algorithm is compatible with data collected from smartwatches using optical techniques but needs to be validated in large heterogeneous cohorts in normal living conditions.
Clinical Trials Registration. NCT04204493.
Keywords: heart rate monitoring, heart rate variability, wearable sensors, ECG, viral respiratory infection, influenza, COVID-19
In this human-challenge study, participants were monitored using wearable electrocardiogram sensors integrated with accelerometers. A semisupervised machine learning algorithm detected the infection in both symptomatic and asymptomatic individuals, on average 23 hours before the onset of symptoms.
The coronavirus disease 2019 (COVID-19) pandemic highlights the need for early detection of viral respiratory infections. When infected persons are alerted before symptoms manifest, they can pursue timely diagnosis and treatment (particularly important with antivirals where earlier treatment is associated with better outcomes) and take precautions to limit the disease outbreak. Commercially available wearable sensors (wearables) have been used to monitor individuals under normal living conditions, providing information about their health and behavior [1–5]. Only recently have such sensors been employed in detecting respiratory infections such as COVID-19, influenza, and other influenza-like illnesses [6–15], providing a noninvasive method to complement blood-based gene-expression assays [16, 17].
Retrospective studies used data from Fitbit devices to evaluate population trends of seasonal influenza-like illnesses including COVID-19 [6, 7]. The investigations, subsequently replicated with Huami smart watches [8], showed an association between the number of individuals who displayed a significant increase in daily resting heart rate (HR) and the officially reported influenza-like illness infection rates. Average HR data acquired by wearables were also used in a study aimed at detecting COVID-19 in real time. Using metrics provided through the Garmin Connect app, the investigators showed that 15 of 24 (63%) COVID-19 cases could have been detected before symptom onset [9]. Working with a much larger cohort, other researchers developed a predictive model for COVID-19 infection based on HR and heart rate variability (HRV) metrics derived from data acquired by Fitbit devices [14]. The model correctly identified 15% of 1257 monitored symptomatic cases before symptom onset and 72% by the third day of symptoms [14]. Similar performance was achieved by a model based on respiration rates derived from the WHOOP sensor and system [11].
In contrast to the work referenced above, our influenza-focused study used a human challenge framework [16, 18, 19], where the immune response is monitored on a timeline referenced to the infection event rather than the onset of symptoms. This enables one to investigate asymptomatic presentations, which field studies are usually unable to identify. In place of optical HR monitoring sensors that are commonly found in fitness trackers or smartwatches, we employed wearable electrocardiogram (ECG) sensors to inform initial algorithm development [20]. Although optical sensors can provide data concordant with ECG, they are sensitive to wrist movement and skin tone differences that can introduce variation among healthy subjects [21, 22].
In the present study, we developed an end-to-end data preprocessing and HR and HRV feature extraction and standardization methodology. We used the standardized metrics as inputs to semisupervised machine learning algorithms developed to detect the viral respiratory infection in presymptomatic and asymptomatic individuals.
METHODS
Study Enrollment
The protocol was approved by the London-Fulham Research Ethics Committee and the Health Research Authority of the United Kingdom (reference 19/LO/1441, clinical trial NCT04204993). Healthy persons aged 18–55 were eligible. Exclusion criteria included chronic respiratory disease, recent upper respiratory infection, immune deficiency, pregnancy, and close domestic contact with high-risk populations. After informed consent was obtained, individuals were prescreened by microneutralization assay to ensure that they did not already possess high levels of antibodies against the influenza A strain. Individuals meeting prescreening criteria were evaluated in a screening visit that included a review of medical history, lung function tests, a chest X-ray, an ECG assessment, and blood tests to check for underlying illness. Twenty individuals passed all screening criteria and completed the study.
Virus Inoculation and Symptom Reporting
Participants checked into the quarantine unit 1 day before inoculation and remained in confinement for 10 days after inoculation. All participants were inoculated with influenza A/Belgium/4217/2015 (H3N2) at a dose of 5 × 10⁵ 50% tissue culture infectious dose (TCID₅₀) in a volume of 0.5 mL, administered by drops divided between the nostrils between 8 am and 10 am on their second day in the unit. To assess safety, data on adverse events and serious adverse events that occurred or worsened during the 28 days postinoculation were collected, and their causal relationship to the challenge virus was assessed. Expected symptoms were not deemed adverse events unless protracted and severe, or at the discretion of the study clinicians and Chief Investigator, according to protocol-defined clinical severity guidelines.
While in confinement, participants recorded their symptoms twice daily according to the following scale: 0 = absent, 1 = mild, 2 = moderate, 3 = severe. Based on the Jackson symptom scoring system [23], 8 symptoms were scored: nasal obstruction, nasal discharge, sore throat, sneezing, cough, malaise, headache, and chills. An individual was identified as symptomatic according to modified Jackson criteria [23, 24] if the following were present:
A cumulative clinical symptom score of 6 or greater over a 6-day period AND
Nasal discharge present on 3 or more days of the 6-day period after viral inoculation OR a subjective impression of having a cold or flu.
For each symptomatic individual, we identified the onset of symptoms as the day during which total symptom score exceeded 6 for the first time.
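For illustration only, the classification rule above can be written as a short decision function. This is a sketch of the criteria as stated in the text, not the study's scoring code, and the input names are hypothetical.

```python
def is_symptomatic(daily_totals, nasal_discharge_days, reported_cold_or_flu):
    """Modified Jackson criteria as described above (illustrative sketch).

    daily_totals: total symptom scores for the 6-day period after inoculation.
    nasal_discharge_days: number of those days on which nasal discharge was recorded.
    reported_cold_or_flu: participant's subjective impression of a cold or flu.
    """
    cumulative_score = sum(daily_totals)
    return cumulative_score >= 6 and (nasal_discharge_days >= 3 or reported_cold_or_flu)


def symptom_onset_day(daily_totals):
    """Per the text, onset is the first day on which the total symptom score exceeded 6."""
    for day, total in enumerate(daily_totals):
        if total > 6:
            return day
    return None  # no onset identified
```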
Laboratory Testing
Nasal and blood samples were obtained up to twice a day during quarantine, and diagnostic polymerase chain reaction (PCR) tests were performed on nasal lavage to determine if the inoculation produced an infection. We required 2 positive tests beginning at least 24 hours after inoculation.
Data Acquisition
The study participants were monitored with Bittium Faros 180 devices (Bittium Corporation), each consisting of a single-lead ECG sensor (250 Hz sampling frequency) and a 3-axis accelerometer (25 Hz sampling frequency) [25]. Devices were attached using 2 disposable ECG electrodes spanning the heart, according to the manufacturer’s instructions, which were provided to the participants. Acquired ECG and accelerometer data were stored in European Data Format in the sensor’s memory and downloaded to a PC throughout the monitoring period. Participants began wearing the sensor 7 days before inoculation and removed the device to shower (or for other activities, such as swimming, during which the device could get wet) and for charging. We did not collect feedback related to device usability or comfort.
Data Preprocessing and Feature Extraction
ECG signals were processed in 5-minute epochs at 1-minute steps using Kubios HRV software. For each 5-minute epoch, we calculated the average interbeat interval (IBI) and frequency-domain HRV metrics: high-frequency (HF) power, low-frequency (LF) power, and the LF/HF ratio [26, 27] (Table 1). Frequency-domain measures of HRV collectively capture the balance of the sympathetic and parasympathetic branches of the autonomic nervous system [28]. Data processing settings in the Kubios software are provided in the Supplementary Material. We reproduced the HRV metric extraction using open-source Python scripts obtained from GitHub [29].
Table 1.
IBI, HRV, and Activity Metrics Used in the Present Study
Metric | Unit | Definition |
---|---|---|
IBI | s | Interval between R peaks in consecutive QRS complexes |
LF | … | Log of low-frequency power (0.04–0.15 Hz) |
HF | … | Log of high-frequency power (0.15–0.40 Hz) |
LF/HF | % | Ratio of LF to HF |
A | g | √(x² + y² + z²), where x, y, and z are the filtered values of acceleration along the x, y, and z axes |
Abbreviations: A, activity; HF, high frequency; HRV, heart rate variability; IBI, interbeat interval; LF, low frequency.
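As an illustration of the feature extraction described above, the sketch below computes the Table 1 HRV metrics for a single 5-minute epoch of R-peak times. It is not the Kubios or hrv-analysis pipeline used in the study; it substitutes SciPy's Welch periodogram with cubic resampling, and the 4-Hz resampling rate and segment length are assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch


def hrv_features(rpeak_times_s, fs_interp=4.0):
    """Mean IBI and frequency-domain HRV features for one 5-minute epoch.

    rpeak_times_s: R-peak times in seconds within the epoch.
    Returns mean IBI, log LF and HF band powers, and the LF/HF ratio.
    """
    ibi = np.diff(rpeak_times_s)                  # interbeat intervals (s)
    t = rpeak_times_s[1:]                         # time stamp of each interval
    mean_ibi = float(ibi.mean())

    # Resample the unevenly spaced IBI series onto a uniform grid (assumed 4 Hz).
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs_interp)
    ibi_uniform = interp1d(t, ibi, kind="cubic")(t_uniform)

    # Welch power spectral density of the detrended IBI series.
    f, pxx = welch(ibi_uniform - ibi_uniform.mean(), fs=fs_interp, nperseg=256)

    def band_power(lo, hi):
        band = (f >= lo) & (f < hi)
        return np.trapz(pxx[band], f[band])

    lf = band_power(0.04, 0.15)                   # low-frequency power
    hf = band_power(0.15, 0.40)                   # high-frequency power
    return {"IBI": mean_ibi, "LF": np.log(lf), "HF": np.log(hf), "LF/HF": lf / hf}
```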
We wrote MATLAB (MathWorks, Inc.) scripts to align the accelerometer and HRV data. The accelerometer data were filtered to the range of human movement (0.25–7 Hz), and an activity metric A was computed on 1-second intervals as the square root of the sum of squares of the 3 individual axes of acceleration. The average activity was calculated over the same 5-minute window as was used in the HRV analysis (Table 1). To mark sleep periods, we first considered a period to correspond to rest if the activity level was ≤0.3 g for at least 10 minutes. All rest segments separated by less than 15 minutes were then merged, and merged rest segments lasting at least 75 minutes were marked as sleep, regardless of time of day, to account for the possibility of napping while in the clinic.
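A minimal sketch of this accelerometer processing follows (the study used MATLAB scripts; the per-minute resolution assumed for the sleep-marking step and the filter order are our choices, while the thresholds come from the text).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt


def activity_per_second(acc_xyz, fs=25.0):
    """Activity metric A: band-pass filter each axis to 0.25-7 Hz, take the
    vector magnitude sqrt(x^2 + y^2 + z^2), and average in 1-second blocks.
    acc_xyz: array of shape (n_samples, 3) in units of g."""
    sos = butter(4, [0.25, 7.0], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, acc_xyz, axis=0)
    magnitude = np.sqrt((filtered ** 2).sum(axis=1))
    n_sec = int(len(magnitude) // fs)
    return magnitude[: n_sec * int(fs)].reshape(n_sec, int(fs)).mean(axis=1)


def sleep_mask(activity_per_min, rest_thresh=0.3, min_rest=10, max_gap=15, min_sleep=75):
    """Mark sleep from per-minute activity: rest = activity <= 0.3 g for >= 10 min;
    rest segments separated by < 15 min are merged; merged segments lasting
    >= 75 min are labeled sleep."""
    rest = np.asarray(activity_per_min) <= rest_thresh
    segments, start = [], None
    for i, is_rest in enumerate(np.append(rest, False)):  # trailing False closes a final run
        if is_rest and start is None:
            start = i
        elif not is_rest and start is not None:
            if i - start >= min_rest:
                segments.append([start, i])
            start = None
    merged = []
    for seg in segments:
        if merged and seg[0] - merged[-1][1] < max_gap:
            merged[-1][1] = seg[1]                        # merge with the previous segment
        else:
            merged.append(seg)
    mask = np.zeros(len(activity_per_min), dtype=bool)
    for s, e in merged:
        if e - s >= min_sleep:
            mask[s:e] = True
    return mask
```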
Data Analysis and Modeling
Average IBI and frequency-domain HRV features were standardized for each individual, using the time period before inoculation as a reference dataset. Before standardization, all data points (and the following data point) where A exceeded a high activity threshold (eg, during intentional exercise) were removed (1.86% of points/participant on average; range, 0.04%–3.88%). Additionally, all data points where the activity level exceeded a threshold indicating night awakenings (0.7% of points/participant on average; range, 0.2%–2.39%) were removed. We calculated z-scores for each individual for each observation i and metric j using the formula

zij = (Xij − μj(A)) / σj(A),

where Xij is observation i of metric j at activity level A, and μj(A) and σj(A) are, respectively, the mean and standard deviation of Xij calculated from reference-dataset points that represented the same “sleep” state (ie, asleep vs awake) and were within ±0.20 of the current log-transformed activity level. Standardized metrics were down-sampled to 1 point every 5 minutes and then smoothed using a 1-hour moving average, with the requirement that at least 30 minutes of valid data were present in the previous hour to report a value.
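The activity-matched standardization can be sketched as follows; the function and variable names are illustrative, not the authors'.

```python
import numpy as np


def activity_matched_zscore(x, log_a, asleep, ref_x, ref_log_a, ref_asleep, tol=0.20):
    """z-score of one observation of a metric against baseline points acquired
    in the same sleep state and within +/- 0.20 of the current log-transformed
    activity level, per the formula above."""
    match = (ref_asleep == asleep) & (np.abs(ref_log_a - log_a) <= tol)
    mu = ref_x[match].mean()
    sigma = ref_x[match].std(ddof=1)
    return (x - mu) / sigma
```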
To characterize changes in individual metrics and the cumulative change in the multidimensional space of all extracted features over time, we applied open-source multivariable process control (MVPC) techniques [30–32]. We built a principal component model using data acquired prior to the inoculation to reduce the dimensionality of the data. In a model with m principal components, the Hotelling T² statistic for datapoint i is calculated as

Ti² = Σk=1..m (tik / lk)²,

where tik is the principal component score for the kth principal component of the ith datapoint, and lk is the standard deviation of tik [32]. The control limits (CLs) are calculated for an assumed value of significance level α [32]. In our model, we retained 3 principal components and calculated CLs for α varying from 0.001 upward.
We then applied the model to “new” data (postinoculation) to monitor the process over time. To reduce the effect of short-term changes in the variables, such as the fight-or-flight response, we smoothed T² as a function of time using a moving-average filter based on the previous 4 hours of data. A variation in T² that exceeded the CL (the threshold) signaled an anomaly in the process. We denoted the first time this alert was issued for a given participant as the detection time.
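A minimal sketch of this MVPC monitoring step, assuming scikit-learn's PCA in place of the SAS/QC implementation cited above; the 48-point smoothing window corresponds to 4 hours of 5-minute samples, and the default threshold of 15 reflects the control limit reported in the Results.

```python
import numpy as np
from sklearn.decomposition import PCA


class T2Monitor:
    """Fit a principal component model on baseline (preinoculation) z-scored
    features and track Hotelling's T^2 for new data (illustrative sketch)."""

    def __init__(self, n_components=3):
        self.pca = PCA(n_components=n_components)

    def fit(self, baseline):
        # baseline: array of shape (n_samples, n_features)
        scores = self.pca.fit_transform(baseline)
        self.score_std = scores.std(axis=0, ddof=1)   # l_k in the T^2 formula
        return self

    def t2(self, data):
        scores = self.pca.transform(data)
        return ((scores / self.score_std) ** 2).sum(axis=1)


def first_alert(t2_series, threshold=15.0, window=48):
    """Smooth T^2 over the previous 4 hours (48 five-minute points, assumed)
    and return the index of the first point exceeding the control limit."""
    smoothed = np.convolve(t2_series, np.ones(window) / window, mode="valid")
    above = np.flatnonzero(smoothed > threshold)
    return int(above[0]) + window - 1 if above.size else None
```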
We characterized the performance of the model by calculating the sensitivity and specificity for fixed threshold values and the area under the receiver-operating-characteristic (ROC) curve, AUC [33]. The ROC curve analysis was performed using MedCalc Statistical Software version 20.009 (MedCalc Software, Ltd).
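The ROC analysis can be reproduced with standard open-source tools (the study used MedCalc); the scikit-learn sketch below assumes one summary score per monitored period, for example the maximum smoothed T² reached in that period.

```python
from sklearn.metrics import roc_auc_score, roc_curve


def roc_summary(period_labels, period_scores):
    """AUC and ROC coordinates from one summary score per monitored period
    (1 = sick, 0 = healthy)."""
    fpr, tpr, thresholds = roc_curve(period_labels, period_scores)
    return roc_auc_score(period_labels, period_scores), fpr, tpr, thresholds
```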
RESULTS
Study Overview
We enrolled 20 healthy adult participants who were inoculated with influenza A and monitored with a wearable device capable of detecting HR and activity level. Seventeen individuals (85%) tested positive for H3N2 (1 also tested positive for rhinovirus), and 3 tested negative by PCR. Of the 17 positive individuals, 14 were characterized as symptomatic and 3 were asymptomatic. Figure 1 shows the daily mean total symptom score for the infected symptomatic, infected asymptomatic, and uninfected individuals.
Figure 1.
Mean daily total symptom score for symptomatic, asymptomatic, and uninfected individuals. Day 0 is the day of inoculation. Error bars represent standard deviation.
Symptom onset occurred on average on day +2 after inoculation; symptoms peaked on day +3 and subsided by day +9. There were no serious adverse events, and no adverse events were considered possibly, probably, or definitely related to inoculation with the influenza A (H3N2) challenge virus.
Trends in RR Interval and HRV Metrics
Figure 2 presents IBI, total symptom score, activity, and the z-score of IBI (z-IBI) as functions of time for a symptomatic individual. The physical activity level changed significantly after the study participants entered quarantine (Figure 2C), making activity matching important. The z-score plot (Figure 2D) highlights a downward shift in IBI in the days after inoculation; this change is masked in the absolute IBI data (Figure 2A) by the decrease in activity level.
Figure 2.
Acquired data plotted as functions of time for a symptomatic participant FC001; t = 0 marks the timing of the inoculation. (A) Interbeat interval (IBI) averaged in 5-minute epochs; (B) total symptom score (TSS); (C) activity averaged in 5-minute epochs and timing of sleep (S), acceleration due to gravity (g); and (D) z-score for IBI, matched for activity.
Figure 3 presents z-scores for HRV metrics as a function of time for a positive symptomatic, a positive asymptomatic, and a negative individual. In the positive cases, LF and HF both decreased following the inoculation, while the LF/HF ratio increased. For the negative individual, values of the metrics remain close to the baseline.
Figure 3.
Standardized values of IBI and selected HRV metrics: (left) positive symptomatic participant FC001; (center) positive asymptomatic participant FC007; (right) participant FC002 who tested negative for the H3N2 virus. Abbreviations: TSS, total symptom score; z-HF, z-score high frequency; z-IBI, z-score interbeat interval; z-LF, z-score low frequency.
Figure 4 shows the trends in IBI and HRV metrics averaged across all 17 persons who tested positive for influenza. Mean parameter values were computed over 24-hour increments before averaging across the cohort. The small upward shift in the metrics when participants entered quarantine (day −1) indicates that standardization did not completely eliminate changes associated with the lower activity level in the clinic setting. On average, the IBI and HF responses were of similar magnitude, but there were individuals with significant HRV changes in the absence of a strong IBI response.
Figure 4.
Twenty-four–hour mean values of z-scores for interbeat interval (z-IBI), low frequency (z-LF), high frequency (z-HF), and z-LF/HF ratio averaged across all H3N2-positive subjects in the study. Error bars indicate standard error of the mean.
To capture the combined effect of IBI and HRV variables, we calculated the Hotelling T² statistic as a measure of temporal change in the multidimensional variable space. Figure 5 shows plots of the statistic as a function of time for a symptomatic, an asymptomatic, and a negative individual. The horizontal line shows the CL of 15 (α of .003) averaged for all participants (individual CLs were within 1% of the mean). The values of the statistic exceed the CL for the positive individuals, but remain below threshold for the negative individual.
Figure 5.
Examples of Hotelling T² statistic plotted as a function of time for symptomatic (FC001) and asymptomatic (FC007) positive participants, and for a negative (FC002) participant. Abbreviation: CL, control limit.
Detection Algorithms
We constructed detection algorithms for all 20 participants, using data acquired before inoculation to define each individual’s baseline. We then applied the models to the data acquired after inoculation. Because the CL value was consistent across the cohort, we used a universal threshold to determine alerts. In total, there were 23 healthy periods (ie, 20 preinoculation periods and 3 postinoculation periods from the negative individuals) and 17 sick periods.
For a threshold of 11, the algorithm correctly identified all positive cases in the cohort but registered 4 false positives during preinoculation days. At a threshold of 15, no false alerts were recorded, and the algorithm issued the alerts for 16 of 17 (94%) positive participants. AUC was calculated to be 0.97 (95% CI, .91–1.0; Supplementary Figure 1). Figure 6 shows the timing of the alert, compared to symptom onset, for all study participants.
Figure 6.
Timing of alerts (triangles) relative to symptom onset for all study participants using a threshold of 15.
For the majority of the participants, the alert was before symptom onset. There were no alerts for the 3 negative participants or during any preinoculation days. The mean alert time was 58 hours after inoculation and 23 hours before the onset of symptoms (Supplementary Table 1).
In Supplementary Table 1 and Supplementary Figure 1, we compare the performance of the MVPC model to a univariable algorithm based on thresholding of z-IBI smoothed with the 4-hour moving-average filter. In the univariable case, there were no false alerts in any of the healthy days for the threshold value of −2.7. The algorithm issued an alert for 10 of the 17 positive individuals (59%), and 4 of these were before symptom onset. The mean alert time was 91 hours after inoculation. AUC for the univariable algorithm was calculated to be 0.79 (95% CI, .67–.91).
DISCUSSION
To complement previously conducted field investigations targeted at presymptomatic detection of influenza-like illness infections, we conducted an influenza challenge study where the timing of the immune response to the pathogen could be accurately referenced to inoculation, rather than to symptom onset. Data were acquired using a wearable ECG sensor with an integrated accelerometer, and data preprocessing and metrics extraction algorithms were established and run on local networks with no connection to third-party cloud servers.
Because IBI and HRV vary widely from individual to individual [34], we converted IBI and HRV metrics into z-scores using subject-specific means and standard deviations calculated from the data acquired before inoculation. To account for the effect of physical activity, we used a subset of the baseline data matched to the activity level in the z-score calculations.
We observed a decrease in the z-IBI metric from the baseline, beginning during the first 2 days following infection in the majority of individuals who tested positive for influenza; this was followed by a return to baseline as the patients recovered (Figure 2 and Figure 4). This behavior is similar to trends in the daily average HR (inverse of IBI) reported in earlier field studies [6, 8].
We also observed changes in HRV metrics following the inoculation. Values of HF and LF decreased, while the LF/HF ratio increased (Figure 3 and Figure 4). With the possible exception of the LF/HF ratio, these trends are similar to the direction of changes observed in hospital patients who developed sepsis. For example, Yien et al reported that the progressive decrease in the LF and HF values was indicative of the patient’s deterioration [35], while Piepoli et al analyzed frequency-domain HRV in 12 critically ill patients during septic shock and recovery and reported 10 patients who recovered from the infection with normalization of the LF component [36].
To analyze temporal changes in all monitored metrics collectively and construct an illness detection algorithm, we used the MVPC technique. With the threshold set at 15, the algorithm detected infection in 16 of 17 positive cases while correctly classifying the 3 negative cases and rendering no false positives in the preinoculation period.
For the majority of symptomatic individuals, the model issued alerts before the onset of symptoms. On average infection was detected 23 hours before symptoms were first noted. In this study, symptomatic determination was based entirely on subjective reports because no study participants developed a temperature rise greater than 0.6°C. The onset of symptoms reflects the earliest time when individuals would notice becoming unwell, which would likely be some time before they present to a health care professional.
Early diagnosis of respiratory viral infections is important both in terms of clinical management and public health intervention. For influenza, antivirals such as oseltamivir, zanamivir, and baloxavir are widely approved. Although the literature regarding effectiveness in hospitalized cohorts has been mixed [37], administration of these drugs early in the course of infection has been widely shown to be efficacious [38]. In a study of 2124 critically ill patients, oseltamivir treatment within 48 hours of symptom onset improved survival [39], while early treatment in the community shortened the time of illness by 1–2 days and reduced the risk of hospitalization [40]. Similarly, baloxavir has been shown to shorten the duration of symptoms by approximately 24 hours along with reduction in viral load [41], with modelling studies suggesting almost double the effect if the treatment is given within 24 hours rather than 48 hours [42].
The impact of early antiviral treatment has also been shown in other respiratory viral infections, most notably during the COVID-19 pandemic, where the efficacy of outpatient antiviral treatment in limiting disease progression and hospitalization has been marked [43–45]. In addition, the COVID-19 pandemic saw an unprecedented use of public health interventions with widespread implementation of self-isolation as a strategy for combating pandemic spread. Here, based on mathematical modelling, earlier diagnosis of infection with more rapid self-isolation (with a difference of as little as 1.4 days) resulted in significantly lower transmission rates [46]. For these reasons, even bringing forward diagnosis by 23 hours (and likely longer in practice) could have an impact on treatment efficacy and interruption of transmission. Furthermore, with H3N2 influenza, the time between inoculation and development of symptoms was short; future studies will investigate the detection timing for infections with a longer incubation period, like COVID-19, where there is an increased opportunity for presymptomatic detection and potentially even greater impact by early treatment and interrupting transmission by asymptomatic shedding.
The influenza-challenge study provided an advantageous framework for the development of presymptomatic illness detection algorithms: the time of the infection was known exactly, and both symptomatic and asymptomatic cases were identified. However, the cohort was small, and its heterogeneity was further restricted by the study exclusion criteria. While the high infection prevalence rate helped limit the number of individuals who needed to participate, more and longer healthy time periods are needed to better gauge the false-positive rate.
Additionally, participants in the current study were quarantined following the inoculation. This setting limited their physical activity and their exposure to environmental and psychological stressors that may act as confounders for the infection detection algorithm. Future work will need to apply the algorithm to datasets collected from groups that are monitored in normal living conditions throughout the study.
These real-world scenarios will require adjustments to the methodology of data collection and analytics. While the ECG sensors are not compatible with months-long periods of wear, smartwatches or wristbands have a more user-friendly form factor. Some modifications to the detection algorithms may be required given subtle differences in physiological aspects of the optical and ECG measurements [22].
Future work will also optimize the duration of the baseline period. In this study, the period was equal to 7 days for all study participants. We will determine the minimum duration of the baseline dataset as a function of the device-wear compliance, artifact level, and any other factors influencing the continuity of the data stream.
Finally, the physical infrastructure for data collection and analytics will need adjustments in real-world scenarios. Instead of downloading data from the sensor to a laptop, as in the current study, the data flow will be managed by a mobile application loaded on the user’s smartphone. This approach is similar to the one used in field studies [6–9], with one important distinction: the data would be transmitted from the phone to a local computer or to a storage and analytics node on a cloud server, bypassing the device vendor’s cloud. We have tested the end-to-end data collection and processing pipeline on a system architecture of the type described above and verified that the data storage and computing requirements are modest and compatible with commodity hardware in both local and cloud network configurations (DS Temple et al, unpublished).
Supplementary Material
Notes
Acknowledgment. The authors gratefully acknowledge valuable technical discussions with Dr Mark Wrobel, DARPA Program Manager.
Disclaimer. The views, opinions, and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense, the US Government, UK NHS, the UK National Institute for Health and Care Research (NIHR), or the UK Department of Health and Social Care.
Financial support. This work was supported by the Defense Advanced Research Projects Agency (grant number 140D6319C00236). C. C. is supported by the Imperial Biomedical Research Centre, which is funded by the National Institute for Health Research (NIHR), under contract with the UK Department of Health and Social Care. Infrastructure support was provided by the NIHR Imperial Biomedical Research Centre and the NIHR Imperial Clinical Research Facility.
Potential conflicts of interest. All authors: No reported conflicts of interest. All authors have submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Conflicts that the editors consider relevant to the content of the manuscript have been disclosed.
Contributor Information
Dorota S Temple, RTI International, Research Triangle Park, USA.
Meghan Hegarty-Craver, RTI International, Research Triangle Park, USA.
Robert D Furberg, RTI International, Research Triangle Park, USA.
Edward A Preble, RTI International, Research Triangle Park, USA.
Emma Bergstrom, Department of Infectious Disease, Imperial College London, London, United Kingdom.
Zoe Gardener, Department of Infectious Disease, Imperial College London, London, United Kingdom.
Pete Dayananda, Department of Infectious Disease, Imperial College London, London, United Kingdom.
Lydia Taylor, Department of Infectious Disease, Imperial College London, London, United Kingdom.
Nana Marie Lemm, Department of Infectious Disease, Imperial College London, London, United Kingdom.
Loukas Papargyris, Department of Infectious Disease, Imperial College London, London, United Kingdom.
Micah T McClain, Center for Infectious Diseases Diagnostic Innovation, Duke University School of Medicine, Durham, North Carolina, USA.
Bradly P Nicholson, Center for Infectious Diseases Diagnostic Innovation, Duke University School of Medicine, Durham, North Carolina, USA; Institute for Medical Research, Durham, North Carolina, USA.
Aleah Bowie, Center for Infectious Diseases Diagnostic Innovation, Duke University School of Medicine, Durham, North Carolina, USA.
Maria Miggs, Institute for Medical Research, Durham, North Carolina, USA.
Elizabeth Petzold, Center for Infectious Diseases Diagnostic Innovation, Duke University School of Medicine, Durham, North Carolina, USA.
Christopher W Woods, Institute for Medical Research, Durham, North Carolina, USA; Hubert-Yeargan Center for Global Health, Duke University School of Medicine, Durham, North Carolina, USA.
Christopher Chiu, Department of Infectious Disease, Imperial College London, London, United Kingdom.
Kristin H Gilchrist, RTI International, Research Triangle Park, USA.
Supplementary Data
Supplementary materials are available at The Journal of Infectious Diseases online. Supplementary materials consist of data provided by the authors that are published to benefit the reader. The posted materials are not copyedited. The contents of all supplementary data are the sole responsibility of the authors. Questions or messages regarding errors should be addressed to the author.
References
- 1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25:44–56.
- 2. Li X, Dunn J, Salins D, et al. Digital health: tracking physiomes and activity using wearable biosensors reveals useful health-related information. PLoS Biol 2017; 15:e2001402.
- 3. Clifton L, Clifton DA, Pimentel MA, Watkinson PJ, Tarassenko L. Predictive monitoring of mobile patients by combining clinical observations with data from wearable sensors. IEEE J Biomed Health Inform 2014; 18:722–30.
- 4. Bayo-Monton JL, Martinez-Millana A, Han W, Fernandez-Llatas C, Sun Y, Traver V. Wearable sensors integrated with internet of things for advancing eHealth care. Sensors (Basel) 2018; 18:1851.
- 5. Perez MV, Mahaffey KW, Hedlin H, et al. Large-scale assessment of a smartwatch to identify atrial fibrillation. N Engl J Med 2019; 381:1909–17.
- 6. Radin JM, Wineinger NE, Topol EJ, Steinhubl SR. Harnessing wearable device data to improve state-level real-time surveillance of influenza-like illness in the USA: a population-based study. Lancet Digit Health 2020; 2:e85–93.
- 7. Quer G, Radin JM, Gadaleta M, et al. Wearable sensor data and self-reported symptoms for COVID-19 detection. Nat Med 2021; 27:73–7.
- 8. Zhu G, Li J, Meng Z, et al. Learning from large-scale wearable device data for predicting epidemics trend of COVID-19. Discrete Dyn Nat Soc 2020; 1–8.
- 9. Mishra T, Wang M, Metwally AA, et al. Pre-symptomatic detection of COVID-19 from smartwatch data. Nat Biomed Eng 2020; 4:1208–20.
- 10. Seshadri DR, Davies EV, Harlow ER, et al. Wearable sensors for COVID-19: a call to action to harness our digital infrastructure for remote patient monitoring and virtual assessments. Front Digit Health 2020; 2:11.
- 11. Miller DJ, Capodilupo JV, Lastella M, et al. Analyzing changes in respiratory rate to predict the risk of COVID-19 infection. PLoS One 2020; 15:e0243693.
- 12. Poongodi M, Hamdi M, Malviya M, et al. Diagnosis and combating COVID-19 using wearable Oura smart ring with deep learning methods. Pers Ubiquitous Comput 2021; 1–11.
- 13. Hirten RP, Danieletto M, Tomalin L, et al. Use of physiological data from a wearable device to identify SARS-CoV-2 infection and symptoms and predict COVID-19 diagnosis: observational study. J Med Internet Res 2021; 23:e26107.
- 14. Natarajan A, Su HW, Heneghan C. Assessment of physiological signs associated with COVID-19 measured using wearable devices. NPJ Digit Med 2020; 3:156.
- 15. Shapiro A, Marinsek N, Clay I, et al. Characterizing COVID-19 and influenza illnesses in the real world via person-generated health data. Patterns 2021; 2:100188.
- 16. Woods CW, McClain MT, Chen M, et al. A host transcriptional signature for presymptomatic detection of infection in humans exposed to influenza H1N1 or H3N2. PLoS One 2013; 8:e52198.
- 17. McClain MT, Constantine FJ, Nicholson BP, et al. A blood-based host gene expression assay for early detection of respiratory viral infection: an index-cluster prospective cohort study. Lancet Infect Dis 2021; 21:396–404.
- 18. Sherman AC, Mehta A, Dickert NW, Anderson EJ, Rouphael N. The future of flu: a review of the human challenge model and systems biology for advancement of influenza vaccinology. Front Cell Infect Microbiol 2019; 9:107.
- 19. Habibi MS, Chiu C. Controlled human infection with RSV: the opportunities of experimental challenge. Vaccine 2017; 35:489–95.
- 20. Ramasamy S, Balan A. Wearable sensors for ECG measurement: a review. Sensor Rev 2018; 38:412–9.
- 21. Bent B, Goldstein BA, Kibbe WA, Dunn JP. Investigating sources of inaccuracy in wearable optical heart rate sensors. NPJ Digit Med 2020; 3:18.
- 22. Yuda E, Shibata M, Ogata Y, et al. Pulse rate variability: a new biomarker, not a surrogate for heart rate variability. J Physiol Anthropol 2020; 39:21.
- 23. Jackson GG, Dowling HF, Spiesman IG, Boand AV. Transmission of the common cold to volunteers under controlled conditions: I. The common cold as a clinical entity. AMA Arch Intern Med 1958; 101:267–78.
- 24. Gwaltney JMJ, Colonno RJ, Hamparian VV, Turner RB. Rhinovirus. In: Schmidt NJ, Emmons RW, eds. Diagnostic procedures for viral, rickettsial, and chlamydial infections. 6th ed. Washington, DC: American Public Health Association, 1989:579–614.
- 25. Bittium. Bittium Faros technical specifications. Medical Technologies – Bittium Faros. https://www.bittium.com/medical/bittium-faros. 2019. Accessed 1 July 2021.
- 26. Heart rate variability: standards of measurement, physiological interpretation and clinical use. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Circulation 1996; 93:1043–65.
- 27. Kubios. HRV analysis methods. https://www.kubios.com/hrv-analysis-methods. Accessed 1 July 2021.
- 28. Pomeranz B, Macaulay RJ, Caudill MA, et al. Assessment of autonomic function in humans by heart rate spectral analysis. Am J Physiol Heart Circ Physiol 1985; 248:H151–3.
- 29. Champseix R. Aura-healthcare/HRV-analysis. https://github.com/Aura-healthcare/hrv-analysis. Accessed 1 March 2022.
- 30. Kourti T, MacGregor JF. Process analysis, monitoring and diagnosis, using multivariate projection methods. Chemom Intell Lab Syst 1995; 28:3–21.
- 31. Kano M, Hasebe S, Hashimoto I, Ohno H. A new multivariate statistical process monitoring method using principal component analysis. Comput Chem Eng 2001; 25:1103–13.
- 32. SAS. SAS/QC 14.2 user’s guide. https://documentation.sas.com/doc/en/qccdc/14.2/qcug/main_contents.htm?locale.htm=. 2016. Accessed 1 July 2021.
- 33. Zweig M, Campbell G. Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine. Clin Chem 1993; 39:561–77.
- 34. Quer G, Gouda P, Galarnyk M, Topol EJ, Steinhubl SR. Inter- and intraindividual variability in daily resting heart rate and its associations with age, sex, sleep, BMI, and time of year: retrospective, longitudinal cohort study of 92,457 adults. PLoS One 2020; 15:e0227709.
- 35. Yien H-W, Hseu S-S, Lee LC, et al. Spectral analysis of systemic arterial pressure and heart rate signals as a prognostic tool for the prediction of patient outcome in the intensive care unit. Crit Care Med 1997; 25:258–66.
- 36. Piepoli M, Garrard CS, Kontoyannis DA, Bernardi L. Autonomic control of the heart and peripheral vessels in human septic shock. Intensive Care Med 1995; 21:112–9.
- 37. Muthuri SG, Venkatesan S, Myles PR, et al. Impact of neuraminidase inhibitors on influenza A(H1N1)pdm09-related pneumonia: an individual participant data meta-analysis. Influenza Other Respir Viruses 2016; 10:192–204.
- 38. Koonin LM, Patel A. Timely antiviral administration during an influenza pandemic: key components. Am J Public Health 2018; 108:S215–20.
- 39. Moreno G, Rodríguez A, Sole-Violán J, et al. Early oseltamivir treatment improves survival in critically ill patients with influenza pneumonia. ERJ Open Res 2021; 7.
- 40. Atkins CY, Patel A, Taylor TH, et al. Estimating effect of antiviral drug use during pandemic (H1N1) 2009 outbreak, United States. Emerg Infect Dis 2011; 17:1591.
- 41. Ison MG, Portsmouth S, Yoshida Y, et al. Early treatment with baloxavir marboxil in high-risk adolescent and adult outpatients with uncomplicated influenza (CAPSTONE-2): a randomised, placebo-controlled, phase 3 trial. Lancet Infect Dis 2020; 20:1204–14.
- 42. Du Z, Nugent C, Galvani AP, Krug RM, Meyers LA. Modeling mitigation of influenza epidemics by baloxavir. Nat Commun 2020; 11:1–6.
- 43. Hammond J, Leister-Tebbe H, Gardner A, et al. Oral nirmatrelvir for high-risk, nonhospitalized adults with COVID-19. N Engl J Med 2022; 386:1397–408.
- 44. Jayk Bernal A, Gomes da Silva MM, Musungaie DB, et al. Molnupiravir for oral treatment of COVID-19 in nonhospitalized patients. N Engl J Med 2022; 386:509–20.
- 45. Gottlieb RL, Vaca CE, Paredes R, et al. Early remdesivir to prevent progression to severe COVID-19 in outpatients. N Engl J Med 2022; 386:305–15.
- 46. Kucharski AJ, Klepac P, Conlan AJK, et al. Effectiveness of isolation, testing, contact tracing, and physical distancing on reducing transmission of SARS-CoV-2 in different settings: a mathematical modelling study. Lancet Infect Dis 2020; 20:1151–60.