IEEE Pervasive Computing. 2021 Apr 13;20(2):73–80. doi: 10.1109/MPRV.2021.3068183

Smart Multimodal Telehealth-IoT System for COVID-19 Patients

Lloyd E. Emokpae,1 Roland N. Emokpae,1 Wassila Lalouani,2 Mohamed Younis2

Abstract

The COVID-19 pandemic has highlighted how easily the healthcare system can be overwhelmed. Telehealth stands out as an effective solution, allowing patients to be monitored remotely without packing hospitals and exposing caregivers to the deadly virus. This article presents our Intel award-winning solution for diagnosing COVID-19 related symptoms and similar contagious diseases. Our solution realizes an Internet of Things system with multimodal physiological sensing capabilities. The sensor nodes are integrated into a wearable shirt (vest) to enable continuous monitoring in a noninvasive manner; the data are collected and analyzed using advanced machine learning techniques at a gateway for remote access by a healthcare provider. Our system can be used by both patients and quarantined individuals. The article presents an overview of the system and briefly describes some novel techniques for increased resource efficiency and assessment fidelity. Preliminary results are provided and the roadmap for full clinical trials is discussed.


The emergence of the COVID-19 pandemic has led to an unprecedented burden on the healthcare system both nationally and worldwide.1 In the U.S. alone, the number of positive cases had exceeded twenty-eight million at the time of writing this article. Such a global-scale outbreak has placed an overwhelming load on healthcare facilities and personnel. Moreover, caregivers have been put at high risk, and quite a few of them have been infected. Consequently, demand has increased for telehealth services to fill the gap, especially at a time when social distancing measures are being employed. Telehealth employs a combination of communications, sensing, computation, and human–computer interaction technologies for the diagnosis, treatment, and monitoring of patients without disrupting their quality of life.2 It also provides a conduit that allows the physician to deliver an expected level of care through sensor biofeedback.

Breakthroughs in wearable medical devices and the emergence of the Internet of Things (IoT) have revolutionized the healthcare industry.2,3 Particularly, these advanced technologies have enabled the development of effective and economical solutions for remote and continuous monitoring of patients with medical conditions. For example, the heartbeat of individuals can be measured to detect cardiac distress and automatically call for emergency assistance. Such a monitoring service has traditionally been possible only through hospitals or specialized clinics, and is consequently deemed both expensive for insurance companies and inconvenient for patients and their families. For health insurance providers, reducing the cost is paramount in order to maintain affordable premiums. Moreover, wearable sensors are invaluable for monitoring body conditions under stress, e.g., while exercising or playing sports. The architecture of such a real-time health monitoring system consists of single or multimodality sensing devices that collect relevant measurements and transmit them through a gateway node to storage centers, either cloud-based or private, to be accessible to caregivers.

This article provides an overview of our smart Telehealth-IoT system, which is geared toward addressing the aforementioned challenge. Fundamentally, our Intel award-winning and patent-pending smart wireless wearable IoT system provides an in-home telehealth monitoring tool for health assessment and for diagnosing illness conditions amid the COVID-19 pandemic. The system realizes a multimodal health assessment methodology by monitoring multiple vital conditions and correlating the collected data to provide continual and real-time assessment of the patient's health. The system is geared for use by: 1) patients who have been confirmed to have COVID-19 and are being treated at home, and 2) those who are quarantined after being exposed to infected individuals. Our novel wearable Telehealth-IoT system is fundamentally different from the state-of-the-art in that it uses a body area sensor network (BASN) in which each node has a plurality of sensors, including microphones and a pulse oximeter, used for illness diagnosis and condition monitoring. In essence, our wearable noninvasive system constitutes an enabling technology with the value proposition of:

1) diagnosing fever conditions;
2) providing alerts of any lung inflammations;
3) detecting unusual patterns that indicate breathing difficulty;
4) monitoring heart function and assessing fatigue level; and
5) allowing a physician to remotely monitor quarantined and sick individuals.

In addition to describing the architecture and features of Telehealth-IoT, this article briefly highlights some of the novel techniques that have been developed to tackle the technical challenges and boost the efficiency of the system in practice. Particularly, we describe a novel fusion technique for multimodal diagnosis of COVID-19 infection. We provide an overview of an innovative technique for energy-efficient data collection through predictive sampling. We also show some of the preliminary results and report on the current status of the development. The article is organized as follows. The next section provides an overview of the system design and compares Telehealth-IoT with other solutions. Section “Multimodality Illness Diagnostics” describes our multimodal approach for diagnosis of COVID-19 illness and other diseases. Section “Energy Conservation Through Predictive Sampling” briefly discusses our energy optimization technique through predictive sampling. Finally, Section “Conclusion and Future Work” concludes the article with a summary and a brief discussion of future extensions.

SYSTEM ARCHITECTURE AND DISTINCT FEATURES

System Architecture

An overview of the system architecture is shown in Figure 1. The BASN incorporates a mesh of wireless sensors that are networked to measure full torso range of motion, muscle activation, and body vitals in the form of photoplethysmography (PPG), electrocardiography (ECG), electromyography (EMG), acoustic cardiography (ACG), and acoustic myography (AMG). The system uses Zigbee to support internode connectivity. The sensors transmit their data to a gateway node that serves as the interface for the BASN. One of the nodes can be designated as a gateway, or a separate node can be incorporated to serve such a role. An example of the latter is when integrating the BASN with the smart LASARRUS glove (not shown in Figure 1), which has onboard processing capabilities and includes additional sensor modalities, such as a pulse sensor. For the purpose of COVID-19 diagnosis and symptom tracking, we leverage the ECG, ACG, and temperature sensors for physiological monitoring, auscultation of lung sounds, and fever detection. Signal processing techniques are used for wireless beamforming and deconvolution of incoming lung sounds. The real-time sensor data are processed by the gateway node when feasible, or relayed through the gateway to remote centers over a secure connection.
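To make this data path concrete, the following is a minimal gateway-side sketch in Python. The 12-byte frame layout, the modality codes, and the server endpoint (telehealth.example.org) are illustrative assumptions; the article does not publish the actual LASARRUS wire format.

```python
# Minimal gateway sketch (hypothetical framing): parse a sensor frame received
# from the BASN and relay it to a remote telehealth center over TLS.
import json
import socket
import ssl
import struct

# Assumed 12-byte frame: node id, modality code, timestamp, sample, battery, checksum
FRAME = struct.Struct("<BHIHHB")
MODALITIES = {1: "ECG", 2: "PPG", 3: "EMG", 4: "ACG", 5: "AMG", 6: "TEMP"}

def parse_frame(raw: bytes) -> dict:
    node, modality, ts, sample, batt, _ = FRAME.unpack(raw)
    return {"node": node, "modality": MODALITIES.get(modality, "UNKNOWN"),
            "timestamp_ms": ts, "sample": sample, "battery_mv": batt}

def relay(frames, host="telehealth.example.org", port=8443):
    ctx = ssl.create_default_context()          # secure connection to remote center
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            for raw in frames:
                tls.sendall(json.dumps(parse_frame(raw)).encode() + b"\n")
```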

Figure 1. Overview of our multimodal smart sensor network architecture for COVID-19 diagnosis. Also shown is one of our sensor data modalities (ECG) in comparison with an existing FDA-approved system.

To highlight the capabilities of the incorporated sensors, we provide a brief comparison of the performance of our ECG sensing with an FDA-approved ECG device, namely the Kardia by AliveCor. The waveforms are shown in Figure 1. The result shows consistent heart rate (HR) and Q-wave, R-wave, and S-wave (QRS) intervals for the sample duration; the LASARRUS ECG slightly lags the Kardia in time due to imperfections in the synchronization of the two independent data streams. The heart rate estimates from the two ECG streams were within 5% of each other, with the HR shown in Figure 1 being around 59 bpm for a subject at rest. The other sensors of our system have capabilities and specifications comparable to commercial systems, namely the Eko DUO ECG + Digital Stethoscope and the ThermoWorks WAND.
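The HR comparison above rests on detecting R-peaks and measuring R-R intervals. The sketch below illustrates that computation; the 250-Hz sampling rate and the peak-detection thresholds are assumptions for illustration, not the system's actual parameters.

```python
# Illustrative HR estimation from a single-lead ECG segment via R-peak detection.
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ecg: np.ndarray, fs: float = 250.0) -> float:
    """Estimate mean heart rate from R-R intervals."""
    # R-peaks are the dominant positive deflections; enforce a 0.4-s refractory
    # distance (max ~150 bpm) and a relative amplitude threshold.
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), height=0.6 * np.max(ecg))
    rr = np.diff(peaks) / fs          # R-R intervals in seconds
    return 60.0 / float(np.mean(rr))  # beats per minute

# A 59-bpm resting subject, as in Figure 1, yields R-R intervals near 1.02 s.
```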

Existing COVID-19 Technologies

Respiratory Rate Analysis: The respiration rate (RR) reflects the breathing frequency and is deemed indicative of health problems. Particularly, RR abnormality could be linked to hypoxaemia or hypercarbia, which is often associated with COVID-19 infection. Many studies have correlated abnormal respiratory rates with pneumonia, pulmonary embolism, weaning failure, and overdose.4 Subbe et al.5 have shown that RR identifies patients at high risk of catastrophic cardiopulmonary deterioration more accurately than blood pressure and pulse rate. It has also been shown that RR can be inferred from physiological signals such as ECG.6 We will be integrating a finger pulse oximeter into our system for rapid development to address COVID-19 demands.
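As a hedged sketch of how RR can be inferred from ECG, the code below implements one common ECG-derived respiration (EDR) technique of the kind surveyed by Charlton et al.:6 respiratory modulation of R-peak amplitude is extracted and its dominant frequency read off as the RR. The segment length (tens of seconds) and breathing band are assumptions.

```python
# ECG-derived respiration (EDR) sketch: R-peak amplitude modulation -> RR.
import numpy as np
from scipy.signal import find_peaks

def respiratory_rate_bpm(ecg: np.ndarray, fs: float = 250.0) -> float:
    # Detect R-peaks and keep their amplitudes (assumes a segment >= ~30 s).
    peaks, props = find_peaks(ecg, distance=int(0.4 * fs), height=0.0)
    amps, times = props["peak_heights"], peaks / fs
    # Resample the beat-by-beat amplitude series onto a uniform 4-Hz grid.
    grid = np.arange(times[0], times[-1], 0.25)
    edr = np.interp(grid, times, amps) - np.mean(amps)
    # Dominant frequency in a plausible breathing band (0.1-0.5 Hz).
    freqs = np.fft.rfftfreq(grid.size, d=0.25)
    spec = np.abs(np.fft.rfft(edr))
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(spec[band])]
```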

Wearable Auscultation Devices for Telehealth Diagnosis: Recent advances in wearable devices and smart sensors have led to the development of practical “StethoVest” systems.7–9 Researchers at Johns Hopkins University are among the first to develop a wearable vest embedded with microphones for heart and lung auscultation.7 However, the vest requires physical tethering to a PC performing the data acquisition. Scientists from Technische Universität Berlin, Germany, have developed a wireless multimodal sensor for acoustic auscultation.8 Their solution integrates sensors for ECG and actigraphy in addition to microphones. Although very promising, their solution can only be used for auscultation. A similar system has been developed by scientists from the University of Taipei, Taiwan, where a wearable sensor system is utilized to reduce the effect of motion artifacts on the breathing sound and ECG signals.9 To the best of our knowledge, there are no existing “StethoVest” or wearable systems that employ a network of smart multimodal sensor nodes, each with acoustic sensing capabilities, for heart and lung auscultation and diagnosis. Table 1 gives a holistic overview of our Telehealth-IoT system capabilities in comparison to the state-of-the-art.

TABLE 1. Comparison of our system to the state-of-the-art.

Wearable Device | Max Number of Sensors | Wireless Connectivity | Multimodal Sensing | Features
Telehealth-IoT™ | 24–30 | Yes | Yes | ✓Motion tracking ✓Auscultation ✓Vital monitoring ✓Diagnose illness conditions
Johns Hopkins | 12 | No | No | ✓Auscultation ✓Vital monitoring
Technische Universität Berlin | 1 | Yes | Yes | ✓Auscultation ✓Vital monitoring
University of Taipei | 2 | Yes | Yes | ✓Motion tracking ✓Auscultation ✓Vital monitoring

MULTIMODALITY ILLNESS DIAGNOSTICS

Our Telehealth-IoT system distinguishes itself through the inclusion of a diverse set of sensors and the pursuit of a multimodal methodology for detecting and tracking the symptoms of COVID-19. According to data on infected patients in Wuhan, China,10 the most common COVID-19 symptoms are fever (73%), cough (59%), shortness of breath (31%), muscle ache (11%), confusion (9%), headache (8%), sore throat (5%), rhinorrhea (4%), chest pain (2%), diarrhea (2%), and nausea and vomiting (1%). From these statistics, it can be concluded that the strongest indications of infection are pulmonary-related impacts and breathing disorders. Therefore, our system focuses on respiratory related symptoms and aggregates different sensing modalities to detect signs of COVID-19 illness and track the patient's condition over time. We show later in this section that our multimodal approach boosts the fidelity of COVID-19 diagnostics.

As articulated earlier, our Telehealth-IoT opts to monitor patients remotely in a nonintrusive manner. Therefore, acoustic sensors are primarily used in tracking respiratory related symptoms. We analyze cough and breathing sounds to detect pulmonary related diseases, namely asthma, pneumonia, and lung inflammation. Our approach distinguishes itself by employing specific deep learning techniques to detect COVID-19 based on features extracted from cough and breathing sounds. We further include PPG and ECG sensing modalities to estimate the respiratory rate, which serves as an additional feature for our multimodal diagnostics. Our results confirm the effectiveness of our single and multimodality diagnostics. In the following, each classification technique is briefly explained and then our aggregation mechanism for COVID-19 diagnostics is highlighted.

Classification of COVID-19 cough sound: The most severe consequence of COVID-19 is the development of pneumonia. This is typically confirmed with X-ray or CT scan images, as they are the most effective means for assessing the lungs' condition. Published studies have shown that analyzing X-ray and CT scan images, whether by physicians or computer-based processing techniques, achieves an accuracy as high as 95% for COVID-19 infection diagnosis.11 However, obtaining imagery data requires visiting healthcare facilities and coming in contact with medical professionals, which our system aims to prevent in order to halt infection spread and facilitate social distancing. In essence, our system relies on analyzing the cough sound not only to assess the condition of the respiratory system but also to distinguish symptoms of COVID-19 from other illnesses, e.g., asthma. The wearable acoustic sensors in our Telehealth-IoT system acquire cough sounds to be processed at the gateway or remotely at a medical facility.

The use of acoustic recordings of coughs and convolutional neural networks (CNNs) in detecting asthma, pneumonia, and COVID-19 has recently been explored.12 However, deep networks like CNNs generally require substantial training data, which is not currently available for COVID-19. Therefore, we promote the use of a generative adversarial network (GAN) to generate synthetic acoustic COVID records and demonstrate that the synthetic data produced can enhance the classification of the patient's respiratory symptoms with high accuracy. We realize data augmentation using a generative model, and the generated acoustic data are used to train our CNN classifier. The input to the CNN model is the Mel-frequency cepstral coefficients (MFCCs), which represent the acoustic data in the time/frequency domain. The architecture of our discriminative and generative model is illustrated in Figure 2(a). Our GAN model uses two neural networks (NNs) that try to defeat each other: the first NN generates virtual data, while the second NN acts as a discriminator model trying to detect such virtual (unreal) data within the entire dataset. The process continues while minimizing the probability of detecting such virtual data. In essence, the GAN model strives to increase the similarity between the real and generated sounds. We feed the generative network with the MFCCs of real COVID cough sounds and restrict the model to generate virtual COVID cough sounds. More specifically, we use a conditional GAN to generate improved sounds by restricting the search space to the latent space point and the class to COVID-19 data. The model's objective is to minimize the binary cross-entropy loss.
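A minimal conditional-GAN sketch of this augmentation idea is shown below in TensorFlow/Keras: the generator maps a (latent vector, class label) pair to an MFCC patch, and the discriminator scores (MFCC patch, class label) pairs under binary cross-entropy. The layer sizes, latent dimension, and frame count are assumptions; only the 13-coefficient MFCC shape and two-class conditioning come from the text.

```python
# Conditional GAN sketch for MFCC augmentation (sizes are assumptions).
from tensorflow.keras import layers, Model

LATENT, N_FRAMES, N_MFCC, N_CLASSES = 64, 100, 13, 2

def build_generator() -> Model:
    z = layers.Input((LATENT,))                      # latent space point
    y = layers.Input((1,), dtype="int32")            # class label (COVID = 1)
    yv = layers.Flatten()(layers.Embedding(N_CLASSES, LATENT)(y))
    h = layers.Dense(256, activation="relu")(layers.multiply([z, yv]))
    out = layers.Dense(N_FRAMES * N_MFCC, activation="tanh")(h)
    return Model([z, y], layers.Reshape((N_FRAMES, N_MFCC))(out))

def build_discriminator() -> Model:
    x = layers.Input((N_FRAMES, N_MFCC))
    y = layers.Input((1,), dtype="int32")
    yv = layers.Flatten()(layers.Embedding(N_CLASSES, N_FRAMES * N_MFCC)(y))
    h = layers.Concatenate()([layers.Flatten()(x), yv])
    h = layers.Dense(256, activation="relu")(h)
    p = layers.Dense(1, activation="sigmoid")(h)     # real vs. generated
    m = Model([x, y], p)
    m.compile(optimizer="adam", loss="binary_crossentropy")
    return m
```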

Figure 2. Architectural design of the employed models: (a) the conditional GAN used for data augmentation, and (b) the CNN classifier used for processing the cough sound.

To help in diagnosing COVID-19, our Telehealth-IoT employs a CNN model to classify the patient's cough sound (MFCC data). Such a CNN model is trained using both existing cough recordings and the augmented (GAN-generated) data of patients with COVID infections. We have utilized the audio dataset from the Coswara database13 in our analysis. Coswara is a project at the Indian Institute of Science Bangalore for aiding in the diagnosis of COVID-19 based on respiratory, cough, and speech sounds; it requires the participants to provide recordings of breathing and cough sounds. Specifically, we have extracted 1589 non-COVID and 220 COVID coughs, and 1590 non-COVID and 221 COVID breathing samples; we split the obtained samples into nonoverlapping training and testing sets. As Figure 2(b) shows, the MFCC data, with dimension 4327×13×1, are provided as input to a convolutional layer. Then, we feed the output to two additional convolutional layers that have 32 and 64 channels with 4×4 kernel size, respectively.

The output is flattened and then passed to a fully connected layer of 84 neurons, followed by 16 intermediate neurons and a final layer with 2 neurons. To achieve balanced training, we have used the same number of COVID and non-COVID samples in both scenarios, with and without augmentation, and applied k-fold cross validation with k set to 5. In Table 2, we report the results for varying numbers of augmented COVID-19 records. We grow the augmented COVID data incrementally from 50 to 1000 samples, so that the augmented data constitutes between 50/(270+270) and 1000/(1220+1220), i.e., 9–40% of the overall dataset. For example, 50/(270+270) means that we have used 270 samples of healthy individuals and 220 COVID patient samples, augmenting the latter with 50 generated samples (i.e., a total of 270 sick patients). Although the dataset originally contains more records of healthy subjects than of patients, generating COVID-19 records allows the accuracy to reach 91% and 90% for cough and breathing, respectively; this is expected since the generated samples are based on the classification of the actual data. The total signal duration is equal to 30 seconds. The cough sound results are shown in Table 2, with and without augmented data. The most noteworthy observation in Table 2 is the relatively high detection accuracy that our approach could achieve, even when trained with a small dataset.
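The following Keras sketch mirrors the described classifier: an initial convolution over the MFCC "image", two further 4×4 convolutions with 32 and 64 channels, then dense layers of 84, 16, and 2 neurons. The first layer's channel count, the pooling, the activations, and the per-sample frame count are assumptions the text leaves unstated.

```python
# CNN sketch for cough-sound classification (unstated details are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cough_cnn(n_frames: int, n_mfcc: int = 13) -> tf.keras.Model:
    return models.Sequential([
        layers.Input((n_frames, n_mfcc, 1)),          # MFCC patch, 1 channel
        layers.Conv2D(16, (4, 4), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 1)),
        layers.Conv2D(32, (4, 4), padding="same", activation="relu"),
        layers.Conv2D(64, (4, 4), padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(84, activation="relu"),
        layers.Dense(16, activation="relu"),
        layers.Dense(2, activation="softmax"),        # COVID vs. non-COVID
    ])

model = build_cough_cnn(n_frames=100)                 # frame count assumed
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```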

TABLE 2. Cross-validation results for COVID-19 classification using the cough and breathing sound analysis of our Telehealth-IoT deep learning model. Values are average / standard deviation over the k = 5 folds.

Dataset | Cough F1-score (avg / SD) | Cough accuracy (avg / SD) | Breathing F1-score (avg / SD) | Breathing accuracy (avg / SD)
Collected data only | 0.6549 / 0.0820 | 0.6297 / 0.0170 | 0.7273 / 0.0700 | 0.7045 / 0.0500
With data augmentation:
Aug = 50 | 0.6968 / 0.0361 | 0.6667 / 0.0166 | 0.7646 / 0.0491 | 0.7333 / 0.0522
Aug = 100 | 0.7099 / 0.0417 | 0.6875 / 0.0140 | 0.7904 / 0.0452 | 0.7703 / 0.0517
Aug = 150 | 0.7522 / 0.0368 | 0.7324 / 0.0289 | 0.8015 / 0.0293 | 0.7770 / 0.0283
Aug = 180 | 0.7777 / 0.0186 | 0.7375 / 0.0198 | 0.8120 / 0.0426 | 0.7988 / 0.0281
Aug = 200 | 0.7866 / 0.0317 | 0.7595 / 0.0246 | 0.8192 / 0.0217 | 0.8071 / 0.0237
Aug = 300 | 0.8244 / 0.0360 | 0.8010 / 0.0387 | 0.8417 / 0.0323 | 0.8260 / 0.0291
Aug = 1000 | 0.9172 / 0.0048 | 0.9098 / 0.0065 | 0.9172 / 0.0208 | 0.9098 / 0.0224

Detection of COVID breathing sound: Based on the Wuhan statistics mentioned earlier, about one-third of COVID-19 patients experience shortness of breath. Our multimodality solution exploits such a symptom by analyzing breathing sounds. Doctors use digital stethoscopes to listen to lung sounds during breathing and spirometers to measure lung volume and capacity by gauging the airflow in the lungs. Generally, breathing sound patterns include different phases, namely inspiratory, pause, and expiratory, which correspond to the inhale/inflow and exhale/outflow of air to/from the lungs. Abnormalities reflect one or multiple lung/breathing complications. The frequency and energy characteristics of the acoustic signal in each phase enable the diagnosis of crackles, wheezes, rhonchi, squawks, and stridor. This motivates the exploitation of spectrogram analysis using deep learning. Basically, the acquired sound records over time are analyzed using Mel-frequency features to detect anomalous patterns. We apply deep learning techniques to the MFCC breathing vectors of patients with COVID as well as other complications. We employ the same generative learning mechanism described above to populate the dataset with breathing sounds reflecting COVID-19 complications. Similarly, a CNN is used in the classification of COVID-19 breathing sounds; the architecture of the CNN model in this case includes two convolutional layers followed by two dense nonlinear layers and one linear layer. The filter size is 4×4 for both convolutional layers. As shown in Table 2, our approach achieves distinctly high accuracy.
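Both the cough and breathing classifiers consume MFCC features. As a concrete illustration of that front end, the sketch below converts an audio recording into the (frames × 13 × 1) tensors fed to the CNNs; librosa and the 16-kHz sampling rate are our assumptions, since the article does not name its audio toolchain.

```python
# MFCC front-end sketch: audio file -> (frames, 13, 1) feature tensor.
import librosa
import numpy as np

def mfcc_features(wav_path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=sr)              # mono, resampled
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T[..., np.newaxis]                     # (frames, 13, 1) for the CNN
```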

Multimodal Data Fusion for COVID diagnostics: In order to generate an accurate diagnosis and measure the progression of the COVID-19 illness, our Telehealth-IoT system correlates the various indicators provided by analyzing data from individual sensors. Here, we use our cough and breathing sound analysis results as well as the respiratory rate based on PPG and ECG data. We explore two methodologies to conduct such correlation. The first is a voting ensemble-based mechanism to categorize the patient's infection. In this step, we use all COVID-19 indicators based on the single-modality analysis as inputs to a classifier. The output of the classifier reflects the overall assessment of whether the patient has COVID-19 or not, along with the accuracy (fidelity). We apply a variety of classifiers and then take a hard vote based on their outputs. In essence, we consider multiple machine learning classifiers, including an SVM with a Gaussian kernel, AdaBoost, random forest, and decision tree, and aggregate their results, as sketched below. Our preliminary results have shown that such a voting ensemble achieves 80% accuracy for COVID diagnosis without any data augmentation; this is clearly a major improvement in accuracy over assessment using an individual modality, where Table 2 reports 70% and 63% accuracy when only breathing and cough sounds are used, respectively. The second methodology is to fuse the symptom indicators from the single-modality analysis using a fuzzy mechanism. Such an approach will enable the correlation of data that are not provided by the wearable system, e.g., fatigue, headache, etc. We also factor in the importance of the individual symptoms based on how common they are for COVID-19. Currently, we are implementing the second methodology.
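A minimal scikit-learn sketch of the hard-voting ensemble follows. Each patient is represented here as a feature vector of per-modality indicators (cough score, breathing score, respiratory rate); this feature encoding is illustrative, as the article does not give the exact representation.

```python
# Hard-voting ensemble over single-modality COVID-19 indicators.
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf")),          # SVM with Gaussian kernel
        ("ada", AdaBoostClassifier()),
        ("rf", RandomForestClassifier()),
        ("dt", DecisionTreeClassifier()),
    ],
    voting="hard",                           # majority vote on predicted labels
)
# X: rows of [cough_score, breathing_score, respiratory_rate]; y: 0/1 labels
# ensemble.fit(X_train, y_train); y_pred = ensemble.predict(X_test)
```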

ENERGY CONSERVATION THROUGH PREDICTIVE SAMPLING

The operation of wearable devices involves significant energy consumption due to the wireless transmission and high sampling rates required for collecting physiological data. In our Telehealth-IoT system, we have developed a novel mechanism for reducing the number of transmissions through in-network data processing. The idea is to skip the transmission of some samples without degrading the data accuracy. We note that, in the absence of serious health conditions, there is little variation in the monitored physiological attributes and, consequently, in the collected data. We employ a machine learning model at the sensor side; such a model is also duplicated on the gateway node. The model identifies the set of predictable samples that can be inferred by the gateway node. Generally, the analog sensing data from different modalities like ECG, EMG, and AMG exhibit known patterns constituting time series. Given a variation threshold α, the sensor decides to skip a transmission if the difference between the predicted and actual data sample is negligible. We utilize a long short-term memory (LSTM) network. A major advantage of our approach is that the error bound for a reproduced data sample is easily controlled by adjusting the variation threshold, and thus our approach can be applied to a wide range of sensor modalities. Furthermore, our approach does not suffer from error distortion in reconstructed signal segments, since errors are handled sequentially; any violation of the variation threshold necessitates the transmission of the sample, consequently restoring the accuracy for the next sample prediction.
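A sketch of this predictive-sampling loop is shown below: identical LSTM predictors run on the sensor and the gateway, and a sample is transmitted only when the prediction misses it by more than α. An LSTM with 4 units on a scalar input plus a Dense(1) head has exactly 101 trainable parameters, consistent with the count reported below; the window length is our assumption, and the model is assumed to be trained offline as the article states.

```python
# Predictive sampling sketch: transmit only when |prediction - actual| > alpha.
import numpy as np
from tensorflow.keras import layers, models

WINDOW, ALPHA = 8, 1e-5

predictor = models.Sequential([
    layers.Input((WINDOW, 1)),
    layers.LSTM(4),        # 96 trainable parameters
    layers.Dense(1),       # + 5 = 101 total, matching the reported count
])

def sensor_step(history: np.ndarray, actual: float) -> bool:
    """Return True if the sample must be transmitted to the gateway."""
    pred = float(predictor(history.reshape(1, WINDOW, 1))[0, 0])
    return abs(pred - actual) > ALPHA   # within alpha -> skip transmission

# The gateway runs the same offline-trained model: for skipped samples it
# substitutes its own prediction; each transmitted sample resyncs both sides.
```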

The effectiveness of our energy optimization approach is validated using ECG datasets. For the transmission power, we have considered a Zigbee transceiver, specifically the Digi XBee-3 radio, which has a transmit power of 90 mW. The computation overhead is based on using an Arduino platform that has an active current of 1.23 mA when clocked at 16 MHz. The average power consumed in processing is approximately 5 mW, which is an order of magnitude less than that of communication. We have used the Valgrind profiler with the Verrou tool to estimate the set and number of instructions for the applied algorithms while handling the same number of samples (50 000 ECG records). Overall, we have observed that the computational overhead is quite insignificant, about 1% of that of communication. This is because the employed LSTM model is simple and contains only 101 trainable parameters. The model is trained offline and hence is computationally inexpensive during normal Telehealth-IoT usage for patient monitoring. The estimated runtime for predicting a data sample with our LSTM is approximately 0.4496 ms, which is much shorter than the sampling interval required for ECG.

As a baseline for comparison, we have implemented the compressive sensing approach of Rajoub,15 which is based on the discrete wavelet transform (DWT) with five decomposition levels. The DWT coefficients are divided into three groups, and a threshold is set for each group based on a desired energy packing efficiency. To retain 95% of the signal energy, the thresholds are set to 99.9% for the approximation band coefficients of level five, 97% for the detail band coefficients of level five, and 85% for the detail subband coefficients of levels 1–4. To determine the most significant coefficients for each level i, we: i) calculate the energy of all coefficients, EC_{i,j}; ii) sort EC_{i,j} in descending order; and iii) add EC_{i,j} from the sorted list progressively until the desired thresholded energy corresponding to level i (i.e., energy × threshold_i) is reached. The remaining coefficients are below the threshold and thus deemed insignificant. A binary significance map is then formed, where a binary one is output if the wavelet decomposition coefficient is significant, and zero otherwise. Compression is achieved using direct binary representation of the significant coefficients.
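The sketch below illustrates this baseline coder with PyWavelets: a five-level decomposition, per-band energy packing that keeps the largest coefficients until the band's target energy is reached, and a binary significance map of the survivors. The wavelet choice (db4) is our assumption; the cited work's exact wavelet is not restated here.

```python
# DWT baseline sketch: per-band energy-packing thresholds + significance maps.
import numpy as np
import pywt

def significant_mask(band: np.ndarray, frac: float) -> np.ndarray:
    energy = band ** 2
    order = np.argsort(energy)[::-1]        # descending coefficient energy
    cum = np.cumsum(energy[order])
    keep = order[: np.searchsorted(cum, frac * energy.sum()) + 1]
    mask = np.zeros(band.size, dtype=bool)
    mask[keep] = True
    return mask                              # the binary significance map

def compress(ecg: np.ndarray):
    bands = pywt.wavedec(ecg, "db4", level=5)     # [cA5, cD5, cD4, cD3, cD2, cD1]
    fracs = [0.999, 0.97] + [0.85] * 4            # energy targets per band
    masks = [significant_mask(b, f) for b, f in zip(bands, fracs)]
    return [b[m] for b, m in zip(bands, masks)], masks
```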

Figure 3 captures the energy savings resulting from reduced packet transmissions and optimized data sample quantization. Our Telehealth-IoT Energy Optimizer (TEO) achieves dramatic power savings, indicating that a sixfold reduction in the communication overhead is possible with a variation threshold of 10^-5. Figure 3 also highlights the significant impact of the tolerated inaccuracy on the performance; tolerating more deviation between the predicted and actual data enables skipping the transmission of more samples and consequently conserves more energy. The figure also demonstrates the superiority of the Telehealth-IoT optimization relative to contemporary compressive sensing. On average, TEO could skip up to 80% of the samples for α = 10^-4 and 28% for α = 10^-7. Overall, our TEO approach is complementary to, rather than an alternative to, any compressive sensing algorithm and may be applied generically to various sensor modalities. As indicated by the results, combining compressive sensing with our TEO approach yields performance that surpasses each of them individually. A sensor node in our Telehealth-IoT system has a battery rating of 2.3 Watt-hours (Wh). Thus, the baseline approach will last approximately 4 hours (2.3 Wh / 0.58 W, from Figure 3) of continuous operation, whereas our TEO approach extends that time to 22 hours.

Figure 3. Capturing the energy savings achieved by Telehealth-IoT in comparison to compressive sensing and to the baseline case where no optimization is applied, i.e., all samples are transmitted.

CONCLUSION AND FUTURE WORK

In this article, we have presented our novel, patent-pending Telehealth-IoT system for diagnosing COVID-19 related symptoms and similar contagious diseases. Our preliminary results show a classification accuracy of 80% for COVID-19 diagnosis without any data augmentation. Furthermore, we have demonstrated that our solution can operate while conserving energy through predictive sampling. Our future work includes extending our multimodal analysis to fuse the symptom indicators from the single-modality analysis using a fuzzy mechanism, and assessing the performance of data sample prediction for other signals such as EEG and EMG. We hope to begin our clinical study by Q1 of 2021.

Acknowledgments

This work was supported by the National Science Foundation under Grant #1912945 and Grant #2030629.

Biographies

Lloyd E. Emokpae is the CEO and co-founder of LASARRUS. He received the Ph.D. degree in computer engineering from the University of Maryland Baltimore County, Baltimore, MD, USA, in 2013. He has over 14 years of experience in science and engineering with over 20 publications, 1 issued patent, and 3 pending patents. Contact him at lloyd.emokpae@lasarrus.com.

Wassila Lalouani is currently a Research Assistant with the Embedded Systems and Networks Laboratory, Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County. She received the Ph.D. degree in computer science from the University of Science and Technology Houari Boumediene, Bab Ezzouar, Algeria. Her interests include network management and protocols, machine learning, and network security. Contact her at lwassil1@umbc.edu.

Mohamed Younis is currently a Professor with the Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County. His technical interests include network architectures and protocols, wireless sensor networks, embedded systems, fault tolerant computing, secure communication, and cyber physical systems. He received the Ph.D. degree in computer science from New Jersey Institute of Technology, Newark, NJ, USA. He is a senior member of the IEEE and the IEEE Communications Society. Contact him at younis@umbc.edu.

Roland N. Emokpae Jr. is a medical graduate, a co-founder, and clinical scientist at LASARRUS, with over 12 years in medical related research. His research interest includes diagnostics tools for remote patient monitoring, wearable devices, and technologies for improved clinical outcomes. He received the Graduate degree from St. George's University, True Blue, Grenada, in 2015. Contact him at roland.emokpae.jr@lasarrus.com.


References

1. CDC COVID-19 Response Team, “Severe outcomes among patients with coronavirus disease 2019 (COVID-19) — United States, February 12–March 16, 2020,” MMWR Morb. Mortal. Wkly. Rep., vol. 69, pp. 343–346, Mar. 2020.
2. Riazul Islam S. M., Kwak D., Humaun Kabir M., Hossain M., and Kwak K.-S., “The Internet of Things for health care: A comprehensive survey,” IEEE Access, vol. 3, pp. 678–708, Jun. 2015.
3. Nadeem A., Hussain M. A., Owais O., Salam A., Iqbal S., and Ahsan K., “Application specific study, analysis and classification of body area wireless sensor network applications,” Comput. Netw., vol. 83, pp. 363–380, 2015.
4. Mølgaard R. R., Larsen P., and Håkonsen S. J., “Effectiveness of respiratory rates in determining clinical deterioration: A systematic review protocol,” JBI Database Syst. Rev. Implement. Rep., vol. 14, no. 7, pp. 19–27, 2016.
5. Subbe C., Davies R., Williams E., Rutherford P., and Gemmell L., “Effect of introducing the modified early warning score on clinical outcomes, cardio-pulmonary arrests and intensive care utilisation in acute medical admissions,” Anaesthesia, vol. 58, pp. 797–802, 2003.
6. Charlton P. H., et al., “Extraction of respiratory signals from the electrocardiogram and photoplethysmogram: Technical and physiological determinants,” Physiol. Meas., vol. 38, pp. 669–690, 2017.
7. Sapsanis C., et al., “StethoVest: A simultaneous multichannel wearable system for cardiac acoustic mapping,” in Proc. IEEE Biomed. Circuits Syst. Conf., Cleveland, OH, USA, 2018, pp. 191–194.
8. Klum M., et al., “Wearable multimodal stethoscope patch for wireless biosignal acquisition and long-term auscultation,” in Proc. 41st Int. Conf. IEEE Eng. Med. Biol. Soc., Berlin, Germany, 2019, pp. 5781–5785.
9. Lin B., Jhang R., and Lin B., “Wearable cardiopulmonary function evaluation system for six-minute walking test,” Sensors, vol. 19, 2019, Art. no. 4656.
10. Shi H., et al., “Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: A descriptive study,” Lancet Infect. Dis., vol. 20, no. 4, Apr. 2020.
11. Waheed A., Goyal M., Gupta D., Khanna A., Al-Turjman F., and Pinheiro P. R., “CovidGAN: Data augmentation using auxiliary classifier GAN for improved COVID-19 detection,” IEEE Access, vol. 8, pp. 91916–91923, 2020.
12. Imran A., et al., “AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app,” Inform. Med. Unlocked, vol. 20, 2020, Art. no. 100378.
13. Sharma N., et al., “Coswara — A database of breathing, cough, and voice sounds for COVID-19 diagnosis,” 2020, arXiv:2005.10548.
14. Al Disi M., et al., “ECG signal reconstruction on the IoT-gateway and efficacy of compressive sensing under real-time constraints,” IEEE Access, vol. 6, pp. 69130–69140, 2018.
15. Rajoub B., “An efficient coding algorithm for the compression of ECG signals using the wavelet transform,” IEEE Trans. Biomed. Eng., vol. 49, no. 4, pp. 355–362, Apr. 2002.
