IEEE Sensors Journal. 2020 Oct 5;21(3):2921–2928. doi: 10.1109/JSEN.2020.3028494

A Movement Detection System Using Continuous-Wave Doppler Radar Sensor and Convolutional Neural Network to Detect Cough and Other Gestures

Euclides Lourenço Chuma and Yuzo Iano

Abstract

The 2019 coronavirus disease (COVID-19) pandemic has infected millions of people, resulting in high fatality rates. Recently emerging artificial intelligence technologies, such as the convolutional neural network (CNN), are strengthening the power of imaging tools and can help medical specialists. Combined with other sensors, CNNs create new solutions to fight COVID-19 transmission. This paper presents a novel method to detect coughs (an important symptom of COVID-19) using a K-band continuous-wave Doppler radar together with three of the most popular CNN architectures: AlexNet, VGG-19, and GoogLeNet. The proposed method achieves a cough detection test accuracy of 88.0% using the AlexNet CNN with people 1 m away from the microwave radar sensor, 80.0% with people 3 m away from the radar sensor, and 86.5% on a single mixed dataset with people 1 m and 3 m away from the radar sensor. The K-band radar sensor is inexpensive, completely camera-free, and collects no personally identifying information, allaying privacy concerns while still providing in-depth public health data at individual, local, and national levels. Additionally, the measurements are conducted without human contact, making the proposed process safe for the investigation of contagious diseases such as COVID-19. Unlike traditional cameras, the proposed cough detection system using a microwave radar sensor is environmentally robust and independent of lighting conditions. The proposed microwave radar sensor can be used alone or combined with other sensors in a sensor-fusion system to create a robust system to detect coughs and other movements, especially when using CNNs.

Keywords: Sensor, cough, COVID-19, radar, neural network

I. Introduction

Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is present worldwide, infecting millions of people and resulting in high fatality rates. Owing to its highly contagious nature and the lack of vaccines and appropriate drug treatments, the only effective measures to prevent further spread are social distancing, quarantine, and isolation. However, quickly identifying people with COVID-19 allows for the adoption of measures that can flatten the curve and enable proper allocation of limited medical resources.

The main symptoms of COVID-19 are fever, dry cough, and shortness of breath [1]. Fever can be detected at a distance using thermographic cameras or infrared thermometers [2]. A cough can be detected from sound [3]–[6], video [7], or airflow [8], but sound-based detection is limited by distance.

Artificial intelligence based technologies are playing a significant role in the COVID-19 pandemic response [9]–[13]: studying the virus, testing potential treatments, diagnosing individuals, and analyzing the public health impact.

In the last few years, a new generation of microwave sensors has made non-contact heart and respiration rate measurements through clothing possible with a microwave Doppler sensor [14]–[18]. However, this type of sensor poses a number of radar-monitoring challenges, such as removing the patient's motion and respiration artifacts [19], suppressing the background [20], and developing rate-detection methods for heart rate variability analysis [21]–[23].

Sensors and artificial intelligence are complementary in the development of new technologies. Among the most varied uses, artificial intelligence has been applied to sensors to investigate the integrity of physical structures [24], to detect defects in wheels [25], and to detect cardiac abnormalities [26].

This paper presents a system to detect coughs that uses a microwave Doppler sensor to capture the unique time-varying characteristics of the different body motions that occur during coughing and feeds these data to a CNN. In other words, it introduces a machine learning approach to recognize a cough waveform measured by Doppler radar.

The microwave radar sensor used in this work is a low-cost K-band motion sensor and is completely camera-free. Therefore, the proposed method collects no personally identifying information, allaying privacy concerns. The sensor proposed in this article can instead serve as a trigger that starts the identification process of the sick person (for example, capturing an image of the person who coughed), thus protecting the privacy not only of that person but of everyone nearby.

Additionally, the measurements can be made without human contact, making the process proposed in this work safe for the investigation of contagious diseases such as COVID-19. The proposed cough detection system using microwave radar sensor has environmental robustness and dark/light-independence, unlike traditional cameras.

Because cough is an important symptom of COVID-19, cough detection together with fever measurement by thermal cameras can screen patients before referring them to other types of diagnosis.

II. Methods

The proposed work can be divided into two major activities: signal acquisition with the microwave Doppler sensor and recognition of the cough signal pattern with machine learning. Fig. 1 shows a block diagram of the proposed cough detection system: the microwave radar sensor generates a signal carrying the person's movements, which is digitized by the oscilloscope and sent to a computer. On the computer, a MATLAB application generates pseudocolor images from the acquired signals, and these images are used to train and validate the CNNs.

Fig. 1. Block diagram of the proposed cough detection system.

A. Doppler Radar

Continuous-wave (CW) radar uses a voltage-controlled oscillator to continuously transmit a signal. The receiver is always on to detect the echo signal. The CW radar is a simple radar and is easier to integrate into mobile devices. Fig. 2 shows a diagram of the CW radar [27].

Fig. 2. Diagram of the continuous-wave (CW) radar [27].

This work exploits the CW Doppler effect: the frequency of a wave changes in relation to an observer who is moving relative to the wave source.

If the target is in motion, the distance $R$ from the radar to the target and the phase $\phi$ (angular excursion) are continually changing. A change in $\phi$ with respect to time is equal to a frequency. Therefore, the Doppler angular frequency $\omega_d$ is [28]:

$$\omega_d = 2\pi f_d = \frac{4\pi v_r}{\lambda}$$

where $f_d$ is the Doppler frequency shift, $v_r$ is the relative velocity of the target with respect to the radar, and $\lambda$ is the wavelength. The Doppler frequency shift is:

$$f_d = \frac{2 v_r}{\lambda} = \frac{2 v_r f_0}{c}$$

where $f_0$ is the transmitted frequency and $c$ is the velocity of propagation ($c \approx 3 \times 10^8$ m/s).

Therefore, for CW Doppler radar, an unmodulated transmitted signal is given by:

$$s_T(t) = A_T \cos(\omega_0 t + \phi)$$

where $A_T$ is the amplitude of the signal, $\omega_0$ is the nominal carrier angular frequency, $t$ is the elapsed time, and $\phi$ is the phase of the signal.

The received signal is:

$$s_R(t) = \alpha\, A_T \cos\big[(\omega_0 \pm \omega_d)(t - t_d) + \phi\big]$$

where $\alpha$ is the attenuation factor, $\omega_d$ is the Doppler angular frequency shift, and $t_d$ is the time delay.
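As a quick sanity check (illustrative numbers, not taken from the paper), the MATLAB snippet below evaluates the Doppler shift that a body motion of 0.5 m/s would produce at the 24 GHz carrier used in this work; the result is in the tens-of-hertz range, which is why the radar output is a low-frequency signal that must be amplified, as discussed below.

```matlab
% Illustrative Doppler-shift calculation (velocity value assumed for illustration).
f0 = 24e9;              % transmitted frequency, Hz (K-band carrier)
c  = 3e8;               % velocity of propagation, m/s
vr = 0.5;               % assumed radial velocity of the moving body part, m/s

fd = 2 * vr * f0 / c;   % Doppler frequency shift: 80 Hz
wd = 2 * pi * fd;       % Doppler angular frequency, rad/s
fprintf('fd = %.0f Hz, wd = %.0f rad/s\n', fd, wd);
```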

Microwave and millimeter-wave radar sensors are common, and the simplest models, such as CW radars, are inexpensive.

The microwave radar sensor used in this work is the CW mono transceiver IPM 165 from InnoSenT [29], [30], a universal low-cost K-band transceiver for motion detection in various applications. Although simple, the sensor offers outstanding sensitivity: a human being can easily be detected at a range of up to 15 m or even beyond. Fig. 3 shows the microwave radar sensor used.

Fig. 3. Low-cost K-band CW radar sensor [29].

The microwave radar sensor IPM 165 operates in the 24.00–24.25 GHz frequency range (K-band) with a typical output power of 16 dBm. More details about the microwave radar sensor, including its radiation pattern, can be found in the datasheet [30].

The signal available at the unit output is sinusoidal for a steadily moving object and has very low amplitude; therefore, it must be amplified immediately with high input impedance and the lowest possible noise contribution. Fig. 4 shows the circuit schematic of the low-frequency (LF) amplifier used in this work.

Fig. 4. Circuit schematic of the LF amplifier used.

B. Data Transform

The data acquired by the oscilloscope were transferred to the computer and imported into MATLAB, where they were transformed into an image with a pseudocolor plot [31] using the captured values.

A pseudocolor plot displays matrix data as an array of colored cells (known as faces). Pseudocolor creates this plot as a flat surface in the x-y plane. The surface is defined by a grid of x-coordinates and y-coordinates that correspond to the corners (or vertices) of the faces. A matrix specifies the colors at the vertices. The color of each face depends on the color at one of its four surrounding vertices. Of the four vertices, the one that comes first in the x-y grid determines the color of the face.

Therefore, the grayscale intensities of the acquisitions are mapped into a colormap, and each intensity generates a unique color. The colormap array used is Parula, found in MATLAB [32]. Thus, the vector of data (a grayscale image) was transformed into an array of RGB values, as required by AlexNet, which accepts only RGB images. Fig. 5 shows an illustration of the pseudocolor process.

Fig. 5. Transforming the image of the acquisitions into a pseudocolor image.

During color indexing with the pseudocolor function, the range of color indexes is narrowed to saturate the colors at the limits, thus allowing greater contrast in the generated image.
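A minimal MATLAB sketch of this transformation is shown below. The file name, the reshape of the 1-D acquisition into a 2-D matrix, and the clipping fractions are assumptions for illustration; only the Parula mapping, the contrast clipping, and the RGB conversion follow the description above.

```matlab
% Minimal sketch: radar acquisition -> pseudocolor RGB image for the CNNs.
sig = readmatrix('acquisition.csv');          % hypothetical file of scope samples
M = reshape(sig(1:1250000), 1000, 1250);      % assumed reshape into a 2-D grid

cmap = parula(256);                           % Parula colormap [32]
lo = 0.5 * min(M(:));                         % narrowed color limits: values
hi = 0.5 * max(M(:));                         % beyond them saturate (more contrast)
idx = 1 + round(255 * (min(max(M, lo), hi) - lo) / (hi - lo));
rgb = ind2rgb(idx, cmap);                     % indexed image -> RGB image

rgb = imresize(rgb, [227 227]);               % AlexNet input (224x224 for VGG/GoogLeNet)
imwrite(rgb, 'sample_cough_001.png');         % hypothetical output file name
```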

Fig. 6 shows the image of the acquired data from the oscilloscope and the transformed image using a pseudocolor plot with colormap limits changed to increase the contrast.

Fig. 6. Oscilloscope image transformed into a pseudocolor image.

C. Convolutional Neural Network

The convolutional neural network (ConvNet or CNN) is a type of deep learning network and one of the main architectures for image recognition and image classification; object detection and face recognition are among the areas where CNNs are widely used. CNNs are typically composed of convolutional layers, which create feature maps that describe the input in different ways, and pooling layers, which compress the spatial dimensions, reducing the number of parameters needed to extract features in the following layers. Fig. 7 shows an example of a typical CNN architecture.
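As a toy illustration of this conv/pool/fully-connected stacking (a generic example, not a network from the paper), a minimal layer array in MATLAB might look like:

```matlab
% Generic CNN layer stack: convolution -> pooling -> classification.
layers = [
    imageInputLayer([227 227 3])                  % RGB input image
    convolution2dLayer(3, 16, 'Padding', 'same')  % feature mapping
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)             % spatial compression
    fullyConnectedLayer(5)                        % 5 output classes (gestures)
    softmaxLayer
    classificationLayer];                         % cross-entropy loss
```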

Fig. 7. Typical CNN architecture.

To evaluate the effectiveness of the proposed system, three popular CNNs were tested: AlexNet, VGG-19, and GoogLeNet.

Of the CNN models evaluated, a pre-trained AlexNet model obtained the best accuracy. The AlexNet architecture, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. More detailed information about the architecture was previously published [33].

Another popular CNN tested was VGG-19 [34], a 19-layer-deep CNN. This network is characterized by its simplicity, using only 3×3 convolutional layers stacked on top of each other in increasing depth, with max pooling used to reduce the volume size. Two fully-connected layers, each with 4,096 nodes, are then followed by a softmax classifier.

The GoogLeNet CNN [35] was also tested, with good results. GoogLeNet proposes inception modules, which process the input through parallel convolutional branches whose outputs are concatenated, forming a mini-module that is repeated throughout the network. The overall GoogLeNet architecture is 22 layers deep and was designed with computational efficiency in mind, so that it can run on individual devices even with low computational resources. The architecture also contains two auxiliary classifier layers connected to the outputs of intermediate inception layers.

III. Experimental Methods

A. Measurement Setup

To capture data for training and validation, the experimental measurement setup comprised the K-band CW radar sensor, a Tektronix AFG2021 signal generator to generate the trigger pulse, a Tektronix DPO2002B oscilloscope to capture the signal from the radar sensor, and an Amrel LPS-305 power supply to power the radar sensor.

The oscilloscope was connected to a computer by USB to record 1,250,000 points of the captured signals with 8-bit resolution. Fig. 8 shows the measurement setup used to capture the signals from the radar.

Fig. 8. Measurement setup used to capture the signals from the 24 GHz CW radar sensor.

The data were acquired in time intervals of 4 seconds (i.e., a sampling rate of 312,500 samples per second for the 1,250,000 recorded points) over a voltage range of −5 V to 5 V, synchronized with the start of each event by the trigger pulse sent from the signal generator to the oscilloscope.

B. Data Collection

To build and evaluate the proposed system, we created a database of recordings of coughing, standing still, moving the arms, scratching the head, and shaking the head from ten healthy volunteers, five male and five female. Each volunteer produced four event records at 1 m and at 3 m from the radar sensor. In total, there are 300 samples with people 1 m away from the radar sensor and another 300 samples with people 3 m away, forming a dataset of 600 sample records. For the AlexNet CNN, images of 227×227 pixels were used, and for the GoogLeNet and VGG-19 CNNs, images of 224×224 pixels were used. Fig. 9 shows examples of acquired signals transformed into images with the MATLAB pseudocolor function.

Fig. 9. Examples of acquired signals transformed into images with the MATLAB pseudocolor function for use in the CNNs.

IV. Results

A. Experimental Results

The software used to test the CNNs was Deep Network Designer from MATLAB.

To train the CNNs, we first split our database into two groups: samples with people 1 m away and samples with people 3 m away from the microwave radar sensor. Each group was split into three parts: 140 samples for building the model, 60 for validation, and 100 for testing. The training set was used to train the CNNs. The trained model was then run several times against the validation set to find optimal model hyperparameters (e.g., learning rate and batch size). Once all hyperparameters were found, the model was retrained and run against the test set for the final evaluation.

The convolutional networks were trained using stochastic gradient descent (SGD) with a learning rate of 0.0001 and a batch size of four.
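A minimal MATLAB transfer-learning sketch consistent with this setup is given below. The folder layout, variable names, and the replaced layer indices follow MATLAB's standard AlexNet transfer-learning recipe and are assumptions, not details from the paper; the split fractions and hyperparameters match the text.

```matlab
% Minimal sketch: train AlexNet on the pseudocolor gesture images.
% Assumed layout: dataset_1m/<gesture_name>/*.png, one subfolder per class.
imds = imageDatastore('dataset_1m', 'IncludeSubfolders', true, ...
                      'LabelSource', 'foldernames');
[imdsTrain, imdsVal, imdsTest] = splitEachLabel(imds, 140/300, 60/300, 'randomized');

net = alexnet;                          % pre-trained AlexNet
layers = net.Layers;
layers(23) = fullyConnectedLayer(5);    % 5 gesture classes instead of 1000
layers(25) = classificationLayer;       % cross-entropy loss [36]

opts = trainingOptions('sgdm', ...      % MATLAB's SGD (with momentum) solver
    'InitialLearnRate', 1e-4, ...       % learning rate 0.0001
    'MiniBatchSize', 4, ...             % batch size 4
    'ValidationData', imdsVal, ...
    'MaxEpochs', 30, 'Plots', 'training-progress');

trained = trainNetwork(imdsTrain, layers, opts);
```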

In the proposed system, the AlexNet, VGG-19, and GoogLeNet CNNs, which are among the most common CNNs, were tested. Measurements were performed with people 1 m and 3 m away from the microwave radar sensor.

Deep Network Designer from MATLAB provides the accuracy and loss information. The loss function used for AlexNet, VGG-19, and GoogLeNet was the cross-entropy loss [36], via the classification layer that computes the cross-entropy loss for multi-class classification problems with mutually exclusive classes.
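For reference, the multi-class cross-entropy computed by this layer is the standard definition (the notation below is ours, following [36], and is not restated in the paper):

$$E = -\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K} t_{nk}\,\ln y_{nk}$$

where $N$ is the number of samples, $K$ is the number of classes, $t_{nk}$ is the one-hot target, and $y_{nk}$ is the softmax output for sample $n$ and class $k$.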

Table I summarizes the results obtained using the proposed CNNs with people 1 m away from the microwave radar sensor, for a total of 200 samples (140 for training and 60 for validating the CNNs).

TABLE I. Results Obtained With People 1 m Away From the Microwave Radar Sensor Using the Proposed CNNs.

                AlexNet      VGG-19       GoogLeNet
Accuracy        95.0%        93.3%        93.3%
Loss            0.23         0.35         0.21
Convergence     ~10 epochs   ~10 epochs   ~15 epochs

From the results in Table I, AlexNet shows the best accuracy performance. Using AlexNet, the training converges after roughly 10 epochs with an accuracy of 95.0% and a loss of 0.23.

We used the remaining 100 samples (20 for each gesture) acquired with people 1 m away from the microwave radar sensor to test the CNNs. The confusion matrix of the tested trained AlexNet is shown in Fig. 10, covering detection of coughing, moving arms, scratching the head, shaking the head, and standing still.

Fig. 10. Confusion matrix using AlexNet with people 1 m away from the microwave radar sensor.
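A minimal sketch of this test-set evaluation, continuing the training sketch above (the variable names are assumptions carried over from that sketch), could look as follows:

```matlab
% Evaluate the trained network on the held-out test samples and build the
% confusion matrix of Fig. 10.
pred = classify(trained, imdsTest);        % predicted gesture labels
acc  = mean(pred == imdsTest.Labels);      % test accuracy
fprintf('Test accuracy: %.1f%%\n', 100 * acc);
confusionchart(imdsTest.Labels, pred);     % confusion matrix plot
```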

As in Table I, Table II summarizes the results obtained using the proposed CNNs, but with people 3 m away from the microwave radar sensor. The database has the same total of 200 samples (140 for training and 60 for validating the CNNs).

TABLE II. Results Obtained With People 3 m Away From the Microwave Radar Sensor Using the Proposed CNNs.

                AlexNet      VGG-19       GoogLeNet
Accuracy        86.6%        80.0%        76.6%
Loss            0.85         1.06         1.44
Convergence     ~30 epochs   ~40 epochs   ~20 epochs

Again, from the results in Table II, AlexNet shows the best accuracy performance. Using AlexNet, the training converges after roughly 30 epochs with an accuracy of 86.6% and a loss of 0.85.

We also used the remaining 100 samples (20 for each gesture) acquired with people 3 m away from the microwave radar sensor to test the CNNs. The confusion matrix of the tested trained AlexNet is shown in Fig. 11, covering the same five gestures.

Fig. 11. Confusion matrix using AlexNet with people 3 m away from the microwave radar sensor.

When the values in Table I (people 1 m away from the microwave radar sensor) are compared with the values in Table II (people 3 m away), a reduction in the accuracy of the tested CNNs is observed.

Finally, the proposed CNNs were tested with a dataset containing all the acquired samples together, in other words, the samples acquired with people 1 m and 3 m away from the radar sensor in a single mixed dataset of 600 sample records. Table III summarizes the results.

TABLE III. Results Obtained With People 1 m and 3 m Away From the Microwave Radar Sensor in a Single Dataset Using the Proposed CNNs.

                AlexNet      VGG-19       GoogLeNet
Accuracy        81.6%        79.1%        79.1%
Loss            1.55         1.09         1.23
Convergence     ~30 epochs   ~25 epochs   ~20 epochs

From the results in Table III, AlexNet again shows the best accuracy performance. Using AlexNet, the training converges after roughly 30 epochs with an accuracy of 81.6% and a loss of 1.55.

We also used the remaining 200 samples (20 for each gesture at each distance) with people 1 m and 3 m away from the microwave radar sensor to test the CNNs. The confusion matrix of the tested trained AlexNet is shown in Fig. 12.

Fig. 12. Confusion matrix using AlexNet with a single dataset containing people 1 m and 3 m away from the microwave radar sensor.

Although the proposed cough detection system was trained with a dataset of only a few images, it performed well and was able to identify individual coughs within the proposed movement group with an accuracy of 95% using the AlexNet CNN and with people 1 m away from the microwave radar sensor. With people 3 m away, the accuracy was 86.6% using AlexNet. Moreover, with people 1 m and 3 m away in a single mixed dataset, the accuracy was 81.6% using AlexNet.

For people both 1 m and 3 m away from the microwave radar sensor, the CNN with the best accuracy performance was AlexNet, with a test accuracy of 88% at 1 m, 80% at 3 m, and 86.5% for the single dataset mixing 1 m and 3 m. However, other CNNs, such as ResNet and YOLO [37], could also be tested.

With the single set of mixed data, the test accuracy for cough detection was good and was superior to that of the other gestures: moving the arms, scratching the head, and shaking the head. The only class detected better than cough was standing still in front of the radar sensor.

The proposed cough detection system using microwave radar is compared with other works in Table IV and shows good performance.

TABLE IV. Comparison With Other Works for Cough Detection.

Ref         Method            Classifier       Test accuracy
[8]         Airflow           ANN              91.0%
[3]         Sound             CNN (LeNet-5)    89.7%
[38]        Accelerometer     Random forest    81.4%
This work   Microwave radar   CNN (AlexNet)    86.5% (1 m and 3 m)

B. Discussion

The proposed work is novel because it uses radar sensors to capture not only hand gestures but whole-body gestures, including coughing, which is a common symptom of disease.

Microwave radar sensors have already been used in recent studies to capture hand gestures [39], [40] and heartbeats [41]. In Choi et al. [41], a Doppler radar sensor used for heartbeat detection showed a maximum error rate of 14.3%. Therefore, in comparison with other uses of radar sensors to capture human movements, the proposed method has good accuracy in detecting human gestures, and in addition it is able to classify them thanks to the use of CNNs.

C. Limitations

This work did not consider an environment with multiple people moving. It is also important to mention that the tests were performed offline; that is, the data were acquired and then processed to validate the efficiency of radar as a new possibility to "see" movements in the environment. Real-time identification is planned for future work.

Another important issue is that the microwave radar sensor used in this article has only one channel; therefore, it is not possible to determine the direction of movement.

It is also important to mention that the coughing gestures were captured from healthy volunteers imitating a cough without a mask. At the time of writing, the healthcare system is overwhelmed and focused on treating patients with COVID-19. In the future, we hope to acquire cough data from patients with COVID-19.

V. Conclusion

In this paper, a novel cough detection system was proposed using a K-band continuous-wave Doppler radar sensor and popular convolutional neural network architectures: AlexNet, VGG-19, and GoogLeNet. Tests were carried out with people 1 m and 3 m away from the microwave radar sensor.

The system achieves a cough detection test accuracy of 88.0% using the AlexNet CNN with people 1 m away from the microwave radar sensor, 80.0% with people 3 m away from the radar sensor, and 86.5% with a single mixed dataset of people 1 m and 3 m away from the radar sensor. Our work here mainly focuses on the cough classification task; in the future, we will focus on cough detection for people infected with COVID-19 and other respiratory diseases.

The proposed system is completely camera-free and collects no personally identifying information. The measurements are performed without human contact, making the proposed process safe for contagious diseases such as COVID-19. Unlike traditional cameras, the proposed cough detection system using this microwave radar sensor is environmentally robust and independent of lighting conditions.

The proposed microwave radar can serve as an additional sensor in a sensor-fusion system to create a robust system for detecting coughs and other movements, especially when using CNNs.

Biographies


Euclides Lourenço Chuma (Member, IEEE) received the degree in mathematics from the University of Campinas (UNICAMP) in 2003, the master’s degree in networks and telecommunications systems from the INATEL in 2015, and the M.Sc. and Ph.D. degrees in electrical engineering from the UNICAMP, Brazil, in 2017 and 2019, respectively. His research interests include microwave, millimeter wave, photonics, bioengineering, sensors, wireless power transfer, and telecommunications.


Yuzo Iano (Life Member, IEEE) received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering from the University of Campinas (UNICAMP), Brazil, in 1972, 1974, and 1986, respectively. He is a Professor of Electrical Engineering with the UNICAMP and has been the Head and Founder of the Laboratory of Visual Communications since 1972. His research interests include digital signal processing (images/audio/video), digital TV, 4G (LTE) and 5G cellular networks, pattern recognition, smart cities, smart grids, and the Internet of Things.

Contributor Information

Euclides Lourenço Chuma, Email: euclides.chuma@ieee.org.

Yuzo Iano, Email: yuzo@decom.fee.unicamp.br.

References

[1] Guan W. et al., "Clinical characteristics of coronavirus disease 2019 in China," New England J. Med., vol. 382, pp. 1708–1720, 2020.
[2] Chiang M.-F. et al., "Mass screening of suspected febrile patients with remote-sensing infrared thermography: Alarm temperature and optimal distance," J. Formosan Med. Assoc., vol. 107, no. 12, pp. 937–944, Dec. 2008.
[3] Amoh J. and Odame K., "Deep neural networks for identifying cough sounds," IEEE Trans. Biomed. Circuits Syst., vol. 10, no. 5, pp. 1003–1011, Oct. 2016.
[4] Matos S., Birring S. S., Pavord I. D., and Evans D. H., "Detection of cough signals in continuous audio recordings using hidden Markov models," IEEE Trans. Biomed. Eng., vol. 53, no. 6, pp. 1078–1083, Jun. 2006.
[5] Rudraraju G. et al., "Cough sound analysis and objective correlation with spirometry and clinical diagnosis," Informat. Med. Unlocked, vol. 19, 2020, Art. no. 100319.
[6] Imran A. et al., "AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app," Informat. Med. Unlocked, vol. 20, 2020, Art. no. 100378.
[7] Thi T., Wang L., Ye N., Zhang J., Maurer-Stroh S., and Cheng L., "Recognizing flu-like symptoms from videos," BMC Bioinf., vol. 15, no. 1, p. 300, 2014.
[8] Soliński M., Łepek M., and Kołtowski Ł., "Automatic cough detection based on airflow signals for portable spirometry system," Informat. Med. Unlocked, vol. 18, Oct. 2020, Art. no. 100313.
[9] Vaishya R., Javaid M., Khan I. H., and Haleem A., "Artificial intelligence (AI) applications for COVID-19 pandemic," Diabetes Metabolic Syndrome, Clin. Res. Rev., vol. 14, no. 4, pp. 337–339, Jul. 2020.
[10] Alimadadi A., Aryal S., Manandhar I., Munroe P. B., Joe B., and Cheng X., "Artificial intelligence and machine learning to fight COVID-19," Physiol. Genomics, vol. 52, no. 4, pp. 200–202, Apr. 2020.
[11] Mei X. et al., "Artificial intelligence-enabled rapid diagnosis of COVID-19 patients," medRxiv, vol. 23, pp. 1224–1228, Jan. 2020.
[12] Gates B., "Responding to Covid-19 — A once-in-a-century pandemic?" New England J. Med., vol. 382, no. 18, pp. 1677–1679, Apr. 2020.
[13] Shi F. et al., "Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19," IEEE Rev. Biomed. Eng., early access, Apr. 16, 2020, doi: 10.1109/RBME.2020.2987975.
[14] Petrovic V. L., Jankovic M. M., Lupsic A. V., Mihajlovic V. R., and Popovic-Bozovic J. S., "High-accuracy real-time monitoring of heart rate variability using 24 GHz continuous-wave Doppler radar," IEEE Access, vol. 7, pp. 74721–74733, 2019.
[15] Zakrzewski M., Raittinen H., and Vanhala J., "Comparison of center estimation algorithms for heart and respiration monitoring with microwave Doppler radar," IEEE Sensors J., vol. 12, no. 3, pp. 627–634, Mar. 2012.
[16] Saluja J., Casanova J., and Lin J., "A supervised machine learning algorithm for heart-rate detection using Doppler motion-sensing radar," IEEE J. Electromagn., RF Microw. Med. Biol., vol. 4, no. 1, pp. 45–51, Mar. 2020.
[17] Hu X., Qiu L., Zhao D., Chen Q., Qian R., and Jin T., "Acceleration-based algorithm for long monitoring the micro motions on a stationary subject using UWB radar," in Proc. Int. Conf. Circuits, Devices Syst. (ICCDS), 2017, pp. 205–208.
[18] Erdoğan S., "Microwave noncontact vital sign measurements for medical applications," in Proc. IEEE Int. Symp. Med. Meas. Appl. (MeMeA), 2019, pp. 1–5.
[19] Morgan D. R. and Zierdt M. G., "Novel signal processing techniques for Doppler radar cardiopulmonary sensing," Signal Process., vol. 89, no. 1, pp. 45–66, Jan. 2009.
[20] Li C. and Lin J., "Complex signal demodulation and random body movement cancellation techniques for non-contact vital sign detection," in IEEE MTT-S Int. Microw. Symp. Dig., Oct. 2008, pp. 567–570.
[21] Høst-Madsen A. et al., "Signal processing methods for Doppler radar heart rate monitoring," in Signal Processing Techniques for Knowledge Extraction and Information Fusion, Mandic D., Ed. Berlin, Germany: Springer, 2008.
[22] Massagram W., Lubecke V. M., Host-Madsen A., and Boric-Lubecke O., "Assessment of heart rate variability and respiratory sinus arrhythmia via Doppler radar," IEEE Trans. Microw. Theory Techn., vol. 57, no. 10, pp. 2542–2549, Oct. 2009.
[23] Boric-Lubecke O., Massagram W., Lubecke V. M., Host-Madsen A., and Jokanovic B., "Heart rate variability assessment using Doppler radar with linear demodulation," in Proc. 38th Eur. Microw. Conf., Oct. 2008, pp. 420–423.
[24] Ibrahim A., Eltawil A., Na Y., and El-Tawil S., "A machine learning approach for structural health monitoring using noisy data sets," IEEE Trans. Autom. Sci. Eng., vol. 17, no. 2, pp. 900–908, Apr. 2020.
[25] Krummenacher G., Ong C. S., Koller S., Kobayashi S., and Buhmann J. M., "Wheel defect detection with machine learning," IEEE Trans. Intell. Transp. Syst., vol. 19, no. 4, pp. 1176–1187, Apr. 2018.
[26] Li B. et al., "A cascade learning approach for automated detection of locomotive speed sensor using imbalanced data in ITS," IEEE Access, vol. 7, pp. 90851–90862, 2019.
[27] Wang F.-K., Mercuri M., Horng T.-S. J., and Schreurs D. M. M.-P., "Biomedical radars for monitoring health," in Principles and Applications of RF/Microwave in Healthcare and Biosensing. London, U.K.: Academic, 2016, pp. 243–294.
[28] Skolnik M. I., Introduction to Radar Systems. New York, NY, USA: McGraw-Hill, 1981.
[29] Weidmann W., "IPM-165—A universal low cost K-band transceiver for motion detection in various applications," Innov. Sensor Technol. (InnoSenT), Donnersdorf, Germany, Appl. Note 03, 2006.
[30] IPM-165 Data Sheet, InnoSenT, Donnersdorf, Germany, Apr. 2014.
[31] MathWorks. Pseudocolor Plot. Accessed: Sep. 5, 2020. [Online]. Available: https://www.mathworks.com/help/matlab/ref/pcolor.html
[32] MathWorks. Parula Colormap Array. Accessed: Sep. 5, 2020. [Online]. Available: https://www.mathworks.com/help/matlab/ref/parula.html
[33] Krizhevsky A., Sutskever I., and Hinton G. E., "ImageNet classification with deep convolutional neural networks," in Proc. NIPS, 2012, pp. 1097–1105.
[34] Simonyan K. and Zisserman A., "Very deep convolutional networks for large-scale image recognition," in Proc. ICLR, 2015, pp. 1–5.
[35] Szegedy C. et al., "Going deeper with convolutions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2015, pp. 1–9.
[36] Bishop C. M., Pattern Recognition and Machine Learning. New York, NY, USA: Springer, 2006.
[37] Lundervold A. S. and Lundervold A., "An overview of deep learning in medical imaging focusing on MRI," Zeitschrift für Medizinische Physik, vol. 29, no. 2, pp. 102–127, May 2019.
[38] Georgescu T., "Classification of coughs using the wearable RESpeck monitor," Univ. Edinburgh, Edinburgh, U.K., Tech. Rep., 2019.
[39] Hazra S. and Santra A., "Robust gesture recognition using millimetric-wave radar system," IEEE Sensors Lett., vol. 2, no. 4, pp. 1–4, Dec. 2018.
[40] Kim Y. and Toomajian B., "Application of Doppler radar for the recognition of hand gestures using optimized deep convolutional neural networks," in Proc. 11th Eur. Conf. Antennas Propag. (EUCAP), Mar. 2017, pp. 1–5.
[41] Choi C., Park J., Lee H., and Yang J., "Heartbeat detection using a Doppler radar sensor based on the scaling function of wavelet transform," Microw. Opt. Technol. Lett., vol. 61, no. 1, pp. 1792–1796, 2019.
