. 2021 Feb 1;8(21):15965–15976. doi: 10.1109/JIOT.2021.3055804

Trustworthy and Intelligent COVID-19 Diagnostic IoMT Through XR and Deep-Learning-Based Clinic Data Access

Yonghang Tai 1, Bixuan Gao 1, Qiong Li 1,, Zhengtao Yu 2, Chunsheng Zhu 3, Victor Chang 4
PMCID: PMC8769002  PMID: 35782175

Abstract

This article presents a novel extended reality (XR) and deep-learning-based Internet of Medical Things (IoMT) solution for COVID-19 telemedicine diagnostics, which systematically combines virtual reality/augmented reality (VR/AR) remote surgical planning/rehearsal hardware, customized 5G cloud computing, and deep learning algorithms to provide real-time clues for COVID-19 treatment schemes. Compared with existing perception-therapy techniques, the new approach significantly improves both performance and security. The system collected 25 clinical data items from 347 positive and 2270 negative COVID-19 patients in the Red Zone via 5G transmission. A novel auxiliary classifier generative adversarial network (ACGAN)-based intelligent prediction algorithm is then used to train the COVID-19 prediction model. Furthermore, the Copycat network is employed for model stealing and attack on the IoMT to evaluate security performance. To simplify the user interface and achieve an excellent user experience, the Red Zone's guiding images are combined with the Green Zone's view through AR navigation clues delivered over 5G. An XR surgical planning/rehearsal framework covering all COVID-19 surgical requisites is designed with a guaranteed real-time response. The precision, recall, F1-score, and area under the ROC curve (AUC) of the new IoMT were 0.92, 0.98, 0.95, and 0.98, respectively, outperforming existing perception techniques by a significant margin. The stolen model also performs well, with an AUC of 0.90 for the Copycat, only slightly lower than the original model. This study suggests a new framework for COVID-19 diagnostic integration and opens new research on integrating XR and deep learning for IoMT implementation.

Keywords: Auxiliary classifier generative adversarial network (ACGAN), COVID-19, extended reality (XR), Internet of Medical Things (IoMT), security

I. Introduction

To date, Internet of Medical Things (IoMT) technology has been widely recognized and applied for its high performance and practicality. The IoMT enables deep learning to be applied to the automated, accurate prediction of many diseases, assisting and facilitating effective and efficient medical treatment [1]–[3]. However, few studies have investigated diagnostic IoMT through telemedicine, or deep-learning-based attacks targeting the services deployed on IoMT devices, particularly IoMT-based AI services. Meanwhile, extended reality (XR) technology, which encompasses virtual reality (VR), augmented reality (AR), and mixed reality (MR) [4]–[6] and refers to real/virtual environments generated by computer graphics and wearables, has been widely applied in the medical field, especially in telemedicine.

During the outbreak of the COVID-19 pandemic, the IoMT can even detect main symptoms ubiquitously by collecting data from the infected area and customizing the treatment plan based on the aggregated IoMT data. Inspired by the aforementioned approaches, XR is introduced into the COVID-19 diagnostic IoMT, and a customized XR-enabled COVID-19 surgical planning/rehearsal strategy is developed. Building on the deep-learning-based IoMT platform mentioned above, a novel deep neural network (DNN) algorithm is developed to predict whether a case is COVID-19 positive from data delivered over 5G. In addition, to achieve better ergonomics, all COVID-19 diagnostic clues are visualized in our XR surgical decision system. Third, a Copycat-based access control system protects the patients' clinic data used for rendering the XR images. We adopted a simplified approach based on Wang et al. [7], which allows electronic medical data to be accessed and shared on cloud storage. More specifically, each visit request to any patient's clinic data is recorded in the customized 5G cloud together with a timestamp, the requestor's ID, the patient ID, and the image ID.
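The access-logging scheme described above can be sketched as a minimal, cloud-side append-only ledger. This is an illustrative sketch only; the class and field names below are our own and are not part of the deployed system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class AccessRecord:
    # One ledger entry per visit request to a patient's clinic data.
    timestamp: str
    requestor_id: str
    patient_id: str
    image_id: str

class AccessLedger:
    """Append-only log of data-access requests (cloud-side sketch)."""
    def __init__(self) -> None:
        self._records: List[AccessRecord] = []

    def record_access(self, requestor_id: str, patient_id: str, image_id: str) -> AccessRecord:
        rec = AccessRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            requestor_id=requestor_id,
            patient_id=patient_id,
            image_id=image_id,
        )
        self._records.append(rec)
        return rec

    def history(self, patient_id: str) -> List[AccessRecord]:
        # Audit trail: every request ever made for this patient's data.
        return [r for r in self._records if r.patient_id == patient_id]

ledger = AccessLedger()
ledger.record_access("dr-042", "patient-347", "ct-0001")
ledger.record_access("dr-017", "patient-347", "ct-0002")
```

Because records are frozen and the log is append-only, the ledger doubles as an audit trail for every XR rendering request.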

Three original contributions are presented in this article.

  • 1)

    For the first time, deep auxiliary classifier generative adversarial network (ACGAN)-based prediction and telemedicine surgical guiding methods are proposed for COVID-19 diagnosis with 5G IoMT, alleviating the shortage of medical staff and treatment capacity in the Red Zone.

  • 2)

    A Copycat ACGAN is employed to steal and attack the IoMT model so as to evaluate its security performance. The privacy of COVID-19 patients is guaranteed during IoMT data transmission.

  • 3)

    A novel XR-based COVID-19 surgical plan/rehearse prototype has been implemented for evaluating the new techniques and ideas. This work opens new research on the integration of XR and deep learning for telesurgical applications.

II. Related Work

A. XR-Based Implementations for Telemedicine IoMT

To help doctors acquire more information conveniently during an operation, the XR-based IoMT strategy has been evaluated: a 3-D virtual patient is rebuilt from the medical images and superimposed on the real patient in the operating room for 3-D surgical guidance [8]–[10]. A traditional XR system includes two steps: 1) 3-D reconstruction of the anatomy based on CT/MRI images and 2) registration between the reconstructed model and the patient [11]. Although existing commercial software, such as OsiriX, Mimics, and 3D Slicer, can complete the 3-D reconstruction step automatically, semi-automatic manual correction by a professional surgeon is still the most reliable strategy in clinical applications [12]. The Curve (Brainlab AG, Germany) system [13] and the StealthStation are designed for XR navigation in minimally invasive surgery (MIS) [14]; the NavSuite3 (Stryker Corporation, USA) is designed for spine surgery [15]; the Navigation Panel Unit (Storz, Germany) is used for endoscopic surgical navigation [16]; and SCOPIS (Scopis, Germany) [17], with the aid of the Microsoft HoloLens, provides ENT, CMF, neuro, and spine navigation. Nevertheless, these commercial systems are all implemented with either visual-guide or optical-guide mechanisms; in other words, the infrared-based NDI Polaris is the vital unit supporting all of these navigation schemes. Unfortunately, two serious challenges remain for the NDI Polaris system: 1) precise registration between the 3-D model reconstructed from static images and the real patient is the most challenging issue, owing to image distortion caused by human respiration and the heterogeneity of the lesions; and 2) infrared-based navigation is usually limited by signal blocking during real operations, since the surgeons' operating area must not occlude the infrared transmission trajectory, which also leads to many inconveniences in the IoMT. To the best of our knowledge, the majority of XR surgical guidance applications in the operating room focus on medical image fusion algorithms and route planning. Research has not yet introduced many intuitive perceptions, such as tactile feedback over 5G transmission, which would significantly improve surgical accuracy.

Meanwhile, owing to the outbreak of COVID-19, there is increasing interest in telemedicine diagnostics, which can provide a treatment plan without exposing doctors and patients to the risk of infection [18], [19]. Shelton et al. [20] found that within the first two weeks of the stay-at-home order, telemedicine services rose to about 86% or more of visits in the U.S.; at a hospital in Fayetteville, NC, USA, telehealth consultations increased from 2% to 24%. Triantafillou and Rajasekaran [21] suggested that telemedicine allows a patient's health to be examined and helps educate patients virtually on physical examination changes and symptoms that should prompt a discussion with their physicians. Similarly, results from Patel et al. [22] indicate that patient-stored health information can provide guidance for future examinations. Additionally, Li et al. [23] deployed an online platform to reduce the number of in-person visits, thereby lessening face-to-face contact between patients and physicians, which suggests that telemedicine provides an effective triage, screening, and treatment method during the COVID-19 pandemic.

B. AI-Based COVID-19 IoMT Platform

AI-based COVID-19 systems can quickly diagnose COVID-19 pathogens and detect different types of attacks [24]–[28]. In addition, DL inference models have been tested against acoustic-emission disturbances to the classifier, black-box attacks launched through the Clarifai REST API model, and backdoor attacks that update the model [29]. Holshue et al. developed a research-centric clinical decision support system (CDSS) that leverages the power of the Internet of Things to collect real-time physiological data from patients on ventilators and other medical devices. By monitoring and managing the condition of intensive-care patients, doctors can prioritize their care, improving diagnosis, prediction, and event recognition in intensive care units; encrypted files are used to protect patient information [30]. Chan et al. designed a chronic kidney disease prediction system based on an IoMT platform and an adaptive hybridized deep convolutional neural network. CT image data from renal cancer cases were used, and missing values were imputed with median estimates; the dual training of learning and activation mechanisms can effectively help avoid kidney disease. Rehm et al. [31] designed and proposed a new privacy-preserving anonymous Internet-of-Things model, together with an RFID proof of concept, in which a blockchain is used to simulate contract deployment and function execution. The model makes it easier to identify groups of infected contacts and to support mass isolation while protecting individual privacy [32]. Chamola et al. conducted detailed research on the Internet of Things, drones, blockchain, artificial intelligence, and 5G. During the COVID-19 epidemic, the medical Internet of Things can effectively collect, analyze, and transmit clinical data. Drones ensure minimal human interaction and can reach areas inaccessible to humans. Robots and autonomous vehicles have also contributed significantly to automatic disinfection by reducing human contact, and artificial intelligence plays an important role in risk prediction and prognosis [33], [34].

C. Cyberattacks With Deep Learning Network

The IoMT is closely connected to the IoT. Hu et al. [35] put forward the idea that the IoMT could be used in the medical industry. Five years later, Jagadeeswari et al. [36] built a healthcare monitoring system using big-data training, which proved that the idea put forward by Hu had become reality. Nowadays, with the increasing number of cyberattacks, Flynn et al. [37] discovered that IoMT systems based on mobile platforms are straightforward to breach through various network attacks. A series of evidence supports our attack model. Deep learning has gained prominence in many fields, including computer vision and cybersecurity, for example in vulnerability detection [38], [39]. In 2014, Szegedy et al. [40] and follow-up studies [41] demonstrated that small changes to input data such as images can attack deep learning techniques. Dalvi et al. [42] and Lowd and Meek [43] demonstrated similar vulnerabilities in linear classifiers for spam detection.

Barreno et al. [44] pointed out that, with the development of cyberattacks, both ML and DL algorithms can be attacked by a malicious adversary. The literature distinguishes three modes of adversarial attack: white-box, gray-box, and black-box attacks, which differ in how much is known about the target model (including data sets, parameters/hyperparameters, deep learning models, and algorithms). Because of the similarity of COVID-19 text data, among the many forms of adversarial attack, the one with the greatest impact on our network is the gray-box attack. Crafted adversarial samples have been used against DNNs, aiming to create adversarial examples by approaching the decision boundary of the target DNN [45].

III. New System Design

In this section, we address the design and implementation of the COVID-19 diagnostic IoMT through XR and a deep neural model, as demonstrated in Fig. 1. A new K-nearest neighbor (KNN)-based ACGAN model is developed to estimate COVID-19 prediction accuracy, and the XR platform is employed for remote diagnosis. 5G transmission is then used to transfer and compute the medical data for COVID-19 prediction in the 5G cloud. AR remote diagnosis and XR surgical implementations are developed, and we also present the evaluation approaches, which compare performance across different kinds of deep neural algorithms.

Fig. 1.

Customized design of the COVID-19 diagnostic IoMT through XR and a deep neural model, which has been implemented in the prevention and treatment of COVID-19 in China. The Red Zone is an epidemiological term for the COVID-19 infected area, especially Wuhan and Hubei. Clinic data are collected from the outpatient clinics (OPC) of the Red Zone by cell phone, tablet, and laptop. The 5G transmission is then employed to transfer and compute the medical data for COVID-19 prediction in the 5G cloud (Alibaba Cloud). Finally, professional respiratory physicians and thoracic surgeons in the Green Zone, such as Shanghai and Kunming, can make a diagnosis and a detailed surgical plan through the IoMT application layer with high efficiency and safety.

A. ACGAN-Based COVID-19 Intelligent Network Design

The whole technological process of the ACGAN-based COVID-19 intelligent prediction system is demonstrated in Fig. 2. The real-world clinical data are collected and then preprocessed, including sample wrangling (such as selecting the required data and setting correct data formats), KNN imputation of missing data, and resampling techniques to address the imbalance between normal and COVID-19 subjects in the retrospective cohort. The processed training set is employed to train the ACGAN prediction model, after which the well-trained discriminator of the ACGAN is used to forecast the samples from the prospective cohort. Finally, the interpretability of the system is provided by the contrastive explanations method (CEM), which gives an analysis of medical significance. Further descriptions of each part of the ACGAN-based COVID-19 intelligent prediction follow.

Fig. 2.

ACGAN-based COVID-19 intelligent prediction network: the real-world clinical data are collected and then preprocessed, including sample wrangling (such as selecting the required data and setting correct data formats). The KNN algorithm imputes missing data by finding the k closest neighbors to the observation with missing data and then imputing them based on the nonmissing values in those neighbors. Resampling techniques address the imbalance between normal and COVID-19 subjects in the retrospective cohort. The processed training set is employed to train the ACGAN prediction model, after which the well-trained discriminator of the ACGAN is used to forecast the samples from the prospective cohort. Finally, the interpretability of the system is provided by CEM, which gives an analysis of medical significance.

1). KNN for Missing Data Imputation:

Resampling is a technique widely used to handle extremely imbalanced sample distributions. In resampling, to compensate for the imbalanced classes, a bias is applied to reselect more samples from the class with fewer data points. Resampling mainly consists of two parts: 1) deleting samples from the majority class, called undersampling, and 2) augmenting samples from the minority class, called oversampling.
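The oversampling part above can be sketched as simple random duplication of minority-class rows; this is a minimal illustration with our own toy data, not the resampling configuration actually used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_minority(X, y):
    """Randomly duplicate minority-class rows until both classes are balanced."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=deficit, replace=True)  # sample with replacement
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep]

# Imbalanced toy set: 6 negatives, 2 positives (cf. 2270 vs. 347 in the cohort).
X = np.arange(16).reshape(8, 2).astype(float)
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])
Xb, yb = oversample_minority(X, y)
```

Undersampling is the mirror image: drop randomly chosen majority-class rows instead of duplicating minority-class ones.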

Owing to factors such as system failures and recording errors, missing clinical data are inevitable. Moreover, simply deleting records with missing data would lose much valuable information in the original data, decreasing forecasting accuracy and distorting the research results. In this work, a KNN-based missing data estimation algorithm is used to solve this problem. It is well suited to simple binary problems with small-scale, low-dimensional data. Missing data are imputed with observed rather than constructed values, which preserves the original structure of the data; as a nonparametric, nonmapping imputation method, it largely avoids model misspecification. In KNN imputation, the $k$ samples nearest to the sample with the missing value are found among all complete instances in the data set, and the missing value is then filled with the mean of the corresponding values in those samples. Let $x_i$ denote the feature vector of a sample and $N_k(x_i)$ its $k$-nearest neighbors. The KNN estimator can be described as follows:

$$\hat{x}_{ij} = \frac{\sum_{x_l \in N_k(x_i)} x_{lj}\, I(y_l = y_i)}{\sum_{x_l \in N_k(x_i)} I(y_l = y_i)}$$

where $x_i$ is the target sample, $\hat{x}_{ij}$ is the imputed value of the missing feature $j$ in $x_i$, $y_l \in \{0, 1\}$ is the classification of sample $x_l$ in the current task, $x_{lj}$ is the value of feature $j$ within the $k$-nearest neighbors $N_k(x_i)$, and $I(\cdot)$ is a discriminant function that outputs 0 or 1 depending on whether its argument is false or true.
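As an illustration, a simplified, class-agnostic form of the KNN estimator above (missing entries filled with the plain mean of the $k$ nearest complete rows) can be written in a few lines; the data below are a toy example, not the clinical cohort.

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill NaNs with the mean of the k nearest complete rows, measuring
    Euclidean distance over the features observed in the incomplete row.
    A simplified, class-agnostic sketch of the KNN estimator above."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]          # rows with no missing values
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        d = np.sqrt(((complete[:, obs] - X[i, obs]) ** 2).sum(axis=1))
        nearest = complete[np.argsort(d)[:k]]       # k closest complete rows
        X[i, miss] = nearest[:, miss].mean(axis=0)  # impute with their mean
    return X

X = np.array([[1.0, 2.0],
              [1.1, 2.1],
              [5.0, 6.0],
              [1.0, np.nan]])
X_filled = knn_impute(X, k=2)  # the NaN is filled from the two nearest rows
```

With `k = 2`, the missing entry in the last row is filled with the mean of the second feature of the two closest complete rows, (2.0 + 2.1) / 2 = 2.05.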

To choose the $k$ samples nearest to the target sample, the distance between the target sample and each candidate neighbor must be minimized. The commonly used measure is the Minkowski distance (or its variants), given as follows:

$$d_p(x_i, x_l) = \left(\sum_{j} \lvert x_{ij} - x_{lj} \rvert^{p}\right)^{1/p}$$

where $p$ is a positive integer, the Minkowski coefficient; the Minkowski distance is the Manhattan distance when $p = 1$ and the Euclidean distance when $p = 2$. In the current system, $p = 2$ is used.
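The Minkowski distance above reduces to a one-line function; the points used here are an arbitrary toy example.

```python
import numpy as np

def minkowski(a, b, p):
    """Minkowski distance: p = 1 gives Manhattan, p = 2 gives Euclidean."""
    return float((np.abs(np.asarray(a) - np.asarray(b)) ** p).sum() ** (1.0 / p))

a, b = [0.0, 0.0], [3.0, 4.0]
d1 = minkowski(a, b, 1)  # Manhattan: |3| + |4| = 7.0
d2 = minkowski(a, b, 2)  # Euclidean: sqrt(9 + 16) = 5.0
```

Larger $p$ progressively emphasizes the single largest coordinate difference; as $p \to \infty$ the distance approaches the Chebyshev distance.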

2). Deep Training Module Design:

Deep learning techniques are widely used in medical application, prediction, and retrieval domains, promising excellent performance in classification. ACGANs improve on the CGAN by incorporating the idea of mutual information from InfoGAN [46]. Unlike traditional generative networks based on unsupervised models, the generative adversarial concept here uses supervised learning. Compared with a traditional CGAN, the ACGAN additionally embeds the class information into the generator's input, and it classifies the category of samples through an auxiliary judgment layer in the discriminator, which outputs the class labels of the input samples [47]. Owing to this structure, the objective function of the ACGAN is divided into two parts: 1) the log-likelihood of the correct source, $L_S$, and 2) the log-likelihood of the correct class, $L_C$:

$$L_S = \mathbb{E}\bigl[\log P(S = \mathrm{real} \mid X_{\mathrm{real}})\bigr] + \mathbb{E}\bigl[\log P(S = \mathrm{fake} \mid X_{\mathrm{fake}})\bigr]$$

$$L_C = \mathbb{E}\bigl[\log P(C = c \mid X_{\mathrm{real}})\bigr] + \mathbb{E}\bigl[\log P(C = c \mid X_{\mathrm{fake}})\bigr]$$

where $X_{\mathrm{fake}} = G(c, z)$ represents the created clinical sample. The discriminator $D$ is trained to maximize $L_S + L_C$, while the generator $G$ is trained to maximize $L_C - L_S$.
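A minimal numeric sketch of the two objective terms, assuming toy probabilities for a two-sample batch (the probability values are ours, chosen only to make the computation concrete):

```python
import numpy as np

def acgan_objectives(p_real_src, p_fake_src, p_real_cls, p_fake_cls):
    """Monte Carlo estimates of the ACGAN objective terms:
    L_S rewards the discriminator for scoring real samples as real and
    generated samples as fake; L_C rewards correct class prediction on both."""
    L_S = np.log(p_real_src).mean() + np.log(1.0 - p_fake_src).mean()
    L_C = np.log(p_real_cls).mean() + np.log(p_fake_cls).mean()
    return L_S, L_C

# Toy batch: discriminator source scores and correct-class probabilities.
p_real_src = np.array([0.9, 0.8])   # D's "real" score on real clinic samples
p_fake_src = np.array([0.2, 0.1])   # D's "real" score on generated samples
p_real_cls = np.array([0.95, 0.9])  # prob. assigned to the true class (real)
p_fake_cls = np.array([0.85, 0.9])  # prob. assigned to the intended class (fake)

L_S, L_C = acgan_objectives(p_real_src, p_fake_src, p_real_cls, p_fake_cls)
# D's gradient step would ascend L_S + L_C; G's would ascend L_C - L_S.
```

Both terms approach 0 from below as the discriminator's source and class predictions become perfectly confident and correct.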

3). Contrastive Explanations Method for Prediction System:

The CEM is a novel AI algorithm created and implemented by IBM Research, which provides contrastive explanations for black-box models such as DNNs. CEM can create meaningful descriptions in different domains that are presumably easier to consume as well as more accurate [48]. CEM frames the search for pertinent positives/negatives as an optimization problem over a perturbation variable $\delta$, which explains how the deep learning model decides its predictions from the input features. In finding pertinent negatives (PNs), $\mathcal{X}$ denotes the feasible data space; $x_0 \in \mathcal{X}$ is an example, with $t_0$ the class label predicted for it by the neural network model; $x_0 + \delta$ is the modified example obtained by applying the perturbation $\delta$ to $x_0$; and $\mathrm{Pred}(x_0 + \delta)$ is the corresponding prediction. For any natural example $x_0$, CEM seeks an interpretable perturbation so as to study the difference between $\mathrm{Pred}(x_0)$ and $\mathrm{Pred}(x_0 + \delta)$, where $\mathrm{Pred}(\cdot)$ outputs the prediction probabilities for all classes. Finding a PN is formulated as follows:

$$\min_{\delta:\, x_0 + \delta \in \mathcal{X}} \; c \cdot f^{\mathrm{neg}}_{\kappa}(x_0, \delta) + \beta \lVert \delta \rVert_1 + \lVert \delta \rVert_2^2 + \gamma \lVert x_0 + \delta - \mathrm{AE}(x_0 + \delta) \rVert_2^2$$

where $f^{\mathrm{neg}}_{\kappa}(x_0, \delta) = \max\{[\mathrm{Pred}(x_0 + \delta)]_{t_0} - \max_{i \neq t_0} [\mathrm{Pred}(x_0 + \delta)]_i,\; -\kappa\}$ is an objective function designed to encourage $x_0 + \delta$ to be predicted as a different class than $t_0$. Here $[\mathrm{Pred}(x)]_i$ denotes the $i$th class probability of $x$; $\kappa \geq 0$ is a confidence parameter controlling the separation between $[\mathrm{Pred}(x_0 + \delta)]_{t_0}$ and $\max_{i \neq t_0}[\mathrm{Pred}(x_0 + \delta)]_i$; and $\beta \lVert \delta \rVert_1 + \lVert \delta \rVert_2^2$ is the elastic net regularizer, used for efficient feature selection in high-dimensional learning problems [38]. $\lVert x_0 + \delta - \mathrm{AE}(x_0 + \delta) \rVert_2^2$ is an $L_2$ reconstruction error of $x_0 + \delta$ evaluated by an autoencoder, and $c$, $\beta$, and $\gamma$ are the associated regularization coefficients.
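The PN objective can be made concrete on a toy two-class linear model. This sketch evaluates the objective (with the autoencoder term dropped, i.e., $\gamma = 0$) over random candidate perturbations; the real CEM uses a FISTA-based solver rather than random search, and the model and constants here are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 2-class linear "network" standing in for the trained classifier.
W = np.array([[2.0, -1.0], [-1.0, 2.0]])
pred = lambda x: softmax(W @ x)

def pn_objective(x0, delta, kappa=0.1, beta=0.1, c=1.0):
    """Pertinent-negative loss (gamma = 0): a hinge pushing x0 + delta toward
    a class other than t0, plus the elastic net beta*||d||_1 + ||d||_2^2."""
    t0 = int(np.argmax(pred(x0)))
    p = pred(x0 + delta)
    hinge = max(p[t0] - np.max(np.delete(p, t0)), -kappa)
    return c * hinge + beta * np.abs(delta).sum() + (delta ** 2).sum()

x0 = np.array([0.1, 0.0])  # predicted as class 0, near the decision boundary
cands = np.vstack([np.zeros(2), rng.normal(scale=0.3, size=(500, 2))])
best = min(cands, key=lambda d: pn_objective(x0, d))
# The winning perturbation attains a lower objective than leaving x0 unchanged,
# typically a small delta that flips the predicted class.
```

The elastic net term keeps the pertinent negative sparse and close to $x_0$, which is what makes the resulting explanation interpretable.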

B. XR-Based COVID-19 Remote Diagnosis Platform

1). COVID-19 Patient-Specific CT 3-D Rendering:

The CT images for visual rendering are reconstructed from the patient-specific clinical image data, developed on the Visual Studio 2015 integrated development environment (IDE). A 55-year-old male COVID-19 patient presented with a two-day history of pharyngalgia, headache, rhinorrhea, and fever. He had not contacted any COVID-19 patients, had no history of hypertension, and had smoked for 30 years. His chest CT scan (February 8, 2020) demonstrated a unilateral peripheral distribution of ground-glass opacities, as shown in Fig. 3. Laboratory investigations showed elevated neutrophil and white blood cell counts, while the lymphocyte count was slightly reduced. We first imported the patient's CT images in DICOM format to reconstruct a surgical simulation demo. Segmentation functions such as thresholding and region growing were employed to extract the region of interest (ROI) of the COVID-19 infection, and four professional thoracic surgeons from Hua Shan Hospital and Yunnan First People's Hospital were invited to revise the auto-segmentation result with manual correction, as demonstrated in Fig. 3. The images were then reconstructed into a 3-D mesh model with the marching cubes algorithm, followed by superfluous mesh cleaning and Laplacian smoothing to keep the ribs, renal structures, skin, and the lesion for the interventional biopsy surgery.
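The threshold/region-growing ROI step can be sketched in 2-D on a toy intensity array; this is an illustration of the technique only, not the clinical segmentation pipeline, and the intensity values are invented.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, lo, hi):
    """Grow a 4-connected region from `seed`, keeping pixels whose intensity
    lies in [lo, hi]: a 2-D sketch of threshold-based region growing."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if not (lo <= img[r, c] <= hi):
            continue  # intensity outside the threshold window: stop growing
        mask[r, c] = True
        q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

# Toy "CT slice": a bright 2x2 lesion embedded in a dark background.
ct = np.zeros((5, 5))
ct[1:3, 1:3] = 100.0
roi = region_grow(ct, seed=(1, 1), lo=50.0, hi=150.0)
```

In the real pipeline the same idea runs in 3-D on Hounsfield units, the seed is placed by the surgeon, and the resulting voxel mask feeds the marching cubes reconstruction.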

Fig. 3.

XR COVID-19 surgical IoMT simulator framework: the first part is COVID-19 patient-specific medical image processing from the clinical data collection; the second is XR visuo-haptic reconstruction from the medical data; the third is the audio rendering procedure, which stores the audio details of the OR heart monitor, anesthesia, and breathing apparatus; and the fourth is the surgical environment reconstruction.

2). XR Surgical Visual-Haptic Implementation:

The VATS-XR system developed in this article comprises hardware and software development; Fig. 3 shows the framework of the system. Touch and vision are its two key modalities. For the haptic aspect, the OpenHaptics plugin (Geomagic, USA) calls the feedback device to interact with virtual objects, supporting collision detection and soft-tissue cutting and deformation. For the visual aspect, interactive objects are rendered more realistically with shader programs to approximate the real physical model, and UGUI is used to design the system's UI. These functions are implemented in Unity3D. Surgical instruments and the force feedback device are connected through a linker: the operator holds the surgical instrument to drive the three axes of the force-feedback device through the corresponding transformations. When the clip of the virtual surgical instrument interacts with a virtual object, the computer calls the force feedback device through the OpenHaptics plugin to apply the corresponding driving force, giving the operator a realistic tactile sense. An HTC VIVE and a Logitech camera realize the XR display.

3). 3DUI Design:

Referring to the GPS navigation interface, we developed a Haptic-XR-based 3DUI for the XR device; the main parts of the UI cover both visual and haptic intraoperative details. Of the three main kinds of intraoperative XR display technology, and compared with video-based and projection-based XR navigation systems, the see-through display, which uses a semi-transparent free-form lens to reflect digital content overlapped with the patient on a near-eye micro-display, provides an intuitive and portable surgical experience. In this article, we chose the see-through XR display with the Microsoft HoloLens MR head-mounted display (HMD). Since C-arm or ultrasound images are the essential navigational clues during interventional surgery, the real-time CT images are placed on the center-left of the 3DUI, as demonstrated in Fig. 3. The real-time XR navigation interface is constructed at the top right of the UI, which is the manipulation platform for the Haptic-XR surgical simulator; we introduced this module to mimic the real operation in the OR. Apart from these two components, the coronal, sagittal, and axial CT images synchronously display the needle track during the surgical simulation as part of the XR navigation. Referring to the GPS interface, we integrated the navigation clues at the bottom of the 3DUI, including the operation time, intervention depth, force limitation, speed limitation, the tissue layer currently matched, and mispuncture warnings during surgery, as demonstrated at the bottom of Fig. 4.

Fig. 4.

Diagram of the general software architecture of the Haptic-XR-based 3DUI with the integrated IoMT device. The visual rendering pipeline proceeds from the reconstructed 3-D organs and the simulated surgical environment, while haptic rendering includes soft-tissue deformable modeling and force rendering. The IoMT system integrates visual and haptic rendering through the human–computer interaction system.

C. Model Stealing Attack to the New IoMT Platform

In this section, we show how to train an imitation (Copycat) network by stealing labels from the original network (the ACGAN). The model stealing attack mainly uses a fake natural data set to steal labels from the ACGAN and then feeds these labels and the data set to the imitation network. As Fig. 4 shows, the process consists of two steps. The first step is to create a training data set with a structure similar to the original data set but drawn from a different problem domain (PD); the chosen data set is therefore different from the original one. The second step is to train our model with the stolen labels and the pseudo data set (in this article, an ACGAN is chosen as the Copycat model).

Even though the original network was trained on a data set obtained from a first-line hospital, we can still download a similar COVID-19 data set from a public source and convert it to a structure similar to the original data set. By doing so, we can steal the corresponding labels from the original model.
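The two-step Copycat procedure (query the black-box target for labels on public-domain data, then fit a surrogate on those stolen labels) can be sketched with stand-in models. A nearest-centroid classifier replaces the ACGAN copycat of the paper, and all data and weights below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target model (black box): only its predicted labels are observable.
w_target = np.array([1.0, -1.0])
target_predict = lambda X: (X @ w_target > 0).astype(int)

# Step 1: query the target with public-domain data to "steal" its labels.
X_public = rng.normal(size=(400, 2))
stolen_labels = target_predict(X_public)

# Step 2: train the copycat on (public data, stolen labels).
# A nearest-centroid model stands in for the ACGAN copycat of the paper.
centroids = np.stack([X_public[stolen_labels == c].mean(axis=0) for c in (0, 1)])

def copycat_predict(X):
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Agreement between copycat and target on fresh queries measures the theft.
X_test = rng.normal(size=(200, 2))
agreement = (copycat_predict(X_test) == target_predict(X_test)).mean()
```

High agreement on fresh inputs indicates that the surrogate has effectively cloned the target's decision boundary despite never seeing the original training data.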

Next, we explain the transferability of adversarial samples. Suppose the adversary is interested in causing misclassification: producing an adversarial sample $x^{*}$ to which model $A$ assigns a class different from that of the legitimate input $x$. This can be achieved with the following optimization:

$$x^{*} = x + \delta_x, \qquad \delta_x = \arg\min_{z} \{\lVert z \rVert : A(x + z) \neq A(x)\}$$

The misleading example $x^{*}$ is deliberately crafted against model $A$; in practice, however, adversarial samples are often also misclassified by a second model $B$ rather than only by $A$. For the convenience of discussion, the transferability of adversarial samples is formalized as

$$\Omega(A, B) = \Pr_{x \sim D}\bigl[B(x^{*}) \neq B(x) \,\big|\, A(x^{*}) \neq A(x)\bigr]$$

where $D$ represents the expected input distribution of the task solved by models $A$ and $B$. We divide the transferability of adversarial samples into two variants. The first is intra-technique transferability: transferability between different parameter initializations of the same technique, or between models trained on other data sets (e.g., $A$ and $B$ are both deep learning networks or both support vector machines (SVMs)). The second is cross-technique transferability, where the two models are trained with different techniques (e.g., $A$ is a deep learning network and $B$ is an SVM).
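The intra-technique transferability rate can be measured empirically: craft adversarial samples against model $A$ and count how many of those that fool $A$ also fool $B$. The sketch below uses two toy linear models and an FGSM-like fixed-size step; all weights and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two linear models for the same task: the intra-technique setting
# (same technique, different parameters).
w_A = np.array([1.0, 1.0])
w_B = np.array([1.0, 0.8])
predict = lambda w, X: (X @ w > 0).astype(int)

X = rng.normal(size=(300, 2))   # inputs drawn from the task distribution D
labels_A = predict(w_A, X)

# Craft adversarial samples against model A: an FGSM-like fixed-size step
# toward the opposite side of A's decision boundary.
eps = 0.3
step = np.where(labels_A[:, None] == 1, -1.0, 1.0) * np.sign(w_A)
X_adv = X + eps * step

fooled_A = predict(w_A, X_adv) != labels_A
fooled_B = predict(w_B, X_adv) != predict(w_B, X)
# Transferability: among the samples that fool A, the fraction that also fool B.
transfer_rate = fooled_B[fooled_A].mean()
```

Because the two boundaries are similar, most perturbations that cross $A$'s boundary also cross $B$'s, which is exactly why a Copycat surrogate suffices for mounting attacks on the original model.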

IV. Results

A. KNN-ACGAN Learning Accuracy

Based on the prospective cohort, the results toward COVID-19 prediction for KNN-ACGAN and the other four models (KNN-SVM, KNN-RF, KNN-DNN, and KNN-CNN) are reported in Table I and Fig. 5(a). The evaluation metrics include precision, recall, and F1-score. As shown in Table I and Fig. 5(a), the highest values indicate that our proposed KNN-ACGAN model has the best prediction performance compared to KNN-SVM, KNN-RF, KNN-DNN, and KNN-CNN.

TABLE I. Performance Comparison Between the Proposed KNN-ACGAN Model and the Four General Prediction Methods.

Model Precision Recall F1-score
KNN-SVM 0.75 0.98 0.85
KNN-RF 0.63 0.95 0.75
KNN-DNN 0.81 1.00 0.89
KNN-CNN 0.77 0.98 0.86
KNN-ACGAN 0.92 0.98 0.95

SVM: Support vector machine; RF: Random forest; DNN: Original deep neural network; CNN: Convolution neural network.
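The Table I metrics follow directly from confusion-matrix counts. In the sketch below, the counts are hypothetical, chosen only to be consistent with the KNN-ACGAN row (precision 0.92, recall 0.98, F1 0.95) and the 347 positive cases in the cohort:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from raw confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical counts: 340 true positives, 30 false positives, 7 false negatives
# (tp + fn = 347 matches the number of positive patients in the cohort).
p, r, f1 = precision_recall_f1(tp=340, fp=30, fn=7)
```

F1 is the harmonic mean of precision and recall, so a model cannot achieve a high F1 by trading one metric entirely against the other.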

Fig. 5.

Experimental results. (a) Precision, recall, and F1-score comparison between the proposed KNN-ACGAN model and four other general prediction methods (SVM: support vector machine; RF: random forest; DNN: original deep neural network; and CNN: convolution neural network). KNN-ACGAN outperforms the other models in precision and F1-score, while its recall is slightly lower than that of the KNN-DNN model. (b) Performance improvement of the KNN-based prediction models over the average-based prediction models in precision, recall, and F1-score, computed as the KNN-based metric minus the average-based metric. Almost all models improved (by 0.02 to 0.42) when KNN imputation was used, except for the recall of CNN, the recall of DNN, and the precision of RF.

To evaluate the effect of KNN imputation for missing data on forecasting performance, we compared the KNN-based prediction models with the average-based prediction models. The area under the ROC curve (AUC) of the comparison is shown in Fig. 6. In terms of the receiver operating characteristic (ROC), the KNN-based models obtain promotions over the average-based models. Table II and Fig. 5(b) report the detailed promotion under three performance criteria, visually showing that almost all KNN-based predictive models improve on their average-based counterparts.
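The two imputation strategies being compared can be sketched as follows. This is a simplified stand-in, assuming a 1-nearest-feature distance and illustrative data: average imputation fills a missing entry with the column mean, while the KNN variant fills it with the mean of that feature over the k rows closest in the observed features.

```python
import math

# Hedged sketch of average vs. KNN imputation (cf. Fig. 6).
# Rows are patient feature vectors; None marks a missing value.

def mean_impute(rows, col):
    vals = [r[col] for r in rows if r[col] is not None]
    fill = sum(vals) / len(vals)
    return [[fill if (c == col and r[c] is None) else r[c]
             for c in range(len(r))] for r in rows]

def knn_impute(rows, col, k=2):
    def dist(a, b):
        # distance over features observed in both rows, excluding `col`
        ds = [(a[c] - b[c]) ** 2 for c in range(len(a))
              if c != col and a[c] is not None and b[c] is not None]
        return math.sqrt(sum(ds))
    out = [list(r) for r in rows]
    for i, r in enumerate(rows):
        if r[col] is None:
            donors = sorted((dist(r, s), s[col]) for j, s in enumerate(rows)
                            if j != i and s[col] is not None)[:k]
            out[i][col] = sum(v for _, v in donors) / len(donors)
    return out

data = [[1.0, 10.0], [1.2, 12.0], [5.0, 50.0], [1.1, None]]
# Mean imputation fills 24.0 (pulled up by the outlier row); KNN with
# k=2 fills 11.0, consistent with the two nearest rows.
```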

Fig. 6.

ROCs and AUCs of the SVM, RF, DNN, CNN, and ACGAN prediction models based on different data imputation methods (left: average imputation; and right: KNN imputation). From the figures, we can observe that the prediction accuracy improves for all models when the KNN-based imputation method is used (with increases of 0.01 to 0.08 in AUC area), and that the ACGAN model has the best prediction result under both imputation methods, reaching AUC areas of 0.97 and 0.98 with average imputation and KNN imputation, respectively.

TABLE II. Promotion of the KNN-Based Prediction Model Compared to the Average-Based Prediction Model in Precision, Recall, and F1-Score.

KNN-based model vs. Average-based model SVM RF DNN CNN ACGAN
P_precision 0.03 −0.05 0.42 0.33 0.16
P_recall 0.02 0.41 0.00 −0.02 0.01
P_F1-score 0.02 0.12 0.24 0.18 0.07

P_metric = metric(KNN-based model) − metric(average-based model)

B. Stealing Model Performance for the New IoMT Platform

Table III and Fig. 7(a) show the evaluation indicators and their corresponding values; on the same scale, a higher number indicates better model performance. The F1-scores for normal people and COVID-19 patients in Table III are 0.99 and 0.88, respectively, indicating that the original network performs strongly in predicting both COVID-19 and non-COVID-19 data.

TABLE III. Values of Different Indicators Outputted by the Target Model.

Object Precision Recall F1-score Support
NORMAL 1.00 0.97 0.99 68
COVID-19 0.78 1.00 0.88 7
Macro avg 0.89 0.99 0.93 75
Weighted avg 0.98 0.97 0.97 75
Accuracy 0.97 75
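The "macro avg" and "weighted avg" rows of Table III combine the per-class scores in two ways: macro averaging takes the unweighted mean over classes, while weighted averaging weights each class by its support. A sketch using the precision column of Table III:

```python
# How the averaging rows of Table III are formed. The per-class
# precisions and supports are taken directly from the table.

def macro_avg(scores):
    """Unweighted mean over classes."""
    return sum(scores) / len(scores)

def weighted_avg(scores, supports):
    """Support-weighted mean over classes."""
    return sum(s * n for s, n in zip(scores, supports)) / sum(supports)

precision = [1.00, 0.78]   # NORMAL, COVID-19
support = [68, 7]

macro = macro_avg(precision)                 # 0.89, as in Table III
weighted = weighted_avg(precision, support)  # ~0.98, as in Table III
```

The small COVID-19 support (7) explains why the weighted average sits much closer to the NORMAL class score than the macro average does.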

Fig. 7.

Detailed performance for the prediction model. (a) KNN-ACGAN. (b) Copycat. (Normal represents the predicted performance in normal people; COVID represents the predicted performance in COVID-19 patients; macro is the macro average performance in test data; and weight is the weighted average performance in train data.)

Table IV and Fig. 7(b) show the performance indicators that the Copycat network outputs after training with the stolen labels and the corresponding data set. Because the Copycat data set was selected from data mixed between PD and non-PD (NPD) samples, the network still achieved a 79% accuracy rate despite the influence of much irrelevant data. Comparing Tables III and IV, we can observe that the Copycat network approaches the results obtained on the original data.
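The stealing step described above can be sketched in miniature. The attacker never sees ground-truth labels, only the target model's predictions on a surrogate query set, and fits the copy to those stolen labels. The 1-D threshold "models" and data below are hypothetical stand-ins for the actual networks.

```python
# Hedged sketch of Copycat-style model stealing: label a surrogate
# query set with the black-box target, then fit a copy to the stolen
# labels. All models and data are illustrative.

def target_model(x):
    """Black-box victim; its threshold is unknown to the attacker."""
    return int(x >= 0.6)

def fit_threshold(xs, labels):
    """Pick the threshold that best reproduces the stolen labels."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):
        acc = sum(int(x >= t) == y for x, y in zip(xs, labels)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

queries = [0.1, 0.3, 0.55, 0.62, 0.7, 0.9]    # surrogate data, no labels
stolen = [target_model(x) for x in queries]   # labels from the black box
copy_t = fit_threshold(queries, stolen)

# Agreement between the copy and the target on the query set.
agreement = sum(int(x >= copy_t) == target_model(x)
                for x in queries) / len(queries)
```

Even with this crude surrogate, the copy reproduces the target's decisions on the query set, mirroring how the Copycat network approaches the original model's behavior.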

TABLE IV. Values of Different Indicators Outputted by the Copycat Model.

Object(copycat) Precision Recall F1-score Support
NORMAL 1.00 0.77 0.87 196
COVID-19 0.38 1.00 0.55 28
Macro avg 0.69 0.88 0.71 224
Weighted avg 0.92 0.79 0.83 224
Accuracy 0.79 224

Fig. 8.

Confusion matrix for different algorithms. (a) AVG-SVM. (b) KNN-SVM. (c) AVG-RF. (d) KNN-RF. (e) AVG-DNN. (f) KNN-DNN. (g) AVG-CNN. (h) KNN-CNN. (i) AVG-ACGAN. (j) KNN-ACGAN. The figure shows that KNN-ACGAN rarely misjudges, with only six errors out of 448 samples in total.

V. Discussion

To develop an intelligent and trustworthy COVID-19 diagnostic IoMT through XR and DNNs, the XR-based framework has been constructed. Based on the training results, COVID-19 diagnosis can be accomplished with or without assistance, and both visual and numerical feedback are provided, including a real-time 3-D representation of the surgical implementation.

A. Performance by ACGAN-Based COVID-19 IoMT

As shown in Table I, the proposed KNN-ACGAN model performs excellently. Compared with the CNN model, the precision and F1-score of KNN-ACGAN increase by 15% and 9%, respectively; compared with the DNN model, they increase by 11% and 6%, respectively. This indicates that, after KNN preprocessing of the missing data and resampling during training, the ACGAN model can extract more accurate features and yield more precise predictions. We used KNN ( Inline graphic) to fill in the missing data and oversampling to address the imbalanced samples. Where the performance of KNN imputation is evaluated in Fig. 6 and Table II, the AUC of the KNN-based models increases by 1%–8% compared with the average-based models. Moreover, except for the Inline graphic of KNN-RF and the Inline graphic of KNN-CNN, all the KNN-based models show a promotion, in which Inline graphic-score increases by 2%–24%, Inline graphic increases by 2%–41%, and Inline graphic increases by 3%–41%. More promising information can be obtained from the confusion matrices in Fig. 8. All the experiments demonstrate that KNN-ACGAN is a promising technology that can be used effectively in COVID-19 prediction.

In the offline process, we use real-world clinical COVID-19 data to train the proposed KNN-ACGAN model; after the model parameters are optimized and adjusted, the model is saved. New experiments with the protected model are then performed in the online application, and according to the predicted feedback, whether a patient is infected is predicted and displayed on the monitor. Besides, the CEM-based interpretability can provide importance scores for the clinical features, which gives the KNN-ACGAN model medical insight and ensures the reliability of our proposed COVID-19 intelligent prediction system.

B. Performance by IoMT Stealing Model

As shown in Fig. 10, the confusion matrix is an error matrix that can be used to evaluate the performance of supervised learning algorithms; through it, we can see more clearly how the predicted set overlaps with the real set. From Fig. 10, true positive (TP) and false negative (FN) results account for a large proportion of the confusion matrix, with TP accounting for the largest share, which directly reflects that the ACGAN network can accurately predict the data of patients with and without COVID-19.
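The 2x2 matrix discussed here can be tallied directly from paired labels and predictions. A minimal sketch, with 1 denoting a COVID-19-positive case and illustrative data:

```python
# Tallying the four confusion-matrix cells (TP, FP, FN, TN) from
# binary labels. The label vectors are toy examples.

def confusion(y_true, y_pred):
    m = {"TP": 0, "FP": 0, "FN": 0, "TN": 0}
    for t, p in zip(y_true, y_pred):
        # "T" when prediction matches truth; "P"/"N" from the prediction.
        key = ("T" if t == p else "F") + ("P" if p == 1 else "N")
        m[key] += 1
    return m

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]
m = confusion(y_true, y_pred)  # {'TP': 2, 'FP': 1, 'FN': 1, 'TN': 4}
```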

Fig. 10.

Confusion matrix diagram based on the ACGAN model and ROC curves using different models for data prediction. It can be seen that Copy DNN performs well, with an AUC area of 0.90, only 0.08 lower than that of the KNN-ACGAN model. Moreover, regarding the confusion matrix, all the patients with COVID-19 are tested correctly, while a small number of ordinary people are misclassified as having COVID-19.

Fig. 9.

Interpretation of the KNN-ACGAN with respect to how the clinical features influence its decision on whether a patient is infected with COVID-19. It can be seen that lymphocyte quantity, mitochondria quantity, and whether patients have the above symptoms (from neutrophil to no previous features) are the top-3 risk factors affecting the model's estimate of the probability of a patient having COVID-19.

The ROC curve is drawn from a series of different dichotomies (cut-off values or decision thresholds). Unlike traditional evaluation methods, the ROC curve does not require dividing the experimental results into two categories for statistical analysis, and all points on the curve reflect the same receptivity. The best model is the one whose curve approaches an ordinate of 1 fastest and most closely. As we can see from Fig. 10, KNN-ACGAN achieves the best classification of the COVID-19 data. Combining the ROC curves with the confusion matrix, ACGAN can accurately separate the data of COVID-19 patients from that of non-COVID-19 patients. At the same time, the Copycat network achieves effects similar to those of the original network.
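The AUC values compared above have a direct probabilistic reading: AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (ties counted as half). A minimal sketch with illustrative scores:

```python
# AUC as the pairwise ranking statistic behind the ROC comparison.
# Scores below are toy values, not the study's model outputs.

def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties count half
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.7, 0.4]   # scores for COVID-19-positive cases
neg = [0.5, 0.3, 0.2, 0.1]   # scores for negative cases
a = auc(pos, neg)            # 15 of 16 pairs ranked correctly -> 0.9375
```

This pairwise form makes the threshold-free nature of the ROC comparison explicit: no single cut-off is ever chosen.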

VI. Conclusion

In this article, we proposed a trustworthy and intelligent COVID-19 diagnostic IoMT through XR and DNNs. We developed a customized novel ACGAN-based intelligent prediction algorithm designed to learn a new COVID-19 prediction model. Apart from that, to achieve better human ergonomics, we visualized all the navigational clues from our Haptic-AR guide system. We are among the first to apply deep learning to COVID-19 IoMT prediction and remote surgical plan cues, which may provide a new strategy for COVID-19 therapy. In the future, we will improve this IoMT system in both hardware design and deep learning algorithms, aiming to create a platform for both academia and industry for COVID-19 tracking and treatment.

Acknowledgment

The authors thank Dr. Yinjia Wang of Chibi People’s Hospital for the COVID-19 clinic data collection and Dr. Kai Qian of Huashan Hospital for the COVID-19 surgical suggestions.

Biographies


Yonghang Tai received the Ph.D. degree in computer science from Deakin University, Victoria, Australia, in 2019. He is currently an Associate Professor with the Yunnan Key Laboratory of Opto-Electronic Information Technology, Yunnan Normal University, Yunnan, China. He has authored more than 30 publications in refereed international journals. His current research interests include physics-based simulation, AI-based medical implementations, virtual reality/augmented reality, Internet of Medical Things, and big data.


Zhengtao Yu received the Ph.D. degree in computer application technology from the Beijing Institute of Technology, Beijing, China, in 2005. He is currently a Professor with the School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China. He has authored more than 100 publications published by refereed international journals. His main research interests include natural language process, image processing, and machine learning.


Chunsheng Zhu (Member, IEEE) received the Ph.D. degree in electrical and computer engineering from The University of British Columbia, Vancouver, BC, Canada. He is currently an Associate Professor with the Institute of Future Networks, Southern University of Science and Technology, Shenzhen, China. He is also an Associate Researcher with the PCL Research Center of Networks and Communications, Peng Cheng Laboratory, China. He has authored more than 100 publications in refereed international journals, such as IEEE Transactions on Industrial Electronics, IEEE Transactions on Computers, IEEE Transactions on Information Forensics and Security, IEEE Transactions on Industrial Informatics, IEEE Transactions on Vehicular Technology, IEEE Transactions on Emerging Topics in Computing, IEEE Transactions on Cloud Computing, ACM Transactions on Embedded Computing Systems, and ACM Transactions on Cyber-Physical Systems. His research interests mainly include Internet of Things, wireless sensor networks, cloud computing, big data, social networks, and security.


Victor Chang has been a Full Professor of data science and information systems with the School of Computing, Engineering and Digital Technologies, Teesside University, Middlesbrough, U.K., since September 2019. He has given 18 keynotes at international conferences. He is widely regarded as one of the most active and influential young scientists and experts in IoT, data science, cloud, security, AI, and IS, having developed ten different services for multiple disciplines.

Bixuan Gao, photograph and biography not available at the time of publication.

Qiong Li, photograph and biography not available at the time of publication.

Funding Statement

This work was supported in part by the Yunnan Key Laboratory of Opto-Electronic Information Technology of Yunnan Normal University and in part by the National Natural Science Foundation of China under Grant 62062069, Grant 62062070, and Grant 62005235. The work of Victor Chang was supported in part by VC Research (VCR) under Grant 0000113.

Contributor Information

Yonghang Tai, Email: taiyonghang@ynnu.edu.cn.

Qiong Li, Email: liqiong@ynnu.edu.cn.

Zhengtao Yu, Email: ztyu@hotmail.com.

Chunsheng Zhu, Email: chunsheng.tom.zhu@gmail.com.

Victor Chang, Email: victorchang.research@gmail.com.


Articles from IEEE Internet of Things Journal are provided here courtesy of the Institute of Electrical and Electronics Engineers
