Neural Computing and Applications. 2023 Mar 7;35(17):12751–12761. doi: 10.1007/s00521-023-08411-5

Performance analysis and comparison of Machine Learning and LoRa-based Healthcare model

Navneet Verma 1, Sukhdip Singh 1, Devendra Prasad 2
PMCID: PMC9989556  PMID: 37192938

Abstract

Diabetes Mellitus (DM) is a widespread condition and one of the main causes of health disasters around the world, and health monitoring is one of the sustainable development topics. Currently, Internet of Things (IoT) and Machine Learning (ML) technologies work together to provide a reliable method of monitoring and predicting Diabetes Mellitus. In this paper, we present the performance of a model for real-time patient data collection that employs the Hybrid Enhanced Adaptive Data Rate (HEADR) algorithm for the Long-Range (LoRa) protocol of the IoT. The LoRa protocol's performance is measured on the Contiki Cooja simulator in terms of high dissemination and dynamic data transmission range allocation. Furthermore, Machine Learning prediction takes place by employing classification methods for the detection of diabetes severity levels on data acquired via the LoRa (HEADR) protocol. A variety of Machine Learning classifiers are employed for prediction, and the final results are compared with existing models, where the Random Forest and Decision Tree classifiers outperform the others in terms of precision, recall, F-measure, and the receiver operating characteristic (ROC) curve, implemented in the Python programming language. We also found that using k-fold cross-validation on the k-neighbors, Logistic Regression (LR), and Gaussian Naïve Bayes (GNB) classifiers boosted their accuracy.

Keywords: Machine learning, Diabetes mellitus, Internet of Things, LoRa, Contiki Cooja

Introduction

Today, thanks to the TCP/IP protocol suite, billions of Internet of Things (IoT) devices are able to link to the Internet and operate. IoT protocols are a crucial part of the IoT technology stack because, without them, connected devices are meaningless; it is the protocols that allow the flow of organized, useful data. Due to the lack of a standardized architecture in the IoT, choosing communication techniques that are appropriate for an application may be difficult. Despite being the greatest source of Big Data today, the IoT is useless without analytical power. Low Power Wide Area Networks (LPWAN), LoRa (Long Range), and NB-IoT are a few examples of modern IoT technologies that make effective long-distance communication possible. The LoRa-enabled network, where LoRa-enabled sensors are located in the network field or on patients' bodies, is covered by the LPWAN protocols. Data transmission may be initiated by the sensors or by an outside controller, depending on the circumstances, as described in Mekki et al.'s work [1]. Throughput and collision rate determine how effective and influential these strategies are. Using IoT-based sensors, we can remotely monitor patients and gather data. To keep blood glucose near its normal value, patients and systems now need to review a significant number of data sets and engage in a lot of data interpretation. LoRa gateways can transport data from LoRa-enabled glucose sensors to a database.

In this paper, the performance of the Hybrid Enhanced Adaptive Data Rate (HEADR) algorithm for the LoRa protocol of the Internet of Things (IoT) is analyzed in terms of improved switching, which boosts the data rate; high dissemination, which can handle more end devices, meaning more IoT devices can be added to the network; and dynamic data transmission range allocation, specifically for heterogeneous IoT networks. HEADR is designed to cover unstable frequency conditions in case of rain or bad weather, and data rate adaptation by end devices is specified in the LoRa (HEADR) protocol. ML techniques are unquestionably important in DM for analysis, monitoring, and other clinical tasks related to diagnosis. In addition, an optimization approach based on scikit-learn employs train-test splitting and k-fold cross-validation. ML algorithms are commonly used to predict diabetes, and they deliver better results.

As the population of various countries grows, so does the burden on hospitals, doctors, and nursing staff for the health of this growing population. The present study has therefore focused on the IoT, since it has the potential to reduce the pressure on health care systems. According to Verma et al. [2], diabetes health care monitoring systems are crucial right now, particularly for remote health care monitoring, as visiting hospitals and standing in line for services wastes time that could be spent on patient monitoring. It is extremely dangerous for diabetic patients to wait in line, since their lives might be in danger at any moment [3]. In the COVID-19 situation, a remote monitoring and ML prediction system such as this one may be quite helpful when we are unable to actively care for our elderly guardians and relatives. In Fig. 1, the application of IoT sensors and technology in the health care industry is illustrated.

Fig. 1 IoT in Healthcare

Related work

Patient monitoring

In terms of applications, the Internet of Things is now more prevalent than ever in the health care industry. Islam et al. [4] offer a useful analysis and a model that is appropriate for collecting patients' physical health data from supporting sensors and IoT health care. The IoT system needs to be adaptable enough to provide all amenities during an emergency while also maintaining the presence of medical and nursing staff in remote areas. This technology also automates data collection, making it more reliable than manual patient data entry. The LoRa network has been used to turn an interface called MySignals into a health monitoring system that gathers information from heart rate, ECG, body temperature, and pulse rate sensors. The LoRa module's performance was assessed once it was integrated into the terminal application, and it showed promise for gathering information from the patient's body.

In the work of Lavric [5], to estimate the effectiveness of the LoRa protocol, the focus is on measures such as the number of packet collisions and network performance. Accordingly, this work describes how many LoRa nodes can connect to the gateway at once while still abiding by the protocol's rules. To lessen collision incidence and increase communication channel efficiency, several factors have been studied, including the spreading factor, data transmission rate, and duty cycle. The Adaptive Data Rate (ADR) technique, which modifies the data rate without human intervention when a collision is detected, is one of the author's suggestions for decreasing the frequency of collisions [6, 7]. However, this tactic results in more energy being used by the LoRa node.

A comparison of short- and long-range communication protocols is also demonstrated in Table 1.

Table 1. Comparison of short- and long-range communication protocols [2]

Long-range protocols

| | SigFox | LoRa | NB-IoT |
|---|---|---|---|
| Suitability for health care | Poor | Average | Good |
| Deployment area | 9.5 km | 7.2–10 km | 15 km |
| Transmission rate | 100 bps | 0.25–5.5 kbps | 250 kbps |
| Security | Private key signature, encryption and scrambling technique | Distinctive key distribution, recognized only by the node and base station using the unique key; data encryption | 3GPP S3 security, which includes user and device identity, entity authentication, confidentiality, and data integrity |
| Bandwidth | 868 MHz (EU), 915 MHz (US) | 868 MHz (EU), 915 MHz (US), 433 MHz (Asia) | LTE bands, in the guard bands of LTE (guard-band mode), or re-farmed GSM bands |

Short-range protocols

| | Bluetooth Low Energy | ZigBee | MQTT |
|---|---|---|---|
| Suitability for health care | Good | Average | Average |
| Deployment area | 150 m | 30 m | M2M |
| Transmission rate | 1 Mbps | 250 kbps | 2 Mbps |
| Security | Secure pairing before the key exchange; two keys used to provide authentication and identity protection; AES-128 encryption | AES-128; network key shared across the network; optional link key to secure application-layer communications | TLS/SSL |
| Bandwidth | 2.4 GHz | 2.4 GHz | 2.4 GHz |

Machine learning methods

According to Zou et al. [8], information on over 70,000 people (both healthy individuals and diabetic patients) was physically collected from a hospital in Luzhou, China. Decision trees, random forests, and neural networks with k-fold validation were used to predict diabetes mellitus, while PCA and mRMR were employed to reduce dimensionality. Ultimately, the random forest generated the most accurate forecast (0.8084). It is also noted that choosing the right features requires careful consideration of the classifier technique.

In the work of Sneha and Gangil [9], the research goal was to use predictive analysis to identify which features have the most influence on early diabetes mellitus prediction. Diabetic hyperglycemia is linked to harm to several organs, including the heart, kidneys, blood vessels, eyes, and nerves. The goal of this research is to use ML to create a classifier model whose results are equivalent to clinical outcomes, along with a prediction algorithm that considers the important factors. The final results of this model showed that the decision tree and random forest gave the maximum specificity of 98.20 and 98.00 percent, respectively, for the interpretation of diabetes data, while NB achieved the highest accuracy at 82.30 percent. The study further selects the top attributes from the dataset to increase classification accuracy.

The diabetes diagnosis dataset used in the investigation by Muhammad et al. [10] was provided by a local hospital in Kano, Nigeria. Using this dataset, models were created with SVM, k-nn, LR, RF, NB, and GB. The random forest and gradient boosting predictive models achieved accuracies of 86.28 and 88.76 percent, respectively, and the receiver operating characteristic (ROC) curve indicates that these are the best models. The algorithm will help health care workers and medical experts identify and predict type 2 diabetes in those who are suspected of having the condition.

As per Ramesh et al.'s work [11], using the PIMA Indian dataset, an SVM model was used to estimate the risk factor for diabetes after feature scaling, selection, augmentation, and imputation. Using a tenfold stratified cross-validation approach, the model's performance parameters were assessed as 83.20 percent accuracy, 87.20 percent sensitivity, and 79 percent specificity. Patients can send data through smart devices like smartphones and smartwatches, but the underlying technology is not defined. This suggested strategy might help medical professionals make judgments at an early stage, depending on the threat predicted by the computer.

Proposed work

In Fig. 2, the suggested model is displayed. Two components make up the LoRa-based diabetes predictive model: the first is the communication module and the second is the processing module, whose flowcharts are given in Figs. 3 and 4. LoRa-enabled sensors on the patient's body first send the data to the gateway node, as explained in Algorithm 1, after which it is stored on a server (i.e., the dataset of patients). Following preprocessing of the data, normalization and ML classification algorithms are applied, which is the standard protocol for applying an ML technique.

Fig. 2 LoRa (HEADR) and machine learning-based healthcare model

Fig. 3 Flowchart for communication module

Fig. 4 Flowchart for processing module

Data gathering

The network's data are gathered to establish a patient's health state. Because LoRa (HEADR) protocol is used for transmission, sensors with LoRa capability may be able to identify diabetes patients' health states. Using a glucose sensor, we may examine the patient's blood for the presence of glucose. Depending on the situation, the glucose sensor can either be placed internally beneath the skin or externally on the skin as part of the continuous glucose monitoring system (CGM).

Data transmission

The LoRa (HEADR) protocol is used to transmit data from the patient to the gateway. The gateway forwards this information to the patient's dataset unit. The detected data may be sent regularly or whenever the patient's biomedical sensor readings significantly change. The new Hybrid Enhanced Adaptive Data Rate (HEADR) technique also enhances LoRa device implementation and resolves the sensor's data transmission range distribution issue. To evaluate network performance, this model demonstrates real-time data detection and transfer using the IoT protocol on the Contiki Cooja simulator.
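The paper does not give HEADR's exact rate-selection rule, so the following is only a minimal sketch of the adaptive-data-rate idea it builds on: pick the fastest spreading factor whose signal-to-noise limit is still met with a safety margin. The SNR limits are the standard LoRaWAN per-SF values; the 10 dB margin and the function itself are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative adaptive-data-rate step in the spirit of LoRa ADR/HEADR.
# SNR limits below are the standard LoRaWAN demodulation floors per spreading
# factor; the margin and function are assumptions for illustration only.
SNR_LIMIT_DB = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def pick_spreading_factor(measured_snr_db: float, margin_db: float = 10.0) -> int:
    """Return the lowest (fastest) spreading factor whose limit is met with margin."""
    for sf in sorted(SNR_LIMIT_DB):                  # SF7 first: highest data rate
        if measured_snr_db - margin_db >= SNR_LIMIT_DB[sf]:
            return sf
    return 12                                        # worst link: most robust SF

print(pick_spreading_factor(5.0))    # strong link -> SF7 (fast)
print(pick_spreading_factor(-5.0))   # weak link   -> SF10 (robust)
```

A lower spreading factor shortens on-air time and raises the data rate, which is what "improved switching" aims for; a rainy or noisy channel drives the node toward SF12.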

Dataset validation and ML classification

  • (i)

    Data preprocessing: To complete the processing module, a table with the data type and range of each attribute is created from the data collected from patients using LoRa-enabled sensors. Following preprocessing, several ML classification models are used. Our dataset contains the following variables: age, outcome, BMI, HbA1c, skin thickness, blood pressure, family history, glucose value, and insulin unit. The data contain 0 values in several places, which reduces the effectiveness of the algorithms used. Since Contiki is used to perform the data transmission component of the proposed HEADR model, real Indian diabetes data were gathered from two hospitals in the Indian state of Haryana.

  • (ii)
    Invalid data values: There are several ways to deal with invalid data values, as listed below:
    • Ignore the data: In the majority of circumstances, this is not advised because it can leave crucial information behind. The "skin thickness" and "insulin" columns in the supplied dataset include a large number of incorrect data points, although the "BMI," "glucose," and "blood pressure" values may be legitimate.
    • Employ mean values: If a mean value were used for the blood pressure column in our dataset, however, the model would yield incorrect findings. On various other kinds of datasets, mean values can be employed.
  • (iii)

    Correlation and heat map: Python is used for computing these correlations and heat maps. Correlation helps one understand characteristics more thoroughly, such as how one or more attributes depend on, or act as a catalyst for, another attribute. The heat map is a two-dimensional graph of the data in which each value is represented by a color in a matrix.

  • (iv)

    Cloud storage With the aid of the gateway node, the data gathered from the patient by the glucose sensor are saved in the dataset for that patient. After preprocessing the acquired data, a copy of the preprocessed data is sent to the cloud. Physicians may utilize the cloud-based database for upcoming research. The suggested method automatically records diabetic patients' glucose readings and sends them to a preprocessing unit for cloud storage.

  • (v)

    Normalization: We carry out the normalization step after the data cleaning phase, where we divide the entire dataset into training and testing sets. We set aside the test dataset and apply the training method to the training dataset. With the assistance of this training procedure, a training model is produced that operates on the feature values in the training data, the logic, and the algorithm. The aim of normalization is to bring all of the attributes to the same scale.

  • (vi)

    Machine learning techniques: ML techniques can be used only after the data have been presented appropriately. Medical diagnostic datasets may be effectively mined for information using machine learning (ML) techniques. We may use a variety of classification and ensemble algorithms, as used by Onan et al. [12], to predict diabetes using the diabetes dataset. The main goal of employing ML techniques is to understand how these classification methods are implemented, to determine their accuracy, and to identify the major characteristics that are crucial for diabetes prediction. ML methods may be divided into three groups: supervised learning, unsupervised learning, and reinforcement learning. In this prototype, we employ supervised learning, in which the model is trained and accurate predictions are made using a labeled dataset.
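As a rough sketch of steps (i) to (v) above, the zero-value handling, mean imputation, correlation, and normalization could look like this in Python with pandas; the sample values are made up for illustration, not taken from the hospital data:

```python
import numpy as np
import pandas as pd

# Hypothetical patient records; 0 marks a physiologically impossible (missing) reading.
df = pd.DataFrame({
    "glucose":        [148, 85, 0, 183],
    "blood_pressure": [72, 66, 64, 0],
    "skin_thickness": [35, 29, 0, 0],
    "insulin":        [0, 94, 88, 0],
    "bmi":            [33.6, 26.6, 23.3, 28.1],
})

# Step (ii): treat invalid zeros as missing, then mean-impute each column.
cols = ["glucose", "blood_pressure", "skin_thickness", "insulin"]
df[cols] = df[cols].replace(0, np.nan)
df[cols] = df[cols].fillna(df[cols].mean())

# Step (iii): correlation matrix, the basis of the heat map.
corr = df.corr()

# Step (v): min-max normalization brings every attribute onto a common [0, 1] scale.
scaled = (df - df.min()) / (df.max() - df.min())
```

In practice the choice of imputation (mean vs. dropping rows) follows the per-column reasoning given in item (ii).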

Classification and regression are further divisions of supervised learning:

  • (i)

    Gradient Boosting: The name "gradient boosting" refers to the fact that the gradient of the prediction error determines the target outputs for each case. In the work of Lai et al. [13], every new model makes predictions on each training example to reduce the error.

  • (ii)

    Random Forest: Several popular ensemble techniques exist, including bagging, boosting, gradient boosting, ada-boosting, averaging, and voting, as mentioned in Onan et al.'s research [14]. For forecasting diabetes, we employ Random Forest, a bagging ensemble technique. In the ensemble learning approach, Random Forest may be used for both classification and regression tasks.

  • (iii)

    Decision Tree: It fits the data with simple if–then–else decision rules (which can approximate even a sine curve piecewise). According to Li et al. [15], for a deeper decision tree, model complexity rises and model fitting becomes more difficult.

  • (iv)

    k-neighbors: We employed the k-nn approach, a supervised ML algorithm, to solve classification and regression problems [10]. k-nn is a lazy prediction method: it presupposes that similar objects are close to one another, which is usually the case for comparable data points.

  • (v)

    Logistic Regression The category of supervised learning classification techniques includes logistic regression as well, as mentioned in Butt et al. work [16]. By presenting the output result in binary form, which denotes 1 and 0, we can distinguish between individuals who are positive or negative for diabetes in this diabetic dataset. Logistic regression is often used to categorize our distinct data items.

  • (vi)

    Gaussian Naïve Bayes: According to Sneha and Gangil's work [9], Naive Bayes classifiers can be trained quickly and effectively in supervised settings. Only a small amount of training data is needed for Naive Bayes classifiers to estimate the classification parameters. Gaussian Naive Bayes supports only continuous-valued features and models them with a Gaussian (normal) distribution.

  • (vii)

    SVM: The Support Vector Machine approach is one of the supervised machine learning techniques. The SVM-generated hyperplane divides the data into two categories. In high-dimensional space, it may also produce one or more hyperplanes that can be used for classification or regression, as mentioned in Bondre et al.'s research [17].
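The seven classifiers listed above map directly onto scikit-learn estimators. The hyperparameters below are library defaults (the paper does not state its settings), so this is only a plausible setup rather than the authors' exact configuration:

```python
# One scikit-learn estimator per classifier named in the text;
# defaults are assumed since the paper does not give hyperparameters.
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

classifiers = {
    "GB":   GradientBoostingClassifier(random_state=0),
    "RF":   RandomForestClassifier(random_state=0),
    "DT":   DecisionTreeClassifier(random_state=0),
    "k-nn": KNeighborsClassifier(n_neighbors=5),
    "LR":   LogisticRegression(max_iter=1000),
    "GNB":  GaussianNB(),
    "SVC":  SVC(probability=True),   # probability=True enables ROC curves
}
```

Each estimator exposes the same fit/predict interface, which is what makes the side-by-side comparison in the next section straightforward.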

Execution of the whole process step-by-step

  1. The placement of LoRa (HEADR)-enabled sensors for the data collection on the bodies of patients in the network region (Algorithm 1).

  2. The gateway sends the gathered data to the database system.

  3. To determine the relative relevance of various characteristics, we create a correlation and heat map after preprocessing the data.

  4. Divide the entire dataset into training and testing datasets at a ratio of 0.75:0.25.

  5. Choose the machine learning algorithm from the list, which includes the GB, DT, SVM, k-nn, RF, LR, and GNB algorithms.

  6. Using the training dataset and the aforementioned ML approaches, build a classification model.

  7. Apply the same ML approach to test the trained model using the test dataset.

  8. Compare the performance of each classifier's predictions experimentally.

  9. Determine the optimum algorithm based on multiple analytical metrics.
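Steps 4 to 9 above can be sketched end to end as follows. Since the hospital data are not public, a synthetic dataset stands in, and Random Forest stands in for whichever classifier is chosen in step 5; the metrics are the ones reported in Table 4:

```python
# Sketch of steps 4-9: split, train, predict, and score one classifier.
# Synthetic data replaces the (non-public) patient dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

X, y = make_classification(n_samples=400, n_features=8, random_state=0)

# Step 4: 0.75 / 0.25 train-test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Steps 5-7: train a listed classifier and test it on the held-out set.
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Steps 8-9: the evaluation metrics used for comparison.
scores = {
    "precision": precision_score(y_te, pred),
    "recall":    recall_score(y_te, pred),
    "f1":        f1_score(y_te, pred),
    "auc":       roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]),
}
```

Looping this over the dictionary of classifiers and ranking by the resulting scores reproduces the comparison procedure of steps 8 and 9.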

Simulation results and analysis

The proposed HEADR is supported by improved switching, high dissemination, and dynamic data transmission range allocation, all of which boost the data rate. The hybrid improved switching manages data collision prevention, data transmission speed, and range allowance while routing between the sensors. HEADR chooses the dissemination conditions according to the user request criteria. The end device may be given a dynamic data transmission range with high dispersion that includes both continuous and dynamic data transfers. Based on the device's availability, the allocation of the data transmission range shifts to a steady state. The current ADR covers only the frequency ranges from 800 to 2100 MHz, which are the bands set aside for LTE.

The proposed HEADR encompasses millimeter waves in bands from 24 to 54 GHz. Base stations for the proposed prototype are linked through high-speed optical fiber connections to routers that use the Internet and IoT switching devices in the mobile network. The end device is used to adjust the channel and configure the device. A fixed data rate is scheduled for each HEADR request. The user request is made possible with the allocation of the downlink data transmission range. As seen in Fig. 5a, a certain number of LoRa packets were transmitted with a 1% duty cycle: over 30 min, we sent 600 or more packets across 20 nodes. In our simulation scenario and the suggested HEADR model, a node must abide by the 1% duty cycle constraint. This suggested solution improves the LoRaWAN device implementation and addresses the device data transmission range allocation issue, as demonstrated in Fig. 5b. Here, we implement the data processing module after gathering the patient's data.
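A quick budget check shows why 600+ packets across 20 nodes in 30 minutes sits comfortably within a 1% duty cycle. The 60 ms per-packet airtime below is an assumed figure for illustration; actual airtime depends on spreading factor, bandwidth, and payload size:

```python
# Back-of-the-envelope LoRa duty-cycle budget for the simulation scenario.
WINDOW_S = 30 * 60           # 30-minute window
DUTY_CYCLE = 0.01            # 1 % regulatory duty-cycle constraint
PACKET_AIRTIME_S = 0.06      # assumed ~60 ms on-air time per packet
NODES = 20

budget_s = WINDOW_S * DUTY_CYCLE                          # airtime allowed per node
max_packets_per_node = int(budget_s / PACKET_AIRTIME_S)   # packets one node may send
network_capacity = max_packets_per_node * NODES           # upper bound for the network

print(budget_s, max_packets_per_node, network_capacity)   # 18.0 300 6000
```

Under these assumptions each node could send up to 300 packets in the window, so the observed 600+ packets shared across 20 nodes is far below the duty-cycle ceiling.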

Fig. 5 a Packets received by 20 nodes in 30 min. b Constant and dynamic data transmission

Comparative analysis and discussion

As per our proposed HEADR algorithm, instant switching among the 125, 250, and 500 kHz data transmission bandwidths is used, as decided by the end devices. Across the high, mid, and low millimeter-wave bands, quick switching spans a range of 24–54 GHz while reducing latency and enhancing throughput. Compared to the LoRa ADR algorithm described in Lavric and Popa's research [18], the overall performance of the HEADR algorithm is better. The major differences between ADR and HEADR are shown in Table 2.

Table 2. ADR vs HEADR

| ADR | HEADR |
|---|---|
| The frequency range in ADR is 800–2100 MHz, which can handle tens to thousands of end devices | The proposed frequency range in HEADR is 24–54 GHz, which can handle more end devices |
| It is designed for situations with steady radio channel conditions | It is designed to cover unstable frequency conditions in case of rain or bad weather |
| The majority of ADR protocols now in use are designed for homogeneous end devices | It is designed for heterogeneous, i.e., dynamic, end devices |
| Data rate adaptation by end devices is not specified | Data rate adaptation by end devices is specified |

In the work of Yuvaraj and SriPreethaa [19], a model for the early prediction of diabetes was proposed employing several classifiers, with RF having the best accuracy (94%), which is lower than that of the suggested model (96.48%) without k-fold validation. A performance evaluation model incorporating classifiers such as DT, LR, SVM, k-nn, RF, GB, and NB is presented in Mekki et al.'s research [1]; the classifier with the best accuracy there is LR (77.60%), yet its accuracy is lower than that of the suggested model (87.84%).

The ML classification results are shown in Fig. 7a; after k-fold validation, RF ranks best in accuracy (96.28%), precision (94.56%), recall (90.24%), F-measure (92.35%), and ROC (95%), as shown in Fig. 7b. The results shown in Fig. 8a and b, which are based on the PIMA dataset, are also lower than our ML classification results. The structure of both datasets is shown in Table 3. The ROC curve and the values of the evaluation parameters are shown in Fig. 6 and Table 4.
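The k-fold boost reported for k-nn, LR, and GNB corresponds to replacing a single train-test split with cross-validated scoring. A sketch on synthetic data (the real patient dataset is not public):

```python
# 10-fold cross-validated accuracy for the three classifiers the paper
# singles out as benefiting from k-fold validation. Synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

results = {
    name: cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
    for name, clf in [("k-nn", KNeighborsClassifier()),
                      ("LR",   LogisticRegression(max_iter=1000)),
                      ("GNB",  GaussianNB())]
}
```

Averaging over 10 folds uses every record for both training and validation, which typically gives a more stable (and here, higher) accuracy estimate than one fixed split.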

Fig. 7 a Classifier accuracy. b Accuracy after k-fold validation

Fig. 8 a PIMA classifier accuracy. b PIMA accuracy after k-fold validation

Table 3. Dataset structure

Our dataset [20]

| Attribute | Type | Values | Units |
|---|---|---|---|
| Family history | Integer | 0–6 | count |
| Glucose value | Integer | 80–460 | mg/dl |
| Blood pressure | Integer | 80–190 | mm Hg |
| Skin thickness | Integer | 7–99 | mm |
| Insulin unit | Integer | 0–20 | µU/ml |
| BMI | Floating | 12.8–50.0 | kg/m² |
| HbA1c | Floating | 2.9–9.9 | mmol/mol |
| Age | Integer | 19–65 | years |
| Outcome | Binary | 0 (negative), 1 (positive) | |

PIMA dataset [3]

| Attribute | Type | Values | Units |
|---|---|---|---|
| Pregnancy | Integer | 0–17 | count |
| Plasma glucose | Real | 0–199 | mg/dl |
| Blood pressure | Real | 0–122 | mm Hg |
| Triceps skin fold | Real | 0–99 | mm |
| Serum insulin | Real | 0–846 | µU/ml |
| Body mass index | Real | 0–67.1 | kg/m² |
| Diabetes pedigree (ancestors) | Real | 0.078–2.42 | |
| Age | Integer | 21–81 | years |
| Outcome | Binary | 0 (negative), 1 (positive) | |

Fig. 6 ROC curve

Table 4. Evaluation parameters

| Classifier | Precision | Recall | F1-measure | AUC |
|---|---|---|---|---|
| GB | 0.935583 | 0.929878 | 0.932722 | 0.953563 |
| DT | 0.940063 | 0.908537 | 0.924031 | 0.943976 |
| RF | 0.936508 | 0.899390 | 0.917574 | 0.938861 |
| k-nn | 0.859589 | 0.765244 | 0.809677 | 0.860412 |
| GNB | 0.749254 | 0.765244 | 0.757164 | 0.837118 |
| LR | 0.756944 | 0.664634 | 0.707792 | 0.794397 |
| SVC | 0.751131 | 0.506098 | 0.604736 | 0.723255 |

Conclusion

In this research, we have presented a healthcare model that is used to track a patient's health, particularly diabetic symptoms, and to determine the severity of the patient's illness. In previous studies, monitoring and diagnosis were done independently. The suggested HEADR method is used to operate the LoRa protocol in an IoT network, and its behavior is assessed on the Contiki Cooja simulator, where we found that the data rate is improved by quick switching and by employing low-, mid-, and high-band millimeter waves. For data processing, the Random Forest ML classifier achieves the highest accuracy of 96.28 percent after k-fold cross-validation, as shown in Fig. 7b. Patients' datasets may be stored in the cloud after preprocessing so that medical professionals may later access more precise data to research this ailment. The patient monitoring method may also be physically deployed on a HEADR-based LoRa network.

Data availability

Data will be made available on reasonable request.

Declarations

Conflict of interest

There are no conflicts of interest associated with this investigation.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Mekki K, Bajic E, Chaxel F, Meyer F. A comparative study of LPWAN technologies for large-scale IoT deployment. ICT Express. 2019;5(1):1–7. doi: 10.1016/j.icte.2017.12.005. [DOI] [Google Scholar]
  • 2.Verma N, Singh S, Prasad D. A review on existing IoT architecture and communication protocols used in healthcare monitoring system. J Inst Eng Ser B. 2021 doi: 10.1007/s40031-021-00632-3. [DOI] [Google Scholar]
  • 3.Vizhi K, Dash A (2020) Diabetes prediction using machine learning. Int J Adv Sci Technol 29(6):2842–2852. 10.32628/cseit206463.
  • 4.Islam MS, Islam MT, Almutairi AF, Beng GK, Misran N, Amin N (2019) Monitoring of the human body signal through the Internet of Things (IoT) based LoRa wireless network system. Appl Sci 9(9). 10.3390/app9091884.
  • 5.Lavric A (2019) LoRa (long-range) high-density sensors for internet of things. J Sensors. 10.1155/2019/3502987.
  • 6.Abrardo A, Pozzebon A (2019) A multi-hop lora linear sensor network for the monitoring of underground environments: The case of the medieval aqueducts in Siena, Italy. Sensors (Switzerland) 19(2). 10.3390/s19020402. [DOI] [PMC free article] [PubMed]
  • 7.“Adaptive Data Rate | The Things Network.” https://www.thethingsnetwork.org/docs/lorawan/adaptive-data-rate/. Accessed January 15, 2022.
  • 8.Zou Q, Qu K, Luo Y, Yin D, Ju Y, Tang H. Predicting diabetes mellitus with machine learning techniques. Front Genet. 2018;9(November):1–10. doi: 10.3389/fgene.2018.00515. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Sneha N, Gangil T (2019) Analysis of diabetes mellitus for early prediction using optimal features selection. J Big Data 6(1). 10.1186/s40537-019-0175-6.
  • 10.Muhammad LJ, Algehyne EA, Usman SS. Predictive supervised machine learning models for diabetes mellitus. SN Comput Sci. 2020;1(5):1–10. doi: 10.1007/s42979-020-00250-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Ramesh J, Aburukba R, Sagahyroon A (2021) A remote healthcare monitoring framework for diabetes prediction. Healthc Technol Lett, pp 45–57. 10.1049/htl2.12010. [DOI] [PMC free article] [PubMed]
  • 12.Onan A (2022) Bidirectional convolutional recurrent neural network architecture with group-wise enhancement mechanism for text sentiment classification. J King Saud Univ Comput Inf Sci 34:2098–2117. 10.1016/j.jksuci.2022.02.025.
  • 13.Lai H, Huang H, Keshavjee K, Guergachi A, Gao X. Predictive models for diabetes mellitus using machine learning techniques. BMC Endocr Disord. 2019;19(1):1–9. doi: 10.1186/s12902-019-0436-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Onan A, Korukoğlu S, Bulut H. A hybrid ensemble pruning approach based on consensus clustering and multi-objective evolutionary algorithm for sentiment classification. Inf Process Manag. 2017;53(4):814–833. doi: 10.1016/j.ipm.2017.02.008. [DOI] [Google Scholar]
  • 15.Li Y, Li H, Yao H (2018) Analysis and study of diabetes follow-up data using a data-mining-based approach in new urban area of Urumqi, Xinjiang, China, 2016–2017. Comput Math Methods Med. 10.1155/2018/7207151. [DOI] [PMC free article] [PubMed]
  • 16.Butt UM, Letchmunan S, Ali M, Hassan FH, Baqir A, Sherazi HHR (2021) Machine learning based diabetes classification and prediction for healthcare applications. J Healthc Eng 2021. 10.1155/2021/9930985. [DOI] [PMC free article] [PubMed]
  • 17.Bondre VM, Umare PN, Patle PG (2016) Parallel artificial bee colony optimisation for solving curricula time-tabling problem. Int J Innov Res Comput Commun Eng 2016(1):1–8. 10.15680/IJIRCCE.2016.
  • 18.Lavric A, Popa V (2018) Performance evaluation of LoRaWAN communication scalability in large-scale wireless sensor networks. Wirel Commun Mob Comput. 10.1155/2018/6730719.
  • 19.Yuvaraj N, SriPreethaa KR. Diabetes prediction in healthcare systems using machine learning algorithms on Hadoop cluster. Cluster Comput. 2017 doi: 10.1007/s10586-017-1532-x. [DOI] [Google Scholar]
  • 20.Verma N, Singh S, Prasad D. Machine learning and IoT-based model for patient monitoring and early prediction of diabetes. Concurr Comput Pract Exp. 2022 doi: 10.1002/cpe.7219. [DOI] [Google Scholar]



Articles from Neural Computing & Applications are provided here courtesy of Nature Publishing Group
