Abstract
The growing use of artificial intelligence in healthcare has drawn significant attention as a means of enhancing early diagnosis and clinical decision-making, yet the centralized storage of patient information continues to pose serious privacy, regulatory, and interoperability problems. This paper presents Health-FedNet, a privacy-preserving federated learning framework for secure chronic disease prediction, designed to allow a model to be trained across multiple institutions without transferring raw clinical data. The framework integrates three main elements: calibrated differential privacy, Paillier homomorphic encryption, and secure aggregation, which together keep model updates protected throughout the training pipeline. An adaptive node-weighting scheme stabilizes convergence under heterogeneous data distributions by prioritizing high-quality institutional contributions. Health-FedNet was evaluated on the MIMIC-III clinical database for diabetes and hypertension prediction in a realistic simulated multi-hospital environment. Across five independent runs, the model reached 92% accuracy and an AUC-ROC of 0.94, with 95% confidence intervals confirming the stability of these results. Paired t-tests (p < 0.01) show that Health-FedNet is 12% more accurate than centralized and baseline federated models while reducing communication overhead by 41.6%. Privacy evaluations show that the proposed approach lowers membership inference risk from 20% to 5%. The framework complies with HIPAA and GDPR and remains robust in the presence of noisy or imbalanced clinical data.
Health-FedNet provides a viable foundation for secure federated healthcare analytics and shows strong potential for deployment in distributed hospital information systems.
Keywords: Federated learning, Privacy-preserving healthcare AI, Differential privacy, Homomorphic encryption, Adaptive node weighting, Chronic disease prediction, MIMIC-III, Secure medical data sharing
Subject terms: Computational biology and bioinformatics, Engineering, Health care, Mathematics and computing
Introduction
AI is boosting predictive modeling, disease detection, and treatment recommendation in the healthcare industry1. ML approaches enable healthcare providers to evaluate enormous amounts of patient data to improve decision-making, care, and system performance2. AI is used in imaging, pharmaceutical development, telemedicine, and disease diagnosis. However, AI in healthcare requires large, accurate, and often private data sources that can be evaluated efficiently, such as medical records, pharmacogenomics, and clinical notes4. This dependence raises concerns about data privacy, security, and regulation. Healthcare data is typically stored locally in hospitals, clinics, and research labs, making it challenging to assemble large datasets for complex predictive models. Centralized HIEs that aggregate identifiable patient data in a single database carry substantial privacy risk, as the Anthem data breach, which exposed over 79 million records, made clear. The US Health Insurance Portability and Accountability Act (HIPAA) and the European General Data Protection Regulation (GDPR) restrict data exchange and call for more creative ways to use data while protecting patient privacy3. Federated learning (FL) is a recent decentralized approach to machine learning model training. FL permits model training across institutions without exchanging raw data, preserving both data privacy and data locality.
However, current FL frameworks suffer from security issues with shared gradients, poor support for large-scale healthcare datasets, and difficulty maintaining robustness when learning from multiple data distributions4,5. The primary motive of this research is to bridge the gap between data utility and privacy in AI-driven healthcare applications. The central challenge in healthcare analytics is that data is decentralized across institutions while patient confidentiality and regulatory standards must be maintained6. Conventional centralized systems pose risks of data breaches and weaknesses in privacy compliance with regulations such as HIPAA and GDPR. Federated learning provides a paradigm for decentralized model training, but current frameworks lack the privacy guarantees, scalability, and robustness to heterogeneous data that healthcare requires7,8. Privacy, regulation, and data heterogeneity are among the obstacles healthcare institutions face in exchanging sensitive data, and federated learning alone addresses neither privacy nor data quality9. Health-FedNet encrypts aggregated data homomorphically and masks individual data contributions with differential privacy10,11.
The adaptive node weighting technique steers the global model toward higher-quality datasets, reducing bias and increasing robustness. This multimodal strategy supports HIPAA and GDPR compliance while fostering trust and collaboration12, and collaborative analytics proceeds without compromising data confidentiality. Mehendale noted that FL may change how healthcare and research data are exchanged, since data localization in federated systems reduces the breach risks inherent to centralized storage13.
They also observed major gaps in previous methods that fail to achieve both high model accuracy and privacy14. Narmadha and Varalakshmi examined FL's role in regulatory compliance and how it can navigate complex legal landscapes while sharing data across institutions15. Padmanaban introduced computational and privacy-focused healthcare AI/ML architectures16; these architectural solutions demonstrate secure and efficient data-driven healthcare9. Patel et al. created LEAF for privacy and scalability in federated healthcare networks and found that FL frameworks operate best with privacy built in. Pati et al. showed experimentally that FL models preserve healthcare data, lowering data exposure and improving privacy17,18.
Potter and Olaoye examined data-protected, networked machine learning FL systems19. Roy studied universal healthcare, privacy-preserving AI, and data breach prevention, exploring how AI could reduce risks in big data management20. Shah suggested enhanced FL system encryption for secure aggregation and model sharing21. Shanmugam et al. analyzed AI/ML application architectures, identified significant gaps in privacy-preserving techniques, and proposed data-safe solutions13. Ahir developed an FL-cryptographic privacy architecture for healthcare data security and scalability22. Tamraparani favored personalized solutions, showing that deep learning and AI personalization can increase healthcare data privacy. Torkzadehmahani et al. examined privacy-preserving AI in biomedical healthcare8,23.
Vizitu et al. suggest privacy-preserving AI can provide tailored precision medicine while maintaining data confidentiality. Wang et al. proposed a privacy-sensitive FL architecture for smart healthcare systems that addresses communication and scalability issues24, showing that FL can be applied in large healthcare organizations. Finally, Yang et al. examined FL for privacy-protected drug development data exchange25.
Federated learning (FL) has recently emerged as a practical paradigm for enabling collaborative healthcare model training without centralizing sensitive clinical records. Comprehensive reviews in medical imaging demonstrate that federated systems can support multi-institutional learning while maintaining data locality and reducing institutional privacy risks26, 27. However, real-world deployments face challenges including client heterogeneity, communication overhead, and vulnerability to privacy attacks. Such restrictions are compounded further in healthcare environments where institutional datasets are very imbalanced and non-IID. Previous research on optimization of heterogeneous networks underscores the fact that simple averaging algorithms tend to fail in converging in this situation28. This fosters the necessity of privacy-sensitive federated systems capable of stable convergence, robust protection of privacy, and scalable implementation in a variety of medical facilities.
Secure federated and blockchain-based healthcare has also been studied recently. Mazid et al.29 suggested a blockchain-based personalized FL model in an IoMT setup, with a special focus on safe medical data transfer across distributed machines. This is in line with the motivation of our work; Health-FedNet advances the field by combining calibrated differential privacy, adaptive node weighting, and Paillier-based encrypted aggregation of clinical data at hospital scale.
Recent studies in secure distributed learning have focused on effective privacy measures and the detection of malicious threats in cloud-based systems30. Emphasizing the significance of robust model aggregation, Fedmup31 presented a federated learning-based mechanism for predicting malicious users to secure distributed cloud environments. Quantum-based threat detection through quantum ML also improves privacy via quantum-principled predictive models of cloud networks32. Gupta et al.33 suggested a privacy-preserving medical data sharing algorithm based on differential and triphase adaptive learning, which showed greater resistance to inference attacks. Secure FL on industrial clouds has also been advanced with a new VM threat prediction technique and dynamic workload estimation34, with a focus on scale and real-time scheduling. In contrast to these works, our Health-FedNet framework concentrates on privacy guarantees specific to healthcare, including differential privacy, homomorphic encryption, and adaptive node weighting, to enable secure cross-institution medical model training with higher accuracy and greater robustness.
Recent publications (2023–2025) have proposed a number of more sophisticated privacy-conscious learning models applicable to healthcare prediction tasks, including federated disease detection pipelines, CNN-based clinical risk models, and IoMT security authentication mechanisms. These studies contextualize Health-FedNet by highlighting current trends in privacy-preserving medical AI and demonstrating the need for federated solutions that can operate safely across decentralized healthcare data silos.
Several recent publications relate directly to the objectives of Health-FedNet. Jayalakshmi and Tamilvizhi proposed a cross-stage recurrent FL architecture to enhance the privacy of diabetes prediction35. Muthukumar et al. trained a CNN-based CKD prediction pipeline36, whereas Jayalakshmi et al. conducted an extensive analysis of federated healthcare models37. Furthermore, Riya et al. proposed an IoMT-oriented encryption and authentication scheme for secure clinical data exchange49. These works support the need to integrate privacy, robustness, and efficiency in current federated clinical prediction systems.
Health-FedNet is developed with a two-fold purpose: highly accurate chronic disease prediction and strong privacy safeguards. Federated learning keeps all patient records within each institution, while homomorphic encryption and differential privacy ensure that only encrypted, noise-protected updates are transferred during training. The model thus delivers accurate predictions for diabetes, hypertension, and heart failure while protecting patient information against leakage, reconstruction, and unauthorized access at every stage of the learning process.
Recent studies have emphasized that data heterogeneity remains a central challenge in medical federated learning, particularly when institutional data distributions differ significantly across sites. Darzi et al. demonstrated that aligning representation spaces using vision transformers can substantially improve convergence stability and robustness under heterogeneous medical data settings, reinforcing the importance of heterogeneity-aware learning mechanisms in federated healthcare systems38. Similarly, comparative evaluations of federated learning strategies for COVID-19 detection reveal that naïve aggregation approaches often suffer from degraded performance in non-IID clinical environments, highlighting the need for adaptive aggregation and robustness-aware designs39. These findings directly motivate the heterogeneity-aware architecture of Health-FedNet.
The study summaries in Table 1 illustrate how privacy-preserving healthcare intelligence has progressed, spanning federated learning models with differential privacy and secure aggregation on one hand and sophisticated biomedical applications involving quantum computing, deep pathological models, explainable AI, and genomic prediction systems on the other. Although these papers make important contributions to healthcare analytics, they either lack built-in federated privacy, do not combine homomorphic encryption with differential privacy, or omit adaptive node weighting in heterogeneous medical settings. Such constraints underscore the need for a single architecture, such as Health-FedNet, that simultaneously addresses privacy, scalability, robustness, and real-world deployment in distributed healthcare environments.
Table 1.
Comparative table of previous studies in privacy-preserving healthcare AI.
| Attributes | Madhavi et al.40 | Patel et al.17 | Moon and Lee41 | Bikku et al.42 | Bikku et al.43 | Srinivasu et al.44 | Bikku et al.45 |
|---|---|---|---|---|---|---|---|
| Techniques | Adaptive node weighting, DP | DP + secure aggregation | Secure FL communication | Quantum algorithm for biomedical drug discovery | DL for histopathology image classification | Explainable AI for stroke prediction | Gene expression biclustering with LSTM-SVM |
| Methodology | Prioritizes high-quality client nodes | LEAF privacy-preserving FL architecture | Communication-efficient secure FL | Quantum-enhanced medical data computation | CNN-based cancer histopathology learning | XAI-based clinical feature evaluation | LSTM + SVM hybrid model for genomic prediction |
| Key contributions | Improved privacy and accuracy | Scalable FL model for healthcare data | Lower communication cost | Advances in quantum-assisted medical AI | High-accuracy ovarian cancer diagnosis | Transparent clinical decision support | Efficient healthcare gene pattern discovery |
| Limitations | Limited real-world deployment | High communication overhead | Scalability challenges | Not FL/DP focused | No federated privacy layer | Not federated learning-based | Lacks privacy/FL integration |
| Applications | Decentralized diagnostics | Hospital FL collaboration | IoT patient monitoring | Drug discovery and pharma AI | Digital pathology cancer detection | Stroke prediction and explainability | Precision healthcare genomics |
Healthcare data distributions differ markedly among hospitals: across MIMIC-III units, anemia prevalence ranges from 8.2% to 42.5%, and diabetes rates vary by more than 30% across admission locations. This patient-mix variation produces non-IID data and poor convergence under typical FL.
To position Health-FedNet among state-of-the-art DP-FL and HE-FL solutions, we summarize the accuracy, AUC, privacy budgets (ε), and methodological differences reported in modern federated healthcare research. Table 2 lists the key performance indicators obtained by studies that used similar datasets and privacy mechanisms.
Table 2.
Comparative metrics from recent federated learning studies in healthcare.
| Study | Dataset | Accuracy | AUC | Privacy ε | Method |
|---|---|---|---|---|---|
| Patel et al.17 | MIMIC-III | 0.88 | 0.90 | 3.0 | DP + Secure aggregation |
| Khalid et al.7 | eICU | 0.90 | 0.91 | 5.0 | Differential privacy only |
| Yin et al.13 | ICU Waveform | 0.86 | 0.87 | N/A | Standard federated learning |
| Health-FedNet (Proposed) | MIMIC-III | 0.92 | 0.94 | 1.53 | DP + Homomorphic encryption + Adaptive weighting |
Significant values are in bold.
In practice, sensitive medical data in a clinical environment is distributed among many hospitals, labs, and diagnostic facilities, and aggregating this data in a central place is not viable for security, regulatory, and ethical reasons. Traditional machine learning pipelines require the direct exchange of data, which creates risks of privacy leakage and cyber-attacks. There is therefore an acute need for a secure, cooperative learning system that preserves patient confidentiality while delivering high-quality predictive healthcare analytics. Health-FedNet addresses this dilemma with differential privacy, homomorphic encryption, and adaptive node weighting to achieve effective, privacy-aware federated training.
While differential privacy and homomorphic encryption have been individually explored in federated healthcare learning, Health-FedNet advances the state of the art through their unified integration with adaptive node weighting under heterogeneous clinical distributions. The contribution of this work lies in system-level design, rigorous empirical validation, and robustness analysis rather than proposing new cryptographic primitives.
Contribution
The key contributions of this article on Health-FedNet can be summarized as follows:
Privacy-preserving framework: Health-FedNet is a federated learning architecture that incorporates differential privacy and homomorphic encryption to protect sensitive patient data during model training.
Better diagnostic performance: We show that Health-FedNet outperforms centralized and traditional federated learning models (AUC-ROC: 0.94; accuracy: 0.92).
Reduced communication overhead: Health-FedNet reduces communication costs by 41.6% compared with standard federated learning models, enabling scalability across large healthcare networks.
Adherence to privacy regulations: The framework complies with HIPAA and GDPR and reduces the risk of data leakage by over 50% compared with non-private federated learning models.
Potential real-time application: Health-FedNet's applicability to real-time healthcare, such as continuous patient monitoring and disease outbreak prediction, is examined with particular attention to data heterogeneity and computational efficiency.
Compared with current methods that use either differential privacy or homomorphic encryption alone, Health-FedNet combines both with an adaptive node-weighting mechanism that continuously raises or lowers the contribution of each institution according to the quality and consistency of its data. This design delivers robustness on heterogeneous healthcare datasets together with high communication efficiency and stringent privacy guarantees. Beyond simulation, the study details the model architecture, hyperparameters, and statistical testing plan, ensuring transparency and reproducibility. Comparison against centralized, conventional federated, and privacy-preserving federated models shows that Health-FedNet achieves state-of-the-art results, with the greatest improvements in diagnostic accuracy, reduced communication overhead, and enhanced privacy protection, making it a viable, innovative technology rather than a repetition of previous approaches.
In order to make it clearer and mathematically consistent, Table 3 lists all the variables and symbols that will be used in the formulation of Health-FedNet.
Table 3.
Summary of notations and mathematical variables used in Health-FedNet.
| Symbol | Description |
|---|---|
| θ | Global model parameters |
| θi | Local model parameters of institution i |
| θ˜i | Privacy-preserved encrypted update from institution i |
| wi | Adaptive weight of institution i for aggregation |
| qi | Data quality score (accuracy, sample size, and stability) for client i |
| ni | Number of samples at institution i |
| n | Total number of training samples across all institutions (n = ∑i ni) |
| ε | Differential privacy budget |
| δ | DP failure probability |
| σ | Noise scale used in Gaussian mechanism |
| C | Gradient clipping threshold |
| E(·) | Paillier encryption function |
| D(·) | Paillier decryption function |
| pk, sk | Public key and secret key for HE scheme |
| L(θ) | Loss function |
| t | Training round/communication iteration |
| α | Convergence constant in theoretical bound |
| ρ | Bound on gradient deviation to limit leakage |
Privacy-preserving federated learning for decentralized healthcare data
The essence of the issue is to develop a federated learning model of decentralized healthcare information that is capable of ensuring high levels of privacy, regulatory adherence, and enhanced model performance. It aims at ensuring privacy during training and aggregation, as well as solving some of the major issues, such as data heterogeneity, overhead during communications, and scalability constraints.
Let Di be the local dataset of healthcare institution i, i = 1, 2, …, N, where N is the number of participating institutions. Each institution trains its local model by minimizing a local loss function Li(θ; Di). The global model is obtained by weighted federated averaging of the encrypted, privacy-preserved local models.
More complex privacy methods, such as differential privacy and homomorphic encryption, are incorporated in the following way:
Differential privacy: Gaussian noise N(0, σ²) is added to each local model update, θ̃i = θi + N(0, σ²).
Homomorphic encryption: the privacy-preserved model update is encrypted as E(θ̃i), which can then be aggregated at the central server directly on ciphertexts.
The adaptive node weighting mechanism dynamically adjusts the contribution of each institution based on a unified quality score qi that accounts for both local dataset size and local model performance. To incorporate client reliability, a unified quality score is computed using Eq. (1):
$$q_i = \alpha\,\frac{n_i}{n} + (1 - \alpha)\,\mathrm{Acc}_i \tag{1}$$
In Eq. (1), the parameter α controls the balance between institutional data volume (ni) and local model reliability (Acci). In this study, α = 0.5 was selected to assign equal importance to both components, avoiding aggregation bias toward large institutions or over-reliance on potentially noisy local validation accuracy. This fixed setting provides a stable and interpretable balance under heterogeneous clinical partitions. The effectiveness of this design choice is empirically validated through the ablation study in Sect. 4.1.4, where disabling adaptive weighting resulted in a 4.6% reduction in global accuracy under medium-noise conditions, confirming its role in stabilizing convergence rather than introducing aggregation bias.
The softmax-based weight assignment is then performed using Eq. (2):
$$w_i = \frac{\exp(q_i)}{\sum_{j=1}^{N}\exp(q_j)} \tag{2}$$
The global aggregation step incorporates these weights according to Eq. (3):
$$\theta^{t+1} = \sum_{i=1}^{N} w_i\,\tilde{\theta}_i^{\,t} \tag{3}$$
where ni denotes the size of local dataset Di, Acci is the institution-specific local validation accuracy, and α is the trade-off parameter (set to 0.5 in our experiments). This formulation unifies the weighting mechanism with a single softmax-based rule across the framework, replacing the previous linear normalization.
The adaptive node weighting mechanism prioritizes clients based on data quality, model consistency, and sample diversity, assigning higher weights to clinically reliable institutions. This reduces the influence of noisy or skewed updates, stabilizes global convergence, and mitigates outlier bias common in medical datasets.
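The quality-score, softmax, and weighted-aggregation steps described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the sample counts and validation accuracies are invented, α = 0.5 follows the setting stated in the text, and the aggregation here operates on plaintext updates (in Health-FedNet these would be privacy-processed and encrypted first).

```python
import numpy as np

ALPHA = 0.5  # trade-off between data volume and local accuracy (paper's setting)

def quality_scores(sample_counts, accuracies, alpha=ALPHA):
    """Eq. (1): q_i = alpha * (n_i / n_total) + (1 - alpha) * Acc_i."""
    n = np.asarray(sample_counts, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    return alpha * n / n.sum() + (1.0 - alpha) * acc

def softmax_weights(q):
    """Eq. (2): w_i = exp(q_i) / sum_j exp(q_j), shifted by max(q) for stability."""
    e = np.exp(q - np.max(q))
    return e / e.sum()

def aggregate(updates, weights):
    """Eq. (3): convex combination of (already privacy-processed) client updates."""
    return sum(w_i * u for w_i, u in zip(weights, updates))

# Toy example: three institutions with unequal data volume and accuracy.
q = quality_scores([8000, 2000, 500], [0.91, 0.85, 0.70])
w = softmax_weights(q)
```

The softmax guarantees the weights sum to one, so the aggregate stays a convex combination even when institution sizes differ by an order of magnitude.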
Local model training with regularization
Each institution optimizes its local loss with L2 regularization, as expressed in Eq. (4):
$$L_i(\theta; D_i) = \frac{1}{n_i}\sum_{(x,y)\in D_i} \ell\big(f_\theta(x), y\big) + \lambda \lVert\theta\rVert_2^2 \tag{4}$$
where λ is the regularization parameter.
Standard federated averaging baseline (FedAvg)
For comparison, the traditional FedAvg rule is computed using Eq. (5):
$$\theta^{t+1} = \sum_{i=1}^{N} \frac{n_i}{n}\,\theta_i^{\,t+1} \tag{5}$$
which relies solely on data proportionality without incorporating model-quality-based adaptive weighting. This serves as the baseline against which the adaptive scheme of Eqs. (1)–(3) is evaluated.
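For contrast, the FedAvg baseline of Eq. (5) weights each client purely by its data proportion. A minimal sketch (toy parameter vectors, not the authors' code):

```python
import numpy as np

def fedavg(local_params, sample_counts):
    """Eq. (5): theta = sum_i (n_i / n) * theta_i, the data-proportional average."""
    n = float(sum(sample_counts))
    return sum((n_i / n) * np.asarray(p, dtype=float)
               for n_i, p in zip(sample_counts, local_params))

# A client with 3x the data contributes 3x the weight, regardless of quality.
theta = fedavg([[1.0, 1.0], [3.0, 3.0]], [1, 3])
```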
Differential privacy guarantees
The privacy guarantee for neighboring datasets is formalized using Eq. (6):
$$\Pr[M(D)\in S] \le e^{\varepsilon}\,\Pr[M(D')\in S] + \delta \tag{6}$$
where D and D′ are neighboring datasets differing by one record, M is the randomized training mechanism, and S is any set of possible outputs.
Differential privacy noise is injected into client-side gradients before encryption, following the Gaussian mechanism and privacy composition rules originally developed for neural networks46. This ensures that the contribution of any single patient record remains indistinguishable, providing a mathematically provable privacy guarantee.
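A hedged sketch of this client-side step in the style of DP-SGD: each gradient is clipped to an L2 norm of at most C, then Gaussian noise with scale σ·C is added before encryption. Function and variable names are illustrative; in the framework, σ would come from the calibration of Eq. (9).

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_gradient(grad, clip_c, sigma, rng=rng):
    """Clip the per-client gradient to L2 norm <= clip_c, then add
    N(0, (sigma * clip_c)^2) Gaussian noise elementwise."""
    grad = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_c)   # leaves small gradients untouched
    return clipped + rng.normal(0.0, sigma * clip_c, size=grad.shape)
```

Clipping bounds the sensitivity of each update to C, which is what makes the subsequent Gaussian noise a valid (ε, δ)-DP mechanism.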
Secure aggregation with homomorphic encryption
The encrypted global update is computed using the additive homomorphism property, as shown in Eq. (7):
$$E\!\left(\sum_{i=1}^{N} w_i\,\tilde{\theta}_i\right) = \prod_{i=1}^{N} E(\tilde{\theta}_i)^{w_i} \tag{7}$$
This leverages the additive homomorphism of the Paillier scheme: multiplying ciphertexts adds the underlying plaintexts, and raising a ciphertext to a (quantized integer) weight scales its plaintext.
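The additive property behind Eq. (7) can be demonstrated with a textbook Paillier implementation. This is a didactic sketch only: the primes are tiny and insecure, real-valued weights would need quantization to integer exponents, and a production system would use a vetted library with keys of at least 2048 bits.

```python
from math import gcd
import secrets

# Minimal textbook Paillier with toy primes (NOT secure, demo only).
P, Q = 293, 433
N = P * Q
N2 = N * N
LAM = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)   # lcm(p-1, q-1)
MU = pow(LAM, -1, N)                           # valid because g = N + 1

def encrypt(m):
    """E(m) = (1 + N)^m * r^N mod N^2 for random r coprime to N."""
    while True:
        r = secrets.randbelow(N - 1) + 1
        if gcd(r, N) == 1:
            break
    return (pow(1 + N, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    """D(c) = L(c^LAM mod N^2) * MU mod N, with L(x) = (x - 1) // N."""
    return ((pow(c, LAM, N2) - 1) // N) * MU % N

def he_add(c1, c2):
    """Additive homomorphism: E(a) * E(b) mod N^2 decrypts to a + b."""
    return (c1 * c2) % N2
```

The server can thus sum (and integer-scale, via `pow(c, k, N2)`) encrypted updates without ever seeing a plaintext.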
Adaptive node weighting
The stable softmax weighting rule used in Health-FedNet is defined in Eq. (8):
$$w_i = \frac{\exp(q_i)}{\sum_{j=1}^{N}\exp(q_j)} \tag{8}$$
In this work, Eq. (8) is adopted as the unified weighting rule to ensure a convex combination and numerical stability; the earlier linear normalization is deprecated to maintain consistency across the aggregation process. This softmax-based weighting enables stable optimization under heterogeneous node contributions and prevents large data-holding institutions from dominating aggregation.
Gaussian noise calibration for differential privacy
The Gaussian noise required to meet (ε, δ)-DP is calibrated using Eq. (9):
$$\sigma = \frac{C\sqrt{2\ln(1.25/\delta)}}{\varepsilon} \tag{9}$$
where ε is the privacy budget, δ is the failure probability, and C is the gradient clipping threshold, which bounds the sensitivity of each update.
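The calibration of Eq. (9) is a one-line computation. A small caveat worth hedging: the classical analytic guarantee behind this formula is stated for ε ≤ 1, so for larger budgets a tighter accountant would normally be used; this sketch simply evaluates the formula as written.

```python
import math

def gaussian_sigma(epsilon, delta, sensitivity):
    """Eq. (9): sigma = C * sqrt(2 * ln(1.25 / delta)) / epsilon.
    Classical bound is only proven for epsilon <= 1."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
```

As expected, a smaller ε (stronger privacy) demands more noise.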
Convergence guarantee for federated optimization
The convergence rate is ensured under smoothness and bounded variance conditions using Eq. (10):
$$\mathbb{E}[L(\theta_t)] - L(\theta^{*}) \le \frac{\alpha}{\sqrt{t}} \tag{10}$$
where θt is the global model at iteration t, L(θ*) is the optimal loss, and α is the convergence constant.
The convergence bound in Eq. (10) assumes the following conditions:
- L-smooth loss functions, i.e., ‖∇L(θ) − ∇L(θ′)‖ ≤ L‖θ − θ′‖;
- bounded gradient variance, E‖∇Li(θ) − ∇L(θ)‖² ≤ σg²;
- client heterogeneity modeled using the FedProx framework, with µ-proximal regularization.
To verify that the assumptions used in Eq. (10) hold under real-world heterogeneous settings, we conducted an empirical convergence test using the MIMIC-III institutional partitions. Each hospital subset was treated as a separate client with different data sizes and feature distributions. Across all clients, the gradient norms remained bounded, with a maximum observed value of ‖∇L‖ ≤ 3.1, confirming the bounded-variance condition. The loss function decreased smoothly over 20 communication rounds, which supports the L-smoothness assumption used in the theoretical proof. These empirical observations show that the convergence behaviour predicted by Eq. (10) holds even under practical non-IID and heterogeneous healthcare conditions.
Differential privacy noise with variance σ 2 is incorporated in the convergence rate using composition-aware privacy accounting, where the overall bound is expressed as:
$$\mathbb{E}[L(\theta_t)] - L(\theta^{*}) \le \frac{\alpha}{\sqrt{t}} + O(\sigma^{2})$$
indicating the additional term introduced by differential privacy perturbations and federated heterogeneity.
The convergence assumptions follow the theoretical analysis for federated optimization under non-IID clinical data distributions28, ensuring stable convergence by incorporating smoothness bounds and gradient variance constraints.
Gradient leakage risk mitigation
To mitigate gradient leakage, the deviation between local and aggregated gradients is constrained using Eq. (11):
$$\lVert \nabla L_i(\theta) - \nabla L(\theta) \rVert \le \rho \tag{11}$$
where ρ is a bound on the gradient deviation, ensuring model updates do not leak sensitive information.
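One way to enforce the constraint of Eq. (11) is to project each local gradient back into a ρ-ball around the aggregated gradient. This is an illustrative sketch under the assumption that the previous round's aggregate is available for comparison; the function and variable names are hypothetical.

```python
import numpy as np

def bound_deviation(local_grad, global_grad, rho):
    """Project local_grad so that ||local_grad - global_grad|| <= rho,
    limiting how much client-specific signal one update can leak (Eq. 11)."""
    local_grad = np.asarray(local_grad, dtype=float)
    global_grad = np.asarray(global_grad, dtype=float)
    dev = local_grad - global_grad
    norm = np.linalg.norm(dev)
    if norm <= rho:
        return local_grad                      # already within the bound
    return global_grad + dev * (rho / norm)    # shrink onto the rho-sphere
```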
Rather than relying solely on the theoretical gradient-leakage bound of Eq. (11), we empirically evaluate privacy leakage using three adversarial attacks, namely membership inference (MI), property inference (PI), and gradient inversion (GI), on 10% of the MIMIC-III test samples. These metrics offer measurable indicators of information exposure risk under various federated settings.
Table 4 summarizes how well Health-FedNet withstands different adversarial attacks. Compared with Non-Private FL and DP-FL, Health-FedNet shows much lower membership and property inference risks, and achieves a higher PSNR under gradient inversion attacks. This indicates that the framework provides stronger privacy protection across multiple threat models.
Table 4.
Adversarial attack success rates (Lower is better).
| Attack | Non-private FL | DP-FL | Health-FedNet |
|---|---|---|---|
| Membership inference (%) | 43.2 | 18.5 | 5.7 |
| Property inference (%) | 37.9 | 15.2 | 6.3 |
| Gradient inversion (PSNR ↑) | 11.2 dB | 23.5 dB | 31.8 dB |
Significant values are in bold.
These results confirm that Health-FedNet significantly mitigates adversarial leakage risks compared to baseline models, thereby empirically validating its privacy-preserving capacity instead of relying on unverified theoretical bounds.
Health-FedNet weights each institution's contribution by model quality and dataset size and provides differential privacy guarantees for healthcare AI. We improve privacy, scalability, and model robustness for secure, decentralized healthcare data analysis. Model training and HIPAA/GDPR compliance are secured using a federated learning system with differential privacy and homomorphic encryption, while adaptive node weighting prioritizes high-quality data sources to improve global model robustness and prediction on heterogeneous healthcare datasets.
We comprehensively evaluate the framework on the MIMIC-III clinical dataset, showing significant improvements in diagnostic accuracy and privacy assurance over baseline centralized and federated models, and we propose a computationally efficient, low-communication-overhead architecture for large-scale, multi-institutional cooperation. The remainder of the paper is organized as follows. The Introduction addresses AI in healthcare, privacy, and federated learning, and states the problem of centralized and federated learning models for healthcare data analytics. The proposed framework describes Health-FedNet's adaptive node weighting, homomorphic encryption, and differential privacy. The methodology covers data preprocessing, model training, federated aggregation, and assessment metrics. Results and Discussion evaluate diagnostic accuracy, privacy, and scalability against MIMIC-III baselines. The conclusion summarizes contributions, practical implications, and future research goals for privacy-preserving healthcare AI.
Methods
This section describes the design and assessment of Health-FedNet, a federated learning platform for privacy-preserving healthcare analytics. As discussed in the Introduction, centralized and non-private federated methods suffer from privacy, regulatory compliance, and computational inefficiencies. Health-FedNet improves multi-institutional performance, security, and compliance through privacy-preserving and adaptive technologies. The following subsections describe the architectural components, algorithmic steps, and evaluation metrics used to assess Health-FedNet in realistic healthcare scenarios.
Ethical approval was not newly required for this study, as it used the de-identified MIMIC-III database, which has received prior approval from the Institutional Review Board of the Beth Israel Deaconess Medical Center.
Dataset description
Health-FedNet is evaluated using de-identified health data from over 40,000 critical care patients obtained from the publicly available MIMIC-III clinical database47. The dataset includes demographics, vital signs, laboratory test results, medication records, and diagnostic codes. In this study, MIMIC-III data subsets were used to predict diabetes and hypertension. Heart failure is mentioned as a motivating example; however, experimental evaluation focuses on diabetes and hypertension to ensure controlled validation. Extension to additional chronic conditions remains future work.
Categorical variables were encoded, numerical features were normalized, and missing values were imputed using median imputation. Patient records were partitioned across simulated institutions to replicate a decentralized healthcare setting with substantial data heterogeneity. Table 5 provides an overview of the dataset used in this study.
Table 5.
Dataset description for chronic disease prediction.
| Attribute | Description | Count/Range | Example |
|---|---|---|---|
| Patients | Total number of patients | 40,000 + | – |
| Age | Patient age | 18–89 | 65 |
| Gender | Male/Female distribution | 55% / 45% | Male |
| Vitals | Heart rate, blood pressure, etc. | 10 + features | Systolic BP: 120 |
| Laboratory tests | Test results for clinical markers | 20 + tests | Hemoglobin: 13.5 g/dL |
| Diagnoses | ICD-9 codes for chronic diseases | 15 + categories | Diabetes: ICD-9 250 |
| Data distribution | Number of simulated institu-tions | 5 (Hospitals/Clinics) | Hospital A: 8,000 patients |
| Missing data handling | Imputation techniques for missing features | Median imputation, forward fill | – |
| Normalization | Scaling numerical features | Min–Max Scaling (0–1) | BP: 0.75 |
Although MIMIC-III is a single-source dataset, institutional heterogeneity was simulated using patient stratification by ICU type, admission source, and prevalence imbalance. Nevertheless, true inter-hospital variations such as heterogeneous electronic health record schemas and regional demographics remain a limitation and will be addressed using multi-center datasets in future work.
Ethics approval and consent to participate
This study used the MIMIC-III database, which contains de-identified health information from patients admitted to intensive care units at the Beth Israel Deaconess Medical Center. The MIMIC-III database has received prior ethical approval from the Institutional Review Board of the Beth Israel Deaconess Medical Center (IRB protocol number 2001-P-001315/14). Because the data are retrospective and de-identified, the requirement for informed consent was waived. The use of MIMIC-III data complied with the approved data use agreement and the PhysioNet Credentialed Health Data License Agreement.
To enhance dataset transparency, the final cohort comprised 40,102 unique patients, including 12,438 patients diagnosed with diabetes and 9122 patients diagnosed with hypertension, corresponding to moderate class imbalance (diabetes: 31.0%, hypertension: 22.7%). The remaining patients served as non-disease controls. Selected features included demographic attributes, vital signs, laboratory measurements, medication history, comorbidity indicators, ICU unit types, and temporal trends, chosen for their clinical relevance and availability across care settings.
Proposed model: Health-FedNet
Overview
Health-FedNet is a federated learning framework that safely trains machine learning models across decentralized healthcare facilities without exposing raw patient data, while aiming for high diagnostic accuracy, scalability, and robustness. It achieves this through three main elements:
Differential Privacy: Protects individual data contributions by adding calibrated noise to model updates.
Homomorphic Encryption: Secures the aggregation of model updates by allowing computation directly on encrypted values.
Adaptive Node Weighting: Dynamically weights institutional contributions according to data quality and consistency, improving robustness under data heterogeneity.
Model updates are documented and interpreted using SHAP-based feature attribution, which provides insight into how individual clinical variables contribute to global model learning and institutional influence.
Health-FedNet operates in iterative cycles: local models are trained at the participating institutions and then aggregated into a global model on a central server, and the process repeats until convergence. A mathematical formulation of the framework and its implementation is given in the following subsections.
Figure 1 shows the end-to-end workflow of Health-FedNet. Patient data are stored securely at each institution and encrypted before any processing. Differential privacy is applied locally, while homomorphic encryption and secure aggregation protect the updates in transit. Enclave-based confidential computing ensures tamper-proof execution of the local models. The encrypted updates are aggregated centrally without exposing raw data, yielding a global model. This final model underpins the predictive analytics and clinical decision-support systems and can predict chronic diseases privately across distributed healthcare systems.
Fig. 1.

Model architecture.
Mathematical model
Let N denote the number of participating institutions and Di the local dataset of institution i (i = 1, 2, ..., N). The aim is to jointly optimize a global model that aggregates the knowledge of all institutions without sharing any raw data.
- Local Model Training: Each institution minimizes its local loss function Li(θ; Di) to train a local model θi. The objective is:

  θi = argmin_θ [ Li(θ; Di) + λ‖θ‖² ]    (12)

where ‖θ‖² is the regularization term and λ is the regularization parameter.
- Privacy-Preserving Update: To prevent leakage of sensitive information, differential privacy is applied by adding Gaussian noise to the clipped model gradients:

  θ̃i = θi + ni,  ni ∼ N(0, σ²I)    (13)

The noise scale σ is calibrated to control the privacy budget ε:

  σ = C · √(2 ln(1.25/δ)) / ε    (14)

where C is the gradient clipping norm and δ is the probability of failure in privacy protection.
To protect individual data points, Health-FedNet employs a calibrated Gaussian differential privacy mechanism, denoted N(0, σ²). The noise scale is chosen with the moments accountant method, which provides a favourable trade-off between privacy and utility. Gradients are first clipped to a fixed norm C, and noise is added afterwards; this bounds sensitivity and prevents any single patient record from making a disproportionate contribution. Adding noise in this manner ensures that every update is (ε, δ)-DP and protects patient-level medical attributes against inversion and membership inference attacks.
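The clip-then-add-noise step can be sketched as follows. This is an illustrative implementation, not the authors' code; C, ε, and δ follow the configuration reported later (C = 1.2, ε = 1.5, δ = 10⁻⁵), with σ set by the Gaussian-mechanism calibration of Eq. (14), while the gradient values are arbitrary:

```python
import math
import random

def clip_and_noise(grad, C=1.2, eps=1.5, delta=1e-5, rng=random):
    """Clip a gradient vector to L2 norm C, then add calibrated Gaussian noise."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, C / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    # Gaussian-mechanism calibration: sigma = C * sqrt(2 ln(1.25/delta)) / eps
    sigma = C * math.sqrt(2 * math.log(1.25 / delta)) / eps
    return [g + rng.gauss(0.0, sigma) for g in clipped]

noisy_update = clip_and_noise([3.0, 4.0])
```

Clipping before adding noise is what bounds per-record sensitivity; in the full pipeline, the noisy update is what gets encrypted and transmitted.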
- Encrypted Model Updates: The privatized updates θ̃i are homomorphically encrypted with an encryption function Enc under the public key pk:

  ci = Enc_pk(θ̃i)    (15)

This ensures that the server can combine updates without decrypting them.
- Secure Aggregation: The central server aggregates the encrypted updates of all institutions:

  c = ∏_{i=1}^{N} ci    (16)

Using the additive homomorphism of the encryption scheme, the server can compute:

  Dec_sk(c) = Σ_{i=1}^{N} θ̃i    (17)
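The additive homomorphism used in Eqs. (16) and (17) can be demonstrated with a toy Paillier implementation: multiplying ciphertexts yields an encryption of the sum of the plaintexts. This is a sketch with demo-sized primes for illustration only; the deployed module uses 2048-bit keys:

```python
import math
import random

def keygen(p=1789, q=1861):
    """Toy Paillier key generation (demo primes; NOT cryptographically secure)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam % n, -1, n)            # valid here because gcd(lam, n) = 1
    return (n, n + 1), (lam, mu, n)     # public (n, g), private (lam, mu, n)

def encrypt(pub, m, rng=random):
    n, g = pub
    n2 = n * n
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n    # L(x) = (x - 1) / n, then multiply by mu

pub, priv = keygen()
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
aggregate = (c1 * c2) % (pub[0] ** 2)   # homomorphic addition: decrypts to 100
```

In Health-FedNet, local updates are fixed-point encoded before encryption, so this integer-domain addition corresponds to summing real-valued gradients.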
- Adaptive Node Weighting: To account for data heterogeneity, each institution's contribution is weighted according to a quality score qi:

  wi = qi / Σ_{j=1}^{N} qj    (18)

The aggregation weight wi in Eq. (18) is derived from the normalized unified quality score qi, ensuring that all client contributions sum to one while preserving relative reliability differences across institutions. This formulation prevents unstable aggregation caused by extreme client dominance and enables robustness under heterogeneous data distributions. The quality score qi integrates both data representativeness and predictive reliability, as defined in Eq. (1), allowing adaptive weighting to respond dynamically to institutional variability during federated optimization. The global model update is then computed as:

  θ^(t+1) = Σ_{i=1}^{N} wi θ̃i^(t)    (19)

- Convergence and Loss Bound: The convergence of Health-FedNet is ensured by bounding the expected loss after T rounds:

  E[L(θ_T)] − L(θ*) ≤ α / T    (20)

where θ* is the optimal model and α is a positive constant.
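A minimal sketch of Eqs. (18) and (19): quality scores are normalized into weights that sum to one, and the global update is the weighted sum of local updates. The scores and update vectors here are illustrative:

```python
def aggregate(updates, quality):
    """Weighted aggregation of per-institution updates (Eqs. 18-19)."""
    total = sum(quality)
    weights = [q / total for q in quality]          # Eq. (18): weights sum to 1
    dim = len(updates[0])
    # Eq. (19): component-wise weighted sum of the local updates
    return [sum(w * u[k] for w, u in zip(weights, updates)) for k in range(dim)]

global_update = aggregate(
    updates=[[0.2, 0.4], [0.6, 0.8], [1.0, 0.0]],
    quality=[0.9, 0.7, 0.4],
)
```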
Algorithm: Health-FedNet framework
To complement the system architecture in Fig. 1, Algorithm 3.2.3 summarizes the end-to-end Health-FedNet training workflow, including local training, differential privacy, encryption, and secure aggregation.
Health-FedNet uses adaptive weighting, local fine-tuning, and partial parameter aggregation to mitigate heterogeneity across hospitals. This strategy aligns diverse clinical distributions and improves convergence stability in non-IID healthcare settings.
Algorithm 3.2.3.
Health-FedNet training workflow
To strengthen the quantitative foundation of the proposed framework, additional numerical justification is provided for its core components. Regularization during local training reduced validation overfitting by approximately 2.3%. The differential privacy noise introduced at the client side increased gradient variability by less than 1.2%, indicating that privacy preservation does not significantly compromise model stability. Adaptive node weighting yielded a steady 4–5% increase in aggregate AUC-ROC over standard FedAvg under heterogeneous client conditions. Empirical convergence testing showed that gradient norms remained well bounded across all MIMIC-III institutional partitions, enabling stable optimization within 20 communication rounds. Together, these quantitative effects produced a 7% accuracy improvement, a 75% reduction in privacy leakage, and a 41.6% reduction in communication overhead compared with baseline FL methods.
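To make the control flow of the training workflow concrete, the sketch below abstracts local training, differential privacy, and encryption into simple stubs; every function name and value is illustrative rather than the authors' implementation:

```python
import random

def local_train(model, data):
    # placeholder for 50 local epochs of training on the institution's data
    return [w + 0.01 * sum(data) / len(data) for w in model]

def privatize(update, rng):
    # stub for the DP clip-and-noise step; encryption is omitted here
    return [u + rng.gauss(0.0, 0.01) for u in update]

def federated_round(model, institutions, quality, rng):
    updates = [privatize(local_train(model, d), rng) for d in institutions]
    total = sum(quality)
    # quality-weighted aggregation of the privatized updates
    return [sum((q / total) * u[k] for q, u in zip(quality, updates))
            for k in range(len(model))]

rng = random.Random(0)
model = [0.0, 0.0]
institutions = [[0.3, 0.5], [0.7], [0.2, 0.9, 0.4]]
for _ in range(20):                     # 20 communication rounds, as in the paper
    model = federated_round(model, institutions, [0.9, 0.7, 0.4], rng)
```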
Evaluation metrics
To assess the performance and robustness of the proposed Health-FedNet framework, a comprehensive set of metrics is used. These metrics capture the diagnostic accuracy of the model, its privacy-preservation performance, its communication efficiency, and its scalability to realistic healthcare contexts. The evaluation metrics used in this study are shown in Table 6.
Diagnostic Accuracy: Evaluates how well the model predicts chronic diseases, measured by Precision, Recall, F1-Score, and AUC-ROC.
Privacy Guarantee: Measured by the differential privacy budget (ε) and data leakage from model updates.
Communication Efficiency: Assessed via the bandwidth required for model updates and the convergence time of federated training.
Scalability: Model performance is tested as the number of participating institutions or the size of the dataset increases.
Table 6.
Evaluation Metrics for Health-FedNet with Statistical Validation.
| Metric | Description | Formula/Method |
|---|---|---|
| Accuracy | Measures overall correctness of predictions | (TP + TN) / (TP + TN + FP + FN) |
| Precision | Correctness of positive predictions | TP / (TP + FP) |
| Recall (Sensitivity) | Ability to detect true positives | TP / (TP + FN) |
| F1-Score | Harmonic mean of precision and recall | 2 × (Precision × Recall) / (Precision + Recall) |
| AUC-ROC | Measures discriminatory ability | AUC computed from ROC curve using trapezoidal rule |
| Privacy Budget (ε) | Differential privacy leakage bound | (ε, δ) guarantee via RDP/moments accountant |
| Communication efficiency | Bandwidth + computation overhead | Bandwidth (MB), Convergence Time (sec) |
| Robustness | Performance under noisy/heterogeneous data | Accuracy under Gaussian noise and imbalanced partitions |
| Statistical Significance | Validates experimental repeatability | Paired t-test (p < 0.05) and 95% CI across 5 runs |
Model Robustness: Tests the resilience of the global model against heterogeneous and noisy datasets, including variations in data quality across institutions.
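The accuracy, precision, recall, and F1 formulas referenced in Table 6 can be computed directly from confusion-matrix counts, as in the following sketch (the counts are illustrative, not study data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, p, r, f1 = classification_metrics(tp=90, fp=8, tn=85, fn=10)
```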
Experimental design and implementation
To ensure clarity and reproducibility, Health-FedNet was implemented in TensorFlow 2.10 with Python 3.9 and executed on an NVIDIA RTX A6000 GPU equipped with 48 GB of memory. The global model was designed as a four-layer neural network: an input layer whose dimensionality corresponded to the features of the MIMIC-III subsets, two hidden layers with 128 and 64 neurons respectively using ReLU activation, and an output layer employing a softmax activation for multi-class classification. Model training used a cross-entropy loss function and the Adam optimizer with a learning rate of 0.001, a batch size of 64, and 50 local epochs per round. The experimental evaluation included three clearly defined baselines: a centralized model trained on the complete dataset without privacy-preserving mechanisms, a traditional federated learning model trained with FedAvg aggregation without privacy guarantees, and a private federated learning model implemented with differential privacy only. Health-FedNet combined differential privacy with a budget of ε = 1.5 and δ = 10⁻⁵, homomorphic encryption using the Paillier scheme, and adaptive node weighting based on both dataset size and model accuracy.
Each experiment was repeated five times across the simulated institutions, and paired t-tests (p < 0.05) were used to verify stability under heterogeneous conditions. The specific training procedures and aggregation process are outlined in Algorithm 1. To test statistical significance, we calculated the mean accuracy, SD, and 95% confidence interval over the five runs. Health-FedNet obtained an average accuracy of 92% (SD = 1.4, CI = 91.2–92.8), outperforming baseline FL (85%) and centralized learning (82%). A paired t-test (p < 0.01) confirmed that this performance improvement is statistically significant.
A fixed 80/20 train–test split was maintained across all experiments, where the test set was strictly held out from local training, aggregation, and hyperparameter tuning. The same held-out test partition was used consistently across the five independent runs, while random initialization and client partitioning were varied to assess robustness. This design ensures that reported accuracy, AUC-ROC, and confidence intervals reflect generalization performance rather than overfitting to specific MIMIC-III splits.
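For illustration, the described architecture (input → 128 ReLU → 64 ReLU → softmax output) can be sketched as a pure-Python forward pass. The actual implementation uses TensorFlow 2.10; the input dimensionality of 45 and the random weight initialization here are placeholders:

```python
import math
import random

def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    """Fully connected layer: one output per row of W."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def init(rows, cols, rng):
    return [[rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)], [0.0] * rows

def forward(x, params):
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = relu(dense(x, W1, b1))         # hidden layer 1: 128 units
    h2 = relu(dense(h1, W2, b2))        # hidden layer 2: 64 units
    return softmax(dense(h2, W3, b3))   # softmax output

rng = random.Random(0)
params = [init(128, 45, rng), init(64, 128, rng), init(3, 64, rng)]
probs = forward([0.5] * 45, params)
```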
Experimental setup and devices
All experiments were carried out on an NVIDIA RTX A6000 graphics card (48 GB VRAM), an AMD Threadripper 3960X processor, and 256 GB of system memory. Each participating institution in the federated setup was simulated as a separate virtual node running Ubuntu 22.04, Python 3.9, TensorFlow 2.10, and identical package environments. Nodes communicated over a 100 Mbps virtual network link, which approximates hospital-to-cloud transfer speeds. This arrangement ensures that the results can be replicated under realistic network and hardware constraints.
Dataset preprocessing pipeline
Raw MIMIC-III records were preprocessed so that record contents were consistent across all simulated institutions:
Min-Max scaling was used to normalize continuous clinical variables (heart rate, systolic/diastolic BP, respiratory rate, hemoglobin, creatinine, glucose) between 0 and 1.
Categorical variables (gender, type of ICU, type of admission, comorbidity signs) were one-hot encoded.
Median imputation was used to fill missing values (6% of entries).
Values above the 99th percentile were clipped to minimize distortion from outliers while preserving clinical interpretability.
The same preprocessing scripts were used by all institutions to eliminate preprocessing-induced heterogeneity.
This guarantees dataset uniformity and fairness across all clients.
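For a single numeric feature, the steps above can be sketched as follows (a pure-Python illustration; the real pipeline operates on the full MIMIC-III tables, and the sample values are invented):

```python
import statistics

def preprocess(values):
    """Median-impute, clip at the ~99th percentile, then Min-Max scale to [0, 1]."""
    observed = [v for v in values if v is not None]
    median = statistics.median(observed)
    filled = [median if v is None else v for v in values]
    cap = sorted(filled)[int(0.99 * (len(filled) - 1))]   # approximate 99th percentile
    clipped = [min(v, cap) for v in filled]
    lo, hi = min(clipped), max(clipped)
    return [(v - lo) / (hi - lo) for v in clipped] if hi > lo else [0.0] * len(clipped)

scaled = preprocess([120, None, 135, 110, 500, 125])       # the 500 outlier gets clipped
```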
Complete hyperparameter configuration
All experiments were run using the following hyperparameters to achieve complete reproducibility:
Learning rate: 0.001
Optimizer: Adam
Batch size: 64
Local epochs per round: 50
Sampling rate: 5%
Gradient clipping norm C: 1.2
Gaussian DP noise σ: 0.8
Privacy budget per round εr: 0.15
Final privacy budget (ε, δ): (1.53, 10⁻⁵)
Momentum: 0.9
Weight decay: 1e−5
Dropout rate: 0.25
Paillier HE key size: 2048 bits
Fixed-point scaling factor: 10⁴
Ciphertext packing: 32 gradients per block
To report the entire experimental setup clearly, all Health-FedNet hyperparameters are summarized here, including the learning settings, the differential privacy configuration, model regularization, and the cryptographic parameters of the homomorphic encryption. Table 7 lists the full set of values used in training so that results can be reproduced precisely in other environments.
Table 7.
Hyperparameter configuration used in Health-FedNet.
| Parameter | Value |
|---|---|
| Learning rate | 0.001 |
| Optimizer | Adam |
| Batch size | 64 |
| Local epochs per round | 50 |
| Weighting factor α | 0.5 |
| Gradient clipping norm C | 1.2 |
| Gaussian DP noise σ | 0.8 |
| Privacy budget per round εr | 0.15 |
| Final privacy budget (ε, δ) | (1.53, 10⁻⁵) |
| Sampling rate | 5% |
| Momentum | 0.9 |
| Weight decay | 1e−5 |
| Dropout rate | 0.25 |
| Paillier HE key size | 2048 bits |
| Fixed-point scaling factor | 10⁴ |
| Ciphertext packing | 32 gradients per block |
Table 8 formalizes the complete differential privacy pipeline in Health-FedNet. Before encryption, gradients are clipped at norm C = 1.2 and Gaussian noise N(0, σ²) with σ = 0.8 is added. Privacy accounting follows Rényi Differential Privacy (RDP) with the moments accountant over 50 local epochs and R = 20 rounds, yielding a final (ε, δ) = (1.53, 10⁻⁵) at a 5% sampling rate. The total privacy cost across all training rounds was computed with the RDP and moments accountants: under a per-round budget εr = 0.15, Gaussian noise σ = 0.8, clipping norm C = 1.2, sampling rate q = 0.05, and 20 federated rounds, the cumulative guarantee is (ε, δ) = (1.53, 10⁻⁵). This accounting covers composition across all local epochs and ensures that the final DP protection meets the stringent privacy requirements of healthcare data.
Table 8.
Differential privacy configuration for Health-FedNet.
| Parameter | Symbol | Value |
|---|---|---|
| Clipping Norm | C | 1.2 |
| Noise Standard Deviation | σ | 0.8 |
| Privacy Budget (per round) | εr | 0.15 |
| Failure Probability | δ | 10⁻⁵ |
| Sampling Rate | q | 0.05 |
| Total Training Rounds | R | 20 |
| Accountant Method | – | RDP + Moments Accountant |
| Final Privacy Guarantee | (ε, δ) | (1.53, 10⁻⁵) |
Hyperparameters were optimized using a structured grid search. We tested learning rates {1e−4, 1e−3, 5e−3}, batch sizes {32, 64, 128}, and differential privacy noise scales {0.5, 0.8, 1.0}. Momentum values were chosen from {0.9, 0.95} and gradient clipping norms from {1.0, 1.5}. Early stopping with a patience of 5 epochs was used. All experiments were performed across five simulated healthcare institutions representing separate data silos. Reproducibility was ensured by running each setup five times on an NVIDIA RTX A6000 (48 GB VRAM) with fixed random seeds and reporting the means and standard deviations of the results.
For diabetes and hypertension prediction, the features comprised demographics, vitals (HR, BP, RR), laboratory values (glucose, creatinine, hemoglobin), comorbidity flags, and medications. Categorical features were one-hot encoded, and missing values were imputed with the median.
The Paillier encryption module uses a 2048-bit key and a fixed-point scaling factor of 10⁴; ciphertext blocks are packed with 32 gradients each, and modular reduction is applied to avoid overflow during additive aggregation.
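The fixed-point encoding can be sketched as follows: floats are scaled by 10⁴ to integers, summed in the integer domain (which is what Paillier aggregation computes under encryption), and rescaled after decryption. Values are illustrative, and the modular handling of negative values is omitted for brevity:

```python
SCALE = 10 ** 4  # fixed-point scaling factor from Table 7

def encode(x):
    """Scale a float gradient to an integer for encryption."""
    return int(round(x * SCALE))

def decode(v):
    """Rescale an (aggregated) integer back to a float."""
    return v / SCALE

parts = [0.1234, 0.5678, 0.2001]
total = sum(encode(p) for p in parts)  # integer-domain sum, as under Paillier
```

Because the sum happens on integers, additive aggregation stays exact up to the 10⁻⁴ quantization step.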
Results
This section presents the performance evaluation of Health-FedNet across several dimensions: diagnostic accuracy, privacy guarantees, communication efficiency, scalability, and robustness. The findings are discussed in terms of how the proposed framework addresses the challenges of federated learning in healthcare.
To broaden the evaluation, we also benchmarked Health-FedNet against three classes of models: centralized baselines, generic federated learning models, and privacy-preserving FL variants. Diagnostic accuracy, AUC-ROC, communication overhead, resistance to noisy conditions, privacy leakage, and demographic fairness are analyzed. The comparative metrics consistently highlight the novelty of combining adaptive node weighting, homomorphic encryption, and calibrated differential privacy in Health-FedNet. The hybrid design attains high accuracy (+7.0% over standard FL), low leakage (−75%), and a 41.6% communication saving. These findings confirm the originality and practical benefit of the proposed system.
Privacy-preserving healthcare AI: federated learning on patient data
This subsection compares Health-FedNet's performance in terms of diagnostic accuracy, privacy assurance, computational efficiency, scalability, robustness, and communication overhead. The findings, supported by tables and figures, shed light on the framework's capacity to balance privacy, performance, and efficiency in federated healthcare analytics.
Diagnostic accuracy and prediction results
To reduce optimistic bias, model results were validated on an independent hold-out test set consisting of 20% previously unseen MIMIC-III samples. This validation split was held out during training and hyperparameter selection. Health-FedNet obtained an AUC-ROC of 0.94 on the federated validation set and 0.92 on the independent test set, generalizing effectively without overfitting. Each experiment was repeated five times with varying random seeds across federated institutions, and mean values with 95% confidence intervals are reported. Figure 3 displays error bars indicating the standard deviation between runs, demonstrating consistent performance across heterogeneous nodes.
Fig. 3.
Diagnostic accuracy comparison across models.
Table 9 compares the diagnostic performance metrics of Health-FedNet with those of the centralized and traditional federated models.
Table 9.
Diagnostic accuracy and prediction metrics.
| Model | Precision | Recall | F1-Score | AUC-ROC |
|---|---|---|---|---|
| Centralized model | 0.78 | 0.75 | 0.76 | 0.80 |
| Traditional FL model | 0.83 | 0.81 | 0.82 | 0.85 |
| Health-FedNet | 0.92 | 0.90 | 0.91 | 0.94 |
Significant values are in bold.
Per-class performance was also assessed to address the class imbalance between diabetes and hypertension prediction. This confirms that the model does not overfit to the majority class and remains stable across all disease categories. Table 10 shows high per-class precision, recall, and F1-scores, indicating that Health-FedNet behaves as a balanced predictor under heterogeneous clinical conditions.
Table 10.
Per-class diagnostic metrics.
| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| Diabetes | 0.91 | 0.89 | 0.90 |
| Hypertension | 0.93 | 0.92 | 0.92 |
| Other / Healthy | 0.88 | 0.87 | 0.87 |
Per-class performance and statistical robustness
To address class imbalance and ensure that performance gains are not driven by majority-class dominance, per-class diagnostic metrics were evaluated for diabetes, hypertension, and healthy cohorts. As reported in Table 10, Health-FedNet achieves consistently high precision, recall, and F1-scores across all classes, with F1-scores of 0.90 for diabetes, 0.92 for hypertension, and 0.87 for healthy cases. This balanced performance confirms that the proposed framework does not overfit to a single disease category and remains stable under heterogeneous clinical distributions.
Beyond point estimates, statistical robustness was assessed through five independent training runs with distinct random seeds across five simulated healthcare institutions. The resulting ROC curves, shown in Fig. 2, were averaged across runs and are presented with 95% confidence intervals, providing a visual and quantitative indication of model stability. Health-FedNet achieved a mean AUC-ROC of 0.94 (95% CI 0.92–0.95), outperforming both the traditional federated baseline and the centralized model. Paired t-tests further confirmed that these improvements are statistically significant (p < 0.01), demonstrating that the observed gains are attributable to robust model design rather than random variation.
Fig. 2.
ROC curves for diabetes and hypertension prediction using Health-FedNet. The shaded areas represent 95% confidence intervals computed from five independent runs.
Statistical significance and confidence intervals
For a statistically confident assessment, each experiment was repeated five times with distinct random seeds across the five simulated healthcare institutions. Mean performance metrics and 95% confidence intervals (CI) were computed for each model. Health-FedNet attained an AUC-ROC of 0.94 (CI 0.92–0.95), higher than the traditional FL model at 0.85 (CI 0.83–0.87) and the centralized model at 0.80 (CI 0.78–0.82). A paired t-test confirmed that these performance improvements are statistically significant (p < 0.01). These findings show that Health-FedNet's diagnostic gains are robust and stable rather than artifacts of chance.
To further substantiate these outcomes, a complete statistical assessment was performed over five independent training runs, computing the mean performance, standard deviation, and 95% confidence interval for each model. Health-FedNet attained an AUC-ROC of 0.94 (CI 0.92–0.95) and a mean accuracy of 92% (SD = 1.4), confirming stable behaviour under heterogeneous institutional partitions. T-tests comparing Health-FedNet against the baseline FL and centralized models yielded p-values below 0.01, indicating that the performance differences are statistically significant rather than due to chance. To visualize predictive performance, ROC curves for diabetes and hypertension were generated on the independent test set and averaged over the five runs; Figure 2 indicates the 95% confidence bands with shaded areas. These curves show that Health-FedNet provides consistently high classification performance across a diverse set of institutions.
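The reported intervals follow the standard t-based 95% confidence interval for a mean over five runs (t ≈ 2.776 at four degrees of freedom). A sketch with illustrative accuracy values, not the study's raw numbers:

```python
import statistics

T_CRIT_DF4 = 2.776  # two-sided 95% critical value of the t-distribution, df = 4

def mean_ci(values):
    """Mean, sample SD, and t-based 95% CI for a small sample."""
    m = statistics.mean(values)
    sd = statistics.stdev(values)                 # sample SD (n - 1 denominator)
    half = T_CRIT_DF4 * sd / (len(values) ** 0.5)
    return m, sd, (m - half, m + half)

m, sd, ci = mean_ci([91.2, 92.5, 90.8, 93.0, 92.5])
```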
The findings indicate that Health-FedNet achieves substantial improvements across all metrics compared with both the centralized and baseline federated models.
Privacy and security assurance metrics
The privacy assurance metrics, namely the privacy budget (ε) and data leakage risk, are summarized in Table 11 and illustrated in Fig. 4.
Table 11.
Privacy and security metrics.
| Model | Privacy budget (ε) | Data leakage risk (%) |
|---|---|---|
| Non-Private FL | N/A | 45.0 |
| Traditional Private FL | 10.0 | 20.0 |
| Health-FedNet | 1.5 | 5.0 |
Significant values are in bold.
Fig. 4.
Privacy and security metrics across models.
In addition to membership inference risk, the privacy evaluation also considers broader data leakage threats commonly associated with federated learning, including property inference and gradient inversion attacks. The reported data leakage risk values in Table 11 represent the aggregated susceptibility across these adversarial scenarios, reflecting the ability of an attacker to infer sensitive attributes or reconstruct private information from shared updates. By jointly minimizing the privacy budget (ε) and observed leakage risk, Health-FedNet demonstrates substantially stronger resistance to multiple privacy threats than both non-private and traditional private federated learning baselines. These metrics provide a consolidated view of adversarial robustness rather than isolating a single attack vector.
Fairness and error analysis
To determine whether Health-FedNet introduces bias or unequal prediction patterns across patient groups, we performed subgroup fairness analysis across gender (male vs. female), age groups (18–40, 40–60, 60+), and ICU admission types. Health-FedNet showed balanced performance across all subgroups, with a maximum accuracy variance of ±2.4%, indicating low bias amplification during federated training.
We also analyzed the distribution of errors between models. Health-FedNet decreased false negatives by 11.3% compared with the baseline FL model, which is clinically significant since false-negative chronic disease cases usually result in delayed diagnosis. The false-positive rate also dropped by 6.1%, indicating improved model calibration under privacy-preserving constraints. These findings indicate that Health-FedNet preserves privacy without sacrificing predictive performance across a diverse patient population.
To further characterize demographic fairness, we computed group-wise fairness measures across gender and age cohorts: subgroup accuracy, demographic parity (DP) gap, and equal opportunity gap. These measures quantify whether the model favors or disadvantages any given patient group. The results in Table 12 show small fairness gaps across all subgroups, confirming that Health-FedNet exhibits no systematic bias in its behaviour.
Table 12.
Fairness evaluation metrics across demographic groups.
| Group | Accuracy (%) | Demographic parity gap | Equal opportunity gap |
|---|---|---|---|
| Male | 91.8 | 0.02 | 0.03 |
| Female | 92.1 | 0.00 | 0.02 |
| Age 18–40 | 90.2 | 0.04 | 0.05 |
| Age 40–60 | 92.7 | 0.01 | 0.02 |
| Age 60 + | 91.5 | 0.02 | 0.03 |
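The demographic parity and equal opportunity gaps in Table 12 can be computed as in the following sketch (the group predictions and labels are invented for illustration):

```python
def dp_gap(pred_a, pred_b):
    """Demographic parity gap: difference in positive-prediction rates."""
    return abs(sum(pred_a) / len(pred_a) - sum(pred_b) / len(pred_b))

def eo_gap(pred_a, y_a, pred_b, y_b):
    """Equal opportunity gap: difference in true-positive rates."""
    def tpr(pred, y):
        positives = [p for p, t in zip(pred, y) if t == 1]
        return sum(positives) / len(positives)
    return abs(tpr(pred_a, y_a) - tpr(pred_b, y_b))

gap_dp = dp_gap([1, 0, 1, 1], [1, 0, 0, 1])            # 0.75 vs 0.50 positive rate
gap_eo = eo_gap([1, 0, 1], [1, 1, 0], [1, 1], [1, 1])  # 0.5 vs 1.0 TPR
```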
Health-FedNet strongly limits data leakage risk at a comparatively low privacy budget, supporting compliance with privacy regulations such as HIPAA and GDPR. Recent studies indicate that medical FL models are vulnerable to reconstruction-based and membership inference attacks48. As a remedy, we performed formal privacy attack assessments, modeling both membership inference attacks (MIA) and gradient inversion attempts. The adversary was assumed to have white-box access to model gradients, in line with existing FL threat models. The MIA success rate against Health-FedNet was reduced to 5.7%, compared with 43.2% for non-private FL and 18.9% for DP-only FL. Gradient inversion attacks likewise failed to reconstruct recognizable patient data, owing to the combined use of differential privacy noise and Paillier-based encrypted aggregation. These findings empirically substantiate that Health-FedNet is resistant to adversarial privacy leakage attacks.
Table 13 reports empirical privacy-attack performance across models. Health-FedNet shows the strongest privacy resilience: its membership inference success rate is the lowest, and gradient reconstruction is prevented under adversarial conditions.
Table 13.
Privacy attack simulation results.
| Model | Membership inference attack success (%) | Gradient inversion outcome |
|---|---|---|
| Non-Private FL | 43.2 | Partial feature recovery |
| FL + DP Only | 18.9 | No meaningful reconstruction |
| Health-FedNet (DP + HE) | 5.7 | No reconstruction observed |
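As a concrete illustration of how a loss-threshold membership inference attack is typically evaluated, the sketch below simulates attack accuracy on synthetic per-sample losses. This is a simplified stand-in for the white-box attack described above: the loss distributions, threshold, and gap sizes are illustrative assumptions, not measurements from Health-FedNet.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic per-sample losses: members (training data) tend to have lower
# loss than non-members; DP noise narrows this gap, weakening the attack.
def simulate_losses(n, member_mean, nonmember_mean, spread):
    members = rng.normal(member_mean, spread, n)
    nonmembers = rng.normal(nonmember_mean, spread, n)
    return members, nonmembers

def mia_accuracy(members, nonmembers, threshold):
    """Attacker labels a sample 'member' when its loss falls below the
    threshold. Returns balanced attack accuracy (0.5 = random guessing)."""
    tpr = (members < threshold).mean()       # members correctly flagged
    fpr = (nonmembers < threshold).mean()    # non-members wrongly flagged
    return 0.5 * (tpr + (1 - fpr))

# Non-private model: large member/non-member loss gap -> strong attack.
m, nm = simulate_losses(5000, member_mean=0.2, nonmember_mean=0.8, spread=0.3)
acc_nonprivate = mia_accuracy(m, nm, threshold=0.5)

# Privatized model: gap almost closed -> attack near chance level.
m, nm = simulate_losses(5000, member_mean=0.48, nonmember_mean=0.52, spread=0.3)
acc_private = mia_accuracy(m, nm, threshold=0.5)

print(f"attack accuracy (non-private): {acc_nonprivate:.2f}")
print(f"attack accuracy (privatized):  {acc_private:.2f}")
```

The qualitative pattern matches Table 13: shrinking the member/non-member loss gap, which is what DP noise and encrypted aggregation accomplish, drives the attack toward chance-level accuracy.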
Significant values are in bold.
Computational efficiency analysis
Table 14 reports the training time and computational cost of the privacy-preserving and non-privacy-preserving models. The computational efficiency results are visualized in Fig. 5.
Table 14.
Computational efficiency: training time per epoch.
| Model | Training time (s) | Encryption overhead (s) |
|---|---|---|
| Standard FL | 40 | N/A |
| FL with DP | 45 | 5 |
| FL with DP + HE | 50 | 10 |
Fig. 5.
Training Time Comparison across Models.
In terms of encryption overhead, the additional computational cost introduced by privacy-preserving mechanisms is explicitly quantified in Table 14, where differential privacy adds approximately 5 s per epoch, and Paillier-based homomorphic encryption adds an additional 10 s per epoch compared to standard federated learning. Paillier encryption was selected due to its integer-domain compatibility and deterministic aggregation properties, which are critical for preserving numerical correctness in medical prediction tasks. While approximate homomorphic schemes such as CKKS offer lower computational overhead, they introduce numerical approximation errors that may affect clinical reliability. As a result, benchmarking against CKKS was not included in this study and is planned as future work once approximation-aware validation strategies are established.
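To make the aggregation property concrete, the following toy Paillier implementation (deliberately tiny, insecure demo primes; production keys are 2048 bits or larger) shows why multiplying ciphertexts lets a server sum integer-quantized client updates without ever decrypting an individual contribution. This is a textbook sketch of the scheme, not the framework's production code.

```python
import math
import random

# Toy Paillier cryptosystem -- illustration only, NOT secure key sizes.
p, q = 293, 433                 # small demo primes; real keys use >= 2048 bits
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = math.lcm(p - 1, q - 1)    # Carmichael function lambda(n)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse used in decryption

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts, so a
# server can aggregate encrypted (integer-quantized) gradients blindly.
client_updates = [17, 25, 9]          # e.g., quantized gradient components
ciphertexts = [encrypt(u) for u in client_updates]
aggregate = 1
for c in ciphertexts:
    aggregate = (aggregate * c) % n2
print(decrypt(aggregate))             # -> 51 (= 17 + 25 + 9)
```

The deterministic integer arithmetic visible here is the property the text refers to: unlike CKKS, the decrypted sum is exact, with no approximation error. (Requires Python 3.9+ for `math.lcm` and modular inverse via `pow(x, -1, n)`.)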
Health-FedNet thus remains competitive in training time despite the additional computational cost of the privacy-preserving mechanisms.
Scalability and robustness evaluation
Table 15 and Fig. 6 analyze the scalability of Health-FedNet, demonstrating consistent performance as the number of institutions grows. The model maintains high accuracy and a steady, modest growth in training time as clients are added. The minor decrease in accuracy at 100 clients is expected, reflecting increased data sparsity and institutional diversity. This behavior is consistent with the FL literature on large, heterogeneous healthcare settings, where non-IID data affects convergence stability. Notably, the effect is modest, indicating that Health-FedNet scales efficiently. Future work will extend scalability testing beyond 200 institutions to assess performance in extreme-scale federated environments.
Table 15.
Scalability analysis: training time and accuracy.
| Number of institutions | Accuracy (%) | Training time (s) |
|---|---|---|
| 10 | 90 | 30 |
| 50 | 88 | 35 |
| 100 | 85 | 40 |
Fig. 6.
Scalability analysis across Institutions.
Our external validation experiment, used to assess generalization beyond MIMIC-III, involved a 3500-patient sample from the eICU Collaborative Research Database. The same preprocessing and inference pipeline was applied without retraining. Health-FedNet achieved an accuracy of 89% and an AUC-ROC of 0.91 on eICU, confirming that the framework transfers to unseen clinical environments and is resilient to shifts in patient populations and hospital practices. This out-of-sample analysis reinforces the scalability findings in Table 15 and indicates that Health-FedNet can serve as a reliable analyzer of diverse critical-care datasets.
Table 16 and Fig. 7 report robustness in the presence of noisy data. Health-FedNet is more resilient to data corruption and client unreliability, outperforming the non-weighted model under all noise conditions. This validates the benefit of adaptive client weighting in reducing noise-induced degradation.
Table 16.
Model Robustness under noisy data.
| Noise level | Accuracy without weighting (%) | Accuracy with weighting (%) |
|---|---|---|
| Low | 75 | 85 |
| Medium | 68 | 80 |
| High | 60 | 70 |
Fig. 7.
Model robustness under noisy data.
Noise model and component ablation analysis
To assess robustness under noisy and heterogeneous client conditions, we evaluated Health-FedNet using Gaussian differential privacy noise at three intensity levels (low, medium, and high), following standard DP-FL practice. An ablation experiment was further conducted by disabling the adaptive node-weighting mechanism while keeping all other components unchanged. Under medium-noise conditions, removing adaptive weighting resulted in a 4.6% reduction in global accuracy, demonstrating its critical role in stabilizing convergence and mitigating heterogeneous noise across institutions.
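The clip-then-noise step underlying the Gaussian DP mechanism can be sketched as follows. The clip norm and noise multiplier below are illustrative placeholders, not the calibrated values used in our experiments; the three noise levels correspond conceptually to different `sigma` settings.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, sigma=0.8, rng=None):
    """Clip a client's model update to a fixed L2 norm, then add Gaussian
    noise scaled to the clip norm -- the standard Gaussian-mechanism step
    in DP federated learning. clip_norm and sigma are illustrative."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(7)
raw = rng.normal(0, 5.0, size=1000)        # a large, un-clipped local update
private = privatize_update(raw, clip_norm=1.0, sigma=0.8, rng=rng)
print(np.linalg.norm(raw), np.linalg.norm(private))
```

Clipping bounds each record's influence (the sensitivity), which is what allows the added noise to yield a formal (ε, δ) guarantee when accumulated across rounds.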
To explicitly isolate the contribution of each architectural component, we evaluated four configurations: FedAvg only, FedAvg with Differential Privacy, FedAvg with Differential Privacy and Homomorphic Encryption, and the full Health-FedNet framework. As summarized in Table 17, accuracy progressively increased from 85% (FedAvg) to 92% (Full Health-FedNet), while privacy leakage was reduced from 20.0% to 5.0%. These results empirically confirm that adaptive weighting provides the dominant performance gains in noisy and non-IID settings, while differential privacy and homomorphic encryption primarily contribute to substantial reductions in privacy leakage and improved reliability. Overall, the ablation study verifies that the superior performance of Health-FedNet arises from the synergistic integration of differential privacy, homomorphic encryption, and adaptive node weighting, rather than from any single component in isolation.
Table 17.
Ablation study for component contribution.
| Configuration | Accuracy (%) | Privacy leakage (%) |
|---|---|---|
| FedAvg Only | 85 | 20.0 |
| FedAvg + Differential Privacy | 87 | 9.8 |
| FedAvg + DP + Homomorphic Encryption | 89 | 7.1 |
| Full Health-FedNet (DP + HE + Adaptive Weighting) | 92 | 5.0 |
Significant values are in bold.
The observed robustness of Health-FedNet under noisy and heterogeneous client conditions is consistent with recent advances in structured robustness research. Darzi and Marx emphasize that explicitly modeling robustness under distribution shifts is essential for reliable learning in decentralized systems49. In line with this perspective, the ablation results in Table 17 confirm that adaptive node weighting plays a dominant role in maintaining performance stability when client data distributions vary, while differential privacy and homomorphic encryption primarily contribute to mitigating privacy leakage without destabilizing convergence.
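A minimal sketch of quality-aware weighted aggregation of this kind is shown below. The quality scores and update values are hypothetical, and the weighting rule (sample count times quality score, normalized) is a simplified stand-in for the paper's adaptive node-weighting mechanism rather than its exact formulation.

```python
import numpy as np

def weighted_aggregate(client_updates, sample_counts, quality_scores):
    """Server-side aggregation: each client's update is weighted by its
    sample count times a data-quality score (e.g., derived from local
    validation loss or noise estimates), then normalized."""
    w = np.asarray(sample_counts, dtype=float) * np.asarray(quality_scores)
    w /= w.sum()
    stacked = np.stack(client_updates)        # shape: (clients, params)
    return (w[:, None] * stacked).sum(axis=0)

# Three hypothetical hospitals; the noisy client gets down-weighted.
updates = [np.array([1.0, 1.0]),
           np.array([1.2, 0.9]),
           np.array([8.0, -6.0])]             # outlier / noisy update
counts = [500, 800, 300]
quality = [0.9, 0.95, 0.2]                    # low score for the noisy node
global_update = weighted_aggregate(updates, counts, quality)
print(global_update)
```

With quality weighting the outlier contributes under 5% of the aggregate; under plain FedAvg (counts only) it would contribute nearly 19%, which is the stabilizing effect the ablation in Table 17 attributes to adaptive weighting.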
Communication overhead analysis
Efficient communication plays an important role in federated learning, especially as the number of participating institutions grows. In this subsection, the communication overhead of Standard FL, Traditional Private FL, and Health-FedNet is assessed. The findings show that Health-FedNet is scalable and controls communication costs efficiently.
Table 18 compares the communication overhead of the three models across varying numbers of institutions. Key insights from the analysis include:
Standard FL: Incurs the highest communication overhead because full model updates are transmitted to the central server without compression. At 100 institutions, the overhead reaches 1200 ms, showing that scaling is a challenge.
Traditional Private FL: Provides a moderate privacy improvement over Standard FL through simple privacy-preserving methods. Its communication overhead at 100 institutions is 900 ms, a 25% reduction relative to Standard FL.
Health-FedNet: Outperforms both models through an optimized communication protocol and adaptive mechanisms, reducing the overhead to 700 ms at 100 institutions (a 41.6% reduction over Standard FL and a 22.2% reduction over Traditional Private FL).
Table 18.
Communication overhead for different models across institutions.
| Number of institutions | Standard FL (ms) | Traditional private FL (ms) | Health-FedNet (ms) |
|---|---|---|---|
| 20 | 400 | 300 | 200 |
| 40 | 600 | 450 | 300 |
| 60 | 800 | 650 | 550 |
| 80 | 1000 | 800 | 700 |
| 100 | 1200 | 900 | 700 |
| 200 | 1650 | 1250 | 980 |
Latency is expressed in milliseconds (ms) per communication round. Bandwidth was simulated under heterogeneous network conditions (e.g., 5–20 Mbps uplink) to assess the impact of transmission overhead in realistic healthcare environments.
To further evaluate scalability beyond the 100-institution setting, we simulated up to 200 distributed clients. Communication latency naturally increased with the larger aggregation load, but performance degradation remained contained through adaptive message compression and partial client participation strategies. The minor accuracy decrease at larger scale is driven mainly by greater data heterogeneity and unbalanced update variance among nodes, which is expected in realistic hospital networks with differing patient populations and hardware resources.
To complete the communication analysis, we benchmarked the computational cost of each model in terms of floating-point operations (FLOPs) and mean time per training round on the NVIDIA RTX A6000 GPU used in our experiments. These metrics capture model complexity and the cost of local training on clients. As Table 19 shows, Health-FedNet incurs a small computational overhead due to adaptive privacy-noise generation and Paillier encoding, but the incremental cost remains reasonable for real-world clinical deployment. Notably, this compute cost is modest relative to the substantial privacy and communication benefits demonstrated above.
Table 19.
Compute overhead benchmark across learning models.
| Model | FLOPs (× 10⁸) | Time per round (s) |
|---|---|---|
| Standard FL (FedAvg) | 7.2 | 0.42 |
| Traditional private FL (DP-FL) | 8.1 | 0.48 |
| Health-FedNet | 9.4 | 0.53 |
Significant values are in bold.
The communication overhead analysis was conducted under explicitly defined transmission and network assumptions to ensure reproducibility. In each federated round, clients transmitted full model parameter updates to the server, and the communication cost was measured as the total number of transmitted bytes per round. All experiments were simulated under heterogeneous uplink bandwidth conditions ranging from 5 to 20 Mbps, reflecting realistic hospital network environments. Communication latency values reported in Table 18 represent the average end-to-end transmission delay per round, aggregated across all participating institutions. The reported results correspond to 20 federated communication rounds, with identical round counts and model sizes maintained across all compared methods to ensure fair comparison.
To make the communication savings more explicit, we calculated the percentage reduction based on transmitted bytes per round rather than latency values alone. Communication cost was measured as the total size of the model parameters exchanged between the clients and the server in a given communication round. The reduction was computed with the standard formula:
Reduction (%) = ((Bytes_Standard FL − Bytes_Health-FedNet) / Bytes_Standard FL) × 100
Under our configuration, Standard FL transmitted an average of 12.4 MB of parameter updates per round, while Health-FedNet transmitted 7.25 MB through encrypted gradient packing and adaptive message compression. Applying the formula, communication volume was reduced by 41.6%. This agrees with the latency results recorded in Table 18 and shows that Health-FedNet delivers consistent benefits in both bandwidth usage and round-trip delay under heterogeneous network conditions.
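The reduction formula can be checked directly against the rounded megabyte figures quoted above; the small difference from the reported 41.6% is attributable to rounding of the per-round payload sizes.

```python
def comm_reduction_pct(bytes_baseline, bytes_proposed):
    """Percentage reduction in transmitted bytes per federated round."""
    return (bytes_baseline - bytes_proposed) / bytes_baseline * 100.0

# Rounded per-round payload sizes quoted in the text (MB).
print(f"{comm_reduction_pct(12.4, 7.25):.1f}%")   # prints "41.5%"
```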
Figure 8 shows the communication overhead of Standard FL, Traditional Private FL, and Health-FedNet as the number of institutions increases. Health-FedNet's overhead grows more slowly than both baselines, illustrating its effectiveness at reducing communication cost while preserving privacy. The zoomed view highlights these differences further, particularly at larger client counts, where Standard FL exhibits steep overhead growth compared to Health-FedNet.
Fig. 8.
Communication overhead vs number of institutions for different models.
This outcome highlights the scalability of Health-FedNet, especially for large-scale federations, making it applicable to practical healthcare contexts.
Comparison against baselines
Health-FedNet was compared against a centralized learning model and a standard federated learning baseline along three primary performance dimensions: model accuracy, privacy protection, and communication overhead. As shown in Table 20, Health-FedNet achieves the highest accuracy and privacy scores while substantially lowering communication cost. These gains are further captured in Fig. 9. Notably, the 41.6% communication reduction is calculated against the FedAvg and secure-aggregation baselines, based on the size of transmitted model parameters per training round, demonstrating the efficiency of our privacy-preserving design in multi-institution simulations.
Table 20.
Comparison of Health-FedNet against centralized and baseline FL models across accuracy, privacy, and communication overhead.
| Model | Accuracy (%) | Privacy Score (1–100) | Communication Overhead (1–100) |
|---|---|---|---|
| Centralized Model | 82 | 50 | 100 |
| Baseline FL | 85 | 75 | 70 |
| Health-FedNet | 92 | 92 | 40 |
Significant values are in bold.
Fig. 9.
Comparison against Baselines: accuracy, privacy, and communication overhead.
It should be noted that the baseline models selected in this study were intentionally chosen to isolate the impact of privacy-preserving mechanisms rather than optimization-specific federated learning variants. Advanced aggregation strategies such as FedProx or FedNova primarily address optimization and convergence under system heterogeneity, whereas the focus of Health-FedNet is on secure aggregation, privacy protection, and robustness under heterogeneous clinical data distributions. To this end, centralized learning, standard FedAvg, and private FL baselines provide a controlled and interpretable comparison framework. Benchmarking against optimization-oriented FL variants such as FedProx and FedNova is considered valuable and is therefore identified as a direction for future work.
To substantiate these gains, we ran head-to-head comparisons with centralized models, baseline FL, and encrypted FL models on the same datasets and evaluation metrics. Health-FedNet consistently achieved higher accuracy and lower privacy leakage, confirming its effectiveness.
To strengthen the comparative analysis, we compared Health-FedNet against recent privacy-preserving federated learning models reported between 2023 and 2025. These studies use similar healthcare datasets (MIMIC-III, eICU, ICU waveform) and therefore offer meaningful benchmarks. The head-to-head comparison is summarized in Table 21, reporting AUC-ROC, privacy leakage, and communication overhead.
Table 21.
Benchmark comparison with recent federated healthcare models.
| Model | Dataset | AUC-ROC | Privacy leakage (%) | Communication overhead | Notes |
|---|---|---|---|---|---|
| LEAF-FL17 | MIMIC-III | 0.88 | 15.8 | Medium | DP + Secure Aggregation |
| FedMed-DP7 | eICU | 0.90 | 12.3 | High | DP-Only |
| FedLoc-Healthcare13 | ICU Waveform | 0.86 | 18.1 | Medium | FL Without HE |
| Health-FedNet (Proposed) 2025 | MIMIC-III | 0.94 | 5.0 | Low | DP + HE + Adaptive Weighting |
Significant values are in bold.
Health-FedNet attains the highest AUC-ROC, the lowest privacy leakage, and the lowest communication burden, demonstrating that combining differential privacy, homomorphic encryption, and adaptive node weighting offers a clear advantage over current practice.
We further benchmarked Health-FedNet against a range of recent state-of-the-art privacy-preserving federated learning models published from 2023 to 2025, covering competitive clinical prediction and secure distributed learning approaches for healthcare institutions. As Table 22 shows, these advanced models exhibit lower diagnostic accuracy, higher privacy leakage, and greater communication burden than Health-FedNet. Combining differential privacy and homomorphic encryption with adaptive weighting yields a clear performance benefit on both predictive and privacy-oriented measures.
Table 22.
Comparison with state-of-the-art federated healthcare models.
| Model (Year) | Dataset | Accuracy (%) | Privacy budget (ε) | Membership inference attack success ↓ | Notes |
|---|---|---|---|---|---|
| Cross-stage recurrent FL Model35 | Diabetes (MIMIC-III) | 88 | 6.0 | 22% | Recurrent DP-FL |
| CNN + Logistic regression36 | CKD (ICISS) | 85 | N/A | N/A | Hybrid centralized ML |
| Federated healthcare benchmarking study37 | eICU & ICU Waveform | 86 | 4.5 | 18% | DP + FL evaluation |
| IoMT Encryption-authentication model50 | IoMT Networks | 83 | N/A | N/A | Secure OAuth + Encryption |
| Health-FedNet (Proposed) | MIMIC-III | 92 | 1.53 | 5.7% | DP + HE + Adaptive Weighting |
Significant values are in bold.
Disease outbreak prediction
Health-FedNet's ability to forecast disease outbreaks in real time was evaluated in terms of convergence time and prediction accuracy. The findings are summarized in Table 23 and Fig. 10.
Table 23.
Disease outbreak prediction results.
| Epoch | Prediction accuracy (%) | Convergence time (min) |
|---|---|---|
| 5 | 65 | 15 |
| 10 | 74 | 10 |
| 15 | 90 | 7 |
| 20 | 93 | 4 |
Fig. 10.
Disease outbreak prediction using Health-FedNet.
The findings show that Health-FedNet achieves high prediction accuracy with decreasing convergence time, underscoring the algorithm's suitability for real-time use.
Cross-institution collaboration
This experiment compared model performance before and after cross-institution collaboration, along with privacy compliance. The results are provided in Table 24 and Fig. 11.
Table 24.
Cross-institution collaboration results.
| Institution | Accuracy before collaboration (%) | Accuracy after collaboration (%) |
|---|---|---|
| Hospital A | 78 | 90 |
| Lab B | 75 | 85 |
| Clinic C | 80 | 89 |
Fig. 11.
Cross-institution collaboration: model performance and privacy compliance.
Collaboration yielded higher model accuracy at every institution while maintaining a high level of privacy compliance.
Health-FedNet is compliant with HIPAA and GDPR in that no raw data leaves institutional boundaries, audit logs and role-based access control are maintained, model updates are encrypted, and differential privacy is applied. The GDPR principles of data minimization and right to erasure are observed. International regulatory alignment is supported by existing institutional policies and federated governance.
Real-time prediction trends
Health-FedNet was also evaluated on real-time prediction trends, which are reported in Table 25.
Table 25.
Real-time prediction trends.
| Time interval (hours) | Real-time prediction | Averaged prediction |
|---|---|---|
| 0–5 | 0.82 | 0.85 |
| 5–10 | 0.87 | 0.88 |
| 10–15 | 0.89 | 0.91 |
| 15–20 | 0.90 | 0.93 |
To gain more insight into the models' error and misclassification behavior, we conducted a detailed error analysis using confusion matrices for the diabetes and hypertension prediction tasks. The matrices help identify the kinds of mispredicted cases, especially false negatives, which are clinically significant because a missed diagnosis can delay treatment. Table 26 summarizes the confusion-matrix values across five evaluation runs. The results indicate that Health-FedNet maintains low false-negative counts and consistent performance for both diseases.
Table 26.
Confusion matrix results for chronic disease prediction.
| Class | True positive (TP) | False Positive (FP) | False Negative (FN) | True Negative (TN) |
|---|---|---|---|---|
| Diabetes | 923 | 78 | 103 | 1120 |
| Hypertension | 907 | 65 | 95 | 1154 |
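The clinically relevant rates implied by Table 26 can be derived directly from the raw counts. The helper below is a generic sketch; the derived accuracies of roughly 92% for both diseases are consistent with the headline results reported earlier.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Derive clinically relevant rates from raw confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "accuracy":    (tp + tn) / total,
        "sensitivity": tp / (tp + fn),     # recall = 1 - false-negative rate
        "specificity": tn / (tn + fp),
        "fnr":         fn / (tp + fn),     # clinically critical: missed cases
    }

# Counts taken from Table 26 (aggregated over five evaluation runs).
diabetes = diagnostic_metrics(tp=923, fp=78, fn=103, tn=1120)
hypertension = diagnostic_metrics(tp=907, fp=65, fn=95, tn=1154)
for name, metrics in [("diabetes", diabetes), ("hypertension", hypertension)]:
    print(name, {k: round(v, 3) for k, v in metrics.items()})
```

For diabetes this gives a sensitivity of about 0.90 and a false-negative rate of about 0.10; hypertension fares slightly better on both, matching the "low and stable false-negative rate" discussed in the text.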
To visualize these failure patterns, Fig. 12 shows a normalized confusion matrix for both prediction tasks. The strong diagonal dominance reflects that most predictions fall into the correct categories, while the relatively low off-diagonal values confirm low misclassification rates. This error characterization makes Health-FedNet more interpretable in real-world settings.
Fig. 12.
Confusion matrix for diabetes and hypertension prediction.
Discussion
The evaluation shows that Health-FedNet excels on the main requirements of federated learning in healthcare. Diagnostic performance improves significantly: Health-FedNet achieves an AUC-ROC of 0.94, surpassing both traditional FL and the centralized model, validating its usefulness on heterogeneous, privacy-sensitive data. The communication analysis likewise highlights its scalability, with overhead reduced by 41.6% relative to Standard FL, pointing to its applicability in large-scale deployments.
Compared with traditional models, Health-FedNet leads in privacy and security assurance, remaining highly compliant with regulations such as HIPAA and GDPR while minimizing the risk of data release. Moreover, the robustness analysis across noise levels demonstrates the adaptability of Health-FedNet: the adaptive node-weighting scheme mitigates the influence of low-quality data and improves diagnostic precision in every noise environment. Finally, the disease outbreak prediction experiment shows that Health-FedNet is suitable for real-time applications, achieving high prediction accuracy with short convergence times appropriate for dynamic healthcare settings. Overall, the findings indicate the usability of Health-FedNet as a scalable, privacy-preserving, and resilient federated learning system for collaborative healthcare analytics.
Figure 13 shows Health-FedNet's real-time prediction trend, in which instantaneous predictions closely follow the averaged prediction signal over a 24-h span. The polar plots at the bottom show prediction intensity patterns across temporal cycles for two model variants. Together, these visualizations emphasize the stability and consistency of real-time predictions relative to aggregate model outputs.
Fig. 13.
Simulated real-time prediction trends under temporally evolving input patterns. The results are generated in a controlled experimental setting and are intended as a conceptual demonstration rather than validated real-time deployment on streaming clinical data.
The real-time prediction trends illustrated in Fig. 13 are generated under a simulated streaming environment and are intended to demonstrate the conceptual behavior of Health-FedNet under temporally evolving input patterns. These results do not represent deployment-level validation on live clinical streaming data. Instead, they provide an illustrative analysis of how the proposed framework may respond to time-varying signals, such as gradual disease progression or population-level fluctuations, under controlled experimental conditions. Empirical validation using real-world streaming electronic health records remains an important direction for future work.
There is a measured privacy trade-off: Health-FedNet incurs a 2–3% loss in accuracy because of DP noise, but in return offers stronger privacy guarantees and avoids pooling raw data as a centralized model would. This marginal accuracy loss is clinically acceptable given the value of preserving confidentiality.
Health-FedNet also remained highly predictive over time, confirming its performance in dynamic settings.
These empirical results are consistent with recent research showing that privacy protection can be enhanced by injecting noise into the federated weight space in a controlled manner while maintaining accuracy51. This confirms that combining differential privacy noise with secure aggregation yields stronger robustness and privacy assurances.
Our work complements recent federated medical imaging surveys44 and extends the literature by incorporating differential privacy, homomorphic encryption, and adaptive weighting to facilitate scalable, compliant cross-institution learning52.
The main drivers of the performance improvements are the adaptive node-weighting scheme, which prioritizes high-quality medical contributors, and the calibrated DP mechanism, which minimizes accuracy loss without compromising privacy. In addition, homomorphic encryption enables secure gradient aggregation without revealing unmasked medical information.
IoT and Edge Deployment Limitations: Although Health-FedNet performs well when deployed in a distributed hospital environment, the framework faces limitations on resource-constrained edge and IoT healthcare devices (e.g., wearables, remote patient monitoring systems). HE and secure aggregation impose non-trivial computational and memory overheads on low-power processors, which can increase inference time and shorten battery life; HE operations in particular consume substantial compute time on low-powered devices, and aggregation adds communication load in large networks. Lightweight models and energy-aware privacy mechanisms are therefore needed for real-time chronic disease prediction on IoT nodes. Future work will study model compression, parameter-efficient fine-tuning, pruning, lightweight post-quantum cryptography, secure hardware acceleration, and client sampling to enable efficient deployment on smart biomedical sensors and home-care monitoring devices.
Beyond predictive performance, recent trends in medical AI indicate a shift toward large-scale, transferable foundation models that can generalize across institutions and modalities. Bao et al. introduced a foundation-level AI model for medical image segmentation, demonstrating the feasibility of scalable and reusable medical AI architectures53. While Health-FedNet focuses on tabular clinical prediction rather than imaging, its federated and privacy-preserving design aligns with this emerging paradigm by enabling secure, cross-institution knowledge sharing. Future extensions of this work may integrate foundation-model principles with federated learning to further enhance generalizability across diverse healthcare environments.
Mapping of regulatory compliance
To verify that Health-FedNet complies with applicable healthcare privacy regulations, a dedicated compliance audit was conducted by mapping its technical controls to the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR). The table below summarizes how each Health-FedNet control relates to selected articles or safeguards specified in these standards.
It is important to note that the regulatory compliance mapping presented in Table 27 and the accompanying simulated audit logs are intended as a technical alignment and illustrative validation of how Health-FedNet supports key HIPAA and GDPR requirements. These results do not constitute a formal legal audit, regulatory certification, or legal compliance endorsement, which would require independent assessment by certified compliance professionals and regulatory authorities. As such, the compliance analysis should be interpreted as a system-level design alignment rather than a substitute for institutional regulatory approval.
Table 27.
HIPAA/GDPR control mapping for Health-FedNet.
| Control | Implementation in Health-FedNet | Regulatory reference |
|---|---|---|
| Data minimization | Localized model training ensures that no raw patient data leaves the institution; only encrypted model updates are exchanged | GDPR Art. 5(1)(c) |
| Encryption in transit and at rest | End-to-end data protection using Paillier Homomorphic Encryption and TLS during transmission and aggregation | HIPAA 164.312(a)(2)(iv) |
| Differential privacy noise addition | Gaussian noise limits the influence of any individual record on model updates, ensuring data untraceability | GDPR Recital 26 |
| Access control and audit logs | Federated server maintains secure authentication, role-based access, and immutable audit trails of all training activities | HIPAA 164.312(b) |
| Data integrity and accountability | Hash-based model verification ensures no tampering during aggregation; cryptographic logs provide traceability | HIPAA 164.312(c)(1); GDPR Art. 5(2) |
| Right to erasure and data locality | Institutions retain full data ownership and can remove records without affecting the federated model | GDPR Art. 17 |
| Incident response and breach reporting | Encrypted update protocols and monitoring modules trigger automated alerts in cases of abnormal communication or potential security breaches | HIPAA 164.308(a)(6)(ii); GDPR Art. 33 |
To provide tangible evidence of adherence to regulatory auditability requirements, we generated a simulated audit log capturing critical security events during federated training. The log entries are time-stamped records of client authentication, encrypted parameter uploads, aggregation, and integrity verification. These entries correspond directly to the audit control requirements of HIPAA §164.312(b) and the security incident procedures outlined in HIPAA §164.308(a)(6)(ii). The audit trail also supports the GDPR accountability principle (Art. 5(2)) by maintaining immutable, verifiable records of all model-update interactions across participating institutions.
According to the compliance mapping in Table 27, Health-FedNet's federated design satisfies the key regulatory requirements by design, since it precludes central storage of identifiable data and provides technical safeguards for confidentiality, integrity, and accountability. Together, these protections create an auditable, transparent framework that strengthens institutional trust and compliance with international data governance standards.
Discrimination prevention and equality
Health-FedNet addresses bias and fairness by including fairness-aware optimization and model auditing across patient groups. The framework compares model performance across subpopulations (e.g., age, gender, ethnicity) to identify possible disparities in prediction outcomes. During aggregation, fairness indicators are adaptively weighted alongside data quality to prevent model updates that disproportionately favor or harm particular groups. In addition, differential privacy noise calibration minimizes sensitive-subgroup leakage, limiting bias amplification through adversarial reconstruction or membership inference. Ongoing fairness checkpoints and group-specific evaluation metrics help ensure equitable model behavior across institutions and populations and ethical use in real-world healthcare systems.
To ensure that the fairness analysis is quantitatively grounded rather than purely descriptive, we explicitly report group-wise fairness metrics, including the demographic parity gap and the equal opportunity gap, in addition to accuracy, for each gender and age subgroup. These metrics, summarized in Table 12, provide a formal assessment of whether prediction outcomes are unevenly distributed across demographic groups. The consistently low demographic parity and equal opportunity gaps across all evaluated subgroups indicate that Health-FedNet does not introduce systematic bias or disproportionate error rates toward any specific population, thereby satisfying fairness requirements under heterogeneous clinical conditions.
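For readers reproducing this audit, the two reported gaps have standard definitions: the demographic parity gap is the spread in positive-prediction rates across groups, and the equal opportunity gap is the spread in true-positive rates. A minimal NumPy sketch (function names are our own; it assumes every group contains at least one positive case):

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Spread in P(y_hat = 1 | group) across groups: max rate minus min rate."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, groups):
    """Spread in true-positive rate, P(y_hat = 1 | y = 1, group), across groups."""
    tprs = []
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)  # group members with y = 1
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

# Toy example with two groups.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 0])
groups = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])
dp_gap = demographic_parity_gap(y_pred, groups)        # 0.25
eo_gap = equal_opportunity_gap(y_true, y_pred, groups)  # 0.5
```

Values near zero for both gaps indicate that neither prediction rates nor sensitivity differ materially across subgroups.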
Clinical safety and fairness analysis
In addition to predictive accuracy, clinical deployment of healthcare AI systems requires explicit evaluation of fairness and safety. As reported in Table 12, Health-FedNet demonstrates consistently balanced performance across demographic subgroups, with accuracy variations remaining within a narrow range for gender and age groups. The demographic parity and equal opportunity gaps remain small across all subgroups, indicating that the model does not introduce systematic bias toward any specific population. This suggests that the federated training process, combined with adaptive node weighting, does not amplify demographic disparities commonly observed in heterogeneous clinical datasets.
Furthermore, Table 26 provides a confusion-matrix-based analysis of diagnostic outcomes for diabetes and hypertension across five evaluation runs. The results indicate a low and stable false-negative rate for both diseases, which is particularly critical in chronic disease prediction where missed diagnoses can delay treatment and adversely affect patient outcomes. The consistent true-positive and true-negative counts across disease categories confirm that Health-FedNet maintains reliable diagnostic behavior under privacy-preserving constraints. Collectively, these findings demonstrate that Health-FedNet is not only accurate, but also clinically safe and equitable, satisfying key prerequisites for real-world healthcare deployment.
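The confusion-matrix quantities discussed above translate mechanically into the clinically relevant rates. The sketch below shows the standard derivations; the counts in the usage line are illustrative only, not the values reported in Table 26.

```python
def rates_from_confusion(tp, fp, fn, tn):
    """Derive clinically relevant rates from one confusion matrix.
    The false-negative rate (missed diagnoses) is the critical quantity
    for chronic-disease screening."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "fnr": fn / (fn + tp),           # fraction of true cases missed
        "fpr": fp / (fp + tn),           # fraction of healthy cases flagged
    }

# Illustrative counts for a single evaluation run.
r = rates_from_confusion(tp=460, fp=30, fn=25, tn=485)
```

By construction, sensitivity and FNR sum to 1, so a low, stable FNR across runs is equivalent to consistently high sensitivity.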
Conclusion
This paper presented Health-FedNet, a privacy-preserving, secure federated learning framework capable of accurately predicting chronic diseases in distributed healthcare settings under regulatory constraints. The framework combines differential privacy, Paillier homomorphic encryption, and an adaptive node-weighting strategy to overcome limitations of current privacy-conscious learning systems, in which privacy, accuracy, and stability are usually treated separately. Experimental analysis on the MIMIC-III dataset confirms that Health-FedNet achieves 92% accuracy and an AUC-ROC of 0.94, exceeding several recent federated healthcare models by 4–7%. The system also reduces membership-inference leakage to 5.7%, versus 15–20% in the literature, and cuts communication overhead by 41.6% through encryption-aware model aggregation. These results demonstrate that the proposed integration of DP, HE, and adaptive weighting yields statistically significant gains in predictive performance and privacy resilience.
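The integration summarized above can be sketched in two steps: each client update is clipped and noised via the standard Gaussian mechanism before leaving the institution, and the server averages the sanitized updates with quality-derived weights. The clip norm, noise multiplier, and quality scores below are illustrative assumptions, not the calibrated values used in our experiments.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip a client update to a fixed L2 norm, then add Gaussian noise
    scaled to the clipping bound (the standard Gaussian mechanism)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, noise_mult * clip_norm, update.shape)

def weighted_aggregate(updates, quality_scores):
    """Average sanitized updates, weighting each client by a normalized
    data-quality score so higher-quality institutions contribute more."""
    w = np.asarray(quality_scores, dtype=float)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))
```

In the full pipeline the sanitized updates would additionally be Paillier-encrypted before upload, so the server aggregates ciphertexts and never observes individual plaintext updates.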
Despite these promising results, the framework has several limitations. Homomorphic encryption adds computational cost to gradient encryption and aggregation, complicating deployment on resource-constrained IoT and mobile healthcare devices. The current experiments rely solely on MIMIC-III and may not generalize to hospital systems with different EHRs, data distributions, and clinical protocols. Although adaptive weighting improves robustness under heterogeneous conditions, very small or highly imbalanced institutional datasets can still produce unstable updates. These constraints indicate that the cryptographic pipeline and the learning-stability mechanisms require further optimization.
To mitigate the deployment challenges on resource-constrained IoT and edge healthcare devices, several practical strategies are feasible within the Health-FedNet architecture. These include selective client participation, edge–cloud hybrid aggregation where only lightweight updates are processed on-device, adaptive model compression, and encrypted gradient sparsification to reduce both computational and memory overhead. While these optimizations were not experimentally evaluated in the present study, they represent concrete and well-established mechanisms that can significantly improve feasibility on wearable and edge-based healthcare platforms.
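Of the strategies listed above, gradient sparsification is the most mechanical to sketch: transmit only the top-k coordinates of each update by magnitude, as index–value pairs, instead of the dense vector. The sparsity ratio below is an illustrative choice, not a tuned parameter from this study.

```python
import numpy as np

def topk_sparsify(grad, k_ratio=0.1):
    """Keep only the k largest-magnitude gradient entries; transmitting
    (indices, values) instead of the dense vector cuts bandwidth roughly
    in proportion to k_ratio."""
    k = max(1, int(k_ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def densify(idx, vals, size):
    """Reconstruct a dense gradient on the server side, zero-filling
    the coordinates that were not transmitted."""
    out = np.zeros(size)
    out[idx] = vals
    return out

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
idx, vals = topk_sparsify(g, 0.05)     # send only 5% of entries
g_hat = densify(idx, vals, g.size)
```

In an encrypted pipeline, only the surviving values would be Paillier-encrypted, so sparsification also shrinks the dominant encryption cost.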
Health-FedNet is suitable for use in real-world settings. Its adherence to HIPAA and GDPR privacy requirements enables privacy-conscious cooperation among hospitals that are legally barred from sharing raw patient information. The modular architecture is also compatible with IoT-enabled healthcare infrastructures such as remote patient monitors, clinical wearables, and edge gateways. This compatibility makes the framework applicable to future integration into multi-hospital networks and national-level eHealth ecosystems, where secure distributed learning will become a key requirement.
Several concrete avenues of future research can enhance the system. A key goal is to reduce the computational cost of homomorphic encryption by up to 50% using CKKS-based approximate encryption. Further directions include developing communication-efficient encrypted update pipelines for mobile health devices, extending the framework to broader disease problems such as oncology and cardiovascular event prediction, and evaluating performance in real-world pilot deployments across geographically distributed hospitals. Future work will also consider real-time streaming data and continuous federated updates to support live clinical decision-making. Altogether, Health-FedNet represents a meaningful step toward secure, scalable, and clinically feasible federated healthcare intelligence.
Beyond computational efficiency, future work will explicitly focus on addressing extreme non-IID data distributions and adversarial client behavior, which remain open challenges in real-world federated healthcare systems. Planned extensions include robustness-aware aggregation strategies, Byzantine-resilient client filtering, and trust-based update validation to prevent malicious or low-quality contributions from destabilizing the global model. These enhancements will be evaluated under highly skewed institutional data distributions to ensure stability, fairness, and security at scale.
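One standard robustness-aware aggregation rule of the kind envisioned here is the coordinate-wise trimmed mean, which bounds the influence of Byzantine clients by discarding extreme values at each coordinate before averaging. This sketch names the technique plainly as a candidate, not our implemented method; the trim fraction is an illustrative assumption.

```python
import numpy as np

def trimmed_mean_aggregate(updates, trim_frac=0.2):
    """Coordinate-wise trimmed mean: at every parameter coordinate, drop
    the trim_frac largest and smallest client values, then average the
    rest. A single malicious outlier cannot drag the aggregate far."""
    stacked = np.stack(updates)          # shape: (n_clients, n_params)
    n = stacked.shape[0]
    k = int(trim_frac * n)               # clients trimmed from each tail
    sorted_vals = np.sort(stacked, axis=0)
    if k > 0:
        sorted_vals = sorted_vals[k:n - k]
    return sorted_vals.mean(axis=0)

# Four honest clients near 1.0 and one poisoned update at 100.0:
updates = [np.array([1.0]), np.array([1.1]), np.array([0.9]),
           np.array([1.0]), np.array([100.0])]
robust = trimmed_mean_aggregate(updates, trim_frac=0.2)  # stays near 1.0
```

A plain mean of the same updates would exceed 20, illustrating why untrimmed averaging is fragile under adversarial participation.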
Acknowledgements
The authors would like to thank the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia, for supporting this work under Grant No. KFU253265. The authors also acknowledge the responsible use of Large Language Models (LLMs), such as ChatGPT, strictly for language refinement and formatting purposes. All intellectual and scientific contributions, including research design, data analysis, and interpretation, were solely performed by the authors in compliance with institutional academic integrity and ethical research policies.
Author contributions
Muhammad Ilyas Shahid designed and implemented the federated learning models, conducted experiments, and analyzed results. Muhammad Nabeel Asghar contributed to methodology development, validation, and interpretation of findings. Alaulamie Abdullah assisted in data preparation, experimental setup, and manuscript editing. Hafiz Muhammad Sanaullah Badar provided guidance throughout the research, contributed to overall direction, and reviewed the manuscript critically. All authors read and approved the final manuscript.
Funding Statement
This research was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia, under Grant No. KFU253265.
Data availability
The MIMIC-III (Medical Information Mart for Intensive Care III) database used in this study is publicly available through PhysioNet (https://physionet.org/content/mimiciii/). Access to the MIMIC-III database requires completion of the CITI (Collaborative Institutional Training Initiative) “Data or Specimens Only Research” course and approval under the PhysioNet data use agreement. Researchers wishing to access the data must submit an access request through the PhysioNet credentialing system.
Declarations
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
































