Sensors. 2024 May 30;24(11):3519. doi: 10.3390/s24113519
Algorithm 1. Meta-Learner_Optimization
Input: Ensemble of deep learning classifiers {C1, C2,…, C7}, Training data D
Output: Optimized weights for each classifier in the ensemble
1: Initialize weights Wi for each classifier Ci in the ensemble, i = 1 to 7, such that sum(Wi) = 1
2: For each training epoch or until performance converges do
3:   For each classifier Ci in the ensemble do
4:     Extract meta-features: accuracy Ai, loss Li, and confidence level ConfLi from Ci using D
5:     Calculate performance score PSi for Ci using Ai, Li, and ConfLi
6:     Update weight Wi for Ci based on PSi
7:   End For
8:   Evaluate ensemble performance on the validation set using the updated weights
9:   If ensemble performance has converged or improved only minimally then
10:     Break from the loop
11:   End If
12: End For
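The main loop above can be sketched in Python as follows. The linear performance function F and the softmax re-weighting rule are illustrative assumptions, since the algorithm leaves both unspecified, and the `get_meta_features` and `evaluate` callbacks are hypothetical interfaces standing in for the real classifiers and validation set:

```python
import math

def optimize_meta_weights(get_meta_features, evaluate, n_classifiers=7,
                          max_epochs=100, tol=1e-4):
    """Steps 1-12: iteratively re-weight the ensemble members.

    get_meta_features(i) -> (accuracy, loss, confidence) for classifier Ci
    evaluate(weights)    -> validation performance of the weighted ensemble
    (both callbacks are assumed interfaces, not part of the paper)
    """
    # Step 1: uniform initial weights, sum(Wi) = 1
    weights = [1.0 / n_classifiers] * n_classifiers
    best = -math.inf
    for _ in range(max_epochs):                     # Step 2
        scores = []
        for i in range(n_classifiers):              # Step 3
            acc, loss, conf = get_meta_features(i)  # Step 4
            # Step 5: illustrative linear F(Ai, Li, ConfLi)
            scores.append(0.5 * acc - 0.3 * loss + 0.2 * conf)
        # Step 6 (plus normalization): softmax keeps weights positive, sum = 1
        exps = [math.exp(s) for s in scores]
        weights = [e / sum(exps) for e in exps]
        perf = evaluate(weights)                    # Step 8
        if perf - best < tol:                       # Step 9: minimal improvement
            break                                   # Step 10
        best = perf
    return weights
```

With static meta-features the loop terminates after the second epoch, since validation performance no longer improves.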

Procedure Calculate_Performance_Score(Accuracy Ai, Loss Li, Confidence Level ConfLi)
1:  Define a performance function F that considers Ai, Li, ConfLi
2:  Return performance score PSi = F(Ai, Li, ConfLi)
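The procedure leaves F open. One simple choice is a weighted linear combination that rewards accuracy and confidence and penalizes loss; the coefficients below are illustrative assumptions, not values given in the paper:

```python
def calculate_performance_score(accuracy, loss, confidence,
                                alpha=0.5, beta=0.3, gamma=0.2):
    """Illustrative F(Ai, Li, ConfLi): higher accuracy and confidence
    raise the score, higher loss lowers it. alpha, beta, and gamma are
    assumed hyperparameters, not values from the paper."""
    return alpha * accuracy - beta * loss + gamma * confidence
```

Any function that is monotone increasing in Ai and ConfLi and decreasing in Li would fit the procedure equally well.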

Procedure Update_Weight(Performance Score PSi)
1:  Define a weighting strategy that adjusts Wi based on PSi
2:  Update Wi according to the defined strategy
3:  Normalize all weights Wi so that sum(Wi) = 1
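One concrete weighting strategy satisfying steps 1-3 is a softmax over the performance scores, which keeps every weight positive and performs the normalization in a single step. This is one plausible choice; the paper does not fix the strategy:

```python
import math

def update_weights(performance_scores):
    """Map performance scores PSi to ensemble weights Wi via softmax.
    This is an assumed strategy; any monotone mapping followed by
    normalization would also satisfy the procedure."""
    exps = [math.exp(s) for s in performance_scores]
    total = sum(exps)
    return [e / total for e in exps]  # Step 3: sum(Wi) = 1 by construction
```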

Procedure Evaluate_Ensemble(Validation Data V)
1:  For each data point in V do
2:    Aggregate predictions from all classifiers using their weights Wi
3:  End For
4:  Calculate and return the overall performance of the ensemble on V
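For classifiers that return class-probability vectors (an assumed interface), the weighted aggregation and evaluation can be sketched as weighted soft voting, with accuracy as the returned performance measure:

```python
def evaluate_ensemble(val_data, classifiers, weights):
    """Weighted soft voting over (input, label) pairs in val_data.
    Each classifier is assumed to map an input to a list of class
    probabilities; overall accuracy on V is returned."""
    correct = 0
    for x, y in val_data:
        n_classes = len(classifiers[0](x))
        # Aggregate predictions from all classifiers using their weights Wi
        combined = [sum(w * clf(x)[k] for clf, w in zip(classifiers, weights))
                    for k in range(n_classes)]
        pred = max(range(n_classes), key=lambda k: combined[k])
        correct += int(pred == y)
    return correct / len(val_data)
```

Because the weights sum to 1, the combined vector is itself a valid probability distribution over classes.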