Curr Issues Mol Biol. 2025 Aug 6;47(8):628. doi: 10.3390/cimb47080628
Algorithm 1: MvAl-MFP
Input: The initial labeled set SL, the initial unlabeled set SU, the number of labels L, the human expert H, the multi-label learning model M, the number of feature representation methods V, the feature representation methods B1, B2, …, BV, and the number of unlabeled samples selected in each iteration λ.
Output: Ensemble of multi-label classifiers {M1, M2, …, MV}
Procedure:
  1. For i = 1:V

  2.   Transform SL and SU by Bi;

  3.   Train the initial classifier Mi on the transformed SL;

  4. End for

  5. Repeat until the preset stopping criterion is satisfied

  6.     For i = 1:|SU|

  7.      Generate Yi by applying {M1, M2, …, MV} to predict its L labels (see Equation (3));

  8.      Calculate its voting entropy Entropyi by Equation (4);

  9.   End for

  10.   Rank the voting entropies of all unlabeled samples in descending order, and select the Top-λ ones into SU* = {s1, s2, …, sλ} by Equation (5);

  11.   Submit SU* to H to acquire the real labels via wet-lab experiments;

  12.   Add SU* = {s1, s2, …, sλ}, together with their real labels, into SL;

  13.   Remove SU* = {s1, s2, …, sλ} from SU;

  14.   Update {M1, M2, …, MV} by retraining on the extended SL;

  15. Output the final {M1, M2, …, MV} and use them to make decisions for future unseen samples (see Equation (6)).
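The uncertainty-sampling core of the loop above (steps 6–10) can be sketched in Python. This is a minimal illustration, not the authors' implementation: it assumes a plausible reading of Equation (4) in which each sample's voting entropy is the mean binary entropy, over the L labels, of the fraction of the V view-specific classifiers voting that label relevant; the function names are hypothetical.

```python
import numpy as np

def voting_entropy(vote_counts, V):
    """Disagreement score for one sample (a hypothetical form of
    Equation (4)): mean binary entropy of the per-label fraction of
    the V classifiers that voted the label relevant."""
    p = np.asarray(vote_counts, dtype=float) / V
    h = np.zeros_like(p)
    # Labels with unanimous votes (p = 0 or p = 1) contribute zero entropy.
    mask = (p > 0) & (p < 1)
    q = p[mask]
    h[mask] = -(q * np.log2(q) + (1 - q) * np.log2(1 - q))
    return float(h.mean())

def select_most_uncertain(U_votes, lam):
    """Steps 6-10: score every unlabeled sample, rank in descending
    order of voting entropy, and return the Top-lambda indices
    (in the spirit of Equation (5)).

    U_votes: array of shape (n_unlabeled, V, L) holding the {0, 1}
    label votes of the V classifiers for each sample."""
    U_votes = np.asarray(U_votes)
    V = U_votes.shape[1]
    scores = [voting_entropy(sample.sum(axis=0), V) for sample in U_votes]
    order = np.argsort(scores)[::-1]  # descending entropy
    return order[:lam].tolist()
```

The selected indices would then be labeled by the expert, moved from SU to SL, and the V classifiers retrained, exactly as in steps 11–14; samples on which the views fully agree score zero and are never queried.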