Author manuscript; available in PMC: 2023 Feb 3.
Published in final edited form as: IEEE/ACM Trans Comput Biol Bioinform. 2022 Feb 3;19(1):107–120. doi: 10.1109/TCBB.2021.3058941

Algorithm 2. Dropfeature for Training DNNs

Input: The input feature subset, R ⊆ G; the feature index set, ϵ; the learning algorithm, 𝒜.
Output: The updated feature subset and index set, R′ and ϵ′.
1: begin
2:  Run 𝒜 on R and compute its AUC score A;
3:  Initialize the current state S = 0;
4: repeat
5:   Use a random walk to move to one of the following three states with equal probability;
6:   if state S = 1 then
7:    Remove a feature g ∈ R and add a feature g′ ∈ G∖R to obtain R₁ and ϵ₁;
8:    Run 𝒜 on R₁ and compute its AUC score A₁;
9:   else if state S = 2 then
10:    Remove a feature g ∈ R to obtain R₂ and ϵ₂;
11:    Run 𝒜 on R₂ and compute its AUC score A₂;
12:   else if state S = 3 then
13:    Add a feature g ∈ G∖R to obtain R₃ and ϵ₃;
14:    Run 𝒜 on R₃ and compute its AUC score A₃;
15:   end if
16:   if Aᵢ = max{A, A₁, A₂, A₃} (where i = S is the state just visited) then
17:    Update R′ = Rᵢ, ϵ′ = ϵᵢ, A = Aᵢ, and continue the search from R′;
18:   else if A = max{A, A₁, A₂, A₃} and Aᵢ = max{A₁, A₂, A₃} then
19:    With probability uᵢ, continue the search from R and update R′ = R, ϵ′ = ϵ, keeping A unchanged;
20:    or
21:    with probability 1 − uᵢ, continue the search from Rᵢ and update R′ = Rᵢ, ϵ′ = ϵᵢ, A = Aᵢ;
22:   end if
23:   Reinitialize the current state S = 0;
24: until no significant improvement in the AUC score is achieved;
25: return R′ and ϵ′;
26: end
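The random-walk search above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the `auc` callable stands in for "run the learning algorithm 𝒜 and compute its AUC", the index set ϵ is omitted (it is recoverable from the feature subset itself), and the probabilistic tie-breaking of lines 19–21 is simplified to a single exploration probability `u`. All names (`dropfeature`, `patience`, `tol`) are hypothetical.

```python
import random

def dropfeature(universe, R, auc, u=0.5, patience=50, tol=1e-4, seed=0):
    """Sketch of the Dropfeature random-walk feature search.

    universe -- the full feature index set G
    R        -- the initial feature subset
    auc      -- callable mapping a frozenset of features to an AUC score
                (stands in for retraining the DNN and scoring it)
    u        -- probability of staying at the current subset when no
                candidate beats the best score (the u_i in the pseudocode)
    """
    rng = random.Random(seed)
    cur = set(R)
    best_R, best_score = set(cur), auc(frozenset(cur))
    stale = 0
    while stale < patience:                      # stop: no significant AUC gain
        stale += 1
        state = rng.randint(1, 3)                # random walk over 3 equiprobable states
        cand = set(cur)
        outside = sorted(universe - cur)
        inside = sorted(cur)
        if state == 1 and inside and outside:    # state 1: swap g in R for g' in G\R
            cand.remove(rng.choice(inside))
            cand.add(rng.choice(outside))
        elif state == 2 and len(inside) > 1:     # state 2: remove g from R
            cand.remove(rng.choice(inside))
        elif state == 3 and outside:             # state 3: add g from G\R
            cand.add(rng.choice(outside))
        else:
            continue                             # move not applicable in this state
        score = auc(frozenset(cand))             # "retrain" and score the candidate
        if score > best_score + tol:             # candidate beats the best so far
            cur = cand
            best_R, best_score = set(cand), score
            stale = 0
        elif rng.random() > u:                   # else jump anyway with prob 1 - u
            cur = cand
    return best_R, best_score
```

A toy scorer such as the Jaccard similarity to a known "good" subset can be used to exercise the search; in a real setting `auc` would train the DNN on the candidate features and return its validation AUC, which makes each step expensive and motivates the early-stopping criterion.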