Table 2. Pseudocode of the multi-label English translation text classification algorithm.
| Input: Training set T = {(w_k, y_kj), j = 1, …, L, k = 1, …, N}; model parameters θ. | |
|---|---|
| 1: | W = {w_k = 1/N, k = 1, 2, …, N} |
| 2: | repeat |
| 3: | for all (w_k, y_kj) ∈ D do |
| 4: | for l in range(N) |
| 5: | w_k = {w_1, w_2, …, w_n} |
| 6: | T_k = Att-BiLSTM(w_k) |
| 7: | for o in range(O) |
| 8: | X_ok = Transformer(T_k) |
| 9: | end for |
| 10: | for p in range(P) |
| 11: | h_t1 = TLCM(X_ok) |
| 12: | h_t2 = BERT(X_ok) |
| 13: | h_t3 = [h_t1, h_t2] |
| 14: | end for |
| 15: | for q in range(Q) |
| 16: | v_q = Conv(h_t1) |
| 17: | F_q = MaxPooling(v_q) |
| 18: | end for |
| 19: | M = Concat(F_q, h_t3) |
| 20: | A = Attention(M) |
| 21: | W_2 = FullyConnected(A, Dropout) |
| 22: | y_kj = δ(W_2) |
| 23: | end for |
| 24: | Calculate the gradient of each parameter |
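The pipeline of Table 2 can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the TLCM and BERT branches are stood in by generic Transformer encoders, and the class name `MultiLabelClassifier`, all layer sizes, and the attention scoring are illustrative assumptions.

```python
# Minimal PyTorch sketch of the Table 2 pipeline (assumed dimensions throughout).
import torch
import torch.nn as nn


class MultiLabelClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=128,
                 num_labels=10, kernel_size=3, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)

        # Step 6: BiLSTM encoding of the token sequence w_k -> T_k
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

        # Step 8: Transformer encoder, T_k -> X_ok
        enc_layer = nn.TransformerEncoderLayer(d_model=2 * hidden_dim,
                                               nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)

        # Steps 11-13: two parallel branches standing in for TLCM and BERT;
        # their outputs h_t1 and h_t2 are concatenated into h_t3.
        self.tlcm_stub = nn.TransformerEncoder(enc_layer, num_layers=1)
        self.bert_stub = nn.TransformerEncoder(enc_layer, num_layers=1)

        # Steps 16-17: convolution over h_t1, followed by max pooling -> F_q
        self.conv = nn.Conv1d(2 * hidden_dim, hidden_dim, kernel_size,
                              padding=kernel_size // 2)

        # Step 20: simple learned attention over the fused representation M
        self.att = nn.Linear(hidden_dim + 4 * hidden_dim, 1)

        # Steps 21-22: fully connected layer with dropout, sigmoid output
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_dim + 4 * hidden_dim, num_labels)

    def forward(self, tokens):                      # tokens: (B, L) int ids
        x = self.embed(tokens)                      # (B, L, E)
        t_k, _ = self.bilstm(x)                     # (B, L, 2H)
        x_ok = self.transformer(t_k)                # (B, L, 2H)

        h_t1 = self.tlcm_stub(x_ok)                 # (B, L, 2H)
        h_t2 = self.bert_stub(x_ok)                 # (B, L, 2H)
        h_t3 = torch.cat([h_t1, h_t2], dim=-1)      # (B, L, 4H)

        v_q = self.conv(h_t1.transpose(1, 2))       # (B, H, L)
        f_q = torch.max(v_q, dim=-1).values         # global max pooling -> (B, H)

        # M = Concat(F_q, h_t3): broadcast the pooled vector over the sequence
        m = torch.cat([f_q.unsqueeze(1).expand(-1, h_t3.size(1), -1), h_t3],
                      dim=-1)                       # (B, L, H + 4H)
        a = torch.softmax(self.att(m), dim=1)       # attention weights (B, L, 1)
        pooled = (a * m).sum(dim=1)                 # (B, H + 4H)

        return torch.sigmoid(self.fc(self.dropout(pooled)))  # (B, num_labels)


if __name__ == "__main__":
    model = MultiLabelClassifier()
    probs = model(torch.randint(0, 10000, (2, 32)))  # batch of 2 sequences of 32 tokens
    print(probs.shape)                               # torch.Size([2, 10])
```

The sigmoid output corresponds to the multi-label prediction y_kj = δ(W_2) in line 22 of the table; in practice such a head would be trained with a binary cross-entropy loss per label.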