Algorithm 2 Tr-Predictor
  • Input: 

    Suppose $T=\{t_1, t_2, \ldots, t_n\}$ is a set containing multiple workload sequences of different lengths, and the base learner is LSTM.

  • Output: 

    The ensemble transfer model obtains the predicted value through the following formula (a Python sketch of the training loop and of this ensemble combination follows the algorithm):

    $$\mathrm{Temp}_j=\frac{\log\left(1/\beta_{t_j}\right)}{\sum_{j=1}^{N}\log\left(1/\beta_{t_j}\right)}\,\mathrm{LSTM}_j(x)$$

    for $t = 1, 2, \ldots, S$.

  •  1:

    Call Equations (1) and (3) to obtain the source domain data, recorded as $T_{\mathrm{source}}$ (length $n$);

  •  2:

    Call Equations (4)–(6) to preprocess the data and extract the target data as $T_{\mathrm{target}}$ (length $m$); let $T_c = T_{\mathrm{source}} + T_{\mathrm{target}}$.

  •  3:

    Set the initial weight vector:

    $$W_t = \left(\frac{1}{m+n}, \ldots, \frac{1}{m+n}\right), \quad i = 1, 2, \ldots, m+n.$$

  •  4:

    Set the total weight of $T_{\mathrm{source}}$: $w_i^s = n/(m+n)$; the total weight of $T_{\mathrm{target}}$ is $w_i^t = m/(m+n)$.

  •  5:

    Call AdaBoost-LSTM (Algorithm 1) to train on the data $T_c$; during training, freeze the weights $w_i^s$ of the top $n$ source instances and update only the target weights $w_i^t$. Record the resulting model as $\mathrm{model}_t$. (The hypothesis trained by $\mathrm{LSTM}_t$ in $\mathrm{model}_t$ is $h_t: x \rightarrow \mathbb{R}$.)

  •  6:

    Use Step 2 of Algorithm 1 to calculate the adjusted error of each instance in the target domain, $e_i^t$; change $\beta_t$ in Algorithm 1 to $\beta_s = 1/\left(1+\sqrt{2\ln n/N}\right)$ and use it to calculate the adjusted error of each instance in the source domain, $e_i^s$.

  •  7:

    Calculate the adjusted error $\varepsilon_t$ of $\mathrm{model}_t$: $\varepsilon_t=\sum_{i=1}^{n} e_i^t w_i^t$; if $\varepsilon_t \geq 0.5$, stop and set $N = t-1$.

  •  8:

    Let $\beta_t = \varepsilon_t/(1-\varepsilon_t)$, freeze the weights of the target domain, and update only the weight vector of the new source domain: $w_i^{s+1} = w_i^s\,\beta_t^{e_i^t}/Z_t$, where $Z_t$ is a normalizing constant.

    End for.

  •  9:

    return $y_{\mathrm{pre}} = \sum_{j=1}^{N}\mathrm{Temp}_j$.
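
The following is a minimal Python sketch of the training loop in Steps 3–8, intended only to make the weight bookkeeping concrete. It assumes the source and target sets have already been prepared by Equations (1)–(6), and it substitutes a scikit-learn DecisionTreeRegressor for the AdaBoost-LSTM base learner of Algorithm 1 so that the example runs without a deep-learning framework; the internal target-weight adaptation performed by Algorithm 1 is therefore not reproduced, and the source-weight update follows the standard TrAdaBoost rule ($\beta_s$ raised to the source instance's adjusted error). All function and variable names are illustrative, not taken from the paper.

```python
# Sketch of the Tr-Predictor training loop (Steps 3-8) under the assumptions
# stated above. A DecisionTreeRegressor stands in for the AdaBoost-LSTM base
# learner purely so the example is self-contained and runnable.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def tr_predictor_fit(X_source, y_source, X_target, y_target, S=10):
    n, m = len(X_source), len(X_target)
    X = np.vstack([X_source, X_target])          # T_c = T_source + T_target
    y = np.concatenate([y_source, y_target])

    # Step 3: initial weight vector, every instance weighted 1/(m+n).
    w = np.full(m + n, 1.0 / (m + n))

    # Step 6: beta_s = 1 / (1 + sqrt(2 ln n / N)), kept fixed for the source domain.
    beta_s = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / S))

    models, betas = [], []
    for t in range(S):
        # Step 5: train the (stand-in) base learner on T_c with the current weights.
        model = DecisionTreeRegressor(max_depth=3)
        model.fit(X, y, sample_weight=w)

        # Step 6: AdaBoost.R2-style adjusted (relative) error of every instance.
        residual = np.abs(model.predict(X) - y)
        denom = residual.max()
        e = residual / denom if denom > 0 else np.zeros_like(residual)
        e_src, e_tgt = e[:n], e[n:]

        # Step 7: weighted adjusted error on the target domain; stop if >= 0.5.
        eps_t = np.sum(e_tgt * w[n:]) / np.sum(w[n:])
        if eps_t >= 0.5:
            break

        # Step 8: beta_t = eps_t / (1 - eps_t). Here the source weights are
        # shrunk with the standard TrAdaBoost rule beta_s ** e_i, target
        # weights stay frozen, and the vector is renormalized (the Z_t term).
        beta_t = eps_t / (1.0 - eps_t)
        w[:n] *= beta_s ** e_src
        w /= w.sum()

        models.append(model)
        betas.append(beta_t)

    return models, betas
```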
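
Continuing the sketch, the function below combines the $N$ retained learners according to the output formula and Step 9: each learner contributes $\mathrm{Temp}_j$ with a coefficient proportional to $\log(1/\beta_{t_j})$, and the prediction is the sum of the $\mathrm{Temp}_j$ terms. It assumes the (models, betas) pair returned by the illustrative tr_predictor_fit above.

```python
import numpy as np


def tr_predictor_predict(models, betas, X_new):
    """Ensemble combination from the output formula and Step 9 (sketch)."""
    log_inv_beta = np.log(1.0 / np.asarray(betas))
    coeff = log_inv_beta / log_inv_beta.sum()        # log(1/beta_j) / sum_j log(1/beta_j)
    preds = np.stack([m.predict(X_new) for m in models])
    temps = coeff[:, None] * preds                   # Temp_j for each retained learner
    return temps.sum(axis=0)                         # y_pre = sum_{j=1}^{N} Temp_j
```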