
Algorithm 1.

The training process of MDGCN-SRCNN.

     Input : A labeled training data set $\{X, Y\} = \{x_i, y_i\}_{i=1}^{N}$, the maximum number of training epochs $T$, the initial adjacency matrix $A$, and the regularization coefficient $\alpha$.
     Output: The learned adjacency matrix $\hat{A}$, the model parameters $\Theta$ of MDGCN-SRCNN, and the predicted labels $\hat{y}$.
     Step 1 : Initialize the model parameters $\Theta$ of the MDGCN-SRCNN model. Set the iteration counter iter = 1;
     Step 2 : while iter < T do
     Step 3 :   for k = 1, ..., l do
     Step 4 :     Calculate the k-th graph convolutional layer $H^{(k)}$ via Eq. (1) and the k-th sum pooling layer $\mathrm{Pool}(H^{(k)})$;
     Step 5 :   for k = 1, ..., l do
     Step 6 :     Calculate the k-th SMR-based convolution layer $C_k$ via Eq. (9);
     Step 7 :   Concatenate the features of the different layers into $F$ via Eq. (10);
     Step 8 :   Calculate the predicted label $\hat{y}$ via Eq. (11);
     Step 9 :   Update the adjacency matrix $A$ and the model parameters $\Theta$ with the optimizer according to the cross-entropy loss;
     Step 10:   iter = iter + 1;
     Step 11: end while
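
To make the flow of Algorithm 1 concrete, the following is a minimal PyTorch sketch of the training loop. It is not the authors' implementation: the class name MDGCNSRCNNSketch, the plain Linear-based graph convolution, the Conv1d stand-in for the SMR-based convolution of Eq. (9), and the alpha-weighted penalty on A are illustrative assumptions. Only the overall structure (per-layer forward pass with sum pooling, feature concatenation, cross-entropy loss, and joint update of A and Theta by the optimizer) follows the steps above.

```python
# Hypothetical sketch of the Algorithm 1 training loop (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDGCNSRCNNSketch(nn.Module):
    """Simplified stand-in for MDGCN-SRCNN: l graph-conv layers with sum
    pooling, l 1-D conv layers standing in for the SMR-based convolution,
    feature concatenation, and a linear classifier.  The adjacency matrix A
    is a learnable parameter so the optimizer can update it (Step 9)."""
    def __init__(self, n_nodes, in_dim, hid_dim, n_classes, n_layers=3):
        super().__init__()
        self.A = nn.Parameter(torch.eye(n_nodes))            # adjacency A to be learned
        self.gcn = nn.ModuleList(                             # stand-ins for Eq. (1)
            [nn.Linear(in_dim if k == 0 else hid_dim, hid_dim) for k in range(n_layers)]
        )
        self.conv = nn.ModuleList(                            # stand-ins for Eq. (9)
            [nn.Conv1d(1, 1, kernel_size=3, padding=1) for _ in range(n_layers)]
        )
        self.cls = nn.Linear(n_layers * hid_dim, n_classes)   # stand-in for Eq. (11)

    def forward(self, x):
        # x: (batch, n_nodes, in_dim)
        a = torch.softmax(self.A, dim=-1)                     # keep A row-normalized
        feats, h = [], x
        for gcn, conv in zip(self.gcn, self.conv):
            h = torch.relu(gcn(a @ h))                        # k-th graph conv layer H^(k)
            pooled = h.sum(dim=1)                             # sum pooling Pool(H^(k))
            c = conv(pooled.unsqueeze(1)).squeeze(1)          # k-th SMR-style conv C_k
            feats.append(c)
        f = torch.cat(feats, dim=-1)                          # concatenated features F (Eq. 10)
        return self.cls(f)                                    # logits for predicted label y_hat

def train(model, x, y, epochs=50, lr=1e-3, alpha=1e-4):
    # The optimizer updates both A and the remaining parameters Theta (Step 9).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                                   # while iter < T (Steps 2-11)
        opt.zero_grad()
        # Cross-entropy loss plus a placeholder alpha-weighted penalty on A
        # (the paper's actual regularizer is not reproduced here).
        loss = F.cross_entropy(model(x), y) + alpha * model.A.abs().mean()
        loss.backward()
        opt.step()
    return model
```

As a usage note, `x` is assumed to be a float tensor of shape (batch, n_nodes, in_dim) and `y` a tensor of integer class labels; the number of layers `n_layers` plays the role of l in Algorithm 1.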