Algorithm 1 MIMO-NOMA Based on the DL Training Algorithm.

1: Initialize the DNN model;
2: Generate the training data and adjust its format. Assuming that the number of slots is N, the input data consist of N vectors, each of which is the MIMO-NOMA column vector of slot i;
3: Set the key parameters, including the mini-batch size, the learning rate, and the activation functions of the hidden and output layers, and initialize the weights and biases of each DNN layer;
4: Run the forward pass of the DNN and obtain the output layer's results;
5: Calculate the loss function, i.e., the cross-entropy;
6: Compute the corrective parameters with the Adam optimization algorithm and update the DNN parameters to search for the optimal solution;
7: Return to Step 4 if the loss function is not small enough; otherwise, proceed to the next step. If the loss function does not meet the requirement, the DNN should be retrained with the updated parameters;
8: Test the trained DNN with the test data and plot the SER–SNR curve.
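The core loop of Steps 4–7 (forward pass, cross-entropy loss, Adam update, repeat until the loss is small enough) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the one-hidden-layer network, the synthetic stand-in data, and all hyperparameters (hidden width, learning rate, iteration count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: classify 4 "symbol" classes from 8-dim vectors
# standing in for the per-slot MIMO-NOMA column vectors of Step 2.
n_in, n_hidden, n_out, n_samples = 8, 32, 4, 512
X = rng.normal(size=(n_samples, n_in))
y = rng.integers(0, n_out, size=n_samples)
X += np.eye(n_out)[y] @ rng.normal(size=(n_out, n_in))  # make classes separable

# Step 3: initialize the weights and biases of each layer.
params = {
    "W1": rng.normal(scale=0.1, size=(n_in, n_hidden)),
    "b1": np.zeros(n_hidden),
    "W2": rng.normal(scale=0.1, size=(n_hidden, n_out)),
    "b2": np.zeros(n_out),
}

def forward(X, p):
    # Step 4: forward pass (ReLU hidden layer, softmax output layer).
    h = np.maximum(0.0, X @ p["W1"] + p["b1"])
    logits = h @ p["W2"] + p["b2"]
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return h, probs

def loss_and_grads(X, y, p):
    # Step 5: cross-entropy loss, plus its gradients by backpropagation.
    h, probs = forward(X, p)
    n = len(y)
    loss = -np.log(probs[np.arange(n), y] + 1e-12).mean()
    d_logits = probs.copy()
    d_logits[np.arange(n), y] -= 1.0
    d_logits /= n
    grads = {"W2": h.T @ d_logits, "b2": d_logits.sum(0)}
    d_h = (d_logits @ p["W2"].T) * (h > 0)
    grads["W1"] = X.T @ d_h
    grads["b1"] = d_h.sum(0)
    return loss, grads

# Step 6: Adam optimizer state and update rule.
m = {k: np.zeros_like(val) for k, val in params.items()}
v = {k: np.zeros_like(val) for k, val in params.items()}
beta1, beta2, lr, eps = 0.9, 0.999, 1e-2, 1e-8

losses = []
for t in range(1, 201):  # Step 7: iterate until the loss is small enough
    loss, grads = loss_and_grads(X, y, params)
    losses.append(loss)
    for k in params:
        m[k] = beta1 * m[k] + (1 - beta1) * grads[k]
        v[k] = beta2 * v[k] + (1 - beta2) * grads[k] ** 2
        m_hat = m[k] / (1 - beta1 ** t)
        v_hat = v[k] / (1 - beta2 ** t)
        params[k] -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(f"cross-entropy: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Step 8 (testing the trained network on held-out data and plotting the SER–SNR curve) would reuse `forward` on a separate test set at each SNR point.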