Algorithm 1. RIW in a CNN layer.
1. Initialize the weights of the first convolutional layer by drawing from a normal distribution with mean 0 and standard deviation 1, using the specified shape of the weight tensor.
2. Pass the input data through this convolutional layer to generate a set of feature maps.
3. Apply a non-linear activation function, such as ReLU, to the feature maps.
4. Pass the output of the first convolutional layer to the second convolutional layer.
5. Initialize the weights of the second convolutional layer from the same normal distribution (mean 0, standard deviation 1).
6. To help the network learn the shift of the feature space between the first and second convolutional layers, add a bias term to the second convolutional layer proportional to the mean of the output feature maps of the first convolutional layer.
7. Repeat steps 2–6 for the remaining convolutional layers in the model.
8. Train the model with backpropagation, adjusting the weights and biases of the convolutional layers according to the error between the predicted output and the true output.
9. Repeat the training process with different random initializations of the weights to avoid getting stuck in local optima.
10. Evaluate the model on a validation set to verify that it effectively learns the shift of the feature space between the convolutional layers.
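The following is a minimal sketch of steps 1–7 of Algorithm 1, written in PyTorch as an assumption (the source does not name a framework). The two-layer architecture, the layer sizes, and the proportionality constant 0.1 used for the bias shift in step 6 are illustrative choices, not values taken from the source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """A small stack of convolutional layers used to illustrate RIW."""

    def __init__(self, in_channels: int = 3, hidden: int = 16, num_classes: int = 10):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
        ])
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for conv in self.convs:
            x = F.relu(conv(x))
        # Global average pooling followed by a linear classification head.
        return self.fc(x.mean(dim=(2, 3)))


def riw_initialize(model: SmallCNN, sample_batch: torch.Tensor) -> None:
    """Steps 1-7: N(0, 1) weight initialization plus a bias shift
    proportional to the mean of the previous layer's feature maps."""
    x = sample_batch
    prev_mean = None
    with torch.no_grad():
        for conv in model.convs:
            # Steps 1 and 5: draw weights from N(0, 1).
            nn.init.normal_(conv.weight, mean=0.0, std=1.0)
            nn.init.zeros_(conv.bias)
            if prev_mean is not None:
                # Step 6: bias proportional to the mean of the preceding
                # layer's output feature maps (0.1 is a hypothetical factor).
                conv.bias.fill_(0.1 * prev_mean)
            # Steps 2-4: convolve the running input and apply ReLU.
            x = F.relu(conv(x))
            prev_mean = x.mean().item()


if __name__ == "__main__":
    model = SmallCNN()
    riw_initialize(model, torch.randn(8, 3, 32, 32))
    out = model(torch.randn(4, 3, 32, 32))
    print(out.shape)  # torch.Size([4, 10])
```

Steps 8–10 would then train this model with backpropagation, repeat the procedure with new random draws, and compare the resulting runs on a validation set.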