Front Plant Sci. 2022 Mar 23;13:716506. doi: 10.3389/fpls.2022.716506

TABLE 3.

Parameters for CNNs.

Layer                                       Parameters (2D-CNN)         Parameters (3D-CNN)
-------------------------------------------------------------------------------------------
Conv2D (for 2D-CNN) or Conv3D (for 3D-CNN)  Filters = 64                Filters = 64
                                            Kernel_size = (3,3)         Kernel_size = (3,3,3)
                                            Padding = "same"            Padding = "same"
                                            Activation = "relu"         Activation = "relu"
                                            Input_shape = (5,5,7)       Input_shape = (5,5,5,7)
Conv2D (for 2D-CNN) or Conv3D (for 3D-CNN)  Filters = 128               Filters = 128
                                            Kernel_size = (2,2)         Kernel_size = (2,2,3)
                                            Padding = "same"            Padding = "same"
                                            Activation = "relu"         Activation = "relu"
Conv2D (for 2D-CNN) or Conv3D (for 3D-CNN)  Filters = 256               Filters = 256
                                            Kernel_size = (1,1)         Kernel_size = (1,1,5)
                                            Padding = "valid"           Padding = "valid"
                                            Activation = "relu"         Activation = "relu"
Reshape                                     Target_shape = (5,5,256)    Target_shape = (5,5,256)
Flatten                                     NA                          NA
Dense                                       Units = 256                 Units = 256
                                            Activation = "relu"         Activation = "relu"
Dense                                       Units = 6,400               Units = 6,400
Reshape                                     Target_shape = (5,5,256)    Target_shape = (5,5,256)
Conv2D                                      Filters = 128               Filters = 128
                                            Kernel_size = (2,2)         Kernel_size = (2,2)
                                            Padding = "same"            Padding = "same"
                                            Activation = "relu"         Activation = "relu"
Conv2D                                      Filters = 64                Filters = 64
                                            Kernel_size = (3,3)         Kernel_size = (3,3)
                                            Padding = "same"            Padding = "same"
                                            Activation = "relu"         Activation = "relu"
Conv2D                                      Filters = 1                 Filters = 1
                                            Kernel_size = (1,1)         Kernel_size = (1,1)
                                            Padding = "same"            Padding = "same"
                                            Activation = "linear"       Activation = "linear"

Each Keras layer is a building block of the neural network: convolution layers (Conv2D and Conv3D), reshaping layers (Flatten and Reshape), and fully connected layers (Dense). Except for the last layer, every layer uses a rectified linear unit ("relu") activation function, which passes the input through unchanged if it is positive and outputs zero otherwise; the last layer uses a linear activation.
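The shapes in the table can be verified by hand. Below is a minimal shape trace of the 2D-CNN column in plain Python (a sketch, not the authors' code); it assumes stride 1 throughout, the Keras default, since the table lists no strides. With "same" padding a stride-1 convolution preserves the spatial size, while "valid" padding shrinks each dimension by kernel_size - 1. The trace also shows why the Dense layer before the Reshape to (5,5,256) must have 5 x 5 x 256 = 6,400 units.

```python
# Shape trace of the 2D-CNN layers listed in the table (stride 1 assumed).

def conv2d(shape, filters, kernel, padding):
    """Output shape of a stride-1 Conv2D for 'same' or 'valid' padding."""
    h, w, _ = shape
    kh, kw = kernel
    if padding == "same":
        return (h, w, filters)                    # spatial size preserved
    return (h - kh + 1, w - kw + 1, filters)      # 'valid': no padding

# Encoder
shape = (5, 5, 7)                                 # Input_shape
shape = conv2d(shape, 64, (3, 3), "same")         # -> (5, 5, 64)
shape = conv2d(shape, 128, (2, 2), "same")        # -> (5, 5, 128)
shape = conv2d(shape, 256, (1, 1), "valid")       # -> (5, 5, 256)
flat = shape[0] * shape[1] * shape[2]             # Flatten -> 6,400 units

# Decoder
shape = (5, 5, 256)                               # Dense(6,400) -> Reshape
shape = conv2d(shape, 128, (2, 2), "same")        # -> (5, 5, 128)
shape = conv2d(shape, 64, (3, 3), "same")         # -> (5, 5, 64)
shape = conv2d(shape, 1, (1, 1), "same")          # linear output layer

print(flat)   # 6400: Dense width required ahead of Reshape((5,5,256))
print(shape)  # (5, 5, 1)
```

For the 3D-CNN column the same reasoning applies, except that the third convolution's "valid" kernel of (1,1,5) collapses the depth-5 axis, after which the first Reshape recovers the (5,5,256) feature map.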