Table 6.
Notations of Algorithm 1 and their descriptions.
| Notation | Description |
|---|---|
| X1, X2, X3, …, X17 | The seventeen input features |
| Nh | The number of neurons in the hidden layers |
| m | The total number of samples in the MUCHD dataset |
| α | An arbitrary scaling factor |
| Ni | The number of neurons in the input layer |
| No | The number of neurons in the output layer |
| W | Weights |
| W0, W1, …, W17 | The eighteen input weights |
| Z | A linear transfer function |
| b | The bias node |
| T | Transpose |
| i | A counter |
| n | The total number of input features |
| g | The activation function |
| Ŷ | The predicted output |
| ReLU | Rectified Linear Unit |
| σ | The Sigmoid function |
|  | The loss function |
| Y | The actual output |
|  | The cost function |
|  | The accumulative weight |
|  | The accumulative bias |
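
To make the notation concrete, the sketch below wires the symbols of Table 6 together for a single sample in Python with NumPy: the hidden-layer size Nh (computed with the common heuristic Nh = m / (α · (Ni + No)), an assumption suggested by the presence of m, α, Ni, and No rather than a formula quoted from Algorithm 1), the linear transfer Z = WᵀX + b, the activation function g (ReLU for the hidden layer, Sigmoid for the output), the predicted output Ŷ, and a per-sample loss whose average over the m samples gives the cost. All numeric values, the network shape, and the binary cross-entropy loss are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values: m is the MUCHD sample count, n the seventeen input features.
m, n = 1000, 17
Ni, No = n, 1          # neurons in the input and output layers
alpha = 2              # arbitrary scaling factor

# Assumed hidden-layer sizing heuristic Nh = m / (alpha * (Ni + No));
# the paper's exact rule may differ.
Nh = int(m / (alpha * (Ni + No)))

def relu(z):
    """g = ReLU (Rectified Linear Unit), used here for the hidden layer."""
    return np.maximum(0.0, z)

def sigmoid(z):
    """Sigmoid activation producing the predicted output."""
    return 1.0 / (1.0 + np.exp(-z))

# One sample X with seventeen features X1..X17 (column vector).
X = rng.random((Ni, 1))

# W, b: weights and biases; T denotes the transpose in Z = W^T X + b.
W1, b1 = rng.standard_normal((Ni, Nh)) * 0.01, np.zeros((Nh, 1))
W2, b2 = rng.standard_normal((Nh, No)) * 0.01, np.zeros((No, 1))

Z1 = W1.T @ X + b1          # Z: linear transfer into the hidden layer
A1 = relu(Z1)               # hidden-layer activation
Z2 = W2.T @ A1 + b2
Y_hat = sigmoid(Z2)         # predicted output

# Binary cross-entropy loss for one sample (an assumed choice of loss);
# averaging it over all m samples gives the cost function.
Y = np.array([[1.0]])       # actual output (illustrative label)
loss = -(Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat))

print(f"Nh={Nh}, Y_hat={Y_hat.item():.4f}, loss={loss.item():.4f}")
```
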