Table 2.
Model configurations for MLP and CNN
| | Synthetic | CBH | CSS | HMP | CS | FS | FSH | IBD | PDX |
|---|---|---|---|---|---|---|---|---|---|
| MLP | (256, 256) | (1024, 512) | (512, 256) | (512, 256) | (512, 512) | (512, 512) | (512, 256) | (512, 256, 128) | (512, 256, 128) |
| CNN | Conv1D(8, 3) → Dropout → ReLU → MaxPool1D(2) → Conv1D(8, 3) → ReLU → MaxPool1D(2) → FC | ||||||||
Numbers in round brackets represent the numbers of hidden units. Conv1D is a one-dimensional convolution layer. ReLU is the non-linear rectifier layer. MaxPool1D represents a one-dimensional max pooling layer. Dropout and FC represent dropout and fully connected layers, respectively. Details of each dataset are described in Table 1.
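The configurations above can be sketched in code. The following is a minimal PyTorch illustration, not the authors' implementation: the input channel count, sequence length, class count, and dropout rate are placeholder assumptions, since Table 2 specifies only the layer sequence and hidden-unit sizes.

```python
import torch
import torch.nn as nn


def make_mlp(in_dim, hidden, n_classes=2):
    """Build an MLP from a hidden-unit tuple such as (512, 256) in Table 2.
    in_dim and n_classes are dataset-dependent placeholders."""
    layers, d = [], in_dim
    for h in hidden:
        layers += [nn.Linear(d, h), nn.ReLU()]
        d = h
    layers.append(nn.Linear(d, n_classes))
    return nn.Sequential(*layers)


class CNN(nn.Module):
    """Sketch of the Table 2 CNN:
    Conv1D(8, 3) -> Dropout -> ReLU -> MaxPool1D(2)
    -> Conv1D(8, 3) -> ReLU -> MaxPool1D(2) -> FC."""

    def __init__(self, in_channels=1, seq_len=128, n_classes=2, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 8, kernel_size=3),  # Conv1D(8, 3)
            nn.Dropout(dropout),                       # Dropout
            nn.ReLU(),                                 # ReLU
            nn.MaxPool1d(2),                           # MaxPool1D(2)
            nn.Conv1d(8, 8, kernel_size=3),            # Conv1D(8, 3)
            nn.ReLU(),                                 # ReLU
            nn.MaxPool1d(2),                           # MaxPool1D(2)
        )
        # Infer the flattened feature size for the final FC layer.
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, in_channels, seq_len)).numel()
        self.fc = nn.Linear(n_feat, n_classes)  # FC

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


mlp = make_mlp(in_dim=100, hidden=(512, 256))
cnn = CNN()
logits = cnn(torch.randn(4, 1, 128))
print(logits.shape)  # torch.Size([4, 2])
```

The flattened feature size is computed from a dummy forward pass so the FC layer adapts to whatever sequence length a dataset provides.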