TABLE 6:
The architecture of the infused "poor" model used by CHEER, Direct, KD, and AT for knowledge infusion on PTBDB, with a total of 45.0k parameters.
| Layer | Type | Hyper-parameters | Activation |
|---|---|---|---|
| 1 | Split | n_seg=10 | |
| 2 | Convolution1D | n_filter=32, kernel_size=16, stride=2 | ReLU |
| 3 | Convolution1D | n_filter=32, kernel_size=16, stride=2 | ReLU |
| 4 | Convolution1D | n_filter=32, kernel_size=16, stride=2 | ReLU |
| 5 | AveragePooling1D | | |
| 6 | LSTM | hidden_units=32 | ReLU |
| 7 | PositionAttention | | |
| 8 | Dense | hidden_units=n_classes | Linear |
| 9 | Softmax | | |
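The table above can be sketched as a concrete model. The following is a minimal, hedged PyTorch reconstruction: the `PositionAttention` layer and the exact split/pooling semantics are assumptions (the table gives only layer names and hyper-parameters), and PyTorch's `nn.LSTM` uses tanh internally rather than the ReLU listed in row 6, so treat this as an illustrative sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionAttention(nn.Module):
    """Hypothetical attention pooling over segment positions.

    The paper's exact formulation is not specified in the table; this
    assumes a learned scalar score per position, softmax-normalized.
    """

    def __init__(self, hidden):
        super().__init__()
        self.score = nn.Linear(hidden, 1)

    def forward(self, h):                      # h: (batch, n_seg, hidden)
        w = torch.softmax(self.score(h), dim=1)  # weights over positions
        return (w * h).sum(dim=1)              # (batch, hidden)


class InfusedModel(nn.Module):
    """Sketch of the 45k-parameter student model from Table 6."""

    def __init__(self, n_seg=10, n_classes=2, in_ch=1):
        super().__init__()
        self.n_seg = n_seg
        # Rows 2-4: three Conv1D blocks, 32 filters, kernel 16, stride 2, ReLU
        self.convs = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=16, stride=2), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)    # Row 5: average pooling over time
        self.lstm = nn.LSTM(32, 32, batch_first=True)  # Row 6 (tanh, not ReLU)
        self.attn = PositionAttention(32)      # Row 7
        self.fc = nn.Linear(32, n_classes)     # Row 8

    def forward(self, x):                      # x: (batch, channels, length)
        b, c, length = x.shape
        # Row 1: split the signal into n_seg equal-length segments
        segs = (x.view(b, c, self.n_seg, -1)
                 .permute(0, 2, 1, 3)
                 .reshape(b * self.n_seg, c, length // self.n_seg))
        z = self.pool(self.convs(segs)).squeeze(-1)   # (b*n_seg, 32)
        z = z.view(b, self.n_seg, 32)                 # segment embeddings
        h, _ = self.lstm(z)                           # (b, n_seg, 32)
        a = self.attn(h)                              # (b, 32)
        return F.softmax(self.fc(a), dim=-1)          # Row 9


model = InfusedModel(n_seg=10, n_classes=2)
out = model(torch.randn(4, 1, 1280))  # e.g. 10 segments of 128 samples each
```

With `n_classes=2` this configuration comes to roughly 42k parameters, in the same ballpark as the 45.0k reported; the small gap likely reflects details (e.g. the attention layer or input channels) the table does not specify.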