Author manuscript; available in PMC: 2023 Feb 1.
Published in final edited form as: IEEE Trans Knowl Data Eng. 2020 Apr 22;34(2):531–543. doi: 10.1109/tkde.2020.2989405

TABLE 6:

The architecture of the infused poor model used by CHEER, Direct, KD, and AT for knowledge infusion on PTBDB; the model has a total of 45.0k parameters.

| Layer | Type              | Hyper-parameters                      | Activation |
|-------|-------------------|---------------------------------------|------------|
| 1     | Split             | n_seg=10                              |            |
| 2     | Convolution1D     | n_filter=32, kernel_size=16, stride=2 | ReLU       |
| 3     | Convolution1D     | n_filter=32, kernel_size=16, stride=2 | ReLU       |
| 4     | Convolution1D     | n_filter=32, kernel_size=16, stride=2 | ReLU       |
| 5     | AveragePooling1D  |                                       |            |
| 6     | LSTM              | hidden_units=32                       | ReLU       |
| 7     | PositionAttention |                                       |            |
| 8     | Dense             | hidden_units=n_classes                | Linear     |
| 9     | Softmax           |                                       |            |
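The layer stack in Table 6 can be sketched as a PyTorch module. This is a minimal reconstruction under stated assumptions, not the authors' code: the Split layer is assumed to cut the ECG signal into n_seg=10 equal segments that share the convolutional stack, AveragePooling1D is assumed to be global pooling over time, and PositionAttention is implemented here as a simple learned attention over LSTM time steps (the paper's exact formulation may differ). The input length (2000 samples) and class count are illustrative, not taken from the table.

```python
import torch
import torch.nn as nn


class PositionAttention(nn.Module):
    """Hypothetical position attention: scores each time step with a
    linear layer, softmax-normalizes the scores, and returns the
    attention-weighted sum of the hidden states."""

    def __init__(self, hidden_units):
        super().__init__()
        self.score = nn.Linear(hidden_units, 1)

    def forward(self, x):                        # x: (batch, seq, hidden)
        w = torch.softmax(self.score(x), dim=1)  # (batch, seq, 1)
        return (w * x).sum(dim=1)                # (batch, hidden)


class PoorModel(nn.Module):
    """Sketch of the Table 6 architecture: Split -> 3x Conv1D -> global
    average pool -> LSTM -> PositionAttention -> Dense -> Softmax."""

    def __init__(self, n_classes=2, n_seg=10):
        super().__init__()
        self.n_seg = n_seg
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # assumed global AveragePooling1D
        )
        self.lstm = nn.LSTM(32, 32, batch_first=True)
        self.attn = PositionAttention(32)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                        # x: (batch, signal_len)
        b = x.size(0)
        # Split: reshape into n_seg equal segments, one conv pass each.
        segs = x.view(b * self.n_seg, 1, -1)
        feats = self.conv(segs).view(b, self.n_seg, 32)
        out, _ = self.lstm(feats)                # (batch, n_seg, 32)
        out = torch.relu(out)                    # ReLU on LSTM outputs
        return torch.softmax(self.fc(self.attn(out)), dim=-1)
```

With these assumptions the parameter count lands in the same ballpark as the reported 45.0k (two of the three conv layers dominate at ~16k weights each, the LSTM adds ~8k); the exact figure depends on details the table does not fix, such as n_classes.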