2020 Sep 4;20(18):5030. doi: 10.3390/s20185030

Table 3.

Performance comparison of the proposed model with compressed versions of competing models.

Inference times are reported in μs/sample, measured inside an NVIDIA Docker container on Xavier, TX2, and Nano.

| Algorithm | Pruning | FLOPs (M) | Memory (MB) | Accuracy (%) | # of Channels Pruned/Total | Xavier | TX2 | Nano |
|---|---|---|---|---|---|---|---|---|
| DeepConvGRU-Attention | 0% | 1.627 | 7.91 | 98.36 | – | ∼505 | ∼1175 | ∼2580 |
| | 8.50% | 1.482 | 7.48 | 98.04 | LSTM1 (07/128), LSTM2 (11/128) | ∼469 | ∼1040 | ∼2270 |
| | 19% | 1.314 | 6.64 | 96.89 | LSTM1 (20/128), LSTM2 (15/128) | ∼452 | ∼997 | ∼2160 |
| FCN-LSTM | 0% | 0.566 | 3.28 | 95.10 | – | ∼284 | ∼365 | ∼450 |
| | 5.30% | 0.535 | 3.14 | 94.32 | Conv1 (10/128), Conv2 (10/256) | ∼253 | ∼342 | ∼416 |
| | 12.20% | 0.496 | 3.07 | 93.92 | Conv1 (10/128), Conv2 (20/256), Conv3 (10/256) | ∼241 | ∼333 | ∼371 |
| Proposed DC-LSTM | 0% | 0.233 | 1.69 | 97.86 | – | ∼188 | ∼207 | ∼230 |
| Proposed DC-GRU | 0% | 0.232 | 1.69 | 98.72 | – | ∼182 | ∼205 | ∼227 |
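As a quick sanity check on the table (an illustrative calculation, not part of the paper), the reported pruning percentages closely track the relative FLOPs reduction from each model's unpruned baseline:

```python
# FLOPs (in millions) at each reported pruning level, taken from Table 3.
flops = {
    "DeepConvGRU-Attention": [(0.0, 1.627), (8.50, 1.482), (19.0, 1.314)],
    "FCN-LSTM": [(0.0, 0.566), (5.30, 0.535), (12.20, 0.496)],
}

for model, points in flops.items():
    _, base = points[0]  # unpruned baseline FLOPs
    for pct, f in points[1:]:
        reduction = 100 * (base - f) / base
        print(f"{model}: reported {pct:.2f}% pruning -> "
              f"{reduction:.1f}% FLOPs reduction")
```

For example, DeepConvGRU-Attention at 8.50% pruning shows a (1.627 − 1.482)/1.627 ≈ 8.9% FLOPs drop, consistent with the pruning ratio given.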