Author manuscript; available in PMC: 2021 Dec 5.
Published in final edited form as: Neurocomputing (Amst). 2020 Jul 26;417:302–321. doi: 10.1016/j.neucom.2020.07.053

Table VI.

Compressed architecture energy and power running AlexNet on a Tegra GPU

| Compression technique | Execution time | Energy consumption | Implied power consumption |
| --- | --- | --- | --- |
| Benchmark study, 2015 [185] | 49.1 ms | 232.2 mJ | 4.7 W (all layers) |
| DeepX software accelerator, 2016 [186] | 866.7 ms (average of 3 trials) | 234.1 mJ | 2.7 W (all layers) |
| DNN, various techniques, 2016 [181] | 4003.8 ms | 5.0 mJ | 0.0012 W (one layer) |
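The "implied power" column follows from the two measured columns: average power is energy divided by execution time, and since mJ/ms = W, no unit conversion is needed. A minimal sketch (the helper name is ours, not from the paper):

```python
def implied_power_w(energy_mj: float, time_ms: float) -> float:
    """Average power in watts, computed as energy (mJ) / time (ms).

    mJ / ms = W, so the ratio is already in watts.
    """
    return energy_mj / time_ms

# Benchmark study [185]: 232.2 mJ over 49.1 ms
print(round(implied_power_w(232.2, 49.1), 1))      # 4.7 (W)

# DNN various techniques [181]: 5.0 mJ over 4003.8 ms, one layer
print(round(implied_power_w(5.0, 4003.8), 4))      # 0.0012 (W)
```

This reproduces the first and third rows of the table to the precision shown.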