Sensors. 2023 Aug 25;23(17):7436. doi: 10.3390/s23177436

Table 6.

Comparison of inference times in different computing environments. All models were converted to ONNX format before inference. The environments are as follows: “@GPU” is an Nvidia Tesla T4, “@Nano” is an Nvidia Jetson Nano, and “@CPU” is an Intel Xeon processor.

                 Inference Time (ms)
Model      ONNX @GPU   ONNX @Nano   ONNX @CPU
YOLOv8n        12.04       133.30      353.13
YOLOv8s        12.16       217.20      891.93
YOLOv8m        20.20       471.59     1792.31
YOLOv8l        29.09       733.27     3340.26
YOLOv8x        41.87      1208.69     4222.87
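Latencies like those above are typically obtained by timing repeated forward passes and averaging. A minimal sketch of such a measurement harness is shown below; the `fake_forward` stub is hypothetical and stands in for a real ONNX Runtime call (e.g. `session.run`), which is assumed but not shown here.

```python
import time
from statistics import mean

def measure_inference_ms(run_once, warmup=3, iters=10):
    """Time a zero-argument inference callable; return mean latency in ms.

    Warm-up iterations are discarded so one-time initialization costs
    (lazy allocation, kernel compilation) do not skew the average.
    """
    for _ in range(warmup):
        run_once()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_once()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return mean(samples)

# Hypothetical stand-in for a model forward pass; with a real ONNX model
# this would be something like: lambda: session.run(None, {"images": x})
def fake_forward():
    time.sleep(0.002)  # ~2 ms of simulated work

latency_ms = measure_inference_ms(fake_forward)
print(f"{latency_ms:.2f} ms")
```

Averaging over several timed iterations, after warm-up, reduces the influence of transient system load on the reported figure.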