Core Concepts
This study comprehensively evaluates the performance of popular deep learning object detection models, including YOLOv8, EfficientDet Lite, and SSD, across a range of edge devices: the Raspberry Pi 3, 4, and 5 (with and without TPU accelerators) and the NVIDIA Jetson Orin Nano. The evaluation focuses on key metrics: inference time, energy consumption, and accuracy (mean Average Precision).
Summary
The researchers built Flask-based object detection APIs and deployed the models on the edge devices using different frameworks (PyTorch, TensorFlow Lite, and TensorRT). They used the FiftyOne tool to evaluate model accuracy on the COCO dataset and the Locust load testing tool for automated performance measurement, including energy consumption and inference time.
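The paper does not reproduce the serving code, but a Flask inference API of the kind described typically exposes one endpoint that accepts an image, runs the detector, and returns detections plus timing. A minimal sketch is below; `run_model` is a placeholder standing in for a real PyTorch, TFLite, or TensorRT detector, and the route name and response fields are assumptions, not the authors' actual API.

```python
import time

from flask import Flask, jsonify, request

app = Flask(__name__)


def run_model(image_bytes):
    # Placeholder for a real detector (e.g. a TFLite interpreter or a
    # TensorRT engine). Returns (label, confidence, [x1, y1, x2, y2]) tuples.
    return [("person", 0.91, [10, 20, 110, 220])]


@app.route("/detect", methods=["POST"])
def detect():
    # Time only the model call, so the measurement matches "inference time"
    # rather than total request latency.
    start = time.perf_counter()
    detections = run_model(request.get_data())
    elapsed_ms = (time.perf_counter() - start) * 1000
    return jsonify({
        "detections": [
            {"label": label, "score": score, "box": box}
            for label, score, box in detections
        ],
        "inference_ms": elapsed_ms,
    })
```

An endpoint shaped like this is convenient for the Locust workflow the authors mention, since a load test can simply POST image bytes to `/detect` in a loop and record the reported latencies.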
The key findings are:
- SSD_v1 exhibits the fastest inference times, while YOLO8_m has the highest accuracy but also the highest energy consumption.
- The addition of TPU accelerators to the Raspberry Pi devices significantly improves the performance of the SSD and YOLO8 models in terms of inference time and energy efficiency.
- The Jetson Orin Nano emerges as the most energy-efficient and fastest device overall, particularly for the YOLO8 models, despite having the highest idle energy consumption.
- The results highlight the need to balance accuracy, speed, and energy efficiency when deploying deep learning models on edge devices, providing valuable guidance for practitioners and researchers.
Statistics
The Raspberry Pi 3 has a base (idle) energy consumption of 270 mWh, while the Raspberry Pi 4 and 5 consume 199 mWh and 217 mWh, respectively.
The energy consumption per request (excluding base energy) for the SSD_v1 model ranges from 0.01 mWh on Jetson Orin Nano to 0.31 mWh on Raspberry Pi 3.
The inference time for the SSD_v1 model ranges from 12 ms on Pi 4 with TPU to 427 ms on Raspberry Pi 3.
The mean Average Precision (mAP) for the YOLO8_m model ranges from 32 on Pi 5 with TPU to 44 on Raspberry Pi 4 and Jetson Orin Nano.
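The "energy per request (excluding base energy)" figure is presumably obtained by subtracting the device's idle draw over the test window from the total measured energy and dividing by the request count. A small sketch of that bookkeeping, using hypothetical totals chosen only to illustrate the Pi 3 case (the 580 mWh total and 1000 requests are invented, not from the paper):

```python
def energy_per_request(total_mwh, idle_mwh_per_hour, duration_h, n_requests):
    """Energy attributable to request handling, excluding base energy.

    Subtracts the idle (base) energy accumulated over the test window
    from the total measured energy, then divides by the request count.
    """
    dynamic_mwh = total_mwh - idle_mwh_per_hour * duration_h
    return dynamic_mwh / n_requests


# Hypothetical one-hour run on a Pi 3 idling at 270 mWh per hour:
print(energy_per_request(580.0, 270.0, 1.0, 1000))  # 0.31
```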
Quotes
"SSD_v1 exhibits the lowest inference time among all evaluated models."
"Jetson Orin Nano stands out as the fastest and most energy-efficient option for request handling, despite having the highest idle energy consumption."
"The results highlight the need to balance accuracy, speed, and energy efficiency when deploying deep learning models on edge devices."