Core Concepts
Event cameras produce asynchronous event streams that demand efficient processing algorithms; Ev-Edge is a framework that optimizes their execution on commodity edge platforms.
Summary
Abstract:
- Event cameras offer high temporal resolution and dynamic range.
- Event streams are processed with a mix of ANNs, SNNs, and hybrid ANN-SNN algorithms.
- Current edge platforms struggle with event-based vision systems.
- Ev-Edge proposes optimizations to boost performance.
Introduction:
- Event cameras are crucial for robotics and autonomous systems.
- Commodity edge platforms face challenges in processing event data efficiently.
Related Work:
- Prior research focuses on algorithmic techniques and hardware accelerators.
- Mapping frameworks optimize multiple ANNs on heterogeneous platforms.
Ev-Edge Framework:
Event2Sparse Frame Converter (E2SF):
- Converts raw event streams to sparse frames directly.
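The conversion step can be sketched as accumulating each event's polarity at its pixel location, keeping only active pixels. This is a minimal illustration of the idea, not the paper's exact E2SF design; the function name and the signed-count encoding are assumptions.

```python
def events_to_sparse_frame(events):
    """Accumulate events (x, y, t, polarity) into a sparse frame.

    Sketch of an Event2Sparse-style converter: each active pixel stores
    the signed sum of its event polarities, and untouched pixels are
    simply absent, so no dense frame is ever materialized.
    """
    frame = {}
    for x, y, _t, p in events:
        frame[(y, x)] = frame.get((y, x), 0) + (1 if p > 0 else -1)
    # Drop pixels whose positive and negative events cancelled out.
    return {px: v for px, v in frame.items() if v != 0}
```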
Dynamic Sparse Frame Aggregator (DSFA):
- Merges sparse frames dynamically based on input dynamics and hardware capabilities.
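One way to picture dynamic aggregation is a threshold policy: merge consecutive sparse frames until a group carries enough events to keep the hardware busy. The event-count threshold here is an illustrative assumption; the actual DSFA also factors in measured hardware capabilities.

```python
def aggregate_frames(sparse_frames, target_events):
    """Greedily merge consecutive sparse frames until each group holds
    at least `target_events` events.

    Busy scenes close groups quickly (low per-frame latency); quiet
    scenes merge many frames (better hardware utilization).
    """
    groups, current, count = [], [], 0
    for frame in sparse_frames:
        current.append(frame)
        count += len(frame)  # each entry is one active pixel
        if count >= target_events:
            groups.append(current)
            current, count = [], 0
    if current:  # flush the partial final group
        groups.append(current)
    return groups
```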
Network Mapper (NMP):
- Maps layers of networks to different processing elements while optimizing layer precision.
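The mapping idea can be sketched as latency-aware assignment of layers across processing elements (PEs). The greedy earliest-finish policy below is an illustrative stand-in for the NMP's actual search, and it omits the paper's precision selection and inter-PE transfer costs; the cost table and PE names are assumptions.

```python
def map_layers(layer_costs, pe_names):
    """Assign each layer to the PE with the smallest projected finish time.

    `layer_costs[i][pe]` is the estimated runtime of layer i on that PE.
    """
    finish = {pe: 0.0 for pe in pe_names}
    assignment = []
    for costs in layer_costs:
        # Pick the PE that would finish this layer earliest.
        best = min(pe_names, key=lambda pe: finish[pe] + costs[pe])
        finish[best] += costs[best]
        assignment.append(best)
    return assignment
```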
Experimental Methodology:
- Evaluation across various tasks like optical flow, semantic segmentation, etc.
- Developed using PyTorch and evaluated on an NVIDIA Jetson AGX Xavier board.
Results:
Single-task execution performance:
- Ev-Edge outperforms a standalone GPU implementation in latency by 1.28x - 2.05x.
Multi-task execution performance:
- NMP provides 1.43x - 1.81x latency improvements over round-robin methods.
Conclusion:
Ev-Edge enhances the efficiency of event-based algorithms on edge platforms through three key optimizations.
Statistics
Ev-Edge achieves latency improvements of 1.28x to 2.05x in single-task execution scenarios on the NVIDIA Jetson AGX Xavier.
Ev-Edge achieves latency improvements of 1.43x to 1.81x over round-robin scheduling in multi-task execution scenarios.