Bhattacharya, A., Cannici, M., Rao, N., Tao, Y., Kumar, V., Matni, N., & Scaramuzza, D. (2024). Monocular Event-Based Vision for Obstacle Avoidance with a Quadrotor. In 8th Conference on Robot Learning (CoRL 2024). Munich, Germany.
This research paper presents the first event-driven method for static obstacle avoidance on a quadrotor, aiming to overcome the limitations of traditional cameras in high-speed, cluttered, and low-light environments.
The researchers developed a learning-based approach that uses depth prediction as a pretext task to train a reactive obstacle avoidance policy. They pre-trained the policy in a simulation environment with approximated event data and then fine-tuned the perception component on a limited amount of real-world event-and-depth data. The system was deployed on two quadrotor platforms equipped with different event cameras.
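The two-stage training idea can be illustrated with a short sketch. This is a minimal PyTorch illustration, not the authors' implementation: the network sizes, the 2-channel event-frame representation, the expert steering label, and all tensor data are hypothetical stand-ins for the simulator and real-world datasets described in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: batches of 2-channel event frames (positive/negative
# polarity counts) and per-pixel depth targets at 64x64 resolution.
B, H, W = 8, 64, 64

class PerceptionNet(nn.Module):
    """Small encoder-decoder that predicts depth from an event frame."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, events):
        feat = self.encoder(events)
        return self.decoder(feat), feat  # depth map + features for the policy

class PolicyHead(nn.Module):
    """Maps perception features to a steering command (e.g. lateral velocity)."""
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feat):
        return self.head(feat)

perception, policy = PerceptionNet(), PolicyHead()
depth_loss = nn.L1Loss()

# Stage 1: pre-train in simulation on approximated events with ground-truth
# depth and a privileged/expert steering label (random tensors stand in here).
opt = torch.optim.Adam(list(perception.parameters()) + list(policy.parameters()), lr=1e-3)
sim_events, sim_depth = torch.rand(B, 2, H, W), torch.rand(B, 1, H, W)
expert_cmd = torch.rand(B, 1)
for _ in range(5):
    pred_depth, feat = perception(sim_events)
    cmd = policy(feat)
    loss = depth_loss(pred_depth, sim_depth) + nn.functional.mse_loss(cmd, expert_cmd)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune only the perception module on a small set of real event
# frames paired with depth, keeping the policy head frozen.
opt_ft = torch.optim.Adam(perception.parameters(), lr=1e-4)
real_events, real_depth = torch.rand(B, 2, H, W), torch.rand(B, 1, H, W)
for _ in range(5):
    pred_depth, _ = perception(real_events)
    loss = depth_loss(pred_depth, real_depth)
    opt_ft.zero_grad(); loss.backward(); opt_ft.step()
```

The depth pretext task lets the perception module be corrected with a small amount of real event-and-depth data, while the policy trained in simulation is reused unchanged, which is what makes the sim-to-real transfer practical.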
This research highlights the potential of event cameras for enabling robust and high-speed obstacle avoidance in quadrotors. The simulation pre-training and real-world fine-tuning approach allows for effective sim-to-real transfer and adaptation to different environments and event camera platforms.
This work contributes to event-based vision and robotics by demonstrating the feasibility and advantages of event cameras for challenging perception and navigation tasks in real-world scenarios.
The study acknowledges limitations related to the lack of a continuous-time event camera simulator and the need for real-world data fine-tuning. Future research directions include exploring event-based methods for dynamic obstacle avoidance, optimizing computational efficiency, and investigating the impact of event camera bias tuning on performance.