Core Concepts
Integrating local and global perception enhances nano-UAV navigation success.
Abstract
Autonomous nano-sized unmanned aerial vehicles (UAVs) face the challenge of navigating unknown environments efficiently under tight onboard-compute constraints. This study introduces an approach that combines a vision-based convolutional neural network for extracting semantic navigation cues with depth maps from a time-of-flight sensor for close-proximity maneuvers. By fusing global (vision-based) and local (depth-based) planning on lightweight computing hardware, the integrated pipeline captures the strengths of both sensory modalities and achieves a 100% success rate in a complex navigation scenario, highlighting the benefit of vision-depth fusion for nano-drone navigation.
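The fusion described above can be sketched as a simple arbitration rule: the vision CNN supplies a global steering command, and nearby ToF depth returns override it for close-proximity avoidance. This is an illustrative sketch only; the function and threshold names (`fuse_commands`, `stop_dist`, etc.) are assumptions, not the paper's actual implementation. Only the 0.2–4 m sensor range comes from the source.

```python
def fuse_commands(cnn_yaw_rate, depth_map, min_range=0.2, max_range=4.0,
                  stop_dist=0.5):
    """Blend global (vision) and local (depth) cues into one yaw command.

    cnn_yaw_rate: steering rate suggested by the vision CNN (+ = left).
    depth_map: list of (angle, distance) readings from the ToF sensor.
    All thresholds are hypothetical values for illustration.
    """
    # Keep only readings inside the sensor's valid 0.2-4 m range.
    valid = [(a, d) for a, d in depth_map if min_range <= d <= max_range]
    if not valid:
        return cnn_yaw_rate  # no local evidence: follow the global plan

    angle, dist = min(valid, key=lambda ad: ad[1])  # nearest obstacle
    if dist < stop_dist:
        # Close-proximity maneuver: steer away from the nearest return,
        # overriding the CNN's global suggestion.
        return 1.0 if angle <= 0 else -1.0
    return cnn_yaw_rate
```

The design choice here is that local depth information only takes over when an obstacle enters the close-proximity band; otherwise the semantic, vision-based global plan drives the vehicle.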
Stats
We achieve a 100% success rate over 15 flights in a complex navigation scenario.
The ToF sensor has a range of 0.2–4 m.
The CNN runs on the 8-core cluster at 19 FPS.
The system achieved a 100% success rate across both straight pathways and turn segments.
Quotes
"Our fused perception pipeline achieved a 100% success rate on a set of fifteen flights."
"Our results highlight the benefit of combining depth and vision sensory inputs to enhance nano-UAV navigation."
"Our fused global + local perception pipeline captures the benefits of both depth-based and vision-based sensory inputs."