The paper introduces a "Roadmap with Gaps" data structure that captures the approximate reachability of local regions of a given environment under a learned controller. The roadmap is constructed offline and provides high-level guidance on how the robot can navigate the target environment.
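As a rough illustration, the roadmap can be thought of as a directed graph whose edges record both the cost of a controller rollout and the residual "gap" left when the rollout does not land exactly on the target node. The Python sketch below is illustrative only; the names (RoadmapNode, RoadmapEdge, gap) are assumptions for exposition, not the paper's actual data structures.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RoadmapEdge:
    """Directed edge: the learned controller steered (approximately) from the
    source region toward the target region during offline construction."""
    target: "RoadmapNode"
    cost: float   # e.g., duration or length of the controller rollout
    gap: float    # residual distance between the rollout's end state and the
                  # target node's nominal state (the "gap" in the roadmap)

@dataclass
class RoadmapNode:
    """A local region of the state space, represented by a nominal state."""
    state: Tuple[float, ...]                        # nominal robot state
    edges: List[RoadmapEdge] = field(default_factory=list)

def add_edge(src: RoadmapNode, dst: RoadmapNode, cost: float, gap: float) -> None:
    """Record that the controller can approximately reach dst from src."""
    src.edges.append(RoadmapEdge(target=dst, cost=cost, gap=gap))
```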
During online planning, a wavefront is computed over the roadmap to assign a cost-to-goal to each node. The proposed Roadmap-Guided Expansion (RoGuE) method integrates this roadmap guidance with an asymptotically optimal sampling-based tree planner. At each iteration, RoGuE selects an informed local goal from the roadmap wavefront and propagates the robot's state towards it using the learned controller. If the controller cannot reach the local goal, the planner falls back to random exploration to maintain probabilistic completeness and asymptotic optimality.
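A minimal sketch of this online phase, under assumed interfaces (tree.select_node, roadmap.nearest_node, and the controller_rollout and random_propagate callables are hypothetical placeholders): the wavefront is a backward Dijkstra pass assigning a cost-to-goal to every roadmap node, and each RoGuE iteration steers toward the neighboring roadmap node with the lowest cost-to-goal, falling back to random propagation when no guided expansion is available.

```python
import heapq

def compute_wavefront(goal_node, predecessors):
    """Backward Dijkstra over the roadmap: cost-to-goal for every node that can
    reach the goal. `predecessors(node)` yields (source_node, edge_cost) pairs
    for roadmap edges pointing into `node`."""
    cost_to_goal = {goal_node: 0.0}
    frontier = [(0.0, id(goal_node), goal_node)]      # id() breaks ties in the heap
    while frontier:
        cost, _, node = heapq.heappop(frontier)
        if cost > cost_to_goal.get(node, float("inf")):
            continue                                  # stale queue entry
        for src, edge_cost in predecessors(node):
            new_cost = cost + edge_cost
            if new_cost < cost_to_goal.get(src, float("inf")):
                cost_to_goal[src] = new_cost
                heapq.heappush(frontier, (new_cost, id(src), src))
    return cost_to_goal

def rogue_iteration(tree, roadmap, cost_to_goal, controller_rollout, random_propagate):
    """One expansion step: steer toward the most promising roadmap node with the
    learned controller; otherwise fall back to random propagation, which keeps
    the underlying tree planner probabilistically complete."""
    x = tree.select_node()                            # tree node chosen for expansion
    region = roadmap.nearest_node(x.state)            # roadmap node closest to x
    # Informed local goal: the neighbor with the lowest cost-to-goal on the wavefront.
    candidates = [e.target for e in region.edges
                  if cost_to_goal.get(e.target, float("inf")) < float("inf")]
    if candidates:
        local_goal = min(candidates, key=lambda n: cost_to_goal[n])
        new_state = controller_rollout(x.state, local_goal.state)
    else:
        new_state = random_propagate(x.state)         # random exploration fallback
    tree.add_node(new_state, parent=x)
```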
The experimental evaluation demonstrates the effectiveness of the proposed approach on various benchmarks, including physics-based vehicular models on uneven and varying-friction terrains, as well as a quadrotor under air-pressure effects. The RoGuE-based planners significantly outperform alternatives that do not leverage the roadmap guidance.
Source: Aravind Siva..., arxiv.org, 04-01-2024, https://arxiv.org/pdf/2310.03239.pdf