The paper proposes a novel deep learning-based motion planning framework called the Transformer-Enhanced Motion Planner (TEMP). TEMP consists of two key modules:
Environmental Information Semantic Encoder (EISE): EISE encodes the environmental information into a compressed semantic representation, which is then used by the downstream planning network.
Motion Planning Transformer (MPT): MPT leverages an attention mechanism to dynamically focus on the semantic environmental information, task objectives, and historical planning data during the sampling stage. This attention-guided sampling helps TEMP generate sampling nodes more efficiently and effectively than traditional sampling-based motion planning (SBMP) algorithms (a minimal sketch of this two-module structure follows below).
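The paper's exact architectures are not reproduced here, but the following minimal PyTorch sketch illustrates the two-module structure: a convolutional EISE that compresses an occupancy map into semantic tokens, and an MPT that attends jointly over those tokens, a start-goal objective, and the planning history to propose the next sampling node. All layer sizes, the token layout, and the regression head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EISE(nn.Module):
    """Hypothetical semantic encoder: compresses an occupancy grid
    into a sequence of environment tokens (shapes are assumptions)."""
    def __init__(self, d_model=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, occupancy):                 # (B, 1, H, W)
        feats = self.conv(occupancy)              # (B, d_model, H/4, W/4)
        return feats.flatten(2).transpose(1, 2)   # (B, N_env, d_model) env tokens

class MPT(nn.Module):
    """Hypothetical planning transformer: attends over environment tokens,
    the start/goal objective, and previously sampled nodes to propose the
    next sampling node."""
    def __init__(self, d_model=128, state_dim=2, nhead=4, nlayers=3):
        super().__init__()
        self.task_proj = nn.Linear(2 * state_dim, d_model)    # start + goal
        self.node_proj = nn.Linear(state_dim, d_model)         # history nodes
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, nlayers)
        self.head = nn.Linear(d_model, state_dim)               # next-sample regression

    def forward(self, env_tokens, start_goal, history):
        task_tok = self.task_proj(start_goal).unsqueeze(1)      # (B, 1, d_model)
        hist_tok = self.node_proj(history)                      # (B, T, d_model)
        tokens = torch.cat([env_tokens, task_tok, hist_tok], dim=1)
        fused = self.encoder(tokens)                            # attention over all sources
        return self.head(fused[:, env_tokens.size(1)])          # read out at the task token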
The collaborative training of EISE and MPT allows EISE to autonomously learn and extract patterns from environmental data, forming semantic representations that MPT can interpret and exploit more effectively than hand-crafted environment encodings would allow.
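Continuing the sketch above, a hypothetical joint training step shows what collaborative training can look like in practice: the loss on MPT's proposed node backpropagates through EISE, so the encoder is shaped by what the planner actually needs. The imitation-style loss against an expert next node is an assumption chosen for illustration, not the paper's stated objective.

```python
import torch

# Joint optimization over both modules (reuses EISE/MPT from the sketch above).
eise, mpt = EISE(), MPT()
optimizer = torch.optim.Adam(list(eise.parameters()) + list(mpt.parameters()), lr=1e-4)

def train_step(occupancy, start_goal, history, expert_next_node):
    env_tokens = eise(occupancy)                       # semantic environment tokens
    pred_node = mpt(env_tokens, start_goal, history)   # proposed next sampling node
    loss = torch.nn.functional.mse_loss(pred_node, expert_next_node)
    optimizer.zero_grad()
    loss.backward()                                    # gradients reach both modules
    optimizer.step()
    return loss.item()
```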
Extensive simulations on 2D, 3D, and 7D planning tasks demonstrate that TEMP significantly outperforms advanced SBMP techniques like RRT* and IRRT* in terms of planning time, number of nodes, success rate, and path quality. TEMP achieves roughly a 10x speedup over IRRT* in both 3D and 7D tasks, and is about 24x faster than RRT* for planning in 7D. Moreover, TEMP exhibits greater robustness and adaptability, particularly in challenging, high-dimensional scenarios.
The attention mechanism in TEMP dynamically adjusts its focus across the different information sources (semantic environmental information, task objectives, historical planning data) to guide the sampling process more effectively. This attention-guided sampling leads to a notable reduction in the number of nodes required to find high-quality paths, as well as faster convergence of the path cost compared to traditional SBMP algorithms.
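As an illustration of how attention-guided sampling can plug into an RRT*-style loop, the hypothetical routine below replaces uniform sampling with the learned proposal most of the time while keeping a uniform fallback. The mixing probability, the normalized-workspace fallback, and the function itself are assumptions; `eise` and `mpt` refer to the instances from the earlier sketch.

```python
import random
import torch

def guided_sample(occupancy, start_goal, history, mix=0.9):
    """Hypothetical sampling routine for an RRT*-style planner: with
    probability `mix`, draw the next node from the learned model instead
    of uniformly at random (the mixing scheme is an illustrative choice)."""
    if random.random() < mix:
        with torch.no_grad():
            env_tokens = eise(occupancy)                    # (1, N_env, d_model)
            node = mpt(env_tokens, start_goal, history)     # (1, state_dim)
        return node.squeeze(0)                              # learned, attention-guided sample
    # Uniform fallback over a normalized workspace keeps the planner's
    # exploration behavior when the model's proposal is skipped.
    return torch.rand(start_goal.shape[-1] // 2)
```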
Source: Lei Zhuang et al., arXiv, 2024-05-01, https://arxiv.org/pdf/2404.19403.pdf