Core Concepts
The proposed IVLMap empowers robots with instance-level and attribute-level semantic mapping, enabling precise localization and zero-shot end-to-end navigation based on natural language commands.
Summary
The paper introduces the Instance-aware Visual Language Map (IVLMap), a novel approach to enhance robot navigation capabilities by constructing a semantic map that incorporates instance-level and attribute-level information.
Key highlights:
- IVLMap is built by fusing RGBD video data with a specially designed natural-language map indexing in the bird's-eye view, enabling instance-level and attribute-level semantic mapping (see the fusion sketch after this list).
- IVLMap demonstrates the ability to transform natural language into navigation targets with instance and attribute information, enabling precise localization.
- IVLMap can accomplish zero-shot end-to-end navigation tasks based on natural language commands, outperforming baseline methods.
- The authors developed an interactive data collection platform to efficiently capture RGBD data and camera poses, reducing the volume of data collected and improving map reconstruction.
- Extensive experiments in simulation and real-world environments validate the effectiveness of IVLMap in instance-level and attribute-level navigation tasks.
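The bullet points above compress the map-construction step. As a minimal sketch of one plausible reading, assuming per-pixel instance and attribute labels from an upstream segmenter, known camera intrinsics and poses from the data-collection platform, and the world x-y plane as the ground, the fusion could look like the following (all names are illustrative, none are from the paper):

```python
# Hedged sketch, not the authors' implementation: fuse labeled RGBD
# frames into a top-down grid that keeps per-cell instance IDs and
# attributes (e.g., color), enabling instance-level lookup later.
import numpy as np

GRID_SIZE = 200          # cells per side of the bird's-eye-view grid
CELL_METERS = 0.05       # metric size of one cell

def backproject(depth, intrinsics, pose):
    """Lift a depth image to world-frame 3D points, shape (H*W, 3)."""
    h, w = depth.shape
    fx, fy, cx, cy = intrinsics
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], -1).reshape(-1, 4)
    return (pose @ pts_cam.T).T[:, :3]   # pose: 4x4 camera-to-world matrix

def fuse_frame(grid, depth, instance_ids, attributes, intrinsics, pose):
    """Rasterize one labeled RGBD frame into the top-down instance grid.

    grid: dict mapping (row, col) -> set of (instance_id, attribute).
    instance_ids/attributes: per-pixel labels from an upstream segmenter.
    """
    pts = backproject(depth, intrinsics, pose)
    cells = np.floor(pts[:, :2] / CELL_METERS).astype(int) + GRID_SIZE // 2
    ok = (cells >= 0).all(axis=1) & (cells < GRID_SIZE).all(axis=1)
    for (r, c), inst, attr in zip(cells[ok],
                                  instance_ids.reshape(-1)[ok],
                                  np.asarray(attributes).reshape(-1)[ok]):
        if inst >= 0:                     # -1 marks unlabeled pixels
            grid.setdefault((int(r), int(c)), set()).add((int(inst), attr))
    return grid
```

Keeping a set of (instance_id, attribute) pairs per cell, rather than a single category label, is what would distinguish "the second black chair" from "a chair" at query time.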
Statistics
"Vision-and-Language Navigation (VLN) is a challenging task that requires a robot to navigate in photo-realistic environments with human natural language promptings."
"Recent studies aim to handle this task by constructing the semantic spatial map representation of the environment, and then leveraging the strong ability of reasoning in large language models for generalizing code for guiding the robot navigation."
"However, these methods face limitations in instance-level and attribute-level navigation tasks as they cannot distinguish different instances of the same object."
Quotes
"To address this challenge, we propose a new method, namely, Instance-aware Visual Language Map (IVLMap), to empower the robot with instance-level and attribute-level semantic mapping, where it is autonomously constructed by fusing the RGBD video data collected from the robot agent with special-designed natural language map indexing in the bird's-in-eye view."
"Such indexing is instance-level and attribute-level. In particular, when integrated with a large language model, IVLMap demonstrates the capability to i) transform natural language into navigation targets with instance and attribute information, enabling precise localization, and ii) accomplish zero-shot end-to-end navigation tasks based on natural language commands."