Core Concept
A novel tele-immersive framework that promotes cognitive and physical collaboration between humans and drones through Mixed Reality, incorporating bi-directional spatial awareness and multi-modal virtual-physical interaction approaches.
Summary
The paper presents a novel tele-immersive framework for human-drone collaboration through Mixed Reality (MR). The key features include:
- Bi-directional Spatial Awareness:
  - The Spatial Awareness Module (SAM) seamlessly integrates the physical and virtual worlds, offering egocentric and exocentric environmental representations to both the human and the drone (a sketch of the underlying frame bookkeeping follows this list).
  - This gives the human and the drone a shared understanding of the surrounding environment.
- Virtual-Physical Interaction:
  - The framework combines Variable Admittance Control (VAC) with a planning algorithm to enable intuitive virtual-physical interaction.
  - The VAC lets the user apply virtual forces as commands to the drone while keeping the commanded motion consistent with the environment map, as sketched further below.
  - An obstacle force field provides additional safety during free-form interaction.
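To illustrate the bi-directional spatial awareness idea, here is a minimal sketch of how an environment map could be kept in a common world frame and re-expressed for either the drone (egocentric) or the MR headset user (exocentric). The class, frame names, and transforms are illustrative assumptions for exposition only; the paper's actual implementation runs in C# on the HoloLens and on the drone's on-board computer.

```python
import numpy as np

class SharedSpatialMap:
    """Sketch of a shared environment representation.

    Points are stored in a common world frame so that both the drone
    (egocentric view) and the MR headset user (exocentric view) can query
    the same map. Names and APIs are illustrative, not the paper's code.
    """

    def __init__(self):
        self.points_world = np.empty((0, 3))  # accumulated obstacle points [m]

    def add_drone_pointcloud(self, points_drone: np.ndarray,
                             T_world_drone: np.ndarray) -> None:
        """Fuse a stereo point cloud expressed in the drone body frame.

        T_world_drone: 4x4 homogeneous transform from drone frame to world frame.
        """
        homogeneous = np.hstack([points_drone, np.ones((len(points_drone), 1))])
        points_world = (T_world_drone @ homogeneous.T).T[:, :3]
        self.points_world = np.vstack([self.points_world, points_world])

    def query_for_headset(self, T_headset_world: np.ndarray) -> np.ndarray:
        """Return the map re-expressed in the MR headset frame for rendering."""
        homogeneous = np.hstack([self.points_world,
                                 np.ones((len(self.points_world), 1))])
        return (T_headset_world @ homogeneous.T).T[:, :3]
```

The sketch only shows the coordinate-frame bookkeeping that makes a shared, bi-directional view of the environment possible; the actual framework exchanges this information between the on-board computer and the HoloLens application.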
The proposed framework is validated through various collaborative planning and exploration tasks involving a drone and a user equipped with an MR headset. The results demonstrate the effectiveness of the spatial awareness and virtual-physical interaction approaches in enabling seamless human-drone collaboration.
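To make the admittance-control idea concrete, below is a minimal sketch of a variable admittance law combined with a repulsive obstacle force field. The virtual mass, the stiffness/damping schedule, and the potential-field obstacle model are illustrative assumptions, not the paper's actual controller; the sketch only conveys how a user-applied virtual force can be filtered into a safe commanded trajectory.

```python
import numpy as np

def obstacle_force(position: np.ndarray, obstacles: np.ndarray,
                   gain: float = 1.0, influence_radius: float = 1.5) -> np.ndarray:
    """Repulsive force field pushing the command away from nearby map points."""
    force = np.zeros(3)
    for obs in obstacles:
        offset = position - obs
        dist = np.linalg.norm(offset)
        if 1e-6 < dist < influence_radius:
            # Classic potential-field repulsion, growing as the distance shrinks.
            force += gain * (1.0 / dist - 1.0 / influence_radius) * offset / dist**3
    return force

def admittance_step(x: np.ndarray, v: np.ndarray, x_ref: np.ndarray,
                    f_virtual: np.ndarray, obstacles: np.ndarray,
                    dt: float = 0.02) -> tuple[np.ndarray, np.ndarray]:
    """One integration step of M*a + D*v + K*(x - x_ref) = f_virtual + f_obstacle.

    x, v: current commanded position/velocity; x_ref: nominal planner reference;
    f_virtual: virtual force applied by the user through the MR interface.
    Returns the updated commanded position and velocity.
    """
    M = 1.0                        # virtual mass [kg]
    # "Variable" admittance (illustrative schedule): stiff when the user force is
    # small so the command tracks the planner, compliant when the user pushes harder.
    K = 4.0 / (1.0 + np.linalg.norm(f_virtual))
    D = 2.0 * np.sqrt(K * M)       # near-critical damping

    f_total = f_virtual + obstacle_force(x, obstacles)
    a = (f_total - D * v - K * (x - x_ref)) / M
    v = v + a * dt
    x = x + v * dt
    return x, v
```

At each step the updated position x would be sent as the setpoint to the drone's position controller, with the obstacle points taken from the shared environment map.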
Statistics
The quadrotor platform used in the experiments is equipped with a Qualcomm® Snapdragon™ board for on-board computing and an embedded stereo camera for obtaining the point cloud.
The Mixed Reality framework is implemented in C# and runs on the Microsoft® HoloLens 2.
The Root Mean Squared Errors between the reference trajectory (∆r) and the commanded trajectory (∆c) during the FPVI and APVI modalities are 0.0373 m and 2.0142 m, respectively.
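For reference, this metric is typically computed as the root of the mean squared per-sample position error; a minimal sketch, assuming both trajectories are sampled as (N, 3) arrays at matching timestamps (variable names are illustrative):

```python
import numpy as np

def rmse(reference: np.ndarray, commanded: np.ndarray) -> float:
    """Root Mean Squared Error between two (N, 3) position trajectories [m]."""
    assert reference.shape == commanded.shape
    errors = np.linalg.norm(reference - commanded, axis=1)  # per-sample Euclidean error
    return float(np.sqrt(np.mean(errors ** 2)))
```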
Quotations
"The emergence of innovative spatial computing techniques, such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), presents a technological opportunity in robotics. These techniques facilitate enhanced collaboration between humans and robots through multi-modal information sharing within the human-robot team based on vision, gestures, natural languages, and gaze."
"To the best of our knowledge, the proposed framework is the first to enable continuous spatial virtual-physical navigation and interaction with an aerial robot via MR."