The paper presents the authors' research and experimental results on developing ultra-portable 3D mapping systems for emergency responders. Four sensor configurations, either helmet-mounted or body-worn, were evaluated together with various sensor fusion algorithms during field trials.
Experiment A used an Intel RealSense L515 time-of-flight (ToF) camera mounted on a helmet, which produced a coherent global layout but suffered from limited field of view, range, and point density. Experiment B replaced the ToF camera with a Microsoft Azure Kinect DK, which improved mapping performance in small indoor environments but struggled in larger spaces and in areas with indirect sunlight.
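For context, mapping with a depth camera of this kind typically begins by back-projecting each depth frame into a point cloud in the camera frame before fusion into a map. The sketch below illustrates that step under a pinhole model; the intrinsic parameters are placeholders, not the L515's or Kinect's actual calibration, and the paper does not spell out this exact pipeline.

```python
import numpy as np

def depth_to_points(depth_m: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a metric depth image (H, W) into an (N, 3) point cloud
    in the camera frame using a pinhole model. Intrinsics are placeholder
    arguments; real values come from the sensor's factory calibration."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth return
```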
Experiment C utilized a Livox Mid-360 solid-state LiDAR, combined with two Luxonis OAK-D Pro Wide stereo visual-inertial sensors, mounted on the helmet. This setup demonstrated promising overall accuracy, benefiting from the higher quality LiDAR data and dual-camera visual-inertial system. However, the placement of the LiDAR sensor was identified as a potential area for improvement.
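Fusing a LiDAR with two stereo visual-inertial cameras presupposes that every sensor's measurements can be expressed in one common body frame. The following is a minimal sketch of that step, assuming known 4x4 extrinsic transforms from each sensor to a shared helmet frame; the transform names and identity placeholder values are illustrative assumptions, not the calibration used in the paper.

```python
import numpy as np

# Hypothetical extrinsics: homogeneous transforms from each sensor frame
# to a common helmet frame (identity placeholders, not real calibration).
T_HELMET_LIDAR = np.eye(4)
T_HELMET_CAM_LEFT = np.eye(4)
T_HELMET_CAM_RIGHT = np.eye(4)

def to_helmet_frame(points: np.ndarray, T_helmet_sensor: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from a sensor frame into the helmet frame."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homogeneous @ T_helmet_sensor.T)[:, :3]

def merge_clouds(lidar_pts, left_pts, right_pts):
    """Stack all sensors' points once they are expressed in the helmet frame."""
    return np.vstack([
        to_helmet_frame(lidar_pts, T_HELMET_LIDAR),
        to_helmet_frame(left_pts, T_HELMET_CAM_LEFT),
        to_helmet_frame(right_pts, T_HELMET_CAM_RIGHT),
    ])
```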
Finally, Experiment D explored a non-rigid, dual LiDAR-inertial system, with the sensors attached to a tactical jacket using Velcro. This approach aimed to increase the coverage and robustness of the LiDAR-inertial odometry, with encouraging initial results, though there were also failure cases in which one of the LiDARs could not gather sufficient geometric features.
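One common way to detect the failure mode described above, where a LiDAR sees too little geometry to constrain scan matching, is to examine the eigenvalues of the scan's point covariance. The sketch below shows such a degeneracy check; the threshold and the idea of down-weighting a degenerate scan are assumptions for illustration, not the authors' stated criterion.

```python
import numpy as np

def is_geometrically_degenerate(scan: np.ndarray,
                                eig_ratio_thresh: float = 0.01) -> bool:
    """Flag an (N, 3) LiDAR scan whose points span too few directions.

    If the smallest eigenvalue of the point covariance is tiny relative to
    the largest (e.g. a long featureless corridor), scan matching is poorly
    constrained and odometry from this LiDAR should be down-weighted or
    skipped. The threshold is an illustrative placeholder, not a tuned value.
    """
    centered = scan - scan.mean(axis=0)
    cov = centered.T @ centered / max(len(scan) - 1, 1)
    eigvals = np.linalg.eigvalsh(cov)  # returned in ascending order
    return eigvals[0] < eig_ratio_thresh * eigvals[-1]
```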
The authors conclude that wearable 3D mapping systems have the potential to revolutionize emergency response capabilities, and future work will focus on refining sensor placement and exploring additional data fusion techniques to create even more robust and informative 3D maps for real-world scenarios.