Core Concepts
A novel global optimization approach to accurately colorize LiDAR point clouds by jointly optimizing the poses of 360-degree camera frames to maximize photometric consistency.
Abstract
The paper presents OmniColor, a novel and efficient algorithm for colorizing point clouds using an independent 360-degree camera. The key highlights are:
OmniColor addresses the challenge of fusing LiDAR and camera data, where inaccurate camera poses often lead to unsatisfactory mapping results. It proposes a global optimization that jointly optimizes the poses of all camera frames to maximize the photometric consistency of the colored point cloud (a cost of this form is sketched after these highlights).
The method leverages the wide field of view (FOV) of 360-degree cameras to capture the surrounding scene in every frame, which reduces artifacts caused by illumination variation and provides enough correspondences to adapt to diverse environments (see the projection sketch below).
OmniColor introduces a point cloud co-visibility estimation approach that mitigates the impact of noise on the point cloud surface, improving the optimization (see the visibility sketch below).
The approach operates in an off-the-shelf manner, enabling seamless integration with any mobile mapping system while ensuring both convenience and accuracy. Extensive experiments demonstrate its superiority over existing frameworks.
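To make the joint-optimization idea concrete, here is a minimal sketch (not the paper's implementation) of a photometric consistency cost over camera poses, written in NumPy. The `project` callable, the camera-to-world pose convention, and the nearest-pixel sampling are all assumptions for illustration.

```python
import numpy as np

def photometric_consistency_cost(poses, points, images, project):
    """Sum of per-point intensity variances across all frames that see the point.

    poses   : list of 4x4 camera-to-world transforms (the variables to optimize)
    points  : (N, 3) LiDAR points in the world frame
    images  : list of H x W grayscale panoramas, one per pose
    project : callable mapping camera-frame points to (u, v) pixels plus a
              validity mask (a hypothetical helper; one version is sketched next)
    """
    n = len(points)
    s1, s2, cnt = np.zeros(n), np.zeros(n), np.zeros(n)
    pts_h = np.hstack([points, np.ones((n, 1))])        # homogeneous coordinates
    for T_cw, img in zip(poses, images):
        cam = (np.linalg.inv(T_cw) @ pts_h.T).T[:, :3]  # world -> camera frame
        uv, valid = project(cam, img.shape)
        u = np.clip(np.round(uv[valid, 0]).astype(int), 0, img.shape[1] - 1)
        v = np.clip(np.round(uv[valid, 1]).astype(int), 0, img.shape[0] - 1)
        g = img[v, u].astype(float)                     # nearest-pixel intensity
        s1[valid] += g
        s2[valid] += g * g
        cnt[valid] += 1
    seen = cnt > 1                                      # need >= 2 observations
    mean = s1[seen] / cnt[seen]
    return float(np.sum(s2[seen] / cnt[seen] - mean ** 2))
```

Minimizing this cost over the pose parameters drives all frames toward agreement on each point's color, which is the sense in which the colorization is globally optimized.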
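The wide FOV enters through the projection model: an equirectangular panorama maps azimuth and elevation linearly to pixel coordinates, so every direction around the sensor is observable in a single frame. Below is one common convention (axis layout varies by camera vendor, so treat it as an assumption); it matches the `project` signature used in the previous sketch.

```python
import numpy as np

def project_equirect(cam_pts, img_shape):
    """Map camera-frame 3D points to equirectangular pixel coordinates.

    Assumed convention: +z forward, +x right, +y down; azimuth spans the image
    width, elevation the height. Returns (N, 2) pixels and a validity mask.
    """
    h, w = img_shape[:2]
    x, y, z = cam_pts[:, 0], cam_pts[:, 1], cam_pts[:, 2]
    r = np.linalg.norm(cam_pts, axis=1)
    valid = r > 1e-9                                   # reject points at the origin
    theta = np.arctan2(x, z)                           # azimuth in [-pi, pi]
    phi = np.arcsin(np.clip(y / np.maximum(r, 1e-9), -1.0, 1.0))  # elevation
    u = (theta / (2.0 * np.pi) + 0.5) * w
    v = (phi / np.pi + 0.5) * h
    return np.stack([u, v], axis=1), valid
```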
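Co-visibility estimation decides which points a given frame actually sees before they contribute to the cost. Setting the paper's exact scheme aside, a standard building block for this is hidden-point removal via spherical flipping (Katz et al., 2007), sketched here with SciPy; the radius factor is a tunable assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

def visible_indices(points, cam_center, radius_factor=100.0):
    """Indices of points visible from cam_center, via spherical flipping:
    points are reflected across a large sphere centered on the camera, and
    those landing on the convex hull of the flipped set are deemed visible."""
    p = points - cam_center
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    norms = np.maximum(norms, 1e-12)                  # avoid division by zero
    radius = norms.max() * radius_factor
    flipped = p + 2.0 * (radius - norms) * (p / norms)
    hull = ConvexHull(np.vstack([flipped, np.zeros((1, 3))]))  # add the camera
    idx = set(hull.vertices.tolist())
    idx.discard(len(p))                               # drop the camera vertex itself
    return np.array(sorted(idx))
```

Combining this visibility mask with the projection above yields the per-frame observation sets that the photometric cost aggregates.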
Stats
The paper provides the following key metrics:
Rotation errors (degrees) and translation errors (centimeters) on the HKUST Guangzhou campus dataset:
Depth Edge-Based method: 1.71°/5.70 cm
Intensity Edge-Based method: 2.03°/5.56 cm
OmniColor: 0.475°/3.06 cm
Rotation errors (degrees) and translation errors (centimeters) on the Omniscenes dataset:
Prior Pose-based SfM method: 0.31°/3.61 cm
OmniColor: 0.25°/3.56 cm