The paper proposes MIRETR, a coarse-to-fine approach to multi-instance point cloud registration. At the coarse level, it jointly learns instance-aware superpoint features and predicts per-instance masks using an Instance-aware Geometric Transformer module. This allows the method to suppress interference from the background and from other instances, leading to reliable superpoint correspondences.
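To make the coarse step concrete, below is a minimal PyTorch sketch of how a predicted per-instance mask can gate superpoint self-attention so that each superpoint aggregates context mainly from its own instance. The class name, tensor shapes, and the pairwise "same-instance" logits are illustrative assumptions, not the paper's actual Instance-aware Geometric Transformer.

```python
import torch
import torch.nn as nn


class InstanceMaskedAttention(nn.Module):
    """Self-attention over superpoints, gated by a predicted per-instance mask
    (hypothetical sketch, not the paper's module)."""

    def __init__(self, d_model: int = 256, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor, same_instance_logits: torch.Tensor):
        # feats: (B, N, d_model) superpoint features
        # same_instance_logits: (B, N, N) pairwise "belongs to same instance" logits
        blocked = same_instance_logits.sigmoid() < 0.5             # True = do not attend
        eye = torch.eye(feats.shape[1], dtype=torch.bool, device=feats.device)
        blocked = blocked & ~eye                                   # always keep self-attention
        attn_mask = blocked.repeat_interleave(self.num_heads, dim=0)  # (B*heads, N, N)
        out, _ = self.attn(feats, feats, feats, attn_mask=attn_mask)
        return out
```

The masking is the key design point: attention weights across instance boundaries are zeroed out, so cluttered backgrounds and neighbouring instances contribute little to each superpoint's descriptor.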
At the fine level, the superpoint correspondences are extended to instance candidates based on the instance masks. Instance-wise point correspondences are then extracted within each instance candidate to estimate per-instance poses. An efficient candidate selection and refinement algorithm is further devised to obtain the final registrations, bypassing the need for multi-model fitting.
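The per-instance pose fit can be illustrated with a standard weighted Kabsch/SVD solver applied to the correspondences of a single instance candidate; the function names and the inlier threshold below are assumptions for illustration, not the paper's exact solver.

```python
import numpy as np


def estimate_instance_pose(src_pts, tgt_pts, weights):
    """Weighted Kabsch/SVD: rigid (R, t) mapping src_pts onto tgt_pts.

    src_pts, tgt_pts: (M, 3) corresponding points within one instance candidate.
    weights: (M,) correspondence confidences.
    """
    w = weights / (weights.sum() + 1e-8)
    src_c = (w[:, None] * src_pts).sum(axis=0)        # weighted centroids
    tgt_c = (w[:, None] * tgt_pts).sum(axis=0)
    H = (w[:, None] * (src_pts - src_c)).T @ (tgt_pts - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                # proper rotation, det(R) = +1
    t = tgt_c - R @ src_c
    return R, t


def inlier_count(R, t, src_pts, tgt_pts, thresh=0.05):
    """Number of correspondences explained by (R, t) within `thresh` (hypothetical units)."""
    residuals = np.linalg.norm(src_pts @ R.T + t - tgt_pts, axis=1)
    return int((residuals < thresh).sum())
```

With one such fit per instance candidate, candidates whose poses explain few correspondences (low inlier count) can be discarded or refined, which is roughly the role the paper assigns to its candidate selection and refinement step in place of a generic multi-model fitting pipeline.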
Extensive experiments on three public benchmarks demonstrate the efficacy of the proposed method. Compared to the state-of-the-art, MIRETR achieves significant improvements, especially on the challenging ROBI benchmark where it outperforms the previous best by 16.6 points on F1 score. The method can effectively handle cluttered scenes and heavily-occluded instances by leveraging the instance-aware correspondences.
Key insights extracted from the paper by Zhiyuan Yu, Z... (arxiv.org, 04-09-2024): https://arxiv.org/pdf/2404.04557.pdf