Core Concepts
This research paper introduces a novel consistency loss function designed to enhance the performance of point cloud completion networks (PCCNs) by mitigating the one-to-many mapping problem inherent in reconstructing 3D objects from incomplete point cloud data.
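The paper's exact loss formulation is not reproduced here; as an illustrative sketch of the idea (the function names and the pairwise-Chamfer form are assumptions), a consistency loss can penalize disagreement between completions predicted from different partial views of the same object:

```python
import numpy as np

def chamfer_l2(p, q):
    # Symmetric squared-L2 Chamfer Distance between point sets p (N,3) and q (M,3).
    d = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def consistency_loss(completions):
    # completions: outputs predicted from different partial views of the SAME
    # object. Penalizing their mutual disagreement discourages the network
    # from mapping similar partial inputs to divergent completed shapes
    # (the one-to-many mapping problem).
    loss, pairs = 0.0, 0
    for i in range(len(completions)):
        for j in range(i + 1, len(completions)):
            loss += chamfer_l2(completions[i], completions[j])
            pairs += 1
    return loss / max(pairs, 1)
```

In practice this term would be added, with some weighting, to the network's usual reconstruction loss against the ground truth.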
Statistics
The PCN model trained with an improved training strategy achieved a CD-ℓ2 score of 2.37 · 10⁻³, a substantial improvement over the previously reported 4.08 · 10⁻³.
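The CD scores reported throughout are Chamfer Distances; a minimal sketch of the metric, assuming the standard symmetric squared-ℓ2 formulation (papers sometimes differ in scaling or averaging conventions):

```python
import numpy as np

def chamfer_distance_l2(p, q):
    # p: (N,3) predicted point cloud, q: (M,3) ground-truth point cloud.
    # Pairwise squared distances via broadcasting, shape (N, M).
    d = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    # For each point, the squared distance to its nearest neighbor in the
    # other set, averaged in both directions and summed.
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Lower is better: identical point sets give a score of 0, and small per-point displacements grow the score quadratically.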
The second approach (predicting only the missing points) yields better completion performance than the first approach (predicting the complete point cloud).
The CD score of networks trained and evaluated on DB (randomly selected ShapeNet55 data) is lower (better) than that of networks trained and evaluated on DA (a dataset designed to highlight the one-to-many mapping issue).
The completion performance is improved by 27%, 25%, and 4.8% for PCN, AxFormNet, and AdaPoinTr, respectively, when trained with the consistency loss on the ShapeNet55 dataset.
Training AdaPoinTr and SVDFormer with the consistency loss on the MVP dataset improved performance: the CD metric decreased by 0.19 for SVDFormer and by 0.06 for AdaPoinTr.
PCN with consistency loss achieves a mean CD of 1.07 · 10⁻³, which is better than the mean CD of PoinTr (1.09 · 10⁻³) on the ShapeNet55 dataset.
AxFormNet with consistency loss achieves a mean CD of 0.91 · 10⁻³, which is better than the mean CD of SeedFormer (0.92 · 10⁻³) on the ShapeNet55 dataset.
The inference latencies of PCN (1.9 ms) and AxFormNet (5.3 ms) are significantly lower than those of PoinTr (11.8 ms) and SeedFormer (38.3 ms).
Incorporating the consistency loss significantly narrows the gap between evaluation results on the ShapeNet34 seen split and the ShapeNet34 unseen split for PCN and AxFormNet.
SVDFormer's Chamfer Distance metric improved from 1.302 to 1.2731, and AdaPoinTr's improved from 1.2802 to 1.2588 when trained with the consistency loss on the ShapeNet55 dataset.
The training time of SVDFormer increased from 641.02 ms to 709.21 ms per batch (an increase of approximately 10.6%) when trained with the consistency loss on the ShapeNet55 dataset.