The paper proposes a method for fast and efficient few-shot novel view synthesis using neural radiance fields (NeRF). The key contributions are:
Annealing Signed Distance Function (ASDF) loss: This loss function enforces adaptive geometric smoothing, guiding the network to first learn the overall structure and then progressively recover detailed geometry. This addresses the instability issues encountered when using the conventional Eikonal loss for few-shot NeRF optimization.
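The paper does not give its exact formula in this summary, but the coarse-to-fine idea behind the ASDF loss can be sketched as an annealed relaxation of the Eikonal constraint (which requires SDF gradients to have unit norm). The schedule, tolerance band, and function names below are illustrative assumptions, not the authors' implementation:

```python
def eikonal_residual(grad_norm):
    # Standard Eikonal constraint: an SDF's gradient should have unit norm.
    return (grad_norm - 1.0) ** 2

def annealed_sdf_loss(grad_norms, step, total_steps):
    """Hypothetical annealed surface loss: early in training the unit-norm
    constraint is relaxed (allowing smooth, coarse geometry to form); the
    tolerance shrinks to zero as alpha ramps from 0 to 1, progressively
    recovering detailed geometry."""
    alpha = min(1.0, step / total_steps)  # assumed linear annealing schedule
    tol = 1.0 - alpha                     # wide tolerance early, none late
    losses = []
    for g in grad_norms:
        r = abs(g - 1.0)
        # Residuals inside the tolerance band incur no penalty.
        losses.append(max(0.0, r - tol) ** 2)
    return sum(losses) / len(losses)
```

At step 0 any gradient norm within 1.0 of the target is unpenalized, so optimization is stable; by the final step the loss reduces to the plain Eikonal penalty.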
Utilization of dense 3D predictions and multi-view consistency: The method leverages additional geometric cues from structure-from-motion and deep dense priors to improve the quality of the reconstructed scenes.
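A common way to enforce multi-view consistency on dense depth predictions is to backproject a pixel from one view, transform it into a second view, reproject it, and compare depths. The following is a minimal pinhole-camera sketch of that check, not the paper's actual pipeline; all names and the tolerance value are assumptions:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    # Pixel coordinates + depth -> 3D point in the camera frame (pinhole model).
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def transform(point, pose):
    # Apply a 4x4 rigid transform (row-major nested lists) to a 3D point.
    x, y, z = point
    return tuple(row[0] * x + row[1] * y + row[2] * z + row[3]
                 for row in pose[:3])

def depth_consistency(u, v, depth_a, depth_b_at, pose_a_to_b,
                      fx, fy, cx, cy, tol=0.05):
    """Check whether dense depth predicted in view A agrees with view B:
    backproject the pixel, move it into B's frame, reproject, and compare
    the transformed depth against B's prediction at the reprojected pixel."""
    p_a = backproject(u, v, depth_a, fx, fy, cx, cy)
    xb, yb, zb = transform(p_a, pose_a_to_b)
    ub = fx * xb / zb + cx   # reprojected pixel in view B
    vb = fy * yb / zb + cy
    return abs(zb - depth_b_at(ub, vb)) < tol
```

Points passing such a check can serve as reliable geometric supervision, while inconsistent predictions are down-weighted or discarded.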
Efficient optimization: By incorporating the ASDF loss and the geometric priors, the proposed approach achieves performance comparable to state-of-the-art methods while training 30-45 times faster.
The paper first analyzes the limitations of the Eikonal loss in the few-shot NeRF setting, demonstrating its instability and inability to capture reliable geometry. It then introduces the ASDF loss, which adaptively smooths the surface during optimization to enable stable convergence. The method further utilizes dense 3D predictions and multi-view consistency to enhance the quality of the reconstructed scenes. Extensive experiments on the ScanNet and NeRF-Real datasets show that the proposed approach achieves comparable performance to state-of-the-art methods while significantly reducing the training time.
Key insights distilled from the paper by Byeongin Jou... at arxiv.org, 04-01-2024: https://arxiv.org/pdf/2403.19985.pdf