Core Concepts
DNGaussian achieves high-quality novel view synthesis from sparse input views with fast training and real-time rendering.
Summary
The paper introduces DNGaussian, a framework based on 3D Gaussian radiance fields for few-shot novel view synthesis. It relies on depth regularization and a global-local depth normalization to improve scene geometry and rendering quality. DNGaussian matches or outperforms state-of-the-art methods in quality while substantially reducing training time and accelerating rendering.
Introduction
Radiance fields for novel view synthesis
Challenges in sparse-view NeRFs
Introduction of DNGaussian for efficient view synthesis
Method
Depth regularization for Gaussian radiance fields
Global-local depth normalization for detailed geometry reshaping
Training details and loss function formulation
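The global-local depth normalization above can be illustrated with a minimal sketch: depth maps are normalized both over the whole image (preserving overall layout) and within small patches (amplifying small local depth errors), and the regularization loss compares the normalized rendered depth against a normalized monocular-depth prior. This is a hedged illustration, not the paper's implementation: the exact normalization (the paper's choice of statistics, patch size, loss form, and weighting) may differ, and all function names here are illustrative.

```python
import numpy as np

def local_normalize(depth, patch=8, eps=1e-6):
    """Normalize each non-overlapping patch to zero mean and unit scale,
    so the loss emphasizes small, local depth differences.
    (Illustrative choice of statistics; the paper's may differ.)"""
    h, w = depth.shape
    out = np.empty_like(depth, dtype=np.float64)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            p = depth[i:i + patch, j:j + patch]
            out[i:i + patch, j:j + patch] = (p - p.mean()) / (p.std() + eps)
    return out

def global_normalize(depth, eps=1e-6):
    """Normalize the whole depth map, keeping the global scene layout."""
    return (depth - depth.mean()) / (depth.std() + eps)

def depth_regularization_loss(rendered_depth, mono_depth, patch=8):
    """Combine a global and a local term comparing the rendered depth
    with a monocular-depth prior (both scale-normalized)."""
    local = np.mean((local_normalize(rendered_depth, patch)
                     - local_normalize(mono_depth, patch)) ** 2)
    glob = np.mean((global_normalize(rendered_depth)
                    - global_normalize(mono_depth)) ** 2)
    return local + glob
```

Because both depth maps are normalized before comparison, the loss is insensitive to the unknown scale and shift of the monocular prior, which is the key property this normalization is meant to provide.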
Experiments
Evaluation on LLFF, DTU, and Blender datasets
Comparison with state-of-the-art methods
Efficiency study on limited resources
Supplementary Material
Additional results on depth normalization and neural color renderer
Comparison with grid-based methods
Implementation details and pre-trained depth models
Statistics
DNGaussian achieves a 25× reduction in training time and over 3000× faster rendering speed.
DNGaussian outperforms state-of-the-art methods in quality and efficiency.
Quotes
"DNGaussian stands out by delivering comparably high-quality synthesized views and superior details with a remarkable 25× reduction in time and significantly lower memory overhead during training."