This paper presents NEAT, a novel rendering-distilling formulation using neural fields to represent 3D line segments and junctions, enabling matching-free 3D wireframe reconstruction from multi-view images.
GD2-NeRF is a coarse-to-fine generative detail compensation framework that hierarchically incorporates GANs and pre-trained diffusion models into One-shot Generalizable Neural Radiance Fields (OG-NeRF), synthesizing novel views with vivid, plausible details without any inference-time fine-tuning.
NaviNeRF, a NeRF-based 3D reconstruction model, achieves fine-grained disentanglement while preserving 3D accuracy and consistency without any priors or supervision.
We propose a regularized optimization approach to enable 3D Gaussian Splatting (3DGS) for sparse input views. Our key idea is to introduce coherency to the 3D Gaussians during optimization by constraining their movement in 2D image space using an implicit decoder and total variation loss. We further leverage monocular depth and flow correspondences to initialize and regularize the 3D Gaussian representation, enabling high-quality texture and geometry reconstruction from extremely sparse inputs.
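The total variation loss mentioned above is a standard smoothness penalty on neighboring values. As a minimal illustrative sketch (in NumPy; the function name and exact formulation are assumptions, not the paper's implementation), an image-space TV loss can be written as:

```python
import numpy as np

def total_variation_loss(img: np.ndarray) -> float:
    """Anisotropic total variation on a (C, H, W) array: the mean absolute
    difference between vertically and horizontally adjacent values.
    Minimizing it discourages abrupt spatial changes, acting as a
    smoothness/coherency regularizer."""
    dh = np.abs(img[:, 1:, :] - img[:, :-1, :]).mean()  # vertical neighbors
    dw = np.abs(img[:, :, 1:] - img[:, :, :-1]).mean()  # horizontal neighbors
    return float(dh + dw)

# A constant image has zero total variation.
print(total_variation_loss(np.ones((3, 8, 8))))  # → 0.0
```

In the paper's setting the penalty constrains the movement of projected 3D Gaussians in 2D image space rather than raw pixel values, but the smoothness principle is the same.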
A novel method for decomposing 3D scenes into individual objects and backgrounds with minimal human interaction, by integrating the Segment Anything Model (SAM) with hybrid implicit-explicit neural surface representations and a mesh-based region-growing technique.