GenN2N is a unified framework for NeRF-to-NeRF translation that supports a range of 3D NeRF editing tasks, including text-driven editing, colorization, super-resolution, and inpainting, by leveraging 2D image-to-image translation methods and modeling the distribution of edited 3D NeRFs.
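To make the "distribution of edited NeRFs" idea concrete, the sketch below shows one generic way an appearance head can be conditioned on a per-edit latent code, so that sampling different codes yields different plausible edited renderings. The class name, dimensions, and architecture are illustrative assumptions, not GenN2N's actual implementation.

```python
# Hypothetical sketch: a NeRF color head conditioned on a per-edit latent z.
# Sampling a new z at test time produces a different plausible edit of the scene.
import torch
import torch.nn as nn

class LatentConditionedColorHead(nn.Module):
    def __init__(self, feat_dim=256, latent_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, point_features, z):
        # point_features: (N, feat_dim) per-sample scene features
        # z: (latent_dim,) edit code shared across the whole scene
        z = z.expand(point_features.shape[0], -1)
        return self.mlp(torch.cat([point_features, z], dim=-1))

head = LatentConditionedColorHead()
feats = torch.randn(1024, 256)
z = torch.randn(32)      # a different z would give a different plausible edit
rgb = head(feats, z)     # (1024, 3) edited colors
```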
By decomposing a 3D scene's appearance into low-frequency and high-frequency components, the proposed method enables high-fidelity, transferable, photorealistic editing driven by text instructions.
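The following is a minimal sketch, under the assumption that the low-frequency component can be approximated by Gaussian blurring and the high-frequency component by the residual; it is not the paper's implementation, but it illustrates why editing only one component keeps fine detail intact and makes the edit transferable.

```python
# Illustrative low/high-frequency split of an image's appearance (assumption:
# Gaussian blur approximates the low-frequency component).
import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequencies(image, sigma=5.0):
    """image: float array of shape (H, W, 3) in [0, 1]."""
    low = gaussian_filter(image, sigma=(sigma, sigma, 0))  # blur spatially only
    high = image - low                                      # fine-detail residual
    return low, high

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
low, high = split_frequencies(img)
edited_low = np.clip(low * np.array([1.2, 0.9, 0.9]), 0.0, 1.0)  # toy color edit
recombined = np.clip(edited_low + high, 0.0, 1.0)  # style changed, detail preserved
```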
This paper proposes a new language-driven method for efficiently inserting or removing objects in neural radiance field (NeRF) scenes. The method leverages a text-to-image diffusion model to blend objects into background NeRFs, and a novel pose-conditioned dataset update strategy to ensure view-consistent rendering.
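The sketch below outlines one plausible form of such an iterative, pose-conditioned dataset update loop: each training view is periodically re-rendered from the current NeRF, the edited region is re-blended with a pose-aware diffusion call, and the view's training image is replaced so optimization always sees view-consistent targets. All names here (`render_view`, `diffusion_blend`, `train_step`, the view attributes) are hypothetical placeholders, not the paper's actual API.

```python
# Hedged sketch of a pose-conditioned dataset update loop; the callables are
# supplied by the surrounding training code and are assumptions, not a real API.
def update_dataset_with_poses(nerf, views, prompt, render_view, diffusion_blend,
                              rounds=10, steps_per_round=500):
    """nerf: trainable radiance field; views: items with .camera_pose,
    .object_mask, .target_image; render_view / diffusion_blend: callables."""
    for _ in range(rounds):
        for view in views:
            rendered = render_view(nerf, view.camera_pose)   # current NeRF rendering
            view.target_image = diffusion_blend(             # pose-conditioned blend of
                rendered, view.object_mask, prompt,          # the inserted/removed object
                pose=view.camera_pose)
        for _ in range(steps_per_round):
            nerf.train_step(views)                           # fit NeRF to updated targets
    return nerf
```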