This study presents a framework for reference-based, 3D-aware image editing built on the triplane latent space of the EG3D generator. By spatially disentangling and fusing triplane features, the approach transfers attributes from a reference image while preserving the identity of the input.
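As a rough illustration of spatially disentangled triplane fusion, the sketch below blends reference triplane features into the input triplanes only inside a spatial mask. The function name, tensor shapes, and the linear blend are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_triplanes(input_planes, ref_planes, mask):
    """Blend reference triplane features into the input triplanes
    inside a spatial mask, leaving the rest untouched.

    input_planes, ref_planes: (3, C, H, W) triplane feature maps
    (EG3D renders from three axis-aligned feature planes; the
    channel count C and resolution H=W here are placeholders).
    mask: (H, W) array with values in [0, 1] marking the edit region.
    """
    m = mask[None, None]  # broadcast to (1, 1, H, W)
    return (1.0 - m) * input_planes + m * ref_planes

# Toy usage: copy reference features into a square region only.
inp = np.zeros((3, 8, 64, 64), dtype=np.float32)
ref = np.ones((3, 8, 64, 64), dtype=np.float32)
mask = np.zeros((64, 64), dtype=np.float32)
mask[16:48, 16:48] = 1.0
fused = fuse_triplanes(inp, ref, mask)
```

The key point is that the edit is localized in the triplane's spatial dimensions, which is what lets reference attributes be integrated without overwriting identity-carrying features elsewhere.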
Our method, 3DPE, enables efficient and versatile 3D-aware portrait editing from a single image by distilling knowledge from 3D GANs and diffusion models into a lightweight module.
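To make the distillation idea concrete, here is a minimal sketch of training a lightweight student to reproduce a frozen teacher's outputs under an L2 objective. The linear student, the synthetic teacher, and the closed-form gradient are all stand-in assumptions; in 3DPE the teacher outputs would come from the heavyweight 3D-GAN plus diffusion pipeline and the student would be a small neural module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a single linear map stands in for the
# lightweight editing module being distilled.
W = rng.normal(scale=0.1, size=(8, 8))

def student(x, W):
    return x @ W

def distill_step(x, teacher_y, W, lr=0.1):
    """One gradient step on the L2 distillation loss
    mean((student(x) - teacher_y)**2), using the closed-form
    gradient for a linear student."""
    pred = student(x, W)
    grad = 2.0 * x.T @ (pred - teacher_y) / x.shape[0]
    loss = float(np.mean((pred - teacher_y) ** 2))
    return W - lr * grad, loss

x = rng.normal(size=(16, 8))
teacher_y = x @ rng.normal(size=(8, 8))  # stand-in for teacher edits
first_loss = None
for _ in range(50):
    W, loss = distill_step(x, teacher_y, W)
    if first_loss is None:
        first_loss = loss
```

After training, only the cheap student runs at inference time, which is what makes the distilled module efficient for single-image editing.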