The paper presents a Hybrid Mesh-Gaussian Head Avatar (MeGA) that models different head components (face and hair) using more suitable representations to achieve high-fidelity rendering and support various editing functionalities.
For facial modeling, MeGA adopts an enhanced FLAME mesh as the base representation and predicts a UV displacement map to account for personalized geometric details. It also uses disentangled neural textures, including a diffuse texture map, a view-dependent texture map, and an expression-dependent dynamic texture map, to achieve photorealistic facial rendering.
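The texture pipeline above can be sketched as a simple composition of the three disentangled maps. This is a hypothetical illustration, not the paper's exact formulation: the additive residual blend, the function names, and the map shapes are all assumptions.

```python
import numpy as np

H, W = 4, 4  # tiny UV map for illustration

def compose_texture(diffuse, view_dep, expr_dyn):
    """Blend three disentangled texture maps into a final facial texture.

    diffuse:  static base color, shape (H, W, 3)
    view_dep: view-dependent residual (e.g. specular effects), shape (H, W, 3)
    expr_dyn: expression-dependent dynamic residual, shape (H, W, 3)

    The additive combination is an assumption for this sketch; the paper's
    network may combine the maps differently.
    """
    tex = diffuse + view_dep + expr_dyn
    return np.clip(tex, 0.0, 1.0)

diffuse = np.full((H, W, 3), 0.5)
view_dep = np.full((H, W, 3), 0.1)
expr_dyn = np.full((H, W, 3), -0.05)
final = compose_texture(diffuse, view_dep, expr_dyn)
print(final.shape, float(final[0, 0, 0]))
```

Separating the maps this way lets a static diffuse component carry identity while the view- and expression-dependent residuals stay small and easier to learn.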
For hair modeling, MeGA first builds a static canonical 3D Gaussian Splatting (3DGS) representation and then applies a rigid transformation and an MLP-based deformation field to handle complex dynamic expressions. An occlusion-aware blending module is proposed to properly blend the face and hair images.
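The hair pipeline above — canonical Gaussians moved by a rigid transform plus a learned deformation, then composited with the face render — can be sketched as follows. This is a minimal illustration under stated assumptions: the tiny "MLP" uses fixed random weights as a stand-in for the learned deformation field, and the blend is a plain alpha composite standing in for the paper's occlusion-aware module.

```python
import numpy as np

rng = np.random.default_rng(0)

def rigid_transform(points, R, t):
    """Apply the head pose to canonical Gaussian centers: x' = R @ x + t."""
    return points @ R.T + t

def mlp_deformation(points, expr_code, W1, W2):
    """Toy per-point offset conditioned on an expression code (stand-in
    for the learned MLP-based deformation field)."""
    cond = np.broadcast_to(expr_code, (points.shape[0], expr_code.shape[0]))
    feat = np.concatenate([points, cond], axis=1)
    hidden = np.tanh(feat @ W1)
    return hidden @ W2  # per-point 3D offset

def occlusion_aware_blend(face_rgb, hair_rgb, hair_alpha):
    """Simplified composite: hair over face weighted by hair opacity."""
    return hair_alpha * hair_rgb + (1.0 - hair_alpha) * face_rgb

# canonical Gaussian centers and a dummy expression code
pts = rng.normal(size=(8, 3))
expr = rng.normal(size=(4,))
W1 = rng.normal(scale=0.1, size=(7, 16))
W2 = rng.normal(scale=0.1, size=(16, 3))

R = np.eye(3)                   # identity rotation for the sketch
t = np.array([0.0, 0.1, 0.0])   # small translation

deformed = rigid_transform(pts, R, t) + mlp_deformation(pts, expr, W1, W2)

face = np.full((2, 2, 3), 0.8)
hair = np.full((2, 2, 3), 0.2)
alpha = np.full((2, 2, 1), 0.5)
blended = occlusion_aware_blend(face, hair, alpha)
print(deformed.shape, blended.shape)
```

The rigid transform handles gross head motion cheaply, so the deformation network only needs to model the small expression-driven residual motion of the hair.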
The hybrid representation enables various editing functionalities, such as hairstyle alteration and texture editing, which are not easily supported by previous methods. Experiments on the NeRSemble dataset demonstrate that MeGA outperforms state-of-the-art methods in terms of novel expression synthesis and novel view synthesis, while also supporting the aforementioned editing capabilities.
Key insights distilled from the source: by Cong Wang, Di... at arxiv.org, 05-01-2024
https://arxiv.org/pdf/2404.19026.pdf