The paper introduces GoMAvatar, a novel approach for real-time, memory-efficient, high-quality animatable human modeling from a single monocular video. The key contribution is the Gaussians-on-Mesh (GoM) representation, which combines the rendering quality and speed of Gaussian splatting with the geometry modeling and compatibility of deformable meshes.
The GoM representation attaches Gaussian splats to a deformable mesh, allowing for efficient rendering and articulation. The authors also propose a unique differentiable shading module that splits the final color into a pseudo albedo map from Gaussian splatting and a pseudo shading map derived from the normal map.
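The core idea can be illustrated with a minimal sketch: one Gaussian is attached per mesh face, so its center and orientation follow the mesh as it deforms, and a normal-derived shading term modulates color. All function names and the one-Gaussian-per-face, Lambertian-style shading choices here are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def face_gaussian_params(vertices, faces):
    """Attach one Gaussian per mesh face (illustrative assumption):
    center at the face barycenter, orientation from the face normal."""
    tri = vertices[faces]                      # (F, 3, 3) triangle corners
    centers = tri.mean(axis=1)                 # (F, 3) barycenters
    e1 = tri[:, 1] - tri[:, 0]                 # two edge vectors per face
    e2 = tri[:, 2] - tri[:, 0]
    normals = np.cross(e1, e2)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return centers, normals

def pseudo_shading(normals, light_dir):
    """Lambertian-style stand-in for the paper's normal-map-derived
    pseudo shading map: clamped dot product with a light direction."""
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir /= np.linalg.norm(light_dir)
    return np.clip(normals @ light_dir, 0.0, 1.0)

# Toy example: a single triangle in the z = 0 plane.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2]])
centers, normals = face_gaussian_params(verts, faces)
shade = pseudo_shading(normals, [0., 0., 1.])   # light along +z
# centers[0] → [1/3, 1/3, 0], shade[0] → 1.0 (face fully lit)
```

Because the Gaussian parameters are simple functions of the mesh vertices, articulating the mesh (e.g. via linear blend skinning) automatically re-poses every splat, which is what makes the representation both animatable and fast to render.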
Extensive experiments on the ZJU-MoCap, PeopleSnapshot, and YouTube datasets show that GoMAvatar matches or surpasses the rendering quality of state-of-the-art monocular human modeling algorithms, while significantly outperforming them in computational efficiency (43 FPS) and memory efficiency (3.63 MB per subject).
Source: arxiv.org