Key Concepts
PhysAvatar combines inverse rendering with inverse physics to automatically estimate the shape, appearance, and physical parameters of a human's clothing from multi-view video data, enabling high-fidelity rendering of avatars in novel motions and lighting conditions.
Summary
The paper introduces PhysAvatar, a novel framework for reconstructing 3D avatars of clothed humans from multi-view video data. The key components of the method are:
Mesh Tracking: The method uses a mesh-aligned 4D Gaussian technique to track the deformation of the garment geometry across the video sequence, providing accurate correspondences.
Physics-based Dynamic Modeling: The tracked mesh sequence is used to estimate the physical parameters of the garment, such as density, membrane stiffness, and bending stiffness, through a gradient-based optimization process that integrates a physics simulator.
Physics-based Appearance Modeling: The refined geometry from the simulation step is used in a physically-based inverse renderer to estimate the surface material and ambient lighting, enabling high-quality rendering of the avatar under novel views and lighting conditions.
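The physical-parameter estimation step above can be illustrated in miniature: optimize a material parameter by running a simulator, comparing the simulated trajectory against the observed one, and following the gradient of the error. The sketch below is purely illustrative and uses a toy damped mass-spring system in place of the paper's cloth simulator, with finite-difference gradients standing in for the paper's gradient computation; all function names and constants are assumptions, not from the paper.

```python
import numpy as np

def simulate_spring(stiffness, n_steps=300, dt=0.01, damping=5.0):
    """Toy stand-in for a cloth simulator: a unit mass on a damped
    spring under gravity, integrated with semi-implicit Euler.
    Returns the displacement trajectory."""
    x, v = 0.0, 0.0
    traj = np.empty(n_steps)
    for i in range(n_steps):
        a = -stiffness * x - damping * v - 9.8  # spring + damping + gravity
        v += dt * a
        x += dt * v
        traj[i] = x
    return traj

def fit_stiffness(observed, k_init=10.0, lr=200.0, n_iters=200, eps=1e-3):
    """Estimate the stiffness parameter by gradient descent on the
    mean-squared trajectory error, using central finite differences
    through the simulator (the gradient-through-simulation idea)."""
    k = k_init
    for _ in range(n_iters):
        lo = np.mean((simulate_spring(k - eps) - observed) ** 2)
        hi = np.mean((simulate_spring(k + eps) - observed) ** 2)
        grad = (hi - lo) / (2 * eps)
        k -= lr * grad
        k = max(k, 1e-3)  # keep the parameter physically plausible
    return k

# "Captured" trajectory generated with a ground-truth stiffness of 50;
# the optimizer starts far away at 10 and recovers a much closer value.
observed = simulate_spring(50.0)
k_fit = fit_stiffness(observed)
```

PhysAvatar applies the same loop at much larger scale: the parameters are garment density, membrane stiffness, and bending stiffness, and the error is measured against the tracked mesh sequence rather than a single trajectory.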
The proposed approach significantly outperforms existing methods both in capturing the realistic dynamics of loose-fitting garments and in the overall visual fidelity of the reconstructed avatars. The authors show results on challenging datasets and discuss potential applications in areas like virtual reality, gaming, and digital fashion.