The proposed morphable diffusion model enables the generation of 3D-consistent and controllable photorealistic human avatars from a single input image by integrating a 3D morphable model into a state-of-the-art multi-view diffusion framework.
The proposed method generates photorealistic and animatable human avatars from monocular input video by learning a joint representation that combines Gaussian splatting with a textured mesh.