FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models
FaceTalk introduces a generative approach for synthesizing high-fidelity 3D motion sequences of talking human heads from input audio signals. It operates by running a diffusion model in the expression space of neural parametric head models, so the sampled outputs are sequences of expression codes that drive the head model.
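The core idea can be illustrated with a minimal, self-contained sketch: ancestral DDPM sampling over a sequence of expression codes, conditioned on an audio embedding. Everything here is illustrative, not the actual FaceTalk implementation: `denoiser` is a toy stand-in for the learned audio-conditioned noise predictor (the real model would be a trained network over the frame sequence), and all dimensions and schedule values are assumptions.

```python
import numpy as np

T = 50                       # number of diffusion steps (illustrative)
SEQ_LEN, EXPR_DIM = 8, 16    # frames x expression-code dimension (illustrative)
AUDIO_DIM = 32               # audio embedding size (illustrative)

# Linear beta schedule, as in standard DDPMs
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

rng = np.random.default_rng(0)

def denoiser(x_t, t, audio_emb):
    """Toy stand-in for the learned audio-conditioned noise predictor.
    A real model would be a trained network attending over the sequence."""
    return 0.1 * x_t + 0.01 * audio_emb.mean()

def sample(audio_emb):
    """DDPM ancestral sampling over a sequence of expression codes."""
    x = rng.standard_normal((SEQ_LEN, EXPR_DIM))
    for t in reversed(range(T)):
        eps = denoiser(x, t, audio_emb)
        # Posterior mean under the predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x  # expression coefficients to drive the parametric head model

expr_seq = sample(rng.standard_normal(AUDIO_DIM))
print(expr_seq.shape)
```

Sampling in the compact expression space (rather than over raw mesh vertices) keeps the diffusion problem low-dimensional; the parametric head model then decodes each sampled code into full 3D face geometry.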