AUEditNet achieves accurate manipulation of facial action unit (AU) intensities in high-resolution synthetic face images through a dual-branch architecture that implicitly disentangles facial attributes from identity, requiring neither retraining nor auxiliary estimators even when subject data are limited.
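To make the idea of a dual-branch, disentangling editor concrete, the sketch below shows a generic design in which one branch encodes identity from the input face and a second branch encodes the target AU intensities, with a decoder fusing the two codes. All module names, dimensions, and layer choices here are illustrative assumptions, not AUEditNet's actual architecture.

```python
# Illustrative sketch only: a generic dual-branch editor that separates
# identity features from attribute (AU-intensity) features.
# Every dimension and layer choice below is hypothetical.
import torch
import torch.nn as nn

class DualBranchEditor(nn.Module):
    def __init__(self, feat_dim: int = 256, num_aus: int = 12):
        super().__init__()
        # Branch 1: encodes identity-related features from the input face.
        self.identity_branch = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Branch 2: encodes the target AU intensities as an attribute code.
        self.attribute_branch = nn.Sequential(
            nn.Linear(num_aus, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Decoder fuses both codes and produces the edited image.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim * 2, 8 * 8 * 64), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.Upsample(scale_factor=4),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, face: torch.Tensor, target_aus: torch.Tensor) -> torch.Tensor:
        id_code = self.identity_branch(face)           # identity is kept fixed
        attr_code = self.attribute_branch(target_aus)  # AU intensities drive the edit
        return self.decoder(torch.cat([id_code, attr_code], dim=1))

# Usage with toy shapes: edit a 32x32 face toward new AU intensities.
model = DualBranchEditor()
face = torch.randn(1, 3, 32, 32)
target_aus = torch.rand(1, 12)       # desired intensity per action unit, in [0, 1]
edited = model(face, target_aus)     # -> (1, 3, 32, 32)
```

The point of the two separate branches is that the attribute code can be swapped (new AU intensities) while the identity code stays fixed, which is one common way such disentanglement is expressed architecturally.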
DiffFAE introduces a one-stage diffusion-based framework for high-fidelity facial appearance editing, addressing challenges of low generation fidelity, poor attribute preservation, and inefficient inference.
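As a rough illustration of what "one-stage" means in a diffusion-based editor, the sketch below runs a single conditional reverse-denoising loop in which the network sees both a source-preserving condition and the target appearance at every step, so no separate refinement stage follows. The conditioning scheme, schedule, and function names are assumptions for illustration, not DiffFAE's actual design.

```python
# Illustrative sketch only: a toy DDPM-style reverse process with joint
# conditioning, standing in for a one-stage diffusion editing loop.
# The denoiser signature and conditioning inputs are hypothetical.
import torch

def edit_with_diffusion(denoiser, source_cond, target_cond,
                        steps: int = 50, shape=(1, 3, 64, 64)):
    """Run a toy reverse diffusion process conditioned on source-preserving
    features (source_cond) and the desired target appearance (target_cond)."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)  # start from pure noise
    for t in reversed(range(steps)):
        # One network call per step; both conditions enter together,
        # so no second-stage model is needed to restore attributes.
        eps = denoiser(x, t, source_cond, target_cond)
        a, ab = alphas[t], alpha_bars[t]
        mean = (x - (1 - a) / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x
```

A multi-stage pipeline would instead chain separate models (e.g. coarse editing followed by attribute restoration), which is the kind of inefficiency a single conditional loop like this avoids.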