Facial affective behavior analysis can be enhanced by leveraging multi-modal large language models (MLLMs) through instruction tuning, enabling fine-grained emotion and action unit recognition.
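To make the instruction-tuning setup concrete, the snippet below sketches a hypothetical (image, instruction, response) training sample for fine-grained emotion and AU recognition; the field names and wording are illustrative assumptions, not the format of any specific dataset or model.

```python
# Hypothetical instruction-tuning sample for facial affect analysis.
# Field names and content are assumptions for illustration only.
sample = {
    "image": "face_0001.jpg",
    "instruction": "Describe the facial expression and list the active action units.",
    "response": "The person appears happy; active AUs: AU6 (cheek raiser), AU12 (lip corner puller).",
}

# Collections of such (image, instruction, response) triplets are used to
# fine-tune an MLLM so it answers fine-grained emotion and AU queries.
```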
Facial AU detection benefits from contrastive learning for person-independent representations.
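As a rough illustration of how such a contrastive objective can be set up, the sketch below implements a supervised contrastive loss in PyTorch that pulls together embeddings sharing the same AU label across different subjects; the function name, dimensions, and temperature are assumptions for illustration, not the exact formulation of the cited work.

```python
import torch
import torch.nn.functional as F

def au_contrastive_loss(embeddings, au_labels, temperature=0.1):
    """Supervised contrastive loss: samples sharing an AU label (possibly
    from different identities) are pulled together; all others are pushed apart."""
    z = F.normalize(embeddings, dim=1)                      # (N, D) unit-norm embeddings
    sim = z @ z.t() / temperature                           # (N, N) scaled cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)                  # exclude self-pairs
    pos = ((au_labels.unsqueeze(0) == au_labels.unsqueeze(1)) & ~self_mask).float()
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability over each anchor's positive pairs
    loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()

# Toy usage: 8 samples, 16-dim embeddings, one binary AU label per sample.
emb = torch.randn(8, 16, requires_grad=True)
labels = torch.tensor([0, 1, 0, 1, 1, 0, 1, 0])
au_contrastive_loss(emb, labels).backward()
```

Because positives are defined by AU labels rather than subject identity, the loss encourages representations that generalize across people.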
FaceXFormer is a unified transformer model that handles multiple facial analysis tasks within a single architecture.
DrFER introduces disentangled representation learning for 3D facial expression recognition, separating expression features from identity information through a dual-branch framework and dedicated loss functions, and achieves superior expression recognition performance.
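A minimal sketch of the dual-branch idea is shown below, assuming a shared backbone feature followed by separate expression and identity branches with a simple orthogonality penalty; the layer sizes, loss weights, and penalty term are assumptions for illustration, not DrFER's exact architecture or loss functions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchDisentangler(nn.Module):
    """Toy dual-branch head: expression and identity branches share a backbone
    feature; an orthogonality penalty discourages the two feature spaces from
    encoding the same information."""
    def __init__(self, in_dim=2048, feat_dim=256, n_expr=6, n_id=100):
        super().__init__()
        self.expr_branch = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.id_branch = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.expr_head = nn.Linear(feat_dim, n_expr)
        self.id_head = nn.Linear(feat_dim, n_id)

    def forward(self, backbone_feat, expr_labels, id_labels):
        f_expr = self.expr_branch(backbone_feat)            # expression features
        f_id = self.id_branch(backbone_feat)                 # identity features
        loss_expr = F.cross_entropy(self.expr_head(f_expr), expr_labels)
        loss_id = F.cross_entropy(self.id_head(f_id), id_labels)
        # penalize correlation between the two branches' features
        ortho = (F.normalize(f_expr, dim=1) * F.normalize(f_id, dim=1)).sum(1).pow(2).mean()
        return loss_expr + loss_id + 0.1 * ortho

# Toy usage with random 2048-d backbone features for 4 faces.
model = DualBranchDisentangler()
feats = torch.randn(4, 2048)
loss = model(feats, torch.randint(0, 6, (4,)), torch.randint(0, 100, (4,)))
loss.backward()
```

The design choice illustrated here is that expression recognition is supervised only through the expression branch, so identity cues are routed into the identity branch and kept out of the expression features.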