The author proposes VIGFace, a framework for generating synthetic facial images to address the challenges associated with traditional real-image datasets in face recognition. By incorporating virtual prototypes into the model, VIGFace ensures unique virtual identities and achieves state-of-the-art results.
The author presents LAFS, a landmark-based self-supervised learning approach, to enhance face recognition performance by leveraging facial landmarks and self-supervised pretraining.
The author presents a novel approach to RGB-D face recognition by leveraging virtual depth synthesis and adaptive confidence weighting for improved accuracy and robustness.
The author introduces the Feature-Guided Gradient Backpropagation method to enhance explainability in face verification systems, providing precise saliency maps for "Accept" and "Reject" decisions.
The author proposes a contrastive learning method to address the scarcity of AU annotations by learning from unlabelled facial videos, achieving discriminative AU representations and person-independent AU detection.
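To make the idea concrete, here is a minimal, generic sketch of a contrastive (InfoNCE-style) objective of the kind such methods build on. This is an illustration only, not the paper's actual loss: the pairing of temporally adjacent frames as positives, the embedding dimensions, and all names are assumptions.

```python
import numpy as np

def normalize(x):
    # Project embeddings onto the unit sphere so dot products are cosines.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive loss: row i of `positives` is the
    positive for row i of `anchors`; all other rows act as negatives."""
    a = normalize(anchors)
    p = normalize(positives)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # diagonal = positive pairs

# Toy example: embeddings of temporally adjacent frames (hypothetical
# positives from the same unlabelled video) are near-duplicates.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
adjacent = base + 0.05 * rng.normal(size=(8, 16))
loss_matched = info_nce_loss(base, adjacent)
loss_random = info_nce_loss(base, rng.normal(size=(8, 16)))
print(loss_matched < loss_random)  # matched frame pairs give lower loss
```

Minimizing such a loss pulls embeddings of related frames together and pushes unrelated ones apart, which is the general mechanism for learning discriminative representations without AU labels.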
The author proposes a new AU detection framework that combines multi-task learning, facial landmark detection, and domain separation to enhance performance in detecting facial action units in the wild.
This paper introduces an efficiency-driven approach to face recognition model quantization, achieving strong accuracy with a substantially smaller training dataset. The core argument is that effective quantization can be achieved with minimal data and training time.
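For readers unfamiliar with quantization, the following is a minimal generic sketch of symmetric per-tensor int8 weight quantization, the basic operation such approaches refine. It is not the paper's method; all names and the toy weight matrix are illustrative assumptions.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: choose a scale so the
    largest-magnitude weight maps to +/-127, then round to integers."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximate float tensor for accuracy evaluation.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.02, size=(128, 128)).astype(np.float32)  # toy layer
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = np.abs(w - w_hat).max()
print(q.dtype, max_err < s)  # error is bounded by half a quantization step
```

Because rounding error per weight is at most half the scale, the quantized layer stays close to the original; data-efficient schemes then use a small calibration set to tune scales rather than retraining on the full dataset.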