Pourramezan Fard, A., Hosseini, M. M., Sweeny, T. D., & Mahoor, M. H. (2024). AffectNet+: A Database for Enhancing Facial Expression Recognition with Soft-Labels. arXiv preprint arXiv:2410.22506.
This paper introduces AffectNet+, a novel facial expression dataset designed to address the limitations of existing datasets in capturing the nuances of human emotions, particularly compound emotions, by employing a "soft-label" approach.
The researchers developed AffectNet+ by building upon the existing AffectNet dataset. They utilized a subset of AffectNet with multiple human annotations to train two deep learning models: an ensemble of binary classifiers and an action unit (AU)-based classifier. These models generated "soft-labels," representing the probability of each emotion being present in an image. Based on the agreement between soft-labels and original "hard-labels," the researchers categorized AffectNet+ images into three subsets: Easy, Challenging, and Difficult.
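The partitioning step can be sketched as follows. This is a hypothetical illustration, not the paper's published rule: the threshold values, the emotion ordering, and the exact agreement criterion are assumptions made for the example.

```python
# Hypothetical sketch of the soft-label agreement partitioning described
# above. Thresholds and the decision rule are illustrative assumptions,
# not AffectNet+'s exact criteria.

EMOTIONS = ["neutral", "happy", "sad", "surprise",
            "fear", "disgust", "anger", "contempt"]

def categorize(soft_label, hard_label, high=0.6, low=0.3):
    """Assign an image to the Easy / Challenging / Difficult subset.

    soft_label: per-emotion probabilities (e.g., averaged outputs of an
                ensemble of binary classifiers), summing to ~1.
    hard_label: the original single AffectNet annotation (emotion name).
    """
    p_hard = soft_label[EMOTIONS.index(hard_label)]
    top = EMOTIONS[max(range(len(soft_label)), key=soft_label.__getitem__)]
    if top == hard_label and p_hard >= high:
        return "Easy"          # soft- and hard-labels clearly agree
    if p_hard >= low:
        return "Challenging"   # partial agreement, e.g. compound emotion
    return "Difficult"         # soft-labels contradict the hard-label

# Example: a face annotated "happy" that the models also score as happy
probs = [0.05, 0.70, 0.02, 0.08, 0.02, 0.03, 0.05, 0.05]
print(categorize(probs, "happy"))  # -> Easy
```

The key design point is that the soft-label vector preserves secondary emotions (e.g., a face scored 0.5 happy and 0.4 surprised), which a single hard-label would discard.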
AffectNet+ offers a valuable resource for facial expression recognition (FER) research. By augmenting single hard-labels with soft-labels, it addresses the limitations of traditional forced-choice annotation and supports the development of more robust and accurate FER models, particularly for recognizing compound emotions. The work thereby contributes to computer vision and affective computing, paving the way for models that better capture the complexity of real-world human emotional expression.
While AffectNet+ represents a meaningful advance, future work could expand the dataset to cover more diverse demographics and incorporate temporal information from video, further improving the recognition of dynamic facial expressions.