The MER-MCE framework leverages specialized emotion encoders for text, audio, and visual modalities, as well as Multimodal Language Models, to effectively identify emotions and their underlying causes in multimodal conversational data.
UniMEEC proposes a unified framework to jointly model emotion recognition and emotion-cause pair extraction, leveraging the complementarity and causality between emotion and emotion cause.
Affective computing for short videos was largely neglected in the past; the MSEVA system aims to monitor emotions in short videos to support public opinion guidance.
The proposed approach develops an efficient multimodal emotion cause analysis (ECA) system that integrates diverse modalities.