Emotions are complex, multi-faceted experiences that involve multiple interconnected components, including appraisal, motivation, expression, physiology, and feeling. This study explores the relationship between the Component Process Model (CPM) and discrete emotions using interactive Virtual Reality (VR) games, multimodal data collection, and machine learning methods.
Pupillometry, the measurement of pupil diameter, can be combined with machine learning techniques to recognize human emotions, including happiness, sadness, anger, and fear, with high accuracy within a virtual reality environment.
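The pupillometry-based pipeline above reduces to extracting summary features from a pupil-diameter trace and feeding them to a classifier. A minimal sketch follows, assuming hypothetical traces and toy per-emotion centroids (real studies use eye-tracker streams, baseline correction, and trained models, not these illustrative values):

```python
import statistics

# Hypothetical pupil-diameter trace (mm); values are illustrative only.
def extract_features(trace):
    """Summarize a pupil-diameter time series as (mean, spread)."""
    return (statistics.mean(trace), statistics.pstdev(trace))

def nearest_centroid(features, centroids):
    """Assign the emotion whose centroid is closest in feature space."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

# Toy per-emotion centroids in (mean diameter, variability) space.
centroids = {
    "happiness": (3.2, 0.15),
    "fear": (4.1, 0.40),
}

trace = [4.0, 4.2, 3.9, 4.3, 4.1]  # sustained dilation, fear-like
print(nearest_centroid(extract_features(trace), centroids))  # → fear
```

In practice the same structure holds with richer features (dilation velocity, latency to peak) and a learned classifier in place of the hand-set centroids.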
Across different fusion approaches for emotion recognition from speech that combine acoustic and text-based features, BERT embeddings yield improved performance compared to GloVe embeddings.
An effective fusion model for dimensional emotion recognition applies recursive cross-modal attention.
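Recursive cross-modal attention can be read as repeatedly letting one modality re-attend to another, refining its representation at each step. A minimal NumPy sketch under that loose reading (the function names, dimensions, and two-step recursion are assumptions, not the paper's architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query_feats, key_feats):
    """Scaled dot-product attention: one modality attends to another."""
    d = query_feats.shape[-1]
    scores = query_feats @ key_feats.T / np.sqrt(d)
    return softmax(scores) @ key_feats

def recursive_fusion(audio, video, steps=2):
    """Refine the audio representation by re-attending to video each step."""
    fused = audio
    for _ in range(steps):
        fused = fused + cross_modal_attention(fused, video)
    return fused

rng = np.random.default_rng(0)
audio = rng.standard_normal((5, 8))  # 5 audio frames, 8-dim features
video = rng.standard_normal((7, 8))  # 7 video frames, 8-dim features
print(recursive_fusion(audio, video).shape)  # → (5, 8)
```

The fused output keeps the query modality's frame count, so it can be regressed directly onto dimensional labels such as valence and arousal.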
A novel approach integrating masked autoencoder (MAE) pre-training, a temporal convolutional network (TCN), and Transformer modules enhances continuous emotion recognition performance.
The authors propose a counterfactual emotion inference framework (CLEF) to address the interference of context bias in emotion recognition. By decoupling the causal relationships and subtracting the biased predictions, CLEF achieves robust, debiased predictions.
The authors propose MultiDAG+CL, which integrates Directed Acyclic Graphs and Curriculum Learning to enhance Multimodal Emotion Recognition in Conversation models.