Key Idea
The author introduces the SEGAA model as a novel approach to predicting age, gender, and emotion simultaneously from speech data, highlighting the limitations of separate single-task models and advocating a multi-output learning architecture.
Abstract
The study explores predicting age, gender, and emotion from vocal cues using deep learning models. It addresses challenges in sourcing suitable data and proposes the SEGAA model for efficient predictions across all three variables. The experiments compare single, multi-output, and sequential models to capture intricate relationships between variables.
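To make the multi-output idea concrete, the sketch below shows a generic network with one shared encoder and three task heads, trained by summing the per-task losses. This is a minimal illustration of multi-output learning, not the SEGAA architecture itself: the layer sizes, the 40-dimensional input (e.g. MFCC features), and the class counts for emotion and age groups are all assumptions for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiOutputSpeechNet(nn.Module):
    """Illustrative multi-output model: a shared encoder feeds three
    task-specific heads (emotion, gender, age). Dimensions are assumed,
    not taken from the SEGAA paper."""

    def __init__(self, n_features=40, n_emotions=8, n_age_groups=5):
        super().__init__()
        # Shared representation learned jointly for all three tasks.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # One classification head per output variable.
        self.emotion_head = nn.Linear(64, n_emotions)
        self.gender_head = nn.Linear(64, 2)
        self.age_head = nn.Linear(64, n_age_groups)

    def forward(self, x):
        h = self.encoder(x)
        return self.emotion_head(h), self.gender_head(h), self.age_head(h)

model = MultiOutputSpeechNet()
x = torch.randn(4, 40)  # a batch of 4 dummy feature vectors
emo, gen, age = model(x)

# Joint training minimizes the sum of the three task losses,
# so gradients from all tasks shape the shared encoder.
loss = (F.cross_entropy(emo, torch.randint(0, 8, (4,)))
        + F.cross_entropy(gen, torch.randint(0, 2, (4,)))
        + F.cross_entropy(age, torch.randint(0, 5, (4,))))
loss.backward()
```

A single forward pass yields all three predictions at once, which is the efficiency argument the study makes against running three separate single-task models.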
Statistics
The SEGAA model achieves an accuracy of 96% for emotion detection.
The MLP model attains 98% accuracy for gender detection.
The SEGAA model reaches 95% accuracy for age detection.
Quotes
"The ability to discern emotions offers an opportunity to improve emotional and behavioral disorders."
"SEGAA demonstrates a level of predictive capability comparable to univariate models."
"The proposed model emerges as the most efficient choice."