Core Concepts
The authors investigate gendered emotion attribution in Large Language Models (LLMs), revealing consistent stereotypes shaped by societal norms and shedding light on the interplay between language, gender, and emotion.
Abstract
The study explores how Large Language Models (LLMs) reflect gender stereotypes in emotion attribution. It reveals that LLMs consistently associate women with sadness and men with anger, aligning with societal biases. The research highlights the implications of these findings for downstream emotion applications and calls for interdisciplinary collaboration to address biases in NLP systems.
Stats
"We find stark gender differences: models attribute SADNESS to women 10,635 times and only 6,886 times to men; JOY is attributed 4,415 times and 6,520 times to men and women, respectively."
"Models overwhelmingly link SADNESS with women and ANGER with men."
"LLMs predict different emotions based on gender."
"Models attribute ANGER to men almost twice as often as for women (13,173 times compared to 7,042)."
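The reported counts can be turned into attribution ratios to make the gender gaps concrete; the sketch below is illustrative arithmetic over the numbers quoted above, not part of the study's methodology.

```python
# Counts copied from the study's reported figures; the ratio computation
# itself is an illustrative sketch, not the paper's own analysis.
counts = {
    "SADNESS": {"women": 10_635, "men": 6_886},
    "ANGER":   {"women": 7_042,  "men": 13_173},
    "JOY":     {"women": 6_520,  "men": 4_415},
}

for emotion, by_gender in counts.items():
    dominant = max(by_gender, key=by_gender.get)          # gender with more attributions
    ratio = max(by_gender.values()) / min(by_gender.values())
    print(f"{emotion}: attributed to {dominant} {ratio:.2f}x more often")
```

Running this shows ANGER attributed to men at roughly a 1.87x ratio, matching the "almost twice as often" claim, with SADNESS skewed toward women at about 1.54x.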
Quotes
"Emotions serve as heuristics to interpret a given situation."
"The presence of stereotypes in LLMs poses a potential risk to downstream emotion applications."