Core Concepts
Incorporating individuals' moral foundations can enhance the performance of stance detection models on social media data.
Abstract
This study investigates how moral foundation dimensions can contribute to predicting an individual's stance on a given target on social media. The authors incorporate moral foundation features extracted from text, along with message semantic features, to classify stances at both message- and user-levels using traditional machine learning models, fine-tuned language models, and large language models.
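The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: it assumes a toy lexicon-based moral foundation scorer (real work typically uses resources such as the Moral Foundations Dictionary) and TF-IDF semantic features, concatenated and fed to a traditional classifier.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical mini-lexicon mapping words to moral foundation indices;
# a real pipeline would use a full moral foundations lexicon or model.
FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity"]
LEXICON = {"harm": 0, "protect": 0, "unfair": 1, "equal": 1,
           "betray": 2, "loyal": 2, "obey": 3, "tradition": 3,
           "pure": 4, "sacred": 4}

def moral_features(text: str) -> np.ndarray:
    """Count lexicon hits per foundation, normalized by message length."""
    tokens = text.lower().split()
    counts = np.zeros(len(FOUNDATIONS))
    for tok in tokens:
        if tok in LEXICON:
            counts[LEXICON[tok]] += 1.0
    return counts / max(len(tokens), 1)

# Toy messages and stance labels for illustration only.
messages = ["we must protect the planet from harm",
            "this policy is unfair to equal citizens",
            "i obey tradition and stay loyal"]
stances = ["favor", "favor", "against"]

# Message semantic features (TF-IDF) plus moral foundation features.
tfidf = TfidfVectorizer()
semantic = tfidf.fit_transform(messages).toarray()
moral = np.vstack([moral_features(m) for m in messages])
X = np.hstack([semantic, moral])

# A traditional ML classifier over the combined feature space.
clf = LogisticRegression(max_iter=1000).fit(X, stances)
```

The same concatenation idea carries over to the fine-tuned and LLM settings in the study, where moral foundation information is supplied alongside the message text rather than as an explicit feature vector.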
The key findings are:
Encoding moral foundations can enhance the performance of stance detection tasks, with improvements in F1 score of up to 23.7 points, depending on the choice of model and dataset.
The predictive performance of moral foundation-based models varies across tasks (message-level vs. user-level stance detection), datasets, stance targets, and classifiers.
The study highlights interesting associations between moral foundations and stances toward specific targets; for example, users against the "Climate Change is a Real Concern" target display moral violations of the Care, Fairness, Authority, and Sanctity foundations.
Incorporating moral foundations improves F1 score on stance detection tasks on average by 1.06 points for traditional machine learning models, 5.91 points for fine-tuned language models, and 15.82 points for large language models (LLMs), suggesting that adding such psychological attributes may be particularly fruitful for LLM-based stance detection.
Stats
Moral foundations can enhance stance detection performance by up to 23.7 F1 score points.
Incorporating moral foundations improves F1 score on average by 1.06 points for traditional machine learning models, 5.91 points for fine-tuned language models, and 15.82 points for large language models.
Quotes
"Encoding moral foundations can enhance the performance of stance detection tasks, with improvements in F1 score up to 23.7 points, depending on the choice of model and dataset."
"The incorporation of moral foundations improves F1 score on stance detection tasks on average by 1.06 points for traditional machine learning models, 5.91 points for Fine-tuned Language Models, and 15.82 points for Large Language Models."