MAGPIE: Multi-Task Media-Bias Analysis Generalization for Pre-Trained Identification of Expressions
Key Concepts
MAGPIE introduces a large-scale multi-task pre-training approach tailored for media bias detection, outperforming previous models and emphasizing the effectiveness of Multi-Task Learning.
Summary
Abstract:
MAGPIE introduces Large Bias Mixture (LBM) for multi-task pre-training in media bias detection.
Outperforms previous models on the Bias Annotation By Experts (BABE) dataset by 3.3% in F1-score.
Shows improvement on 5 out of 8 tasks in the Media Bias Identification Benchmark (MBIB).
Introduction:
Media bias is a complex issue spanning subtypes such as linguistic, gender, and racial bias.
Effective detection requires a shift from isolated single-task methods to multi-task methodologies.
Methodology:
MAGPIE uses a RoBERTa encoder and requires only 15% of the finetuning steps of single-task approaches.
LBM includes 59 bias-related tasks, enhancing generalizability and performance (see the sketch below).
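As a concrete illustration of the shared-encoder, per-task-head setup described above, here is a minimal sketch assuming PyTorch and Hugging Face transformers. The task names, label counts, and linear heads are illustrative assumptions, not MAGPIE's actual LBM configuration.

import torch.nn as nn
from transformers import AutoModel

class MultiTaskBiasModel(nn.Module):
    def __init__(self, task_num_labels, encoder_name="roberta-base"):
        super().__init__()
        # Shared RoBERTa encoder, reused across every task in the mixture.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One lightweight classification head per task (hypothetical design).
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden, n_labels)
            for task, n_labels in task_num_labels.items()
        })

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # first-token representation
        return self.heads[task](cls)

# Two hypothetical tasks standing in for the 59-task mixture:
model = MultiTaskBiasModel({"babe_bias": 2, "subjectivity": 2})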
Related Work:
Existing models focus on single tasks and saturate quickly on smaller datasets.
Empirical Results:
MAGPIE achieves state-of-the-art performance on BABE dataset and MBIB collection.
Task scaling is crucial for performance improvements in Multi-Task Learning (a sampling sketch follows).
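One way to see what task scaling means in practice is how tasks are sampled during multi-task pre-training. The temperature-scaled proportional sampler below is a common MTL choice and a minimal sketch only; the source does not specify that MAGPIE uses this exact scheme, and the task names and sizes are made up.

import random

def make_task_sampler(task_sizes, alpha=0.5):
    # Temperature-scaled sampling: alpha=1 is size-proportional,
    # alpha=0 is uniform; values in between upweight small tasks.
    weights = {t: n ** alpha for t, n in task_sizes.items()}
    total = sum(weights.values())
    tasks = list(weights)
    probs = [weights[t] / total for t in tasks]
    return lambda: random.choices(tasks, weights=probs, k=1)[0]

# Hypothetical task sizes; each pre-training step draws one task's batch.
sample_task = make_task_sampler({"babe_bias": 3700, "subjectivity": 10000})
for step in range(3):
    print(step, sample_task())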
Statistics
MAGPIE achieved a 3.3% F1-score improvement on the Bias Annotation By Experts (BABE) dataset.
MAGPIE showed improvement on five of the eight tasks in the Media Bias Identification Benchmark (MBIB) collection.