MAGPIE: Multi-Task Media-Bias Analysis Generalization for Pre-Trained Identification of Expressions
Core Concept
MAGPIE introduces a large-scale multi-task pre-training approach tailored to media bias detection; it outperforms previous models and demonstrates the effectiveness of Multi-Task Learning (MTL) for this problem.
Summary
Abstract:
MAGPIE introduces the Large Bias Mixture (LBM), a compilation of bias-related tasks for multi-task pre-training in media bias detection.
Outperforms previous models on the Bias Annotation By Experts (BABE) dataset by 3.3% in F1-score.
Shows improvement on 5 out of 8 tasks in the Media Bias Identification Benchmark (MBIB).
Introduction:
Media bias is a complex issue involving various subtypes, such as linguistic, gender, and racial bias.
Effective detection requires a shift from isolated single-task methods to multi-task methodologies.
Methodology:
MAGPIE uses a RoBERTa encoder and needs only 15% of the finetuning steps required by single-task approaches.
The LBM comprises 59 bias-related tasks, enhancing generalization and performance (a minimal architecture sketch follows below).
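
The shared-encoder, per-task-head pattern this setup implies can be sketched as follows. This is a minimal illustration assuming Hugging Face Transformers, not the authors' released code; the task names, label counts, and the roberta-base checkpoint are placeholder assumptions.

    import torch.nn as nn
    from transformers import RobertaModel

    class MultiTaskBiasModel(nn.Module):
        """One shared RoBERTa encoder with a lightweight head per task."""
        def __init__(self, task_num_labels):
            super().__init__()
            # Single encoder shared across all bias-related tasks.
            self.encoder = RobertaModel.from_pretrained("roberta-base")
            hidden = self.encoder.config.hidden_size
            # One classification head per task.
            self.heads = nn.ModuleDict({
                task: nn.Linear(hidden, n_labels)
                for task, n_labels in task_num_labels.items()
            })

        def forward(self, input_ids, attention_mask, task):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]  # [CLS]-position representation
            return self.heads[task](cls)

    # Hypothetical subset of bias-related tasks (the full LBM has 59).
    model = MultiTaskBiasModel({"media_bias": 2, "subjectivity": 2, "hate_speech": 2})

Sharing one encoder is what lets the model amortize pre-training across many tasks while keeping per-task parameters small.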
Related Work:
Existing models focus on single tasks and saturate quickly on smaller datasets.
Empirical Results:
MAGPIE achieves state-of-the-art performance on the BABE dataset and the MBIB collection.
Task scaling is crucial for performance gains in Multi-Task Learning (see the sampling sketch below).
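
As a rough illustration of how a growing task pool enters training, here is a single multi-task step with uniform task sampling; the sampling scheme and loss are assumptions for illustration, not MAGPIE's exact recipe.

    import random
    import torch.nn.functional as F

    def training_step(model, task_batches, optimizer):
        # Sample one task per step; enlarging this pool (up to LBM's 59
        # tasks) is the "task scaling" referred to above.
        task = random.choice(list(task_batches))
        input_ids, attention_mask, labels = task_batches[task]
        logits = model(input_ids, attention_mask, task=task)
        loss = F.cross_entropy(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return task, loss.item()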
Source: arxiv.org
Statistics
MAGPIE achieves a 3.3% F1-score improvement on the Bias Annotation By Experts (BABE) dataset.
MAGPIE shows improvement on 5 tasks in the Media Bias Identification Benchmark (MBIB) collection.