Multitask learning (MTL) is explored as a tool for improving worst-group accuracy and group-wise fairness. The study examines how standard MTL affects worst-group error and proposes modifications that make it more effective for this goal. Across several datasets, regularized MTL consistently outperforms competing methods on both average and worst-group outcomes.
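For context, worst-group accuracy is simply the minimum accuracy over predefined subgroups of the data. Below is a minimal sketch of that metric; the function name and the flat integer group encoding are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Minimum per-group accuracy over all groups.

    preds, labels, groups: 1-D arrays of equal length; `groups` assigns
    each example to a subgroup (e.g., a class/attribute combination).
    """
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        accs.append((preds[mask] == labels[mask]).mean())
    return min(accs)

# Example: the smaller group drags down the worst-group score.
preds  = np.array([1, 1, 0, 0, 1, 1])
labels = np.array([1, 1, 0, 0, 0, 1])
groups = np.array([0, 0, 0, 0, 1, 1])
print(worst_group_accuracy(preds, labels, groups))  # 0.5 (group 1)
```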
The study focuses on fine-tuning pre-trained models and shows that regularized MTL is a simple yet robust way to improve performance. Drawing on insights from synthetic-data experiments, it adapts MTL to target worst-group error directly. The results also underscore the role of pre-training: the pre-trained representation is what enables regularized MTL to serve as an effective remedy for poor worst-group outcomes.
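One way to picture this setup is a shared encoder trained on the end task plus an auxiliary objective, with a regularizer anchoring the encoder to its pre-trained weights. The PyTorch sketch below assumes a classification-style auxiliary head and an L2-SP-style anchor penalty; the class, loss weights, and heads are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

class RegularizedMTL(nn.Module):
    """Multitask fine-tuning: end-task loss + auxiliary-task loss +
    an L2 penalty pulling the shared encoder back toward its
    pre-trained weights (hypothetical instantiation)."""

    def __init__(self, encoder, task_head, aux_head,
                 mtl_weight=1.0, reg_weight=0.1):
        super().__init__()
        self.encoder = encoder
        self.task_head = task_head
        self.aux_head = aux_head
        self.mtl_weight = mtl_weight
        self.reg_weight = reg_weight
        # Frozen snapshot of the pre-trained weights used as the anchor.
        self.anchor = {n: p.detach().clone()
                       for n, p in encoder.named_parameters()}

    def loss(self, x_task, y_task, x_aux, y_aux):
        ce = nn.functional.cross_entropy
        task_loss = ce(self.task_head(self.encoder(x_task)), y_task)
        aux_loss = ce(self.aux_head(self.encoder(x_aux)), y_aux)
        reg = sum(((p - self.anchor[n]) ** 2).sum()
                  for n, p in self.encoder.named_parameters())
        return task_loss + self.mtl_weight * aux_loss + self.reg_weight * reg

# Toy usage with a tiny encoder and random data.
enc = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
model = RegularizedMTL(enc, nn.Linear(32, 2), nn.Linear(32, 8))
loss = model.loss(torch.randn(4, 16), torch.randint(0, 2, (4,)),
                  torch.randn(4, 16), torch.randint(0, 8, (4,)))
loss.backward()
```

The anchor term is what makes the MTL "regularized": without it, fine-tuning is free to drift away from the pre-trained solution that the paper identifies as important for worst-group outcomes.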
Key insights distilled from the source paper by Atharva Kulkarni et al. (arxiv.org, 03-01-2024): https://arxiv.org/pdf/2312.03151.pdf