Key Concepts
Multitask learning can enhance worst-group outcomes and address group-wise fairness by regularizing the joint multitask representation space.
Summary
Multitask learning (MTL) is examined as a tool for improving worst-group accuracy and group-wise fairness. The study analyzes how standard MTL affects worst-group error and proposes modifications that make it more effective for this goal. Across several datasets, regularized MTL consistently improves both average and worst-group outcomes relative to competing methods.
The study focuses on fine-tuning pre-trained models, showing that regularized MTL is a simple yet robust way to improve worst-group performance. Drawing on insights from synthetic-data experiments, the authors adapt MTL to target worst-group error directly. The results also underscore the role of pre-training in making regularized MTL an effective remedy for poor worst-group outcomes.
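To make the idea of "regularizing the joint multitask representation space" concrete, here is a minimal sketch of a regularized multitask objective: a main-task loss plus an auxiliary-task loss computed from a shared representation, with an L2 penalty on that representation. The linear encoder, the task heads, the penalty form, and the weights `lam_aux` / `lam_reg` are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: one shared linear encoder feeding two task heads.
W_shared = rng.normal(size=(8, 4))  # shared encoder weights (assumed)
W_main = rng.normal(size=(4, 1))    # main-task head
W_aux = rng.normal(size=(4, 1))     # auxiliary-task head

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

def regularized_mtl_loss(x, y_main, y_aux, lam_aux=0.5, lam_reg=0.1):
    """Joint objective: main-task loss + weighted auxiliary-task loss
    + L2 penalty on the shared representation. The penalty stands in
    for the representation-space regularization the summary refers to;
    the specific form and weights here are assumptions."""
    z = x @ W_shared                              # shared representation
    loss_main = mse(z @ W_main, y_main)           # main-task error
    loss_aux = mse(z @ W_aux, y_aux)              # auxiliary-task error
    reg = float(np.mean(np.sum(z ** 2, axis=1)))  # penalize representation norm
    return loss_main + lam_aux * loss_aux + lam_reg * reg

# Example evaluation on random data; in practice this scalar would be
# minimized by gradient descent over the encoder and head weights.
x = rng.normal(size=(16, 8))
y_main = rng.normal(size=(16, 1))
y_aux = rng.normal(size=(16, 1))
loss = regularized_mtl_loss(x, y_main, y_aux)
```

Setting `lam_reg=0` recovers plain MTL; increasing it shrinks the shared representation, which is the kind of constraint the study suggests helps worst-group error.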
Key Statistics
Multitasking often achieves better worst-group accuracy than Just-Train-Twice (JTT).
Regularized MTL consistently outperforms JTT on both average and worst-group outcomes.
Regularized MTL improves worst-group accuracy over ERM by approximately 4%.
Regularized MTL outperforms JTT and Bitrate-Constrained DRO on various datasets.