Key Concept
Mitigating label bias in foundation models improves fine-tuning performance.
Abstract
The article examines label bias that foundation models inherit from their pre-training data and proposes the Generalized Logit Adjustment (GLA) method to mitigate it. GLA delivers consistent improvements across a range of tasks, including ImageNet and few-shot datasets. The study underscores that correcting label bias in pre-training data is key to better downstream task performance.
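The core idea behind logit adjustment can be sketched as follows: subtract an estimate of the (log) label prior from a model's logits so that over-represented classes no longer dominate predictions. This is a minimal, generic sketch; the function name, the `tau` temperature parameter, and the combination with a fine-tuned model's logits are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def adjust_logits(zero_shot_logits, log_prior, tau=1.0):
    """Debias logits by subtracting the estimated log label prior.

    zero_shot_logits: raw class scores from the (biased) foundation model
    log_prior: estimated log of the label distribution in pre-training data
    tau: scaling factor for the adjustment (hypothetical knob)
    """
    return zero_shot_logits - tau * log_prior

# Toy example: the raw model slightly favors class 0, but the estimated
# prior shows class 0 is over-represented in pre-training data.
zero_shot_logits = np.array([2.0, 1.9])
log_prior = np.array([0.5, 0.0])  # class 0 assumed over-represented

adjusted = adjust_logits(zero_shot_logits, log_prior)
# After debiasing, class 1 wins: [1.5, 1.9]
```

The adjustment flips the prediction from the over-represented class 0 to class 1, illustrating why such debiasing particularly helps tail classes.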
Statistics
GLA achieves 1.5 pp accuracy gains on ImageNet
GLA shows large average improvement (1.9-4.4 pp) on 11 few-shot datasets
GLA demonstrates 2.4 pp gains on long-tailed classification
Quotes
"Our GLA achieves consistent improvement across all three subgroups, particularly showing a significant gain on tail classes."
"GLA offers two alternative methods for debiasing: optimization-based bias estimation and identifying label bias through eigenvectors."
"Removing the bias of foundation models is challenging, but GLA demonstrates significant improvements across a diverse range of tasks."