The article examines label bias in foundation models and introduces the Generalized Logit Adjustment (GLA) method to mitigate it. GLA delivers consistent gains across benchmarks, including ImageNet and several few-shot datasets. The study underscores the importance of accounting for label bias in pre-training data to improve downstream task performance.
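As a rough illustration of the logit-adjustment idea that GLA builds on (not the paper's exact procedure, which does not assume the pre-training label distribution is known), the sketch below shows the classic adjustment: subtracting scaled log class priors from a classifier's logits to counter label skew. The function name, the `tau` parameter, and the toy priors are assumptions for illustration only.

```python
import numpy as np

def logit_adjust(logits, class_priors, tau=1.0):
    """Debias classifier scores by subtracting (scaled) log class priors.

    logits:       (N, C) raw scores from a possibly biased classifier.
    class_priors: (C,) estimated label distribution of the training data.
    tau:          adjustment strength; tau=1.0 removes the prior entirely.
    """
    return logits - tau * np.log(class_priors + 1e-12)

# Toy example: a 3-class problem where skewed training data inflates class 0.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3)) + np.array([2.0, 0.0, 0.0])  # biased scores
priors = np.array([0.7, 0.2, 0.1])                            # hypothetical skew

adjusted = logit_adjust(logits, priors)
print(np.argmax(logits, axis=1))    # predictions dominated by class 0
print(np.argmax(adjusted, axis=1))  # less skewed after adjustment
```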
Key insights drawn from Beier Zhu, Ka... at arxiv.org, 03-28-2024
https://arxiv.org/pdf/2310.08106.pdf