Core Concepts
In long-tailed classification, imbalanced gradients bias models towards head classes; the Gradient-Aware Logit Adjustment (GALA) loss addresses this by balancing accumulated gradients.
Abstract:
Data often follows a long-tailed distribution.
Imbalanced gradients bias models towards head classes.
The GALA loss balances accumulated gradients during optimization to correct this bias.
Introduction:
Deep learning struggles with long-tailed data.
Adjusting the classifier is crucial for addressing long-tail issues.
Imbalanced gradients bias classifiers towards head classes.
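The gradient imbalance the paper starts from can be illustrated numerically. The sketch below is not from the paper: it just shows that with softmax cross-entropy, where the gradient on logit j is (p_j - y_j), a tail class receives suppressing gradient from almost every sample and reinforcing gradient from almost none. The 100:1 head/tail split and random logits are illustrative choices.

```python
import numpy as np

# Hypothetical demonstration: accumulate softmax cross-entropy gradients
# over an imbalanced label stream (100 head samples, 1 tail sample).
rng = np.random.default_rng(0)
counts = np.array([100, 1])            # head vs tail sample counts
labels = np.repeat([0, 1], counts)
logits = rng.normal(size=(labels.size, 2))

e = np.exp(logits - logits.max(axis=1, keepdims=True))
p = e / e.sum(axis=1, keepdims=True)
grad = p - np.eye(2)[labels]           # dL/dz per sample and class

# Reinforcing gradients pull a class logit up (target-class terms);
# suppressing gradients push it down (non-target terms).
reinforce = -(grad * (grad < 0)).sum(axis=0)
suppress = (grad * (grad > 0)).sum(axis=0)
print(reinforce / suppress)            # ratio >> 1 for head, << 1 for tail
```

The head class ends up with far more reinforcing than suppressing gradient, and the tail class the reverse, which is the distortion GALA targets.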
Method:
The problem setup and notation are defined.
The GALA loss introduces margin terms into the logits to balance accumulated gradients across classes.
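A gradient-aware margin of this kind can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: the accumulation scheme and the margin form log(pos/neg), which follows the balanced-softmax convention of adding larger margins to head classes during training, are assumptions, and the class name GradientAwareMargins is hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class GradientAwareMargins:
    """Illustrative sketch of gradient-aware logit margins (assumed form).

    For softmax cross-entropy the gradient on logit j is (p_j - y_j):
    reinforcing (negative) on the target class, suppressing (positive)
    elsewhere. We accumulate both magnitudes per class and, during
    training, add log(pos/neg) to the logits, so head classes carry a
    larger margin and tail logits must be pushed harder to compensate.
    """

    def __init__(self, num_classes, eps=1e-8):
        self.pos = np.full(num_classes, eps)  # accumulated reinforcing grads
        self.neg = np.full(num_classes, eps)  # accumulated suppressing grads

    def adjusted_logits(self, logits):
        margin = np.log(self.pos / self.neg)  # hypothetical margin form
        return logits + margin[None, :]

    def update(self, logits, labels):
        p = softmax(self.adjusted_logits(logits))
        onehot = np.eye(p.shape[1])[labels]
        grad = p - onehot                     # dL/dz for cross-entropy
        self.pos += (onehot * -grad).sum(axis=0)       # (1 - p_y) on targets
        self.neg += ((1 - onehot) * grad).sum(axis=0)  # p_j on non-targets
        return grad
```

At inference the raw logits would be used, so the margin only reshapes the training signal.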
Experiments:
Conducted on CIFAR100-LT, ImageNet-LT, Places-LT, iNaturalist2018.
The GALA loss achieves superior performance across all four datasets.
Conclusion:
GALA loss effectively balances imbalanced gradients.
A prediction re-balancing strategy further mitigates bias towards head classes.
Stats
Our approach achieves top-1 accuracy of 48.5%, 41.4%, and 73.3% on the CIFAR100-LT, Places-LT, and iNaturalist2018 datasets, respectively.
Quotes
"Imbalanced gradients distort the classifier in two ways."
"Our proposed GALA loss outperforms many prior methods by obvious margins."