The article examines the effectiveness of adversarial training under long-tailed distributions, with a focus on data augmentation. It notes that existing adversarial training methods are typically evaluated on balanced datasets and may be less effective in real-world scenarios where data is long-tailed. The study dissects the components of RoBal and identifies Balanced Softmax Loss (BSL) as the crucial one. It then examines robust overfitting and reports the unexpected finding that data augmentation not only mitigates robust overfitting but also substantially improves robustness. Various augmentation techniques, including MixUp, Cutout, and CutMix, are evaluated for their impact on model performance. The experiments show that data augmentation increases example diversity, yielding improved robustness across different datasets and architectures.
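To make the two key ingredients concrete, here is a minimal numpy sketch of Balanced Softmax Loss (a cross-entropy whose logits are shifted by the log class frequencies, so head classes stop dominating the gradient under a long-tailed label distribution) and of MixUp (convex combination of two examples and their labels). This is an illustrative sketch, not the paper's implementation; the function names, the `alpha=0.2` default, and the use of one-hot label vectors are assumptions.

```python
import numpy as np

def balanced_softmax_loss(logits, labels, class_counts):
    """Balanced Softmax Loss sketch: shift each logit z_c by log(n_c),
    where n_c is the number of training examples of class c, then apply
    standard cross-entropy. With uniform counts this reduces to plain CE."""
    adjusted = logits + np.log(class_counts)            # z_c + log n_c
    adjusted -= adjusted.max(axis=1, keepdims=True)     # numerical stability
    log_probs = adjusted - np.log(np.exp(adjusted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """MixUp sketch: draw lam ~ Beta(alpha, alpha) and blend a pair of
    inputs and their one-hot labels with the same coefficient."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

Because the log-prior shift penalizes confident predictions on head classes, BSL raises the loss on tail-class examples relative to plain cross-entropy, pushing the model toward larger margins on rare classes.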
Key takeaways from the source content by Xinli Yue, Ni... on arxiv.org, 03-18-2024. https://arxiv.org/pdf/2403.10073.pdf