The article examines the effectiveness of adversarial training under long-tailed distributions, with a focus on data augmentation. It notes that existing methods are typically evaluated on balanced datasets and may be less effective in real-world scenarios where data follows a long-tailed distribution. The study analyzes RoBal's components and identifies Balanced Softmax Loss (BSL) as the crucial one. It then examines robust overfitting and reports the unexpected finding that data augmentation not only mitigates overfitting but also improves robustness substantially. Augmentation techniques such as MixUp, Cutout, and CutMix are compared for their effect on model performance. The experiments indicate that data augmentation increases example diversity, which in turn improves model robustness across different datasets and architectures.
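Since the summary singles out Balanced Softmax Loss (BSL) as the crucial component of RoBal, a minimal PyTorch sketch of that loss may help illustrate the idea; the function name `balanced_softmax_loss`, the argument layout, and the toy class counts are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def balanced_softmax_loss(logits, targets, class_counts):
    # Balanced Softmax: shift each logit by the log prior of its class, so
    # head classes receive a larger offset than tail classes. This reduces
    # the bias toward frequent classes when training on long-tailed data.
    log_prior = torch.log(class_counts.float() / class_counts.sum())
    adjusted_logits = logits + log_prior.unsqueeze(0)  # broadcast over the batch
    return F.cross_entropy(adjusted_logits, targets)

# Toy usage: 3 classes with a hypothetical long-tailed count distribution.
logits = torch.randn(8, 3)                    # model outputs for a batch of 8
targets = torch.randint(0, 3, (8,))           # ground-truth labels
class_counts = torch.tensor([5000, 500, 50])  # assumed per-class sample counts
loss = balanced_softmax_loss(logits, targets, class_counts)
```

In an adversarial-training setup of the kind the article discusses, this loss would simply replace the standard cross-entropy applied to adversarial examples; the augmentations mentioned (MixUp, Cutout, CutMix) act on the inputs and are orthogonal to the loss choice.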
Key Insights Distilled From: Xinli Yue, Ni... at arxiv.org, 03-18-2024
https://arxiv.org/pdf/2403.10073.pdf