Label refinement in adversarial training mitigates robust overfitting by addressing noisy labels.
This paper introduces Robustness Reprogramming, an approach that improves the adversarial robustness of pre-trained deep learning models without modifying their original parameters. It achieves this by replacing the standard linear feature transformation (inner-product pattern matching) with a robust non-linear pattern matching operation.
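To make the contrast concrete, here is a minimal sketch (not the paper's exact formulation) of the idea: a linear layer's output is an inner product, i.e. the dimension times the mean of the per-coordinate products w_j * x_j, so a single adversarially corrupted coordinate can dominate it. A robust non-linear alternative can replace that mean with a Huber-type M-estimate computed by iteratively re-weighted least squares. The function names, `delta`, and `iters` below are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def linear_match(w, x):
    """Standard linear pattern matching: the inner product <w, x>."""
    return float(np.dot(w, x))

def robust_match(w, x, delta=1.0, iters=10):
    """Illustrative robust alternative: replace the mean of the
    per-coordinate products with a Huber location M-estimate, so a few
    corrupted coordinates cannot dominate the output. `delta` and
    `iters` are hypothetical tuning parameters."""
    t = w * x                       # per-coordinate contributions w_j * x_j
    mu = np.median(t)               # robust initial location estimate
    for _ in range(iters):
        r = np.abs(t - mu)
        # Huber weights: full weight near the estimate, down-weighted tails.
        weights = np.minimum(1.0, delta / np.maximum(r, 1e-12))
        mu = np.sum(weights * t) / np.sum(weights)
    return float(len(t) * mu)       # rescale back to inner-product scale

w = np.array([1.0, 1.0, 1.0, 1.0])
x_adv = np.array([0.5, 0.5, 0.5, 50.0])   # one corrupted coordinate
print(linear_match(w, x_adv))   # far from the clean value of 2.0
print(robust_match(w, x_adv))   # stays much closer to 2.0
```

On clean inputs the weights are all 1 and the robust matcher reduces to the ordinary inner product; under the single corrupted coordinate above, the linear output is pulled to 51.5 while the robust output stays near the clean value.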