Addressing Robust Fairness in Adversarial Learning through Distributionally Robust Optimization
The core message of this paper is that the robust fairness issue in adversarial learning can be addressed by leveraging distributionally robust optimization (DRO) to learn class-wise adversarial weights, which enhances the model's robustness and fairness simultaneously.
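To make the idea concrete, below is a minimal sketch (not the paper's exact algorithm) of class-wise distributionally robust re-weighting of adversarial losses, assuming a Group-DRO-style exponentiated-gradient update: classes that currently suffer higher adversarial loss receive larger weights in the training objective. All names here (`per_class_mean_loss`, `update_class_weights`, `weighted_adv_loss`, `eta`) are illustrative assumptions, not identifiers from the paper.

```python
# Illustrative sketch of class-wise DRO re-weighting for adversarial training.
# Assumes adversarial examples have already been generated elsewhere (e.g., by PGD).
import torch
import torch.nn.functional as F


def per_class_mean_loss(losses: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Average the per-example adversarial losses within each class."""
    sums = torch.zeros(num_classes, device=losses.device).index_add_(0, labels, losses)
    counts = torch.bincount(labels, minlength=num_classes).clamp(min=1).float()
    return sums / counts


def update_class_weights(weights: torch.Tensor, class_losses: torch.Tensor, eta: float = 0.1) -> torch.Tensor:
    """Exponentiated-gradient (multiplicative) update: classes with higher
    adversarial loss get larger weights; renormalize onto the simplex."""
    new_w = weights * torch.exp(eta * class_losses.detach())
    return new_w / new_w.sum()


def weighted_adv_loss(logits_adv: torch.Tensor, labels: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Weight each example's adversarial loss by its class weight."""
    losses = F.cross_entropy(logits_adv, labels, reduction="none")
    return (weights[labels] * losses).sum() / weights[labels].sum()


# Toy usage: 4 classes, a batch of adversarial logits and labels.
num_classes = 4
weights = torch.full((num_classes,), 1.0 / num_classes)
logits_adv = torch.randn(16, num_classes)            # stand-in for model(x_adv)
labels = torch.randint(0, num_classes, (16,))

losses = F.cross_entropy(logits_adv, labels, reduction="none")
class_losses = per_class_mean_loss(losses, labels, num_classes)
weights = update_class_weights(weights, class_losses, eta=0.1)
loss = weighted_adv_loss(logits_adv, labels, weights)  # backpropagate this during training
```

The design choice mirrored here is that the weight update uses detached losses (it adjusts the sampling of the adversarial objective rather than flowing gradients through the weights), so hard classes are emphasized without destabilizing the model update; the paper's actual formulation of the class-wise distributional weights may differ.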