Basic Concepts
Clean-image backdoor attacks poison only the labels of training images while leaving the images themselves untouched, threatening the fairness and robustness of the resulting model.
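A minimal sketch of the label-only idea (not the paper's implementation): the attacker, acting as an outsourced labeler, flips the labels of samples whose images already contain some attacker-chosen combination of benign features, and the images are never modified. The names `poison_labels` and `has_trigger_features` are illustrative assumptions.

```python
import numpy as np

def poison_labels(labels, trigger_mask, target_class):
    """Label-only poisoning: images stay clean; only the labels of
    samples matching the (hypothetical) trigger predicate are flipped
    to the attacker's target class."""
    poisoned = labels.copy()
    poisoned[trigger_mask] = target_class
    return poisoned

# Hypothetical usage: has_trigger_features marks samples whose images
# already contain the attacker-chosen benign feature combination.
labels = np.array([0, 1, 2, 1, 0, 2])
has_trigger_features = np.array([False, True, False, True, False, False])
poisoned_labels = poison_labels(labels, has_trigger_features, target_class=0)
print(poisoned_labels)  # -> [0 0 2 0 0 2]
```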
Quotes
"To explore potential security threats posed by outsourced labels, in this paper we propose clean-image backdoor attacks."
"Our attacks seriously jeopardize the fairness and robustness of image classification models."