Venom formulates backdoor injection as a binary-task optimization problem to make backdoor attacks more robust against defenses: one task injects the backdoor, while the other enhances attack survivability by imitating the behavior of benign samples. Evaluation demonstrates significant improvements in attack survivability without compromising the original attack capability.
Backdoor attacks pose serious security threats to deep neural networks, causing misclassification of samples that carry an attacker-specified trigger. Existing defenses analyze data distribution or model behavior, but little attention has been paid to whether attacks can survive model reconstruction-based defenses. Venom addresses this gap by enhancing the survivability of existing attacks through an attention imitation loss, which forces the decision paths of poisoned samples to couple with the crucial decision paths of benign samples.
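The attention imitation idea can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the attention-map definition (channel-wise mean of squared activations, a common choice in attention-transfer work) and the squared-distance objective are assumptions made for clarity.

```python
import numpy as np

def attention_map(feature_maps: np.ndarray) -> np.ndarray:
    """Collapse a (C, H, W) feature tensor into a normalized (H, W) attention map.

    Uses the channel-wise mean of squared activations (an assumption;
    a common attention definition, not necessarily Venom's).
    """
    attn = np.mean(feature_maps ** 2, axis=0)
    # Normalize to unit L2 norm so maps from different samples are comparable
    return attn / (np.linalg.norm(attn) + 1e-8)

def attention_imitation_loss(poisoned_feats: np.ndarray,
                             benign_feats: np.ndarray) -> float:
    """Mean squared distance between poisoned and benign attention maps.

    Minimizing this term pushes the poisoned sample's internal attention
    toward that of a benign sample, coupling their decision paths.
    """
    a_p = attention_map(poisoned_feats)
    a_b = attention_map(benign_feats)
    return float(np.mean((a_p - a_b) ** 2))

# Toy check: identical features yield zero loss; distinct features a positive one.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 4, 4))
g = rng.standard_normal((8, 4, 4))
print(attention_imitation_loss(f, f))      # 0.0
print(attention_imitation_loss(f, g) > 0)  # True
```

In a real training loop this term would be added, with a weighting coefficient, to the usual poisoning objective, so the backdoor is injected while poisoned activations stay close to benign ones.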
Source: arxiv.org