The proposed Label Smoothing Poisoning (LSP) framework can effectively defeat backdoor defenses based on trigger reverse engineering by manipulating the classification confidence of backdoored samples.
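The exact smoothing scheme LSP uses is not given here, but the core idea of lowering a poisoned sample's classification confidence via label smoothing can be sketched with the standard formulation below; the epsilon value and the uniform redistribution over non-target classes are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def smooth_label(one_hot: np.ndarray, eps: float) -> np.ndarray:
    """Soften a one-hot label: the target class keeps 1 - eps of the
    probability mass, and eps is spread uniformly over the other classes.
    Training on such soft labels caps the model's confidence on the
    (poisoned) target class, which is the lever LSP-style attacks pull."""
    k = len(one_hot)
    return one_hot * (1.0 - eps) + (1.0 - one_hot) * (eps / (k - 1))

# Example: 4-class one-hot label for a poisoned sample, target class 1.
y = np.array([0.0, 1.0, 0.0, 0.0])
print(smooth_label(y, 0.3))  # target class gets 0.7, the rest 0.1 each
```

Because reverse-engineering defenses typically search for a trigger that drives confidence toward 1.0 on the target class, capping that confidence during training is what frustrates the search.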
A novel backdoor mitigation approach is proposed that uses activation-guided model editing to counter backdoor attacks on machine learning models.
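The editing procedure itself is not detailed here; one common activation-guided strategy in this family is to rank hidden units by how much more strongly they fire on suspected backdoor inputs than on clean inputs, then zero the outgoing weights of the most suspicious units. The sketch below assumes precomputed activation matrices and a pruning-style edit, both of which are illustrative assumptions rather than the paper's method.

```python
import numpy as np

def prune_suspicious_units(weights: np.ndarray,
                           clean_acts: np.ndarray,
                           backdoor_acts: np.ndarray,
                           top_k: int):
    """Rank hidden units by the gap between their mean activation on
    backdoor inputs and on clean inputs, then zero the outgoing weight
    rows of the top_k most suspicious units (a pruning-style edit)."""
    gap = backdoor_acts.mean(axis=0) - clean_acts.mean(axis=0)
    suspicious = np.argsort(gap)[-top_k:]
    edited = weights.copy()
    edited[suspicious, :] = 0.0
    return edited, suspicious

# Toy example: 4 hidden units, unit 2 fires only on backdoor inputs.
w = np.ones((4, 2))
clean = np.zeros((5, 4))
bd = np.zeros((5, 4))
bd[:, 2] = 1.0
edited, sus = prune_suspicious_units(w, clean, bd, top_k=1)
print(sus)  # unit 2 is flagged and its outgoing weights are zeroed
```

In practice such edits trade a small drop in clean accuracy for removing the backdoor pathway, so top_k is tuned against a held-out clean set.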