Syntactic Ghost: An Imperceptible General-purpose Backdoor Attacks on Pre-trained Language Models
The authors propose a novel method, Syntactic Ghost, to achieve invisible and general backdoor implantation in pre-trained language models. By manipulating poisoned samples with different syntactic structures, they outperform previous methods and achieve predefined objectives.