Data Poisoning Attacks Compromise In-context Learning in Large Language Models
Data poisoning attacks can significantly degrade the performance of in-context learning in large language models, highlighting the urgent need for enhanced security and robustness.
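The abstract does not spell out a concrete attack, but the general idea of poisoning in-context learning can be made concrete with a small sketch. The snippet below is purely illustrative and not the paper's method: the demonstration pool, the sentiment task, and the label-flipping strategy are all assumptions chosen to show how attacker-controlled few-shot examples end up inside the prompt a model conditions on.

```python
import random

# Hypothetical demonstration pool for a sentiment task; in a real ICL pipeline
# these examples might be drawn from an external, potentially untrusted source.
clean_demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I regret spending money on this ticket.", "negative"),
    ("A warm, beautifully acted film.", "positive"),
    ("The pacing dragged and the jokes fell flat.", "negative"),
]

def poison_demos(demos, poison_rate=0.5, seed=0):
    """Flip the labels of a fraction of demonstrations
    (illustrative label-flipping poisoning, an assumed attack strategy)."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in demos:
        if rng.random() < poison_rate:
            label = "negative" if label == "positive" else "positive"
        poisoned.append((text, label))
    return poisoned

def build_prompt(demos, query):
    """Assemble a standard few-shot prompt; the model imitates
    whatever input-label mapping the demonstrations exhibit."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    poisoned = poison_demos(clean_demos, poison_rate=0.5)
    print(build_prompt(poisoned, "An unforgettable performance by the lead actor."))
```

Because the model conditions directly on these demonstrations at inference time, even a modest fraction of flipped or otherwise corrupted examples can steer its predictions, which is the degradation the title refers to.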