Adversarial Attacks Undermine Explanation Robustness of Rationalization Models
Existing rationalization models are vulnerable to adversarial attacks that can substantially alter the selected rationales while leaving model predictions unchanged, undermining the credibility of these models as explanations.
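To make the attack setting concrete, below is a minimal sketch, not the paper's method, of one way such an objective could be instantiated: an embedding-space perturbation that pushes the selected rationale mask away from the original while penalizing any drift in the prediction distribution. The `RationalizationModel` toy architecture, its selector/predictor split, and all hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

class RationalizationModel(torch.nn.Module):
    """Toy select-then-predict model: a selector scores each token,
    a predictor classifies the mask-weighted embeddings."""
    def __init__(self, vocab_size=1000, dim=32, num_classes=2):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, dim)
        self.selector = torch.nn.Linear(dim, 1)          # per-token rationale score
        self.predictor = torch.nn.Linear(dim, num_classes)

    def forward_from_embeddings(self, e):
        mask = torch.sigmoid(self.selector(e)).squeeze(-1)  # (B, T) soft rationale
        pooled = (mask.unsqueeze(-1) * e).mean(dim=1)       # mask-weighted pooling
        return mask, self.predictor(pooled)

def rationale_attack(model, token_ids, steps=50, lr=0.05, eps=0.5, lam=5.0):
    """Hypothetical attack: find a small embedding perturbation that changes
    the rationale mask while keeping the prediction distribution fixed."""
    e0 = model.emb(token_ids).detach()
    with torch.no_grad():
        mask0, logits0 = model.forward_from_embeddings(e0)
        p0 = F.softmax(logits0, dim=-1)

    delta = torch.zeros_like(e0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        mask, logits = model.forward_from_embeddings(e0 + delta)
        # Maximize rationale divergence; penalize prediction drift (KL to p0).
        loss = -F.mse_loss(mask, mask0) + lam * F.kl_div(
            F.log_softmax(logits, dim=-1), p0, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation small
    return (e0 + delta).detach()

model = RationalizationModel()
ids = torch.randint(0, 1000, (1, 12))
adv_embeddings = rationale_attack(model, ids)
```

The key design point the sketch illustrates is the two-term objective: one term rewards moving the explanation (the rationale mask), while the other constrains the task output, so a successful attack changes what the model highlights without changing what it predicts.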