Transformer-based models are vulnerable to adversarial attacks, in which small, deliberately chosen input perturbations flip their predictions. Robustness can be improved with techniques such as adversarial data augmentation and an embedding perturbation loss, which penalizes the model's sensitivity to gradient-directed perturbations of its input embeddings.
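As a minimal sketch of the embedding-perturbation idea (not the source's implementation): compute the gradient of the loss with respect to the embedding, take an FGSM-style step in that direction, and add the loss on the perturbed embedding to the clean loss. The function and parameter names (`embedding_perturbation_loss`, `eps`, `lam`) are illustrative assumptions, and a toy logistic-regression "embedding" stands in for a real transformer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    # Binary cross-entropy for a single prediction p and label y.
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def embedding_perturbation_loss(w, x, y, eps=0.1, lam=1.0):
    """Clean loss plus loss on an FGSM-style perturbed embedding.

    w   : weight vector of a toy linear classifier (stand-in model)
    x   : input embedding vector
    y   : binary label (0.0 or 1.0)
    eps : perturbation magnitude
    lam : weight on the adversarial term
    """
    p = sigmoid(w @ x)
    clean = bce(p, y)
    # Gradient of the BCE loss with respect to the embedding x.
    grad_x = (p - y) * w
    # FGSM-style perturbation: step in the sign of the gradient.
    x_adv = x + eps * np.sign(grad_x)
    adv = bce(sigmoid(w @ x_adv), y)
    return clean + lam * adv
```

Training against this combined objective encourages the model to keep its prediction stable under worst-case (first-order) movements of the embedding, which is the core of embedding-space adversarial training.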