VLMs can be compromised by poisoning training data with malicious image-text pairs, leading to successful jailbreak attacks.