Unlabeled training data can be maliciously poisoned to inject backdoors into self-supervised learning models, even though the attacker controls no label information.
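To make the threat concrete, here is a minimal sketch, assuming a simple image-patch trigger, of how an attacker might poison a small fraction of an unlabeled dataset. The function `poison_unlabeled` and all parameter choices (patch size, poison fraction, patch placement) are hypothetical illustrations, not the specific attack described above; real attacks on self-supervised learning typically pair such triggers with carefully chosen target-class images.

```python
import numpy as np

def poison_unlabeled(images, poison_frac=0.05, patch_size=4,
                     patch_value=1.0, seed=0):
    """Stamp a fixed trigger patch onto a random subset of unlabeled images.

    images: float array of shape (N, H, W, C), values in [0, 1].
    Returns a poisoned copy and the indices that were modified.
    Note: no labels are needed anywhere in this process.
    """
    rng = np.random.default_rng(seed)
    poisoned = images.copy()
    n_poison = max(1, int(len(images) * poison_frac))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each selected image.
    poisoned[idx, -patch_size:, -patch_size:, :] = patch_value
    return poisoned, idx

# Example: poison 5% of a toy batch of 32x32 RGB "images".
data = np.random.default_rng(1).random((100, 32, 32, 3))
poisoned, idx = poison_unlabeled(data)
```

A model pretrained on such data can learn to associate the trigger with whatever visual content co-occurs with it, so the backdoor is planted purely through the unlabeled inputs.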