Safety fine-tuning in language models such as Llama 2-Chat can be easily circumvented, posing significant risks of misuse and harm.