Fine-tuning and quantization of large language models (LLMs) can significantly degrade their jailbreak resistance, leaving the resulting models substantially more vulnerable to attack than their safety-aligned originals.
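Both interventions are routinely applied to aligned models before deployment. As a minimal sketch of the two operations whose safety side effects are at issue, the snippet below loads a model in 4-bit quantized form and attaches a small LoRA adapter for fine-tuning, using the Hugging Face transformers and peft libraries; the model id and hyperparameters are illustrative assumptions, not values taken from the source.

```python
# Minimal sketch: 4-bit quantization + LoRA fine-tuning setup.
# Assumptions (not from the source): the model id, LoRA rank, and target
# modules are illustrative; the snippet only shows the two post-alignment
# interventions whose effect on jailbreak resistance is being discussed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical aligned base model

# Quantization: load the weights as 4-bit NF4, computing in float16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Fine-tuning: prepare the quantized model for training, then attach a
# low-rank (LoRA) adapter to the attention projections.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # typical for Llama-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Note that each step perturbs the aligned weights, quantization by rounding them and fine-tuning by updating them toward a new objective, which is why either can shift the model's safety behavior even when the training data itself is benign.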