Large Language Models (LLMs) can be misused in conversational settings, motivating a growing body of research on attacks, defenses, and safety evaluations.