CANTTALKABOUTTHIS is a dataset designed to train language models to maintain topical focus and resist distraction during task-oriented dialogues.
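The kind of topical-focus behavior such a dataset targets can be evaluated with a guardrail check. Below is a deliberately minimal, hypothetical sketch (the keyword lists and function are invented for illustration, not the dataset's actual evaluation protocol): a response is flagged off-topic if it shares no vocabulary with the allowed scenario topic.

```python
# Hypothetical topical-guardrail check; the scenario keyword lists are
# invented for illustration and are not part of CANTTALKABOUTTHIS itself.
ALLOWED = {"banking": {"account", "balance", "transfer", "loan"}}

def on_topic(response: str, scenario: str) -> bool:
    """Return True if the response mentions any allowed keyword for the scenario."""
    words = set(response.lower().split())
    return bool(words & ALLOWED[scenario])

print(on_topic("Your account balance is $40", "banking"))      # True
print(on_topic("Let's talk about movies instead", "banking"))  # False
```

A real evaluation would use a trained classifier or an LLM judge rather than keyword overlap, but the interface, scoring each response for scenario adherence, is the same.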
DialogBench, a comprehensive benchmark, is proposed to standardize the evaluation of large language models (LLMs) as human-like dialogue systems, uncovering their strengths and limitations across diverse dialogue tasks.
BootTOD proposes a self-bootstrapping framework for task-oriented dialogue representations, outperforming contrastive methods.
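The summary does not specify BootTOD's exact mechanism, but a common ingredient of self-bootstrapping representation learners (BYOL-style, as opposed to contrastive methods that need negative pairs) is an exponential-moving-average target encoder. A minimal sketch of that EMA update, under the assumption that BootTOD uses something of this flavor:

```python
# Hedged sketch: EMA target-weight update, a typical self-bootstrapping
# ingredient; not necessarily BootTOD's exact formulation.
def ema_update(target: list[float], online: list[float], tau: float = 0.99) -> list[float]:
    """Move target weights toward online weights: t <- tau*t + (1-tau)*o."""
    return [tau * t + (1 - tau) * o for t, o in zip(target, online)]

target = [0.0, 0.0]
online = [1.0, 2.0]
target = ema_update(target, online)
print([round(x, 6) for x in target])  # [0.01, 0.02]
```

The online encoder is trained to predict the target encoder's output, so no negative examples are required.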
Mix-Initiative Dynamic Prefix Tuning (IDPT) enhances response generation in dialogue systems by incorporating initiative awareness.
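Generic prefix tuning, the mechanism IDPT builds on, prepends trainable prefix vectors to the token embeddings while the backbone model stays frozen. A shapes-only sketch (the sizes are hypothetical, and IDPT's "dynamic" initiative-aware prefix selection is not shown):

```python
# Hedged sketch of plain prefix tuning: learned prefix vectors are
# prepended to token embeddings before the frozen model runs.
import numpy as np

def prepend_prefix(token_embs: np.ndarray, prefix: np.ndarray) -> np.ndarray:
    """token_embs: (seq, dim); prefix: (prefix_len, dim) -> (prefix_len + seq, dim)."""
    return np.concatenate([prefix, token_embs], axis=0)

x = np.zeros((5, 8))  # 5 tokens, hidden size 8 (hypothetical sizes)
p = np.ones((3, 8))   # 3 trainable prefix vectors
print(prepend_prefix(x, p).shape)  # (8, 8)
```

In a dynamic variant, the prefix `p` would be chosen or mixed per turn depending on which speaker holds the initiative.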
The DIIR framework learns and applies dialogue strategies for Motivational Interviewing, improving active listening and promoting collaborative responses.
CTSM combines trait and state emotions to enhance empathetic response generation by perceiving a comprehensive range of emotions in contextual dialogues.
Coherent and engaging knowledge selection is crucial for generating informative responses in dialogue systems; the CET2 framework addresses shortcomings of existing knowledge selection methods by explicitly modeling topic transitions for coherent conversations.
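A topic-transition-aware selector must trade off relevance to the current query against coherence with the preceding topic. A toy sketch of that trade-off (the Jaccard-overlap scorer and the weight `alpha` are invented for illustration, not CET2's actual model):

```python
# Hypothetical knowledge-selection sketch: score candidates by a weighted
# mix of query relevance and coherence with the previous turn's topic.
def overlap(a: str, b: str) -> float:
    """Jaccard word overlap between two strings (illustrative relevance proxy)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def select(candidates: list[str], query: str, prev_topic: str, alpha: float = 0.7) -> str:
    """Pick the candidate maximizing alpha*relevance + (1-alpha)*topic coherence."""
    return max(candidates,
               key=lambda k: alpha * overlap(k, query) + (1 - alpha) * overlap(k, prev_topic))

candidates = ["paris is the capital of france", "the eiffel tower is in paris"]
best = select(candidates, "tell me about the eiffel tower", "paris landmarks")
print(best)  # "the eiffel tower is in paris"
```

A learned selector would replace the word-overlap scores with neural similarities, but the two-term objective captures the coherence-versus-relevance tension the framework targets.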