DiffusionDialog, a novel approach that combines a pre-trained language model with a latent-based diffusion model, can greatly enhance the diversity of dialogue responses while maintaining coherence and achieving high inference efficiency.
Causal reasoning can be leveraged to alleviate the hallucination problem in knowledge-grounded dialogue generation by exploiting the interaction between dialogue history and external knowledge.
The authors propose a novel end-to-end pipeline that uses prompting techniques with large language models to generate personality-based synthetic dialogue data.
The authors present a high-quality benchmark, Ms.WoW, for evaluating multi-source dialogue knowledge selection and response generation. They also introduce the challenge of dialogue knowledge plug-and-play, which tests whether models can use new support knowledge in a zero-shot fashion.