Linguistic entrainment, where conversational participants align their linguistic patterns, can improve the naturalness and success of task-oriented dialogue systems. This work introduces methods to achieve dialogue entrainment in a GPT-2-based end-to-end system through training instance weighting, an entrainment-specific loss, and keyword-based generation conditioning.
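The summary above names the techniques but not their implementation, so the following is only a minimal sketch of one of them, entrainment-based training instance weighting, layered on a standard GPT-2 fine-tuning step; the lexical-overlap `entrainment_score` heuristic and the `1 + score` weighting scheme are illustrative assumptions, not the paper's actual method.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Sketch: scale each instance's language-modeling loss by an entrainment
# score (here, lexical overlap between the user turn and the response).
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def entrainment_score(user_turn: str, response: str) -> float:
    """Toy lexical-overlap score in [0, 1], used as an instance weight."""
    user_tokens = set(user_turn.lower().split())
    resp_tokens = set(response.lower().split())
    if not resp_tokens:
        return 0.0
    return len(user_tokens & resp_tokens) / len(resp_tokens)

def weighted_lm_loss(user_turn: str, response: str) -> torch.Tensor:
    """GPT-2 LM loss on context + response, up-weighted for entraining instances.
    (For brevity the context tokens are not masked out of the loss.)"""
    enc = tokenizer(user_turn + tokenizer.eos_token + response, return_tensors="pt")
    out = model(**enc, labels=enc["input_ids"])
    weight = 1.0 + entrainment_score(user_turn, response)
    return weight * out.loss
```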
DiagGPT is a multi-agent AI system that leverages the strong knowledge and reasoning capabilities of Large Language Models (LLMs) to enable flexible task-oriented dialogues. It can proactively guide users, manage dialogue topics, and assist in completing specific tasks.
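A minimal sketch of how LLM-driven topic management for proactive guidance might look, loosely in the spirit of the description above; the topic-stack design and the `call_llm` helper are assumptions for illustration, not DiagGPT's actual implementation.

```python
from typing import List

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever chat-completion API is used."""
    raise NotImplementedError

class TopicManager:
    """Keeps a stack of active dialogue topics and asks the LLM whether to
    push a new topic, pop a finished one, or keep the current focus."""

    def __init__(self) -> None:
        self.topics: List[str] = []

    def update(self, user_utterance: str) -> str:
        prompt = (
            f"Active topics (top of stack last): {self.topics}\n"
            f"User said: {user_utterance}\n"
            "Reply with exactly one of: PUSH <new topic>, POP, KEEP."
        )
        decision = call_llm(prompt).strip()
        if decision.startswith("PUSH"):
            self.topics.append(decision[len("PUSH"):].strip())
        elif decision.startswith("POP") and self.topics:
            self.topics.pop()
        # The current topic guides the next system action / question.
        return self.topics[-1] if self.topics else "general"
```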
DivTOD is a novel dialogue pre-training model that collaborates with large language models (LLMs) to learn diverse task-oriented dialogue representations, transferring rich general background knowledge and task-specific domain knowledge from LLMs to smaller models.
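A minimal sketch of one plausible reading of this knowledge transfer: an LLM enriches dialogues with background and domain knowledge, and a smaller model is pre-trained on the enriched text with a masked-LM objective. The `call_llm` helper, the BERT student, and the objective are illustrative assumptions, not DivTOD's actual pipeline.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around the LLM used as the knowledge source."""
    raise NotImplementedError

def enrich_dialogue(dialogue: str) -> str:
    """Ask the LLM to rewrite a dialogue, weaving in relevant knowledge."""
    return call_llm(
        "Rewrite this task-oriented dialogue, adding relevant background "
        f"and domain knowledge:\n{dialogue}"
    )

# Pre-train a smaller student model on the LLM-enriched dialogues.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
student = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def mlm_step(dialogue: str) -> torch.Tensor:
    """One masked-LM training step on an enriched dialogue."""
    enc = tokenizer(enrich_dialogue(dialogue), return_tensors="pt", truncation=True)
    labels = enc["input_ids"].clone()
    # Mask roughly 15% of tokens (special tokens ignored for brevity).
    mask = torch.rand(labels.shape) < 0.15
    enc["input_ids"][mask] = tokenizer.mask_token_id
    labels[~mask] = -100  # compute loss only on masked positions
    return student(**enc, labels=labels).loss
```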