The paper proposes a framework that combines a domain-specific small language model (SLM) with large language models (LLMs) for open-domain dialogue evaluation. The key aspects are:
The SLM leverages both sentence-level and Abstract Meaning Representation (AMR) graph-level information to learn enhanced semantic representations. A gating mechanism fuses the two types of representations, and a contrastive loss aligns the sentence and graph features.
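The summary does not give the paper's exact formulation, but the fusion-and-alignment idea can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the gate form, the InfoNCE-style contrastive loss, and names such as `GatedFusion` and `contrastive_alignment_loss` are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFusion(nn.Module):
    """Fuse a sentence embedding with an AMR graph embedding via a learned gate."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, sent: torch.Tensor, graph: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) decides, per dimension, how much of each view to keep.
        g = torch.sigmoid(self.gate(torch.cat([sent, graph], dim=-1)))
        return g * sent + (1.0 - g) * graph

def contrastive_alignment_loss(sent: torch.Tensor, graph: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss: each sentence embedding should be closest to the
    graph embedding of the same example within the batch."""
    sent = F.normalize(sent, dim=-1)
    graph = F.normalize(graph, dim=-1)
    logits = sent @ graph.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(sent.size(0), device=sent.device)
    return F.cross_entropy(logits, targets)
```

In such a setup, the fused representation would feed the SLM's scoring head, while the contrastive term is added to the training objective to keep the sentence and graph views aligned.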
The output score from the SLM and the AMR graph information are then integrated into the prompt of the LLM to provide domain-specific knowledge and improve its in-context learning performance.
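A rough sketch of how such a prompt might be assembled follows; the template wording, field names, and the 0-1 score scale are assumptions for illustration, not the paper's actual prompt.

```python
def build_evaluation_prompt(context: str, response: str,
                            amr_graph: str, slm_score: float) -> str:
    """Assemble an in-context evaluation prompt that exposes the SLM's score
    and a linearized AMR graph of the response as extra evidence."""
    return (
        "You are evaluating an open-domain dialogue response.\n\n"
        f"Dialogue context:\n{context}\n\n"
        f"Candidate response:\n{response}\n\n"
        f"AMR graph of the response (linearized):\n{amr_graph}\n\n"
        f"A domain-specific evaluator scored this response {slm_score:.2f} on a 0-1 scale.\n"
        "Considering all of the above, rate the response from 0 to 1 and briefly justify."
    )
```

The LLM then produces the final evaluation, conditioned on both the structured AMR evidence and the SLM's domain-informed judgment.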
Experiments on the DailyDialog++ and PersonaChat datasets show that the proposed method outperforms a wide range of baselines, including LLM-based methods, especially in discriminating adversarial negative responses.
The framework effectively incorporates structured semantic information from AMR graphs into the dialogue evaluation process, making it more robust to challenging adversarial examples compared to existing approaches.