The core contribution of this paper is to leverage unlabeled data in the target domain, through joint training and self-training, to improve zero-shot dialogue state tracking performance.
By exploiting unlabeled data in the target domain and employing auxiliary tasks to generate and select dialogue states, the performance of zero-shot dialogue state tracking models can be improved substantially.
The proposed Mixture of Prefix Experts (MoPE) model establishes connections between similar slots across domains, strengthening transfer to unseen domains in zero-shot dialogue state tracking.
A zero-shot, open-vocabulary pipeline system integrates domain classification and dialogue state tracking, enabling efficient and adaptable task-oriented dialogue understanding without relying on predefined ontologies.