Key Concepts
LARA enhances multi-turn intent classification with Linguistic-Adaptive Retrieval-Augmented LLMs.
Summary
The paper introduces LARA, a framework designed to improve accuracy in multi-turn intent classification across six languages. By combining a fine-tuned smaller model with a retrieval-augmented mechanism built around LLMs, LARA dynamically draws on past dialogues and relevant intents to sharpen context understanding. Its adaptive retrieval techniques also strengthen cross-lingual capabilities without extensive retraining. Comprehensive experiments show that LARA outperforms existing methods by 3.67% in average accuracy.
Introduction:
- Chatbots play a crucial role in e-commerce platforms for efficient customer service.
- Multi-turn conversations pose challenges due to contextual factors and evolving user intentions.
Problem Formulation:
- Single-turn and multi-turn intent classification differ in complexity and context dependency.
LARA Framework:
- Combines XLM-based model training with in-context learning for multi-turn dialogue classification.
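The retrieval-augmented, in-context-learning step described above can be illustrated with a minimal sketch: retrieve the labeled utterances most similar to the current query and format them as demonstrations in an LLM prompt. This is not the paper's actual implementation; the toy bag-of-words embedding, the `build_prompt` helper, and the example pool are all hypothetical stand-ins (LARA uses learned encoders and its own prompt design).

```python
from collections import Counter
import math

def embed(text):
    # Hypothetical stand-in: a bag-of-words vector.
    # LARA itself uses learned multilingual encoders.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(query, labeled_pool, k=2):
    """Retrieve the k labeled utterances most similar to the query
    and format them as in-context demonstrations for an LLM."""
    q = embed(query)
    ranked = sorted(labeled_pool,
                    key=lambda ex: cosine(q, embed(ex["text"])),
                    reverse=True)
    lines = [f'Utterance: "{ex["text"]}"\nIntent: {ex["intent"]}'
             for ex in ranked[:k]]
    lines.append(f'Utterance: "{query}"\nIntent:')
    return "\n\n".join(lines)

pool = [
    {"text": "where is my package", "intent": "track_order"},
    {"text": "i want my money back", "intent": "refund_request"},
    {"text": "cancel my order please", "intent": "cancel_order"},
]
print(build_prompt("my package has not arrived", pool, k=2))
```

The key design idea this illustrates is that the classifier's knowledge lives in the retrieved demonstrations rather than in LLM fine-tuning, which is what allows new intents and languages to be supported without extensive retraining.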
Experiments:
- Datasets from eight markets were used to evaluate performance metrics such as accuracy.
Results and Discussions:
- Comparison of LARA with baselines shows improved performance across different prompts.
Conclusion:
- LARA offers an effective solution for multi-turn intent classification, enhancing accuracy and efficiency.
Statistics
Comprehensive experiments demonstrate that LARA achieves state-of-the-art performance on multi-turn intent classification, improving average accuracy by 3.67% over existing methods.