Neeko is a framework for multi-character role-playing that uses dynamic LoRA blocks to adapt to diverse characters. It decomposes the role-playing process into distinct stages, improving adaptability and making user interactions more engaging.
Large Language Models (LLMs) have revolutionized open-domain dialogue agents but struggle in multi-character role-playing scenarios. Neeko addresses these challenges with a dynamic low-rank adapter (LoRA) strategy that breaks role-playing into three stages: agent pre-training, multiple characters playing, and character incremental learning. This design handles both seen and unseen roles by adapting to each character's unique attributes, personality, and speaking patterns.
Neeko is designed to play multiple characters within long conversations. By pre-training a LoRA block for each predefined character and dynamically activating the right block based on the user-specified character prompt, Neeko outperforms existing methods on multi-character role-playing.
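The per-character activation described above can be sketched as a linear layer with a frozen shared weight plus one low-rank adapter pair per character, selected by character ID. This is a minimal illustrative sketch, not the paper's implementation; the class and parameter names are assumptions, and zero-initializing B (so a fresh adapter starts as a no-op) follows standard LoRA practice.

```python
import numpy as np

class MultiCharacterLoRALinear:
    """Hypothetical sketch of Neeko-style dynamic LoRA: one frozen base
    weight shared by all characters, plus a low-rank (A, B) adapter pair
    per character, activated by the user-specified character ID."""

    def __init__(self, d_in, d_out, rank, characters, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))  # frozen base weight
        # One LoRA block per character: A is (rank, d_in), B is (d_out, rank).
        # B starts at zero, so an untrained adapter leaves the base untouched.
        self.blocks = {
            c: (rng.normal(scale=0.01, size=(rank, d_in)),
                np.zeros((d_out, rank)))
            for c in characters
        }

    def forward(self, x, character):
        A, B = self.blocks[character]       # dynamic block activation
        return self.W @ x + B @ (A @ x)     # base output + low-rank update
```

In this sketch, switching characters mid-conversation is just a dictionary lookup, which is what makes the approach cheap relative to swapping whole fine-tuned models.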
By routing each character through its own LoRA block, Neeko delivers more engaging and versatile interaction experiences. Its incremental learning stage adds fusion and expansion strategies, so new roles can be incorporated efficiently without degrading previously learned character features.
Key insights distilled from content by Xiaoyan Yu, T... on arxiv.org, 03-04-2024
https://arxiv.org/pdf/2402.13717.pdf