The study presents a personalized low-rank adaptation (PLoRA) framework for human-centered text understanding (HCTU), emphasizing the importance of personalization in NLP tasks. By combining task-specific adaptation with user-specific knowledge injection, the proposed method outperforms existing models across learning scenarios. The PLoRA framework is effective, lightweight, and easy to deploy in pre-trained language models (PLMs) for HCTU tasks.
The research explores the challenges of adapting pre-trained language models (PLMs) for human-centered text understanding. It introduces a personalized LoRA (PLoRA) framework with plug-and-play capabilities to enhance adaptability in sentiment analysis tasks. By incorporating personalized dropout and mutual information maximization strategies, PLoRA addresses few/zero-shot learning scenarios effectively.
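The summary does not spell out how personalized dropout enables zero-shot use, so the following is a hedged sketch of one plausible reading: during training, the user-specific embedding is occasionally replaced by a neutral vector, forcing the model to also learn a user-agnostic path that can serve unseen users. The dropout probability `p` and the zero-vector fallback are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

def personalized_dropout(user_emb, p=0.3, train=True, rng=rng):
    """Sketch: with probability p, drop the personal signal entirely
    so the model learns a user-agnostic fallback (zero-shot path).
    The zero-vector fallback is an illustrative assumption."""
    if train and rng.random() < p:
        return np.zeros_like(user_emb)
    return user_emb

u = np.ones(4)                        # toy embedding for one known user
samples = [personalized_dropout(u) for _ in range(1000)]
drop_rate = sum(float(v.sum() == 0.0) for v in samples) / len(samples)
# drop_rate is close to p; at inference (train=False) the embedding
# passes through unchanged for known users, while unseen users can be
# served with the neutral vector the model was trained to handle.
```

The design intuition is that the dropped-out path acts as a shared default persona, so a brand-new user degrades gracefully to the task-level model rather than failing outright.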
Traditional personalized knowledge injection (PKI) methods are compared with parameter-efficient fine-tuning (PEFT) techniques such as adapters, prompt tuning, and low-rank adaptation (LoRA). The study proposes PLoRA, a combination of the PKI and LoRA mechanisms, to inject personalized information into PLMs without full-model fine-tuning. Experiments on benchmark datasets demonstrate the superiority of PLoRA in full/few/zero-shot learning scenarios.
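To make the PKI-plus-LoRA combination concrete, here is a minimal numerical sketch of one layer, assuming the standard LoRA update (frozen weight plus a low-rank trainable delta) with an additional additive projection of a learned user embedding. The dimensions, the projection matrix `P`, and the additive combination are illustrative assumptions; the paper's exact formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, d_user = 8, 8, 2, 4

W0 = rng.normal(size=(d_out, d_in))          # frozen pre-trained weight
A = rng.normal(size=(rank, d_in)) * 0.01     # trainable LoRA down-projection
B = np.zeros((d_out, rank))                  # trainable LoRA up-projection (zero-init)
P = rng.normal(size=(d_out, d_user)) * 0.01  # assumed personalization projection (PKI)

def plora_forward(x, user_emb):
    """Frozen base output + low-rank task update + user-specific injection."""
    return W0 @ x + B @ (A @ x) + P @ user_emb

x = rng.normal(size=d_in)       # token/hidden representation
u = rng.normal(size=d_user)     # learned embedding for one user
y = plora_forward(x, u)

# With B zero-initialized (standard LoRA practice), the low-rank term
# starts as a no-op, so the layer initially equals the frozen base plus
# the personalization term; only A, B, and P need gradient updates.
```

Because only `A`, `B`, and `P` are trainable while `W0` stays frozen, the per-user cost reduces to a small embedding plus shared low-rank matrices, which is what makes the approach lightweight and plug-and-play.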
Key Insights Extracted From
by You Zhang, Ji... at arxiv.org, 03-12-2024
https://arxiv.org/pdf/2403.06208.pdf