The QASE module enhances generative PLMs for better performance on machine reading comprehension (MRC).
KeNet, a model that leverages external knowledge, achieves strong performance on multi-label text classification.
IMPOSSIBLE DISTILLATION distills high-quality paraphrase datasets and models from low-quality LMs using paraphrastic proximity and critic-guided filtering.
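As a rough illustration of the critic-guided filtering stage, the sketch below keeps candidate pairs that a semantic critic scores as close while their surface forms stay diverse; the `semantic_sim` callable and the `min_sem`/`max_lex` thresholds are assumptions for illustration, not the paper's actual critics.

    # Minimal sketch of critic-guided pair filtering (assumed thresholds).
    from difflib import SequenceMatcher
    from typing import Callable, Iterable, List, Tuple

    def lexical_overlap(a: str, b: str) -> float:
        # Surface similarity in [0, 1]; values near 1 mean near-copies.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def filter_pairs(pairs: Iterable[Tuple[str, str]],
                     semantic_sim: Callable[[str, str], float],
                     min_sem: float = 0.8,
                     max_lex: float = 0.6) -> List[Tuple[str, str]]:
        # Keep pairs that are semantically close ("paraphrastic proximity")
        # yet lexically diverse, so the distilled set is not trivial copies.
        return [(s, c) for s, c in pairs
                if semantic_sim(s, c) >= min_sem
                and lexical_overlap(s, c) <= max_lex]

In practice `semantic_sim` would be an embedding- or NLI-based scorer; any placeholder callable (e.g. `lambda a, b: 1.0`) makes the sketch runnable.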
Generative PLMs can be enhanced with the lightweight QASE module to improve text generation quality and factual consistency in MRC tasks.
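One way to picture such a lightweight add-on is a small span head over the encoder's token states, trained jointly with the generation loss; the layer sizes, the `alpha` weight, and the overall shape below are assumptions, not QASE's published architecture.

    import torch
    import torch.nn as nn

    class SpanHead(nn.Module):
        # Lightweight span-extraction head: start/end logits per token.
        def __init__(self, hidden: int):
            super().__init__()
            self.start = nn.Linear(hidden, 1)
            self.end = nn.Linear(hidden, 1)

        def forward(self, enc_states: torch.Tensor):
            # enc_states: (batch, seq_len, hidden) from the PLM encoder.
            return (self.start(enc_states).squeeze(-1),
                    self.end(enc_states).squeeze(-1))

    def joint_loss(gen_loss, start_logits, end_logits, starts, ends,
                   alpha: float = 0.5):
        # Generation loss plus span supervision; alpha is an assumed weight.
        ce = nn.CrossEntropyLoss()
        return gen_loss + alpha * (ce(start_logits, starts) +
                                   ce(end_logits, ends))

Grounding generation in predicted answer spans is one plausible route to the factual-consistency gains the summary describes.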
Reverse KLD is proposed for distilling LLM knowledge into smaller models, improving performance and reducing exposure bias.
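For intuition, reverse KLD measures D_KL(q_student || p_teacher), which is mode-seeking: the student concentrates on high-probability teacher modes instead of spreading mass over all of them. The per-token sketch below is a simplified dense form; the cited work may instead optimize a sequence-level objective over sampled student outputs.

    import torch
    import torch.nn.functional as F

    def reverse_kl(student_logits: torch.Tensor,
                   teacher_logits: torch.Tensor) -> torch.Tensor:
        # D_KL(q || p) = E_q[log q - log p], summed over the vocabulary
        # and averaged over tokens. Forward KL would swap q and p.
        log_q = F.log_softmax(student_logits, dim=-1)
        log_p = F.log_softmax(teacher_logits, dim=-1)
        return (log_q.exp() * (log_q - log_p)).sum(dim=-1).mean()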
Instruction tuning improves the social understanding of large language models for social scientific tasks.
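As a concrete (hypothetical) example of what such tuning data can look like, a social-science task is recast as an instruction-response pair; the field names and wording below are illustrative, not drawn from the paper.

    # Hypothetical instruction-tuning record for a stance-detection task.
    record = {
        "instruction": "Classify the stance of the statement toward the "
                       "policy as favor, against, or neutral.",
        "input": "The new regulation will only burden small businesses.",
        "output": "against",
    }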
Incorporating backward dependencies in large language models enhances sentence embeddings.
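The summary does not say how the backward context is obtained; one common trick, shown below as an assumed illustration rather than this paper's method, is to feed the sentence twice to a causal LM and pool over the second copy, whose tokens have attended to the full sentence.

    import torch

    def second_pass_pool(hidden: torch.Tensor, seq_len: int) -> torch.Tensor:
        # hidden: (batch, 2 * seq_len, dim) states for "<sent> <sent>".
        # Tokens in the second copy see the whole sentence under causal
        # attention, approximating backward dependencies.
        return hidden[:, seq_len:2 * seq_len, :].mean(dim=1)  # (batch, dim)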
Sentence representations can be conditioned efficiently using hypernetworks and contrastive learning.
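A minimal sketch of the idea, with assumed dimensions and a single-layer hypernetwork: a condition vector generates the weights of an adapter applied to base sentence embeddings, and the result is trained with an InfoNCE-style contrastive loss over in-batch negatives.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CondAdapter(nn.Module):
        # Hypernetwork: cond_emb -> weights of a per-example linear adapter.
        def __init__(self, dim: int, cond_dim: int):
            super().__init__()
            self.dim = dim
            self.hyper = nn.Linear(cond_dim, dim * dim)

        def forward(self, sent_emb: torch.Tensor, cond_emb: torch.Tensor):
            # sent_emb: (batch, dim); cond_emb: (batch, cond_dim).
            W = self.hyper(cond_emb).view(-1, self.dim, self.dim)
            return torch.bmm(W, sent_emb.unsqueeze(-1)).squeeze(-1)

    def info_nce(anchors: torch.Tensor, positives: torch.Tensor,
                 tau: float = 0.05) -> torch.Tensor:
        # Contrastive loss: matched pairs attract, in-batch pairs repel.
        sims = F.cosine_similarity(anchors.unsqueeze(1),
                                   positives.unsqueeze(0), dim=-1) / tau
        return F.cross_entropy(sims, torch.arange(anchors.size(0)))

Generating adapter weights from the condition keeps the base encoder frozen and shared, which is where the efficiency comes from.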
Selective parameter freezing in LoRA-SP improves fine-tuning efficiency without compromising model performance.
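In PEFT-style implementations where LoRA parameters carry "lora_" in their names (an assumption about the naming convention), selective freezing can be sketched as below; the random half-and-half split is one simple reading of the summary, and LoRA-SP's exact selection rule may differ.

    import random
    import torch.nn as nn

    def freeze_lora_subset(model: nn.Module, train_ratio: float = 0.5,
                           seed: int = 0) -> None:
        # Keep roughly `train_ratio` of LoRA matrices trainable and freeze
        # the rest, cutting gradient and optimizer-state memory.
        rng = random.Random(seed)
        for name, param in model.named_parameters():
            if "lora_" in name:
                param.requires_grad = rng.random() < train_ratio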
The author introduces KeNet, a Knowledge-enhanced Doc-Label Attention Network, to address challenges in Multi-Label Text Classification by incorporating external knowledge and attention mechanisms.
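The sketch below reduces the idea to its two named ingredients: label-wise attention over document token states, plus an external knowledge vector fused in before scoring. The dimensions, the concatenation-based fusion, and the single attention step are assumptions; KeNet's published design is richer.

    import torch
    import torch.nn as nn

    class DocLabelAttention(nn.Module):
        # Each label embedding attends over document tokens to form a
        # per-label view, which is scored for multi-label prediction.
        def __init__(self, hidden: int, n_labels: int, know_dim: int):
            super().__init__()
            self.labels = nn.Parameter(torch.randn(n_labels, hidden))
            self.score = nn.Linear(hidden + know_dim, 1)

        def forward(self, tokens: torch.Tensor, knowledge: torch.Tensor):
            # tokens: (batch, seq, hidden); knowledge: (batch, know_dim).
            attn = torch.softmax(tokens @ self.labels.T, dim=1)   # (B, T, L)
            per_label = attn.transpose(1, 2) @ tokens             # (B, L, H)
            k = knowledge.unsqueeze(1).expand(-1, per_label.size(1), -1)
            return self.score(torch.cat([per_label, k], dim=-1)).squeeze(-1)

The resulting (batch, n_labels) logits would go through a sigmoid with binary cross-entropy, the usual multi-label setup.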