ContrastWSD, a RoBERTa-based model, enhances metaphor detection by integrating Word Sense Disambiguation and outperforms competing methods.
SensoryT5 integrates sensory information into the T5 model for fine-grained emotion classification, improving performance and highlighting the potential of neuro-cognitive data in NLP.
Vietnamese PhoGPT models offer state-of-the-art performance and open-source availability for language generation tasks.
LoRA-SP is a novel method for efficient fine-tuning of large language models, substantially reducing compute and memory requirements while maintaining high performance.
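For context, the base LoRA technique that LoRA-SP builds on can be sketched as below. The dimensions, rank, and scaling factor are illustrative assumptions, and LoRA-SP's own selective-parameter scheme is not reproduced here; this only shows the standard frozen-weight-plus-low-rank-adapter forward pass.

```python
import numpy as np

# Illustrative LoRA sketch: a frozen weight W plus a trainable low-rank
# update (alpha / r) * B @ A. Dimensions and hyperparameters are made up.
d, k, r, alpha = 8, 8, 2, 4

rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                 # zero-initialized so the adapter starts as a no-op

def lora_forward(x):
    # y = x W^T + (alpha / r) * x A^T B^T; only A and B would be trained
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(1, k))
# With B = 0 the adapted layer matches the frozen layer exactly
assert np.allclose(lora_forward(x), x @ W.T)
```

Only `A` and `B` (roughly `2 * r * d` parameters per adapted matrix instead of `d * k`) are updated during fine-tuning, which is where the resource savings come from.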
Fine-tuning with ProMoT reduces format specialization and enhances generalization in LLMs.
Compressed language models built via knowledge distillation enable efficient question answering in Spanish.
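A minimal sketch of the distillation objective commonly used for such compression, the temperature-softened KL divergence between teacher and student distributions (after Hinton et al.); the temperature value is an illustrative assumption, not this paper's setting.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T**2 * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

teacher = np.array([[1.0, 2.0, 3.0]])
student = np.array([[3.0, 2.0, 1.0]])
# KL divergence is zero iff the distributions match, positive otherwise
assert distillation_loss(teacher, teacher) < 1e-9
assert distillation_loss(student, teacher) > 0
```

In practice this term is usually mixed with the ordinary cross-entropy on gold labels.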
Effective contrastive losses in Sentence Representation Learning depend on three key components: Gradient Dissipation, Weight, and Ratio.
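For reference, a common contrastive objective in sentence representation learning is the in-batch InfoNCE loss sketched below. This is a generic formulation with an assumed temperature, not an implementation of the paper's analysis of gradient dissipation, weight, and ratio.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.05):
    # Cosine similarity between L2-normalized embeddings; row i's positive
    # is positives[i], all other rows act as in-batch negatives.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sim = a @ p.T / temperature
    # Stable log-sum-exp over each row of the similarity matrix
    m = sim.max(axis=1)
    lse = np.log(np.exp(sim - m[:, None]).sum(axis=1)) + m
    # Cross-entropy with the diagonal (matching pair) as the target class
    return (lse - np.diag(sim)).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# Aligned pairs score a lower loss than deliberately mismatched ones
assert info_nce(x, x) < info_nce(x, x[::-1])
```

The temperature controls how sharply the loss weights hard negatives, which is one of the knobs such analyses of contrastive objectives examine.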