Transformer-based Large Language Models (LLMs) have significantly expanded the scope of natural language processing (NLP) applications, transcending their initial use in chatbot technology. These models demonstrate versatility in diverse domains, from code interpretation and image captioning to facilitating interactive systems and advancing computational fields.
HumanEval-XL introduces a comprehensive benchmark for multilingual code generation, addressing the gap in evaluating cross-lingual NL generalization of LLMs.
PE introduces a novel method using hyperbolic spaces to model feature interactions efficiently, demonstrating effectiveness in generating hierarchical explanations.
Sequence-to-sequence models for abstractive text summarization are enhanced through metaheuristic approaches.
LLMs can be leveraged efficiently for cost-effective data annotation in NLP.
mPLUG-Owl introduces a novel training paradigm to enhance large language models with multimodal abilities through modularized learning.
EthioLLM introduces multilingual language models for five Ethiopian languages, addressing the lack of resources in low-resource languages.
Large Language Models face knowledge cutoff issues, but EasyEdit offers an efficient knowledge-editing solution.
AraPoemBERT advances Arabic poetry analysis, outperforming other models in various NLP tasks.
The proposed bi-encoder-based detector surpasses other methods for OOD detection in NLP, performing well even when no labeled OOD samples are available.