ALARM introduces a framework for aligning large language models with human preferences through hierarchical rewards modeling in reinforcement learning.
LLMs can enhance induction through deduction.
PipeRAG improves generation efficiency through pipeline parallelism, flexible retrieval intervals, and performance modeling.
(Chat)GPT performs notably worse than BERT in detecting short-term semantic changes, and only slightly worse in detecting long-term changes.
Large language models can effectively perform few-shot relation extraction tasks with the CoT-ER approach, outperforming fully-supervised methods.
ACLSum introduces a novel dataset for multi-aspect summarization of scientific papers, addressing the limitations of existing resources.
Se2 proposes an effective method for improving In-Context Learning through sequential example selection.
The OATS dataset introduces new domains and comprehensive annotations for ABSA tasks, aiming to bridge gaps in existing datasets and advance ABSA research.
LLMs struggle to follow diverse instructions in knowledge-intensive writing tasks, highlighting the need for improvement.
LoRAMoE introduces a novel framework to address the conflict between improving LLM performance on downstream tasks and preventing world knowledge forgetting during SFT.