Core Concepts
Large Language Models (LLMs) substantially advance Rhetorical Structure Theory (RST) discourse parsing, achieving state-of-the-art results when fine-tuned for the task.
Summary
1. Abstract:
LLMs with billions of parameters have had a major impact on NLP tasks.
The paper explores what benefits LLMs bring to RST discourse parsing.
Llama 2 fine-tuned with QLoRA achieves state-of-the-art (SOTA) results.
2. Introduction:
RST is crucial for many NLP tasks.
Existing neural methods use pre-trained language models (PLMs) for RST parsing.
The work shifts to decoder-only LLMs in pursuit of better results.
3. Proposed Approach:
Three possible approaches are considered for applying LLMs to RST parsing.
Both bottom-up and top-down parsing strategies are covered.
Parsing steps are expressed as prompts for the LLMs (see the sketch after this outline).
4. Data Extraction:
LLMs have billions of parameters.
Llama 2 with 70 billion parameters shows SOTA results.
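The paper casts parsing decisions as prompts rather than as a conventional parser head. Its actual templates are not reproduced here; the sketch below is a minimal, hypothetical illustration of how one top-down decision (choosing a split point in a span of elementary discourse units, then labeling the split) could be phrased as prompts. The template wording, relation labels, and example EDUs are all assumptions for illustration.

```python
# Hypothetical sketch: casting top-down RST parsing decisions as LLM prompts.
# Template wording, labels, and example EDUs are illustrative assumptions,
# not the paper's actual prompts.

def build_split_prompt(edus: list[str]) -> str:
    """Ask the model where to split a span of EDUs into two subtrees."""
    numbered = "\n".join(f"{i}: {edu}" for i, edu in enumerate(edus, start=1))
    return (
        "Below is a span of elementary discourse units (EDUs).\n"
        f"{numbered}\n"
        f"Choose the boundary k (1 <= k < {len(edus)}) so that EDUs 1..k form "
        "the left subtree and the rest form the right subtree. "
        "Answer with a single integer."
    )

def build_label_prompt(left_span: str, right_span: str) -> str:
    """Ask the model for the nuclearity and rhetorical relation of a split."""
    return (
        f"Left span: {left_span}\n"
        f"Right span: {right_span}\n"
        "State the nuclearity (nucleus-satellite, satellite-nucleus, or "
        "nucleus-nucleus) and the rhetorical relation (e.g., Elaboration, "
        "Contrast) holding between the two spans."
    )

if __name__ == "__main__":
    edus = [
        "The company reported record profits,",
        "but its stock fell sharply,",
        "because investors expected even more.",
    ]
    print(build_split_prompt(edus))
```

A bottom-up variant would instead prompt for merge decisions over adjacent spans; the same template-building pattern applies.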
Stats
Recently, LLMs with billions of parameters have had a major impact on NLP tasks.
Llama 2 with 70 billion parameters demonstrates SOTA results.
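The SOTA figures above come from fine-tuning Llama 2 with QLoRA, i.e., training low-rank adapters on top of a 4-bit-quantized base model. Below is a minimal sketch of such a setup, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the LoRA hyperparameters and target modules are illustrative assumptions, not the paper's reported configuration.

```python
# Sketch of a QLoRA setup: 4-bit quantized base model + trainable LoRA adapters.
# Hyperparameters (r, alpha, target modules) are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-70b-hf"  # gated model; requires access approval

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as in the QLoRA paper
    bnb_4bit_use_double_quant=True,         # also quantize quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```

Only the adapter weights receive gradients, which is what makes fine-tuning a 70-billion-parameter model feasible on limited hardware.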
Quotes
"LLMs have demonstrated remarkable success in various NLP tasks due to their large numbers of parameters and ease of availability."
"Our parsers demonstrated generalizability when evaluated on RST-DT, showing that, in spite of being trained with the GUM corpus, it obtained similar performances to those of existing parsers trained with RST-DT."