Core Concepts
Large Language Models (LLMs) significantly impact RST discourse parsing, achieving state-of-the-art results.
Statistics
Recently, LLMs with billions of parameters have had a major impact on NLP tasks.
Llama 2 with 70 billion parameters achieves SOTA results.
Quotations
"LLMs have demonstrated remarkable success in various NLP tasks due to their large numbers of parameters and ease of availability."
"Our parsers demonstrated generalizability when evaluated on RST-DT, showing that, in spite of being trained with the GUM corpus, it obtained similar performances to those of existing parsers trained with RST-DT."