This paper investigates the role of semantic representations, specifically Abstract Meaning Representation (AMR), in the era of large language models (LLMs). Traditionally, NLP models have benefited from using rich linguistic features like AMR, but the rise of end-to-end LLMs raises questions about the continued utility of such representations.
The authors propose a theoretical framework for understanding representation power, distinguishing the ideal representation for a task from the representation that best suits a pre-trained LLM. They then empirically evaluate an AMR-driven prompting method called AMRCOT across five diverse NLP tasks, using several LLM versions.
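The core idea of AMR-driven prompting can be illustrated with a minimal sketch: serialize the sentence's AMR graph into the prompt and ask the model to reason over it before answering. The function name, template wording, and the hand-written AMR below are illustrative assumptions, not the paper's exact implementation.

```python
def build_amrcot_prompt(sentence: str, amr: str, task_instruction: str) -> str:
    """Compose an AMRCOT-style prompt: task instruction, the input sentence,
    its AMR graph, and a request to reason over the AMR before answering.
    (Hypothetical template -- the paper's exact wording may differ.)"""
    return (
        f"{task_instruction}\n\n"
        f"Sentence: {sentence}\n"
        f"AMR graph:\n{amr}\n\n"
        "First reason step by step over the AMR, then give your final answer."
    )

# A standard textbook AMR for "The boy wants to go."
amr = (
    "(w / want-01\n"
    "   :ARG0 (b / boy)\n"
    "   :ARG1 (g / go-02\n"
    "      :ARG0 b))"
)
prompt = build_amrcot_prompt(
    "The boy wants to go.", amr, "Paraphrase the following sentence."
)
print(prompt)
```

The resulting string would be sent to the LLM; the paper's finding is that adding the AMR block in this way helps on some examples but hurts on others.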
The results show that AMR does not have a consistently positive impact on LLM performance; scores fluctuate slightly, between roughly -3 and +1 percentage points. However, AMR does help on a subset of examples, suggesting that there may be systematic patterns to when it is useful.
The authors conduct further analysis to understand these patterns. They find that AMR struggles with multi-word expressions, named entities, and the final inference step where the LLM must connect its reasoning over the AMR to the final prediction. Classifiers trained to predict AMR helpfulness achieve modest performance, indicating that it is difficult to anticipate when AMR will help or hurt.
Overall, the paper suggests that the role of traditional linguistic structures like AMR is diminished in the LLM era, as LLMs can effectively learn to operate directly on text. However, there may still be opportunities to leverage such representations, particularly by improving the LLM's ability to reason over them.