Understanding the Role of Demonstration Components in In-Context Learning of Large Language Models
This study investigates how different demonstration components, namely ground-truth labels, input distribution, and complementary explanations, affect the in-context learning (ICL) performance of large language models (LLMs), using explainable NLP (XNLP) techniques for the analysis.
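To make the three demonstration components concrete, the following is a minimal sketch of how they typically appear in a few-shot ICL prompt. The example task, demonstration texts, and prompt template are illustrative assumptions, not drawn from the study itself.

```python
# Sketch of a few-shot ICL prompt built from three demonstration components:
# the input text (whose distribution is one factor studied), the ground-truth
# label, and a complementary explanation. All content here is hypothetical.

demonstrations = [
    ("The movie was a delight.", "positive",
     "Words like 'delight' signal a favorable opinion."),
    ("I want my money back.", "negative",
     "Requesting a refund expresses dissatisfaction."),
]

def build_icl_prompt(demos, query):
    """Assemble a few-shot prompt from (input, label, explanation) triples."""
    parts = []
    for text, label, explanation in demos:
        parts.append(
            f"Review: {text}\nExplanation: {explanation}\nLabel: {label}"
        )
    # The unlabeled query goes last; the LLM is expected to continue the pattern.
    parts.append(f"Review: {query}\nExplanation:")
    return "\n\n".join(parts)

print(build_icl_prompt(demonstrations, "The plot dragged on forever."))
```

Varying which of these components are present, corrupted, or drawn from a different distribution is the kind of controlled manipulation the study uses to measure each component's contribution to ICL performance.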