Evaluating the Robustness of In-Context Learning in Large Language Models: A Syntactic Generalization Study
Large language models can learn new tasks through in-context learning, yet their ability to generalize beyond the provided examples in a robust, syntax-aware manner remains limited. We find that models pre-trained on code exhibit better out-of-distribution generalization than models trained on natural language alone.