Resilience of Large Language Models to Noisy Instructions
Large language models exhibit varying degrees of resilience to different types of noise in instructions, including errors introduced by automatic speech recognition (ASR), optical character recognition (OCR), grammatical mistakes, typographical errors, and distractive content. Although some models resist certain noise types, overall performance degrades significantly under noisy instructions, underscoring the need for further research on improving model resilience.
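One of the noise types above, typographical errors, can be simulated programmatically when probing model robustness. The following is a minimal illustrative sketch, not the evaluation method of this work: the function name `inject_typos`, the swap-based perturbation, and all parameters are assumptions chosen for demonstration.

```python
import random

def inject_typos(text: str, rate: float = 0.05, seed: int = 0) -> str:
    """Simulate typographical noise by randomly swapping adjacent letters.

    Note: an illustrative perturbation only; real typo distributions
    (insertions, deletions, keyboard-adjacency substitutions) are richer.
    """
    rng = random.Random(seed)  # fixed seed for reproducible perturbations
    chars = list(text)
    i = 0
    while i < len(chars) - 1:
        # Swap only letter pairs so punctuation and spacing stay intact.
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return "".join(chars)

clean = "Summarize the following article in two sentences."
noisy = inject_typos(clean, rate=0.2, seed=42)
print(noisy)
```

A noised instruction produced this way can then be compared against the clean version to measure how much a model's output quality drops under perturbation.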