LoRA-based fine-tuning enables 310 task-specialized LLMs to outperform GPT-4 by 10 points on average across 31 diverse tasks.
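The core idea behind LoRA is to freeze the pretrained weight matrix and train only a low-rank update, y = Wx + (alpha/r)·BAx, which drastically cuts the number of trainable parameters. The following is a minimal pure-Python sketch of a single LoRA-adapted linear layer; all dimensions, scales, and names here are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

# Hypothetical sizes for illustration; in practice rank r << d_in, d_out.
d_in, d_out, r, alpha = 8, 8, 2, 4

def rand_matrix(rows, cols, scale=1.0):
    return [[random.gauss(0, scale) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

W = rand_matrix(d_out, d_in)            # frozen pretrained weight
A = rand_matrix(r, d_in, 0.01)          # trainable down-projection (small init)
B = [[0.0] * r for _ in range(d_out)]   # trainable up-projection, zero init

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B receive gradient updates.
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

x = [random.gauss(0, 1) for _ in range(d_in)]
# With B initialized to zero, the adapted layer equals the frozen layer,
# so fine-tuning starts from the pretrained model's behavior.
assert lora_forward(x) == matvec(W, x)

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
print(r * (d_in + d_out), "trainable vs", d_in * d_out, "frozen")
```

Because only A and B are updated, each of the 310 specialized models can be stored as a small adapter on top of one shared base model, which is what makes serving this many variants practical.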