Enhancing Legal Reasoning in Large Language Models through Domain-Specific Pretraining and Instruction Tuning
Instruction tuning and domain-specific pretraining on legal data can substantially improve large language models' performance on legal reasoning tasks, though the magnitude of these gains varies with model size, the specific task, and other factors.