RepairLLaMA introduces a novel approach to automated program repair that combines repair-specific code representations with LoRA fine-tuning of a code LLM. The experiments demonstrate significant improvements in bug-fixing effectiveness over baseline representations and fine-tuning strategies. Tailored code representations that carry fault localization signals play a crucial role in this improvement.
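To make the representation idea concrete, the sketch below shows what a fine-tuning sample with fault localization signals could look like. The `<BUG_START>`/`<BUG_END>` markers, the example function, and the input/output layout are illustrative assumptions for this summary, not the paper's exact representation pair.

```python
# Illustrative fine-tuning sample: the input is the buggy function with the
# suspicious region delimited by (hypothetical) fault-localization markers,
# and the target output is only the replacement code for that region.
sample = {
    "input": (
        "public static int middle(int a, int b, int c) {\n"
        "    if (b < c) {\n"
        "        if (a < b) return b;\n"
        "<BUG_START>\n"
        "        else if (a < c) return b;\n"  # buggy line
        "<BUG_END>\n"
        "    }\n"
        "    return c;\n"
        "}"
    ),
    "output": "        else if (a < c) return a;\n",  # expected fix
}
```

Because the model only has to generate the replacement for the marked region rather than the whole function, the fault localization signal narrows the search space the model must cover.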
The study compares different pairs of input/output code representations for fine-tuning LLMs, highlighting the value of encoding fault localization signals in the input. Parameter-efficient fine-tuning with LoRA proves superior to full-parameter fine-tuning, delivering better repair performance with only a small fraction of the trainable parameters. In addition, RepairLLaMA outperforms state-of-the-art ChatGPT-based approaches on repair benchmarks such as Defects4J and HumanEval-Java.
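A minimal sketch of parameter-efficient fine-tuning with LoRA using the Hugging Face `peft` library is shown below. The base model name, rank, and target modules are assumptions chosen for illustration, not the paper's exact training configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a code LLM as the frozen base model (model name assumed for illustration).
base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

# LoRA adapter configuration: only the low-rank matrices injected into the
# attention projections are trained; all original weights stay frozen.
config = LoraConfig(
    r=16,                                 # rank of the update matrices (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # modules to adapt (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # trainable parameters are a small fraction of the total
```

After wrapping, the model can be trained with a standard training loop on representation pairs like the sample above; at inference time only the small adapter weights need to be stored and loaded alongside the frozen base model, which is what makes the approach resource-efficient.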
Key findings include the importance of domain-specific code representations and the effectiveness of LoRA for efficient fine-tuning; together, the results underline the need for expertly designed code representations in automated program repair.