RepairLLaMA introduces a novel approach to automated program repair that combines repair-specific code representations with LoRA fine-tuning of a code LLM. The experiments demonstrate significant improvements in the number of correctly fixed bugs compared with models fine-tuned on naive code representations. Tailored code representations that encode fault localization signals play a crucial role in repair effectiveness.
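To make the idea of a fault-localization-aware representation concrete, the Python sketch below wraps the suspicious lines of a buggy function with marker tokens before feeding the text to a model. The `<BUG_START>`/`<BUG_END>` markers and the helper function are illustrative assumptions, not necessarily the exact tokens or representation RepairLLaMA uses.

```python
# Minimal sketch of a fault-localization-aware code representation.
# The <BUG_START>/<BUG_END> markers and this helper are illustrative
# assumptions, not the exact format used by RepairLLaMA.

def build_repair_input(function_lines, buggy_start, buggy_end):
    """Wrap the suspicious lines of a buggy function with marker tokens,
    injecting the fault localization signal directly into the input."""
    annotated = []
    for i, line in enumerate(function_lines):
        if i == buggy_start:
            annotated.append("<BUG_START>")
        annotated.append(line)
        if i == buggy_end:
            annotated.append("<BUG_END>")
    return "\n".join(annotated)

# Example: a buggy Java method whose comparison is inverted.
java_function = [
    "public int max(int a, int b) {",
    "    if (a < b) {",      # buggy: returns the smaller value
    "        return a;",
    "    }",
    "    return b;",
    "}",
]
print(build_repair_input(java_function, buggy_start=1, buggy_end=3))
```

The key design choice this illustrates is that the model does not have to locate the bug itself: the fine-tuning input already tells it which region to rewrite, which is what the summary means by fault localization signals in the representation.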
The study compares different input/output code representation pairs for fine-tuning LLMs, highlighting the importance of fault localization signals in the model input. Parameter-efficient fine-tuning with LoRA proves superior to full-parameter fine-tuning, delivering better repair performance at a fraction of the training cost. Additionally, RepairLLaMA outperforms state-of-the-art ChatGPT-based approaches across several repair benchmarks.
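For concreteness, here is a minimal LoRA setup sketch using the Hugging Face `peft` library. The base checkpoint and the hyperparameters (rank, alpha, target modules) are assumptions chosen for illustration, not the configuration reported in the paper.

```python
# Minimal LoRA fine-tuning setup with Hugging Face transformers + peft.
# Model name and hyperparameters below are illustrative assumptions,
# not RepairLLaMA's reported configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "codellama/CodeLlama-7b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
# Only the small adapter matrices are trainable; the 7B base stays frozen.
model.print_trainable_parameters()
```

Because only the low-rank adapter weights are updated, training fits on modest hardware and the resulting adapter is a small artifact rather than a full copy of the model, which is the resource-efficiency argument the summary refers to.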
Key findings include the significance of domain-specific code representations and the effectiveness of LoRA for efficient fine-tuning. The results emphasize that expertly designed code representations are a necessary ingredient of effective learning-based automated program repair.
Key insights extracted from arxiv.org by Andr..., 03-12-2024
https://arxiv.org/pdf/2312.15698.pdf