Yaraghi, A. S., Holden, D., Kahani, N., & Briand, L. (2024). Automated Test Case Repair Using Language Models. arXiv preprint arXiv:2401.06765v2.
This paper addresses the challenge of automatically repairing broken test cases in software development by leveraging pre-trained code language models (CLMs).
The researchers developed TARGET, a two-step approach that first identifies and prioritizes code changes in the System Under Test (SUT) relevant to the broken test case, forming a repair context. Then, it utilizes this context to fine-tune a pre-trained CLM for test repair, treating it as a language translation task. They evaluated TARGET's effectiveness using TARBENCH, a comprehensive benchmark they created, comprising 45,373 broken test repairs across 59 open-source projects. The study explored different input-output formats for the CLM, compared its performance against baselines, and investigated its generalizability and the reliability of its generated repairs.
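The two steps above can be illustrated with a minimal sketch of how a repair context might be assembled into a single model input for sequence-to-sequence fine-tuning. The delimiter tokens, helper name, and character budget below are illustrative assumptions, not the paper's exact input-output format:

```python
# Illustrative sketch (assumed format, not TARGET's actual one): combine a
# broken test with prioritized SUT changes into one input string for a
# seq2seq CLM, dropping lower-priority changes once a length budget is hit.

def build_repair_input(broken_test: str, sut_changes: list[str],
                       max_chars: int = 2000) -> str:
    """Concatenate the broken test and its repair context.

    `sut_changes` is assumed to be pre-sorted by relevance, so truncation
    discards only the least relevant changes.
    """
    parts = ["[BROKEN TEST]", broken_test, "[SUT CHANGES]"]
    budget = max_chars - sum(len(p) + 1 for p in parts)
    for change in sut_changes:
        if len(change) + 1 > budget:
            break  # budget exhausted; lower-priority changes are dropped
        parts.append(change)
        budget -= len(change) + 1
    return "\n".join(parts)

broken = "assertEquals(2, calc.add(1, 1));"
changes = ["- int add(int a, int b)", "+ int add(int a, int b, int c)"]
prompt = build_repair_input(broken, changes)
```

The fine-tuned CLM would then be trained to translate such an input into the repaired test body, analogous to a source-to-target translation pair.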
This research demonstrates the potential of pre-trained CLMs for automated test case repair, offering a promising solution to a significant challenge in software development. By presenting a novel and effective repair approach alongside TARBENCH, a comprehensive benchmark, the work provides both a methodological contribution and a valuable resource for future research in automated software engineering.
The study acknowledges that while SUT code changes are crucial for repair, additional context might be beneficial. Future research could explore incorporating more comprehensive context information and investigate alternative techniques for context prioritization.
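As one concrete illustration of what context prioritization could look like, a simple heuristic is to rank SUT changes by their lexical overlap with the broken test. This is a naive sketch of the general idea, not the prioritization technique the paper actually uses:

```python
import re

def prioritize_changes(broken_test: str, changes: list[str]) -> list[str]:
    """Rank SUT changes by shared identifier tokens with the broken test.

    Token overlap is a crude proxy for relevance; real prioritization
    (e.g., TARGET's) may use richer program analysis.
    """
    test_tokens = set(re.findall(r"\w+", broken_test))

    def overlap(change: str) -> int:
        return len(test_tokens & set(re.findall(r"\w+", change)))

    # Stable sort: ties keep their original (e.g., diff) order.
    return sorted(changes, key=overlap, reverse=True)

ranked = prioritize_changes(
    "assertEquals(2, calc.add(1, 1));",
    ["- void unrelatedHelper()", "+ int add(int a, int b, int c)"],
)
# The change mentioning `add` should outrank the unrelated one.
```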