This paper provides a comprehensive review of the use of LLMs in software testing, analyzing tasks such as test case preparation and program repair, and it highlights challenges, opportunities, and future research directions in this area.
The paper emphasizes the significance of software testing for ensuring the quality and reliability of software products, and it discusses the emergence of LLMs as game-changers in NLP and the broader AI field.
LLMs have already been applied to various coding-related tasks such as code generation and code recommendation. The survey analyzes how well LLMs perform at generating unit tests, test assertions, and system-level test inputs.
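To make the test-assertion task concrete, the following is a minimal sketch of a common setup in this line of work: the model is shown the method under test together with a test whose assertion has been masked out, and is asked to fill in the missing assertion. The function name, mask token, and prompt wording are illustrative assumptions, not drawn from the survey.

```python
# Hypothetical sketch of assertion generation: the test body is given
# with its assertion replaced by a mask, and the LLM fills in the gap.

ASSERTION_MASK = "<ASSERTION>"

def build_assertion_prompt(focal_method: str, masked_test: str) -> str:
    """masked_test is a unit test whose assertion statement has been
    replaced by ASSERTION_MASK; the LLM is asked to restore it."""
    return (
        "Given the method under test and an incomplete unit test, "
        f"replace {ASSERTION_MASK} with a single correct assertion "
        "statement. Return only that statement.\n\n"
        f"Method under test:\n{focal_method}\n\n"
        f"Incomplete test:\n{masked_test}\n"
    )
```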
Research efforts focus on pre-training or fine-tuning LLMs for unit test case generation. Studies also explore designing effective prompts that help LLMs better capture the contextual nuances of the code under test.
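As an illustration of the kind of prompt design these studies explore, the sketch below assembles a unit-test-generation prompt from a focal method and its surrounding class context (fields, constructors, sibling methods). All names are hypothetical, the prompt wording is illustrative rather than taken from any specific study, and `llm.complete` stands in for whatever client the chosen model actually exposes.

```python
# Minimal, assumed sketch of context-aware prompt construction for
# unit test generation; names and prompt text are illustrative only.

def build_unit_test_prompt(focal_method: str, class_context: str) -> str:
    """Combine the focal method with its surrounding class context so
    the LLM can see field declarations, constructors, and helpers."""
    return (
        "You are writing JUnit 5 tests.\n\n"
        f"Class under test (context):\n{class_context}\n\n"
        f"Focal method:\n{focal_method}\n\n"
        "Write a self-contained JUnit 5 test class covering normal and "
        "edge-case inputs of the focal method. Return only code."
    )

def generate_unit_tests(focal_method: str, class_context: str, llm) -> str:
    """`llm` is a placeholder for any completion client assumed to
    expose a .complete(prompt) -> str method."""
    return llm.complete(build_unit_test_prompt(focal_method, class_context))
```

Including the class context rather than the focal method alone is the design choice most of these prompt studies vary; the rest of the prompt is boilerplate instruction.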
The paper presents a detailed overview of how LLM-based testing tasks are distributed across the software testing lifecycle, including an analysis of unit test case generation, test oracle generation, and system test input generation.
Source: Junjie Wang et al., arXiv, https://arxiv.org/pdf/2307.07221.pdf (03-05-2024).