The paper proposes TELPA, a novel LLM-based test generation technique, to enhance the coverage of hard-to-cover branches. TELPA addresses two key challenges: 1) complex object construction and 2) intricate inter-procedural dependencies in branch conditions.
To tackle the first challenge, TELPA conducts backward method-invocation analysis to extract method invocation sequences that represent real usage scenarios of the target method. These sequences show the LLM how the complex objects required by the branch constraints are constructed in practice.
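A minimal sketch of what such a backward analysis could look like in Python, using the standard `ast` module; the function names and the notion of "sequence" here are illustrative assumptions, not TELPA's actual implementation:

```python
import ast

def backward_invocation_sequences(project_asts, target_method):
    """Collect call sequences that lead up to an invocation of `target_method`.

    `project_asts` maps file names to parsed ASTs. Each returned sequence
    approximates a real usage scenario: the statements (object construction,
    setup calls) preceding a call site of the target method.
    """
    sequences = []
    for filename, tree in project_asts.items():
        for func in ast.walk(tree):
            if not isinstance(func, (ast.FunctionDef, ast.AsyncFunctionDef)):
                continue
            calls = [n for n in ast.walk(func) if isinstance(n, ast.Call)]
            # Skip functions that never invoke the target method.
            if not any(_call_name(c) == target_method for c in calls):
                continue
            # Keep the statements up to (and including) the target call,
            # so the LLM sees how the required objects were built.
            seq = []
            for stmt in func.body:
                seq.append(ast.unparse(stmt))
                if any(_call_name(c) == target_method
                       for c in ast.walk(stmt) if isinstance(c, ast.Call)):
                    break
            sequences.append((filename, func.name, seq))
    return sequences

def _call_name(call):
    """Best-effort name of a Call node (`foo()` or `obj.foo()`)."""
    f = call.func
    if isinstance(f, ast.Name):
        return f.id
    if isinstance(f, ast.Attribute):
        return f.attr
    return None
```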
To address the second challenge, TELPA performs forward method-invocation analysis to identify all methods associated with the branch conditions. This provides precise contextual information for LLMs to understand the semantics of the branch constraints.
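A simplified illustration of the forward direction, again using `ast`; it scans only a single function's `if`/`while` conditions rather than performing TELPA's full inter-procedural analysis:

```python
import ast
import textwrap

def methods_in_branch_conditions(func_source):
    """Collect names of methods invoked inside `if`/`while` conditions.

    A simplified stand-in for TELPA's forward method-invocation analysis:
    it inspects one function and does not follow calls transitively
    across the project.
    """
    tree = ast.parse(textwrap.dedent(func_source))
    called = set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.While)):
            for sub in ast.walk(node.test):
                if isinstance(sub, ast.Call):
                    f = sub.func
                    called.add(f.attr if isinstance(f, ast.Attribute)
                               else getattr(f, "id", "<unknown>"))
    return called
```

The source code of each method found this way would then be inlined into the prompt, giving the LLM the context it needs to reason about the branch constraints.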
TELPA also incorporates a feedback-based process, where it samples a diverse set of counter-examples and integrates them into the prompt to guide LLMs to generate divergent tests that can reach the hard-to-cover branches.
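A rough sketch of how such a feedback prompt could be assembled; random sampling stands in for TELPA's actual diversity-based counter-example selection, and the prompt wording is invented for illustration:

```python
import random

def build_feedback_prompt(target_branch, counter_examples, k=3):
    """Assemble a feedback prompt containing sampled counter-examples, i.e.
    previously generated tests that failed to reach `target_branch`.

    Random sampling is a stand-in for TELPA's diversity-based selection,
    and the prompt text is purely illustrative.
    """
    sampled = random.sample(counter_examples, min(k, len(counter_examples)))
    lines = [f"The following tests did NOT cover branch `{target_branch}`:"]
    for i, test in enumerate(sampled, 1):
        lines.append(f"# Counter-example {i}\n{test}")
    lines.append("Write a new test that takes a different path and covers this branch.")
    return "\n\n".join(lines)
```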
The evaluation on 27 open-source Python projects shows that TELPA significantly outperforms state-of-the-art SBST and LLM-based techniques, achieving average improvements of 31.39% and 22.22% in branch coverage, respectively. An ablation study confirms the contribution of each main component of TELPA.
Source: Chen Yang, Ju..., arxiv.org, 04-09-2024, https://arxiv.org/pdf/2404.04966.pdf