The study delves into the application of LLMs in Software Engineering, highlighting the importance of data collection, preprocessing, and model selection. It explores various types of datasets used and their impact on LLM performance in SE tasks.
Large Language Models (LLMs) have reshaped Software Engineering by improving both development processes and their outcomes. The study analyzes 229 research papers published from 2017 to 2023 to characterize the role of LLMs in SE tasks. It examines encoder-only, encoder-decoder, and decoder-only architectures and their effectiveness on SE challenges, and emphasizes the significance of data sources, including open-source, collected, constructed, and industrial datasets. The analysis reveals a trend toward decoder-only LLMs for improved performance in SE applications.
Key insights extracted from arxiv.org, by Xinyi Hou, Ya... (03-12-2024)
https://arxiv.org/pdf/2308.10620.pdf

Deeper Questions