This paper examines the use of ChatGPT models, specifically GPT-3.5 and GPT-4, to automatically analyze research papers on Breast Cancer Treatment (BCT). The study covers categorizing papers, identifying their scope, and extracting key information to support survey-paper writing. Results show that while GPT-4 excels at category identification, it struggles to accurately determine the scope of research papers. Limitations such as noisy data retrieval and inconsistent responses from the ChatGPT models are also discussed.
The methodology involved constructing a taxonomy of BCT branches, collecting research articles from major sources such as Google Scholar and PubMed, and employing ChatGPT models to automate the analysis tasks. Evaluation revealed that GPT-4 achieved higher accuracy than GPT-3.5 in categorizing research papers but struggled with scope detection.
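The categorization step described above can be sketched as a simple prompt-and-parse loop. This is a minimal illustration, not the paper's actual pipeline: the taxonomy branches, prompt wording, and helper names below are assumptions for demonstration, and the model reply would in practice come from a chat-completion API call to GPT-3.5 or GPT-4.

```python
# Hypothetical sketch of categorizing a paper into a BCT taxonomy branch.
# The branch list and prompt text are illustrative, not from the paper.

BCT_BRANCHES = ["diagnosis", "treatment planning", "drug discovery", "prognosis"]

def build_categorization_prompt(title, abstract, branches=BCT_BRANCHES):
    """Compose one prompt asking the model to pick a single taxonomy branch."""
    options = ", ".join(branches)
    return (
        "You are analyzing a research paper on breast cancer treatment.\n"
        f"Title: {title}\n"
        f"Abstract: {abstract}\n"
        f"Reply with exactly one category from: {options}."
    )

def parse_category(model_reply, branches=BCT_BRANCHES):
    """Map a free-form model reply onto a known branch, or None if unmatched."""
    reply = model_reply.strip().lower()
    for branch in branches:
        if branch in reply:
            return branch
    return None  # inconsistent reply: flag the paper for manual review

# Example: a well-formed reply maps cleanly; a garbled one returns None.
print(parse_category("Category: Treatment planning."))  # treatment planning
print(parse_category("I cannot determine this."))       # None
```

The `parse_category` fallback reflects one of the study's reported issues: model responses are not always consistent, so any automated pipeline needs a path for replies that do not match the taxonomy.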
Furthermore, the study highlighted challenges such as the limited functionality of the ChatGPT models, an iterative prompt-creation process, and inconsistent responses, all of which reduce the efficiency of automation. Despite these limitations, the potential of AI models like ChatGPT for scholarly work is acknowledged, with future work aimed at extending the BCT taxonomy and compiling a comprehensive survey article on AI applications in BCT.
Source: Anjalee De S... at arxiv.org, 03-07-2024
https://arxiv.org/pdf/2403.03293.pdf