This paper investigates the use of ChatGPT models, specifically GPT-3.5 and GPT-4, to automatically analyze research papers on Breast Cancer Treatment (BCT). The study involves categorizing papers, identifying their scopes, and extracting key information to support survey-paper writing. Results show that while GPT-4 excels at category identification, it has difficulty accurately determining the scope of research papers. Limitations such as noisy data retrieval and inconsistent responses from the ChatGPT models are also discussed.
The methodology involved constructing a taxonomy of BCT branches, collecting research articles from major databases such as Google Scholar and PubMed, and employing ChatGPT models to automate the analysis tasks. Evaluation revealed that GPT-4 achieved higher accuracy than GPT-3.5 in categorizing research papers but struggled with scope detection.
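The paper does not publish its prompts or code, but a categorization step of this kind could be driven through the OpenAI chat API roughly as sketched below. This is a minimal sketch assuming the `openai` Python client and an API key in the environment; the taxonomy branches, prompt wording, and function name are illustrative assumptions, not the authors' actual setup.

```python
# Illustrative sketch only: the paper does not release its prompts or code.
# Assumes the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY set in the
# environment; the taxonomy branches and prompt wording below are hypothetical.
from openai import OpenAI

client = OpenAI()

# Hypothetical top-level branches of a BCT taxonomy.
TAXONOMY = [
    "Surgery", "Chemotherapy", "Radiotherapy", "Hormone therapy",
    "Immunotherapy", "AI-assisted diagnosis and prognosis",
]

def categorize_paper(title: str, abstract: str, model: str = "gpt-4") -> str:
    """Ask the model for one taxonomy category and a one-sentence scope."""
    prompt = (
        "You are assisting with a survey on Breast Cancer Treatment.\n"
        f"Taxonomy categories: {', '.join(TAXONOMY)}.\n"
        f"Title: {title}\nAbstract: {abstract}\n"
        "Return the single best-matching category and a one-sentence scope."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # lower temperature to reduce response inconsistency
    )
    return response.choices[0].message.content

# Usage: run the same abstract through both models to compare their outputs.
# print(categorize_paper(title, abstract, model="gpt-3.5-turbo"))
# print(categorize_paper(title, abstract, model="gpt-4"))
```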
The study also highlighted challenges such as the limited functionality of the ChatGPT models, the iterative prompt-creation process, and inconsistent responses, all of which reduce the efficiency of automation. Despite these limitations, the potential of AI models such as ChatGPT for scholarly work is acknowledged, with future work aimed at extending the BCT taxonomy and compiling a comprehensive survey article on AI applications in BCT.
Key insights extracted from: by Anjalee De S... at arxiv.org, 03-07-2024, https://arxiv.org/pdf/2403.03293.pdf