InfiAgent-DABench is a benchmark for evaluating LLM-based agents on data analysis tasks. The paper outlines the challenges LLMs face in data analysis and presents DAAgent, a specialized agent that surpasses GPT-3.5. The benchmark's dataset, DAEval, consists of 257 questions derived from 52 CSV files and focuses on end-to-end task-solving ability. The work covers dataset construction, agent framework development, human assessment, and model evaluation. Key findings highlight how challenging data analysis tasks remain for LLMs and compare the performance of a range of models.
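A benchmark like this needs an automatic way to grade an agent's free-form response against reference answers. The sketch below illustrates one plausible closed-form scoring scheme; the `@name[value]` answer format and the exact-match rule are assumptions for illustration, not the benchmark's verbatim specification:

```python
import re


def parse_answers(text: str) -> dict:
    """Extract "@name[value]" pairs from a response.

    The closed-form "@name[value]" format is a hypothetical
    convention used here only to illustrate automated grading.
    """
    return dict(re.findall(r"@(\w+)\[([^\]]*)\]", text))


def score(prediction: str, reference: str) -> float:
    """Fraction of reference key-value answers the prediction reproduces."""
    pred = parse_answers(prediction)
    ref = parse_answers(reference)
    if not ref:
        return 0.0
    return sum(1 for k, v in ref.items() if pred.get(k) == v) / len(ref)
```

For example, `score("@mean[3.0]", "@mean[3.0] @std[1.2]")` returns `0.5`: the agent recovered one of the two required answers. Exact string matching keeps grading deterministic, at the cost of requiring the question to pin down the expected output format.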