Enhancing large language models (LLMs) with execution-based feedback improves code generation accuracy for data science tasks.
Pre-trained code language models struggle to refine their own faulty outputs, but the Cycle framework strengthens this self-refinement capability, improving code generation performance.