OmniACT introduces a dataset and benchmark for assessing agents' ability to generate executable programs from natural-language task descriptions, covering a diverse set of desktop applications and web tasks. Because language-model agents struggle to interpret visual cues in UI elements, the authors propose DetACT, a module that converts UI screenshots into structured code that downstream models can consume. GPT-4 is the strongest baseline on the benchmark, yet it still falls well short of human performance; human evaluators complete the tasks with high proficiency. The authors point to building stronger multimodal models as a direction for future work.
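To make "executable programs" concrete, the sketch below shows the kind of PyAutoGUI-style script an agent is expected to emit for a natural-language task. The task wording and screen coordinates are illustrative assumptions, not taken from the dataset.

```python
# Hypothetical task: "Click the search bar and type 'weather'."
# Coordinates below are assumed for illustration; OmniACT grounds them
# in the actual UI elements detected on the screen.
import pyautogui

pyautogui.click(512, 84)      # click the search bar (assumed location)
pyautogui.write("weather")    # type the query text
pyautogui.press("enter")      # submit the query
```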
Key insights distilled from the source content at arxiv.org (by Raghav Kapoo..., 02-29-2024): https://arxiv.org/pdf/2402.17553.pdf