Core Concepts
Introducing TOOLVERIFIER, a self-verification method for improving tool calls in language models through contrastive questions.
Abstract
The study addresses the challenge of teaching language models to use new tools from their descriptions alone. It introduces TOOLVERIFIER, a self-verification method that helps the model select the most suitable tool and generate its parameters accurately. The method decomposes the tool-call task into two stages, tool selection and parameter generation, and poses verification questions at each stage to enhance decision-making. Synthetic data is generated for training, enabling the model to generalize to unseen tools. Experimental results show significant improvements in both tool selection and complete tool calls compared to baselines.
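The two-stage decomposition with self-verification can be sketched as plain control flow. The code below is a minimal illustration only: the `generate` callable, the prompt strings, and the contrastive-question template are assumptions for this sketch, not TOOLVERIFIER's actual prompts or fine-tuned model.

```python
# Sketch of a two-stage tool call with self-verification at each step.
# `generate` stands in for any language-model completion function
# (prompt string in, completion string out) -- an assumption here.

def select_tool(generate, tools, instruction):
    """Stage 1: propose a tool, then self-verify with a contrastive question."""
    candidate = generate(
        f"Tools: {tools}\nInstruction: {instruction}\nBest tool:"
    ).strip()
    # Contrastive verification: ask the model to choose between the
    # candidate and an alternative, which reduces error propagation
    # into the parameter-generation stage.
    alternatives = [t for t in tools if t != candidate]
    if alternatives:
        candidate = generate(
            f"Instruction: {instruction}\n"
            f"Which tool is correct: {candidate} or {alternatives[0]}?"
        ).strip()
    return candidate

def generate_call(generate, tool, instruction):
    """Stage 2: generate parameters, then verify them before emitting the call."""
    params = generate(
        f"Tool: {tool}\nInstruction: {instruction}\nParameters:"
    ).strip()
    verdict = generate(
        f"Do the parameters {params} satisfy '{instruction}'? Answer yes/no:"
    )
    if verdict.strip().lower().startswith("yes"):
        return {"tool": tool, "params": params}
    return None  # verification failed; caller may retry
```

Because each stage is verified before its output is consumed, a wrong tool choice can be caught before parameters are ever generated for it, which is the error-propagation reduction the paper describes.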
Stats
Extensive experiments on 4 tasks from the ToolBench benchmark.
Average improvement of 22% over few-shot baselines.
Dataset contains 173 synthetic tools with descriptions.
Verification questions reduce error propagation.
Quotes
"Self-verification is used at each step to reduce error propagation and enhance overall performance."
"Our proposed verification mechanism further improves performance by an additional 2.5% on average."