The content discusses the use of standing instructions to improve user request interpretation, introducing the NLSI dataset. Various reasoning types and methods for incorporating standing instructions are explored, revealing challenges faced by LLMs in this task.
Users often have to repeat their preferences when making similar requests, motivating persistent user constraints termed standing instructions. These can shape search results and yield tailored responses.
Large language models (LLMs) such as GPT-3 are increasingly paired with APIs, translating user requests into API calls on the user's behalf.
NLSI is a dataset created to study the incorporation of standing instructions in dialogue modeling tasks.
Different reasoning types such as PLAIN, MULTIHOP, MULTIPREFERENCE, MULTIDOMAIN, and CONFLICT present challenges in selecting and interpreting relevant standing instructions.
Methods like DIRECT Interpretation, SELECT-AND-INTERPRET, and SELECT-THEN-INTERPRET are evaluated for their effectiveness in incorporating standing instructions.
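The select-then-interpret idea can be illustrated with a minimal sketch: first pick the standing instructions relevant to a request, then fold them into API-call-style slot/value pairs. In the paper both steps are performed by prompting an LLM; here the keyword-overlap selector, the `prefer X for Y` parsing, and all function names are illustrative assumptions, not the paper's implementation.

```python
def select_instructions(request, standing_instructions):
    """Select standing instructions sharing a word with the request.
    (A keyword-overlap stand-in for the paper's LLM-based selection step.)"""
    request_words = set(request.lower().split())
    return [ins for ins in standing_instructions
            if request_words & set(ins.lower().split())]


def interpret(request, selected):
    """Combine the request and selected instructions into slot/value pairs.
    (A rule-based stand-in for the paper's LLM interpretation step.)"""
    slots = {"query": request}
    for ins in selected:
        # Naive parse of a hypothetical "prefer X for Y" pattern -> slots[Y] = X.
        words = ins.split()
        if "prefer" in words:
            i = words.index("prefer")
            slots[words[-1]] = words[i + 1]
    return slots


# Example usage with hypothetical standing instructions:
instructions = ["prefer Italian for cuisine", "prefer aisle for seat"]
selected = select_instructions("find a restaurant, any cuisine works", instructions)
slots = interpret("find a restaurant, any cuisine works", selected)
```

Splitting selection from interpretation makes each stage easier to evaluate in isolation, which is one reason such staged pipelines are compared against direct interpretation.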
Results show that while LLMs can incorporate standing instructions to some extent, there is room for improvement in accurately selecting and interpreting them.