Leveraging the natural-language-processing capabilities of open-source large language models to enhance the quality of the conversational history can significantly improve query rewriting in open-domain conversational search.
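To make the idea concrete, below is a minimal sketch of prompting an open-source LLM to turn the latest conversational turn into a standalone query. The model name, prompt wording, and the rewrite_query helper are illustrative assumptions, not the exact setup of the work summarized above.

```python
# Sketch: use an open-source instruction-tuned LLM to rewrite the latest turn
# into a self-contained query for an off-the-shelf retriever.
# The model id and prompt format are assumptions for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

def rewrite_query(history: list[str], current_query: str) -> str:
    """Condense the conversation history and current turn into a standalone query."""
    context = "\n".join(history)
    prompt = (
        "Rewrite the final question so it can be understood without the conversation.\n\n"
        f"Conversation:\n{context}\n\n"
        f"Final question: {current_query}\n"
        "Standalone question:"
    )
    output = generator(prompt, max_new_tokens=64, do_sample=False)
    # The text-generation pipeline returns the prompt plus the continuation;
    # keep only the newly generated text.
    return output[0]["generated_text"][len(prompt):].strip()

history = [
    "User: Who wrote The Old Man and the Sea?",
    "Agent: Ernest Hemingway wrote it.",
]
print(rewrite_query(history, "When was he born?"))
# Expected style of output: "When was Ernest Hemingway born?"
```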
This paper proposes a conceptual framework that models the actions and decisions of users and agents during the conversational search process. The framework outlines the actions users and agents can perform, along with the key decision points an agent must navigate to deliver a successful conversational search experience.
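As a toy illustration of how such a framework's actions and decision points might be encoded, the sketch below uses enums and a single decision function. The specific action names and the decision rule are hypothetical placeholders, not the framework proposed in the paper.

```python
# Hypothetical encoding of user/agent actions and one agent decision point.
from enum import Enum, auto
from dataclasses import dataclass

class UserAction(Enum):
    QUERY = auto()             # user states or refines an information need
    PROVIDE_FEEDBACK = auto()  # user reacts to returned results
    END_SESSION = auto()

class AgentAction(Enum):
    RETRIEVE_RESULTS = auto()
    ASK_CLARIFICATION = auto()
    SUMMARIZE_RESULTS = auto()

@dataclass
class DialogueState:
    last_user_action: UserAction
    query_is_ambiguous: bool

def decide_next_action(state: DialogueState) -> AgentAction:
    """One key decision point: clarify the request or retrieve directly."""
    if state.query_is_ambiguous:
        return AgentAction.ASK_CLARIFICATION
    return AgentAction.RETRIEVE_RESULTS
```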
IterCQR iteratively trains a conversational query reformulation model without human-annotated rewrites, utilizing retrieval signals as a reward to generate queries optimized for off-the-shelf retrievers.
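The following simplified sketch conveys the general idea of using a retrieval signal as a reward in an iterative self-training loop: candidate rewrites are scored by how well they retrieve the gold passage, and the best candidates become pseudo-labels for the next round. The sample_rewrites and retrieval_reward functions are stand-in stubs, not IterCQR's actual components.

```python
# Simplified reward-guided iteration: score candidate rewrites with a
# retrieval signal and keep the best as pseudo-labels for further training.
import random

def retrieval_reward(rewrite: str, gold_passage: str) -> float:
    """Stand-in retrieval signal: token overlap with the gold passage.
    A real system would use the retriever's similarity score or the
    rank of the gold passage."""
    r, g = set(rewrite.lower().split()), set(gold_passage.lower().split())
    return len(r & g) / max(len(r), 1)

def sample_rewrites(history: list[str], query: str, k: int = 4) -> list[str]:
    """Stand-in generator: a trained reformulation model would sample k
    candidate rewrites conditioned on the dialogue history."""
    context_terms = " ".join(history).split()
    return [
        query + " " + " ".join(random.sample(context_terms, min(2, len(context_terms))))
        for _ in range(k)
    ]

def one_iteration(samples, k: int = 4):
    """Select the highest-reward rewrite per example as the next pseudo-label."""
    pseudo_labels = []
    for history, query, gold_passage in samples:
        candidates = sample_rewrites(history, query, k)
        best = max(candidates, key=lambda c: retrieval_reward(c, gold_passage))
        pseudo_labels.append((history, query, best))
    # In the full method, the reformulation model is fine-tuned on these
    # pseudo-labels and the loop repeats until retrieval quality converges.
    return pseudo_labels
```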