Core Concepts
Empowering In-Vehicle Conversational Assistants with Large Language Models to enhance proactive interactions.
Abstract
In this study, researchers explore how Large Language Models (LLMs) can improve proactive interactions for In-Vehicle Conversational Assistants (IVCAs). Existing IVCAs struggle with user intent recognition and context awareness, leading to suboptimal proactive interactions. The researchers establish a framework with five proactivity levels across two dimensions, assumption and autonomy, for IVCAs. They propose a "Rewrite + ReAct + Reflect" strategy to empower LLMs to fulfill the specific demands of each proactivity level. Feasibility experiments show that the LLM outperforms state-of-the-art models in success rate and achieves satisfactory results at each proactivity level. A subjective study with 40 participants validates the effectiveness of the framework, identifying the most appropriate proactivity level as one with strong assumptions and user confirmation.
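The paper does not publish its prompts or code, but the three-stage "Rewrite + ReAct + Reflect" strategy can be pictured as a chained prompting pipeline. The sketch below is purely illustrative: the `llm` stub, the prompt wording, and the `set_ac` action are all assumptions for demonstration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code) of a "Rewrite + ReAct + Reflect"
# prompting pipeline. The `llm` stub and all prompt text are assumptions.

def llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned replies for the demo."""
    if "Rewrite" in prompt:
        return "Driver wants the cabin temperature lowered."
    if "Thought" in prompt:
        return "Action: set_ac(21)"
    return "The proposed action is consistent with the inferred intent."

def rewrite(utterance: str) -> str:
    # Stage 1: rewrite the raw, possibly implicit utterance into an explicit intent.
    return llm(f"Rewrite the user request as an explicit intent: {utterance}")

def react(intent: str) -> str:
    # Stage 2: ReAct-style interleaved reasoning that proposes a concrete action.
    return llm(f"Thought: how should the assistant act on '{intent}'?")

def reflect(intent: str, action: str) -> str:
    # Stage 3: reflect on whether the proposed action actually satisfies the intent.
    return llm(f"Does '{action}' satisfy '{intent}'? Answer with a brief check.")

def proactive_turn(utterance: str) -> dict:
    """Run one proactive turn through all three stages."""
    intent = rewrite(utterance)
    action = react(intent)
    check = reflect(intent, action)
    return {"intent": intent, "action": action, "reflection": check}

print(proactive_turn("It's getting hot in here"))
```

In a real IVCA, the reflection stage would gate whether the assistant executes the action autonomously or first asks the driver to confirm, which is how the framework's higher proactivity levels differ from the lower ones.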
Stats
The LLM achieves a success rate of 93.72%.
The feasibility experiments show satisfactory results for each proactivity level.
Subjective experiments with 40 participants validate the effectiveness of the framework.
Quotes
"We establish a proactivity framework for IVCAs with five levels along the dimensions of assumption and autonomy while integrating user control as a design principle."
"Our work is the first to explore proactive interactions for IVCAs using LLMs, verifying their potential."