This paper introduces the novel "video detours" problem for navigating instructional videos. Given a source video and a natural language query asking to alter the how-to video's current path of execution in a certain way, the goal is to find a related "detour video" that satisfies the requested alteration.
The authors propose VidDetours, a video-language approach that learns to retrieve the targeted temporal segments from a large repository of how-to videos using video-and-text conditioned queries. They devise a language-based pipeline that exploits how-to video narration text to create weakly supervised training data.
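To make the retrieval setup concrete, the sketch below shows one generic way a video-and-text conditioned query could be scored against candidate temporal segments. It is an illustrative assumption (simple late fusion of embeddings plus cosine similarity), not the paper's actual VidDetours architecture; all dimensions, the fusion rule, and the function names are hypothetical.

```python
# Minimal sketch of video-and-text conditioned retrieval (illustrative only;
# not the paper's architecture). A query built from a source-video embedding
# and a text embedding is scored against candidate segment embeddings.
import numpy as np

def l2_normalize(x: np.ndarray, axis: int = -1) -> np.ndarray:
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def rank_detour_segments(source_video_emb: np.ndarray,
                         text_query_emb: np.ndarray,
                         candidate_segment_embs: np.ndarray) -> np.ndarray:
    """Return candidate indices sorted from best to worst match.

    source_video_emb:       (d,)   embedding of the user's current video segment
    text_query_emb:         (d,)   embedding of the natural-language detour request
    candidate_segment_embs: (n, d) embeddings of temporal segments in the repository
    """
    # Simple late fusion of the two query modalities (an assumption; a learned
    # model might instead use cross-attention over both inputs).
    query = l2_normalize(source_video_emb + text_query_emb)
    candidates = l2_normalize(candidate_segment_embs)
    scores = candidates @ query          # cosine similarity per candidate
    return np.argsort(-scores)           # highest-scoring segments first

# Toy usage with random embeddings
rng = np.random.default_rng(0)
d, n = 256, 1000
ranking = rank_detour_segments(rng.normal(size=d), rng.normal(size=d),
                               rng.normal(size=(n, d)))
print(ranking[:5])  # indices of the top-5 candidate segments
```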
The paper demonstrates the idea in the domain of how-to cooking videos, where a user can detour from their current recipe to find steps with alternate ingredients, tools, and techniques. Validating on a ground-truth-annotated dataset of 16K samples, the authors show their model's significant improvements over the best available methods for video retrieval and question answering, with recall rates exceeding the state of the art by 35%.
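For reference, recall at a cutoff k in such retrieval benchmarks is typically the fraction of queries whose ground-truth segment appears among the top-k retrieved results. The snippet below is a generic illustration of that metric, not the authors' evaluation code; the inputs are hypothetical.

```python
# Generic recall@k for a retrieval benchmark (illustrative only).
from typing import Sequence

def recall_at_k(ranked_ids: Sequence[Sequence[int]],
                ground_truth_ids: Sequence[int],
                k: int) -> float:
    """Fraction of queries whose ground-truth item appears in the top-k results."""
    hits = sum(gt in ranking[:k]
               for ranking, gt in zip(ranked_ids, ground_truth_ids))
    return hits / len(ground_truth_ids)

# Example: 3 queries, ground truth retrieved in the top 5 for two of them
print(recall_at_k([[4, 2, 9, 1, 7], [3, 0, 8, 6, 2], [8, 0, 5, 6, 2]],
                  [9, 1, 5], k=5))  # -> 0.666...
```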
The key contributions are the novel task definition, the video-language model developed to address it, and the high-quality evaluation set and benchmark. These results help pave the way toward an interconnected how-to video knowledge base that would transcend the expertise of any one teacher, weaving together the myriad steps, tips, and strategies available in existing large-scale video content.