The paper addresses the challenging task of identifying, segmenting, and tracking hand-held objects in unconstrained videos. This is crucial for applications such as human action segmentation and performance evaluation, as the dynamic interplay between hands and objects forms the core of many activities.
The key challenges include heavy occlusion, rapid motion, and the transitory nature of hand-object contact, where an object may be held, released, and subsequently picked up again. To tackle these challenges, the authors develop a novel transformer-based architecture called HOIST-Former.
HOIST-Former segments hands and objects in both space and time by iteratively pooling features between hand and object queries, so that the identification, segmentation, and tracking of hand-held objects are conditioned on hand position and contextual appearance. Training is further guided by a contact loss that emphasizes regions where hands are in contact with objects.
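To make the mutual pooling concrete, here is a minimal PyTorch sketch of one round of object-to-hand and hand-to-object cross-attention, together with a contact-weighted mask loss. `HandObjectPooling`, `contact_weighted_mask_loss`, the query counts, and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HandObjectPooling(nn.Module):
    """One round of mutual hand-object feature pooling.

    Object queries attend to hand queries (so object reasoning is
    conditioned on hand position/appearance), then hand queries attend
    back to the updated object queries. Dimensions and layer layout
    are illustrative, not the paper's exact design.
    """
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.obj_from_hand = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.hand_from_obj = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_obj = nn.LayerNorm(dim)
        self.norm_hand = nn.LayerNorm(dim)

    def forward(self, hand_q: torch.Tensor, obj_q: torch.Tensor):
        # Pool hand context into object queries (residual + norm).
        pooled, _ = self.obj_from_hand(obj_q, hand_q, hand_q)
        obj_q = self.norm_obj(obj_q + pooled)
        # Pool the updated object context back into hand queries.
        pooled, _ = self.hand_from_obj(hand_q, obj_q, obj_q)
        hand_q = self.norm_hand(hand_q + pooled)
        return hand_q, obj_q

def contact_weighted_mask_loss(mask_logits, gt_mask, contact_region, w: float = 5.0):
    """Per-pixel BCE up-weighted inside hand-object contact regions.

    `contact_region` is assumed to be a {0,1} map of pixels where a hand
    touches the object; the paper defines its own contact supervision.
    """
    weights = 1.0 + (w - 1.0) * contact_region
    return F.binary_cross_entropy_with_logits(mask_logits, gt_mask, weight=weights)

# Iterative pooling over a few rounds, as the summary describes.
pool = HandObjectPooling()
hand_q = torch.randn(2, 10, 256)  # (batch, hand queries, dim)
obj_q = torch.randn(2, 10, 256)   # (batch, object queries, dim)
for _ in range(3):
    hand_q, obj_q = pool(hand_q, obj_q)
```

Running the pooling for several rounds lets hand and object representations refine each other before the final mask and track predictions, which is the intuition behind conditioning object segmentation on the hands.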
The authors also contribute an in-the-wild video dataset called HOIST, comprising 4,125 videos annotated with bounding boxes, segmentation masks, and tracking IDs for hand-held objects. Experiments on HOIST and two additional public datasets demonstrate the effectiveness of HOIST-Former for segmenting and tracking hand-held objects.
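The annotation structure described above (per-frame boxes, masks, and persistent track IDs) can be pictured with a small illustrative record type. The field names and schema below are assumptions for exposition, not the released HOIST format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HandHeldObjectAnnotation:
    """One per-frame annotation of a hand-held object track.

    Field names are hypothetical; the released HOIST annotations may
    use a different (e.g., video-instance-segmentation-style) schema.
    """
    video_id: str
    frame_index: int
    track_id: int     # persists across frames, even across release and re-grasp
    bbox_xyxy: tuple  # (x1, y1, x2, y2) in pixels
    mask: np.ndarray  # binary HxW segmentation mask

# The same track_id links an object before and after it is put down and
# picked up again, which is the transitory-contact property the dataset captures.
ann = HandHeldObjectAnnotation("vid_0001", 17, 3, (40, 60, 200, 310),
                               np.zeros((480, 640), dtype=bool))
```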
Source: https://arxiv.org/pdf/2404.13819.pdf (arxiv.org, 2024-04-23)