OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning
Key Concepts
OneTracker unifies various tracking tasks by pretraining a Foundation Tracker on RGB datasets and adapting it to downstream RGB+X tasks using prompt-tuning techniques.
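A minimal sketch of this unification idea, written in PyTorch purely for illustration: one frozen, RGB-pretrained backbone is shared, and each RGB+X task trains only a small prompt module of its own. The module choices, channel counts, and sizes below are assumptions, not the released code.

```python
import torch.nn as nn

# Stand-in for the RGB-pretrained Foundation Tracker (architecture is illustrative).
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True), num_layers=4)
for p in backbone.parameters():
    p.requires_grad = False                 # the RGB-pretrained weights stay frozen

# One lightweight prompt module per downstream RGB+X task (channel counts are assumptions).
task_prompts = nn.ModuleDict({
    "rgb_d": nn.Conv2d(1, 256, kernel_size=16, stride=16),  # depth map  -> prompt tokens
    "rgb_t": nn.Conv2d(1, 256, kernel_size=16, stride=16),  # thermal    -> prompt tokens
    "rgb_e": nn.Conv2d(2, 256, kernel_size=16, stride=16),  # event data -> prompt tokens
})

frozen = sum(p.numel() for p in backbone.parameters())
per_task = {name: sum(p.numel() for p in m.parameters()) for name, m in task_prompts.items()}
print(f"shared frozen params: {frozen:,}")
print(per_task)                             # each task adds only a small trainable fraction
```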
Summary
Abstract:
Visual object tracking aims to localize the target object in each frame of a video given its initial appearance, and spans settings with different input modalities such as RGB, RGB+N (RGB plus natural language), RGB+M (RGB plus segmentation mask), and so on.
Introduction:
Object tracking is essential for applications such as autonomous driving and visual surveillance.
Methodology:
OneTracker consists of a Foundation Tracker, pretrained on RGB tracking data, and a Prompt Tracker, which adapts it to downstream RGB+X tasks through parameter-efficient prompt tuning.
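The two-stage design can be pictured with a short PyTorch sketch. Everything below (class names, dimensions, and the way the prompt is added to the search-region tokens) is an illustrative approximation of the stated idea of a frozen Foundation Tracker plus lightweight Cross Modality Translation (CMT) prompters and a task-specific token prompt (TTP); it is not the released implementation.

```python
import torch
import torch.nn as nn


class FoundationTracker(nn.Module):
    """Stage 1 (pretraining): an RGB-only one-stream transformer tracker (simplified)."""

    def __init__(self, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, heads, batch_first=True) for _ in range(depth)]
        )
        self.box_head = nn.Linear(dim, 4)            # predicts a (cx, cy, w, h) box

    def tokenize(self, frame):
        x = self.patch_embed(frame)                  # (B, C, H/16, W/16)
        return x.flatten(2).transpose(1, 2)          # (B, N, C)

    def forward(self, template, search):
        tokens = torch.cat([self.tokenize(template), self.tokenize(search)], dim=1)
        for layer in self.layers:
            tokens = layer(tokens)
        return self.box_head(tokens.mean(dim=1))


class CMTPrompter(nn.Module):
    """Translates the extra modality (depth, thermal, event, mask, ...) into
    prompt features in the same token space as the RGB search region."""

    def __init__(self, dim=256, x_channels=1):
        super().__init__()
        self.embed = nn.Conv2d(x_channels, dim, kernel_size=16, stride=16)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x_frame):
        p = self.embed(x_frame).flatten(2).transpose(1, 2)
        return self.proj(p)


class PromptTracker(nn.Module):
    """Stage 2 (finetuning): frozen Foundation Tracker + CMT prompter + TTP token.
    Only the prompter and the task token are trained on the RGB+X task."""

    def __init__(self, foundation: FoundationTracker, x_channels=1, dim=256):
        super().__init__()
        self.foundation = foundation
        for p in self.foundation.parameters():
            p.requires_grad = False                  # parameter-efficient tuning
        self.cmt = CMTPrompter(dim, x_channels)
        self.ttp = nn.Parameter(torch.zeros(1, 1, dim))  # task-specific token prompt

    def forward(self, template, search, x_search):
        f = self.foundation
        z, x = f.tokenize(template), f.tokenize(search)
        x = x + self.cmt(x_search)                   # fuse the modality prompt into search tokens
        ttp = self.ttp.expand(x.shape[0], -1, -1)
        tokens = torch.cat([ttp, z, x], dim=1)
        for layer in f.layers:                       # frozen transformer layers
            tokens = layer(tokens)
        return f.box_head(tokens.mean(dim=1))


# Example: adapting to an RGB+D (depth) task; shapes are arbitrary but divisible by 16.
foundation = FoundationTracker()                     # in practice, loaded with RGB-pretrained weights
tracker = PromptTracker(foundation, x_channels=1)
box = tracker(torch.randn(2, 3, 128, 128),           # RGB template
              torch.randn(2, 3, 256, 256),           # RGB search region
              torch.randn(2, 1, 256, 256))           # aligned depth map
print(box.shape)                                     # torch.Size([2, 4])
```

The same frozen Foundation Tracker would be reused for every RGB+X task, with only the CMT prompter and TTP token swapped and trained per task.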
Experiments:
OneTracker outperforms prior models on 6 popular tracking tasks across 11 benchmarks.
Ablation Study:
Both the Cross Modality Translation (CMT) prompters and the Task-Specific Token Prompt (TTP) transformer layers improve the Prompt Tracker's performance.
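For intuition about what these two components add, a quick parameter check on the sketch from the Methodology section is shown below (illustrative only; the actual ablation retrains each variant and measures tracking accuracy).

```python
# Assumes FoundationTracker and PromptTracker from the Methodology sketch are in scope.
tracker = PromptTracker(FoundationTracker(), x_channels=1)

def count(params):
    return sum(p.numel() for p in params)

print("frozen Foundation Tracker params:", count(tracker.foundation.parameters()))
print("trainable CMT prompter params:   ", count(tracker.cmt.parameters()))
print("trainable TTP token params:      ", tracker.ttp.numel())
```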
Benchmark Results:
OneTracker achieves state-of-the-art performance in various tracking scenarios.