Task Indicating Transformer for Task-conditional Dense Predictions
The authors introduce the Task Indicating Transformer (TIT), a task-conditional framework that addresses limitations of prior multi-task learning approaches. By incorporating two modules, the Mix Task Adapter and the Task Gate Decoder, TIT strengthens long-range dependency modeling and multi-scale feature interaction within a transformer architecture.
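As a rough illustration of the task-conditional adapter idea behind such designs, a shared backbone feature can be modulated by a small task-specific low-rank component added residually. The sketch below is an assumption for illustration only: the function names, shapes, and the specific factorization (shared down/up projections around a task-specific inner matrix) are not taken from the paper.

```python
# Hedged sketch of a task-conditioned low-rank adapter (illustrative only;
# not the paper's exact Mix Task Adapter formulation).

def matmul(a, b):
    # Naive matrix multiply: (m x k) @ (k x n) -> (m x n), pure Python.
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def adapter_forward(h, down, task_matrix, up):
    """Apply a residual, task-specific adaptation to shared features.

    h:           (tokens x d) shared backbone features
    down:        (d x r) shared down-projection
    task_matrix: (r x r) task-specific low-rank factor (one per task)
    up:          (r x d) shared up-projection
    Returns h + h @ down @ task_matrix @ up.
    """
    z = matmul(matmul(matmul(h, down), task_matrix), up)
    return [[h[i][j] + z[i][j] for j in range(len(h[0]))]
            for i in range(len(h))]

# Toy example: 2 tokens, feature dim d=2, adapter rank r=1.
h = [[1.0, 2.0], [3.0, 4.0]]
down = [[1.0], [0.0]]       # project onto the first feature
task_matrix = [[0.5]]       # task-specific scaling for this task
up = [[1.0, 0.0]]           # write the adapted signal back
out = adapter_forward(h, down, task_matrix, up)
# out == [[1.5, 2.0], [4.5, 4.0]]
```

Only `task_matrix` would differ across tasks here, which is what keeps the per-task parameter count small relative to the shared backbone.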