Intra-task Mutual Attention-based Vision Transformer for Efficient Few-Shot Learning
An intra-task mutual attention method is proposed that enhances the feature representations of the support and query sets in few-shot learning, enabling the model to leverage both global and local information effectively.
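The core idea of mutual attention between support and query sets can be illustrated with a minimal sketch. The code below is a hypothetical NumPy implementation of bidirectional cross-attention, not the paper's exact architecture: the function names (`cross_attention`, `mutual_attention`) and the residual-style fusion are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(a, b):
    """Features in `a` attend to features in `b`.

    a: (n_a, d) queries; b: (n_b, d) keys/values.
    Returns (n_a, d) context aggregated from b.
    """
    d = a.shape[-1]
    attn = softmax(a @ b.T / np.sqrt(d), axis=-1)  # (n_a, n_b)
    return attn @ b

def mutual_attention(support, query):
    """Hypothetical sketch: each set is enriched with context
    from the other via symmetric cross-attention, so support
    features become query-aware and vice versa."""
    support_enh = support + cross_attention(support, query)
    query_enh = query + cross_attention(query, support)
    return support_enh, query_enh
```

In this sketch, the residual addition preserves the original (local) features while the attention term injects task-level (global) context from the other set; the enhanced features keep the same shapes as their inputs.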