Conditional Prototype Rectification Prompt Learning for Efficient Transfer of Vision-Language Models
Conditional Prototype Rectification Prompt Learning (CPR) integrates textual and visual structural knowledge and exploits unlabeled data to mitigate the biases that arise in few-shot learning, achieving state-of-the-art performance on both few-shot classification and base-to-new generalization benchmarks.
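To make the idea of combining textual and visual knowledge concrete, the sketch below shows a generic prototype-rectification step for a CLIP-style few-shot classifier: visual prototypes are computed as the mean of the few labeled support features per class, then blended with the text (prompt) embeddings via a convex combination before cosine-similarity classification. This is a minimal illustration under assumed inputs, not the paper's actual CPR algorithm; the function names, the blending rule, and the weight `alpha` are all hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale rows to unit length so dot products equal cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def rectified_prototypes(text_feats, support_feats, support_labels, n_classes, alpha=0.5):
    """Blend text embeddings with per-class mean visual features.

    `alpha` (hypothetical) trades off textual vs. visual knowledge.
    """
    # Visual prototype = mean of the support (few-shot) features for each class.
    visual = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])
    # Convex combination of normalized text and visual prototypes.
    proto = alpha * l2_normalize(text_feats) + (1 - alpha) * l2_normalize(visual)
    return l2_normalize(proto)

def predict(query_feats, prototypes):
    """Assign each query to the class with the most similar prototype."""
    sims = l2_normalize(query_feats) @ prototypes.T
    return sims.argmax(axis=1)

# Synthetic stand-ins for CLIP features: unit-ish vectors near per-class centers.
rng = np.random.default_rng(0)
dim, n_classes = 16, 2
centers = np.eye(n_classes, dim)                       # class c centered on basis vector e_c
text_feats = centers + 0.05 * rng.standard_normal((n_classes, dim))
support_feats = np.repeat(centers, 4, axis=0) + 0.05 * rng.standard_normal((8, dim))
support_labels = np.repeat(np.arange(n_classes), 4)
query_feats = centers + 0.05 * rng.standard_normal((n_classes, dim))

protos = rectified_prototypes(text_feats, support_feats, support_labels, n_classes)
preds = predict(query_feats, protos)
```

Blending toward the text prototypes counteracts the sampling bias of tiny support sets, while the visual component adapts the class representation to the target domain; methods in this family additionally refine the prototypes with unlabeled data, which the sketch omits.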