Leveraging Large Language Models to Enhance Low-Shot Image Classification
Large Language Models (LLMs) encode rich world knowledge and can generate class-specific visual descriptions, which can be used to enhance the performance of pre-trained vision-language models such as CLIP on low-shot image classification tasks.
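As a minimal sketch of this general idea (not necessarily the exact method proposed here), the snippet below classifies an image by comparing its CLIP embedding against the averaged CLIP text embeddings of LLM-generated class descriptions. The class names, the descriptions (assumed to have been produced beforehand by an LLM prompted with something like "Describe what a {class name} looks like"), the image path, and the choice of the Hugging Face "openai/clip-vit-base-patch32" checkpoint are all illustrative assumptions.

```python
# Sketch: low-shot/zero-shot classification with CLIP using LLM-generated
# class descriptions as the text prototypes. Descriptions and image path
# are placeholders, not data from the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical descriptions, assumed to come from an LLM.
llm_descriptions = {
    "hen": ["a hen, a plump bird with a small head and a red comb",
            "a hen, which has brown or white feathers and clawed feet"],
    "ostrich": ["an ostrich, a very large bird with a long bare neck",
                "an ostrich, which has long powerful legs and a small head"],
}

@torch.no_grad()
def class_text_embeddings(descriptions):
    """Encode each class's descriptions with CLIP and average them into one prototype per class."""
    prototypes = []
    for cls, sentences in descriptions.items():
        inputs = processor(text=sentences, return_tensors="pt", padding=True)
        feats = model.get_text_features(**inputs)
        feats = feats / feats.norm(dim=-1, keepdim=True)
        prototypes.append(feats.mean(dim=0))
    weights = torch.stack(prototypes)
    return weights / weights.norm(dim=-1, keepdim=True)

@torch.no_grad()
def classify(image_path, descriptions):
    """Return the class whose description prototype is most similar to the image embedding."""
    text_weights = class_text_embeddings(descriptions)
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    img_feat = model.get_image_features(**inputs)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    sims = img_feat @ text_weights.T  # cosine similarities
    return list(descriptions)[sims.argmax().item()]

print(classify("example.jpg", llm_descriptions))
```

Averaging several LLM-generated descriptions per class, rather than using a single hand-written prompt, is one simple way the "knowledge" mentioned above can be injected into CLIP's text side without any additional training.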