Test-Time Adaptation with SaLIP: A Cascade of Segment Anything Model and CLIP for Zero-shot Medical Image Segmentation
We present SaLIP, a unified framework that leverages the combined capabilities of the Segment Anything Model (SAM) and Contrastive Language-Image Pre-training (CLIP) to perform zero-shot organ segmentation in medical images, without relying on domain expertise or annotated data for prompt engineering.
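The cascade can be sketched as follows: SAM proposes class-agnostic region masks, and CLIP ranks crops of those regions against a text description of the target organ to retrieve the relevant ROI. The snippet below is a minimal illustration of that idea, not the paper's implementation; the checkpoint path, model variants, prompt text, and the helper `select_organ_mask` are all illustrative assumptions.

```python
# Minimal sketch of a SAM -> CLIP cascade for zero-shot organ ROI selection.
# The checkpoint path, CLIP variant, and prompt below are hypothetical examples.
import numpy as np
import torch
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from transformers import CLIPModel, CLIPProcessor


def select_organ_mask(image_rgb: np.ndarray, text_prompts: list[str],
                      sam_checkpoint: str = "sam_vit_b_01ec64.pth") -> np.ndarray:
    """Return the SAM-proposed mask whose crop CLIP scores highest for the prompts."""
    # 1) SAM proposes class-agnostic region masks ("everything" mode).
    sam = sam_model_registry["vit_b"](checkpoint=sam_checkpoint)
    proposals = SamAutomaticMaskGenerator(sam).generate(image_rgb)

    # 2) Crop each proposed region so CLIP can score it against the text prompts.
    crops = []
    for p in proposals:
        x, y, w, h = map(int, p["bbox"])  # proposal bounding box in XYWH format
        crops.append(Image.fromarray(image_rgb[y:y + h, x:x + w]))

    # 3) CLIP ranks the crops by image-text similarity; the top crop gives the ROI.
    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = processor(text=text_prompts, images=crops,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = clip(**inputs).logits_per_image  # shape: (num_crops, num_prompts)
    best = int(logits.max(dim=1).values.argmax())
    return proposals[best]["segmentation"]  # boolean mask of the retrieved region


# Example usage (hypothetical file and prompt):
# image = np.array(Image.open("brain_mri_slice.png").convert("RGB"))
# mask = select_organ_mask(image, ["an MRI scan of a human brain"])
```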