A novel forward-only adaptation method that learns prompts via a derivative-free optimizer and aligns activations to the source domain, enabling efficient test-time adaptation on resource-constrained devices without backpropagation.
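The summary above does not specify the optimizer or model, so the following is only a minimal sketch of the general idea: a frozen model is queried with forward passes only, and a prompt vector is tuned by a simple (1+1) random-search strategy (a stand-in for whatever derivative-free optimizer the method actually uses) to pull target-batch activation statistics toward precomputed source statistics. `forward`, `alignment_loss`, and `adapt_prompt` are illustrative names, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen "model": a fixed random linear layer with tanh, standing in
# for a pre-trained network whose weights are never updated.
W = rng.normal(size=(8, 8))

def forward(x, prompt):
    # Forward pass only; no gradients are ever computed.
    return np.tanh((x + prompt) @ W)

# Source-domain activation statistics, assumed precomputed offline.
source_mean = np.zeros(8)

def alignment_loss(prompt, batch):
    # Distance between target-batch mean activations and source statistics.
    acts = forward(batch, prompt)
    return float(np.sum((acts.mean(axis=0) - source_mean) ** 2))

def adapt_prompt(batch, steps=200, sigma=0.1):
    # Derivative-free (1+1) evolution strategy: keep a candidate prompt
    # only if it lowers the alignment loss.
    prompt = np.zeros(8)
    best = alignment_loss(prompt, batch)
    for _ in range(steps):
        cand = prompt + sigma * rng.normal(size=8)
        loss = alignment_loss(cand, batch)
        if loss < best:
            prompt, best = cand, loss
    return prompt, best

# A target batch drawn from a shifted distribution (simulated domain shift).
target_batch = rng.normal(loc=1.5, size=(32, 8))
baseline_loss = alignment_loss(np.zeros(8), target_batch)
prompt, final_loss = adapt_prompt(target_batch)
```

Because every update is a forward pass plus a scalar comparison, this loop needs no backpropagation graph or optimizer state, which is what makes the approach attractive on resource-constrained devices.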
Optimal Transport-guided Visual Prompting (OT-VP) aligns target-domain representations with source-domain representations, enabling Vision Transformer models to adapt to unseen domains without modifying the pre-trained model parameters.
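To make the alignment objective concrete, here is a small sketch of an entropic-regularized optimal transport (Sinkhorn) distance between two feature clouds, of the kind that could serve as the target-to-source alignment signal; the summary does not give OT-VP's exact formulation, so the regularization, weights, and the `sinkhorn_distance` helper are all illustrative assumptions.

```python
import numpy as np

def sinkhorn_distance(X, Y, eps=1.0, iters=200):
    """Entropic-regularized OT cost between two point clouds with uniform weights."""
    # Pairwise squared Euclidean costs between source and target features.
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                      # Gibbs kernel
    a = np.full(len(X), 1.0 / len(X))         # uniform source weights
    b = np.full(len(Y), 1.0 / len(Y))         # uniform target weights
    u = np.ones_like(a)
    for _ in range(iters):                    # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]           # approximate transport plan
    return float((P * C).sum())               # transport cost under that plan

rng = np.random.default_rng(1)
source = rng.normal(size=(64, 4))                  # source-domain features
aligned = rng.normal(size=(64, 4))                 # target features matching the source
shifted = rng.normal(loc=1.5, size=(64, 4))        # target features under domain shift

d_aligned = sinkhorn_distance(source, aligned)
d_shifted = sinkhorn_distance(source, shifted)
```

Minimizing such a distance with respect to the prompt drives the target representation distribution toward the source one, while the pre-trained backbone stays frozen; note that `d_shifted` comes out larger than `d_aligned`, which is why the OT cost is a usable alignment objective.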