Enabling Zero-Shot Generalization on Encoder Models via Statement-Tuning
Statement-Tuning enables encoder models such as RoBERTa to generalize to unseen tasks in zero-shot and few-shot settings by training them to discriminate whether natural language statements are true or false.
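To make the mechanism concrete, the sketch below shows what inference could look like: each candidate label is verbalized as a natural language statement, a binary (true/false) encoder head scores each statement, and the label whose statement scores highest wins. This is a minimal illustration using Hugging Face Transformers, not the paper's reference implementation; the checkpoint name and statement templates are illustrative assumptions, and in practice you would load weights actually fine-tuned with Statement-Tuning rather than a randomly initialized head.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: in real use, replace "roberta-base" with a statement-tuned
# checkpoint; loading the base model here yields an untrained binary head.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # binary head: 0 = false, 1 = true
)
model.eval()

def score_statement(statement: str) -> float:
    """Return the model's probability that a statement is true."""
    inputs = tokenizer(statement, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def classify(text: str, label_templates: dict[str, str]) -> str:
    """Zero-shot classification: verbalize each label as a statement,
    then pick the label whose statement the model deems most likely true."""
    scores = {
        label: score_statement(template.format(text=text))
        for label, template in label_templates.items()
    }
    return max(scores, key=scores.get)

# Example: sentiment analysis recast as truthfulness discrimination.
# These templates are hypothetical, not the paper's exact verbalizations.
templates = {
    "positive": 'The sentiment of "{text}" is positive.',
    "negative": 'The sentiment of "{text}" is negative.',
}
print(classify("The movie was a delight from start to finish.", templates))
```

Because every task reduces to the same binary truthfulness decision, the same fine-tuned head can, under this framing, be reused for unseen tasks simply by writing new statement templates.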