Learning Transferable Representations through Value Explicit Pretraining
Value Explicit Pretraining (VEP) is a method for learning generalizable representations for transfer reinforcement learning. It pretrains an encoder with a self-supervised contrastive loss, enabling improved performance on unseen tasks.
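To make the pretraining step concrete, below is a minimal sketch of contrastive encoder pretraining. The exact VEP objective is not given in this summary, so this assumes an InfoNCE-style loss in which two observations form a positive pair (e.g., because they have similar value estimates, as the method's name suggests) and all other observations in the batch act as negatives; the `Encoder` architecture, `temperature`, and tensor shapes are illustrative.

```python
# Hypothetical sketch of contrastive encoder pretraining; the specific
# positive-pair rule and loss used by VEP may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy convolutional encoder mapping image observations to embeddings."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(embed_dim),  # infers input size on first call
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def info_nce_loss(anchors: torch.Tensor,
                  positives: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE: each anchor's positive is the same-index row of `positives`;
    every other row in the batch serves as a negative."""
    logits = anchors @ positives.t() / temperature   # (B, B) similarities
    targets = torch.arange(anchors.size(0))          # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# Usage sketch: in an actual pipeline, anchor_obs and positive_obs would be
# observation pairs sampled so that paired frames are semantically related
# (for VEP, plausibly frames with similar values across tasks); random
# tensors here just demonstrate the mechanics.
encoder = Encoder()
anchor_obs = torch.randn(16, 3, 84, 84)
positive_obs = torch.randn(16, 3, 84, 84)
loss = info_nce_loss(encoder(anchor_obs), encoder(positive_obs))
loss.backward()
```

The key design choice in any such scheme is the sampling rule that defines positives; the contrastive machinery itself is standard, and transfer quality comes from aligning representations of states that should be treated alike across tasks.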