Combining disentangled representation learning with associative memory enables vision-based reinforcement learning agents to achieve zero-shot generalization to unseen task variations without relying on data augmentation.
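A minimal sketch of the kind of architecture this claim describes, not the source's exact method: a convolutional encoder intended to produce disentangled latent factors (a beta-VAE-style KL penalty, omitted here, would encourage disentanglement during training), combined with a key-value associative memory that recalls content from similar latent states, feeding a policy head. All module names, sizes, and the soft-attention retrieval scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Conv encoder producing a factored latent vector from image observations."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.LazyLinear(latent_dim)  # mean of a factored Gaussian latent

    def forward(self, obs):
        return self.mu(self.conv(obs))

class AssociativeMemory(nn.Module):
    """Content-addressable memory: soft attention over learned key-value slots."""
    def __init__(self, num_slots=128, dim=32):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * 0.1)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * 0.1)

    def forward(self, query):                            # query: (B, dim)
        attn = F.softmax(query @ self.keys.t(), dim=-1)  # similarity-based recall
        return attn @ self.values                        # (B, dim) retrieved content

class Agent(nn.Module):
    """Policy conditioned on disentangled factors plus recalled memory content."""
    def __init__(self, latent_dim=32, num_actions=6):
        super().__init__()
        self.encoder = DisentangledEncoder(latent_dim)
        self.memory = AssociativeMemory(dim=latent_dim)
        self.policy = nn.Sequential(
            nn.Linear(2 * latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, obs):
        z = self.encoder(obs)            # disentangled state factors
        m = self.memory(z)               # content recalled from similar states
        return self.policy(torch.cat([z, m], dim=-1))  # action logits

logits = Agent()(torch.randn(4, 3, 64, 64))  # smoke test on dummy frames
print(logits.shape)                          # torch.Size([4, 6])
```

The intuition for zero-shot transfer here is that a factored latent lets an unseen variation differ in only a few coordinates, so the memory can still retrieve experience from the nearest seen contexts.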
Jointly learning context representations and the policy, rather than training them in separate stages, improves zero-shot generalization in reinforcement learning.
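A minimal sketch of what "joint" training means here, assuming a contextual-MDP setup: a context encoder summarizes a short history of recent transitions into a context vector, and gradients from a single policy objective update the encoder and the policy together. Shapes, names, the mean-pooling aggregation, and the placeholder loss are illustrative assumptions, not the source's exact method.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Permutation-invariant encoder over (s, a, s') transition tuples."""
    def __init__(self, state_dim, action_dim, ctx_dim=16):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(2 * state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, ctx_dim),
        )

    def forward(self, transitions):               # (B, K, 2*state_dim+action_dim)
        return self.phi(transitions).mean(dim=1)  # mean-pool over K transitions

class ContextConditionedPolicy(nn.Module):
    """Policy that consumes the current state plus an inferred context vector."""
    def __init__(self, state_dim=8, action_dim=2, ctx_dim=16):
        super().__init__()
        self.context = ContextEncoder(state_dim, action_dim, ctx_dim)
        self.pi = nn.Sequential(
            nn.Linear(state_dim + ctx_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, state, transitions):
        c = self.context(transitions)  # gradients flow back from the policy loss
        return self.pi(torch.cat([state, c], dim=-1))

# Joint training step: one loss updates both the context encoder and the policy.
policy = ContextConditionedPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
state = torch.randn(4, 8)
history = torch.randn(4, 10, 2 * 8 + 2)  # K=10 recent transitions per episode
action = policy(state, history)
loss = action.pow(2).mean()              # placeholder for an actual RL objective
opt.zero_grad(); loss.backward(); opt.step()
```

The design point is that the context representation is shaped by what the policy actually needs, instead of being fixed by a separate pretraining objective; at test time, the same encoder infers the context of an unseen task variation from a few of its transitions.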