Core Concepts
Learning abstract state representations that preserve the Markov property is crucial for improving the sample efficiency of deep reinforcement learning.
Abstract
The paper introduces a new approach to learning Markov state abstractions for deep reinforcement learning. It addresses the challenge of preserving the Markov property in abstract state representations, which is essential for effective decision-making in complex environments. By combining inverse model estimation with temporal contrastive learning, the method learns representations that capture the underlying structure of the domain and improve sample efficiency. The training objective ensures that the learned abstractions are Markov and avoids representation collapse, without relying on reward information or prediction of ground states. The approach is evaluated on visual gridworld navigation and continuous control benchmarks, where it improves performance over existing representation-learning methods.
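As a rough sketch of how such a combined objective could be assembled (this is not the authors' exact implementation; the network sizes, the helper name `markov_abstraction_loss`, and the in-batch shuffling used for negative sampling are illustrative assumptions), the two terms might be wired together as follows:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps ground observations to abstract states (illustrative MLP)."""
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)

class InverseModel(nn.Module):
    """Predicts the action taken between consecutive abstract states."""
    def __init__(self, latent_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, z, z_next):
        return self.net(torch.cat([z, z_next], dim=-1))

class ContrastiveHead(nn.Module):
    """Scores whether a pair of abstract states is a true temporal transition."""
    def __init__(self, latent_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, z, z_other):
        return self.net(torch.cat([z, z_other], dim=-1)).squeeze(-1)

def markov_abstraction_loss(encoder, inverse_model, contrastive_head,
                            obs, actions, next_obs):
    """Combined inverse-model + temporal-contrastive objective (sketch)."""
    z = encoder(obs)
    z_next = encoder(next_obs)

    # Inverse model term: predict the action from (z_t, z_{t+1}).
    inverse_loss = F.cross_entropy(inverse_model(z, z_next), actions)

    # Temporal contrastive term: real transitions (z_t, z_{t+1}) are positives;
    # next-states shuffled within the batch serve as negatives.
    z_neg = z_next[torch.randperm(z_next.size(0))]
    pos_logits = contrastive_head(z, z_next)
    neg_logits = contrastive_head(z, z_neg)
    logits = torch.cat([pos_logits, neg_logits])
    labels = torch.cat([torch.ones_like(pos_logits),
                        torch.zeros_like(neg_logits)])
    contrastive_loss = F.binary_cross_entropy_with_logits(logits, labels)

    return inverse_loss + contrastive_loss
```

In this sketch, the inverse-model term encourages the abstraction to retain action-relevant information, while the contrastive term distinguishes genuine transitions from mismatched state pairs; together they discourage collapsed representations without using rewards or ground-state reconstruction.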
Stats
Code repository available at https://github.com/camall3n/markov-state-abstractions.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
arXiv:2106.04379v4 [cs.LG] 15 Mar 2024
Quotes
"We introduce a new approach to learning Markov state abstractions."
"Our approach learns abstract state representations that capture the underlying structure of the domain."
"Our method is effective for learning Markov state abstractions that are highly beneficial for decision making."