The article discusses the role of predictive representations, particularly the successor representation (SR) and its extensions, in building intelligent systems. It starts by introducing the reinforcement learning (RL) problem and contrasting model-based and model-free solution methods.
The SR is then presented as a predictive representation that can provide some of the flexibility of model-based approaches while retaining the computational efficiency of model-free methods. The SR captures the expected discounted future occupancy of states, allowing the value function to be computed as a linear function of the SR and the reward function. This enables rapid value computation and adaptation to changes in the reward structure.
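The linearity described above can be made concrete in a few lines. The sketch below builds the SR for a small, illustrative 4-state chain under a fixed policy, using the closed form M = (I - γP)⁻¹, and shows that a change in reward only requires one matrix-vector product to revalue all states; the chain MDP and variable names are assumptions for illustration, not from the article.

```python
import numpy as np

# Minimal SR sketch on an illustrative 4-state chain MDP.
gamma = 0.9
n_states = 4

# Transition matrix P under a fixed policy: state i -> state i+1,
# with the final state absorbing.
P = np.zeros((n_states, n_states))
for s in range(n_states - 1):
    P[s, s + 1] = 1.0
P[-1, -1] = 1.0

# SR: M[s, s'] = expected discounted future occupancy of s' starting
# from s. Under a fixed policy it has the closed form (I - gamma*P)^-1.
M = np.linalg.inv(np.eye(n_states) - gamma * P)

# The value function is linear in the SR: V = M @ r.
r = np.array([0.0, 0.0, 0.0, 1.0])
V = M @ r

# If the reward structure changes, values are recomputed instantly
# from the cached SR -- no relearning of the dynamics is needed.
r_new = np.array([0.0, 1.0, 0.0, 0.0])
V_new = M @ r_new
```

From state 0, `M[0]` assigns occupancy 1, γ, γ², and γ³/(1-γ) to the four states, so `V[0]` under the first reward vector is γ³/(1-γ).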
The article then discusses extensions of the SR, including the successor model (SM), which defines a full probability distribution over future states, and successor features, which generalize the SR from tabular state occupancies to feature-based function approximation. Applications of these predictive representations are covered, including exploration, transfer, hierarchical RL, and multi-agent coordination.
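Successor features can be sketched in the same closed-form style. Assuming rewards are linear in state features, r(s) = φ(s)·w, the successor features ψ(s) predict discounted future features, and values for any new task follow from a new weight vector w alone; the feature map and two-task setup below are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Successor-features (SF) sketch: rewards are assumed linear in
# features, r(s) = phi(s) @ w, so Q-values become psi(s) @ w.
gamma = 0.9
n_states, n_features = 4, 2

# Illustrative feature map phi: one row of features per state.
phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0],
                [0.0, 0.0]])

# Fixed-policy chain dynamics: state i -> i+1, last state absorbing.
P = np.zeros((n_states, n_states))
for s in range(n_states - 1):
    P[s, s + 1] = 1.0
P[-1, -1] = 1.0

# Closed-form SFs under a fixed policy: psi = (I - gamma*P)^{-1} @ phi,
# i.e. the discounted sum of future feature vectors from each state.
psi = np.linalg.inv(np.eye(n_states) - gamma * P) @ phi

# Transfer: a new task only changes the reward weights w; the learned
# psi is reused, so new-task values are one matrix product away.
w_task_a = np.array([1.0, 0.0])
w_task_b = np.array([0.0, 1.0])
V_a = psi @ w_task_a
V_b = psi @ w_task_b
```

The reuse of `psi` across `w_task_a` and `w_task_b` is the transfer mechanism the article attributes to successor features: dynamics knowledge is learned once, task-specific values are cheap.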
The article also reviews evidence from neuroscience and cognitive science suggesting that the brain uses predictive representations akin to the SR for a variety of tasks, including decision making, navigation, and memory. This convergence between artificial and biological intelligence suggests that predictive representations may be a fundamental building block of intelligence.
Key insights distilled from: by Wilka Carval... at arxiv.org, 04-18-2024
https://arxiv.org/pdf/2402.06590.pdf