Core Concepts
Growing Q-Networks (GQN) adaptively increases control resolution from coarse to fine within decoupled Q-learning, reconciling the exploration benefits of coarse discretization during early training with the need for smooth control at convergence.
Abstract
The paper introduces Growing Q-Networks (GQN), a simple discrete critic-only agent that combines the scalability benefits of fully decoupled Q-learning with the exploration benefits of dynamic control resolution. GQN adaptively grows the control resolution from coarse to fine over the course of training, enabling efficient exploration through coarse discretization early on while converging to smooth control policies.
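To make the coarse-to-fine idea concrete, below is a minimal sketch of a growing, nested action discretization. This is an illustration assuming a doubling schedule over a [-1, 1] action range; the `action_bins` helper and its growth rule are hypothetical stand-ins, not the paper's exact scheme.

```python
import numpy as np

def action_bins(level: int, low: float = -1.0, high: float = 1.0) -> np.ndarray:
    """Discrete action values available at a given resolution level.

    Level 0 is coarse bang-bang control {low, high}; each later level
    roughly doubles the resolution while keeping earlier bins nested,
    e.g. 2 -> 3 -> 5 -> 9 values per action dimension. The exact
    schedule here is a hypothetical stand-in for the paper's rule.
    """
    n_bins = 2 if level == 0 else 2 ** level + 1
    return np.linspace(low, high, n_bins)

# Resolution grows coarse-to-fine over the course of training.
for level in range(4):
    print(f"level {level}: {action_bins(level)}")
```

Because each finer grid contains the coarser one, Q-values learned at low resolution remain meaningful estimates when the action set grows.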
The key highlights are:
- Framework for Adaptive Control Resolution:
- GQN adaptively grows the control resolution from coarse to fine within decoupled Q-learning.
- This reconciles coarse exploration during early training with smooth control at convergence, while retaining the scalability of decoupled control (see the decoupled selection sketch after this list).
- Insights into Scalability of Discretized Control:
- The research provides insights into overcoming exploration challenges in soft-constrained continuous control settings via simple discrete Q-learning methods.
- It studies the applicability of discretized control to challenging control scenarios.
- Comprehensive Experimental Validation:
- GQN is validated on a diverse set of continuous control tasks, highlighting the benefits of adaptive control resolution over static DQN variants and recent continuous actor-critic methods.
- GQN performs competitively with continuous control baselines while providing smoother control policies.
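The scalability claim in the framework bullets above rests on decoupled action selection: the critic outputs Q-values per action dimension, and each dimension is maximized independently, so the joint action space is never enumerated. Below is a minimal sketch assuming a `(action_dims, n_bins)` Q-value array and an epsilon-greedy rule; the `select_action` helper is illustrative, not the paper's implementation.

```python
import numpy as np

def select_action(q_values: np.ndarray, bins: np.ndarray, eps: float = 0.1) -> np.ndarray:
    """Decoupled epsilon-greedy selection over per-dimension Q-values.

    q_values: shape (action_dims, n_bins), one row of Q-values per
    action dimension from a shared critic. Each row is maximized
    independently, so selection cost grows linearly with action_dims
    rather than exponentially with the joint action space.
    bins: shape (n_bins,), discrete values at the current resolution.
    """
    dims, n_bins = q_values.shape
    greedy = q_values.argmax(axis=1)              # independent per-dimension argmax
    explore = np.random.rand(dims) < eps          # per-dimension exploration flags
    random_bins = np.random.randint(n_bins, size=dims)
    chosen = np.where(explore, random_bins, greedy)
    return bins[chosen]                           # continuous action vector

# Example: a 6-dimensional action space at the 5-bin resolution level.
q = np.random.randn(6, 5)
print(select_action(q, np.linspace(-1.0, 1.0, 5)))
```

The linear per-dimension cost is what keeps fine final resolutions tractable even for high-dimensional action spaces.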
Stats
This summary does not include key metrics or figures from the paper to support the authors' main arguments.
Quotes
This summary does not include any striking quotes from the paper supporting the authors' main arguments.