Core Concepts
Extremum-Seeking Action Selection (ESA) improves the quality of exploratory actions in policy optimization, reducing the sampling of low-value trajectories and accelerating learning.
Summary
The paper proposes the Extremum-Seeking Action Selection (ESA) method to improve both exploration and exploitation when sampling actions for policy optimization in continuous spaces. ESA follows the strategy of Extremum-Seeking Control (ESC), applying sinusoidal perturbations to the sampled action at each step to steer it toward higher action values while also improving exploration.
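As a rough illustration of this idea, the sketch below applies an ESC-style sinusoidal dither to a sampled action and uses the demodulated response of an assumed action-value estimate to nudge the action toward higher value. The function name `q_fn` and the `amplitude`, `omega`, and `gain` parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def esa_perturb_action(q_fn, state, action, n_steps=10, dt=0.1,
                       amplitude=0.2, omega=5.0, gain=0.5):
    """Minimal sketch of extremum-seeking refinement of a sampled action.

    q_fn(state, action) is assumed to return a scalar estimate of the
    action value (e.g., a critic's Q-value). The action is perturbed
    sinusoidally; the demodulated response approximates the local
    gradient of Q with respect to the action, and the action is moved
    in that direction.
    """
    a = np.asarray(action, dtype=float).copy()
    dim = a.shape[0]
    # Use a distinct dither frequency per action dimension so the
    # per-dimension responses can be separated during demodulation.
    omegas = omega * (1.0 + np.arange(dim))
    for k in range(n_steps):
        t = k * dt
        dither = amplitude * np.sin(omegas * t)
        q = q_fn(state, a + dither)        # probe the (unknown) objective
        grad_est = q * np.sin(omegas * t)  # demodulate: ~ local dQ/da
        a += gain * dt * grad_est          # ascend the gradient estimate
    return a
```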
The key insights are:
- ESC methods can be particularly sample-efficient for locally optimizing unknown objectives, compared with policy gradient methods.
- The scale of the ESA perturbations on the sampled actions needs to be chosen carefully to balance fast local improvement via ESC against reliable policy improvement across all states.
- The ability of ESC methods to track dynamic objectives makes them particularly suitable for continuous-domain problems, shifting the focus from individual states to improving entire trajectories over time.
The authors demonstrate that adding ESA to standard policy optimization algorithms like PPO and SAC can clearly improve the learning performance in various continuous control problems.
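A hedged sketch of how such a refinement step might slot into a standard actor-critic loop is shown below; `policy`, `critic`, and `env` are placeholder objects introduced for illustration, not APIs from the paper or any specific library.

```python
# Hypothetical use inside an actor-critic training loop: refine the
# policy's sampled action with ESA before executing it in the environment.
raw_action = policy.sample(state)                        # e.g., a SAC/PPO actor sample
action = esa_perturb_action(critic.q_value, state, raw_action)
next_state, reward, done, info = env.step(action)        # execute the refined action
```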
Statistics
The paper does not provide any specific numerical data or statistics. It focuses on describing the proposed ESA method and providing high-level comparisons with baseline approaches.
Quotes
The paper does not contain any direct quotes that are particularly striking or that support its key arguments.