Core Concepts
The author explores the use of Tsallis entropy for regularization in linearly solvable MDPs and linear quadratic regulators, aiming to balance exploration and sparsity in control policies.
Summary
The paper applies Tsallis entropy, a one-parameter extension of Shannon entropy, to optimal control, showing through theoretical derivations and numerical examples that the resulting control policies can achieve high entropy while remaining sparse. It formulates Tsallis entropy regularized optimal control problems, derives the corresponding Bellman equations, and works out the theory for linearly solvable Markov decision processes and linear quadratic regulators. The analysis demonstrates that Tsallis entropy regularization can balance exploration against sparsity in the resulting control laws.
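For orientation, below is a standard form of the Tsallis q-entropy and the generic shape of a regularized objective; the paper's exact conventions (scaling, sign, and the precise definition of its "deformed q-entropy") may differ, and the weight λ is a hypothetical regularization parameter:

```latex
% Standard Tsallis q-entropy; recovers Shannon entropy as q -> 1.
S_q(p) = \frac{1}{q-1}\Bigl(1 - \sum_i p_i^{\,q}\Bigr),
\qquad
\lim_{q \to 1} S_q(p) = -\sum_i p_i \ln p_i .

% Generic shape of the regularized control problem: trade expected
% cost against the q-entropy of the policy (\lambda > 0 hypothetical).
\min_{\pi}\; \mathbb{E}\Bigl[\sum_{t=0}^{T-1} c(x_t, u_t)
  - \lambda\, S_q\bigl(\pi(\cdot \mid x_t)\bigr)\Bigr]
```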
Key points include:
- Introduction of Tsallis entropy as a regularization method.
- Application to linearly solvable MDPs and linear quadratic regulators.
- Derivation of Bellman equations for optimal control policies.
- Numerical examples demonstrating high entropy with maintained sparsity (see the sparsemax sketch after this list).
- Discussion of the implications for real-world applications such as robotics.
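The coexistence of high entropy and sparsity can be made concrete in the q = 2 special case, where Tsallis-entropy-regularized action selection reduces to the well-known sparsemax mapping. This reduction is an illustration of the general idea, not a method taken from the paper, and the Q-values below are made up:

```python
import numpy as np

def sparsemax(scores):
    """Euclidean projection of a score vector onto the probability simplex.

    Exact maximizer of  p . scores + H_2(p)  over distributions p, where
    H_2(p) = (1/2) * (1 - sum_i p_i**2) is the Tsallis entropy for q = 2
    (up to the customary 1/2 scaling), i.e. the q = 2 instance of a
    Tsallis-entropy-regularized policy.
    """
    z = np.sort(scores)[::-1]                   # scores in decreasing order
    k = np.arange(1, len(scores) + 1)
    cumsum = np.cumsum(z)
    support = 1 + k * z > cumsum                # actions that keep positive mass
    k_max = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_max   # water-filling threshold
    return np.maximum(scores - tau, 0.0)

# Hypothetical Q-values for 5 actions: three good, two clearly bad.
q_values = np.array([2.0, 1.8, 1.7, 0.2, 0.1])
print(sparsemax(q_values))  # [0.5 0.3 0.2 0.  0. ]
```

Unlike a softmax policy, which assigns positive probability to every action, the sparsemax policy puts exact zeros on poor actions while still spreading mass over the good ones, which is the "high entropy with sparsity" trade-off the paper studies for general q.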
Statistics
In [4], Tsallis entropy is used to regularize optimal transport problems to obtain high-entropy but sparse solutions.
For q = 0.25, the optimal value function is V*(T) = (22.040, 23.040, 25.284, 25.336).
For q = 0.25, C(T)(1) is calculated as 22.991.
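The V*(T) and C(T)(1) figures above come from the paper's own finite-horizon example, whose dynamics and costs are not reproduced here. For orientation only, here is a minimal sketch of the classical Shannon/KL-regularized linearly solvable MDP recursion (Todorov) that the paper's Tsallis version generalizes; the transition matrix P, state costs c, and horizon are all hypothetical:

```python
import numpy as np

# Hypothetical passive dynamics (row-stochastic) and state costs for a
# 4-state chain; NOT the paper's example.
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])
c = np.array([1.0, 2.0, 3.0, 4.0])
T = 10  # horizon

# With the desirability z_t(x) = exp(-V_t(x)), the Shannon/KL-regularized
# Bellman equation becomes linear:  z_t = exp(-c) * (P @ z_{t+1}).
z = np.exp(-c)  # terminal desirability, taking the terminal cost to be c
for _ in range(T):
    z = np.exp(-c) * (P @ z)

V0 = -np.log(z)  # optimal cost-to-go at time 0
print(V0)
# The optimal controlled transition is P(x'|x) * z(x'), renormalized:
# the passive dynamics reweighted by desirability.
```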
Quotes
"The objective is to balance traditional cost minimization with maximization of deformed q-entropy."
"Optimal control policies achieve high entropy while maintaining sparsity."