Core Concepts
Tsallis entropy regularization balances exploration and sparsity in optimal control.
Abstract
Shannon entropy regularization promotes exploration and robustness.
Tsallis entropy is a one-parameter extension of Shannon entropy, applied here to linearly solvable MDPs and LQR problems.
Research focuses on balancing exploration and sparsity in control policies.
Tsallis entropy regularization addresses a limitation of Shannon entropy: Shannon-regularized policies assign nonzero probability to every action and therefore can never be sparse.
The Tsallis-entropy-regularized optimal control (TROC) problem is formulated for discrete-time systems, and the associated Bellman equation is derived.
Optimal control policies are obtained for linearly solvable MDPs and LQR under the TROC framework.
Tsallis entropy regularization enhances exploration while maintaining sparsity.
Tsallis entropy regularization in optimal transport problems is also discussed.
Numerical examples demonstrate the effectiveness of Tsallis entropy regularization.
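The exploration-vs-sparsity trade-off above can be illustrated with a small sketch. For the entropic index q = 2, the Tsallis-regularized policy is known to reduce to a sparsemax-style Euclidean projection onto the probability simplex, which assigns exactly zero probability to low-value actions, whereas the Shannon-regularized (softmax) policy keeps every action's probability strictly positive. The function names below are illustrative, not from the paper.

```python
import numpy as np

def softmax(z):
    """Shannon-entropy-regularized policy: strictly positive everywhere."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex.

    This is the policy induced by Tsallis entropy regularization with
    q = 2 (sparsemax): actions with sufficiently low value get exactly
    zero probability, giving a sparse policy.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]          # sort values in decreasing order
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z_sorted)
    # Find the largest support size k with k * z_(k) > cumsum(z) - 1.
    support = k * z_sorted > cssv - 1
    k_star = k[support][-1]
    tau = (cssv[k_star - 1] - 1.0) / k_star  # threshold for truncation
    return np.maximum(z - tau, 0.0)

z = np.array([2.0, 1.0, -1.0])
print(softmax(z))    # every action keeps nonzero probability
print(sparsemax(z))  # low-value actions are truncated to exactly zero
```

Both outputs are valid probability vectors; only the sparsemax one has exact zeros, which is the sparsity the TROC framework exploits.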
Stats
Shannon entropy regularization is widely adopted in optimal control.
Tsallis entropy is a one-parameter extension of Shannon entropy.
Tsallis entropy is used to regularize linearly solvable MDPs and LQR problems.
Tsallis entropy regularization balances exploration and sparsity in control policies.
The Tsallis-entropy-regularized optimal control problem (TROC) is formulated for discrete-time systems.
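To make the "one-parameter extension" statement concrete, here is a minimal numerical check (not from the paper) of the standard definition S_q(p) = (1 - Σ_i p_i^q)/(q - 1) and its q → 1 limit, which recovers the Shannon entropy H(p) = -Σ_i p_i log p_i:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1), for q != 1."""
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i * log(p_i)."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

p = np.array([0.5, 0.3, 0.2])
# As q -> 1, the Tsallis entropy converges to the Shannon entropy,
# which is why q parameterizes a family that contains Shannon as a limit.
print(tsallis_entropy(p, 1.0001))
print(shannon_entropy(p))
```

The entropic index q is the single tuning knob: values near 1 behave like Shannon regularization (broad exploration), while larger q (e.g. q = 2) induces the sparse policies discussed above.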
Quotes
"Tsallis entropy is a one-parameter extension of Shannon entropy."
"Tsallis entropy regularization balances exploration and sparsity in control policies."
"TROC formulation addresses limitations of Shannon entropy in sparse control policies."