Core Concepts
An improved algorithm for linear mixture MDPs in adversarial settings.
Abstract
The study focuses on reinforcement learning with linear function approximation, unknown transitions, and adversarial losses under bandit feedback.
The proposed algorithm improves the regret bound by leveraging the visit information of all states and handling non-independent noises.
Techniques from dynamic assortment problems are bridged into RL theory to provide new insights.
Comparison of regret bounds with previous works.
Detailed problem setup, algorithm components, and regret guarantee.
Stats
"Our result strictly improves the previous best-known e O(dS2√ K + √ HSAK) result in Zhao et al. (2023a) since H ≤ S holds by the layered MDP structure."
"Our algorithm attains e O(d √ HS3K + √ HSAK) regret, strictly improving the e O(dS2√ K + √ HSAK) regret of Zhao et al. (2023a) since H ≤ S by the layered MDP structure."
"Our innovative use of techniques from dynamic assortment problems to mitigate estimation errors in RL theory is novel and may provide helpful insights for future research."
Quotes
"Our advancements are primarily attributed to (i) a new least square estimator for the transition parameter that leverages the visit information of all states, as opposed to only one state in prior work, and (ii) a new self-normalized concentration tailored specifically to handle non-independent noises."
"Our algorithm is similar to that of Zhao et al. (2023a): we first estimate the unknown transition parameter and construct corresponding confident sets."