Core Concepts
Decision Transformer (DT), a sequence modeling approach to offline reinforcement learning, often gets stuck imitating the sub-optimal trajectories present in its training data. This paper proposes a novel method, Diffusion-Based Trajectory Branch Generation (BG), to enhance DT's performance by expanding the dataset with generated trajectory branches that lead to higher returns, thus enabling DT to learn better policies.
Liu, Z., Qian, L., Liu, Z., Wan, L., Chen, X., & Lan, X. (2024). Enhancing Decision Transformer with Diffusion-Based Trajectory Branch Generation. arXiv preprint arXiv:2411.11327.
This research paper aims to address a limitation of Decision Transformer (DT) in offline reinforcement learning: because its sequence modeling objective imitates the trajectories in the dataset rather than stitching segments from different trajectories into better ones, DT tends to converge to sub-optimal behavior. The authors propose to enhance DT's performance by expanding the dataset with generated trajectory branches that lead to higher returns, as sketched below.
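The following is a minimal, hypothetical sketch of the dataset-expansion idea, not the authors' implementation: for each trajectory, a generative model samples a branch from an intermediate state, and the branch is kept only if it improves on the return of the original continuation. The names generate_branch, expand_dataset, and branch_point are assumptions for illustration; in the paper's method the branch generator would be a trained diffusion model rather than the random placeholder used here.

```python
import numpy as np


def estimated_return(rewards):
    """Undiscounted return of a reward sequence."""
    return float(np.sum(rewards))


def generate_branch(state, horizon, rng):
    """Placeholder for a diffusion-based branch generator.

    In the paper's method this would be a trained diffusion model sampling a
    higher-return continuation from `state`; here random (state, action,
    reward) tuples stand in so the sketch runs end to end.
    """
    return [
        (state + rng.normal(size=state.shape), rng.normal(size=2), rng.random())
        for _ in range(horizon)
    ]


def expand_dataset(trajectories, branch_point=0.5, seed=0):
    """Augment trajectories with generated branches that improve the return."""
    rng = np.random.default_rng(seed)
    augmented = list(trajectories)
    for traj in trajectories:
        split = int(len(traj) * branch_point)
        prefix, suffix = traj[:split], traj[split:]
        branch = generate_branch(prefix[-1][0], horizon=len(suffix), rng=rng)
        # Keep the branch only if it yields a higher return than the original
        # continuation, so DT is trained on trajectories with better outcomes.
        branch_ret = estimated_return([r for _, _, r in branch])
        suffix_ret = estimated_return([r for _, _, r in suffix])
        if branch_ret > suffix_ret:
            augmented.append(prefix + branch)
    return augmented


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = [
        [(rng.normal(size=3), rng.normal(size=2), rng.random()) for _ in range(10)]
        for _ in range(5)
    ]
    print(f"{len(data)} original -> {len(expand_dataset(data))} augmented trajectories")
```

The augmented trajectories would then be fed to standard return-to-go conditioned DT training; the selection step is what biases the expanded dataset toward higher-return behavior.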