"Reinforcement Learning algorithms that learn from human feedback (RLHF) need to be efficient in terms of statistical complexity, computational complexity, and query complexity."
"Our algorithm further minimizes the query complexity through a novel randomized active learning procedure."
"We aim to design new RL algorithms that can learn from preference-based feedback and can be efficient in statistical complexity (i.e., regret), computational complexity, and query complexity."
Quotations
"Despite achieving sublinear worst-case regret, these algorithms are computationally intractable even for simplified models such as tabular Markov Decision Processes (MDPs)."
"In this work, we aim to design new RL algorithms that can learn from preference-based feedback and can be efficient in statistical complexity (i.e., regret), computational complexity, and query complexity."
Source: arxiv.org — "Making RL with Preference-based Feedback Efficient via Randomization"