Core Concepts
RLHF and DPO are compared as paradigms for learning from human preferences, with a focus on their statistical guarantees and the practical implications of their differences.
Abstract
This paper compares the RLHF and DPO paradigms for learning from human preferences. RLHF first learns a reward model and then optimizes a policy against it, while DPO optimizes the policy parameters directly on preference data. The study analyzes the statistical guarantees, sample complexity, and convergence rates of both approaches. Key findings concern the impact of the reward and policy dimensions, the sample size, the regularization temperature β, and the role of mismatch coefficients when the ground-truth reward is not realizable.
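For orientation, here is a minimal sketch (not code from the paper) of the two objectives under the standard Bradley-Terry preference model: the pairwise reward-learning loss used in RLHF's first phase, and the DPO loss, which folds the implicit reward into the policy. The function names, tensor arguments, and toy inputs are placeholders assumed for illustration.

```python
# Minimal, illustrative sketch (not the paper's code) of the two objectives,
# written over pairwise preference comparisons where y_w is preferred to y_l.
import torch
import torch.nn.functional as F

def reward_learning_loss(r_w: torch.Tensor, r_l: torch.Tensor) -> torch.Tensor:
    """RLHF phase 1: fit a reward model via Bradley-Terry maximum likelihood.

    r_w, r_l: reward-model scores of the preferred / dispreferred responses.
    """
    # Negative log-likelihood of observing y_w preferred over y_l.
    return -F.logsigmoid(r_w - r_l).mean()

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta: float = 0.1) -> torch.Tensor:
    """DPO: optimize the policy directly on preferences, with no explicit reward model.

    logp_*: policy log-probabilities of the two responses;
    ref_logp_*: reference-policy log-probabilities; beta is the KL temperature.
    """
    # Each response's implicit reward is beta * (log pi - log pi_ref).
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

# Toy usage with random stand-ins for model outputs.
n = 8
print(reward_learning_loss(torch.randn(n), torch.randn(n)))
print(dpo_loss(torch.randn(n), torch.randn(n), torch.randn(n), torch.randn(n)))
```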
The authors provide theoretical results for exact optimization settings in contextual bandits and deterministic Markov decision processes (MDPs), analyzing the suboptimality gap induced by each paradigm under various conditions. The discussion then extends to approximate optimization settings, offering insights into gradient-descent procedures for the reward-learning and policy-optimization phases.
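As a concrete, purely illustrative instance of the approximate-optimization setting, the snippet below runs plain gradient descent on the reward-learning objective with a linear reward model r_θ(x, y) = ⟨θ, φ(x, y)⟩. The feature dimension, step size, and synthetic preference data are assumptions made for the example, not values from the paper.

```python
# Purely illustrative: approximate optimization of RLHF's reward-learning phase
# by gradient descent, with a linear reward r_theta(x, y) = <theta, phi(x, y)>.
# The dimensions, step size, and synthetic preference features are assumptions.
import torch
import torch.nn.functional as F

d_R, n, steps, lr = 4, 256, 200, 0.1
phi_w = torch.randn(n, d_R)   # features of the preferred responses
phi_l = torch.randn(n, d_R)   # features of the dispreferred responses
theta = torch.zeros(d_R, requires_grad=True)

for _ in range(steps):
    margin = (phi_w - phi_l) @ theta      # r_theta(x, y_w) - r_theta(x, y_l)
    loss = -F.logsigmoid(margin).mean()   # Bradley-Terry negative log-likelihood
    loss.backward()
    with torch.no_grad():
        theta -= lr * theta.grad
        theta.grad.zero_()

print(theta)  # estimated reward parameters; a policy-optimization phase would follow
```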
The implications suggest that RLHF outperforms DPO when the reward dimension is smaller than the policy dimension, or when the sample size is small. DPO's suboptimality gap decays asymptotically with larger samples but is disproportionately affected by the regularization temperature β. The study also covers extensions to MDPs with linear rewards and log-linear policies.
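Concretely, ignoring the condition-number factors and constants in the bounds listed under Stats below, the leading error terms scale as √(d_R / n) for RLHF and d_P / (β n) for DPO. Equating the two gives a rough crossover at n ≈ d_P² / (β² d_R): for sample sizes below this threshold (or for small β), the RLHF term is the smaller one, while for larger n DPO's faster 1/n decay dominates.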
Future directions include analyzing general function approximation classes for policies, conducting large-scale empirical comparisons, and extending the analysis to broader MDP scenarios.
Stats
Suboptimality gap bounds under exact optimization:
RLHF, contextual bandits: G(π̂_θ) = D(π̂_θ) + Θ(Λ_R √(d_R / n))
DPO, contextual bandits: G(π̃_θ) = D(π̃_θ) + Θ(d_P / (β n))
RLHF, deterministic MDPs: G(d_ρ^θ̂) = D(d_ρ^θ̂) + Θ(Λ′_R √(d_R / n))
DPO, deterministic MDPs: G(d_ρ^θ̃) = D(d_ρ^θ̃) + Θ(Λ′_M d_M / (β n))
Quotes
"RLHF incurs a constant additional error when ground-truth rewards are not realizable."
"DPO retains its asymptotically decaying gap by tuning the temperature accordingly."
"The discrepancy between reward and policy dimensions plays a crucial role in relative performances."