Core Concept
DreamReward proposes a framework for improving text-to-3D generation by learning from human preference feedback, producing high-fidelity 3D results that are better aligned with the input prompt.
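To make the preference-learning idea concrete, below is a minimal sketch, not DreamReward's actual code, of how a reward model can be fit to pairwise human comparisons with a Bradley-Terry style loss (the standard RLHF recipe). The class name, feature dimensions, and random stand-in features are illustrative assumptions; in the paper the reward model scores rendered views of a 3D asset against the text prompt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreferenceRewardModel(nn.Module):
    """Toy reward model: scores a (prompt, rendered-view) feature pair.

    Both inputs are stand-in feature vectors here so the sketch stays
    self-contained; a real system would use encoders of the prompt and
    of rendered views of the generated 3D asset.
    """
    def __init__(self, text_dim: int = 512, image_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        # Scalar reward per (prompt, render) pair.
        return self.mlp(torch.cat([text_feat, image_feat], dim=-1)).squeeze(-1)


def pairwise_preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss: push the preferred sample's reward above the rejected one's."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()


if __name__ == "__main__":
    model = PreferenceRewardModel()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Dummy batch standing in for annotated comparison pairs:
    # features of the human-preferred render vs. the rejected one per prompt.
    text = torch.randn(8, 512)
    chosen = torch.randn(8, 512)
    rejected = torch.randn(8, 512)

    loss = pairwise_preference_loss(model(text, chosen), model(text, rejected))
    loss.backward()
    optimizer.step()
    print(f"pairwise preference loss: {loss.item():.4f}")
```

Once trained on such comparisons, the reward model's score can be used as a feedback signal to steer or fine-tune the 3D generator toward human-preferred outputs.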
Statistics
25k expert comparisons collected via a systematic annotation pipeline.
Quotes
"RLHF has shown success in improving generative models."
"DreamReward successfully aligns text-to-3D generation with human intention."