DreamReward proposes a comprehensive framework that improves text-to-3D models by learning from human preference feedback, producing high-fidelity 3D results that are better aligned with human intent.
GVGEN introduces a novel diffusion-based framework for efficient 3D Gaussian generation from text input, demonstrating superior performance in both qualitative and quantitative assessments.
DreamControl proposes a two-stage framework for text-to-3D generation that first optimizes NeRF scenes as a 3D self-prior and then generates high-quality content with control-based score distillation, improving geometry consistency and texture fidelity.