AWOL leverages language to control existing parametric 3D models, enabling the generation of novel animal and tree shapes that were never seen during training.
Diffusion2 leverages the geometric-consistency and temporal-smoothness priors of pretrained multi-view and video diffusion models to directly sample dense multi-view, multi-frame images, which are then used to optimize a continuous 4D representation.
DreamGaussian proposes an efficient 3D content generation framework that leverages generative Gaussian splatting and texture refinement to produce high-quality textured meshes in just a few minutes, significantly accelerating the optimization-based 2D lifting approach.
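To make the Gaussian-splatting representation behind DreamGaussian concrete, here is a toy 2D illustration (not the paper's pipeline): a scene is a set of Gaussians, each with a position, scale, opacity, and color, and an image is rendered by compositing them front to back. All function and parameter names are hypothetical.

```python
import numpy as np

def splat(gaussians, h=32, w=32):
    # Render isotropic 2D Gaussians by accumulating their alpha-weighted
    # colors front to back (a toy analogue of Gaussian splatting).
    ys, xs = np.mgrid[0:h, 0:w]
    img = np.zeros((h, w, 3))
    remaining = np.ones((h, w, 1))  # per-pixel transmittance
    for (cx, cy, sigma, opacity, color) in gaussians:
        g = opacity * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        alpha = g[..., None]
        img += remaining * alpha * np.array(color)  # composite this Gaussian
        remaining *= 1.0 - alpha                    # occlude what lies behind
    return img

img = splat([(16, 16, 4.0, 0.9, (1.0, 0.0, 0.0)),   # red Gaussian at center
             (10, 20, 3.0, 0.8, (0.0, 0.0, 1.0))])  # blue Gaussian offset
```

In the actual method such Gaussian parameters are the optimization variables, updated against 2D diffusion guidance; the differentiable renderer plays the role of `splat` above.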
Mesh2NeRF directly derives accurate radiance fields from textured 3D meshes, providing robust 3D supervision for training neural radiance field models and improving performance in various 3D generation tasks.
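The core idea of deriving a radiance field from known geometry can be sketched with a minimal volume renderer. This toy example (all names hypothetical, and an analytic sphere standing in for a textured mesh, so it is not Mesh2NeRF's actual derivation) defines a density field that is high inside the surface and near zero outside, then alpha-composites samples along a ray exactly as a NeRF-style renderer would:

```python
import numpy as np

def density(points, center=np.zeros(3), radius=0.5, sharpness=50.0):
    # Smooth occupancy: large density inside the surface, ~0 outside.
    d = np.linalg.norm(points - center, axis=-1) - radius  # signed distance
    return 40.0 / (1.0 + np.exp(sharpness * d))            # sigmoid falloff

def render_opacity(origin, direction, n_samples=128, near=0.0, far=2.0):
    # Sample points along the ray and alpha-composite their densities.
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    sigma = density(pts)
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return np.sum(trans * alpha)  # accumulated opacity of the ray

hit = render_opacity(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
miss = render_opacity(np.array([2.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
```

Because the geometry is known exactly, such per-ray opacities (and, with a texture lookup, colors) can serve as dense ground-truth supervision for a radiance-field model, which is the kind of 3D supervision the summary above refers to.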
Make-Your-3D enables fast, consistent subject-driven 3D content generation: from a single image, it produces high-fidelity, subject-specific 3D content with text-driven modifications in just 5 minutes.