The paper proposes 3Doodle, a method to generate expressive and view-consistent 3D sketches from multi-view images of objects. The key idea is to represent the sketches using a compact set of 3D geometric primitives, including view-independent 3D Bézier curves and view-dependent superquadrics.
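To make the representation concrete, below is a minimal numerical sketch (not the authors' code) of the two primitive types: sampling a cubic Bézier curve from its four 3D control points and evaluating the standard superquadric inside-outside function. The function and parameter names are illustrative, and the paper's exact parameterization (curve degree, superquadric pose, etc.) may differ.

```python
import numpy as np

def bezier_points(control_pts, n_samples=64):
    """Sample a cubic Bezier curve B(t) = sum_i C(3, i) * (1 - t)^(3 - i) * t^i * P_i."""
    P = np.asarray(control_pts, dtype=float)        # (4, 3) control points
    t = np.linspace(0.0, 1.0, n_samples)[:, None]   # (n, 1) curve parameter
    basis = np.hstack([(1 - t) ** 3,
                       3 * (1 - t) ** 2 * t,
                       3 * (1 - t) * t ** 2,
                       t ** 3])                      # (n, 4) Bernstein basis
    return basis @ P                                 # (n, 3) points along the stroke

def superquadric_inside_outside(xyz, scales, eps):
    """Standard superquadric implicit function; equals 1 on the surface."""
    x, y, z = (np.abs(np.asarray(xyz, dtype=float)) / np.asarray(scales)).T
    e1, e2 = eps                                     # shape exponents
    return (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)

# Example: one stroke and a rounded, box-like superquadric.
stroke = bezier_points([[0, 0, 0], [0.3, 0.5, 0.0], [0.7, 0.5, 0.0], [1, 0, 0]])
values = superquadric_inside_outside(stroke, scales=(1.0, 1.0, 1.0), eps=(0.3, 0.3))
```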
The view-independent 3D Bézier curves capture the object's essential 3D feature lines, while the view-dependent strokes, rendered as contours of superquadrics, convey the smooth outline of the object's volume as seen from each viewpoint. The authors introduce a fully differentiable rendering pipeline that optimizes the parameters of these 3D primitives by minimizing perceptual losses (LPIPS and a CLIP-based loss), producing sketches that faithfully capture the semantic and structural characteristics of the input objects.
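For intuition, here is a highly simplified sketch of such an optimization loop, not the authors' implementation. `render_strokes`, `sample_random_view`, and `get_target_image` are hypothetical placeholders standing in for the paper's differentiable stroke renderer and data pipeline; the LPIPS and CLIP usage follows those libraries' public APIs, and the loss weighting is illustrative rather than taken from the paper.

```python
import torch
import torch.nn.functional as F
import lpips   # pip install lpips
import clip    # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
lpips_fn = lpips.LPIPS(net="vgg").to(device)
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()
for p in list(lpips_fn.parameters()) + list(clip_model.parameters()):
    p.requires_grad_(False)   # the loss networks are fixed; only primitives are optimized

def to_clip_input(img):
    # Resize to CLIP's 224x224 input and apply CLIP's normalization constants.
    img = F.interpolate(img, size=(224, 224), mode="bilinear", align_corners=False)
    mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=img.device).view(1, 3, 1, 1)
    std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=img.device).view(1, 3, 1, 1)
    return (img - mean) / std

# Parameters being optimized: control points of the 3D Bezier strokes and the
# superquadric parameters (counts and layout here are arbitrary for illustration).
curve_ctrl = torch.randn(16, 4, 3, device=device, requires_grad=True)
sq_params  = torch.randn(4, 8, device=device, requires_grad=True)
optimizer  = torch.optim.Adam([curve_ctrl, sq_params], lr=1e-2)

for step in range(2000):
    view   = sample_random_view()                         # hypothetical camera sampler
    target = get_target_image(view).to(device)            # (1, 3, H, W) photo in [0, 1]
    sketch = render_strokes(curve_ctrl, sq_params, view)  # hypothetical differentiable renderer

    # LPIPS expects inputs scaled to [-1, 1].
    loss_lpips = lpips_fn(sketch * 2 - 1, target * 2 - 1).mean()

    # CLIP-based semantic term: align image embeddings of sketch and photo.
    emb_sketch = clip_model.encode_image(to_clip_input(sketch))
    emb_target = clip_model.encode_image(to_clip_input(target))
    loss_clip = 1.0 - torch.cosine_similarity(emb_sketch, emb_target).mean()

    loss = loss_lpips + 0.5 * loss_clip                   # weighting is illustrative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```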
The proposed approach can generate abstract sketches that are more compact and expressive than those produced by recent sketch-generation methods. It requires no dataset of paired images and sketches, nor does it rely on reconstructing detailed 3D models such as meshes or neural radiance fields. The authors demonstrate that 3Doodle can faithfully represent a wide variety of objects with a small number of 3D primitives, and that the generated sketches remain view-consistent across viewpoints.
Source: Changwoon Ch..., arxiv.org, 04-30-2024, https://arxiv.org/pdf/2402.03690.pdf