Core Concepts
The authors propose DragTex, a generative point-based texture editing method that lets users edit textures directly on 3D meshes, addressing the difficulty of keeping generated textures consistent across views during editing.
Summary
DragTex uses a diffusion model for locally consistent texture editing: it fine-tunes the decoder to reduce reconstruction errors and trains LoRA with multi-view images. Experimental results demonstrate that the method generates high-quality textures on 3D meshes through point-based dragging interactions.
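The LoRA component mentioned above fine-tunes a frozen network by adding a trainable low-rank update to selected weight matrices. A minimal numpy sketch of that mechanism is shown below; it is illustrative only (the paper reports rank 16 on Stable Diffusion layers, while the dimensions and scaling here are chosen for readability and are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one linear layer (d_out x d_in); W is never updated.
d_out, d_in, rank = 8, 8, 4  # DragTex reportedly uses rank 16; smaller here for illustration
W = rng.standard_normal((d_out, d_in))

# LoRA factors: only A and B are trained.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))  # B starts at zero, so the adapter is initially a no-op

alpha = rank  # common scaling convention: effective update is (alpha / rank) * B @ A

def lora_forward(x):
    # y = W x + (alpha / rank) * B (A x) -- the low-rank path adds the learned edit.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0, the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Because only `A` and `B` (rank × d_in and d_out × rank entries) are trained, the adapter is far cheaper to fit on a handful of multi-view renders than full fine-tuning would be.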
Statistics
"Our method involves optimizing the training strategy, fusion of noisy latent images, and reconstructing details outside the drag region."
"The experimental results show that our method effectively achieves dragging textures on 3D meshes and generates plausible textures."
"We employ Stable Diffusion v1-5 from the DragDiffusion pipeline with configurations like 50 steps for DDIM and fusion."
"LoRA is trained with a rank of 16 and a learning rate of 2 × 10⁻⁴."
"For single-view training, the number of training steps was set to 200."
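The 50-step DDIM setting above refers to the standard deterministic DDIM update (η = 0) used for inversion and sampling in the DragDiffusion pipeline. A minimal numpy sketch of one such step follows; the ᾱ values and toy latents are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ddim_step(x_t, eps, abar_t, abar_prev):
    """One deterministic DDIM update (eta = 0)."""
    # Recover the predicted clean sample from the noisy latent...
    x0_pred = (x_t - np.sqrt(1.0 - abar_t) * eps) / np.sqrt(abar_t)
    # ...then re-noise it to the previous timestep's noise level.
    return np.sqrt(abar_prev) * x0_pred + np.sqrt(1.0 - abar_prev) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)        # toy "clean latent"
eps = rng.standard_normal(4)       # noise the model would predict
abar_t, abar_prev = 0.5, 0.8       # cumulative alpha-bar products (illustrative values)

x_t = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps
x_prev = ddim_step(x_t, eps, abar_t, abar_prev)
# Given the exact noise, the step lands on the clean sample re-noised at level t-1.
assert np.allclose(x_prev, np.sqrt(abar_prev) * x0 + np.sqrt(1.0 - abar_prev) * eps)
```

Running 50 of these steps with a noise-prediction network in place of the fixed `eps` gives the full sampling (or, reversed, inversion) trajectory.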
Quotes
"We propose a generative point-based 3D mesh texture editing method called DragTex."
"Our method effectively achieves dragging textures on 3D meshes and generates plausible textures."