Diffusion models trained on large-scale datasets have shown remarkable progress in image synthesis. However, they struggle with diverse low-level vision tasks that require detail preservation. The Diff-Plugin framework addresses this limitation by enabling a pre-trained diffusion model to generate high-fidelity results across a variety of low-level tasks. It consists of a Task-Plugin module with dual branches that provide task-specific priors, and a Plugin-Selector that chooses among different Task-Plugins based on text instructions. Extensive experiments demonstrate the superiority of Diff-Plugin over existing methods, particularly in real-world scenarios.
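The sketch below illustrates this setup, assuming a PyTorch-style interface: a `TaskPlugin` with two branches that extract a global task prior and a spatial detail prior from the input image, and a `PluginSelector` that matches a text instruction against the registered plugins. All module names, layer choices, dimensions, and the similarity-based selection strategy are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of the Diff-Plugin idea described above (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskPlugin(nn.Module):
    """Dual-branch module that distills task-specific priors from the input image."""
    def __init__(self, in_dim=3, prior_dim=256):
        super().__init__()
        # Branch 1: global task prior (hypothetical pooling-based encoder).
        self.global_branch = nn.Sequential(
            nn.Conv2d(in_dim, prior_dim, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(prior_dim, prior_dim),
        )
        # Branch 2: spatial prior that keeps image details (hypothetical conv stack).
        self.spatial_branch = nn.Sequential(
            nn.Conv2d(in_dim, prior_dim, 3, padding=1), nn.SiLU(),
            nn.Conv2d(prior_dim, prior_dim, 3, padding=1),
        )

    def forward(self, image):
        # Returns (global prior vector, spatial prior map) used to condition
        # the frozen pre-trained diffusion model.
        return self.global_branch(image), self.spatial_branch(image)

class PluginSelector(nn.Module):
    """Picks the Task-Plugin whose task description best matches a text instruction."""
    def __init__(self, text_encoder, plugins: dict):
        super().__init__()
        self.text_encoder = text_encoder       # assumed: a CLIP-style encoder, str -> tensor
        self.plugins = nn.ModuleDict(plugins)  # task name -> TaskPlugin
        # Pre-compute a normalized text embedding per task name (assumed matching scheme).
        with torch.no_grad():
            self.task_embeds = {k: F.normalize(text_encoder(k), dim=-1) for k in plugins}

    def forward(self, instruction: str) -> TaskPlugin:
        # Cosine similarity between the instruction and each task embedding.
        query = F.normalize(self.text_encoder(instruction), dim=-1)
        scores = {k: float((query * v).sum()) for k, v in self.task_embeds.items()}
        best_task = max(scores, key=scores.get)
        return self.plugins[best_task]
```

A hypothetical usage would register one `TaskPlugin` per task (e.g. deraining, desnowing, highlight removal), then call the selector with a free-form instruction to route the input image to the matching plugin before denoising with the frozen diffusion backbone.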
Key ideas extracted from the source content by Yuhao Liu, Fa... at arxiv.org, 03-04-2024
https://arxiv.org/pdf/2403.00644.pdf