LoMOE is a novel framework for localized multi-object editing that performs edits to multiple objects within an image in a single pass. It leverages foreground masks and corresponding text prompts to achieve high-quality, seamless image editing with minimal artifacts compared to existing methods. The approach combines a cross-attention loss and a background preservation loss to ensure realistic edits while maintaining the integrity of the original image; a rough sketch of such a combined objective appears below.
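The following PyTorch-style sketch illustrates how two such losses might be combined. The function and argument names (`lomoe_edit_loss`, `attn_maps`, `lambda_xa`, `lambda_bg`) and the exact loss forms are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def lomoe_edit_loss(latents, latents_orig, attn_maps, attn_maps_orig,
                    mask, lambda_xa=1.0, lambda_bg=1.0):
    """Illustrative combination of a cross-attention loss and a
    background preservation loss (names and weights are assumptions).

    latents, latents_orig: current / reference latents, shape (B, C, H, W)
    attn_maps, attn_maps_orig: cross-attention maps from the denoiser
    mask: foreground edit mask at latent resolution, shape (B, 1, H, W)
    """
    # Cross-attention term: keep the attention structure close to the
    # reconstruction branch so edits stay localized and coherent.
    l_xa = F.mse_loss(attn_maps, attn_maps_orig)

    # Background preservation term: penalize latent drift outside the
    # edit mask, so unedited regions stay faithful to the original.
    bg = 1.0 - mask
    l_bg = F.mse_loss(latents * bg, latents_orig * bg)

    return lambda_xa * l_xa + lambda_bg * l_bg
```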
Diffusion models have shown an exceptional ability to generate prompt-conditioned image edits, but previous approaches that rely on textual prompts alone struggle to edit specific objects precisely. LoMOE aims to overcome these limitations with a novel framework for zero-shot localized multi-object editing built on a multi-diffusion process.
The method draws inspiration from compositional generative models and uses pre-trained Stable Diffusion 2.0 as the base generative model. It manipulates the diffusion trajectory within the specific regions earmarked for editing: each region is steered by a prompt with localized influence, while a global prompt drives reconstruction of the overall image, as sketched below.
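Below is a simplified sketch of one denoising step in this spirit, assuming a diffusers-style UNet and scheduler interface. The mask-based blending of per-region noise predictions (and the helper name `multi_diffusion_step`) is an illustrative reading of the multi-diffusion idea; it omits details such as classifier-free guidance and the handling of overlapping masks.

```python
import torch

@torch.no_grad()
def multi_diffusion_step(latents, t, unet, scheduler,
                         global_emb, local_embs, masks):
    """One denoising step in which each masked region follows its own
    local prompt while a global prompt reconstructs the full image.

    local_embs: list of text embeddings, one per edit region
    masks: list of binary masks at latent resolution, same length
    """
    # Global branch: predict noise for the whole image under the
    # global (reconstruction) prompt.
    eps_global = unet(latents, t, encoder_hidden_states=global_emb).sample

    # Start from the global prediction, then overwrite each edit region
    # with the noise predicted under its local prompt. (Assumes disjoint
    # masks; overlapping regions would need weighted averaging.)
    eps = eps_global.clone()
    for emb, m in zip(local_embs, masks):
        eps_local = unet(latents, t, encoder_hidden_states=emb).sample
        eps = m * eps_local + (1.0 - m) * eps

    # Advance the scheduler with the blended noise estimate.
    return scheduler.step(eps, t, latents).prev_sample
```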
Experiments against existing state-of-the-art methods show that LoMOE improves on them in both image editing quality and inference speed. A new benchmark dataset, LoMOE-Bench, is introduced for evaluating multi-object editing performance.
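How such editing performance might be measured can be sketched with two common metrics: CLIP similarity between the edited image and the target prompt, and a masked background-preservation error. These metric choices, and all names below, are assumptions for illustration, not LoMOE-Bench's documented protocol.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Hypothetical evaluation of a single benchmark example.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image, prompt):
    """Cosine similarity between CLIP image and text embeddings."""
    inputs = proc(text=[prompt], images=image,
                  return_tensors="pt", padding=True)
    out = clip(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum().item()

def background_mse(edited, original, mask):
    """Mean squared error outside the edit mask.
    mask: 1.0 inside edited regions, 0.0 in the background."""
    bg = 1.0 - mask
    return ((((edited - original) * bg) ** 2).sum() / bg.sum()).item()
```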
Key insights extracted from the paper by Goirik Chakr... at arxiv.org, 03-04-2024: https://arxiv.org/pdf/2403.00437.pdf