Core Concepts
Cross-attention guidance can enable zero-shot control over object shape, position, and movement in text-to-video diffusion models, despite the noisy cross-attention maps that current models produce.
Abstract
This paper investigates the role of cross-attention layers in text-to-video (T2V) diffusion models and their potential for enabling zero-shot video editing. The authors explore two approaches: forward guidance and backward guidance.
Forward guidance, in which the cross-attention maps are edited directly, faces limitations due to size and shape mismatches between the source and target regions, as well as cross-attention overlap between different tokens. The authors show that backward guidance, which biases the cross-attention maps through backpropagation, is a more promising approach for video editing.
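To make the mechanism concrete, here is a minimal PyTorch sketch of how backward guidance can be realized; it is an illustration under stated assumptions, not the authors' implementation. A guidance loss measures how much of one token's cross-attention mass falls inside a target mask, and its gradient is backpropagated into the noisy latents at each denoising step. The `unet.stored_attn` attribute (a hypothetical hook that stashes the current step's attention maps), the exact loss form, and the hyperparameters are assumptions for illustration.

```python
import torch

def guidance_loss(attn_maps: torch.Tensor,
                  target_masks: torch.Tensor,
                  token_idx: int) -> torch.Tensor:
    """Energy pulling one text token's attention toward a target region.

    attn_maps:    (frames, H*W, tokens) softmaxed cross-attention weights
    target_masks: (frames, H*W) binary masks of where the token should attend
    """
    attn = attn_maps[..., token_idx]                # (frames, H*W)
    inside = (attn * target_masks).sum(dim=-1)      # attention mass inside the mask
    total = attn.sum(dim=-1) + 1e-8                 # total attention mass per frame
    return ((1.0 - inside / total) ** 2).mean()     # 0 when all mass lies in the mask


def backward_guidance_step(latents, unet, text_emb, timestep,
                           target_masks, token_idx,
                           step_size=0.3, n_iters=5):
    """Bias the noisy latents so the chosen token attends inside the target masks."""
    for _ in range(n_iters):
        latents = latents.detach().requires_grad_(True)
        # Hypothetical hook: the UNet is assumed to stash this step's
        # cross-attention maps in `unet.stored_attn` during the forward pass.
        _ = unet(latents, timestep, encoder_hidden_states=text_emb)
        loss = guidance_loss(unet.stored_attn, target_masks, token_idx)
        grad = torch.autograd.grad(loss, latents)[0]
        latents = latents - step_size * grad        # gradient step on the latents
    return latents.detach()
```

Forward guidance would instead overwrite the stored attention maps directly during the forward pass, which is where the size/shape mismatch and token-overlap issues noted above arise.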
The paper highlights the current limitations of T2V models, particularly the noisy cross-attention maps they produce compared to text-to-image models. To work around this, the authors manually construct the target cross-attention maps for their experiments, acknowledging the need for future T2V models with better cross-attention quality.
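Because the targets are hand-crafted rather than extracted from the model, they can be as simple as one box per frame whose position over time prescribes the desired motion. The sketch below is an illustrative assumption rather than the paper's procedure; it builds such masks in the (frames, H*W) layout expected by the `backward_guidance_step` sketch above.

```python
import torch

def make_target_masks(n_frames: int, height: int, width: int,
                      box_hw: tuple, start_xy: tuple, end_xy: tuple) -> torch.Tensor:
    """Binary (n_frames, height*width) masks with a box sliding from start_xy to end_xy."""
    bh, bw = box_hw
    masks = torch.zeros(n_frames, height, width)
    for f in range(n_frames):
        t = f / max(n_frames - 1, 1)                        # 0 -> 1 across the clip
        x = round((1 - t) * start_xy[0] + t * end_xy[0])    # interpolate box position
        y = round((1 - t) * start_xy[1] + t * end_xy[1])
        masks[f, y:y + bh, x:x + bw] = 1.0                  # token should attend here
    return masks.flatten(1)                                 # (frames, H*W)

# Example: a 4x4 box moving left to right across a 16x16 attention grid over 8 frames.
target_masks = make_target_masks(8, 16, 16, box_hw=(4, 4),
                                 start_xy=(1, 6), end_xy=(11, 6))
```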
The results demonstrate control over object size and motion through backward guidance, even when the original video lacks elements described in the text prompt. The authors also report observations on perspective and motion control, noting a trade-off between strictly following the target cross-attention maps and maintaining temporal consistency.
Overall, this work provides insights into the challenges and opportunities of adapting cross-attention-based editing techniques from the image domain to the video domain, paving the way for future advancements in text-to-video generation and editing.