TrailBlazer introduces a novel approach to controlling the trajectory, size, and prompt-driven behavior of synthesized subjects in diffusion-based video generation, without requiring low-level control signals or additional training.
BIVDiff is a training-free framework that bridges task-specific image diffusion models with general text-to-video diffusion models, enabling flexible and efficient video synthesis across a range of tasks.
Translation-based video-to-video synthesis aims to transform videos between distinct domains while preserving temporal continuity and underlying content features, enabling applications such as video super-resolution, colorization, and segmentation.
TrackDiffusion introduces a novel video generation framework with fine-grained, trajectory-conditioned motion control, addressing the challenge of precise subject motion in video synthesis.