This paper proposes Generalizable Implicit Motion Modeling (GIMM), a new method for video frame interpolation that learns the complex motion patterns in videos to produce high-quality intermediate frames at arbitrary timesteps.
This research paper introduces GIMM, a novel approach to video frame interpolation that leverages generalizable implicit neural representations for superior motion modeling, enabling the generation of high-quality intermediate frames at arbitrary timesteps.
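The key idea of querying motion at arbitrary timesteps can be illustrated with a coordinate-based network: a small MLP maps a pixel coordinate, a normalized timestep t, and a motion latent to a 2-D flow vector. This is only a minimal sketch of the general technique, assuming an illustrative architecture; the layer sizes, the latent dimension, and the `query_flow` interface are hypothetical and not taken from the GIMM paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    """Initialize a small MLP; layer sizes are illustrative, not GIMM's."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# Implicit motion model: (x, y, t, latent) -> 2-D flow at time t.
# In a full system the latent would come from an encoder over the two
# input frames; here it is random for demonstration.
D_LATENT = 8
params = mlp_init([3 + D_LATENT, 64, 64, 2])

def query_flow(coords, t, latent):
    """Query per-pixel flow at an arbitrary timestep t in [0, 1]."""
    n = coords.shape[0]
    inp = np.concatenate([coords,
                          np.full((n, 1), t),
                          np.tile(latent, (n, 1))], axis=1)
    return mlp_forward(params, inp)

coords = np.stack(np.meshgrid(np.linspace(0, 1, 4),
                              np.linspace(0, 1, 4)), -1).reshape(-1, 2)
latent = rng.normal(size=D_LATENT)
flow_mid = query_flow(coords, 0.5, latent)  # flow field at t = 0.5
print(flow_mid.shape)  # one 2-D flow vector per queried pixel
```

Because t is a continuous input rather than a discrete index, the same network can be evaluated at any intermediate time, which is what enables arbitrary-timestep interpolation.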
Framer is a novel video frame interpolation framework that allows for both user-interactive and automated generation of smooth and visually appealing transitions between two images by leveraging the power of large-scale pre-trained video diffusion models and point trajectory control.
VFIMamba leverages the strengths of Selective State Space Models (S6), particularly their efficiency and global receptive field, to achieve state-of-the-art performance in video frame interpolation, especially for high-resolution videos.
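The "global receptive field with linear cost" property of selective state space models comes from a recurrent scan whose gates depend on the input. The sketch below is a heavily simplified, sequential S6-style recurrence, assuming a diagonal state transition; the projections `B_proj`/`C_proj` and all shapes are illustrative and do not reproduce VFIMamba's actual implementation.

```python
import numpy as np

def selective_scan(x, A, B_proj, C_proj):
    """
    Simplified S6-style recurrence (illustrative, not VFIMamba's code).
    The input-dependent gates B_t and C_t are what make the scan
    'selective'. x: (T, d) sequence of token features.
    """
    T, d = x.shape
    h = np.zeros(d)
    ys = []
    for t in range(T):
        B_t = x[t] @ B_proj      # input-dependent input gate
        C_t = x[t] @ C_proj      # input-dependent output gate
        h = A * h + B_t * x[t]   # diagonal state transition carries history
        ys.append(C_t * h)
    return np.stack(ys)

rng = np.random.default_rng(1)
d = 4
x = rng.normal(size=(6, d))      # 6 tokens, e.g. flattened patch features
A = np.full(d, 0.9)              # per-channel decay
B_proj = rng.normal(0, 0.1, (d, d))
C_proj = rng.normal(0, 0.1, (d, d))
y = selective_scan(x, A, B_proj, C_proj)
print(y.shape)
```

Each output mixes information from all earlier tokens through the running state `h`, yet the scan is linear in sequence length, which is why this family of models scales well to high-resolution frames.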
The proposed Motion-Aware Latent Diffusion Model (MADIFF) effectively incorporates inter-frame motion priors between the target interpolated frame and the conditional neighboring frames to generate visually smooth and realistic interpolated video frames, significantly outperforming existing approaches.
An efficient video frame interpolation framework that achieves state-of-the-art performance with clear improvements while requiring far fewer computational resources.
This paper introduces PerVFI, a novel perception-oriented video frame interpolation paradigm that tackles blur and ghosting artifacts through an asymmetric synergistic blending module and a conditional normalizing flow-based generator.
VIDIM, a generative model for video interpolation, creates short videos given a start and end frame by using cascaded diffusion models to generate the target video at low resolution and then at high resolution, enabling high-fidelity results even for complex, nonlinear, or ambiguous motions.
This paper proposes a comprehensive benchmark for evaluating video frame interpolation methods. The benchmark comprises a carefully designed synthetic test dataset that satisfies the linear-motion assumption, a consistent set of error metrics, and an in-depth analysis of interpolation quality with respect to per-pixel attributes such as motion magnitude and occlusion.
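The kind of per-pixel-attribute analysis such a benchmark performs can be sketched as follows: compute a standard error metric (here PSNR), then bin per-pixel error by ground-truth motion magnitude. This is a minimal sketch assuming synthetic random data; the bin edges and the `error_by_motion` helper are hypothetical, not the benchmark's actual protocol.

```python
import numpy as np

def psnr(pred, gt, peak=1.0):
    """Peak signal-to-noise ratio for images in [0, peak]."""
    mse = np.mean((pred - gt) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def error_by_motion(pred, gt, flow, bins=(0, 2, 8, 32, np.inf)):
    """Bin mean absolute error by ground-truth motion magnitude (pixels)."""
    err = np.abs(pred - gt)
    mag = np.linalg.norm(flow, axis=-1)
    out = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (mag >= lo) & (mag < hi)
        if mask.any():
            out[f"[{lo},{hi})"] = float(err[mask].mean())
    return out

# Synthetic stand-ins for an interpolated frame, ground truth, and flow.
rng = np.random.default_rng(2)
gt = rng.random((32, 32))
pred = np.clip(gt + rng.normal(0, 0.05, gt.shape), 0, 1)
flow = rng.normal(0, 5, (32, 32, 2))

print(psnr(pred, gt))
print(error_by_motion(pred, gt, flow))
```

Stratifying error this way exposes failure modes (e.g. large motions or occluded regions) that a single aggregate score would hide.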