LVC-LGMC: Joint Local and Global Motion Compensation for Learned Video Compression
Core Concepts
Proposes a joint local and global motion compensation (LGMC) module for learned video compression, addressing the limitation that existing models capture only local correlations.
Abstract:
Existing video compression models focus on local contexts, neglecting global correlations.
The proposed LGMC integrates a flow network and cross-attention for joint local and global motion compensation.
Introduction:
Learned video compression addresses the challenge posed by ever-increasing video data volume.
Existing models employ a flow network or deformable convolution (DCN) for motion estimation, followed by a motion codec.
Proposed LVC-LGMC Method:
Combines flow-based local motion compensation with attention-based global motion compensation.
Multi-scale motion compensation is adopted for more accurate motion.
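The core idea can be illustrated with a minimal NumPy sketch (an illustration only, not the paper's implementation): local compensation warps reference features along an estimated flow field, while global compensation lets every position of the current frame attend to all positions of the reference frame via cross-attention. The nearest-neighbor warp and the additive fusion here are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def warp_nearest(ref, flow):
    # Local compensation: sample reference features at flow-displaced positions.
    # flow[y, x] = (dx, dy); nearest-neighbor sampling keeps the sketch simple.
    H, W, C = ref.shape
    out = np.zeros_like(ref)
    for y in range(H):
        for x in range(W):
            sy = min(max(int(round(y + flow[y, x, 1])), 0), H - 1)
            sx = min(max(int(round(x + flow[y, x, 0])), 0), W - 1)
            out[y, x] = ref[sy, sx]
    return out

def cross_attention(cur_feat, ref_feat):
    # Global compensation: each current-frame position attends to ALL
    # reference-frame positions (scaled dot-product cross-attention).
    H, W, C = cur_feat.shape
    q = cur_feat.reshape(-1, C)
    kv = ref_feat.reshape(-1, C)
    attn = softmax(q @ kv.T / np.sqrt(C), axis=-1)
    return (attn @ kv).reshape(H, W, C)

def joint_compensation(cur_feat, ref_feat, flow):
    local_ctx = warp_nearest(ref_feat, flow)
    global_ctx = cross_attention(cur_feat, ref_feat)
    return local_ctx + global_ctx  # hypothetical fusion; the paper's may differ
```

Because attention compares every pair of positions, the global branch can capture long-range correlations (e.g., large displacements) that local flow warping misses.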
Experiments:
Trained on the Vimeo-90k dataset, optimized with a rate-distortion loss.
LVC-LGMC outperforms baseline DCVC-TCM in rate-distortion performance.
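Rate-distortion training means jointly minimizing distortion D (e.g., MSE against the original frame) and rate R (bits per pixel), weighted by a Lagrange multiplier λ. A minimal sketch (the function name and the λ·D + R convention are illustrative, not taken from the paper):

```python
import numpy as np

def rd_loss(frame, recon, rate_bits, lam):
    # Distortion D: mean squared error between original and reconstruction.
    mse = float(np.mean((frame - recon) ** 2))
    # Rate R: estimated bits divided by pixel count -> bits per pixel (bpp).
    h, w = frame.shape[:2]
    bpp = rate_bits / (h * w)
    # Lagrangian objective: larger lambda favors quality over rate.
    return lam * mse + bpp
```

Sweeping λ across training runs traces out the rate-distortion curve used to compare codecs.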
Analyses and Discussions:
Bit allocation analysis shows that LVC-LGMC spends fewer bits on motion representation than the baseline.
Ablation studies demonstrate the importance of global context for improved performance.
Conclusion:
LGMC enhances learned video compression by capturing both local and global redundancies effectively.
Stats
"The proposed LVC-LGMC reduces 10% bit-rates on MCL-JCV test sequences."
"The parameter numbers of the proposed LVC-LGMC and DCVC-TCM are 14.09M and 10.71M, respectively."
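Bitrate savings like the 10% figure above are conventionally reported as Bjøntegaard delta rate (BD-rate): fit log-rate as a polynomial function of quality (PSNR) for both codecs and compare the average log-rate over the quality range both curves cover. A NumPy sketch of the standard calculation (an assumption for illustration, not the paper's evaluation code):

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average percent bitrate change of `test` vs `anchor` at equal PSNR."""
    # Fit cubic polynomials of log-rate as a function of PSNR.
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    # Integrate over the PSNR interval covered by both curves.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a, int_t = np.polyint(p_a), np.polyint(p_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    # Back to the linear rate domain: negative means the test codec saves bits.
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0
```

A BD-rate of -10% means the test codec needs 10% fewer bits than the anchor for the same quality.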
Quotes
"The proposed method significantly boosts the model performance."
"Our LVC-LGMC has significant improvements over baseline DCVC-TCM."