Core Concepts
Efficient low-latency stereo video compression using neural networks.
Abstract
The rise of new video modalities such as virtual reality (VR) and autonomous vehicles (AVs) increases demand for efficient multi-view video compression.
Existing stereo video compression methods are difficult to parallelize and suffer from high runtime cost.
Low-Latency neural codec for Stereo video Streaming (LLSS) introduces a bidirectional feature shifting module for efficient encoding.
LLSS processes left and right views in parallel, reducing latency and improving rate-distortion (R-D) performance.
LLSS outperforms existing neural and conventional codecs on common stereo video benchmarks.
Contributions include a novel codec architecture, bidirectional shift module, and thorough experiments showcasing efficiency.
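The bidirectional shift idea above can be sketched as follows: each view's feature map donates a slice of channels to the other view, so both branches see cross-view information while still being processed in parallel. This is an illustrative sketch only; the function name, the channel-slice split, and the `shift_frac` parameter are assumptions for exposition, not details taken from the LLSS paper.

```python
import numpy as np

def bidirectional_shift(feat_left, feat_right, shift_frac=0.25):
    """Exchange a fraction of channels between the two views' feature maps.

    feat_left, feat_right: arrays of shape (C, H, W).
    The first `shift_frac` of the channels from each view is swapped into
    the other view, giving each branch cross-view context without an
    explicit disparity search. (Hypothetical sketch, not the paper's exact
    module.)
    """
    c = feat_left.shape[0]
    k = int(c * shift_frac)  # number of channels to exchange
    out_left = feat_left.copy()
    out_right = feat_right.copy()
    # Swap the leading k channels between the two views.
    out_left[:k], out_right[:k] = feat_right[:k].copy(), feat_left[:k].copy()
    return out_left, out_right

# Toy example: 8-channel feature maps, one per view.
L = np.zeros((8, 4, 4))
R = np.ones((8, 4, 4))
L2, R2 = bidirectional_shift(L, R, shift_frac=0.25)
# The first 2 channels of each output now come from the opposite view.
```

Because the exchange is a fixed channel swap rather than a learned warp, both views can be encoded simultaneously, which is what enables the parallel, low-latency processing described above.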
Quotes
"LLSS processes left and right views in parallel, minimizing latency."
"LLSS substantially improves R-D performance compared to existing codecs."