Core Concepts
Investigating human perception of VR videos in the wild is crucial for quality assessment.
Abstract
The study investigates how people perceive the quality of user-generated VR videos, characterizes their viewing behaviors, and develops a blind video quality assessment (VQA) model. The VRVQW database comprises diverse user-generated videos with authentic distortions. In a psychophysical experiment, participants' scanpaths and quality scores were recorded under two different viewing conditions, and the results show that viewing conditions have a significant impact on perceived quality. A blind VQA model built on a pseudocylindrical representation of the 360-degree content outperforms existing models.
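To illustrate the pseudocylindrical idea, below is a minimal Python sketch that resamples an equirectangular (ERP) frame row by row using the sinusoidal projection, one member of the pseudocylindrical family. The function name `erp_to_pseudocylindrical` and the choice of the sinusoidal map are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def erp_to_pseudocylindrical(erp_frame: np.ndarray) -> list:
    """Resample each row of an equirectangular (ERP) frame so its width is
    proportional to cos(latitude), i.e. the sinusoidal member of the
    pseudocylindrical family. Rows near the poles receive fewer samples,
    which reduces the oversampling inherent to ERP."""
    height, width = erp_frame.shape[:2]
    rows = []
    for i in range(height):
        # Latitude of this row, from +pi/2 (north pole) to -pi/2 (south pole).
        lat = (0.5 - (i + 0.5) / height) * np.pi
        # Target width for this latitude; keep at least one sample per row.
        w = max(1, int(round(width * np.cos(lat))))
        # Nearest-neighbour resample of the row to the target width.
        cols = np.linspace(0, width - 1, w).round().astype(int)
        rows.append(erp_frame[i, cols])
    return rows  # ragged list: one variable-width row per latitude

if __name__ == "__main__":
    # Toy 8x16 grayscale ERP frame as a usage example.
    frame = np.arange(8 * 16, dtype=np.float32).reshape(8, 16)
    pc_rows = erp_to_pseudocylindrical(frame)
    print([row.shape[0] for row in pc_rows])  # row widths shrink toward the poles
```

Shrinking the per-row sample count toward the poles counteracts the latitude-dependent stretching of ERP, so a downstream quality model operates on a more spatially uniform representation of the sphere.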
Stats
The VRVQW database contains 502 user-generated videos.
139 participants rated video quality under two viewing conditions.
The proposed blind VQA model outperforms existing methods on the VRVQW database.