This study explores the effect of sharpness features on Blind Video Quality Assessment (BVQA) models, which aim to predict the perceived quality of a video without access to the pristine reference version.
Existing video quality datasets suffer from the "easy dataset" problem: they can be solved by simple blind image quality assessment models and therefore fail to properly challenge current video quality assessment models.
Deep learning-based video quality assessment can be optimized through knowledge distillation and model compression to create a lightweight yet high-performing metric.
The author presents RankDVQA-mini, a lightweight video quality assessment method obtained through knowledge distillation and model compression, which retains strong performance at substantially reduced computational complexity.
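Knowledge distillation, as mentioned above, trains a small student model to mimic the temperature-softened output distribution of a larger teacher. The sketch below is a minimal, generic illustration of this soft-target loss (in the style of Hinton et al.), not the actual RankDVQA-mini training objective; all function names are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits; higher
    # temperatures produce softer, more informative distributions.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # KL divergence between the teacher's and student's softened
    # distributions, scaled by T^2 to keep gradient magnitudes
    # comparable across temperatures.
    p = softmax(teacher_logits, temperature)  # teacher (target)
    q = softmax(student_logits, temperature)  # student (prediction)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * temperature ** 2
```

In practice this soft-target term is combined with a task loss (here, a quality-regression loss) in a weighted sum, so the compressed student benefits from both the ground-truth labels and the teacher's richer output signal.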
The author proposes a modular BVQA model that accounts for the impact of spatial resolution and frame rate on video quality, achieving superior performance through the use of quality rectifiers and tailored training strategies.