Key Concepts
This research paper introduces a novel multi-task deep learning framework for no-reference image quality assessment (NR-IQA) that outperforms existing methods by leveraging high-frequency image information and a distortion-aware network.
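The paper does not include code; purely as an illustration of the multi-task idea described above, the sketch below wires a shared backbone to a quality-regression head and an auxiliary distortion-classification head in PyTorch. All names, layer sizes, and the joint loss (MultiTaskIQANet, num_distortion_types, MSE plus cross-entropy) are assumptions for demonstration, not the authors' implementation.

```python
# Hypothetical sketch of a multi-task NR-IQA network: a shared backbone feeds
# both a quality-regression head and a distortion-classification head.
# Layer sizes and names are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class MultiTaskIQANet(nn.Module):
    def __init__(self, num_distortion_types: int = 24):
        super().__init__()
        # Shared convolutional backbone (stand-in for the paper's backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task 1: scalar quality score (regression).
        self.quality_head = nn.Linear(64, 1)
        # Task 2: distortion-type logits (auxiliary classification).
        self.distortion_head = nn.Linear(64, num_distortion_types)

    def forward(self, x):
        feats = self.backbone(x)
        return self.quality_head(feats).squeeze(-1), self.distortion_head(feats)

# Joint training objective: quality regression plus distortion classification.
model = MultiTaskIQANet()
images = torch.randn(4, 3, 224, 224)      # dummy image batch
mos = torch.rand(4)                        # dummy mean-opinion scores
dist_labels = torch.randint(0, 24, (4,))   # dummy distortion-type labels
pred_q, pred_d = model(images)
loss = nn.functional.mse_loss(pred_q, mos) + nn.functional.cross_entropy(pred_d, dist_labels)
loss.backward()
```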
Statistics
The proposed method's PLCC and SRCC are 3.0% and 3.8% higher, respectively, than those of the second-best method on the CSIQ dataset.
The proposed method's PLCC and SRCC are 3.0% and 3.2% higher, respectively, than those of the best existing method on the TID2013 dataset.
On the LIVE dataset, the proposed method's SRCC differs from that of the best method by only 0.1%.
The proposed method achieves the best results on the TID2013 dataset, with a PLCC of 0.916 and an SRCC of 0.897.
The proposed method achieves a PLCC of 0.928 and an SRCC of 0.919 on the KONIQ dataset.
On JPEG compression and Fast Fading Rayleigh distortions, the proposed method achieves SRCC/PLCC of 0.974/0.988 and 0.947/0.945, respectively.
The proposed method achieves the best performance on four distortion types in the CSIQ dataset: Gaussian white noise, JPEG compression, JPEG2000 compression, and additive Gaussian pink noise.
The proposed method performs exceptionally well on 20 of the 24 distortion types in the TID2013 dataset.
The proposed method outperforms the second-best methods on impulse noise, quantization noise, and comfort noise by 5.1%, 5.9%, and 8.5%, respectively.
Quotations
"Existing methods have not explicitly exploited texture details, which significantly influence the image quality."
"To further address the above problems and to efficiently improve the generalization of IQA models, many recent studies have explored multi-task strategy."
"Since the high frequency information reflects the texture and details of the image, HVS pays more attention to the high frequency content of the image."