Adversarial attacks can significantly degrade the performance of no-reference image quality assessment metrics. This study investigates the effectiveness of various adversarial purification methods in defending against such attacks and restoring the original metric scores.
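The summary above names purification without showing the mechanism. As a loose illustration only (the paper studies learned purification methods, not this toy), a purifier maps a perturbed input back toward clean statistics before the metric scores it; a mean-filter purifier on a 1-D "image" sketches the idea:

```python
def purify(pixels, k=1):
    """Toy purification: local mean filtering (illustrative stand-in
    for the learned purifiers evaluated in the study)."""
    n = len(pixels)
    out = []
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out

# A clean flat signal with an alternating adversarial perturbation:
clean = [0.5] * 8
adv = [0.5 + (0.1 if i % 2 == 0 else -0.1) for i in range(8)]
purified = purify(adv)
# After purification the signal lies closer to the clean one,
# so a score computed on it is less distorted by the attack.
```

Real purifiers (e.g., diffusion-based denoisers) play the same role as the mean filter here: shrink the adversarial perturbation before the NR-IQA model sees the image.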
The proposed YOTO network can effectively perform both full-reference (FR) and no-reference (NR) image quality assessment tasks using a single unified architecture, outperforming existing task-specific models.
This study establishes a large-scale database, AIGCOIQA2024, to assess human visual preferences for AI-generated omnidirectional images from the perspectives of quality, comfortability, and correspondence. The database is used to analyze human preference characteristics and conduct a benchmark experiment evaluating state-of-the-art IQA models.
PromptIQA adapts directly to new assessment requirements at inference time without fine-tuning, outperforming state-of-the-art methods.
This work improves the adversarial robustness of no-reference image quality assessment models through gradient norm regularization.
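As a hedged sketch of the general technique (not this paper's exact training objective), gradient norm regularization adds a penalty on the norm of the scorer's input gradient, so small input perturbations cannot move the predicted quality score far. A toy scorer with finite-difference gradients illustrates the loss shape:

```python
import math

def score(x):
    # Hypothetical stand-in for a differentiable NR-IQA scorer.
    return math.tanh(sum(v * v for v in x) / len(x))

def grad_norm(f, x, eps=1e-5):
    """Finite-difference estimate of ||grad_x f(x)||_2
    (frameworks would use autograd instead)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return math.sqrt(sum(v * v for v in g))

def regularized_loss(f, x, y, lam=0.1):
    """Task loss (squared error to human score y) plus the
    input-gradient-norm penalty that encourages robustness."""
    return (f(x) - y) ** 2 + lam * grad_norm(f, x)
```

Minimizing the penalty term flattens the scorer locally around training inputs, which is what limits the effect of small adversarial perturbations.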
QualiCLIP introduces a quality-aware image-text alignment strategy that enhances CLIP's ability to generate accurate quality-aware image representations.
PICNIQ is a pairwise comparison framework for image quality assessment that addresses domain-shift and uncertainty challenges in blind IQA (BIQA).
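PICNIQ's exact formulation is not reproduced here; as a generic sketch of the pairwise-comparison idea, a Bradley-Terry fit is a standard way to turn pairwise preference counts into a scalar quality scale:

```python
def bradley_terry(wins, n, iters=200):
    """Fit Bradley-Terry scores from a pairwise win-count matrix
    via the standard MM (minorize-maximize) iteration.

    wins[i][j] = number of times image i was preferred over image j.
    Returns strengths normalized to sum to 1; higher = better quality.
    """
    p = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            total_wins = sum(wins[i][j] for j in range(n) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new.append(total_wins / denom if denom > 0 else p[i])
        s = sum(new)
        p = [v / s for v in new]
    return p
```

Given subjective preference counts among three images, the fit recovers a consistent quality ranking; frameworks like PICNIQ build on this kind of comparison model while additionally handling uncertainty and domain shift.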
Contemporary NR-IQA models can be compared using MAP estimation in diffusion latents, providing insights into their relative strengths and weaknesses in perceptual optimization.