This paper introduces a self-supervised constraint framework, SSC-SR, to enhance the performance of existing image super-resolution (SR) models. The key highlights are:
The authors revisit the learning process of SR models and identify that while smooth areas are easily super-resolved, complex regions with rich edges or textures pose greater challenges due to the ill-posed nature of the task.
SSC-SR employs a dual asymmetric framework consisting of an online SR network, a target SR network whose weights are updated via an exponential moving average (EMA) of the online network's weights, and a projection head. This setup enables a self-supervised consistency loss that compares the online network's projected output with the target network's output.
The self-supervised constraint specifically targets and refines areas of uncertainty encountered during the training process, stabilizing the representation of smooth areas and emphasizing complex regions.
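The two mechanisms described above, the EMA update of the target network and the consistency loss between the two branches, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the decay value, the mean-squared distance, and the function names are assumptions for the sake of the example.

```python
import numpy as np

def ema_update(target_params, online_params, decay=0.999):
    """Update the target network's parameters as an exponential moving
    average (EMA) of the online network's parameters.
    The decay of 0.999 is a hypothetical default; the paper's setting
    may differ."""
    return [decay * t + (1.0 - decay) * o
            for t, o in zip(target_params, online_params)]

def consistency_loss(projected_online, target_output):
    """Self-supervised consistency loss: mean squared difference between
    the online branch's projected output and the target branch's output.
    (The exact distance used in SSC-SR may differ.)"""
    return float(np.mean((projected_online - target_output) ** 2))

# Toy usage: one EMA step moves each target weight a small fraction
# of the way toward the corresponding online weight.
online = [np.ones((2, 2))]
target = [np.zeros((2, 2))]
target = ema_update(target, online, decay=0.9)
```

Because the target branch changes slowly, it provides a stable regression target for the online branch, which is the usual motivation for asymmetric EMA designs of this kind.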
Comprehensive experiments demonstrate that retrained versions of various SR models, including EDSR, RCAN, NLSN, SwinIR, and HAT, consistently achieve measurable improvements across benchmark datasets when integrated with the proposed SSC-SR framework.
Ablation studies corroborate the effectiveness of the EMA strategy, the choice of loss function, and the projection head design in the SSC-SR framework.
Overall, the authors present a versatile and effective self-supervised constraint paradigm that can be easily integrated with existing SR models to enhance their performance, particularly in complex image regions.
Key insights distilled from: Gang Wu, Junj... at arxiv.org, 04-02-2024
https://arxiv.org/pdf/2404.00260.pdf