A novel Multi-Teacher Knowledge Distillation (MTKD) framework exploits the complementary strengths of multiple teacher models to boost the performance of compact student networks for image super-resolution.
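The paper does not spell out its loss here, but the general idea behind multi-teacher distillation can be sketched as follows: the student is trained against both the ground-truth high-resolution target and a consensus (here, a simple average) of the teachers' outputs. The function name, the averaging scheme, and the `alpha` weighting are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def mtkd_loss(student_out, teacher_outs, ground_truth, alpha=0.5):
    """Hypothetical multi-teacher distillation objective (sketch only):
    pull the student toward the average of the teachers' outputs while
    also matching the ground-truth target. `alpha` trades distillation
    against supervised reconstruction."""
    teacher_avg = np.mean(np.stack(teacher_outs), axis=0)
    distill = np.mean((student_out - teacher_avg) ** 2)     # L2 to teacher consensus
    supervise = np.mean((student_out - ground_truth) ** 2)  # L2 to HR target
    return alpha * distill + (1 - alpha) * supervise
```

Real MTKD methods often weight teachers adaptively rather than averaging them uniformly; the uniform average here is only the simplest instance of the idea.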
This paper presents a novel knowledge distillation framework, MiPKD, that transfers the teacher model's prior knowledge to the student at both the feature and block levels, narrowing the capacity gap between them and enabling efficient image super-resolution.
The proposed Dense-Residual-Connected Transformer (DRCT) mitigates the loss of spatial information in deeper network layers, addressing the information bottleneck that limits existing super-resolution models.
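The dense-residual idea behind DRCT can be illustrated with a toy block: each layer receives the concatenation of all earlier feature maps (dense connectivity), and the block output adds a skip from its input (residual connectivity), so shallow spatial features remain available to deep layers. This is a generic sketch of the pattern, not DRCT's actual architecture, and all names and shapes below are assumptions.

```python
import numpy as np

def dense_residual_block(x, weights):
    """Toy dense-residual block (illustrative, not the DRCT design):
    every layer sees all previous features via concatenation, and a
    residual skip carries the block input to its output."""
    features = [x]
    for w in weights:
        inp = np.concatenate(features, axis=-1)  # dense: reuse all prior features
        features.append(np.maximum(inp @ w, 0))  # linear layer + ReLU
    return x + features[-1]                      # residual skip around the block
```

Note the shape constraint: because inputs are concatenated, each weight matrix's input dimension grows with depth, and the last layer must map back to the input width so the residual addition is valid.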
DeeDSR introduces a novel two-stage framework that improves the diffusion model's awareness of both content and degradation in low-resolution images, enabling it to generate semantically accurate, photorealistic details even under severe degradation.