Efficient Knowledge Distillation for Image Super-Resolution with Multi-granularity Mixture of Priors
This paper presents MiPKD, a novel knowledge distillation framework that transfers the teacher model's prior knowledge to the student model at both the feature and block levels, narrowing the capacity gap between the two networks and enabling efficient image super-resolution.
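To make the feature-level transfer concrete, the sketch below shows a generic feature-matching distillation loss: the narrower student features are mapped into the teacher's channel space by a learnable projection and penalized by mean-squared error. This is an illustrative baseline only, not the MiPKD formulation; the function name, shapes, and projection are assumptions for demonstration.

```python
import numpy as np

def feature_kd_loss(teacher_feat, student_feat, proj):
    """MSE between teacher features and projected student features.

    teacher_feat: (N, C_t) array of teacher activations
    student_feat: (N, C_s) array of student activations, C_s < C_t
    proj:         (C_s, C_t) learnable projection (hypothetical; random init here)
    """
    aligned = student_feat @ proj          # lift student channels to teacher width
    return float(np.mean((teacher_feat - aligned) ** 2))

rng = np.random.default_rng(0)
t = rng.standard_normal((4, 64))           # teacher: 4 positions, 64 channels
s = rng.standard_normal((4, 32))           # student: same positions, 32 channels
W = rng.standard_normal((32, 64)) / np.sqrt(32)
loss = feature_kd_loss(t, s, W)
```

In practice such a loss is added to the task loss (e.g. L1 reconstruction for super-resolution) and the projection is trained jointly with the student.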