Efficient Image Processing Transformer with Hierarchical Attentions for Restoring High-Quality Images from Degraded Inputs
The proposed IPT-V2 architecture combines focal context self-attention, global grid self-attention, and a re-parameterized locally-enhanced feed-forward network to model accurate local and global token interactions, restoring high-quality images from degraded inputs.
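The PyTorch sketch below is only a rough illustration of the local/global attention split described above, not the authors' IPT-V2 implementation: local attention runs inside non-overlapping windows, global attention runs over a sparse grid strided across the whole image, and the feed-forward network adds a depth-wise convolution for local enhancement. All module names (`LocalGlobalBlock`, `ConvFFN`), window/grid sizes, and the use of standard multi-head attention in place of the paper's focal context and re-parameterized components are assumptions made for this sketch.

```python
# Minimal sketch of a local-window + global-grid attention block (assumed design,
# not the IPT-V2 reference code). Requires H and W divisible by window/grid sizes.
import torch
import torch.nn as nn


def window_partition(x, ws):
    # (B, H, W, C) -> (B*nWindows, ws*ws, C): attention mixes tokens within each local window
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)


def window_reverse(x, ws, H, W):
    B = x.shape[0] // ((H // ws) * (W // ws))
    x = x.view(B, H // ws, W // ws, ws, ws, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)


def grid_partition(x, gs):
    # (B, H, W, C) -> (B*nCells, gs*gs, C): each group gathers tokens strided across
    # the whole image, so attention captures long-range (global) interactions
    B, H, W, C = x.shape
    x = x.view(B, gs, H // gs, gs, W // gs, C)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(-1, gs * gs, C)


def grid_reverse(x, gs, H, W):
    B = x.shape[0] // ((H // gs) * (W // gs))
    x = x.view(B, H // gs, W // gs, gs, gs, -1)
    return x.permute(0, 3, 1, 4, 2, 5).reshape(B, H, W, -1)


class ConvFFN(nn.Module):
    """Feed-forward network with a depth-wise conv for local enhancement (assumed variant)."""

    def __init__(self, dim, expansion=2):
        super().__init__()
        hidden = dim * expansion
        self.fc1 = nn.Linear(dim, hidden)
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):                       # x: (B, H, W, C)
        x = self.fc1(x)
        x = x.permute(0, 3, 1, 2)               # to (B, C, H, W) for the conv branch
        x = self.act(self.dwconv(x))
        x = x.permute(0, 2, 3, 1)
        return self.fc2(x)


class LocalGlobalBlock(nn.Module):
    """One block: local window attention, then global grid attention, then conv-FFN."""

    def __init__(self, dim, heads=4, window=8, grid=8):
        super().__init__()
        self.window, self.grid = window, grid
        self.norm1 = nn.LayerNorm(dim)
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.ffn = ConvFFN(dim)

    def forward(self, x):                       # x: (B, H, W, C)
        B, H, W, C = x.shape
        # local branch: attention inside non-overlapping windows
        t = window_partition(self.norm1(x), self.window)
        t, _ = self.local_attn(t, t, t)
        x = x + window_reverse(t, self.window, H, W)
        # global branch: attention across a grid strided over the full image
        t = grid_partition(self.norm2(x), self.grid)
        t, _ = self.global_attn(t, t, t)
        x = x + grid_reverse(t, self.grid, H, W)
        # locally-enhanced feed-forward
        return x + self.ffn(self.norm3(x))


if __name__ == "__main__":
    block = LocalGlobalBlock(dim=32)
    out = block(torch.randn(1, 64, 64, 32))
    print(out.shape)  # torch.Size([1, 64, 64, 32])
```

In this sketch the window branch supplies fine local detail while the grid branch provides image-wide context at the same token cost, which is the general idea behind pairing local and global self-attention in restoration backbones.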