Mansformer: An Efficient Transformer with Mixed Attention for Image Deblurring and Beyond
The proposed Mansformer combines multiple forms of self-attention with gating and multi-layer perceptrons (MLPs) to efficiently explore and exploit a wider range of self-attention designs for image deblurring and other restoration tasks.
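To make the idea of mixing attention branches with a gate and an MLP concrete, here is a minimal NumPy sketch. It is not the paper's implementation: the choice of two branches (token-wise "spatial" attention and transposed "channel" attention), the sigmoid gate, and the single-layer ReLU MLP are all illustrative assumptions; function names such as `mixed_attention_block` are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(x):
    # tokens attend over spatial positions: (N, C) x (C, N) -> (N, N) scores
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores, axis=-1) @ x

def channel_attention(x):
    # transposed attention over channels: (C, N) x (N, C) -> (C, C) scores
    scores = x.T @ x / np.sqrt(x.shape[0])
    return (softmax(scores, axis=-1) @ x.T).T

def mixed_attention_block(x, gate_w):
    # gated fusion of two attention branches, followed by a small MLP
    g = 1.0 / (1.0 + np.exp(-(x @ gate_w)))        # sigmoid gate per token/channel
    fused = g * spatial_attention(x) + (1.0 - g) * channel_attention(x)
    h = np.maximum(fused, 0.0)                      # toy MLP: ReLU nonlinearity
    return x + h                                    # residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))                    # 8 tokens, 16 channels
gate_w = rng.standard_normal((16, 16)) * 0.1
out = mixed_attention_block(x, gate_w)
print(out.shape)                                    # (8, 16)
```

The gate lets the network decide, per position, how much of each attention variant to use, which is one plausible way to "mix" self-attentions at low cost since both branches share the same input projection here.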