Core Concepts
Transformers are effective in image restoration, and Continuous Scaling Attention (CSAttn) explores how far attention alone can go by removing the feed-forward network (FFN) entirely, outperforming existing approaches on several restoration tasks.
Abstract
Transformers built from multi-head self-attention and feed-forward networks are effective in image restoration. CSAttn instead applies attention continuously, without any FFN, and improves performance. The study analyzes how individual design components affect restoration quality on deraining, desnowing, low-light enhancement, and dehazing.
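To make the FFN-free idea concrete, below is a minimal PyTorch sketch of a transformer block that keeps only the attention sub-layer and drops the usual FFN sub-layer. This summary does not specify CSAttn's actual architecture, so the class name, layer sizes, head count, and stacking depth here are illustrative assumptions, with repeated attention blocks standing in for the "continuous" application of attention.

```python
# A minimal sketch of an attention-only transformer block (no FFN).
# NOT the actual CSAttn design: dimensions, normalization, and depth
# are assumptions chosen only to illustrate the general idea.
import torch
import torch.nn as nn

class AttentionOnlyBlock(nn.Module):
    """Residual multi-head self-attention with no FFN sub-layer."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pre-norm attention with a residual connection; the usual
        # norm -> FFN -> residual sub-layer is deliberately omitted.
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out

# Stacking several attention-only blocks approximates repeatedly
# applied attention; the depth of 4 is illustrative only.
model = nn.Sequential(*[AttentionOnlyBlock(dim=64) for _ in range(4)])
tokens = torch.randn(2, 256, 64)  # (batch, flattened pixels, channels)
print(model(tokens).shape)        # torch.Size([2, 256, 64])
```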
Stats
Transformers with self-attention have proven effective across image restoration tasks.
CSAttn removes the feed-forward network and relies on continuously applied attention alone.
CSAttn outperforms existing approaches on deraining, desnowing, low-light enhancement, and dehazing.