Core Concepts
The SFormer model uses selective attention mechanisms to dynamically adapt its receptive fields and prioritize the most relevant spatial-spectral information. This yields higher accuracy in hyperspectral image classification than traditional CNNs and existing transformer-based methods.
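The core idea — fusing branches with different receptive fields through a data-dependent gate — can be sketched in a few lines. This is a minimal, untrained illustration in the spirit of selective-kernel attention, not SFormer's actual implementation; the function names, the use of mean filters as stand-ins for convolutions, and the pooled-statistics gate are all assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def avg_pool(x, k):
    # naive same-size k x k mean filter over an (H, W, C) cube,
    # standing in for a convolution with receptive field k
    H, W, C = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = xp[i:i + k, j:j + k].mean(axis=(0, 1))
    return out

def selective_spatial_attention(x, kernel_sizes=(3, 7)):
    """Illustrative gate: weight branches of different receptive
    fields per channel using pooled statistics (not trained)."""
    branches = [avg_pool(x, k) for k in kernel_sizes]   # one (H, W, C) cube per scale
    stacked = np.stack(branches, axis=0)                # (n_branches, H, W, C)
    desc = stacked.mean(axis=(1, 2))                    # (n_branches, C) channel descriptors
    gate = softmax(desc, axis=0)                        # weights sum to 1 across branches
    return (stacked * gate[:, None, None, :]).sum(axis=0)

# toy hyperspectral patch: 8 x 8 pixels, 4 spectral bands
x = np.random.rand(8, 8, 4)
y = selective_spatial_attention(x)
```

In a trained model the gate would come from learned projections rather than raw pooled means, letting the network choose, per channel, how much context each pixel's representation draws from each scale.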
Key Statistics
All figures below are overall accuracy (OA) on the Pavia University dataset:
- Full SFormer: 96.59%
- Baseline with a single convolutional layer: 87.67%
- Baseline deepened to nine convolutional layers: 86.33% (adding depth alone did not help)
- KSTB module alone: 93.44%
- TSTB module alone: 95.81%
- KSTB with only the spatial selection mechanism: 96.09%
- KSTB with only the spectral selection mechanism: 96.01%