The proposed HSIMamba model combines bidirectional feature extraction with specialized spatial processing to achieve superior classification performance on hyperspectral images while maintaining high computational efficiency.
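A minimal PyTorch sketch of the bidirectional idea, with illustrative shapes: each pixel's spectrum is scanned band-by-band in both directions and the two passes are fused. The toy diagonal state-space scan below stands in for the paper's Mamba blocks, which additionally make their parameters input-dependent and add gating.

```python
import torch
import torch.nn as nn

class SimpleSSMScan(nn.Module):
    """Toy diagonal linear state-space scan: h_t = a_bar*h_{t-1} + b*x_t, y_t = c.h_t.
    A stand-in for a Mamba block; real Mamba makes a, b, c input-dependent."""
    def __init__(self, dim, state):
        super().__init__()
        self.log_a = nn.Parameter(torch.randn(dim, state))   # decay rates (log-space)
        self.b = nn.Parameter(torch.randn(dim, state) * 0.1)
        self.c = nn.Parameter(torch.randn(dim, state) * 0.1)

    def forward(self, x):                        # x: (batch, seq, dim)
        a_bar = torch.exp(-torch.exp(self.log_a))  # stable decay in (0, 1)
        h = torch.zeros(x.size(0), x.size(2), self.b.size(1), device=x.device)
        ys = []
        for t in range(x.size(1)):
            h = a_bar * h + self.b * x[:, t].unsqueeze(-1)
            ys.append((h * self.c).sum(-1))
        return torch.stack(ys, dim=1)            # (batch, seq, dim)

class BidirectionalSpectralBlock(nn.Module):
    """Scan the spectral sequence forward and backward, then fuse."""
    def __init__(self, dim, state=16):
        super().__init__()
        self.fwd = SimpleSSMScan(dim, state)
        self.bwd = SimpleSSMScan(dim, state)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, x):                        # x: (batch, bands, dim)
        y_f = self.fwd(x)
        y_b = self.bwd(x.flip(1)).flip(1)        # reverse scan, re-aligned to band order
        return self.proj(torch.cat([y_f, y_b], dim=-1))

# Usage: 4 pixels, 200 spectral bands embedded to 32 channels.
x = torch.randn(4, 200, 32)
print(BidirectionalSpectralBlock(32)(x).shape)   # torch.Size([4, 200, 32])
```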
The proposed knowledge-embedded contrastive learning (KnowCL) framework unifies supervised, unsupervised, and semi-supervised hyperspectral image classification in a single scalable end-to-end pipeline, leveraging both labeled and unlabeled data to outperform existing methods.
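KnowCL's knowledge-embedding step is beyond a snippet, but the contrastive core such frameworks build on can be sketched. Below is a standard supervised contrastive (SupCon-style) loss over normalized pixel embeddings, where same-class pixels act as positives; this is an illustrative stand-in, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """SupCon-style loss: pull same-class embeddings together, push others apart.
    z: (n, d) pixel embeddings; labels: (n,) class ids. Illustrative stand-in
    for KnowCL's knowledge-embedded objective."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                        # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))      # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)  # log-softmax over others
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    n_pos = pos.sum(1)
    # average log-probability of positives, for anchors that have positives
    loss = -(log_prob.masked_fill(~pos, 0.0)).sum(1) / n_pos.clamp(min=1)
    return loss[n_pos > 0].mean()

# Usage: 8 pixel embeddings, 3 classes.
z = torch.randn(8, 64, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(supervised_contrastive_loss(z, labels))
```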
SpectralMamba integrates state space models into a deep learning framework that processes hyperspectral data efficiently for accurate image classification.
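For orientation, the backbone of such layers is the linear state space model and its standard zero-order-hold discretization; Mamba-style blocks then make the parameters input-dependent (selective):

```latex
% Continuous state space model and its zero-order-hold discretization (step size \Delta):
h'(t) = A\,h(t) + B\,x(t), \qquad y(t) = C\,h(t)
\\[4pt]
\bar{A} = \exp(\Delta A), \qquad
\bar{B} = (\Delta A)^{-1}\left(\exp(\Delta A) - I\right)\Delta B
\\[4pt]
h_k = \bar{A}\,h_{k-1} + \bar{B}\,x_k, \qquad y_k = C\,h_k
```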
The proposed 3D-ConvSST model utilizes a 3D-Convolution Guided Residual Module to effectively fuse spectral and spatial information, and employs global average pooling to capture discriminative high-level features, outperforming state-of-the-art traditional, convolutional, and Transformer-based models on three benchmark hyperspectral image datasets.
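A minimal sketch of the two ingredients named above, with illustrative shapes: a residual block of 3D convolutions that mixes spectral and spatial dimensions jointly, followed by global average pooling ahead of the classifier head. Channel counts and class count are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class Conv3DResidualBlock(nn.Module):
    """Residual 3D convolutions fusing spectral and spatial information."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
        )

    def forward(self, x):                    # x: (batch, ch, bands, h, w)
        return torch.relu(x + self.body(x))  # residual spectral-spatial fusion

head = nn.Sequential(
    Conv3DResidualBlock(8),
    nn.AdaptiveAvgPool3d(1),                 # global average pooling
    nn.Flatten(),
    nn.Linear(8, 16),                        # 16 hypothetical land-cover classes
)
x = torch.randn(2, 8, 30, 11, 11)            # 2 patches, 30 bands, 11x11 spatial
print(head(x).shape)                         # torch.Size([2, 16])
```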
The proposed Pyramid Hierarchical Transformer (PyFormer) model effectively captures both local and global context in hyperspectral images by organizing the input data hierarchically and applying dedicated transformer modules at each level, outperforming state-of-the-art approaches.
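A rough sketch of the pyramid idea under assumed shapes: a transformer encoder processes the token grid at each level, the grid is spatially downsampled between levels, and pooled summaries from all levels are concatenated into a multi-scale representation. Names and dimensions here are illustrative, not PyFormer's.

```python
import torch
import torch.nn as nn

class PyramidLevel(nn.Module):
    """One level: a transformer over the current token grid, then 2x downsampling."""
    def __init__(self, dim):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, dim_feedforward=2 * dim, batch_first=True)
        self.down = nn.Conv2d(dim, dim, kernel_size=2, stride=2)

    def forward(self, x):                    # x: (batch, dim, h, w) token grid
        b, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (batch, h*w, dim)
        tokens = self.encoder(tokens)                    # context at this scale
        x = tokens.transpose(1, 2).reshape(b, d, h, w)
        return x.mean(dim=(2, 3)), self.down(x)          # level summary, coarser grid

levels = nn.ModuleList([PyramidLevel(32) for _ in range(3)])
x = torch.randn(2, 32, 16, 16)               # embedded hyperspectral patch tokens
summaries = []
for level in levels:
    s, x = level(x)                          # 16x16 -> 8x8 -> 4x4 grids
    summaries.append(s)
features = torch.cat(summaries, dim=-1)      # (2, 96): multi-scale representation
print(features.shape)
```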
Deep learning techniques, including Convolutional Neural Networks, Recurrent Neural Networks, Autoencoders, and Transformers, have significantly advanced the field of hyperspectral image classification by automatically learning discriminative features and capturing complex spatial-spectral relationships.
The proposed S2Mamba, a spatial-spectral state space model, leverages selective structured state spaces to capture long-range spatial and spectral dependencies for efficient and accurate hyperspectral image classification (a tokenization sketch shared with SS-Mamba follows the next entry).
The proposed spectral-spatial Mamba (SS-Mamba) model exploits Mamba's computational efficiency and long-range feature extraction capability to achieve competitive performance in hyperspectral image classification.
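Both S2Mamba and SS-Mamba rest on treating a hyperspectral patch as sequences along two axes. A compact sketch of this dual routing follows, using nn.GRU as a simple recurrent stand-in for the selective-scan Mamba blocks the papers actually use; the embedding choices are assumptions.

```python
import torch
import torch.nn as nn

class SpectralSpatialRoutes(nn.Module):
    """Scan an HSI patch along the spectral axis and along the flattened spatial
    axis, then merge. GRUs stand in for the papers' selective-scan Mamba blocks."""
    def __init__(self, bands, dim):
        super().__init__()
        self.spec_embed = nn.Linear(1, dim)      # each band value -> token
        self.spat_embed = nn.Linear(bands, dim)  # each pixel's spectrum -> token
        self.spec_route = nn.GRU(dim, dim, batch_first=True)
        self.spat_route = nn.GRU(dim, dim, batch_first=True)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, patch):                    # patch: (batch, bands, h, w)
        b, c, h, w = patch.shape
        center = patch[:, :, h // 2, w // 2]     # (batch, bands) center spectrum
        spec_tokens = self.spec_embed(center.unsqueeze(-1))              # (b, bands, dim)
        spat_tokens = self.spat_embed(patch.flatten(2).transpose(1, 2))  # (b, h*w, dim)
        _, h_spec = self.spec_route(spec_tokens)  # final state of spectral scan
        _, h_spat = self.spat_route(spat_tokens)  # final state of spatial scan
        return self.merge(torch.cat([h_spec[-1], h_spat[-1]], dim=-1))

model = SpectralSpatialRoutes(bands=30, dim=32)
print(model(torch.randn(2, 30, 9, 9)).shape)     # torch.Size([2, 32])
```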
The proposed method fuses a 3D Swin Transformer with a spatial-spectral Transformer through attention, leveraging the complementary strengths of hierarchical attention, window-based processing, and long-range dependency modeling to enhance hyperspectral image classification.
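The fusion step can be sketched independently of the two branches: given feature vectors from a Swin-style branch and a spatial-spectral branch, learn a softmax weighting over the branches. A minimal version with assumed shapes, not the paper's exact fusion module:

```python
import torch
import torch.nn as nn

class AttentionalBranchFusion(nn.Module):
    """Softmax-weighted fusion of two branch features; a minimal stand-in for an
    attentional fusion of Swin and spatial-spectral transformer outputs."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)           # scores each branch's feature

    def forward(self, f_swin, f_spec):           # each: (batch, dim)
        feats = torch.stack([f_swin, f_spec], dim=1)      # (batch, 2, dim)
        weights = torch.softmax(self.score(feats), dim=1) # (batch, 2, 1)
        return (weights * feats).sum(dim=1)               # (batch, dim)

fuse = AttentionalBranchFusion(64)
print(fuse(torch.randn(4, 64), torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```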
A novel CNN-Transformer approach with Gate-Shift-Fuse (GSF) mechanisms is proposed to extract local and global spatial-spectral features for enhanced hyperspectral image classification.
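One common reading of a gate-shift-fuse block, adapted here to shift along the spectral axis (GSF was originally proposed for temporal modeling in video): part of the channels is shifted across neighboring bands, a learned gate decides how much shifted content passes, and the result is fused with the unshifted residual. This is a simplified sketch with assumed shapes, not the paper's exact module.

```python
import torch
import torch.nn as nn

class SpectralGateShiftFuse(nn.Module):
    """Gate-Shift-Fuse along the spectral axis: shift channel slices across
    neighboring bands, gate them, fuse with the unshifted residual."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x):                        # x: (batch, ch, bands, h, w)
        shifted = torch.zeros_like(x)
        c = x.size(1) // 4
        shifted[:, :c, 1:] = x[:, :c, :-1]       # shift 1/4 of channels to next band
        shifted[:, c:2*c, :-1] = x[:, c:2*c, 1:] # shift 1/4 to previous band
        shifted[:, 2*c:] = x[:, 2*c:]            # remaining channels untouched
        g = torch.sigmoid(self.gate(x))          # learned per-position gate
        return g * shifted + (1 - g) * x         # fuse shifted and residual paths

gsf = SpectralGateShiftFuse(16)
print(gsf(torch.randn(2, 16, 30, 9, 9)).shape)   # torch.Size([2, 16, 30, 9, 9])
```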