The authors propose the Contrastive EEG-Text Masked Autoencoder (CET-MAE) to enhance EEG-based language decoding by combining self-supervised learning both within and across the EEG and text modalities. The accompanying E2T-PTR framework then leverages pre-trained language modules together with CET-MAE representations to decode text from EEG sequences.
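To make the two training signals concrete, the sketch below pairs a masked-reconstruction loss on EEG tokens with an InfoNCE contrastive loss against pooled text embeddings. This is a minimal illustration under stated assumptions, not the authors' released implementation: the encoder depth, feature dimension, mask ratio, and temperature are all placeholder choices, and `text_emb` is assumed to come from a separate (e.g., frozen) text encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CETMAESketch(nn.Module):
    """Minimal sketch of a contrastive EEG-text masked autoencoder.

    All module names, sizes, and hyperparameters are illustrative
    assumptions, not the CET-MAE paper's exact configuration.
    """
    def __init__(self, eeg_dim=840, d_model=768, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.eeg_proj = nn.Linear(eeg_dim, d_model)      # embed EEG features
        self.eeg_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=4)
        self.eeg_decoder = nn.Linear(d_model, eeg_dim)   # reconstruct masked EEG
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))

    def forward(self, eeg, text_emb):
        # eeg: (B, T, eeg_dim); text_emb: (B, d_model) pooled text features
        x = self.eeg_proj(eeg)
        B, T, D = x.shape
        # Randomly mask a fraction of EEG tokens with a learned mask token.
        mask = torch.rand(B, T, device=x.device) < self.mask_ratio
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, T, D), x)
        h = self.eeg_encoder(x)
        # Intra-modal objective: reconstruct the masked EEG tokens.
        recon = self.eeg_decoder(h)
        rec_loss = F.mse_loss(recon[mask], eeg[mask])
        # Cross-modal objective: InfoNCE between pooled EEG and text embeddings.
        eeg_pooled = F.normalize(h.mean(dim=1), dim=-1)
        text_pooled = F.normalize(text_emb, dim=-1)
        logits = eeg_pooled @ text_pooled.t() / 0.07     # assumed temperature
        labels = torch.arange(B, device=logits.device)   # matched pairs on diagonal
        con_loss = F.cross_entropy(logits, labels)
        return rec_loss + con_loss
```

In this reading, the reconstruction term forces the EEG encoder to model within-modality structure, while the contrastive term aligns the EEG representation space with text, which is what a downstream decoder such as E2T-PTR would exploit.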
A novel method is proposed to improve EEG-to-text decoding: the Contrastive EEG-Text Masked Autoencoder improves the quality of text decoded from EEG and, with it, the practicality of text-based brain-computer interface applications.
SEE, a novel method, integrates a Cross-Modal Codebook and a Semantic Matching Module into a pre-trained BART language model to make accurate EEG-to-Text decoding more feasible (see the sketch below).
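The sketch below shows one plausible reading of those two components: a vector-quantized codebook shared across modalities (with a straight-through gradient) and a lightweight matching head that scores EEG-text alignment, wrapped around Hugging Face's `BartForConditionalGeneration`. The codebook size, feature dimensions, and the matching head's form are hypothetical assumptions; the SEE paper's exact design may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BartForConditionalGeneration

class SEESketch(nn.Module):
    """Illustrative sketch of SEE-style components around pre-trained BART.

    `codebook_size`, `eeg_dim`, and the matching head are assumptions,
    not the paper's released architecture.
    """
    def __init__(self, eeg_dim=840, codebook_size=512,
                 bart_name="facebook/bart-base"):
        super().__init__()
        self.bart = BartForConditionalGeneration.from_pretrained(bart_name)
        d_model = self.bart.config.d_model
        self.eeg_proj = nn.Linear(eeg_dim, d_model)
        # Cross-modal codebook: discrete embeddings that continuous EEG
        # features are snapped to, shared with the text side.
        self.codebook = nn.Embedding(codebook_size, d_model)
        # Semantic matching head: scores whether an EEG-text pair aligns.
        self.match_head = nn.Linear(2 * d_model, 1)

    def quantize(self, feats):
        # Nearest-neighbour lookup into the codebook, straight-through
        # estimator for gradients (commitment losses omitted for brevity).
        cb = self.codebook.weight.unsqueeze(0).expand(feats.size(0), -1, -1)
        codes = torch.cdist(feats, cb).argmin(dim=-1)     # (B, T)
        quant = self.codebook(codes)                      # (B, T, d_model)
        return feats + (quant - feats).detach()

    def forward(self, eeg, labels, text_emb):
        # eeg: (B, T, eeg_dim); labels: (B, L) token ids; text_emb: (B, d_model)
        inputs = self.quantize(self.eeg_proj(eeg))
        out = self.bart(inputs_embeds=inputs, labels=labels)  # seq2seq loss
        # Semantic matching on positive pairs; in practice, shuffled
        # (negative) pairs would be scored with target 0 as well.
        pooled = inputs.mean(dim=1)
        score = self.match_head(torch.cat([pooled, text_emb], dim=-1))
        match_loss = F.binary_cross_entropy_with_logits(
            score, torch.ones_like(score))
        return out.loss + match_loss
```

The design intuition is that quantizing EEG features into a shared discrete vocabulary narrows the modality gap before BART's decoder ever sees them, while the matching loss keeps the quantized codes semantically tied to the target text.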