Key Idea
EEG-to-Text decoding is enhanced through the Contrastive EEG-Text Masked Autoencoder (CET-MAE), advancing brain-computer interface applications.
Abstract
EEG-based language decoding holds promise for brain-computer interfaces.
Current challenges include the absence of a hybrid self-supervised strategy spanning both EEG and text, and the under-utilization of large language models (LLMs).
The Contrastive EEG-Text Masked Autoencoder (CET-MAE) addresses this by combining contrastive learning with masked autoencoding over paired EEG and text.
The E2T-PTR framework then leverages the pre-trained CET-MAE modules together with large language models for EEG-to-Text decoding.
Extensive experiments on the ZuCo dataset show the superiority of E2T-PTR in EEG-to-Text decoding.
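The contrastive component described above aligns paired EEG and text representations in a shared embedding space. As a minimal sketch (not the paper's implementation), a symmetric InfoNCE-style objective over batch-aligned embedding matrices could look like this; the function name, batch layout, and temperature value are illustrative assumptions:

```python
import numpy as np

def contrastive_eeg_text_loss(eeg_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of eeg_emb and row i of text_emb are assumed to come from the
    same trial (a positive pair); all other in-batch pairs serve as
    negatives. This is a generic contrastive sketch, not CET-MAE itself.
    """
    # L2-normalize so the dot product equals cosine similarity.
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # (B, B) similarity matrix; the diagonal holds the matched pairs.
    logits = eeg @ txt.T / temperature
    idx = np.arange(logits.shape[0])

    def cross_entropy(lg):
        # Numerically stable log-softmax over each row.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        # Negative log-probability of the diagonal (correct) entries.
        return float(-logp[idx, idx].mean())

    # Average the EEG-to-text and text-to-EEG directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

When the two modalities' embeddings for matched trials are close, the loss is near zero; mismatched pairings drive it up, which is what pulls the EEG encoder toward the text representation space during pre-training.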
Statistics
Comprehensive experiments were conducted on the ZuCo dataset.
E2T-PTR outperforms the previous state-of-the-art in ROUGE-1 F1 and BLEU-4 scores.
Quotes
"EEG-to-Text can convey more intended commands from the human brain to computers."
"Our proposed framework sets new SOTA standards in EEG-to-Text decoding."