The study presents two new series of deep learning models for EEG artifact removal:
IC-U-Net series: These models build upon the U-Net architecture, with enhancements such as dense skip connections (IC-U-Net++) and self-attention mechanisms (IC-U-Net-Attn) to better capture intra- and inter-channel relationships.
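The defining U-Net feature referenced here is the skip connection: encoder features are carried across to the matching decoder stage so fine temporal detail survives the bottleneck. The sketch below is a toy, non-learned stand-in (pooling for the encoder, repetition for the decoder, simple averaging as the fusion); the actual IC-U-Net stages are trained convolutions, and the function name is hypothetical.

```python
import numpy as np

def down(x):
    # halve temporal resolution: stand-in for a strided-conv encoder stage
    return x.reshape(-1, 2).mean(axis=1)

def up(x):
    # double temporal resolution: stand-in for a transposed-conv decoder stage
    return np.repeat(x, 2)

def toy_unet_denoise(x):
    """Two-level encoder/decoder with U-Net-style skip connections.

    Illustrative only: averaging with the skip path stands in for the
    learned fusion that a real U-Net performs. Input length must be
    divisible by 4.
    """
    e1 = down(x)             # encoder level 1 (length N/2)
    e2 = down(e1)            # encoder level 2 (length N/4)
    d1 = (up(e2) + e1) / 2   # decoder fuses upsampled features with skip e1
    return (up(d1) + x) / 2  # final fusion with the input-level skip
```

Because the skip paths reinject full-resolution input at each decoder stage, the output keeps the input's length and preserves signals the bottleneck alone would blur.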
ART series: This transformer-based model leverages attention mechanisms to effectively model the complex temporal dynamics of EEG signals. Three variants are explored (ARTclean, ARTnull, and ARTnoise), which differ in the target sequences used during training.
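The attention mechanism underlying such transformer models is standard scaled dot-product self-attention: every timestep is re-expressed as a similarity-weighted mixture of all other timesteps, which is what lets the model relate artifact segments to distant clean context. A minimal single-head sketch (real ART layers would add learned query/key/value projections, multiple heads, and positional information; here the raw features serve as all three roles to keep the mechanism visible):

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention over time steps.

    X has shape (T, d): T timesteps of d-dimensional EEG features.
    Returns a (T, d) array where each timestep is a softmax-weighted
    mixture of all timesteps.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # (T, T) timestep similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability shift
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ X                             # context-mixed output
```

Since each softmax row sums to one, the output rows are convex combinations of input timesteps; a sequence of identical timesteps passes through unchanged.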
The models were trained on a large dataset of synthetic noisy-clean EEG pairs, generated using independent component analysis (ICA) and ICLabel. This approach enables the models to learn an effective mapping from artifact-contaminated to clean EEG signals.
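The training-pair idea can be sketched as follows. This is a heavily simplified assumption-laden toy, not the paper's pipeline: instead of ICA/ICLabel-separated components from real recordings, it mixes sinusoidal "brain" sources through a random matrix and injects a transient blink-like component, so that the same underlying clean signal exists in both members of the pair. The function name, source frequencies, and sampling rate are all illustrative choices.

```python
import numpy as np

def make_training_pair(n_channels=4, n_samples=512, fs=256.0, seed=0):
    """Build one synthetic (noisy, clean) multichannel EEG pair.

    Toy stand-in for the ICA/ICLabel-based generation described in the
    paper: rhythmic sources play the role of brain components, and a
    Gaussian transient plays the role of an eye-blink artifact component.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) / fs
    # rhythm-like "brain" sources at theta/alpha/beta-ish frequencies (assumed)
    brain = np.stack([np.sin(2 * np.pi * f * t) for f in (6.0, 10.0, 15.0, 20.0)])
    A = rng.normal(size=(n_channels, brain.shape[0]))  # random mixing matrix
    clean = A @ brain                                  # artifact-free target
    # high-amplitude transient mimicking an eye blink mid-recording
    blink = 5.0 * np.exp(-((t - t.mean()) ** 2) / 0.01)
    noisy = clean + np.outer(rng.normal(size=n_channels), blink)
    return noisy, clean
```

The noisy array serves as model input and the clean array as the regression target, which is exactly the supervised setup the synthetic pairs enable.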
Comprehensive evaluations were conducted across a wide range of open EEG datasets, including motor imagery, steady-state visually evoked potentials (SSVEP), and simulated driving tasks. The assessments involved traditional metrics like mean squared error (MSE) and signal-to-noise ratio (SNR), as well as advanced techniques such as source localization and EEG component classification.
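The two traditional metrics mentioned are straightforward to state. A minimal sketch, assuming SNR is computed as clean-reference power over residual-error power (one common convention; the paper may use a different reference):

```python
import numpy as np

def mse(est, ref):
    # mean squared error between reconstructed and reference clean EEG
    return float(np.mean((est - ref) ** 2))

def snr_db(est, ref):
    # signal-to-noise ratio in dB: reference power over residual-error power
    return float(10.0 * np.log10(np.sum(ref ** 2) / np.sum((est - ref) ** 2)))
```

Lower MSE and higher SNR both indicate a reconstruction closer to the clean reference; SNR's log scale makes large quality gaps easier to compare across datasets.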
The results demonstrate that the ART model consistently outperforms other deep learning-based artifact removal methods, setting a new benchmark in EEG signal processing. ART's superior performance is evident in its ability to effectively suppress diverse artifacts, including eye movements, muscle activity, and channel noise, while preserving the integrity of underlying brain signals. This advancement promises to catalyze further innovations in the field, facilitating the study of brain dynamics in naturalistic environments.
Key insights extracted from arxiv.org, by Chun-Hsiang ..., 09-12-2024
https://arxiv.org/pdf/2409.07326.pdf