Core Concepts
Exploiting explanations from XAI methods can help locate and transform relevant input features to build EEG-based BCI systems that generalize better across different data distributions, such as across recording sessions.
Abstract
This paper investigates the use of various XAI methods to explain the decisions of machine learning models trained on EEG data for emotion recognition tasks. The key findings are:
Among the tested XAI methods, LRP, Integrated Gradients, and DeepLIFT produced more reliable explanations than Saliency and Guided Backpropagation, especially in the cross-session setting (see the attribution sketch after this list).
Counterintuitively, the XAI explanations appear more reliable for cross-session data than for data from the same session used for training. This is likely because the trained classifier is more robust to same-session data, so removing the components flagged as relevant degrades its performance less, which makes the explanations appear less faithful on that data.
While the XAI methods can identify relevant features, channels, and bands for individual input samples, they struggle to identify a common set of relevant components that generalizes across the dataset: masking the components ranked by the relevance averaged over the training data does not produce the same steep performance drop as masking those ranked by each sample's own relevance (see the ablation sketch after this list).
Further analysis shows that the most relevant components identified by the XAI methods are indeed discriminative on their own. However, the average relevance from training data is not as effective, suggesting the need to better leverage XAI explanations to build more generalizable BCI systems.
The paper concludes that exploiting XAI explanations is a promising direction to mitigate the dataset shift problem in EEG-based BCI, but more work is needed to effectively transfer the relevant components across different data distributions.
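As a concrete illustration of how such attributions can be obtained, below is a minimal sketch (not the paper's code) of applying the five XAI methods to a trained EEG emotion classifier with the Captum library. The toy MLP architecture, the 62-channel by 5-band differential-entropy input shape, and all names are assumptions for illustration only.

```python
# Hedged sketch: computing channel/band attributions for an EEG emotion classifier
# with Captum. The model and input shapes are illustrative assumptions, not the
# architecture used in the paper.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, Saliency, DeepLift, GuidedBackprop, LRP

N_CHANNELS, N_BANDS, N_CLASSES = 62, 5, 3  # negative / neutral / positive

class EmotionMLP(nn.Module):
    """Toy MLP over flattened channel-band features; stands in for the trained model."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(N_CHANNELS * N_BANDS, 128)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(128, N_CLASSES)

    def forward(self, x):
        x = x.reshape(x.size(0), -1)  # flatten (channels, bands) into a feature vector
        return self.fc2(self.relu(self.fc1(x)))

model = EmotionMLP().eval()

# One batch of hypothetical differential-entropy features: (batch, channels, bands)
x = torch.randn(8, N_CHANNELS, N_BANDS, requires_grad=True)
with torch.no_grad():
    target = model(x).argmax(dim=1)  # explain the predicted class of each sample

explainers = {
    "saliency": Saliency(model),
    "guided_backprop": GuidedBackprop(model),
    "integrated_gradients": IntegratedGradients(model),
    "deeplift": DeepLift(model),
    "lrp": LRP(model),
}

attributions = {}
for name, explainer in explainers.items():
    # All Captum attribution methods share the attribute(inputs, target=...) interface.
    attributions[name] = explainer.attribute(x, target=target).detach()
    print(name, attributions[name].shape)  # (8, 62, 5): one relevance score per channel/band
```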
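The ablation comparison referred to above can be sketched as follows, again as an illustration rather than the paper's exact protocol: the top-k components are zeroed either according to each test sample's own relevance or according to the relevance averaged over the training data, and the resulting accuracy drops are compared. Function and variable names, and masking by zeroing, are hypothetical choices.

```python
# Hedged sketch: compare the accuracy drop caused by masking the most relevant
# components, using per-sample relevance vs. relevance averaged over training data.
import torch

def mask_top_k(x, relevance, k):
    """Zero the k components with the highest relevance; x, relevance: (batch, channels, bands)."""
    flat_rel = relevance.flatten(start_dim=1)      # (batch, channels*bands)
    top_idx = flat_rel.topk(k, dim=1).indices      # indices of the k most relevant components
    mask = torch.ones_like(flat_rel)
    mask.scatter_(1, top_idx, 0.0)                 # zero out the selected components
    return (x.flatten(start_dim=1) * mask).view_as(x)

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def ablation_comparison(model, x_test, y_test, rel_test, rel_train_mean, k=30):
    """rel_test: per-sample relevance maps; rel_train_mean: (channels, bands) map averaged over training data."""
    base = accuracy(model, x_test, y_test)
    # Per-sample relevance: mask each test sample according to its own explanation.
    acc_sample = accuracy(model, mask_top_k(x_test, rel_test, k), y_test)
    # Train-averaged relevance: broadcast one shared relevance map to every test sample.
    shared = rel_train_mean.expand_as(x_test)
    acc_avg = accuracy(model, mask_top_k(x_test, shared, k), y_test)
    print(f"baseline={base:.3f}  per-sample mask={acc_sample:.3f}  train-average mask={acc_avg:.3f}")
    # A much larger drop for the per-sample mask mirrors the paper's observation that
    # averaged relevance does not transfer as well as sample-specific relevance.
```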
Stats
The SEED dataset consists of EEG signals recorded from 15 subjects while they watched 15 film clips selected to induce negative, neutral, and positive emotions.
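For readers who want to run this kind of analysis themselves, here is a hedged sketch of loading SEED-style precomputed differential-entropy (DE) features with scipy. The directory layout and .mat variable names (e.g. "de_LDS1", "label") are assumptions that may differ between SEED releases, and the helper below is hypothetical.

```python
# Hedged sketch: loading one SEED recording session of precomputed DE features.
# File paths and .mat variable names are assumptions about the ExtractedFeatures release.
import numpy as np
from scipy.io import loadmat

def load_seed_session(mat_path, label_path, n_trials=15):
    """Return per-window DE features and emotion labels for one recording session."""
    feats = loadmat(mat_path)
    labels = loadmat(label_path)["label"].ravel()  # assumed -1 / 0 / 1 = negative / neutral / positive
    xs, ys = [], []
    for i in range(1, n_trials + 1):
        de = feats[f"de_LDS{i}"]                   # assumed shape: (channels=62, time_windows, bands=5)
        xs.append(np.transpose(de, (1, 0, 2)))     # one sample per time window: (windows, 62, 5)
        ys.append(np.full(de.shape[1], labels[i - 1]))
    X = np.concatenate(xs)
    y = np.concatenate(ys)
    return X, y  # X: (n_samples, 62, 5), y: (n_samples,)

# Usage (paths are placeholders):
# X, y = load_seed_session("ExtractedFeatures/1_20131027.mat", "ExtractedFeatures/label.mat")
```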