Core Concepts
Unified data fusion framework for EEG and MEG data using coupled generator decomposition.
Abstract
The paper introduces coupled generator decomposition, a framework for fusing electroencephalography (EEG) and magnetoencephalography (MEG) data. The framework identifies features shared across modalities and subjects in response to face-perception stimuli while accommodating modality- and subject-specific variability. Comparing models of varying complexity, the study reveals altered fusiform face area activation for scrambled faces. The method is implemented in PyTorch and executes considerably faster than conventional inference approaches such as quadratic programming.
I. Introduction:
- Data fusion modeling identifies common features across diverse sources.
- Coupled generator decomposition generalizes sparse principal component analysis (SPCA).
II. Methods:
- Definition of a linear matrix decomposition minimizing sum-of-squared-errors.
- Sparse principal component analysis (SPCA) with ℓ1 and ℓ2 regularization terms.
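The decomposition and its regularization can be sketched as follows (a common SPCA-style formulation; the symbols X for the data matrix, B for the sparse generator matrix, and A for the mixing matrix are assumptions, and the paper's exact constraints may differ):

```latex
\min_{\mathbf{B},\, \mathbf{A}}
\; \| \mathbf{X} - \mathbf{X} \mathbf{B} \mathbf{A}^{\top} \|_F^2
\; + \; \lambda_1 \| \mathbf{B} \|_1
\; + \; \lambda_2 \| \mathbf{B} \|_F^2
```

The first term is the sum-of-squared-errors of the linear reconstruction; the ℓ1 term promotes sparsity in the generator matrix, and the ℓ2 term stabilizes the solution.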
III. Results and Discussion:
- Evaluation of stochastic optimization in PyTorch against traditional methods.
- Optimal regularization coefficients for different sparse PCA models.
- Comparison of test loss across different model orders.
IV. Conclusions:
- Unified data fusion framework presented with promising results for understanding shared neural features.
Stats
Our findings reveal altered ∼170 ms fusiform face area activation for scrambled faces, particularly evident in the multimodal, multisubject model.
Model parameters were inferred using stochastic optimization in PyTorch, demonstrating comparable performance to conventional quadratic programming inference for SPCA but with considerably faster execution.
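The stochastic-optimization approach mentioned above can be sketched in a minimal form: fitting an SPCA-style decomposition X ≈ X B Aᵀ by gradient descent on the regularized sum-of-squared-errors loss. This is an illustrative single-dataset sketch, not the authors' code; all sizes, names, and hyperparameters (lambda_1, lambda_2, learning rate) are assumptions, and the coupled multi-subject/multimodal extension (shared B, per-dataset A) is omitted for brevity.

```python
# Hypothetical sketch of SPCA-style inference by stochastic optimization in
# PyTorch (not the authors' implementation; shapes and names are assumptions).
import torch

torch.manual_seed(0)
N, P, K = 64, 32, 4                 # samples, features, components (arbitrary)
X = torch.randn(N, P)               # synthetic stand-in for EEG/MEG data

B = torch.randn(P, K, requires_grad=True)   # sparse generator matrix
A = torch.randn(P, K, requires_grad=True)   # mixing matrix

lambda_1, lambda_2 = 1e-2, 1e-2     # l1 and l2 regularization strengths (assumed)
opt = torch.optim.Adam([B, A], lr=1e-2)

def loss_fn():
    recon = X @ B @ A.T                              # low-rank reconstruction
    sse = ((X - recon) ** 2).sum()                   # sum-of-squared-errors
    return sse + lambda_1 * B.abs().sum() + lambda_2 * (B ** 2).sum()

initial = loss_fn().item()
for _ in range(200):                # a few hundred Adam steps
    opt.zero_grad()
    loss_fn().backward()
    opt.step()
final = loss_fn().item()            # loss should drop well below `initial`
```

Because the whole objective is expressed as differentiable tensor operations, the same loop runs unchanged on GPU, which is one plausible source of the speedup over quadratic-programming inference.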