Core Concept
MetaKD is a novel meta-learning approach that addresses performance degradation in multi-modal learning when key modalities are missing: it dynamically optimizes per-modality importance weights and uses them to perform modality-weighted knowledge distillation.
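A minimal sketch of this idea, assuming a PyTorch-style setup; the names `ModalityWeights` and `weighted_kd_loss` are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

class ModalityWeights(torch.nn.Module):
    """Learnable per-modality importance logits (hypothetical; e.g. one weight
    each for T1, T1ce, T2, FLAIR on BraTS). In a meta-learning setup these
    would be updated on a held-out meta objective rather than the task loss."""

    def __init__(self, num_modalities: int):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(num_modalities))

    def forward(self) -> torch.Tensor:
        # Softmax keeps the importance weights positive and summing to one.
        return F.softmax(self.logits, dim=0)

def weighted_kd_loss(student_logits, teacher_logits, weights, temperature=2.0):
    """Knowledge-distillation loss summed over modalities, each term scaled
    by its meta-learned importance weight.

    student_logits / teacher_logits: lists of per-modality logit tensors,
    e.g. from a missing-modality student and a full-modality teacher.
    """
    total = torch.zeros(())
    for w, s, t in zip(weights, student_logits, teacher_logits):
        kd = F.kl_div(
            F.log_softmax(s / temperature, dim=-1),
            F.softmax(t / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2  # standard KD temperature scaling
        total = total + w * kd
    return total
```

On this reading, modalities that are missing or uninformative receive low weights, so the distillation signal concentrates on the modalities that matter most.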
Statistics
MetaKD outperforms the state of the art on the BraTS2018 dataset, improving the segmentation Dice score by 3.51% for enhancing tumor, 2.19% for tumor core, and 1.14% for whole tumor (the Dice metric is sketched at the end of this section).
On the ADNI classification task, MetaKD achieves an average accuracy of 62.83% versus Flex-MoE's 58.71%, and an average F1-score of 44.64 versus 40.42.
On the Audiovision-MNIST classification task with missing audio data, MetaKD achieves an accuracy of 94.22% versus the second-best model's 93.56% at an audio rate of 10%, and 94.89% versus 93.78% at an audio rate of 15%.
With missing visual data on Audiovision-MNIST, MetaKD improves accuracy by around 0.5% over the second-best models at visual rates of 5% and 10%.
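For reference, the Dice score cited in the BraTS2018 results above is the standard overlap metric, Dice = 2|P∩T| / (|P| + |T|). A minimal implementation for binary masks (the function name `dice_score` is illustrative):

```python
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice = 2*|pred ∩ target| / (|pred| + |target|) for binary masks."""
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).sum().item()
    return (2.0 * inter + eps) / (pred.sum().item() + target.sum().item() + eps)
```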