Key Concepts
PCMEA introduces a semi-supervised approach for multi-modal entity alignment, enhancing alignment quality through pseudo-label calibration and contrastive learning.
Abstract
PCMEA proposes a novel framework for multi-modal entity alignment that addresses challenges such as modal-specific noise and limited labeled data. The method combines modality-specific encoders with attention mechanisms to extract features, filters noise via mutual information maximization, and improves alignment through pseudo-label calibration. Experimental results demonstrate superior performance over state-of-the-art methods on two benchmark datasets.
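To make the pseudo-label calibration step concrete, the sketch below shows one common way such a scheme can be realized: pseudo-aligned entity pairs are kept only when they are mutual nearest neighbors above a similarity threshold, and the surviving pairs feed an InfoNCE-style contrastive loss while a momentum (EMA) copy of the encoder is updated. This is a minimal illustration under assumed design choices (threshold value, temperature, EMA coefficient), not the paper's exact implementation; all function names here are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize embeddings so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def momentum_update(online, target, m=0.99):
    """EMA update of a momentum encoder's parameters (illustrative)."""
    return m * target + (1 - m) * online

def calibrated_pseudo_labels(src_emb, tgt_emb, threshold=0.9):
    """Keep only mutual-nearest-neighbor pairs above a similarity threshold.

    This mimics pseudo-label calibration: unreliable pairs are filtered
    out before they are used as supervision.
    """
    sim = l2_normalize(src_emb) @ l2_normalize(tgt_emb).T
    fwd = sim.argmax(axis=1)  # best target for each source entity
    bwd = sim.argmax(axis=0)  # best source for each target entity
    return [(i, j) for i, j in enumerate(fwd)
            if bwd[j] == i and sim[i, j] >= threshold]

def info_nce_loss(src_emb, tgt_emb, pairs, temperature=0.1):
    """InfoNCE contrastive loss over the calibrated pseudo-aligned pairs."""
    sim = l2_normalize(src_emb) @ l2_normalize(tgt_emb).T / temperature
    losses = []
    for i, j in pairs:
        # Pull the aligned pair together, push all other targets away.
        log_prob = sim[i, j] - np.log(np.exp(sim[i]).sum())
        losses.append(-log_prob)
    return float(np.mean(losses)) if losses else 0.0
```

In this toy setup, near-identical embeddings pass the mutual-neighbor filter and yield a small contrastive loss, while mismatched entities are dropped before they can pollute training, which is the intuition behind calibrating pseudo-labels in a semi-supervised setting.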
Statistics
PCMEA achieves 0.6763 Hits@1 on FB15K-DB15K with 20% seeds.
PCMEA outperforms MCLEA by 16.21% on average across all metrics.
PCMEA surpasses the best baseline by 12.66% in Hits@1 on FB15K-YAGO15K with 20% seeds.
Quotes
"PCMEA combines pseudo-label calibration with momentum-based contrastive learning."
"Experimental results show that PCMEA consistently outperforms prior state-of-the-art methods."
"Our model brings about significant improvement in alignment performance under semi-supervised settings."