Core Concepts
Proposes a task-customized mixture of adapters (TC-MoA) for general image fusion, enhancing compatibility and performance across multiple fusion tasks.
Abstract
Introduces TC-MoA for adaptive multi-source image fusion.
Utilizes mutual information regularization for diverse sources.
Achieves superior performance in visible-infrared fusion (VIF), multi-exposure fusion (MEF), and multi-focus fusion (MFF) tasks.
Demonstrates prompt controllability and router controllability.
Conducts hyperparameter analyses and ablation studies.
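The idea behind a task-customized mixture of adapters can be sketched as follows: a frozen backbone produces features, several lightweight bottleneck adapters transform them, and a task-conditioned router blends the adapter outputs per token. This is a minimal illustrative sketch, not the authors' implementation; all names, shapes, and the simple linear router are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class TaskCustomizedMoA:
    """Hypothetical sketch of a task-customized mixture of adapters.

    Not the TC-MoA authors' code: adapters are low-rank bottleneck
    projections, and the router is a single linear map conditioned
    on an added task embedding.
    """
    def __init__(self, dim, bottleneck, n_adapters, n_tasks, seed=0):
        rng = np.random.default_rng(seed)
        # Adapter i: down-project to the bottleneck, then up-project back.
        self.down = rng.normal(0, 0.02, (n_adapters, dim, bottleneck))
        self.up = rng.normal(0, 0.02, (n_adapters, bottleneck, dim))
        # Task embeddings and a router that scores each adapter per token.
        self.task_emb = rng.normal(0, 0.02, (n_tasks, dim))
        self.router = rng.normal(0, 0.02, (dim, n_adapters))

    def __call__(self, feats, task_id):
        # feats: (tokens, dim) features from a frozen backbone.
        h = feats + self.task_emb[task_id]            # task-conditioned input
        gate = softmax(h @ self.router, axis=-1)      # (tokens, n_adapters)
        # All adapter outputs, stacked: (n_adapters, tokens, dim).
        outs = np.einsum('td,adb,abe->ate', feats, self.down, self.up)
        # Blend adapter outputs with the router weights; residual add.
        return feats + np.einsum('ta,ate->te', gate, outs)

moa = TaskCustomizedMoA(dim=16, bottleneck=4, n_adapters=3, n_tasks=3)
fused = moa(np.ones((8, 16)), task_id=1)  # (8, 16) fused features
```

Only the small adapter and router tensors would be trained, which is consistent with the paper's claim of adding just 2.8% learnable parameters on top of a frozen backbone.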
Stats
"By only adding 2.8% of learnable parameters, our model copes with numerous fusion tasks."
"The code is available at https://github.com/YangSun22/TC-MoA."
Quotes
"Our TC-MoA controls the dominant intensity bias for different fusion tasks, successfully unifying multiple fusion tasks in a single model."
"Extensive experiments show that TC-MoA outperforms the competing approaches in learning commonalities while retaining compatibility for general image fusion."