Efficient Multitask Learning with Intuition-Aware Mixture-of-Rank-1-Experts
We present Intuition-MoR1E, a novel framework that leverages the inherent semantic clustering of instances to mimic human intuition, enhancing the decision-making efficacy of the router in Mixture-of-Experts (MoE) networks. We further introduce an ultra-lightweight Mixture-of-Rank-1-Experts (MoR1E) architecture, supplemented with Low-Rank Adaptation (LoRA), which optimizes the efficiency of model finetuning.
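For intuition, below is a minimal sketch of how a Mixture-of-Rank-1-Experts layer might combine per-token routing with LoRA-style rank-1 updates. This is an illustrative assumption about the architecture, not the paper's released implementation; the class name `MoR1ELinear` and the parameters `num_experts` and `top_k` are hypothetical.

```python
# Hypothetical MoR1E layer sketch: a frozen base linear layer plus a pool of
# rank-1 experts, where a router selects and gates the top-k experts per token.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoR1ELinear(nn.Module):
    def __init__(self, in_features, out_features, num_experts=8, top_k=2):
        super().__init__()
        # Frozen pretrained weight (stands in for a base model layer).
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)
        # Expert i contributes the rank-1 update b_i a_i^T, so the whole
        # expert pool is just two matrices: A (experts x in), B (experts x out).
        self.A = nn.Parameter(torch.randn(num_experts, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, out_features))
        # Router scores every token against every rank-1 expert.
        self.router = nn.Linear(in_features, num_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (batch, seq, in_features)
        logits = self.router(x)                        # (b, s, E)
        topv, topi = logits.topk(self.top_k, dim=-1)
        # Sparse gates: softmax over the selected experts, zero elsewhere.
        gates = torch.zeros_like(logits).scatter(-1, topi, F.softmax(topv, -1))
        # Rank-1 expert i maps x -> (x . a_i) * b_i; gate and sum over experts.
        coeff = x @ self.A.t()                         # (b, s, E)
        delta = (gates * coeff) @ self.B               # (b, s, out)
        return self.base(x) + delta
```

One way to read this design: a standard rank-r LoRA adapter is decomposed into r independently routed rank-1 experts, so the trainable parameter count stays comparable to plain LoRA while the router can specialize individual rank-1 components to semantically similar instances.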