Efficient Distillation of Multilingual Speech Models: DistilWhisper Approach
DistilWhisper proposes a method to narrow the automatic speech recognition (ASR) performance gap for under-represented languages by combining lightweight language-specific experts with knowledge distillation from a larger teacher model.
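To make the distillation idea concrete, the sketch below implements the standard knowledge-distillation term: a KL divergence between temperature-softened teacher and student output distributions. This is an illustrative, dependency-free example, not the paper's exact training objective; the function names, the toy logits, and the temperature value are assumptions for illustration.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits (numerically stable).
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions: the usual
    # knowledge-distillation objective pushing the student toward
    # the teacher's output distribution.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy check: a student whose logits track the teacher's incurs a
# smaller distillation loss than one whose logits are reversed.
teacher = [2.0, 1.0, 0.1]
student_close = [1.9, 1.1, 0.2]
student_far = [0.1, 1.0, 2.0]
print(kd_loss(student_close, teacher) < kd_loss(student_far, teacher))  # True
```

In the full setup this term would be combined with the regular cross-entropy loss on transcriptions, so the student both matches the teacher and fits the target language data.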