Recent advances in NLP have come largely from fine-tuning large pre-trained language models, but progress for languages such as Spanish is held back by the limited availability of training data and by the lack of models efficient enough for resource-constrained environments. The study introduces SpanishTinyRoBERTa, a compressed language model based on RoBERTa that is trained with knowledge distillation to maintain performance while improving efficiency. Experimental results show that the distilled model preserves performance on downstream tasks while significantly increasing inference speed. This work aims to facilitate the development of efficient language models for Spanish across NLP tasks.
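As a rough illustration of how knowledge distillation combines the teacher's soft predictions with the usual hard-label objective, the sketch below shows a standard distillation loss in PyTorch. This is a minimal, generic example, not the paper's exact training recipe; the `temperature` and `alpha` values and the function name are placeholders chosen for illustration.

```python
# Minimal sketch of response-based knowledge distillation for a
# RoBERTa-style teacher/student pair. Hyperparameters are illustrative,
# not the values used for SpanishTinyRoBERTa.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Weighted sum of a soft-target KL term (student vs. teacher)
    and the standard hard-label cross-entropy term."""
    # Soften both distributions with the temperature before comparing them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd_loss = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    # Rescale by T^2 so gradient magnitudes stay comparable across temperatures.
    kd_loss = kd_loss * (temperature ** 2)
    ce_loss = F.cross_entropy(student_logits, labels)
    return alpha * kd_loss + (1.0 - alpha) * ce_loss
```

In practice the teacher's logits are computed with gradients disabled, and only the smaller student model is updated, which is what yields the reduced inference cost reported for the distilled model.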
Key insights extracted from arxiv.org by Adri..., 03-19-2024: https://arxiv.org/pdf/2312.04193.pdf