Adapting XLM-RoBERTa for Ancient and Historical Languages: Insights from the TartuNLP Submission to the SIGTYP 2024 Shared Task
The authors present a parameter-efficient fine-tuning approach based on the adapters framework for adapting the XLM-RoBERTa language model to a range of natural language processing tasks across 16 ancient and historical languages.
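To make the approach concrete, below is a minimal sketch of adapter-based parameter-efficient fine-tuning of XLM-RoBERTa using the AdapterHub `adapters` library. The adapter name, task head, configuration string, and label count are illustrative assumptions for a POS-tagging setup, not the authors' exact pipeline.

```python
# Illustrative sketch: adapter-based fine-tuning of XLM-RoBERTa with
# the AdapterHub `adapters` library. Names and label counts below are
# assumptions, not the submission's actual configuration.
from adapters import AutoAdapterModel

# Load the pre-trained XLM-RoBERTa backbone with adapter support.
model = AutoAdapterModel.from_pretrained("xlm-roberta-base")

# Add a bottleneck adapter for one language/task; "seq_bn" selects the
# sequential bottleneck (Pfeiffer-style) adapter configuration.
model.add_adapter("pos_latin", config="seq_bn")

# Attach a token-classification head, e.g. for POS tagging
# (17 labels assumed here, matching the UPOS tag set).
model.add_tagging_head("pos_latin", num_labels=17)

# Freeze the backbone and train only the adapter and head weights:
# this is what makes the approach parameter-efficient.
model.train_adapter("pos_latin")
```

After training, only the small adapter and head weights need to be stored per language (e.g. via `model.save_adapter(...)`), while the frozen XLM-RoBERTa backbone is shared across all 16 languages.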