
Training BERT Models for Literary Translation Perception Trends in Hungary


Core Concepts
BERT models are trained to carry over a coding system developed for tracking literary translation perception trends in Hungary, demonstrating the effectiveness of hyperparameter tuning and domain adaptation.
Summary
Introduction
Transition from the socialist Kádár era to democracy in Hungary. A large pilot project examines literary translators' perception.

Objective and Contributions
A detailed classification methodology using BERT models. Extensive hyperparameter tuning for complex sequence labelling.

Related Work
Training word embeddings on the Pártélet journal corpus. BERT used in various domains for sequence labelling.

Dataset
The Alföld and Nagyvilág journals used for training. Manual annotation of paragraphs with content and context labels.

Training
Domain adaptation with Masked Language Modelling. Fine-tuning for imbalanced label classification.

Evaluation on the Target Domain
Test set sampling and importance sampling. Validation results for content and context labels.

Comparisons
Baseline methods tested for low-cost classification. BERT training methods evaluated for performance.

Qualitative Analysis
Examination of misclassifications and model performance.

Conclusion
Successful training of BERT models for literary translation perception trends in Hungary.
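The Training step above (Masked Language Modelling for domain adaptation, followed by fine-tuning) is summarised only in prose here. The snippet below is a minimal sketch of the MLM domain-adaptation step, assuming the Hugging Face Transformers and Datasets libraries; the checkpoint name, data path, and hyperparameter values are illustrative placeholders, not the paper's actual settings.

```python
# Minimal sketch of MLM domain adaptation (continued pretraining) on OCR-ed
# journal text. Checkpoint, paths, and hyperparameters are illustrative only.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

checkpoint = "SZTAKI-HLT/hubert-base-cc"   # hypothetical Hungarian BERT choice
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Plain-text paragraphs from the OCR-ed journals (placeholder path).
raw = load_dataset("text", data_files={"train": "alfold_nagyvilag_ocr.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens, the standard BERT MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mlm-domain-adapted",
    per_device_train_batch_size=32,   # the paper tunes this to fill an A100 40GB
    num_train_epochs=3,
    learning_rate=5e-5,
)

Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()
```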
Statistics
"Extensive hyperparameter tuning is used to obtain the best possible results and fair comparisons." "We show that our models can carry over one annotation system to the target domain." "The mean imbalance ratio for content labels is 27.34 and for context labels is 36.31." "The batch size is set to the largest value that fit in the NVIDIA A100 40GB GPU." "Training with the best hyperparameters brings down the perplexity score of the original model to 2.88."
Quotes
"We show that with extensive hyperparameter tuning both in pretraining and finetuning, we can teach BERT models complex and highly imbalanced sequence labelling systems." "We verify that our models can carry over one coding system to the target domain." "Domain adaptation to OCR-ed text gives the most performance boost." "We show that domain adaptation to OCR-ed text of distinct subject matter already significantly helps task performance."

Deeper Questions

How can the findings of this study be applied to other languages and cultural contexts?

The findings of this study can serve as a framework for analyzing trends in social perception in other languages and cultural contexts. The methodology of training BERT models to carry over a coding system developed on one corpus to another can be adapted to different languages to track shifts in perception over time. By adjusting the coding system and training on different datasets, researchers can explore how literary translation is perceived in diverse cultural settings, offering insight into the evolution of social attitudes towards translation across linguistic and cultural contexts.

What are the potential limitations of using BERT models for domain adaptation in literary studies?

Using BERT models for domain adaptation in literary studies has several potential limitations. The first is the availability and quality of annotated data: annotated datasets in literary studies are often limited in size and scope, which constrains model performance. The complexity of literary texts and the nuances of literary language may also pose challenges for BERT models, which are primarily designed for general language understanding tasks; adapting them to a specific literary domain may require extensive preprocessing and fine-tuning to achieve optimal results. Finally, interpretability remains a challenge, as the inner workings of deep learning models can be opaque and difficult for human researchers to interpret in the context of literary analysis.

How can the insights gained from this research impact the field of Natural Language Processing beyond literary translation studies?

The insights gained from this research can have broader implications for the field of Natural Language Processing (NLP) beyond literary translation studies. The methodology of training BERT models to carry over coding systems developed on one corpus to another can be applied to various NLP tasks in different domains. This approach can be used to analyze trends, track shifts in perception, and study social attitudes in diverse contexts beyond literature. The techniques for handling label imbalance, domain shift, and ensemble learning demonstrated in this study can be valuable for NLP applications such as sentiment analysis, information extraction, and document classification. The research also highlights the importance of robust evaluation methods and the impact of domain adaptation on model performance, which can inform best practices in NLP research and application development.
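As one concrete illustration of the label-imbalance handling mentioned above, a common approach is to fine-tune with a class-weighted loss. The sketch below assumes PyTorch and Hugging Face Transformers; the custom trainer, label counts, and weighting scheme are hypothetical and not taken from the paper.

```python
# Minimal sketch of class-weighted fine-tuning for an imbalanced label set;
# one generic way to handle imbalance, not the paper's actual implementation.
import torch
from torch import nn
from transformers import AutoModelForSequenceClassification, Trainer

class WeightedLossTrainer(Trainer):
    """Trainer that replaces the default loss with a class-weighted one."""
    def __init__(self, class_weights, **kwargs):
        super().__init__(**kwargs)
        self.class_weights = class_weights

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fn = nn.CrossEntropyLoss(
            weight=self.class_weights.to(outputs.logits.device))
        loss = loss_fn(outputs.logits, labels)
        return (loss, outputs) if return_outputs else loss

# Inverse-frequency weights for a hypothetical 3-label task.
label_counts = torch.tensor([500.0, 30.0, 12.0])
class_weights = label_counts.sum() / (len(label_counts) * label_counts)

model = AutoModelForSequenceClassification.from_pretrained(
    "mlm-domain-adapted", num_labels=len(label_counts))
# trainer = WeightedLossTrainer(class_weights=class_weights, model=model, ...)
```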