This work evaluates named entity recognition (NER) with mono- and multilingual transformer models on transcriptions of Brazilian corporate earnings calls. It covers dataset collection, annotation via weak supervision, model fine-tuning, and performance analysis. Key highlights include framing NER as a text-generation task, a comparison of BERT and T5 models, macro F1-scores ranging from 98.52% to 98.99%, differences in memory and time consumption between the models, and insights into entity-recognition approaches.
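The summary mentions two technical ideas worth making concrete: framing NER as text generation (a seq2seq model like T5 emits the entities as a string instead of per-token labels) and evaluating with macro F1 (the unweighted mean of per-entity-type F1). The sketch below is illustrative only: the serialization format and the entity types are assumptions, not the paper's exact scheme.

```python
from collections import defaultdict

def to_generation_target(tokens, labels):
    """Serialize BIO-labeled tokens into the target string a seq2seq
    model (e.g. T5) would be trained to generate. The "TYPE: span"
    format is a hypothetical example, not the paper's scheme."""
    spans, cur, cur_type = [], [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if cur:
                spans.append((" ".join(cur), cur_type))
            cur, cur_type = [tok], lab[2:]
        elif lab.startswith("I-") and cur:
            cur.append(tok)
        else:
            if cur:
                spans.append((" ".join(cur), cur_type))
            cur, cur_type = [], None
    if cur:
        spans.append((" ".join(cur), cur_type))
    return "; ".join(f"{t}: {s}" for s, t in spans)

def macro_f1(gold, pred):
    """Macro F1 over entity types: compute F1 per type from
    (span, type) sets, then average the per-type scores equally."""
    counts = defaultdict(lambda: [0, 0, 0])  # tp, fp, fn per type
    for g, p in zip(gold, pred):
        gset, pset = set(g), set(p)
        for _, t in pset & gset:
            counts[t][0] += 1
        for _, t in pset - gset:
            counts[t][1] += 1
        for _, t in gset - pset:
            counts[t][2] += 1
    f1s = []
    for tp, fp, fn in counts.values():
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0
```

A generative NER model is then scored by parsing its emitted string back into (span, type) pairs and comparing against the gold pairs with `macro_f1`, which weights every entity type equally regardless of frequency.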
Source: Ramon Abilio et al., arxiv.org, 03-20-2024
https://arxiv.org/pdf/2403.12212.pdf