The study introduces a new Turkish fact-checking dataset, FCTR, with 3238 claims from three Turkish fact-checking organizations. It evaluates the performance of large language models through zero-shot and few-shot learning approaches. The results show that fine-tuning models with Turkish data yields superior results compared to zero-shot and few-shot approaches.
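The zero-shot and few-shot evaluation described above can be illustrated with a minimal prompt-construction sketch. This is not the paper's code; the label set, demonstration claims, and function names are hypothetical placeholders for how such prompts are typically assembled.

```python
# Illustrative sketch (not the paper's implementation): building
# zero-shot and few-shot prompts for claim verification.
# LABELS and the demo claims below are hypothetical placeholders.

LABELS = ["true", "false"]

def zero_shot_prompt(claim: str) -> str:
    """Ask a model to verify a claim with no demonstrations."""
    return (
        f"Classify the following claim as one of {LABELS}.\n"
        f"Claim: {claim}\nLabel:"
    )

def few_shot_prompt(claim: str, demos: list) -> str:
    """Prepend labeled demonstrations before the target claim."""
    shots = "".join(f"Claim: {c}\nLabel: {l}\n\n" for c, l in demos)
    return (
        f"Classify each claim as one of {LABELS}.\n\n"
        + shots
        + f"Claim: {claim}\nLabel:"
    )

demos = [("Water boils at 100 degrees Celsius at sea level.", "true")]
print(few_shot_prompt("The claim under review.", demos))
```

In a few-shot setup, the labeled demonstrations give the model in-context examples of the task; the zero-shot variant omits them, which is the contrast the study evaluates before comparing both against fine-tuning on Turkish data.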
The rapid spread of misinformation on social media platforms has raised concerns about its impact on public opinion. Automated fact-checking methods aim to assess the truthfulness of claims while reducing human intervention. Cross-lingual transfer learning is explored as a solution for building fact-checking systems in low-resource languages.
Datasets for fact-checking have emerged primarily in English, creating an imbalance between languages. Leveraging large datasets in English and cross-lingual transfer learning can help build fact-checking systems for other languages efficiently. The study highlights the importance of utilizing native data for successful model development.
By Recep Firat ... at arxiv.org, 03-04-2024
https://arxiv.org/pdf/2403.00411.pdf