The paper proposes a two-stage fine-tuning (FT) process for large language models (LLMs) to generate high-quality financial reports. The key insights are:
The first stage of FT lets the LLM learn domain-specific jargon and writing style, even at the cost of some hallucinations; this stage promotes creativity and the generation of compound sentences.
The second stage of FT focuses on correcting the hallucinations identified in the first stage, enabling the LLM to self-correct and improve its performance.
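As a rough illustration of how such a two-stage pipeline could be wired up, the sketch below uses the Hugging Face Trainer on a placeholder base model ("gpt2") with two toy corpora (`domain_reports` for stage one, `corrected_drafts` for stage two). The model choice, corpora, and hyperparameters are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal sketch of a two-stage fine-tuning flow (assumed setup, not the
# paper's exact configuration). Stage 1 adapts the base model to financial
# jargon and style; stage 2 continues training on corrected versions of the
# hallucinated drafts observed after stage 1.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "gpt2"  # placeholder; the paper's base LLM is not specified here

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical toy corpora; real runs would use full report datasets.
domain_reports = [
    "Q3 revenue rose 12% year over year, driven by higher fee income.",
    "Operating margin expanded 80 bps on disciplined cost control.",
]
corrected_drafts = [
    "Net income grew 5% in Q3; the earlier draft's 15% figure was corrected.",
]


def to_lm_dataset(texts):
    """Tokenize raw report text into a causal-LM dataset."""
    ds = Dataset.from_dict({"text": texts})
    return ds.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True,
        remove_columns=["text"],
    )


def run_stage(train_texts, output_dir, epochs):
    """Run one fine-tuning stage; the model object carries over between stages."""
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir,
                               num_train_epochs=epochs,
                               per_device_train_batch_size=2,
                               learning_rate=2e-5),
        train_dataset=to_lm_dataset(train_texts),
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()


# Stage 1: domain adaptation on raw financial reports.
run_stage(domain_reports, "ft_stage1", epochs=3)
# Between stages (offline): generate drafts, flag hallucinated statements,
# and prepare corrected versions to form the second corpus.
# Stage 2: corrective fine-tuning on the corrected drafts.
run_stage(corrected_drafts, "ft_stage2", epochs=1)
```

Reusing the same model object means stage two simply continues from the stage-one weights, which is the most direct way to realize the "correct what stage one got wrong" idea with minimal extra tuning cost.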
The two-stage FT process doubles the number of correct answers and reduces hallucinations by over 50% relative to the base LLM without fine-tuning. It also improves perplexity, ROUGE, TER, and BLEU scores, and yields higher creativity and knowledge density with lower uncertainty.
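The standard scores mentioned above can be computed with off-the-shelf tooling. The sketch below uses the Hugging Face evaluate library on placeholder generated/reference sentences and a placeholder model id; these are illustrative assumptions, not the paper's evaluation setup.

```python
# Sketch of scoring generated report text against references with the
# standard metrics mentioned above. Texts and the model id are placeholders.
import evaluate

generated = ["Net interest income increased 8% in Q2 on higher loan yields."]
references = ["Net interest income rose 8% in the second quarter on higher loan yields."]

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
ter = evaluate.load("ter")
perplexity = evaluate.load("perplexity", module_type="measurement")

print(rouge.compute(predictions=generated, references=references))
print(bleu.compute(predictions=generated, references=references))
print(ter.compute(predictions=generated, references=[[r] for r in references]))
# Perplexity of the generated text under a (placeholder) checkpoint id.
print(perplexity.compute(predictions=generated, model_id="gpt2"))
```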
The authors introduce novel metrics to assess the performance of fine-tuned LLMs, including averaged sequential log-loss per sentence (ASLS) and knowledge density per sentence (KDPS), which make it possible to track creativity and hallucination control as fine-tuning progresses.
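The paper's exact formulas for ASLS and KDPS are not reproduced in this summary, so the sketch below encodes one plausible reading: ASLS as the mean per-token negative log-likelihood of a sentence under the fine-tuned model, and KDPS as the share of a sentence's words found in a domain lexicon. The checkpoint, the lexicon, and both definitions are assumptions made for illustration.

```python
# Hedged sketch of the two summary metrics. Definitions below are assumptions:
# ASLS ~ average per-token negative log-likelihood of a sentence under the model
# KDPS ~ fraction of a sentence's words that belong to a domain lexicon
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

FINANCE_TERMS = {"revenue", "ebitda", "margin", "liquidity", "yield"}  # toy lexicon


def asls(sentence: str) -> float:
    """Averaged sequential log-loss: mean NLL per token of the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # HF returns the mean cross-entropy per token
    return out.loss.item()


def kdps(sentence: str) -> float:
    """Knowledge density: share of words that hit the domain lexicon."""
    words = [w.strip(".,;:").lower() for w in sentence.split()]
    return sum(w in FINANCE_TERMS for w in words) / max(len(words), 1)


s = "EBITDA margin improved on stronger fee revenue and tighter cost control."
loss = asls(s)
print(f"ASLS={loss:.3f}  KDPS={kdps(s):.2f}  perplexity={math.exp(loss):.1f}")
```

Under this reading, exponentiating ASLS recovers a per-sentence perplexity, which ties the new metric back to the standard scores reported above.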
The proposed framework can be generalized to domain-specific fine-tuning tasks with minimized tuning costs, making it a promising approach for financial report generation and other specialized applications of LLMs.
Source: key takeaways from the original content by Sohini Roych... on arxiv.org, 09-20-2024: https://arxiv.org/pdf/2408.05365.pdf