The launch of ChatGPT by OpenAI has accelerated the adoption of Large Language Models (LLMs) across many domains. While LLMs like ChatGPT exhibit remarkable conversational capabilities, they are prone to errors such as hallucinations and omissions. These errors can have serious consequences, especially in critical fields such as legal compliance and medicine. This systematic literature review examines the role of human involvement in error detection as a means of mitigating the risks of LLM use. By understanding these human factors, organizations can optimize the deployment of LLM technology and prevent downstream issues stemming from inaccurate model responses. The research emphasizes the need to balance technological advancement with human insight in order to maximize the benefits of LLMs while minimizing risks.
Key takeaways from the source content, by Christian A...., arxiv.org, 03-18-2024.
https://arxiv.org/pdf/2403.09743.pdf