The launch of OpenAI's ChatGPT has revolutionized the use of Large Language Models (LLMs) across domains. While LLMs like ChatGPT exhibit remarkable conversational capabilities, they remain prone to errors such as hallucinations and omissions, which can have serious consequences in critical fields like legal compliance and medicine. This systematic literature review examines the role of human involvement in error detection as a means of mitigating the risks of LLM use. By understanding the relevant human factors, organizations can deploy LLM technology more effectively and prevent downstream problems caused by inaccurate model responses. The review emphasizes the need to balance technological advancement with human insight so that the benefits of LLMs are maximized while the risks are minimized.
Key insights from the original content by Christian A.... at arxiv.org, 03-18-2024
https://arxiv.org/pdf/2403.09743.pdf