Explainable AI (XAI) is evolving toward Large Language Models (LLMs), presenting both challenges and opportunities. This paper introduces 10 strategies for Usable XAI in the context of LLMs, covering both how XAI techniques can enhance LLMs and how LLMs can advance XAI. Case studies demonstrate the benefits of explanations in diagnosing model behaviors, evaluating response quality, and detecting hallucinations. Open challenges include semantically explaining model outputs and exploring new explanation paradigms beyond attribution methods.