Evaluating Explainable AI Methods as Intentional Distortions: Separating Successful Idealizations from Deceptive Explanations
Explainable AI (xAI) methods should be evaluated as intentional distortions, or "idealizations," of black-box models rather than as faithful explanations of them. The SIDEs framework offers a systematic approach to separating successful idealizations from deceptive explanations.