In this chapter, the author critiques Erasmus et al.'s defense of applying traditional models of scientific explanation to machine learning. The argument highlights the difficulty of explaining opaque ML systems with the Deductive-Nomological, Inductive-Statistical, and Causal-Mechanical models, emphasizing that the complexity of neural networks and the lack of verifiability of candidate explanations limit these approaches. In their place, the author proposes a pragmatic approach centered on understanding ML systems, advocating interpretable models as tools for grasping how interconnected parts work together and for achieving a functional representation of the system's behavior (a sketch of this idea follows below).
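To make the "interpretable model as a tool for understanding" idea concrete, here is a minimal sketch (not from the chapter itself; the dataset, models, and hyperparameters are illustrative assumptions) of a common surrogate-model technique: a shallow decision tree is fit to mimic a black-box classifier, so that a human-readable approximation of the opaque model's behavior can be inspected.

```python
# Minimal surrogate-model sketch (illustrative assumptions throughout):
# a shallow decision tree is trained to imitate a black-box classifier,
# giving an inspectable, functional representation of its behavior.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Stand-in for an opaque ML system.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Fit the surrogate on the black box's *predictions*, not the true labels:
# the target of understanding is the model's behavior, not the world.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules approximating the black box.
print(export_text(surrogate))
```

The surrogate is of course only an approximation; on the pragmatic view sketched in the chapter, its value lies in whether it supports successful reasoning about the system, not in whether it reproduces the network's internals.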
The analysis then works through the specific explanations Erasmus et al. offer, dissecting where each applies and where it falls short of providing genuine understanding of machine learning systems. The critique extends to counterfactual methods in XAI, highlighting problems with their robustness, their causal grounding, and their practical benefit to users. Ultimately, the chapter advocates a contextual view of understanding that prioritizes successful use over strict alethic (truth-tracking) standards.
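For readers unfamiliar with counterfactual XAI, the following hypothetical sketch (not the chapter's own method; data and model are assumptions) shows the simplest variant: given a query instance, find the nearest data point the model classifies differently. The robustness worry the critique raises is visible here, since retraining the model or perturbing the data can change which point counts as "the" counterfactual.

```python
# Minimal counterfactual-explanation sketch (hypothetical example):
# return the nearest instance in a pool that the model classifies
# differently from the query point.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

def nearest_counterfactual(x, X_pool, model):
    """Return the pool instance closest to x with a different prediction."""
    target = model.predict(x.reshape(1, -1))[0]
    preds = model.predict(X_pool)
    candidates = X_pool[preds != target]          # differently classified points
    distances = np.linalg.norm(candidates - x, axis=1)
    return candidates[np.argmin(distances)]

x = X[0]
cf = nearest_counterfactual(x, X, model)
print("original prediction:      ", model.predict(x.reshape(1, -1))[0])
print("counterfactual prediction:", model.predict(cf.reshape(1, -1))[0])
print("feature changes:          ", cf - x)       # the "explanation"
```

The feature differences play the explanatory role ("had these features been different, the prediction would have flipped"), which is exactly where the chapter's questions about causal grounding and practical benefit apply.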
Key insights from https://arxiv.org/pdf/2403.00315.pdf (arxiv.org, 03-04-2024).