Core Concepts
CrystalBox provides high-fidelity, future-based explanations for DRL controllers in input-driven environments, making their decisions easier to understand and observe.
Abstract
This work introduces CrystalBox, a model-agnostic explainability framework for Deep Reinforcement Learning (DRL) controllers in input-driven environments. It highlights why DRL decisions are hard to interpret and why future-based explanations matter: they describe the consequences a controller anticipates, not just the input features it attended to. CrystalBox decomposes the reward function into its components and predicts the future return of each, yielding meaningful explanations; the authors demonstrate its utility in applications such as adaptive bitrate streaming and congestion control. The framework's architecture, training process, and applications are detailed, with an emphasis on its efficiency and the fidelity of its explanations.
Key Points
DRL controllers in input-driven environments are difficult to interpret, debug, and trust.
CrystalBox generates future-based explanations by decomposing reward functions.
CrystalBox does not require modifications to the controller and offers high-fidelity explanations.
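The decomposition idea above can be sketched in a few lines. This is a hypothetical, minimal illustration (not CrystalBox's actual implementation): assuming the reward is an additive sum of named components (e.g., video quality and rebuffering penalty in adaptive bitrate streaming), we compute a separate discounted return for each component over a rollout, and present the vector of per-component returns as the future-based explanation for the current decision.

```python
GAMMA = 0.9  # illustrative discount factor

def decomposed_returns(component_rewards, gamma=GAMMA):
    """Compute one discounted return per reward component.

    component_rewards maps a component name (e.g., 'quality',
    'rebuffer') to that component's per-step rewards along a rollout.
    The returned dict of per-component returns serves as the
    future-based explanation for the decision that produced the rollout.
    """
    explanation = {}
    for name, rewards in component_rewards.items():
        g = 0.0
        # Standard backward pass: G_t = r_t + gamma * G_{t+1}
        for r in reversed(rewards):
            g = r + gamma * g
        explanation[name] = g
    return explanation

# Illustrative rollout under one candidate bitrate decision:
# positive quality rewards, a rebuffering penalty at step 2.
rollout = {
    "quality": [1.0, 1.0, 0.5],
    "rebuffer": [0.0, -2.0, 0.0],
}
print(decomposed_returns(rollout))
# -> {'quality': 2.305, 'rebuffer': -1.8}
```

Because the reward is additive, the sum of the component returns equals the ordinary scalar return, so the decomposition adds interpretability without changing what the controller optimizes. In practice a framework like CrystalBox learns predictors for these per-component future returns rather than computing them from an observed rollout.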
Quotes
"We propose an efficient algorithm to generate future-based explanations across both discrete and continuous control environments."
"We demonstrate the usefulness of CrystalBox’s explanations by providing insights when feature-based explainers find it challenging."