In the context of the ongoing energy transition, demand-side flexibility from the residential sector is becoming crucial: conventional generation is increasingly supplemented by rooftop solar PV, home batteries, and electric vehicles (EVs). Designing effective control frameworks for household energy consumption is essential but difficult. The article proposes a reinforcement learning (RL) approach based on differentiable decision trees that combines scalability with explainability, aiming to deliver adaptable control policies that foster user acceptance. Compared against rule-based and neural-network controllers, the proposed method achieves promising cost savings while remaining simple to explain.
The shift toward sustainable energy requires grid-balancing services and demand-side flexibility. Model Predictive Control (MPC) has been a prominent approach, but its dependence on accurate building models has largely confined it to large commercial buildings. Recent research has therefore turned to data-driven RL methods for controller design, which show promise for home energy management systems (HEMS). However, the lack of explainability of RL policies remains a significant hurdle for user acceptance.
To address this limitation, the article introduces RL policies based on differentiable decision trees (DDTs). By replacing deep neural networks with shallow decision trees, the approach yields structurally explainable control policies while still learning from data via gradient descent. The study demonstrates the usability of DDT agents in HEMS scenarios and compares their performance against standard controllers.
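The core idea can be sketched as a "soft" decision tree: each split is a sigmoid gate rather than a hard threshold, so the policy output is a probability-weighted blend of leaf actions and every parameter can be trained by gradient descent. The minimal depth-1 sketch below is illustrative only; the paper's actual architecture, depth, and training procedure are not reproduced here, and the analytic leaf-update step stands in for full autodiff training.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SoftDecisionTree:
    """Depth-1 differentiable decision tree (illustrative sketch).

    The single internal node routes input x to the right leaf with
    probability p = sigmoid(beta * (w . x - b)); the policy output is
    the p-weighted blend of the two leaf actions, so it is smooth in
    every parameter (w, b, and the leaf values).
    """

    def __init__(self, w, b, leaf, beta=5.0):
        self.w = np.asarray(w, dtype=float)      # split weights (learned)
        self.b = float(b)                        # split threshold (learned)
        self.leaf = np.asarray(leaf, dtype=float)  # action at each leaf (learned)
        self.beta = beta                         # gate steepness; large -> crisp split

    def forward(self, x):
        p = sigmoid(self.beta * (self.w @ np.asarray(x, dtype=float) - self.b))
        return p * self.leaf[1] + (1.0 - p) * self.leaf[0]

    def train_leaf_step(self, x, target, lr=0.1):
        """One gradient-descent step on the leaf values for squared error.
        (In practice all parameters, splits included, are trained via autodiff.)"""
        p = sigmoid(self.beta * (self.w @ np.asarray(x, dtype=float) - self.b))
        err = self.forward(x) - target
        self.leaf[1] -= lr * 2.0 * err * p
        self.leaf[0] -= lr * 2.0 * err * (1.0 - p)
```

With a steep gate the tree behaves almost like a hard if/else rule, yet the sigmoid keeps it differentiable end to end, which is what lets gradient-based RL optimize it like a neural network.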
Preliminary findings indicate that DDT-based agents outperform rule-based baselines and achieve performance comparable to standard RL agents while remaining interpretable: because a DDT policy can be visualized directly, users can more easily understand and accept AI-driven HEMS solutions.
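One way such a policy can be made explainable is to harden each learned soft split at probability 0.5 and print it as a plain if/else rule. The helper below is a hypothetical sketch of that idea; the feature names, units, and rule template are invented for illustration and are not taken from the paper.

```python
def explain_split(w, b, leaf, feature_names):
    """Render one learned soft split as a crisp, human-readable rule by
    thresholding the sigmoid gate at 0.5, i.e. testing w . x > b.
    Feature names and the charging-action phrasing are illustrative."""
    lhs = " + ".join(f"{wi:+.2f}*{name}" for wi, name in zip(w, feature_names))
    return (f"if {lhs} > {b:.2f}: charge at {leaf[1]:.2f} kW, "
            f"else: charge at {leaf[0]:.2f} kW")

# Hypothetical trained parameters for a home-battery charging decision:
rule = explain_split(w=[0.80, -0.50], b=0.30, leaf=[0.0, 3.7],
                     feature_names=["pv_surplus_kw", "price_eur"])
print(rule)
```

A rendering like this is what makes the "simplicity of explanation" claim concrete: the entire policy fits in a handful of such rules that a homeowner can read and sanity-check.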
arxiv.org