Model-free dynamic management strategy for low-carbon home energy based on deep reinforcement learning accommodating stochastic environments
Highlights
A model-free low-carbon management strategy for home energy is proposed.
The proposed strategy accelerates dynamic operation through a deep Q network.
The strategy achieves a unified effect across economy, emissions, and user satisfaction.
The model adapts to stochastic environments, guaranteeing robustness and stability.
Abstract This paper presents a model-free dynamic optimal management strategy for a low-carbon home energy management system (HEMS) based on deep reinforcement learning (DRL). The method handles the uncertainties and dynamics of renewable energy and demand-side load. First, the load model is established with a deep Q network (DQN) algorithm, which dispenses with the traditional forecasting step for stochastic quantities such as renewable energy generation, load demand, and price. Multiple agents are then established for dynamic management based on DRL. Through a “dynamic acquisition, dynamic decision” mechanism, the proposed model-free strategy achieves real-time energy management that adaptively responds to stochastic environments. Second, subject to constraints on system carbon emissions and carbon trading, the proposed strategy minimizes the energy consumption cost, carbon trading cost, and user satisfaction penalties. Finally, the effectiveness of the proposed strategy is verified through case studies. Experimental results demonstrate that the strategy significantly reduces the overall cost, including a 36.7% reduction in carbon trading cost, while user satisfaction penalties are reduced by 50.2%. Furthermore, the agent's hyperparameters can be adjusted to capture the trade-off between cost savings and satisfaction penalties. Compared with traditional forecast-based management strategies, the proposed strategy overcomes the problem of uncertainties and avoids forecasting errors, better accommodating stochastic environments.
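The abstract's three cost terms can be read as one objective to be minimized over the scheduling horizon, roughly $\min \sum_{t}\big(C^{\text{energy}}_t + C^{\text{carbon}}_t + P^{\text{sat}}_t\big)$, with the relative weighting acting as the tunable trade-off the abstract mentions; the exact formulation is not given here. The sketch below shows how a DQN agent could be trained against such a composite reward. It is a minimal illustration under stated assumptions, not the authors' implementation: the environment interface (`env.reset`/`env.step`), the network sizes, and the weights `w_e`, `w_c`, `w_s` are all hypothetical placeholders.

```python
# Minimal, hypothetical sketch of a DQN agent for a low-carbon HEMS.
# The environment, state/action spaces, and all weights are assumptions;
# the paper's exact reward coefficients and architecture are not given.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps an observed state (price, PV output, load, SoC, ...) to Q-values."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def reward(energy_cost, carbon_cost, satisfaction_penalty,
           w_e=1.0, w_c=1.0, w_s=1.0):
    # Negative weighted sum of the three cost terms named in the abstract.
    # The weights w_* are assumed hyperparameters trading off cost savings
    # against user satisfaction.
    return -(w_e * energy_cost + w_c * carbon_cost + w_s * satisfaction_penalty)


def train(env, state_dim, n_actions, episodes=500, gamma=0.99,
          eps=1.0, eps_min=0.05, eps_decay=0.995, batch_size=64):
    q_net = QNetwork(state_dim, n_actions)
    target_net = QNetwork(state_dim, n_actions)
    target_net.load_state_dict(q_net.state_dict())
    opt = optim.Adam(q_net.parameters(), lr=1e-3)
    buffer = deque(maxlen=50_000)

    for ep in range(episodes):
        state = env.reset()  # stochastic prices / PV / demand each episode
        done = False
        while not done:
            # Epsilon-greedy action selection ("dynamic decision").
            if random.random() < eps:
                action = random.randrange(n_actions)
            else:
                with torch.no_grad():
                    q_vals = q_net(torch.as_tensor(state, dtype=torch.float32))
                    action = q_vals.argmax().item()
            # "Dynamic acquisition": the hypothetical env returns the per-step
            # cost components (energy, carbon, satisfaction) with the new state.
            next_state, costs, done = env.step(action)
            r = reward(*costs)
            buffer.append((state, action, r, next_state, done))
            state = next_state

            if len(buffer) >= batch_size:
                s, a, r_b, s2, d = map(list, zip(*random.sample(buffer, batch_size)))
                s = torch.as_tensor(s, dtype=torch.float32)
                a = torch.as_tensor(a).unsqueeze(1)
                r_b = torch.as_tensor(r_b, dtype=torch.float32)
                s2 = torch.as_tensor(s2, dtype=torch.float32)
                d = torch.as_tensor(d, dtype=torch.float32)
                # Standard one-step TD target against a periodically synced copy.
                q = q_net(s).gather(1, a).squeeze(1)
                with torch.no_grad():
                    target = r_b + gamma * (1 - d) * target_net(s2).max(1).values
                loss = nn.functional.mse_loss(q, target)
                opt.zero_grad()
                loss.backward()
                opt.step()
        eps = max(eps_min, eps * eps_decay)
        if ep % 10 == 0:
            target_net.load_state_dict(q_net.state_dict())
    return q_net
```

At deployment, the trained `q_net` would simply pick the greedy action at each time step as new prices, generation, and demand arrive, which is the model-free, real-time behaviour the abstract attributes to the “dynamic acquisition, dynamic decision” mechanism.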
Hou, Hui (author) / Ge, Xiangdi (author) / Chen, Yue (author) / Tang, Jinrui (author) / Hou, Tingting (author) / Fang, Rengcun (author)
Energy and Buildings ; 278
2022-10-18
Article (Journal)
Electronic Resource
English