Study on deep reinforcement learning techniques for building energy consumption forecasting
Highlights The potential of deep reinforcement learning (DRL) techniques is investigated. Three commonly used DRL techniques, i.e., A3C, DDPG, and RDPG, are applied to building energy consumption forecasting. The prediction performance of six popular predictive algorithms is studied. DDPG and RDPG evidently enhance prediction performance, at the cost of more computation time. A3C shows no advantage in forecasting building energy consumption.
Abstract Reliable and accurate building energy consumption prediction is becoming increasingly pivotal in building energy management. Data-driven approaches have shown promising performance and attracted considerable research attention due to their efficiency and flexibility. As a combination of reinforcement learning and deep learning, deep reinforcement learning (DRL) techniques are expected to solve nonlinear and complex problems. However, very little is known about DRL techniques in forecasting building energy consumption. Therefore, this paper presents a case study of an office building using three commonly used DRL techniques to forecast building energy consumption, namely Asynchronous Advantage Actor-Critic (A3C), Deep Deterministic Policy Gradient (DDPG), and Recurrent Deterministic Policy Gradient (RDPG). The objective is to investigate the potential of DRL techniques in the field of building energy consumption prediction. A comprehensive comparison between DRL models and common supervised models is also provided. The results demonstrate that the proposed DDPG and RDPG models have clear advantages in forecasting building energy consumption compared to common supervised models, while requiring more computation time for model training. Their prediction performance, measured by mean absolute error (MAE), can be improved by 16%-24% for single-step-ahead prediction and 19%-32% for multi-step-ahead prediction. The results also indicate that A3C delivers poor prediction accuracy and converges much more slowly than DDPG and RDPG. However, A3C is still the most computationally efficient of the three DRL methods. The findings are enlightening, and the proposed DRL methodologies can be extended to other prediction problems, e.g., wind speed prediction and electricity load prediction.
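For context on the reported 16%-32% improvements: MAE averages the absolute differences between predicted and actual consumption, and a relative improvement compares two models' MAE values. A minimal sketch with hypothetical hourly consumption values (illustrative only, not data from the study):

```python
def mae(actual, predicted):
    # Mean absolute error: average of |actual - predicted| over all time steps
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical hourly energy consumption values in kWh (not from the paper)
actual    = [120.0, 135.0, 150.0, 160.0]
baseline  = [110.0, 150.0, 140.0, 175.0]  # e.g. a common supervised model
drl_model = [116.0, 140.0, 147.0, 166.0]  # e.g. a DDPG-style model

mae_base = mae(actual, baseline)                 # 12.5 kWh
mae_drl  = mae(actual, drl_model)                # 4.5 kWh
improvement = (mae_base - mae_drl) / mae_base    # 0.64, i.e. a 64% MAE reduction
```

The same calculation applies per prediction horizon, which is how single-step and multi-step improvements can be reported separately.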
Liu, Tao (author) / Tan, Zehan (author) / Xu, Chengliang (author) / Chen, Huanxin (author) / Li, Zhengfei (author)
Energy and Buildings ; 208
2019-12-02
Article (Journal)
Electronic Resource
English
Similar items:
A study of deep learning-based multi-horizon building energy forecasting. Elsevier | 2023
A New Deep Learning Restricted Boltzmann Machine for Energy Consumption Forecasting. DOAJ | 2022