On-policy learning-based deep reinforcement learning assessment for building control efficiency and stability
Deep reinforcement learning (DRL) has been considered a potential solution for efficiently controlling and managing building systems. However, a broad assessment of DRL-based building control is still required to characterize its pros and cons relative to conventional rule-based feedback controls. In this paper, we assessed DRL-based controls with on-policy learning algorithms and continuous control actions for cooling control of large office buildings in the summer season, with the goal of minimizing whole-building energy use and occupant discomfort. We compared the DRL-based control methods with two baseline control methods: (1) a pre-determined schedule of supply-air temperature and static-pressure setpoints, and (2) an advanced reset method that adjusts setpoints based on heuristic rules, i.e., ASHRAE Guideline 36. We also tested the DRL algorithms in multiple climate locations to evaluate their performance. We found that the DRL-based control methods outperformed the baseline control methods in energy savings while maintaining thermal comfort, reducing energy use by ∼4%–22% on average compared to the baseline methods, depending on climate location. We also evaluated DRL-based control in terms of control stability and showed that DRL-based methods should account for hardware lifetimes in practical operations.
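The core setup the abstract describes — an on-policy agent choosing a continuous setpoint to trade off cooling energy against occupant discomfort — can be sketched in miniature. Everything below is an illustrative assumption, not the paper's environment or algorithm: the reward shape, the constants, and the plain REINFORCE update with a Gaussian policy stand in for the whole-building simulation and the on-policy DRL methods the study actually evaluates.

```python
# Toy sketch (assumed, not the paper's implementation): an on-policy
# REINFORCE agent with a Gaussian policy over one continuous action,
# a supply-air temperature setpoint, trained on a reward that penalizes
# both cooling energy use and deviation from a comfort setpoint.
import random

random.seed(0)

COMFORT_SETPOINT = 14.0   # assumed "comfortable" supply temperature (deg C)
ENERGY_WEIGHT = 1.0       # assumed weight on energy use
COMFORT_WEIGHT = 2.0      # assumed weight on discomfort

def reward(setpoint):
    """Toy reward: raising the setpoint saves cooling energy, but
    straying from the comfort setpoint incurs a discomfort penalty."""
    energy_cost = ENERGY_WEIGHT * max(0.0, 20.0 - setpoint)
    discomfort = COMFORT_WEIGHT * (setpoint - COMFORT_SETPOINT) ** 2
    return -(energy_cost + discomfort)

# Gaussian policy over the continuous setpoint: action ~ N(mu, sigma^2).
mu, sigma, lr = 10.0, 1.0, 0.01
baseline = 0.0  # running-average reward, used as a variance-reducing baseline

for episode in range(2000):
    action = random.gauss(mu, sigma)
    r = reward(action)
    advantage = r - baseline
    baseline += 0.1 * (r - baseline)
    # REINFORCE: grad of log N(action; mu, sigma) w.r.t. mu
    # is (action - mu) / sigma^2; update mu along advantage * grad.
    mu += lr * advantage * (action - mu) / sigma**2
```

After training, `mu` settles near the setpoint that balances the two penalty terms, analogous to how the paper's agents balance whole-building energy use against occupant discomfort; the baseline controls in the study instead fix this setpoint by schedule or reset it with Guideline 36 heuristics.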
Lee, Joon-Yong (author) / Rahman, Aowabin (author) / Huang, Sen (author) / Smith, Amanda D. (author) / Katipamula, Srinivas (author)
Science and Technology for the Built Environment ; 28 ; 1150-1165
2022-09-27
16 pages
Article (Journal)
Electronic Resource
Similar titles:
Reinforcement learning building control approach harnessing imitation learning (DOAJ, 2023)
Optimizing the hyper-parameters of deep reinforcement learning for building control (Springer Verlag, 2025)
Deep reinforcement learning with planning guardrails for building energy demand response (DOAJ, 2023)