Real-Time Pump Scheduling in Water Distribution Networks Using Deep Reinforcement Learning
Pump scheduling in water distribution networks (WDNs) influences energy efficiency and water supply reliability. Conventional optimization methods typically struggle with intensive computational requirements and with handling water demand uncertainty. This study presents a deep reinforcement learning (DRL) method, proximal policy optimization (PPO), for real-time pump scheduling in WDNs. The PPO agents are trained offline to develop policies in advance, avoiding online optimization during the scheduling period. They are compared with genetic algorithm-based baseline methods, including online optimization methods (scenario-specific optimization and model predictive control) and a robust optimization method, on the Anytown and D-town networks. The results indicate that the PPO agents outperform the robust optimization method in operational cost and robustness to demand uncertainty, and achieve the same level of pump scheduling performance as the online optimization methods. Including demand and time information in the input for PPO agent training improves the performance of the DRL method, and a smaller scheduling step size could improve the agents' performance further. This study illustrates the potential of PPO for real-time pump scheduling in WDNs and provides insight into developing and applying the method in practice.
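The abstract frames pump scheduling as a sequential decision problem: the agent observes tank levels, demand, and time, switches pumps, and is rewarded for low operational cost while keeping the supply reliable. As an illustration only, here is a minimal, hypothetical toy environment in that shape. It is not the paper's EPANET-based Anytown/D-town model; the tank dynamics, tariff, demand pattern, and safe-band penalty are all invented simplifications:

```python
import math
import random

class ToyPumpEnv:
    """Hypothetical single-tank pump-scheduling environment (a sketch,
    not the study's hydraulic model). Observation = (tank level,
    current demand, hour of day) -- the abstract notes that including
    demand and time in the agent's input improved performance.
    Action = pump on/off. Reward = -(energy cost + reliability penalty).
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.level = 5.0                 # assumed tank level (m), safe band [2, 8]
        self.hour = 0
        self.demand = self._sample_demand()
        return self._obs()

    def _sample_demand(self):
        # assumed diurnal demand pattern with random noise (demand uncertainty)
        base = 1.0 + 0.5 * math.sin(2 * math.pi * self.hour / 24)
        return max(0.0, base + self.rng.gauss(0.0, 0.1))

    def _obs(self):
        return (self.level, self.demand, self.hour)

    def step(self, pump_on):
        inflow = 1.5 if pump_on else 0.0
        self.level += inflow - self.demand
        tariff = 0.05 if self.hour < 6 else 0.15     # assumed time-of-use tariff
        energy_cost = tariff * 30.0 * float(pump_on)  # assumed pump energy draw
        penalty = 10.0 if not (2.0 <= self.level <= 8.0) else 0.0
        reward = -(energy_cost + penalty)
        self.hour = (self.hour + 1) % 24
        self.demand = self._sample_demand()
        done = self.hour == 0            # one-day episode
        return self._obs(), reward, done
```

A PPO agent (e.g. via a standard RL library) would be trained offline on many randomized demand episodes of such an environment, so that at deployment the learned policy maps each observation directly to a pump action without any online optimization, which is the real-time property the abstract highlights.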
J. Water Resour. Plann. Manage.
Pei, Shengwei (author) / Hoang, Lan (author) / Fu, Guangtao (author) / Butler, David (author)
2025-06-01
Article (Journal)
Electronic Resource
English
Injection Mold Production Sustainable Scheduling Using Deep Reinforcement Learning
DOAJ | 2020
Deep Reinforcement Learning Model to Mitigate Congestion in Real-Time Traffic Light Networks
DOAJ | 2021