A platform for research: civil engineering, architecture and urbanism
Active structural control framework using policy-gradient reinforcement learning
Abstract This paper presents a novel data-driven approach to active structural control using deep reinforcement learning, wherein the control system learns to react optimally through a training process that embeds deep neural networks within a reinforcement learning framework. The key advantage of this paradigm is its data-driven nature, which circumvents the need for high-fidelity modeling that typically requires extensive prior knowledge about the structure of interest. Moreover, the proposed framework supports the design of a variety of active controllers for different external load types, such as wind and seismic loads, for any desired building. The efficacy of the framework is demonstrated in the context of seismic response control through three numerical case studies. The results confirm that the proposed approach yields significant structural response reductions in both the linear and nonlinear regimes. Implementation issues, such as sensitivity to variations in structural properties and to time delay, are also thoroughly investigated.
Highlights
Data-driven active control framework circumventing detailed modeling using deep RL
Customizable environment for various building geometries and control systems
Active controller trained for linear/nonlinear buildings and tested on unseen motions
Trainable using synthetic ground motion data and generalizable to real earthquakes
A data-driven control model robust to system property variations and time delay
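The policy-gradient paradigm summarized in the abstract can be illustrated with a toy sketch: a REINFORCE-style Gaussian policy learns a feedback force for a single-degree-of-freedom structure under synthetic base excitation. This is not the authors' implementation; the structural parameters, reward, and learning rates are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only (not the paper's controller): REINFORCE-style
# policy-gradient control of a 1-DOF structure under base excitation.
# All numerical values (mass, stiffness, learning rate, etc.) are assumptions.

def step(x, v, force, accel_g, m=1.0, k=100.0, c=1.0, dt=0.01):
    """Semi-implicit Euler update of m*x'' + c*x' + k*x = force - m*accel_g."""
    a = (force - c * v - k * x) / m - accel_g
    v_new = v + dt * a
    return x + dt * v_new, v_new

def rollout(theta, accel, rng, sigma=0.1):
    """Simulate one episode with a Gaussian policy u = theta . [x, v] + noise.

    Returns states, actions, and per-step rewards (negative squared drift)."""
    x = v = 0.0
    states, actions, rewards = [], [], []
    for ag in accel:
        s = np.array([x, v])
        u = float(theta @ s) + sigma * rng.standard_normal()
        states.append(s)
        actions.append(u)
        x, v = step(x, v, u, ag)
        rewards.append(-(x ** 2))          # penalize displacement response
    return np.array(states), np.array(actions), np.array(rewards)

def train(episodes=200, T=200, lr=1e-3, sigma=0.1, seed=0):
    """REINFORCE: ascend the expected return of the linear-Gaussian policy."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    for _ in range(episodes):
        accel = rng.standard_normal(T)     # synthetic ground-motion record
        S, U, R = rollout(theta, accel, rng, sigma)
        G = np.cumsum(R[::-1])[::-1]       # reward-to-go per step
        # grad log N(u | theta.s, sigma^2) = (u - theta.s) * s / sigma^2
        grad = ((U - S @ theta) / sigma ** 2)[:, None] * S
        theta += lr * (grad * G[:, None]).mean(axis=0)
    return theta
```

The data-driven character the abstract emphasizes shows up here in that the update uses only sampled trajectories (states, forces, rewards); no explicit structural model enters the learning rule, so the same loop applies when the simulated plant is replaced by a nonlinear or higher-dimensional building model.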
Sadeghi Eshkevari, Soheila (author) / Sadeghi Eshkevari, Soheil (author) / Sen, Debarshi (author) / Pakzad, Shamim N. (author)
Engineering Structures, 274
2022-10-09
Article (Journal)
Electronic Resource
English
Reinforcement Learning for Structural Control
British Library Online Contents | 2008