Learning from an Expert Agent: An Imitated-Reinforcement Learning Application
In recent years, the use of robotic systems in industrial settings has risen sharply, particularly for tasks that require flexibility, such as maintenance and refueling, as well as tasks that must replicate human motion. These robots often have more outputs than inputs, making them underactuated; they also exhibit nonlinear dynamics and frequently have coupled inputs. The rotary flexible joint (RFJ) robot serves as a useful testbed for state-of-the-art controllers targeting such complex dynamics. One of the most prominent industrial controllers for robots is the Proportional-Integral-Derivative (PID) controller. Although it is often the controller of choice for industrial robots, PID becomes increasingly difficult to apply as systems grow more complex and environments more dynamic, which makes controlling such robots an intriguing problem. This paper evaluates a proposed control technique in which a tuned PID controller acts as the expert, providing ground-truth references to train our controller; the agent is then retrained to refine its performance on an RFJ robot. Simulation results show that the neural network (NN) trains well from the expert agent. The NN was then used as the actor of a reinforcement learning (RL) agent, and its tracking performance was compared with that of the expert PID controller. The results demonstrate good tracking control of the robot and, most importantly, a significantly reduced training convergence time.
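The first stage of the method the abstract describes — distilling a tuned PID expert into a learned policy before RL fine-tuning — can be sketched in its simplest form. The snippet below is a hedged illustration only, not the paper's implementation: it substitutes a toy double-integrator plant for the RFJ dynamics, uses illustrative PID gains, replaces the NN actor with a linear policy fit by least squares, and covers only the imitation (behavior-cloning) step.

```python
import numpy as np

def pid_action(err, err_int, err_dot, kp=8.0, ki=0.5, kd=2.0):
    """Expert controller: classic PID on the tracking error (gains are
    illustrative assumptions, not the paper's tuned values)."""
    return kp * err + ki * err_int + kd * err_dot

def collect_demonstrations(n_steps=2000, dt=0.01, seed=0):
    """Roll out the PID expert on a double-integrator plant and log
    (features, action) pairs as the imitation dataset."""
    rng = np.random.default_rng(seed)
    X, U = [], []
    pos, vel, err_int = 0.0, 0.0, 0.0
    target = rng.uniform(-1.0, 1.0)
    for t in range(n_steps):
        if t % 200 == 0:                 # switch setpoint occasionally
            target = rng.uniform(-1.0, 1.0)
        err = target - pos
        err_dot = -vel                   # derivative of error (fixed target)
        err_int += err * dt
        u = pid_action(err, err_int, err_dot)
        X.append([err, err_int, err_dot])
        U.append(u)
        vel += u * dt                    # double-integrator dynamics
        pos += vel * dt
    return np.array(X), np.array(U)

X, U = collect_demonstrations()
# Behavior cloning reduced to its simplest case: because the expert is
# linear in these features, least squares recovers its gain vector,
# giving the "ground truth" a network actor would be trained to match.
w, *_ = np.linalg.lstsq(X, U, rcond=None)
print(w)  # ≈ [kp, ki, kd] = [8.0, 0.5, 2.0]
```

In the paper's setting, the linear fit would be replaced by supervised training of the NN actor on such expert rollouts, after which RL fine-tuning starts from this warm-started policy rather than from scratch — the source of the reduced convergence time reported in the abstract.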
Choutri, Kheireddine (author) / Siddique, Tanjulee (author) / Fareh, Raouf (author) / Dylov, Dmitry (author) / Bettayeb, Maamar (author)
2024-06-03
1893203 byte
Conference paper
Electronic Resource
English
Imitated fair-faced concrete protective agent and construction process thereof
European Patent Office | 2024
Imitated bare concrete protective agent and construction method thereof
European Patent Office | 2021