Vision-Based Body Pose Estimation of Excavator Using a Transformer-Based Deep-Learning Model
To support safety, efficiency, and productivity management on construction sites, this research proposes a deep-learning method termed the transformer-based mechanical equipment pose network (TransMPNet) for effective and efficient image-based body pose estimation of excavators. TransMPNet comprises data processing, an ensemble model coupled with DenseNet201, an improved transformer module, a loss function, and evaluation metrics to perform feature processing and learning for accurate results. To verify the effectiveness and efficiency of the method, a publicly available image database of excavator body poses is adopted for experimental testing and validation. The results indicate that TransMPNet provides excellent performance with a mean-square error (MSE) of 218.626, a root-MSE (RMSE) of 14.786, an average normalized error (NE) of , and an average area under the curve (AUC) of , and that it significantly outperforms other state-of-the-art methods such as the cascaded pyramid network (CPN) and the stacked hourglass network (SHG) on these evaluation metrics. Accordingly, TransMPNet contributes to excavator body pose estimation, providing more effective and accurate results with great potential for practical application in on-site construction management.
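The abstract reports pose-estimation quality via MSE, RMSE, and a normalized error (NE) over predicted keypoints. A minimal sketch of how such metrics are typically computed for 2D keypoints is shown below; the function name, array shapes, and the use of a bounding-box diagonal as the NE normalization length are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pose_errors(pred, gt, norm_len):
    """Illustrative 2D pose-error metrics (not the paper's exact code).

    pred, gt: (N, K, 2) arrays of predicted / ground-truth keypoint
              pixel coordinates for N images with K keypoints each.
    norm_len: (N,) per-image normalization lengths, e.g. the
              excavator bounding-box diagonal (an assumption here).
    """
    diff = pred - gt
    mse = np.mean(diff ** 2)              # mean-square error over all coordinates
    rmse = np.sqrt(mse)                   # root-MSE, in pixels
    dist = np.linalg.norm(diff, axis=-1)  # (N, K) Euclidean keypoint errors
    ne = np.mean(dist / norm_len[:, None])  # scale-invariant normalized error
    return mse, rmse, ne

# Example: one image, one keypoint predicted 5 px away from ground truth
pred = np.array([[[3.0, 4.0]]])
gt = np.zeros((1, 1, 2))
mse, rmse, ne = pose_errors(pred, gt, np.array([10.0]))
```

With the example values, the squared coordinate errors are 9 and 16, giving MSE = 12.5, RMSE ≈ 3.536, and NE = 5 / 10 = 0.5.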
J. Comput. Civ. Eng.
Ji, Ankang (author) / Fan, Hongqin (author) / Xue, Xiaolong (author)
2025-03-01
Article (Journal)
Electronic Resource
English
Vision-based excavator pose estimation for automatic control
Elsevier | 2023
Vision-based excavator pose estimation for automatic control
Elsevier | 2024
Vision-based estimation of excavator manipulator pose for automated grading control
British Library Online Contents | 2019
Optimization-based excavator pose estimation using real-time location systems
British Library Online Contents | 2015