Automatic vision-based calculation of excavator earthmoving productivity using zero-shot learning activity recognition
Abstract Recently, vision-based methods have been widely used to analyze construction productivity from onsite videos owing to their low cost, simple deployment, and easy maintenance. However, existing vision-based methods rely on supervised learning for activity recognition, which is computationally expensive and requires large-scale labeled training datasets. To address this problem, this paper describes a vision-based method for automatically analyzing excavator productivity in earthmoving tasks by adopting zero-shot learning for activity recognition. The proposed method can identify the activities of general construction machines (e.g., excavators and loaders) without pre-training or fine-tuning. To verify its feasibility, the proposed method was tested on videos recorded at real construction sites. The accuracy values for activity recognition and productivity evaluation are 86% and 87.8%, respectively.
Highlights A vision-based method is proposed for productivity analysis in earthmoving. The zero-shot learning method CLIP is adopted for excavator activity recognition. The proposed activity recognition requires no pre-training or labeled datasets. The results achieved an accuracy of 86% for excavator activity recognition. The results achieved an accuracy of 87.8% for productivity analysis.
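The record itself contains no implementation detail, but the zero-shot idea described in the abstract can be illustrated with a minimal sketch: each sampled video frame is compared against natural-language activity prompts using CLIP, the highest-scoring prompt is taken as the current activity, and cycle counts derived from the resulting activity sequence yield a productivity estimate. The CLIP checkpoint, prompt wording, frame path, cycle-counting rule, and bucket capacity below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of zero-shot excavator activity recognition with CLIP
# (illustrative only; model choice, prompts, and productivity formula are assumptions).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Candidate activity descriptions used as zero-shot "labels".
PROMPTS = [
    "an excavator digging soil",
    "an excavator swinging with a loaded bucket",
    "an excavator dumping soil into a truck",
    "an idle excavator",
]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def classify_frame(frame_path: str) -> str:
    """Return the activity prompt that best matches a single video frame."""
    image = Image.open(frame_path)
    inputs = processor(text=PROMPTS, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # image-text similarity scores
    return PROMPTS[logits.softmax(dim=-1).argmax().item()]


def productivity(activities: list[str], bucket_m3: float, hours: float) -> float:
    """Estimate productivity in m^3/h from a per-frame activity sequence.

    Each transition into the dumping activity is counted as one bucket cycle;
    productivity = cycles * bucket capacity / observed hours (assumed formula).
    """
    cycles = sum(
        1
        for prev, cur in zip(activities, activities[1:])
        if "dumping" not in prev and "dumping" in cur
    )
    return cycles * bucket_m3 / hours
```

With frames sampled at, say, one per second over the observation period, the per-frame labels from classify_frame can be passed to productivity together with a nominal bucket capacity to obtain the kind of output the paper evaluates against ground truth; the actual frame rate, prompts, and cycle definition used by the authors are not given in this record.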
Chen, Chen (author) / Xiao, Bo (author) / Zhang, Yuxuan (author) / Zhu, Zhenhua (author)
2022-12-01
Article (Journal)
Electronic Resource
English
Similar items:
Vision-Based Excavator Activity Recognition and Productivity Analysis in Construction
British Library Conference Proceedings | 2019
Vision-based nonintrusive context documentation for earthmoving productivity simulation
British Library Online Contents | 2019