Augmented reality, deep learning and vision-language query system for construction worker safety
Abstract Low situational awareness contributes to safety incidents in construction. Existing Deep Learning (DL)-based applications lack the capability to provide the context-specific, interactive feedback that workers need to fully understand their surrounding environments. This paper proposes the Visual Construction Safety Query (VCSQ) system, which combines real-time Image Captioning (IC), safety-centric Visual Question Answering (VQA), and keyword-based Image-Text Retrieval (ITR), integrated with head-mounted Augmented Reality (AR) devices. The system is validated on benchmark datasets and real-world construction images. The ITR module achieves recall rates of 0.801 at Recall@5 and 0.835 at Recall@10, the VQA module reaches 89.7% accuracy, and the IC module attains a SPICE score of 0.449. Feasibility tests and user surveys confirm the system's practical advantages across different construction scenarios. This study establishes an integration roadmap adaptable to future advancements in interactive DL and immersive AR.
Highlights A vision-language system is developed for real-time captions and safety queries. A dataset covering seven safety considerations is created for querying unsafe situations. An intuitive AR application is incorporated for an immersive user experience. The system achieves 89.7% accuracy and an 83.5% recall rate against benchmark datasets.
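The Recall@5 and Recall@10 figures reported for the ITR module follow the usual image-text retrieval convention: the fraction of queries whose relevant caption appears among the top-K retrieved results. The following is a minimal illustrative sketch of that metric, not the authors' implementation; the function and variable names are hypothetical.

# Illustrative only: Recall@K as commonly scored for image-text retrieval.
def recall_at_k(ranked_caption_ids, relevant_caption_ids, k):
    """Fraction of queries whose relevant caption appears in the top-k results."""
    hits = 0
    for ranking, relevant in zip(ranked_caption_ids, relevant_caption_ids):
        if any(cid in relevant for cid in ranking[:k]):
            hits += 1
    return hits / len(ranked_caption_ids)

# Example: 3 image queries, each with a ranked list of retrieved caption ids.
rankings = [[4, 9, 1, 7, 2], [3, 8, 5, 0, 6], [2, 1, 9, 4, 5]]
ground_truth = [{1}, {6}, {7}]  # relevant caption id(s) per query
print(recall_at_k(rankings, ground_truth, k=5))  # -> 0.666...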
Chen, Haosen (author) / Hou, Lei (author) / Wu, Shaoze (author) / Zhang, Guomin (Kevin) (author) / Zou, Yang (author) / Moon, Sungkon (author) / Bhuiyan, Muhammed (author)
2023-10-26
Article (Journal)
Electronic Resource
English
Addressing construction worker safety in the design phase - Designing for construction worker safety
Online Contents | 1999
Addressing construction worker safety in the design phase - Designing for construction worker safety
British Library Online Contents | 1999
Using Eye-Tracking to Measure Worker Situation Awareness in Augmented Reality
Elsevier | 2024