Automated Detection and Segmentation of Mechanical, Electrical, and Plumbing Components in Indoor Environments by Using the YOLACT++ Architecture
Indoor construction environments, with their densely packed and detailed components, are complex areas for progress monitoring and reporting. Traditional manual monitoring systems, often constrained by poor lighting and accessibility, are inaccurate and time-consuming. With recent technological advancements, deep learning-based object recognition models have attracted considerable attention in construction. This paper introduces a novel method for progress monitoring and reporting of construction operations, employing digital imaging and the You Only Look At CoefficienTs (YOLACT++) deep learning algorithm to automatically recognize mechanical, electrical, and plumbing (MEP) components in challenging indoor settings. Data augmentation techniques and transfer learning were applied to improve the model’s generalization and adaptability. The study distinctively focuses on complex components in complicated indoor environments, a less explored area in current research, which has mainly centered on outdoor or simpler indoor settings. To achieve this, the study enhanced dataset quality by generating synthetic images that closely represent actual indoor conditions, including varied lighting, object complexity and scale, occlusion, clutter, and viewpoints. The study also evaluated different mixes of synthetic and real images to determine the optimal combination for effective training. Moving beyond commonly used algorithms such as Mask R-CNN and You Only Look Once (YOLO), the method applied in this work is YOLACT++ with deformable convolutional networks v2 (DCNv2), which enhances the model’s ability to handle objects with different scales, postures, rotations, and viewpoints, a capability essential in indoor environments. The model was validated on a large test dataset, including real images from construction sites, covering different indoor scenarios. The model achieved a precision of 84.80% and a recall of 85.58% for HVAC duct detection, and a precision of 86.87% and a recall of 73.93% for pipe detection, demonstrating its effectiveness under challenging conditions. This method contributes to more accurate automated progress monitoring in indoor environments by reducing manual and error-prone inspections.
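As a rough illustration of the kind of photometric and geometric augmentation the abstract describes (varied lighting, scale, rotation, viewpoint, and occlusion), the sketch below uses the albumentations library on an instance-segmentation sample. The specific transforms, parameter values, and the `augment_sample` helper are illustrative assumptions, not the augmentation pipeline reported in the paper.

```python
# Minimal augmentation sketch (assumed setup, not the authors' actual pipeline):
# simulates lighting changes, small camera tilts, scale variation, and crude
# occlusion for training images carrying per-instance masks and COCO boxes.
import albumentations as A

augment = A.Compose(
    [
        A.RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.3, p=0.7),  # lighting variation
        A.RandomGamma(gamma_limit=(70, 130), p=0.3),                                  # exposure differences
        A.HorizontalFlip(p=0.5),                                                      # mirrored viewpoints
        A.Rotate(limit=15, border_mode=0, p=0.5),                                     # small camera tilts
        A.RandomScale(scale_limit=0.2, p=0.5),                                        # object scale changes
        A.CoarseDropout(p=0.3),                                                       # crude occlusion patches
        A.GaussNoise(p=0.2),                                                          # sensor noise in dim rooms
    ],
    bbox_params=A.BboxParams(format="coco", label_fields=["class_labels"]),
)

def augment_sample(image, masks, bboxes, class_labels):
    """Apply the pipeline to one image with its instance masks and COCO-format boxes."""
    out = augment(image=image, masks=masks, bboxes=bboxes, class_labels=class_labels)
    return out["image"], out["masks"], out["bboxes"], out["class_labels"]
```

Under this assumed setup, each call produces a differently lit, tilted, and partially occluded copy of a training image while keeping its instance masks and boxes consistent, which is the general mechanism by which such augmentation improves generalization to cluttered indoor scenes.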
J. Constr. Eng. Manage.
Shamsollahi, Dena (Author) / Moselhi, Osama (Author) / Khorasani, Khashayar (Author)
August 1, 2024
Journal article
Electronic resource
English