Deep semantic segmentation for visual scene understanding of soil types
Abstract Scene understanding and visual contextual awareness are among the state-of-the-art applications of computer vision. Despite numerous detection- and classification-based studies, the literature lacks semantic segmentation methods for a more comprehensive and precise understanding of soil-included scenes, owing to the scarcity of annotated datasets. The information extracted from an understood scene is valuable for project fleet management, claims management, equipment productivity analysis, safety, and soil classification. Hence, this study presents a vision-based approach that uses semantic segmentation to understand soil-included scenes and classify soils into categories according to ASTM D2488. An annotated dataset of various soil types containing 3043 images was developed to train four Deeplab v3+ variants with modified decoders. Five-fold cross-validation indicates the remarkable performance of the best variant, with a mean Jaccard index of 0.9. The application and effects of subpixel upsampling and exit-flow CRF were also examined.
Highlights An automated visual scene understanding method for soil types is presented. A construction-based soil dataset was developed for semantic segmentation purposes. Four variants of Deeplab v3+ were trained and evaluated on the developed dataset. The best variant achieved a 0.9 mean Jaccard index and a 0.8837 mean class AUROC. The applications of the subpixel upsampling layer and the CRF layer were examined.
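The abstract and highlights report performance as a mean Jaccard index (intersection over union averaged over classes). As a minimal illustration of that metric, not the authors' implementation, the per-class Jaccard index for two segmentation label maps can be sketched as follows (the function name and toy arrays are assumptions for the example):

```python
import numpy as np

def jaccard_index(pred, target, num_classes):
    """Mean per-class Jaccard index (IoU) between two integer label maps."""
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps; exclude from the mean
        inter = np.logical_and(p, t).sum()
        scores.append(inter / union)
    return float(np.mean(scores))

# Toy 2x2 label maps with two classes
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(jaccard_index(pred, target, num_classes=2))  # (1/2 + 2/3) / 2 = 0.5833...
```

In the toy case, class 0 has one overlapping pixel out of a two-pixel union (1/2) and class 1 has two overlapping pixels out of a three-pixel union (2/3), so the mean Jaccard index is 7/12.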
Zamani, Vahid (author) / Taghaddos, Hosein (author) / Gholipour, Yaghob (author) / Pourreza, Hamidreza (author)
2022-05-06
Article (Journal)
Electronic Resource
English
A Deep Learning Semantic Segmentation Method for Landslide Scene Based on Transformer Architecture
DOAJ | 2022