Multioutput Image Classification to Support Postearthquake Reconnaissance
After hazard events, reconnaissance teams collect large numbers of images to document the post-event state of structures, assess their performance, and improve design procedures and codes. The majority of these data are captured as images and labeled manually, a highly repetitive task that requires considerable domain expertise and time. Advances in deep learning have enabled researchers to rapidly classify reconnaissance images, but thus far these classification methods have been limited to simple classification schemas in which the classes are all either mutually exclusive or independent. To date, an efficient classification system for a complex schema containing many classes arranged in a multi-level hierarchical structure is not available to support earthquake reconnaissance. To address this gap, this paper introduces a comprehensive classification schema and a multi-output deep convolutional neural network (DCNN) model for rapid postearthquake image classification. In contrast to past work, a single multi-output DCNN classification model with hierarchy-aware prediction was trained to enable the rapid organization of images. The performance of the proposed multi-output model was validated through comparisons with multi-label and multi-class models using the F1-score, and the multi-output model outperformed the other models. The multi-output model was then deployed to a web-based platform called the Automated Reconnaissance Image Organizer, which can be used to easily organize earthquake reconnaissance images.
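The abstract describes a single deep convolutional neural network with several classification outputs that share one backbone. As a rough illustration of that general pattern (not the architecture, schema, or class counts used in the paper), the following PyTorch sketch builds a shared ResNet-18 feature extractor with one classification head per attribute; the head names and class counts are hypothetical.

import torch
import torch.nn as nn
from torchvision import models

class MultiOutputClassifier(nn.Module):
    # Shared CNN backbone with one classification head per output attribute.
    def __init__(self, heads):
        super().__init__()
        backbone = models.resnet18()      # shared feature extractor (no pretrained weights)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()       # drop the original single-output classifier
        self.backbone = backbone
        # one linear head per attribute (e.g., scene level, component type, damage state)
        self.heads = nn.ModuleDict({name: nn.Linear(feat_dim, n) for name, n in heads.items()})

    def forward(self, x):
        features = self.backbone(x)
        # one set of logits per head; each head is trained with its own cross-entropy loss
        return {name: head(features) for name, head in self.heads.items()}

# Hypothetical output heads and class counts, for illustration only.
heads = {"scene_level": 3, "component_type": 5, "damage_state": 2}
model = MultiOutputClassifier(heads)
images = torch.randn(4, 3, 224, 224)      # dummy batch of four RGB images
logits = model(images)
targets = {name: torch.randint(0, n, (4,)) for name, n in heads.items()}   # dummy labels
loss = sum(nn.functional.cross_entropy(logits[name], targets[name]) for name in heads)

In this pattern, one forward pass yields a prediction for every level of the schema, rather than training a separate classifier per level.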
J. Perform. Constr. Facil.
Park, Ju An (author) / Liu, Xiaoyu (author) / Yeum, Chul Min (author) / Dyke, Shirley J. (author) / Midwinter, Max (author) / Choi, Jongseong (author) / Chu, Zhiwei (author) / Hacker, Thomas (author) / Benes, Bedrich (author)
2022-12-01
Article (Journal)
Electronic Resource
English
Machine Vision-Enhanced Postearthquake Inspection
British Library Online Contents | 2013
Program Accelerates Postearthquake Evaluations
ASCE | 2016
Machine Vision-Enhanced Postearthquake Inspection
ASCE | 2013
Technology Transfer Following Postearthquake Investigations
British Library Conference Proceedings | 1994
Machine Vision-Enhanced Postearthquake Inspection
British Library Conference Proceedings | 2013