Vision-based size distribution analysis of rock fragments using multi-modal deep learning and interactive annotation
Abstract Real-time size distribution analysis of rock fragments is of great significance in practical engineering. Traditional methods have struggled to strike a balance between analysis speed and precision, prompting the recent adoption of deep learning. However, prevalent approaches for estimating rock fragment sizes from RGB images (single-modality) suffer from two defects: (a) time-consuming and labour-intensive dataset annotation, and (b) poor transferability between cases. To solve these problems, a comprehensive multi-modal framework for size distribution prediction of rock fragments (SDPRF) is proposed in this paper. The framework comprises three essential components: a multi-modal image dataset generation method, a multi-modal rock surface net (Mrsnet) for fragment edge detection, and a two-step breakpoint connection algorithm. The test results indicate that (1) the SDPRF dataset generation method greatly reduces the time required for dataset annotation, and (2) Mrsnet generalizes better to cases outside the training set than traditional single-modal learning.
Highlights A multi-modal dataset generation method for size distribution prediction of rock fragments is proposed. The watershed algorithm with interactive markers is used for annotation. An edge detection method via multi-modal feature learning (Mrsnet) is proposed. A two-step breakpoint connection algorithm is developed to post-process the deep learning predictions. The gradation curves of the rock fragment samples are plotted and analyzed.
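The highlights mention watershed segmentation with interactive markers as the annotation tool: instead of tracing every fragment edge by hand, an annotator places one seed per fragment and the watershed transform grows labelled regions from those seeds, recovering the boundaries automatically. The sketch below illustrates that idea on a toy image using scikit-image's `watershed`; it is a minimal illustration of the general technique, not the authors' implementation, and the image, seed positions, and labels are invented for the example.

```python
# Minimal sketch of interactive-marker watershed annotation (illustrative only,
# not the SDPRF authors' code).
import numpy as np
from skimage.segmentation import watershed

# Toy "rock surface": two bright fragments separated by a dark gap.
image = np.zeros((20, 20))
image[2:9, 2:18] = 1.0    # fragment A
image[11:18, 2:18] = 1.0  # fragment B

# Interactive markers: in a real tool these come from user clicks.
# One seed inside each fragment, plus one seed for the background.
markers = np.zeros_like(image, dtype=int)
markers[5, 10] = 1    # seed inside fragment A
markers[14, 10] = 2   # seed inside fragment B
markers[0, 0] = 3     # background seed

# Watershed floods outward from the seeds over the inverted intensity
# surface, yielding a full label map (and hence fragment edges) from
# just three clicks instead of a hand-traced contour per fragment.
labels = watershed(-image, markers)
```

The per-fragment regions in `labels` can then be converted to edge maps for training an edge-detection network, which is the role this annotation step plays in the framework described above.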
Tang, Yudi (author) / Wang, Yulin (author) / Si, Guangyao (author)
2024-01-03
Article (Journal)
Electronic Resource
English
Similar items:
Combining Multi-Modal Statistics for Welfare Prediction Using Deep Learning
DOAJ | 2019
Convenient Structural Modal Analysis Using Noncontact Vision-Based Displacement Sensor
British Library Conference Proceedings | 2016