Zero-Shot Scene Classification for High Spatial Resolution Remote Sensing Images
Due to the rapid technological development of various sensors, a huge volume of high spatial resolution (HSR) image data can now be acquired. Efficiently recognizing the scenes in such HSR image data has therefore become a critical task. Conventional approaches to remote sensing scene classification utilize only the visual information in HSR images; they therefore require a large amount of labeled data and cannot recognize images from an unseen scene class for which no labeled sample exists. To overcome this drawback, we propose a novel approach for recognizing images from unseen scene classes, i.e., zero-shot scene classification (ZSSC). In this approach, we first use the well-known natural language processing model word2vec to map the names of seen and unseen scene classes to semantic vectors. A semantic-directed graph is then constructed over these semantic vectors to describe the relationships between unseen and seen classes. To transfer knowledge from the images in seen classes to those in unseen classes, we make an initial label prediction on the test images with an unsupervised domain adaptation model. With the semantic-directed graph and this initial prediction, a label-propagation algorithm is then developed for ZSSC. By leveraging the visual similarity among images from the same scene class, a label refinement approach based on sparse learning is used to suppress the noise in the zero-shot classification results. Experimental results show that the proposed approach significantly outperforms the state-of-the-art approaches in ZSSC.
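The abstract outlines a multi-stage pipeline: word2vec embeddings of class names, a semantic-directed graph over those embeddings, an initial prediction from an unsupervised domain adaptation model, label propagation, and a sparse-learning refinement. As a rough illustration of how the graph-plus-propagation core of such a pipeline can fit together, the Python sketch below uses random stand-in vectors in place of word2vec, a simple exponential-cosine affinity for the class graph, and random initial scores in place of the domain-adaptation step; the class names and all numerical choices are assumptions for demonstration only, and the authors' actual method and its sparse-learning refinement are not reproduced.

```python
# Minimal, illustrative sketch of a zero-shot scene classification (ZSSC)
# pipeline in the spirit of the abstract. Everything below (class names,
# random stand-in embeddings, the exp-cosine affinity, the random initial
# scores) is a placeholder assumption, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Step 1: semantic vectors for scene-class names (stand-in for word2vec).
classes = ["farmland", "forest", "harbor", "airport", "residential"]
seen = ["farmland", "forest", "harbor"]      # classes with labeled images
unseen = ["airport", "residential"]          # zero-shot target classes
sem = {c: rng.normal(size=50) for c in classes}
sem = {c: v / np.linalg.norm(v) for c, v in sem.items()}

# Step 2: graph over class embeddings; edge weights grow with cosine similarity.
W = np.array([[np.exp(sem[a] @ sem[b]) for b in classes] for a in classes])
np.fill_diagonal(W, 0.0)
P = W / W.sum(axis=1, keepdims=True)         # row-normalized transition matrix

# Step 3: initial soft scores over seen classes for each test image
# (placeholder for the unsupervised domain-adaptation prediction).
n_test = 4
init = rng.random((n_test, len(classes)))
init[:, [classes.index(c) for c in unseen]] = 0.0
init /= init.sum(axis=1, keepdims=True)

# Step 4: label propagation spreads scores from seen to unseen classes.
alpha, scores = 0.8, init.copy()
for _ in range(50):
    scores = alpha * (scores @ P) + (1 - alpha) * init

# Zero-shot decision: pick the highest-scoring unseen class per test image.
unseen_idx = [classes.index(c) for c in unseen]
pred = [unseen[i] for i in scores[:, unseen_idx].argmax(axis=1)]
print("predicted scene classes:", pred)
```

The design choice mirrored here is the one stated in the abstract: knowledge reaches the unseen classes only through the class-level semantic graph and the propagation step, so no labeled images of the unseen classes are ever required.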
Li, Aoxue (author) / Lu, Zhiwu / Wang, Liwei / Xiang, Tao / Wen, Ji-Rong
2017
Article (Journal)
English
Local classification TIB: 770/3710/5670
BKL: 38.03 Methods and Techniques of the Geosciences / 74.41 Aerial Photography, Photogrammetry
Similar items:
Classification of Remote Sensing Images Having High Spectral Resolution
Online Contents | 1996
SVM-Based Fuzzy Decision Trees for Classification of High Spatial Resolution Remote Sensing Images
Online Contents | 2012