Semantic Interpretation for Convolutional Neural Networks: What Makes a Cat a Cat?
The interpretability of deep neural networks has attracted increasing attention in recent years, and several methods have been created to interpret the "black box" model. Fundamental limitations remain, however, that impede understanding of the networks, especially the extraction of understandable semantic spaces. In this work, the framework of semantic explainable artificial intelligence (S-XAI) is introduced. S-XAI uses a sample compression method based on a distinctive row-centered principal component analysis (PCA), as opposed to conventional column-centered PCA, to obtain the common traits of samples from a convolutional neural network (CNN), and it extracts understandable semantic spaces on the basis of discovered semantically sensitive neurons and visualization techniques. A statistical interpretation of the semantic space is also provided, and the concept of semantic probability is proposed. The experimental results demonstrate that S-XAI is effective in providing a semantic interpretation for the CNN and offers broad usage, including trustworthiness assessment and semantic sample searching.
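The record contains no code, but the sample compression step described in the abstract lends itself to a short illustration. The following is a minimal sketch, assuming feature vectors extracted from a CNN are stacked as rows of a matrix; it contrasts conventional column-centered PCA with the row-centered variant named above. The function name, data, and dimensions are hypothetical and not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: the paper's exact S-XAI pipeline is not reproduced
# here. This contrasts conventional column-centered PCA with the row-centered
# PCA named in the abstract, assuming each CNN feature vector is one row of X.
# All names, data, and dimensions below are hypothetical.

def pca_directions(X, center_axis):
    """Return principal directions of X after centering along `center_axis`.

    center_axis=0: conventional column-centered PCA (subtract per-feature mean).
    center_axis=1: row-centered PCA (subtract each sample's own mean), the
                   variant the abstract associates with common-trait extraction.
    """
    Xc = X - X.mean(axis=center_axis, keepdims=True)
    # SVD of the centered matrix; rows of Vt are the principal directions.
    _, singular_values, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt, singular_values

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 512))  # e.g., 20 flattened CNN feature maps (made up)
V_col, _ = pca_directions(X, center_axis=0)  # conventional PCA
V_row, _ = pca_directions(X, center_axis=1)  # row-centered variant
common_trait = V_row[0]  # leading direction as a stand-in for a "common trait"
```

The intuition behind the row-centered variant: centering each row removes per-sample offsets, so the leading singular vectors emphasize structure shared across samples rather than variation within individual features.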
Xu, Hao (author) / Chen, Yuntian (author) / Zhang, Dongxiao (author)
Advanced Science ; 9
December 1, 2022
14 pages
Journal article
Electronic resource
English
Wiley | 2022
Deep convolutional neural networks for semantic segmentation of cracks
Wiley | 2022
Dense Semantic Labeling of Subdecimeter Resolution Images With Convolutional Neural Networks
Online Contents | 2017
VAV Systems - What Makes Them Succeed? What Makes Them Fail?
British Library Conference Proceedings | 1997
VAV Systems - What Makes Them Succeed? What Makes Them Fail?
British Library Online Contents | 1997