Interpretation and explanation of convolutional neural network-based fault diagnosis model at the feature-level for building energy systems
Highlights
- Proposes an explainable CNN model with 98.5% fault diagnosis accuracy for building energy system chillers.
- Interprets the CNN model's diagnosis mechanism and explains its decision-making process from the perspective of fault-discriminative features.
- Proposes five explanation methods and three evaluation metrics for feature-level model explainability, grounded in building energy fault diagnosis knowledge.
- Identifies influencing factors for CNN diagnosis model explanation: algorithm, structure and parameters, modelling data quality and volume, and fault severity levels.
- Two explanation methods, SHAP and LRP, outperform the other three in fault-discriminative feature localization accuracy, sparsity and sensitivity.
Abstract Although deep learning models have developed rapidly, their practical application to building energy system (BES) fault diagnosis still lags behind. Owing to their "black-box" nature, deep learning fault diagnosis models need explanations to earn the trust of building operators and managers. To address this issue, this study first developed an explainable deep learning model, then interpreted its diagnosis criteria, uncovered its inner diagnostic mechanism, and explained its decision-making process via fault-discriminative features using domain knowledge. However, there is a lack of systematic study on such feature-level model explanation methods and their influencing factors, such as the choice of interpretation method, the complexity of the model architecture, and the quality and availability of the training data. To address this gap, and accounting for BES data characteristics, this study proposed and compared five feature-level interpretation methods that can explain the model diagnosis criteria: activation maximization, gradient-weighted class activation mapping, occlusion sensitivity, layer-wise relevance propagation (LRP), and Shapley additive explanations (SHAP). Three performance metrics were developed to evaluate their effectiveness: fault-discriminative feature localization accuracy (LA), sparsity (SP), and sensitivity (FS). The public ASHRAE RP-1043 chiller fault data were employed for validation. Results indicate that LRP and SHAP outperform the other three methods: both accurately identify the fault-discriminative features serving as diagnosis criteria and achieve FS and SP as high as 0.962 and 81.4%, respectively. Feature-level explanation contributes to a better understanding of the diagnosis mechanism of deep learning models. Comprehensive analyses of the influencing factors provide references for the development of explainable deep learning models and feature-level model explanation methods for BES fault diagnosis.
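Of the five interpretation methods named in the abstract, occlusion sensitivity is the simplest to illustrate: each input feature is masked in turn and the resulting drop in the class score indicates how much the model relies on that feature. The sketch below is a minimal illustration of that idea only; the toy linear-sigmoid classifier, its weights, and the zero baseline are illustrative assumptions, not the paper's CNN or data.

```python
import numpy as np

def model_score(x, w):
    """Toy class score: weighted sum passed through a sigmoid.
    Stands in for the CNN's class probability (illustrative only)."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def occlusion_sensitivity(x, w, baseline=0.0):
    """Occlusion sensitivity: replace each feature by a baseline value
    and record the drop in the class score. Larger drop = the model
    relies more on that feature for this prediction."""
    full = model_score(x, w)
    drops = np.empty_like(x)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = baseline          # mask one feature
        drops[i] = full - model_score(occluded, w)
    return drops

# Feature 1 carries the largest weight, so occluding it
# should produce the largest score drop.
w = np.array([0.1, 2.0, 0.3])
x = np.array([1.0, 1.0, 1.0])
drops = occlusion_sensitivity(x, w)
print(drops.argmax())  # → 1
```

For the paper's CNN, the same loop would slide a mask over the input feature map rather than zeroing single scalars, but the principle (score drop under masking) is identical.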
Li, Guannan (author) / Chen, Liang (author) / Fan, Cheng (author) / Li, Tao (author) / Xu, Chengliang (author) / Fang, Xi (author)
Energy and Buildings ; 295
2023-06-26
Article (Journal)
Electronic Resource
English