Abstract
The idea of explaining the decisions of artificial intelligence (AI) models dates back to the 1970s, when explanations were used to test expert systems and engender user trust in them. However, spectacular advances in computational power and improvements in optimization algorithms shifted the focus towards accuracy, while the ability to explain decisions took a back seat. In the future, decision-making processes will be partially or completely dependent on machine learning (ML) algorithms, which requires humans to trust these algorithms in order to accept their decisions.
Several explainability methods and strategies have been proposed in the quest to explain the output of black-box ML models. This research compares explainable machine learning methods with an expert system based on a belief rule base (BRB). Unlike a traditional expert system, a BRB has the ability to learn from data and can incorporate domain-expert knowledge. It can explain a single decision as well as the chain of events leading to that decision. Black-box ML models rely on local interpretability methods to explain a specific decision and on global interpretability methods to understand the behaviour of the entire model. In this research, the explainability of mortgage loan decisions was compared. It was found that the model-agnostic Shapley method provided more consistent explanations than LIME (local interpretable model-agnostic explanations) for high-performance models such as deep neural networks, random forests and XGBoost. The global interpretation method, feature importance, has the issue of dividing importance between two correlated features. Compared to the BRB, these methods cannot reveal the true decision-making process and the chain of events leading to a decision.
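To make the local-explanation comparison concrete, the following minimal Python sketch (not the paper's actual pipeline; the loan feature names, synthetic data and model settings are hypothetical) shows how SHAP and LIME each attribute a single loan decision by an XGBoost model to its input features.

```python
# Hypothetical sketch: local explanations of one loan decision via SHAP and LIME.
import numpy as np
import shap
import xgboost
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["income", "loan_amount", "credit_score", "debt_ratio"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # toy approval rule

model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

# SHAP: Shapley-value attributions for the first applicant
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(dict(zip(feature_names, shap_values[0])))

# LIME: a local surrogate model fitted around the same applicant
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["deny", "approve"]
)
exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())
```

Because LIME fits a fresh local surrogate on randomly perturbed samples, re-running the last two lines can yield different attributions for the same applicant, whereas the SHAP values stay fixed; this is the kind of consistency gap the abstract refers to.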
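The correlated-feature issue with global feature importance can be illustrated by a small hypothetical experiment: when a predictive feature is duplicated (here with a little noise added), impurity-based importance in a random forest is split between the two copies, understating each.

```python
# Hypothetical sketch: importance splitting between correlated features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
income = rng.normal(size=2000)
income_copy = income + rng.normal(scale=0.05, size=2000)  # near-duplicate feature
noise = rng.normal(size=2000)                              # irrelevant feature
X = np.column_stack([income, income_copy, noise])
y = (income > 0).astype(int)  # only the income signal drives the label

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["income", "income_copy", "noise"], model.feature_importances_):
    print(f"{name}: {imp:.3f}")
# Expected pattern: income and income_copy each receive roughly half of the
# importance a single uncorrelated predictor would get, while noise gets little.
```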
Original language | English
---|---
Publication status | Published - 4 Jun 2019
Event | 10th ANNUAL EUROPEAN DECISION SCIENCES CONFERENCE, Nottingham, United Kingdom. Duration: 2 Jun 2019 → 5 Jun 2019. http://www.edsi-conference.org/
Conference

Conference | 10th ANNUAL EUROPEAN DECISION SCIENCES CONFERENCE
---|---
Country/Territory | United Kingdom
City | Nottingham
Period | 2/06/19 → 5/06/19
Internet address | http://www.edsi-conference.org/