Comparing Strategies on Explainability of Machine Learning Models with Belief-Rule-Based Expert Systems: A Case Study on Lending Decisions

Swati Sachan, Jian-Bo Yang, Dong-Ling Xu

Research output: Contribution to conference › Paper › peer-review

Abstract

The idea of explaining the decisions of artificial intelligence (AI) models started in the 1970s as a way to test expert systems and engender user trust in them. However, spectacular advances in computational power and improvements in optimization algorithms shifted the focus towards accuracy, while the ability to explain a decision took a back seat. In the future, decision-making processes will be partially or completely dependent on machine learning (ML) algorithms, which requires humans to trust these algorithms in order to accept their decisions.

Several explainable methods and strategies have been proposed in the quest to explain the output of black-box ML models. This research compares explainable machine learning methods with an expert system based on a belief-rule-base (BRB). Unlike a traditional expert system, a BRB has the capacity to learn from data and can incorporate the knowledge of domain experts. It can explain a single decision as well as the chain of events leading to that decision. Black-box ML models use local interpretability methods to explain a specific decision and global interpretability methods to understand the behaviour of the entire model. In this research, the explainability of mortgage loan decisions was compared. It was found that the model-agnostic Shapley method provided more consistent explanations than LIME (local interpretable model-agnostic explanations) for high-performance models such as deep neural networks, random forests and XGBoost. The global interpretation method, feature importance, has the problem of dividing importance between two correlated features. Compared to BRB, these methods cannot reveal the true decision-making process or the chain of events leading to a decision.
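The Shapley method mentioned above attributes a model's prediction to individual features by averaging each feature's marginal contribution over all coalitions of the other features. The sketch below is a minimal from-scratch illustration of that formula, not the authors' implementation and not the `shap` library; the toy linear "loan score" and its baseline values are assumptions made for the example (exact enumeration is only feasible for a handful of features).

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, baseline, instance):
    """Exact Shapley values for a model with few features.

    predict  : function mapping a feature vector (list) to a score
    baseline : reference values used when a feature is 'absent'
    instance : the feature vector whose prediction is being explained
    """
    n = len(instance)
    phi = [0.0] * n
    features = range(n)
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley weight of a coalition of this size
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [instance[j] if (j in subset or j == i) else baseline[j]
                          for j in features]
                without_i = [instance[j] if j in subset else baseline[j]
                             for j in features]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Hypothetical linear 'loan score'; for a linear model the Shapley value of
# feature i reduces to coef_i * (x_i - baseline_i).
score = lambda x: 2.0 * x[0] + 1.0 * x[1] - 0.5 * x[2]
phi = shapley_values(score, baseline=[0.0, 0.0, 0.0], instance=[1.0, 2.0, 4.0])
```

By construction the attributions sum to `score(instance) - score(baseline)`, which is the consistency property that makes Shapley-based explanations attractive compared to LIME's locally fitted surrogates.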
Original language: English
Publication status: Published - 4 Jun 2019
Event: 10th Annual European Decision Sciences Conference - Nottingham, United Kingdom
Duration: 2 Jun 2019 – 5 Jun 2019
Internet address: http://www.edsi-conference.org/

Conference

Conference: 10th Annual European Decision Sciences Conference
Country/Territory: United Kingdom
City: Nottingham
Period: 2/06/19 – 5/06/19
Internet address: http://www.edsi-conference.org/
