Explainable AI Systems: Understanding the Decisions of the Machines
11 October 2017


Estimated reading time: 5 minutes

Introduction

DARPA (the Defense Advanced Research Projects Agency) is the division of the US Department of Defense that investigates new technologies. It has long regarded the current generation of AI technologies as important to the future, and it has been at the forefront of AI research in image recognition, speech recognition and generation, robotics, autonomous vehicles, medical diagnostic systems, and more. However, DARPA is well aware that, despite their high level of problem-solving capability, AI programs lack explainability. AI deep-learning algorithms rely on complex mathematics that is very difficult for human users to understand. In short, AI systems built with current technologies are woefully inadequate at explaining their decision-making reasoning to human beings and making it transparent. This could be an impediment in systems such as military applications or autonomous vehicles, particularly when unexpected decisions need to be understood.

Despite the high level of problem-solving capabilities of AI programs, they lack explainability. / Image: Pixabay

It is against this background that DARPA announced, in August 2016, that it would fund a series of new projects under the title Explainable Artificial Intelligence (XAI). The stated purpose of these projects is to create tools that enable a human on the receiving end of information or a decision from an AI program to understand the reasoning behind that decision. As DARPA puts it: “Pursuit of explainable AI reflects the need for the US military to be able to have full trust in robotic battlefield systems of the future”. The successful DARPA projects began in May 2017 and have four years to completion. The projects chosen range from adding further machine-learning systems tailored towards explanation, to the development of new machine-learning approaches that incorporate explanation by design.

However, explanation facilities in AI systems are not a new concept. An earlier generation of AI, known as symbolic AI, used them with moderate success. In this article I briefly outline why symbolic architectures were better suited to explanation than the current generation, and how these technologies have been combined in the past to facilitate explanation. I also describe some of the factors that can improve the quality of explanations and outline a possible approach to Explainable AI.

Explanation Facilities using Symbolic AI

Symbolic AI systems store knowledge about their subject domains explicitly, using symbols, and manipulate those symbols with logical operators. Many expert systems used symbolic AI to manipulate knowledge stored in the form of rules, and explanations could then be derived directly from those rules as a useful side-effect. An explanation could be activated during or after a consultation with the system. Fig. 1 provides a conceptual framework showing how explanation facilities work within symbolic expert system architectures. An end-user communicates with the system via a user interface and an explanation facility, which in turn interact with the expert system's inference engine. The inference engine uses the domain knowledge stored in the knowledge base (often stored as rules) and controls the consultation by determining what questions to ask in order to achieve its goals and derive conclusions or specify actions to be taken. The explanation component combines with the user interface and knowledge base to provide explanations. These explanations can take the form of the user wanting to know how the system arrived at its advice, or why the system needs the answer to a particular question. For example, consider the following rule taken from a healthcare expert system:

 

RULE 1

If patient alcohol consumption is high

And patient salt intake is high

Then blood pressure is likely to be high.

 

This very simple example can be used to illustrate how explanation facilities work. If the above expert system arrived at the conclusion that blood pressure is likely to be high, then the user could ask “how” the expert system arrived at that conclusion. The expert system might respond (all system responses are shown in italics) with something like the following:

 

I found patient alcohol consumption is high from user input

And I also found patient salt intake is high from user input

THEREFORE blood pressure is likely to be high from the activation of RULE 1.

 

A user could also find out why a particular question is being asked. For example, if the user is asked the question: How many units of alcohol does the patient consume per week?

The user could use the explanation facilities to find out why the system is asking this question, so the user might respond: Why?

The expert system might respond: I am trying to prove RULE 1, to find out if blood pressure is likely to be high. To do this I need to find out if the alcohol consumption is high.
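
To make this concrete, the sketch below shows, in Python, a minimal rule-based consultation with “how” and “why” explanation facilities in the spirit of the dialogue above. It is illustrative only: the rule, the question wording, and the data structures are my own assumptions rather than those of any particular expert system shell.

# A minimal sketch of a rule-based consultation with "how" and "why"
# explanation facilities. Rule, questions and wording are illustrative only.

RULES = {
    "RULE 1": {
        "if": ["patient alcohol consumption is high", "patient salt intake is high"],
        "then": "blood pressure is likely to be high",
    }
}

# Each condition has a question and a function that interprets the user's answer.
QUESTIONS = {
    "patient alcohol consumption is high":
        ("How many units of alcohol does the patient consume per week? ",
         lambda a: float(a) > 14),          # assumed threshold for "high"
    "patient salt intake is high":
        ("Is the patient's salt intake high? (yes/no) ",
         lambda a: a.startswith("y")),
}

facts = {}   # condition -> (truth value, source), filled in during the consultation
trace = []   # rules that fired, used to answer "how" questions afterwards

def ask(condition, rule_name, goal):
    """Ask about a condition; answering 'why' explains why the question is asked."""
    prompt, interpret = QUESTIONS[condition]
    while True:
        answer = input(prompt + "(or 'why') ").strip().lower()
        if answer == "why":
            print(f"I am trying to prove {rule_name}, to find out if {goal}. "
                  f"To do this I need to find out if {condition}.")
        else:
            facts[condition] = (interpret(answer), "user input")
            return facts[condition][0]

def run_consultation():
    """Fire each rule whose conditions the user confirms, recording a trace."""
    for name, rule in RULES.items():
        if all(ask(cond, name, rule["then"]) for cond in rule["if"]):
            trace.append((name, rule))
            print(f"Conclusion: {rule['then']}")

def explain_how():
    """Answer 'how did you reach that conclusion?' by replaying the trace."""
    for name, rule in trace:
        for cond in rule["if"]:
            print(f"I found {cond} from {facts[cond][1]}")
        print(f"THEREFORE {rule['then']} from the activation of {name}.")

run_consultation()
explain_how()

Because the knowledge is held explicitly as rules, the “how” explanation is simply a replay of the rules and facts that fired during the consultation, which is why symbolic architectures lend themselves so naturally to this kind of transparency.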

Fig. 1. Relationship between an Expert System and Explanation Component / Image: author

Explanation Facilities using Machine Learning AI

In my last submission to OpenMind, I described how the main machine learning paradigm predominantly uses neural networks, a non-symbolic AI paradigm that does not store explicit knowledge in the form of rules. Neural networks (NNs) work by learning from large amounts of training data. Their implicit knowledge is encoded in numeric parameters, called weights, that are distributed throughout the system. This learning paradigm is not well suited to explanation because of the mathematical complexity of the network. Nevertheless, a number of research models have been devised to incorporate explanation. Some use a decompositional approach for extracting rules from networks: the network is decomposed into single units, and rules are then extracted to describe each unit's behaviour. The Explainable AI project is more ambitious in that it seeks to integrate problem-solving performance with explainability by using the same machine-learning technology. One approach would be to train the neural network to associate semantic attributes with hidden-layer nodes, so that explainable features are learned by modifying the machine-learning techniques themselves. For example, for a neural network trained to identify birds from photographs, the semantic attributes would be things like “can fly”, “builds nests”, and so on.
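
As a rough illustration of that last idea, the following sketch (in Python with PyTorch, on entirely synthetic data) trains a small network whose intermediate layer is explicitly tied to semantic attributes such as “can fly” and “builds nests”, so that each prediction can be reported alongside the attributes the network believes hold. The attribute list, classes, architecture and thresholds are assumptions made for illustration; this is one possible reading of the approach rather than the method used by any particular XAI project.

# A sketch of learning explainable features: the network's intermediate layer
# is trained to predict human-readable semantic attributes, and the class
# decision is made from those attributes. Data, attributes and classes are
# synthetic and purely illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

ATTRIBUTES = ["can fly", "builds nests", "has webbed feet"]
CLASSES = ["sparrow", "penguin", "duck"]

# Each class is defined by an assumed ground-truth attribute vector.
CLASS_ATTRS = torch.tensor([[1., 1., 0.],   # sparrow
                            [0., 1., 0.],   # penguin
                            [1., 1., 1.]])  # duck

# Synthetic "image features": noisy copies of the attributes plus distractor dims.
y = torch.randint(0, 3, (300,))
attr_targets = CLASS_ATTRS[y]
X = torch.cat([attr_targets + 0.3 * torch.randn(300, 3),
               torch.randn(300, 13)], dim=1)

class AttributeExplainedNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder maps raw features to attribute logits (the "explainable" nodes).
        self.encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                                     nn.Linear(32, len(ATTRIBUTES)))
        # Classifier makes the final decision from the attributes alone.
        self.classifier = nn.Linear(len(ATTRIBUTES), len(CLASSES))

    def forward(self, x):
        attr_logits = self.encoder(x)
        class_logits = self.classifier(torch.sigmoid(attr_logits))
        return attr_logits, class_logits

model = AttributeExplainedNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(200):
    attr_logits, class_logits = model(X)
    # Joint loss: get the class right AND make the hidden nodes match the attributes.
    loss = (F.cross_entropy(class_logits, y)
            + F.binary_cross_entropy_with_logits(attr_logits, attr_targets))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# "Explanation": report the predicted class with the attributes the network found.
with torch.no_grad():
    attr_logits, class_logits = model(X[:1])
    attr_probs = torch.sigmoid(attr_logits)[0]
    predicted = CLASSES[class_logits.argmax(dim=1).item()]

print(f"Predicted class: {predicted}")
for name, p in zip(ATTRIBUTES, attr_probs):
    print(f"  {name}: {'yes' if p.item() > 0.5 else 'no'} (p={p.item():.2f})")

In a design like this the explanation is simply a readout of the attribute nodes, but it comes at the cost of forcing all information through those named attributes, which echoes the performance-versus-explainability trade-off discussed in the conclusions below.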

Characteristics of a Good Explanation

DARPA’s case for Explainable AI is borne out by the empirical research done to date, which shows that a strong case can be made for the inclusion of explanation facilities (Dhaliwal 1993; Darlington 2013). However, designers of explanation facilities should consider the factors that have been shown to affect explanation usage. The most important of these are:

  • The user type. At a simplified level, this might be classified according to whether the user is a novice or an expert in the subject domain. Novice users may use explanation facilities to understand the domain and improve their problem-solving performance. Experts, on the other hand, may use explanation facilities to understand any dissonance they have with the system's advice, or to get a second opinion on, say, a medical diagnosis.
  • The access mechanism. This refers to the way that the explanation is presented to the user during a system consultation. It could take the form of an explanation embedded in the user consultation screen, or via a pop-up window, or it could be user invoked when the user requires an explanation.
  • The type of knowledge required in the explanation. This could take a number of forms. The user could require an explanation of the problem-solving knowledge invoked during a consultation, or justification knowledge that explains why knowledge is present in an expert system, or even terminological knowledge explaining how the concepts in a subject domain relate to each other.

Other factors to take into account include the length of the explanation and its orientation, that is, whether it is provided after the advice is given or during the consultation.

Conclusions

There is a dichotomy between high performance and poor explainability evident in neural network learning systems. In the past, rule extraction techniques offered a possible way in which opaque technologies could deliver explanation facilities, by mapping outputs to rules and thereby making them more amenable to natural explanation. However, this approach is unlikely to be adopted because adding this extra layer of work to every machine-learning system is time-consuming and tedious. The benefit of the Explainable AI project is that new techniques and methodologies could be developed that circumvent the need for this extra layer, by developing machine-learning methods directly linked to improving explanation facilities.

Keith Darlington

References

[1] Dhaliwal, J. S. (1993). An Experimental Investigation of the Use of Explanations Provided by Knowledge-based Systems. Unpublished doctoral dissertation, University of British Columbia.

[2] Darlington, K. (2013). Aspects of Intelligent Systems Explanation. Universal Journal of Control and Automation, 1(2): 40-51.

 
