
Is AI explainability important?

AI explainability, the art of explaining why a certain AI model makes the predictions it makes, is all the buzz these days. Unlike traditional algorithms, we don't really understand what goes on inside AI systems: we can peer into these models as they operate, or log their actions, but in many cases we cannot explain exactly why they make the decisions they do.


As modern AI matures and we apply it to mission-critical systems, the need to understand these systems grows. Nobody wants to hand over critical decision-making to a poorly understood black box, and that reluctance is understandable.

If you are building a nuclear reactor, you want software that is provably correct at least 99% of the time. You don't want a system that "maybe" acts correctly; given the critical nature of the task, you want a system that is always correct.

When it comes to AI models, it is almost impossible to prove that these systems are correct because their actions are not very predictable. Even when we tune models to bring down training error and test error, we cannot know how they will behave when faced with new cases they have never encountered before, cases they are far more likely to meet once deployed in the wild.
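A minimal sketch, assuming scikit-learn and a toy logistic-regression classifier (my illustration, not something from rigorous practice), makes the point concrete: a model can score well on training and test data drawn from the same distribution and still give a confident answer, with no explanation attached, on an input unlike anything it has seen.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# In-distribution data: two well-separated clusters of points.
X = rng.normal(loc=[[0, 0]] * 500 + [[4, 4]] * 500, scale=1.0)
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Low training and test error on data like what the model was trained on.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))

# An out-of-distribution input: nothing like anything seen in training.
# The model still returns a confident prediction, but the scores above
# tell us nothing about whether, or why, we should trust it here.
weird_input = np.array([[40.0, -35.0]])
print("prediction:", model.predict(weird_input))
print("confidence:", model.predict_proba(weird_input).max())
```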

There are two ways we can think about AI explainability.

1. Why do we need to burden AI systems with such a requirement?
2. If AI systems are meant to augment human decision-making, why do we place more responsibility on AI systems than we place on human decision-makers?

Starting with No. 1: why do we need to burden AI systems with explainability? Having the right perspective on the true capabilities of AI systems relieves them of that requirement.

AI, as it is practised now, is more like engineering, but not the kind of engineering required to build bridges, critical structures whose design has matured into a settled science. The kind of engineering AI represents is the kind we expect from our flat-screen TVs. Nobody dies if your flat-screen TV blacks out; you return it to the store where you bought it and get a replacement or a refund, at least in developed countries.

Even though our flat-screen TVs are sophisticated pieces of engineering, they function without the burden of criticality. Bridges, on the other hand, must work all the time; the burden of criticality falls on bridges and nuclear reactors, and rightfully so.

AI should be viewed more like a flat-screen TV: sophisticated but not critical. The problem of explainability arises when we treat AI systems as critical because we want to apply them to domains they are not well suited for. If we apply AI to the appropriate domains, no one needs to ask why a model makes the decisions it makes.

The second reason we shouldn't burden AI systems with explainability is that we do not ask people to explain their decision-making processes. Humans, like AI systems, observe data and make decisions, but they do not know how or why they made those decisions, even though some people pretend they do.

If you ask someone about the process behind a decision, they can come up with a detailed explanation, but it is really a murky, made-up account, no matter how rigorous they try to make their decision-making process look.

In reality, human decision-making involves so many variables, both those the individual is conscious of and those they are not, that there is no way the individual can reveal all the variables and mental processes that led to a particular decision.

Individuals can come up with a story that summarizes their decision-making process and actually believe it, but what they do not realize is that even the creation of that story, and its believability to them, is determined by a brain over which they have absolutely no control, a brain for which they are merely representatives, dictated both the stories to tell and what to believe.

The same goes for AI systems: they deal with so many variables that, for some time to come, it will be impossible to really explain how they arrive at their decisions, so they will remain black boxes for a while.

If we really place AI where it belongs, as something meant to enhance human cognition and aid the discovery of knowledge, then we will not feel so much need to explain how it comes up with its decisions; we will be content to know that it was tested on sufficient test data and that its error rate is very low.

We will trust the AI system to do what it does best: assist a human being who is skilled in the domain where the AI is applied, rather than try to take the human out of the equation and put all the work on the AI system.

So, in the final analysis, we should just focus on making AI systems better: getting more data, not gaming test results, and not overhyping the capabilities of these systems. Explainability is a philosophical question, and not one worth spending so much effort on where current deep learning systems are concerned.

