From the course: Ethics and Law in Data Analytics

XAI the issues

- There are several issues faced by anyone wanting to make artificial intelligence explainable. Let's review a few of the basic ones. First, many data scientists are worried that there's a basic trade-off between explainability and performance. On this view, at the extremes, we can make systems that are highly explainable, or we can make systems that perform well, but we cannot do both. You can imagine a continuum on which the more explainable an AI system is, the less effective it is. The concern from some data scientists is that an algorithm would have to sacrifice a great deal of performance and accuracy in order to deliver the kind of explanation expected by legal frameworks such as the GDPR (see the short sketch after this transcript for a concrete illustration of this trade-off). Second, when we demand explainability, to whom do we expect the system to be explainable? To the leading experts in AI? To entry-level data scientists right out of college? To the executives and other professionals who will use the algorithmic outputs to make decisions? To the regular people who are affected by the decision? Now, this particular problem doesn't seem insurmountable, but it is an important one that XAI proponents must have a serious conversation about, because providing explanations at those different levels will present different challenges. Third, and I know you're going to think this sounds like a future dystopia dreamed up by Hollywood, but it is something we should think about now. The question is how similar, exactly, artificial intelligence will eventually be to human intelligence. Remember, DARPA wants to design systems that "can explain the rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future." In other words, what they are saying is that we should want machine decision-makers to be able to reflect on their own decisions in something like the way human decision-makers can. But many brain researchers, social scientists, and even philosophers believe that humans generate explanations for their decisions after the fact. That is, we make a decision for reasons we ourselves don't understand, perhaps because the real explanation is something happening at the level of neurons. And then, when asked to give an explanation for our decision, we come up with something that sounds plausible but is not the true explanation. Now, this is not necessarily lying, because we might believe it ourselves. And, of course, it could be worse, as when humans lie about their own decision-making process intentionally, perhaps to cover up some bias. So will machine decision-makers learn to mimic our bad habits and give us false explanations to satisfy our inquiries? Fourth, explainability can mean several things. Remember, so far we have only been able to agree that we want non-inscrutable algorithms. But in the AI literature, explanation has been thought of as transparency, or justification, or demonstration, or interpretation. These words might show up together in a thesaurus, but they actually mean very different things. So we have to get clear on what we want when we want an explanation. Lastly, it's important to recognize that not all artificial intelligence uses machine learning algorithms that work the same way, and this will inevitably lead to differences in both the quality and degree of available explanations. These last two issues, the meaning of explainability and the differences between algorithms, are especially difficult. So let's explore them a little more.
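To make the explainability/performance trade-off concrete, here is a minimal sketch, not taken from the course, that compares a shallow decision tree, whose full decision logic can be printed as if/then rules, with a random forest, whose hundreds of voting trees offer no comparably readable summary. The choice of scikit-learn and a synthetic dataset is an assumption made purely for illustration.

```python
# Illustrative sketch only: synthetic data and model choices are assumptions,
# not part of the course material.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic tabular data standing in for a real decision problem.
X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Highly explainable: a depth-3 tree can be read as a handful of if/then rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Decision tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree))  # the model's entire decision logic, in plain text

# Much less explainable: 300 trees voting together; there is no single
# readable rule set to hand to a regulator or an affected person.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("Random forest accuracy:", forest.score(X_test, y_test))
```

On data like this, the forest will often score somewhat higher while remaining far harder to explain, which is exactly the tension the transcript describes; whether that gap justifies giving up a printable explanation is the ethical and legal question, not a purely technical one.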
