From the course: Ethics and Law in Data Analytics

XAI or GAI

- Now that we have an idea of the challenges of increasing algorithmic complexity, let's review those options I mentioned for understanding the concept of explainability, and contrast them with another movement that more or less gives up on the notion of explainability. First, what if we interpret the ideal of explainability as a kind of transparency, as several data scientists have suggested? There are a few concerns here, but let me just speak to what I regard as the main one. As noted in module one, big data algorithms work by analyzing correlation, not by revealing causation. This creates a mismatch between the cognition of humans and that of machines. We humans are more comfortable saying we know something when we can explain its cause. So even if we had a full view of all the correlations discovered by some algorithm, it would not be the kind of explanation that allows us to infer the causal links behind those correlations.

What about justification? David Weinberger, in an April 2017 article that is getting some traction, linked in the Further Reading section, speaks in these terms. He contrasts the idea from ancient philosophy that we must justify our true beliefs, on the one hand, with the kind of justification that complex machines could give, on the other. I think this is a fair contrast. If justify means something like giving reasons in terms of the principles that could explain our true opinions, which it probably does, I think we're right to worry that justification won't help much. Remember, an AI system like Deep Blue worked from principles it was given, and so we could potentially explain a particular action of Deep Blue in terms of a principle. But as noted, more complex AI systems aren't given principles, nor do they find principles of their own. So how could we ask for a justification in the traditional sense?

What about interpretation? For my money, interpretability is the most promising and yet the most poorly defined notion of explainability. At the 2016 International Conference on Machine Learning, the participants explicitly worked to find some common ground on what kinds of concepts are nested in interpretability, with little agreement. It seems like this is going to be a wait-and-see situation. As AI becomes more sophisticated, we are going to get more and more information on whether humans can arrive at a human interpretation of what's going on. We'll definitely keep our eye on that.

And to make matters even more complicated, there is what I think of as another basket of answers, one that is more or less prepared to give up on the goal of explainability. You might call this governable artificial intelligence and, if you like a term parallel to XAI, explainable AI, we could abbreviate it as GAI. The basic idea here is that we may never be able to understand how algorithms come up with their conclusions, but that's okay, because not only is XAI unattainable for the more complex algorithms, it is not even desirable. Instead, what we should want, and what is actually attainable with technology, is a kind of governance or regulation of the algorithms. Peter Norvig, Director of Research at Google, articulated a version of this in June 2017. His claim is that analyzing the output of algorithms, the actual decisions, is more realistic than trying to peer inside the system. If we had to summarize this idea in a word, it would most likely be what computer scientists have been calling auditing. There are also older models from computer science that rely not on auditing but on something more like verification.
And there is another model getting traction in the AI literature around accountability. What these concepts have in common is an attempt to manage AI, rather than explain how it reaches its conclusions.
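To make the auditing idea a bit more concrete, here is a minimal sketch in Python of what checking an algorithm's outputs, rather than its inner workings, might look like. Everything in it is illustrative: the predict function, the toy audit set, and the 80 percent rule of thumb for flagging a disparity are assumptions made for the example, not anything proposed in the course or by Norvig.

# A minimal, illustrative sketch of output auditing, not a method from the
# course: it treats the model as a black box and looks only at its decisions.
# The 0.8 threshold is an assumed rule of thumb for this example.

from collections import defaultdict

def audit_selection_rates(predict, audit_set, group_key):
    """Positive-decision rate per group, computed from model outputs alone."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in audit_set:
        group = record[group_key]
        totals[group] += 1
        positives[group] += predict(record)   # predict returns 0 or 1
    return {group: positives[group] / totals[group] for group in totals}

def flags_disparity(rates, ratio=0.8):
    """Flag the model if any group's rate falls below `ratio` of the highest rate."""
    highest = max(rates.values())
    return any(rate < ratio * highest for rate in rates.values())

# Example: a made-up scoring model and a four-record audit set.
toy_model = lambda record: 1 if record["score"] > 600 else 0
audit_set = [{"score": 650, "group": "A"}, {"score": 590, "group": "A"},
             {"score": 700, "group": "B"}, {"score": 610, "group": "B"}]
rates = audit_selection_rates(toy_model, audit_set, "group")
print(rates, flags_disparity(rates))   # {'A': 0.5, 'B': 1.0} True

The point of the sketch is only that every check treats the model as a black box: the audit never looks at how the model reaches its conclusions, only at the decisions it produces.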
