From the course: Ethics and Law in Data Analytics

XAI complex algorithms

- One of the last issues I mentioned was that we need to get clear on what we want when we demand explainability. But before exploring that problem, we first need to address a prior one: any demand for explainability must recognize that there are different types of AI with different levels of complexity. Let's start with a simple kind of artificial intelligence. In fact, it's so simple that some wouldn't even classify it as a kind of artificial intelligence. To be sure, what I have in mind is not an example of machine learning, but go with me here. Let's think about a system entirely governed by an if/then program, such as a digital spreadsheet. This is a basic, straightforward, rule-based program, but to our accountant ancestors in the 20th century, this may well have been magic. After all, you could change just one number in your spreadsheet, and the spreadsheet was intelligent enough to update all of the other numbers automatically. So how does Microsoft Excel work? How does it get its powers? Well, that's my point: unless you're a computer scientist, it would probably not even occur to you to demand an explanation, because we can safely assume that Excel is just executing functions exactly as we told it to. It's just following rules. But there are more recognizable kinds of AI, such as IBM's Deep Blue, which captured the popular imagination in the '90s. Deep Blue was programmed with many best-practice chess principles, such as "secure the center of the board before launching an attack on the flank." Deep Blue, the program, then optimized those principles by reviewing thousands of games and eventually, famously, defeated the human chess champion Garry Kasparov. But that was 20 years ago, which, in tech time, is an eternity. Today we have systems such as AlphaGo, which learned to play the board game Go by using neural network algorithms. This algorithm has recognizable outputs, namely moves in the game of Go. But AlphaGo wasn't given principles that it then had to optimize, as Deep Blue was. And here is the important part: it also did not generate its own principles from which to act. If that were the case, we could simply ask AlphaGo what principles it had discovered so that we humans might learn them too. Now, it did generate something like a model, but there are two problems. First, this model is hidden under layers of complexity. And second, if you're imagining us digging through the top layers to get to the hidden model, what we would find still wouldn't be recognizable to us. And of course, any artificial intelligence system is what it is because of some kind of algorithm or algorithms. So a more accurate way to think about this increasing complexity of AI is as varying levels of complexity of algorithms. It seems safe to say that some algorithms are highly explainable, or at least could be designed to be that way. But this does not seem to be true of all algorithms. This is a major complication of the XAI objective.
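To make the contrast concrete, here is a minimal sketch in Python, not from the course. The loan scenario, the rule threshold, the function names, and the network weights are all hypothetical. The point is only that a rule-based program can be explained by pointing to the rule a human wrote, while a learned model's behavior lives in numeric weights that correspond to no human-readable principle.

```python
import math

# 1. Rule-based program: every output traces back to an explicit
#    if/then rule that a person wrote, so explanation is trivial.
def loan_decision(income, debt):
    if debt > 0.5 * income:        # rule: reject high debt-to-income
        return "reject"
    return "approve"

# 2. Learned model: a toy two-layer network. The weights below are
#    invented for illustration; a real network would learn thousands
#    or millions of them, and no single weight states a principle.
weights_hidden = [[0.73, -1.12], [0.05, 0.88]]   # hypothetical learned values
weights_output = [1.4, -0.6]

def network_decision(income, debt):
    hidden = [math.tanh(w[0] * income + w[1] * debt) for w in weights_hidden]
    score = sum(w * h for w, h in zip(weights_output, hidden))
    return "approve" if score > 0 else "reject"

print(loan_decision(50_000, 30_000))     # explainable: which rule fired
print(network_decision(50_000, 30_000))  # an output, but no stated principle
```

Asking "why" of the first function returns a rule; asking "why" of the second returns only a pile of numbers, which is the explainability gap the transcript describes.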
