From the course: Ethics and Law in Data Analytics

Why XAI

- The U.S. Department of Defense has a well-known agency called DARPA, the Defense Advanced Research Projects Agency, which works to understand and use emerging technologies in the context of national security. Naturally, much of its work is top secret, but it does sometimes publicly disclose its general goals. One such goal is to create artificial intelligence systems that are explainable to human beings, abbreviated XAI. DARPA says that if this goal is achieved, and I quote, "New machine learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future."

This language has also surfaced as the European Union moves to implement the General Data Protection Regulation, or GDPR, next year, which we have already described in some detail. One element of this comprehensive regulatory framework is that it attempts to establish a right to explanation, so that when an algorithm makes a decision about you, such as a decision to reject your credit application, you have the right to ask why that decision was made. Several researchers have been using the term inscrutable to refer to unexplainable algorithms, so at the most general level, inscrutable is the opposite of explainable.

There is consensus that avoiding inscrutable algorithms is a praiseworthy goal. We believe there are legal and ethical reasons to be extremely suspicious of a future where decisions are made that affect or even determine human well-being, and yet no human could know anything about how those decisions are made. The kind of future that is most likely to be compatible with human well-being is one where algorithmic discoveries support human decision making instead of replacing it. This is the difference between using artificial intelligence as a tool that makes our decisions and policies more accurate, intelligent, and humane, and using AI as a crutch that does our thinking for us.

In module one, we talked about how to respond properly to algorithmic conclusions. We believe that there are many options between simply accepting them and rejecting them. Instead, it is important that we accept them critically, and this kind of thinking requires asking thoughtful questions about them. But in addition to being properly cautious, for algorithms to serve as tools and not crutches, they must be, on some level and in some sense, non-inscrutable. So at this most basic level, we should all be able to agree that we want explainable algorithms, at least if we understand that term as simply meaning non-inscrutable. But that is where the agreement stops and the complexity starts. I will further introduce XAI by naming some of the general complications with it in the next video.
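To make "explain their rationale" concrete, here is a minimal sketch, not from the course, of what a per-decision explanation could look like for a credit model. It assumes scikit-learn and NumPy are available; the feature names, training data, labels, and applicant values are entirely hypothetical.

```python
# A minimal sketch (not from the course) of a per-decision "explanation"
# for a credit model. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant features: income, debt ratio, years of credit history
feature_names = ["income", "debt_ratio", "credit_history_years"]
X = np.array([
    [5.0, 0.2, 10],
    [2.0, 0.8, 1],
    [7.0, 0.1, 15],
    [1.5, 0.9, 2],
])
y = np.array([1, 0, 1, 0])  # toy labels: 1 = approved, 0 = rejected

model = LogisticRegression().fit(X, y)

applicant = np.array([[2.5, 0.7, 3]])
decision = model.predict(applicant)[0]

# Because the model is linear, each feature's contribution to the decision
# score is just coefficient * value, which supports a readable rationale.
contributions = model.coef_[0] * applicant[0]
print("decision:", "approved" if decision == 1 else "rejected")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"  {name}: {c:+.2f}")
```

Because the model is linear, its decision score decomposes into per-feature contributions, which is one simple sense in which an algorithm can answer "why was this decision made." A deep neural network offers no such direct decomposition, which is roughly what researchers mean by inscrutable.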
