In this video, get an overview of the course, learn what the term responsible means, and see how it will be fleshed out during the course.
- [Instructor] Responsible AI, what is it? It's a term that's been used many times by different people in different organizations. Most people hear the term and think it's some kind of oversight process, a way to see how AI is being developed and deployed. I would say most people think it's important, especially in a world where decisions affecting people's lives are made automatically, and almost mysteriously, by various technological systems. Most of my career has been about bringing these models, or at least automated decision systems, to life. Throughout this course, I'll be speaking from personal experience. I've been the person trying to answer questions about making a system responsible, and I've also been the person requesting that systems be built in a more transparent manner. It usually comes down to the analyst or the person designing these decision systems. Ideally, we would like them to build a decision system that can be held responsible in much the same way a person would be. That's a good way to think about it: at what level would you expect a person to make these decisions? We'll break down what both AI and responsible mean later in this course, but for now, our working definition of responsible AI is a framework, a set of principles, that allows artificial intelligence applications to be held responsible for the decisions they make. This framework can take many different forms, and through this course, we'll investigate them.