From the course: Ethics and Law in Data Analytics

AI design principles

- Going back to module one of our introduction, Nathan talked about how, on the ethics side of things, we focus on design: we must think about how to architect and build our systems in an ethical way before they go to market, before they cause problems. This presentation gives you some fundamental principles that come from the Microsoft space, but that really try to extend our current thinking on best practices as it relates to data, AI, ethics, and design. Satya Nadella, back in 2016, talked about how the debate about ethics in AI isn't really about good versus evil, or at least it shouldn't be. The debate should be about the values of the people and institutions creating this technology. And that's such a change, right? It's about the values behind the technology, rather than the technology being bad or something that shouldn't be allowed at all. The reality is that it's here, and what we do with it now is the question.

The first principle he gives us is that AI must be designed to assist humanity and maximize efficiency without destroying the dignity of people. That's really about augmenting our abilities as humans, not replacing them, and about empowering people. Many use cases of artificial intelligence, for example, enhance the ability of a person with disabilities to engage with technology more freely, enabling diversity rather than constricting it. The challenge is that there are many different types of AI systems and applications across different industries and verticals, so is there a one-size-fits-all answer? That's exactly why we're giving you a principle instead of specific guidelines to step through. Some questions you might ask yourself about assisting humanity and maximizing human efficiency: What is the real-world consequence of what I'm doing? Is there benefit to the customer? Does my project show respect to individuals and communities?

The second principle follows from the first: to assist humanity, these systems must also be transparent and accountable, as you've been hearing from Nathan, so that we can undo unintended harm. AI should be fair and inclusive, but bias is hard to detect. The system may reflect the beliefs of the individuals who built it, and the data itself might be opaque or hard to interpret, as you may have seen in our lab. When it comes to data, though, not all bias is bad. If we're trying to create a service or product catered to a particular individual or population, of course that service or product will be skewed toward that particular use. Some questions here: Have you tested your assumptions? Can you detect harm from your work and act on it? Have you made disclosures? The algorithm or method itself may be difficult to see into, but we can explain it in common terms and make sure people know what we're doing. And do you feel good about it? That's almost a personal gut check.

The third principle is the most direct one about bias: we must guard against it. Our customers trust us with their data, and we need to innovate on that data to create new and exciting things that bring something new to the world. Bias can be systemic, though, so we must guard against it actively. Disclosure comes up again, and we should also respect the context in which the data arrives, minimize harm, and increase trust. Some questions around bias: Is there a problem with how I'm framing the whole experiment or the service itself? Have I chosen the correct targets or variables? Does bias in the data put some population at a disadvantage? That's a great question to ask as you architect everything from telemetry through to the outputs of an AI system. And is there anyone I can consult, such as subject matter experts who know the field and have the domain-specific knowledge that you as a data practitioner might not have?
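To make that last question concrete, here is a minimal sketch of one way to probe whether outcomes in a dataset put a population at a disadvantage. It is not from the course: the column names (`group`, `approved`), the toy data, and the four-fifths threshold are illustrative assumptions, and a real review would involve domain experts and more than one metric.

```python
# Minimal sketch: compare positive-outcome rates across groups
# and flag disparities below the (illustrative) four-fifths rule.
import pandas as pd

# Hypothetical toy data; in practice this would be your labeled outcomes.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = df.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()  # disparate impact ratio

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, not a legal test
    print("Potential disadvantage detected; revisit framing, targets, and data.")
```

A check like this only surfaces a disparity; deciding whether the disparity reflects harmful bias, acceptable targeting, or a problem in the underlying data is exactly the judgment the questions above are asking you to make.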
Next, AI must be designed for intelligent privacy. This doesn't mean privacy at all costs; it means we respect a number of stakeholders and customer wishes, we don't leak customer data, and we leave alone those people who would like to be left alone. They have an opt-in, opt-out ability, and as we've seen with the GDPR, that kind of control is becoming more common. The challenge, as you saw in lab two, is that there's more to PII than just credit card numbers. Proxies can come into play: even simple fields like name, address, and date of birth can, in combination, individually identify you. We should do as much as we can, in an intelligent fashion, to provide flexibility but also privacy. Some questions for you: Do I understand how personal data differs from PII? You do now. Have I taken steps to plug any holes, or to work preemptively against a breach? And has my work or my system gone through a privacy review in my company or organization?

The last principle is that AI must be secure. Related to intelligent privacy, this is really about upholding the confidentiality and integrity that consumers entrust to us, and providing the protections they need in order to use our services with trust. It's more than basic security; it's about being predictive, using AI in a way that leverages its full capability to detect anomalies and establish a foundation for a trustworthy AI future. Some questions you can ask yourself about security and AI: What are your specific use cases? How is the input coming in; is it coming through gestures and human-like interactions? How can you document your assumptions? How can you build on the best practices your company already has? And think about the potential attacker: are there ways an attacker could interact with your AI to make others look bad, use that information for other nefarious purposes, or discriminate against or incriminate others?

In summary, I hope you've seen that design for AI is just as important as designing the right telemetry system: the more complex it gets and the higher up in scale you go, the more important it is that we treat humans with dignity and respect, and that we enable and unlock the full potential this technology has in the world. Thank you.
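As a closing illustration of the privacy point above, that ordinary fields can act together as identifiers, here is a minimal sketch of a quasi-identifier uniqueness check (a simplified k-anonymity-style measure). The records and column names are hypothetical, and real re-identification risk assessment is considerably more involved.

```python
# Minimal sketch: count how many records are unique on a set of
# quasi-identifiers (a simplified k-anonymity style check).
from collections import Counter

# Hypothetical records; no single field is a credit-card-style identifier.
records = [
    {"name_initial": "J", "zip": "98052", "birth_year": 1980},
    {"name_initial": "J", "zip": "98052", "birth_year": 1981},
    {"name_initial": "M", "zip": "98052", "birth_year": 1980},
    {"name_initial": "M", "zip": "10001", "birth_year": 1975},
]

quasi_identifiers = ("zip", "birth_year")
combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)

# Any combination seen exactly once pinpoints a single person.
unique = [combo for combo, count in combos.items() if count == 1]
print(f"{len(unique)} of {len(records)} records are uniquely identified "
      f"by {quasi_identifiers} alone.")
```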
