From the course: Learning XAI: Explainable Artificial Intelligence

Introduction to AI and ML

- [Narrator] With the proliferation of artificial intelligence and machine learning algorithms in the world, it is becoming more and more important for human beings to be comfortable using these sometimes mysterious systems. Before we get into the details of XAI, I will take a few minutes to review the basics of AI, machine learning, and deep learning, and go over some other terms used throughout this course.

Artificial intelligence, sometimes called machine intelligence, refers to a very broad set of technologies developed to make computers or machines intelligent. Now, the term intelligence itself is subjective and can be defined differently depending on who you talk to. There is certainly a lot of ambiguity about what AI actually is. In fact, some people like to say that a system is only called AI until it finds its way into everyday products; then, it's just technology. We don't need to worry about these distinctions in this course. It is sufficient to know that the AI we refer to here is generally what is discussed in the media and in business situations.

Now, let's talk a bit more about what an AI system is. If we massively simplify things, these systems have three main parts: the input, the model or algorithm, and the output. Inputs are the data that you want the AI to analyze. These could be some photos, data from a factory, or sensor information from your self-driving car. The outputs are the decisions the system makes. Continuing with those examples, it would be the fact that a picture is of a cat, the settings you need to make the factory run smoother, or the fact that your car doesn't run into something on your way home from work in autopilot mode. Then there's the meat of the system: the model, or the algorithm. This is the part that does the actual analysis of the input data to get the output. There are many, many different types, far too many to get into here.

Within the large category of AI, there is a subset of methods that use statistical techniques to learn how to classify or predict outcomes from an existing set of data. This is what's called machine learning. As the name suggests, the system is taught or trained to be able to make these classifications or predictions. Briefly, in machine learning, you would need to prepare a large amount of data, called the training set, to teach the system. We'll talk a little more about the details of this in a subsequent slide.

If we go one level deeper within machine learning, we have deep learning, which is based on an architecture that is somewhat similar to how our brains work. In fact, deep learning is really just a new name for neural networks, an approach that has existed for years. We'll talk a bit about deep learning, as this is often the system that researchers are trying to make more transparent by making it explainable.

Here, we have an example deep learning system. You can see that we have an input layer, a hidden layer, and an output layer. Often, there are many hidden layers, depending on the specifics of an application. In general, the input layer breaks down the data it receives into basic elements and then hands them off to the first hidden layer. In the example of a photograph, these basic elements are often individual pixels or small groups of pixels. Then, each successive hidden layer groups these into larger components that it finds: for example, edges of objects, groups or combinations of edges, larger objects, and so on.
The final layer, again, is the output layer, which produces the decision or recommendation that you want the system to make. In the cat example, it would be the decision of whether the photo is of a cat or not; a minimal code sketch of this layered structure appears below.

A big problem with AI systems today is that this process is not transparent. If you asked your friend to perform the same prediction task, she might pick a photograph from a pile and tell you that it is one of a cat and that she is quite certain about that decision. If pressed, she could also explain that she came to this conclusion because she identified square ears, long whiskers, and paws that she associates with cats. Unlike humans, however, current AI systems cannot explain how they came to their decisions. In the trivial example of cats, you may not care much about why some photos were identified as cats and others were not. However, in more critical cases, such as a recommendation to remove a different organ than originally intended during surgery, or to buy a large position in the stock of a company that you thought did not have a bright future, you may want to know why that recommendation was made. This is where XAI, or explainable AI, would be extremely valuable.
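To make the input, model, and output structure described above more concrete, here is a minimal sketch of a tiny deep learning classifier with an input layer, hidden layers, and an output layer. It assumes TensorFlow/Keras is available; the layer sizes, the random placeholder data, and the cat-versus-not-cat labels are illustrative assumptions only, not material from the course.

```python
import numpy as np
import tensorflow as tf

# Input: a batch of tiny 8x8 grayscale "photos", flattened to 64 pixel values each.
# (Random placeholder data stands in for a real training set.)
photos = np.random.rand(32, 64).astype("float32")
labels = np.random.randint(0, 2, size=(32,)).astype("float32")  # 1 = cat, 0 = not cat

# Model: an input layer, two hidden layers, and an output layer,
# mirroring the layer structure described in the narration.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),              # input layer: raw pixel values
    tf.keras.layers.Dense(16, activation="relu"),    # hidden layer: combines pixels into simple features
    tf.keras.layers.Dense(8, activation="relu"),     # hidden layer: combines features into larger parts
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output layer: probability that the photo is a cat
])

# Training: show the model many labeled examples (the training set).
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(photos, labels, epochs=5, verbose=0)

# Output: the decision the system makes for a new, unseen photo.
new_photo = np.random.rand(1, 64).astype("float32")
print("Probability of cat:", float(model.predict(new_photo, verbose=0)[0, 0]))
```

The point is not the specific library or layer sizes: any decision the trained network makes here emerges from thousands of learned weights, which is exactly why it cannot readily explain itself the way your friend can.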