Join Doug Rose for an in-depth discussion in this video Backpropagation, part of Artificial Intelligence Foundations: Thinking Machines.
- I have a vivid memory of being a kid splitting a small bag of jelly beans. My friend and I were very good at sharing the bag. He would eat two, then I would eat two. We worked together to empty the bag. As we ate our way down, I noticed that my friend was ignoring the black jelly beans. So when we got close to the bottom the number of these beans increased. I asked him, hey why did you leave me these little black jelly beans? He said he was saving them all for me. So without thinking, I popped two in my mouth and began to chew.
These little beans were one of the vilest things I've ever tried. They tasted like a mixture of soap and bug spray and candles. I spit them out and that ruined the rest of the bag. From that day forward, I was deeply suspicious of any of the darker colored beans. I figured I'd made an error by eating the black ones, so I was determined to correct the error by staying closer to the other end of the color gradient. My friends and family encouraged me to move further down the color gradient. I delved into more experimental colors, like green, purple, and red.
Each time I achieved some success with a lighter color, I would go for darker colors further down the gradient. I wasn't thinking about it at the time, but I was actually using gradient descent to do a form of backpropagation. Backpropagation is a term you'll often hear around artificial neural networks. It's a very common way to deal with error correction. It's also commonly called backpropagation of errors, or backprop for short. Remember that each connection between the neurons in your artificial network has a weight.
Your neural network will strengthen and adjust these weights as a way to match different patterns. A very strong connection shows some type of match. A weaker connection shows only a small match, or nothing at all. It's almost like when you see lines on a dirt road. The deeper grooves are the ones where you see the most traffic. The deeper the groove, the more likely you are to follow the same path. The challenge is that when you're doing supervised learning, there has to be a way to gently let the artificial neural network know that it has made a mistake.
You've fallen out of the groove and onto the rough road, and you need a way to get safely back. Remember that in supervised learning a human being is helping to train an artificial neural network. So if you're training your network to classify jazz music, there needs to be some way for the network to learn when it classifies the wrong song. So let's say that you use an artificial neural network. It makes a mistake and spits out a black jelly bean. It classifies one of your jazz songs as country music.
When that happens you need an algorithm to go back through the network and tweak the weights. The backprop algorithm will adjust the weight of your neural connections. In a sense, you're going back through and telling a few of your neurons that they identified the wrong note. You don't want your neuron to throw out all their weights and storm off. Instead you just want them to make slight adjustments to see if that improves their recognition. You should think of this as a gradient adjustment.
If you've ever seen an image gradient, you'll notice it's basically two or more colors blended together with decreasing intensity. It's the same with those black jelly beans. You have a black one that's an error, and then you might have the red one which is correct. You want to train the network to get from black to red in small color increments. The backprop algorithm will be saying, this time try a little bit more red. Maybe a touch more red, a touch more black, it's a match.
The backpropagation of errors, combined with gradient descent, will help you twist the dials of your artificial neural network so that you can zoom in on the correct answer. It's important to keep in mind that backprop is typically only used for supervised learning. Remember that my friends and family had to coax me into eating darker colored jelly beans. It's the same thing with your artificial neural network. A human being has to identify the black jelly beans and then help the network twist the dials to get you back on track.
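The "twisting the dials" idea above can be sketched in a few lines of code. This is a minimal, illustrative example on a single sigmoid neuron with a squared-error loss; the input values, starting weights, and learning rate are all made-up assumptions, not something from the course:

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical training example: two input features and the correct label.
inputs = [0.5, 0.8]
target = 1.0            # e.g. "this song is jazz"
weights = [0.1, -0.2]   # starting connection weights (arbitrary)
learning_rate = 0.5     # how big each "slight adjustment" is

for step in range(1000):
    # Forward pass: compute the neuron's output from the current weights.
    output = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
    # Backward pass: the error gradient for a sigmoid neuron
    # with squared-error loss.
    error = output - target
    grad = error * output * (1 - output)
    # Gradient descent: nudge each weight slightly downhill on the error.
    weights = [w - learning_rate * grad * x for w, x in zip(weights, inputs)]

# After many small adjustments, the output has moved close to the target.
print(output)
```

Each pass through the loop is one tiny dial twist: the network never throws out its weights and starts over, it just moves them a small step in the direction that reduces the error, which is exactly the gradual black-to-red color shift described above.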