- In this lecture we're going to talk about machine learning models and algorithms. Eric Nyberg is back again to help. We're going to review some key terms and concepts that we already covered, and we're going to introduce a few new ones. If you want to have a productive conversation with your engineer or your vendors about one of the hottest areas of cognitive technologies, machine learning, you'll need to be familiar with these concepts and terms. There are many approaches to machine learning. They differ in the kinds of algorithms they use and the kinds of models they construct or learn.
Now an algorithm is a procedure, a sequence of steps that is performed to produce a result, like a recipe. In machine learning, the algorithm describes the learning process, the process of creating the model from the data. And the model is a mathematical formula which produces a desired output given an input. Let's review a few other terms and concepts that you're likely to encounter in your work with artificial intelligence and cognitive technologies. The first is artificial neural networks. Now neural networks have become extremely important in machine learning because of how well they work.
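To make the distinction concrete, here is a minimal sketch (the one-parameter model and the toy dataset are both assumed for illustration): the training loop is the algorithm, and the formula it produces is the model.

```python
def train(data, steps=1000, lr=0.01):
    """The learning algorithm: a recipe that adjusts w to fit the data."""
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            w -= lr * (w * x - y) * x  # gradient step on squared error
    return w  # the learned parameter of the model

# Assumed toy data following y = 2x.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)

def model(x):
    """The model: a mathematical formula mapping an input to an output."""
    return w * x
```

Running the algorithm once fixes `w`; after that, `model` is just a formula that can be applied to any input, including values not in the training data.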
They are inspired by the structure and the workings of the brain. A neuron in the brain receives chemical input from other neurons through its dendrites. If the input exceeds a certain threshold, then the neuron fires its own impulse onto the neurons it's connected to via its axon. In artificial neural networks, nodes represent neurons and the links between them represent the neural connections, those dendrites and axons. Neural nets are inspired by the brain in the way that airplanes are inspired by birds.
They copy some of the ideas, but they don't work in exactly the same way. Now a neural network can express complex mathematical models. The links contain variable values and the nodes are themselves math functions. A training algorithm automatically adjusts the values of the variables in the model until it's able to produce the right output given the training data that's input into it; this is the learning process. Once trained, the model can produce good results even on data it wasn't trained on.
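A single artificial neuron can be sketched in a few lines. In this toy example (the perceptron-style update rule and the AND task are assumed choices for illustration), links carry adjustable weights, the node applies a threshold like a firing neuron, and training nudges the weights until the outputs match the data.

```python
def neuron(weights, bias, inputs):
    """The node: 'fires' (outputs 1) when the weighted input exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Assumed toy training data: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Training: repeatedly adjust the link weights toward the right outputs.
weights, bias = [0.0, 0.0], 0.0
for _ in range(20):
    for inputs, target in data:
        error = target - neuron(weights, bias, inputs)
        weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]
        bias += 0.1 * error
```

Real networks stack many such nodes in layers and use smoother functions and gradients, but the principle is the same: the algorithm adjusts the link values until the model reproduces the training outputs.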
Now I have a question for you Eric. What supervised learning techniques are neural networks really good for? - Well David, neural networks have a long history of being very effective in speech recognition, where they can be used to segment the signal into phonemes and then associate the phonemes with words in the dictionary. Another application area for neural networks is in named entity recognition, where neural networks are trained to recognize person names, place names, and other phrases which represent entities in text.
- So they're good for pattern recognition applications? - Yes. - Great. Another important technique for machine learning is called support vector machines. Support vector machines are for classification. And there's an advanced version that can be used for regression too. They're currently a popular approach for off-the-shelf supervised learning. How do they work? Well to oversimplify, with support vector machines, the learning algorithm finds a way of drawing a line between items in the training data that belong to different categories.
The location of any new input relative to that line determines which category it belongs in. Common applications for support vector machines include image recognition, text classification, and handwriting recognition. Now Eric, why would somebody use support vector machines versus other techniques like neural networks? - Well David, support vector machines are very popular because they're simple to implement and they're fairly straightforward to train. And they also allow you to train models that use a lot of variables or a lot of features in the model.
So they're a common choice as a kind of first cut approach to classification because often what we're trying to do is to sort of test whether or not we're identifying the right features in the domain. And because SVMs can be trained quickly on a large number of features, they're a great choice to sort of do some testing on the features that you'll find. - And they can help with feature engineering. - Exactly. - So the last concept I'd like to introduce in this lecture is ensemble learning.
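The "drawing a line between categories" idea can be sketched as a minimal linear SVM trained by sub-gradient descent on the hinge loss (a simplified Pegasos-style update; the two-cluster dataset is assumed for illustration). A new point's side of the learned line determines its category.

```python
def train_svm(data, steps=2000, lr=0.01, reg=0.01):
    """Find a separating line (w, b) with a margin; labels y are +1 or -1."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(steps):
        for (x1, x2), y in data:
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:  # point is on the wrong side or inside the margin
                w[0] += lr * (y * x1 - reg * w[0])
                w[1] += lr * (y * x2 - reg * w[1])
                b += lr * y
            else:           # only apply the regularization shrinkage
                w[0] -= lr * reg * w[0]
                w[1] -= lr * reg * w[1]
    return w, b

def classify(w, b, x1, x2):
    """Category is decided by which side of the line the point falls on."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

# Assumed toy data: two linearly separable clusters.
data = [((1, 1), -1), ((1, 2), -1), ((2, 1), -1),
        ((5, 5), 1), ((5, 6), 1), ((6, 5), 1)]
w, b = train_svm(data)
```

Production SVMs add kernels for non-linear boundaries and much more careful optimization, but the classification rule is the same line-side test.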
Ensemble learning refers to using a collection of models together and combining their outputs to get a better result. It's basically applying a model to the predictions of multiple models, like taking an average of stock analysts' forecasts. Now Eric, where have ensemble learning techniques had an important impact in the real world? - So one great example, David, is IBM Watson. The Watson system that was built to play Jeopardy on television actually used hundreds of algorithms to assign scores to individual candidate answers for each Jeopardy clue. A regression approach was then used to train an ensemble of all of those methods to come up with a final score for each candidate answer. And in the end, that ensemble approach worked better than any subset or any single approach alone, which really means that there are some problems today where the best solution is an ensemble solution.
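The forecast-averaging idea can be sketched directly. The three toy "models" below are assumed stand-ins for separately trained predictors; the ensemble simply averages their outputs, the way one might average analysts' forecasts.

```python
def ensemble(models, x):
    """Combine the predictions of several models by averaging them."""
    predictions = [m(x) for m in models]
    return sum(predictions) / len(predictions)

# Three hypothetical predictors that each over- or under-shoot the true y = 2x.
models = [lambda x: 2 * x + 1,
          lambda x: 2 * x - 1,
          lambda x: 2 * x]

print(ensemble(models, 10))  # → 20.0, the average of 21, 19, and 20
```

Averaging is the simplest combination rule; Watson-style systems instead train a model (such as a regression) on top of the individual predictions to learn how much to trust each one.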
- Perfect, that's what they're for. And the last question I wanted to ask you is just thinking about machine learning in the context of cognitive computing. So how has machine learning evolved in this world of cognitive computing? - Well I think when we talk about cognitive agents, we're talking about software that becomes more and more autonomous, learning more on its own and less as a passive learner with a human instructing it. That means we want to explore not only ensemble methods, but also different ways of having the software learn more autonomously.
So for example, finding examples that it's less certain about and asking the user for some input. These approaches are referred to as active learning or proactive learning. All of these are going to become very important in the future because we're going to expect machine learning systems to learn in the wild, more autonomously and with less feedback from users. - Excellent. Great, so let's summarize. Algorithms are recipes for performing a procedure. Machine learning algorithms train models, meaning they create mathematical models from data.
In this lecture we learned about algorithms for training neural networks and support vector machines, both of which are good for classification. And we discussed ensemble learning, which combines multiple machine learning models in an effort to produce better results than any solo model can produce.