From the course: Deep Learning: Image Recognition

Pre-trained neural networks included with Keras - Python Tutorial


- [Narrator] Researchers around the world compete with each other to build the most accurate and capable image recognition systems. So instead of designing your own neural network from scratch, it often makes sense to reuse an existing neural network design as a starting point for your own projects. Even better, researchers also train these neural network designs on large data sets and share the trained versions of the neural networks. So we can take those pre-trained neural networks and either reuse them directly, or use them as a starting point for our own training. Keras, the library that we're using to build neural networks, includes copies of many popular pre-trained neural networks that are ready to use.

The image recognition models included with Keras are all trained to recognize images from the ImageNet data set. The ImageNet data set is a collection of millions of pictures of objects that have been labeled so that you can use them to train computers to recognize those objects. Each year, ImageNet holds a worldwide image recognition contest called the ImageNet Large Scale Visual Recognition Challenge, or ILSVRC. Teams from universities and companies around the world compete to build the most accurate image recognition models. The pre-trained models included with Keras are trained on the more limited data set used by this contest. That data set contains images of 1,000 different types of objects, like breeds of animals and types of foods. For example, one of the types of objects in the data set is the Granny Smith apple. The data set includes over 1,200 pictures of just this specific kind of apple.

Let's talk about the neural network designs included with Keras that we can reuse. First is VGG. VGG is a deep neural network with either 16 or 19 layers. It was the state of the art in 2014, and it's a very standard convolutional neural network design.
It's still used widely today as a basis for other models because it's easy to work with and easy to understand, but newer designs tend to be more efficient. ResNet-50 is the state of the art from 2015. It's a 50-layer neural network that manages to be more accurate but use less memory than the VGG design. ResNet uses a more complicated design, where higher layers in the neural network are connected not just to the layer directly below them, but also have multiple connections to deeper layers. Inception v3 is another design from 2015 that also performs very well. It has an even more complex design, based around layers that branch out into multiple separate paths before rejoining.

These networks show the research trend in 2014 and 2015 of making neural networks bigger and more complex to try to increase their accuracy. More recent neural network designs tend to be more specialized. For example, Google's MobileNet, created in 2017, is designed specifically to run well on low-power devices. The idea was to create a neural network that could run quickly on a cell phone without using too much power, while still maintaining a decent level of accuracy. Google's NASNet, which was created at the end of 2017, explores the idea of having algorithms design neural networks. In a sense, they're using machine learning to build and tweak machine learning models on their own. This let them create something that was more accurate than existing models while using even less computing power.

Having access to these pre-trained models is useful for two reasons. First, you can reuse any of these models directly in your own programs to recognize objects in images. If you need the ability to recognize any of the 1,000 types of objects they're already trained on, your problem is already solved.
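All of the designs mentioned here ship with Keras in the `keras.applications` module. As a rough sketch of what that looks like (`ResNet50`, `InceptionV3`, `MobileNet`, and `NASNetMobile` follow the same pattern as `VGG16`; exact module paths can vary slightly between Keras versions):

```python
# Sketch: instantiating one of the pre-trained architectures bundled
# with Keras. Passing weights="imagenet" downloads the ImageNet-trained
# weights on first use; weights=None builds the same architecture with
# random weights (used here so the sketch runs offline).
from keras.applications.vgg16 import VGG16

model = VGG16(weights=None)  # use weights="imagenet" for the trained version

# The final layer produces one score per ImageNet class
print(model.output_shape)  # (None, 1000)
```

The same constructor pattern applies to the other bundled models, so swapping architectures in an experiment is usually a one-line change.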
Second, if you want to recognize a different type of object that's not in the 1,000-object training set, it's much faster to start with a pre-trained neural network and adapt it to your needs than to train your own model from scratch. In this course, we'll test out both options.
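The second option, adapting a pre-trained network, typically works by dropping the network's final classification layers and reusing the rest as a feature extractor for your own images. A minimal sketch, assuming VGG16 with 224×224 inputs (`include_top=False` is the Keras flag that omits the classification layers):

```python
# Sketch: reusing a pre-trained network as a starting point for a new task.
# include_top=False drops the final 1,000-class layers, leaving just the
# convolutional base.
from keras.applications.vgg16 import VGG16

# weights=None keeps this sketch runnable offline; pass
# weights="imagenet" to actually start from the pre-trained weights.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# The base now outputs a 7x7x512 feature map instead of class scores,
# ready to feed into a small new classifier trained on your own images.
print(base.output_shape)  # (None, 7, 7, 512)
```

Because the convolutional base has already learned general visual features, only the small new classifier on top needs to be trained, which is why this approach is so much faster than training from scratch.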
