Fine-tuning involves replacing the existing classifier with new fully connected layers. In this video, you find out how to do this.
- [Instructor] Normally, for a VGG-16 network, we input an image to the network. The image forward propagates through the network, and finally we obtain our classification probabilities at the end of the network. When we do fine-tuning as part of transfer learning, we need to make a couple of changes to our network. First, we freeze the features section of the network. This is because the early layers of the network will have captured things like edges and textures, which are common to most images, so we don't want to lose them. We then either remove the fully connected layers at the end of the network, which form the classifier portion, or replace a portion of the classifier with our own creation. The new classifier head has the number of output classes required by our new classification task, instead of the 1,000 classes for ImageNet. We then start training, but we only train the new fully connected head. Optionally, you can then unfreeze some of the convolutional layers and perform a second pass of training. You can continue fine-tuning until you reach the required level of accuracy.