From the course: Transfer Learning for Images Using PyTorch: Essential Training
Training the fixed feature extractor
We'll be looking at training the network shortly, so let's build an intuitive understanding of what happens when we train it. We take the entire CIFAR-10 training dataset and divide it into batches. We then pass a batch of images with their labels to the VGG-16 network; in our example, this is a batch of 64 CIFAR-10 images, each labeled with one of the dataset's 10 classes. So let's take a look at one of these batches. The weights, represented by W in the diagram, contain the information that VGG-16 has learned from the data that has passed through it. Now remember that we have frozen all of the layers, so none of the weights will be modified. This is because the VGG-16 model has been pre-trained on the ImageNet dataset, so the weight values have already been adjusted by training on over a million images. We then define the fixed feature extractor and…
Contents
- Creating a fixed feature extractor (5m 30s)
- Understanding loss: CrossEntropyLoss() and NLLLoss() (3m 37s)
- Autograd (1m 33s)
- Using autograd (4m 9s)
- Training the fixed feature extractor (3m 24s)
- Optimizers (1m 49s)
- CPU to GPU (59s)
- Train the extractor (37s)
- Evaluate the network and viewing images (2m 22s)
- Viewing images and normalization (5m 52s)
- Accuracy of the model (2m 40s)