Join Andy Needham for an in-depth discussion in this video Tracking the motion in a shot, part of Learning Foundry CameraTracker for After Effects 2014.
So, let's take a look at basic tracking. I've gone ahead and RAM previewed the shot. It's always worth evaluating the footage before tracking so you can observe the camera's motion. We can see that the camera's just rotating in this shot, so let's note that for later on. We'll add the Camera Tracker effect: make sure the layer's selected, then choose Effect > Foundry > CameraTracker. Let's just run through a few of these parameters. For the analysis range, we're going to use Source Clip Range in our case, but if you had a long clip and you wanted to test your tracking on just a short range, then maybe use Specified Range.
And you can do that there. Let's bring it back to Source Clip Range and come down to the tracking parameters here. We noted that the motion of our camera was rotating, so we're going to set the track validation to Rotating Camera. The defaults are usually fine, so let's just click Track Features, and the Camera Tracker will begin tracking forward. The Camera Tracker makes use of auto tracking to record a layer's feature data. As a feature is tracked, it becomes a series of 2D coordinates that represent the position of the feature across a series of frames.
By having hundreds of them moving at different speeds relative to the camera's movement, they can be used to calculate the camera's position in 3D space. It's important to note, then, that the Camera Tracker needs a minimum of 100 features per frame in order to create an accurate solve. We've got 150 in our default settings. If you find that you're not getting enough track points per frame, you might want to up the number of features, and maybe consider reducing the feature separation.
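To picture the feature data being described, here's a minimal Python sketch, not Foundry's code and with purely illustrative names, where each track is a series of 2D coordinates keyed by frame, and we check how many features survive on a given frame:

```python
# Hypothetical model of CameraTracker's feature data: each track is a
# dict of {frame_number: (x, y)} -- a 2D coordinate per frame it survives.

def count_features_per_frame(tracks, frame):
    """Count how many tracks have a coordinate on the given frame."""
    return sum(1 for t in tracks if frame in t)

tracks = [
    {0: (120.0, 80.0), 1: (121.5, 80.2), 2: (123.1, 80.5)},  # survives 3 frames
    {0: (400.0, 310.0), 1: (398.8, 309.7)},                  # dropped after frame 1
    {1: (55.0, 200.0), 2: (56.2, 201.1)},                    # seeded on frame 1
]

# The tutorial's rule of thumb: at least 100 features per frame for a good solve.
MIN_FEATURES = 100
has_enough = count_features_per_frame(tracks, 1) >= MIN_FEATURES
```

With only three toy tracks, `has_enough` is of course false; the point is just that the solver counts coordinates per frame, not tracks overall.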
Feature separation is just the distribution of features relative to each other. You can force a more even spread of features by using a value higher than 12, but 12 is normally good; it varies from shot to shot. The detection threshold controls the way the tracks are distributed over the layer. You might want to think about lowering the detection threshold if you're tracking a relatively featureless shot; you'll get a more even spread of points across the frame, whereas a high threshold will produce more localized groups of features.
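The idea behind feature separation can be sketched as a minimum-distance rule: only accept a new feature if it isn't too close to one already accepted. This is an illustrative assumption about how such a spread could be enforced, not Foundry's actual algorithm:

```python
import math

def spread_features(candidates, separation):
    """Greedily accept features, rejecting any candidate closer than
    `separation` pixels to an already-accepted feature.
    Illustrative only -- not Foundry's implementation."""
    accepted = []
    for x, y in candidates:
        if all(math.hypot(x - ax, y - ay) >= separation for ax, ay in accepted):
            accepted.append((x, y))
    return accepted

# A tight cluster of three points plus one distant point:
candidates = [(10, 10), (14, 10), (12, 13), (200, 150)]

sparse = spread_features(candidates, 12)  # high separation thins the cluster
dense = spread_features(candidates, 1)    # low separation keeps everything
```

A higher separation value trades raw feature count for a more even distribution, which matches the narration: upping the feature count may mean lowering separation so the extra features have room to land.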
The track threshold setting can be reduced to produce longer tracks, if you're finding that the Camera Tracker is dropping tracks after just a few frames. And increasing the track smoothness can remove tracks whose error grows consistently over time. Track consistency is how inconsistent a feature can be before the Camera Tracker rejects and re-seeds it. Now that the Camera Tracker is tracking backwards, it's validating all the tracks that it made in the forwards pass, re-seeding tracks that were rejected and adding more points per frame.
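The reject-and-re-seed behaviour can be thought of as a per-track error test. Here's a hedged sketch of that logic, with an assumed threshold check rather than Foundry's actual validation:

```python
def validate_track(errors, consistency_threshold):
    """Accept a track only if its per-frame tracking error stays within
    the threshold on every frame; a rejected track would be re-seeded.
    Illustrative sketch, not Foundry's actual validation pass."""
    return all(e <= consistency_threshold for e in errors)

steady = [0.2, 0.3, 0.25, 0.3]   # a consistent feature -- kept
drifting = [0.2, 0.6, 1.4, 2.8]  # error grows over time -- rejected, re-seeded
```

A looser consistency setting would keep more of the borderline tracks; a stricter one rejects and re-seeds them, which is why the backwards pass can end up with more points per frame than the forwards pass produced.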
You'll probably end up with way more than the 150 you started out with. Now that the Camera Tracker has tracked the shot, it's time to solve, which we'll look at in the next video.
- Tracking the motion in a shot
- Solving 3D camera data
- Refining tracks
- Identifying and fixing tracking errors
- Exporting 3D track data
- Using CameraTracker data in composites