Nuke's 3D Camera Tracker is a powerful tool that lets you matchmove 3D objects into live-action clips. It reconstructs the original live-action camera move and generates a point cloud: 3D reference points located in the scene. You can then use that information to add 3D objects to the original scene. First, let's get an overview of the CameraTracker workflow. We'll start by getting a clip, so press R on the keyboard for a Read node and select the Project Media folder.
Go to the Lesson_04_Media folder and select the camera tracking clip. We'll open that and hook it up to the Viewer. Let's get a little more screen space here, and I'm going to press H on the keyboard to fit the image to the Viewer. Now, this is an idealized training clip that I created so you'll have a successful and happy camera tracking experience. A real live-action clip would introduce problems and difficulties and make the process a lot harder to learn. We'll stop this.
Now let's add the CameraTracker node: go up to the 3D tab and select the CameraTracker node. The camera tracking process moves through three distinct steps. First is Track Features: the program throws a whole bunch of 2D trackers on the screen and uses those to track 2D locations. Second is the Solve Camera step: this takes the 2D track information collected and computes, or reverse engineers if you will, the live-action camera's location.
Third is Create Scene: this builds the point cloud and puts in the 3D camera so that you can line up your 3D objects. So let's start by tracking our features. It doesn't matter where the playhead is, because when you click Track Features, it jumps to the first frame in the clip. I'm going to move the progress bar down here so I can show you, and lower the Gain just a little bit. Here are the 2D point trackers. The algorithm throws a whole bunch of them all over the screen, and they're used to track 2D targets in the scene, which becomes data for the camera solve.
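Conceptually, each of those 2D trackers does something like the following toy sketch: remember a small pixel patch around a feature, then search a nearby window on the next frame for the best match. This is plain Python for illustration only, not Nuke's actual algorithm or API, and the patch-matching score (sum of squared differences) is an assumption of the sketch.

```python
# Toy 2D feature tracking sketch -- NOT Nuke's tracker, which is far
# more sophisticated. Frames are plain lists-of-lists of pixel values.

def ssd(a, b):
    """Sum of squared differences between two equal-sized patches."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb))

def extract(img, x, y, size):
    """Cut a size-by-size patch whose top-left corner is (x, y)."""
    return [row[x:x + size] for row in img[y:y + size]]

def track_patch(prev_frame, cur_frame, x, y, size=3, radius=2):
    """Find where the patch at (x, y) in prev_frame moved to in cur_frame."""
    ref = extract(prev_frame, x, y, size)
    best = (float("inf"), (x, y))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            if nx < 0 or ny < 0:
                continue  # keep the candidate patch inside the frame
            cand = extract(cur_frame, nx, ny, size)
            if len(cand) == size and all(len(r) == size for r in cand):
                best = min(best, (ssd(ref, cand), (nx, ny)))
    return best[1]
```

Run per feature, per frame, this yields the 2D tracks that the next step, Solve Camera, consumes as raw data.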
It's now tracking backward, because the Nuke CameraTracker is a two-pass tracker. We're all done now, and notice that we have orange-colored points. Let's push in and take a look at one. As I scrub through the timeline, you can see that it shows the path of that track. The shorter the path, the slower the point is moving, and the path also shows you the angle, or the kind of motion: it's circular here, whereas these points up here move in more of a straight line.
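That on-screen cue is easy to reason about numerically: over the same frame range, a shorter path means a slower-moving feature. A toy sketch (plain Python with made-up track positions, not anything read from Nuke):

```python
import math

def path_length(positions):
    """Total 2D distance a tracked point travels across its frames."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

# Two hypothetical tracks over the same three frames:
slow = [(100, 100), (101, 100), (102, 100)]   # 2 px of travel -> short path
fast = [(100, 100), (110, 100), (120, 100)]   # 20 px of travel -> long path
```

Same frame count, ten times the path length: the second feature is moving ten times as fast across the image.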
Also, if you put your cursor on a point, it gives you the length of that track. This one says the point was tracked for a full 100 frames; this being a 100-frame clip, it did a fine job. Okay, I'm going to re-home the Viewer with the H key and take a look at our next step, the camera solve. All we have to do is click the Solve Camera button, and the CameraTracker analyzes all of those 2D tracking points to compute the camera's position. After the camera solve, the points have been colorized to sort them into different categories.
The green points are high quality, or very reliable, points; the red points are unreliable; and sometimes you'll see some yellow, which are questionable. Notice that as I zoom in to the same point, we now have some additional information. I'll put my cursor on that point. There is our original track length of 100, but now there is an Error of 0.74. Watch what happens when I move the playhead one frame: now the Error is 1.08, and another frame, Error is 1.07.
So that Error number shows you the tracking error for that point on this one frame. The next line, the RMS Error, is the root mean square, a kind of average, of the tracking error over the whole length of the clip. And the third one, the Max Error, is simply the largest error encountered during tracking. Okay, let's re-home the Viewer with an H and go to step three, creating the scene. Before I do that, I want to move my Viewer node over here, because Create Scene tends to create a bunch of nodes and clobbers my Viewer. So let's click Create Scene.
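Those three numbers relate in a simple way. A minimal sketch of how a series of per-frame errors yields the RMS and Max figures, using made-up values (this is the standard root-mean-square formula, not code pulled from Nuke):

```python
import math

def rms_error(per_frame_errors):
    """Root mean square of the per-frame tracking errors -- a kind of
    average that weights large errors more heavily than a plain mean."""
    return math.sqrt(sum(e * e for e in per_frame_errors)
                     / len(per_frame_errors))

def max_error(per_frame_errors):
    """The single largest per-frame error seen over the track."""
    return max(per_frame_errors)

errors = [0.74, 1.08, 1.07, 0.90]   # hypothetical per-frame Error values
print(rms_error(errors), max_error(errors))
```

So a point can have a decent RMS Error but a nasty Max Error if it slipped badly on just one frame, which is worth checking before you trust it.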
And here are our new nodes: it's added the CameraTrackerPointCloud node, here is my camera, and of course the Scene node just to tie it all together. So let's switch to the 3D view and see what we got. Switch to 3D, zoom out a little bit: there is my point cloud, and if I play the clip, there is my camera solve. This is a 3D camera that duplicates the motion of the original live-action camera. Now, personally I like to adjust the camera's uniform scale way down, so there isn't a silly huge camera in the shot.
Now let's take a look at our point cloud through the camera's viewfinder. Don't forget to turn on the Lock View button. We can see the point cloud is now moving based on our moving camera, and yes, that does look like our scene. We can see the wall over here with Marcy, there is our picture window and the little wooden box in the front, and these points out here are the points outside the window on the far mountains. Now we can confirm how good our camera solve and point cloud are by simply compositing the moving point cloud over the original clip.
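That visual confirmation is, in effect, a reprojection check: a solved 3D point, pushed through the solved camera on each frame, should land right on top of its 2D track. A hypothetical pinhole-camera sketch of the idea (simplified model for illustration; not Nuke's solver, and the camera motion here is invented):

```python
def project(point3d, cam_x, focal=1.0):
    """Project a 3D point through an idealized pinhole camera sitting at
    (cam_x, 0, 0) and looking down +z. Returns 2D image-plane coordinates."""
    x, y, z = point3d
    return (focal * (x - cam_x) / z, focal * y / z)

# A static scene point seen from two camera positions. If the solve is
# good, each frame's projection matches that frame's 2D track, so the
# point appears "locked" to its target when composited over the clip.
point = (2.0, 0.0, 4.0)
frame1 = project(point, cam_x=0.0)   # camera at origin
frame2 = project(point, cam_x=1.0)   # camera one unit to the right
```

Where the projections drift away from the 2D tracks, you see points sliding off their targets in the composite, and that is exactly what the per-point Error values were measuring.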
Here's how we do that. Let's hook up the original Read node to input 2 of the Viewer. Then we'll set the Viewer wipe control to "over", with the CameraTracker over the Read node, and switch the Viewer to 3D. So you see, the CameraTracker, which is the point cloud, is over the Read node, which is our background plate. Move the wipe control off to the side so you can see the whole thing, and you might want to adjust the Viewer gain control for best visibility.
Now we can play the clip and confirm that the points seem to be locking to the scene. Of course, the points that represent the far distant mountains are cruising way in the background there and may look kind of funny, but all the points that are in the room appear to be locked to their targets, which is what we want. Now, this 3D grid can get in your way. If you want to get rid of it, click in the Viewer, press S on the keyboard to open the Viewer settings, select the 3D tab, and turn off the grid right here with this button.
And now you can watch your confirmation without any interference from the grid. When you are all done, don't forget to turn the 3D grid back on, there you go. Now that we have the big picture of how the CameraTracker works, let's back up and take a closer look at each step of the process.
Nuke 6.3 New Features was created and produced by Steve Wright. We are honored to host his material in the lynda.com library.