Author Steve Wright explores the new features found in the 3D digital compositor Nuke 6. The course introduces the RotoPaint node for drawing and painting effects, the Keylight keyer for creating mattes and composites, and the SplineWarp node for warping images. The course also explains how to merge keys, animate with keyframes, and create image-based blurs. Exercise files accompany the course.
Nuke 6 New Features was created and produced by Steve Wright. We are honored to host his material in the lynda.com Online Training Library®.
The surprising thing about the MotionBlur2D node is that it does not do any motion blur itself. It converts 2D transformation information like translate, scale, and rotate into forward UV data. This forward UV data is then used by the VectorBlur node to do the actual blur. Let's see how it works. First we'll need a clip, so let's get a Read node, go to the Project Media, and select the gingerbreadman clip. There it is. See, this clip has no motion blur, so we're going to add motion blur with the MotionBlur2D node.
We'll open the clip and hook it up to the viewer. Okay, to apply motion blur to this clip, first we'll select the Read node, go to the Filter tab, and add the MotionBlur2D node. Notice if I hook the viewer directly up to the clip, I have no forward channels, but if I hook the viewer up to the MotionBlur2D node, I now have forward channels, though they're not populated with data yet. With the MotionBlur2D node selected, we'll go to the Filter tab and add the VectorBlur node.
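If you prefer to build this chain by script, Nuke exposes the same nodes through its Python API. Here's a minimal sketch, assuming these node class names match your build and using a placeholder file path for the gingerbreadman clip:

```python
import nuke

# Build the Read > MotionBlur2D > VectorBlur chain described above.
read = nuke.nodes.Read(file='gingerbreadman.####.exr')  # hypothetical path
mblur = nuke.nodes.MotionBlur2D()
vblur = nuke.nodes.VectorBlur()

mblur.setInput(0, read)   # image input
vblur.setInput(0, mblur)  # VectorBlur will pick up the forward UV channels from here
```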
For the VectorBlur node to do the blur, it's got to have forward UV data from the MotionBlur2D node, and for that we need a 2D transform. So one thing we can do is go to the Transform tab and get a Transform node. Hook that up to the 2D transform input right there. Now we need some motion in the Transform node, so I'll jump the playhead to frame 1, set a keyframe in the Transform node with translate x at 0, jump to the last frame of the clip, and give it a big translate x of, let's say, 500. So that's moving very fast.
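Scripted, the same keyframes look like this. A sketch continuing the node names above, assuming a 10-frame clip and that the MotionBlur2D node's 2D transform input is input 1 (hover the input arrow to confirm in your build):

```python
transform = nuke.nodes.Transform()
mblur.setInput(1, transform)  # assumed index of the 2D transform input

# Animate translate x: 0 at frame 1, 500 at the last frame (frame 10 here).
transform['translate'].setAnimated(0)            # animate the x component
transform['translate'].setValueAt(0.0, 1, 0)     # frame 1, x = 0
transform['translate'].setValueAt(500.0, 10, 0)  # frame 10, x = 500
```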
Since it's not connected to the Read node, it's actually not moving my clip, but it is generating translate x data, which the MotionBlur2D node can now read. And now if we look in the Channels list, we do have a forward channel that we can see in the viewer. I'm going to clear the Property bin and set the viewer back to RGBA for our gingerbreadman clip. To set up the motion blur, we'll open up the MotionBlur2D node, and you can see here that it's going to output the UV data into the motion or forward channels--either one will work.
So the forward UV channels are now populated with data. Then we open up the VectorBlur node. By default it isn't looking for any data, so what we have to do is tell it to look in the forward channels, and voila! We can also set which channels we want motion blurred, which would be the RGBA channels. Now, these settings here are used to adjust any data that you input from another system. Maybe you have forward UV data from Maya or 3ds Max, so you can add constants to the U and V values, scale the motion UV data up and down, or apply offsets to it.
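Those two panel settings can also be set by script. A sketch continuing the names above; the knob names 'uv' and 'channels' are assumptions based on the panel labels, so confirm them with vblur.knobs() in your build:

```python
vblur['uv'].setValue('forward')     # read the motion vectors from the forward channels
vblur['channels'].setValue('rgba')  # apply the blur to the RGBA channels
```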
That way you can reformat any imported motion UV data in Nuke. Okay, I'll disconnect the MotionBlur2D node's 2D transform input from the Transform node to show you another technique. What we really want is motion blur applied to this object that's driven by its own motion. We can do that using a Tracker node. So I'll select the Read node, come to the Transform tab, and do a Shift+Click on the Tracker node--I'll move it over here--so that we can use the Tracker node to collect transformation data from the original clip, feed that to the MotionBlur2D node, pass that to the VectorBlur, and impart correct motion blur on this moving target.
So I'll go to frame 1, set my tracker here, and make it larger--I want to make my tracking box really big, because this thing is moving very quickly, so I need a big, wide search box here. Okay, we'll track forward. Done! All right, I now have tracking data over the whole length of the clip. You can see the tracker right here. If I switch to the Transform tab, you can see the translate x and y data here.
All I have to do is connect the 2D transform input to the Tracker node. However, if I check my channels, I don't have any forward data. The reason there are no forward channels is that the transform setting on the Tracker's Transform tab is set to none--it's not outputting any transform data. But if I set it to match-move, suddenly we have our motion data. I'll clear the Property bin, switch the viewer back to RGBA, and there's the motion blur on our character. And I can toggle the MotionBlur2D node on and off and you can see it happening.
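The tracker-driven version scripts much the same way. A sketch continuing the names above; in Nuke 6 the tracker's node class is Tracker3, and the 'match-move' enum string is an assumption you can verify with tracker['transform'].values():

```python
tracker = nuke.nodes.Tracker3()
tracker.setInput(0, read)   # track the original clip
mblur.setInput(1, tracker)  # feed the tracked transform to MotionBlur2D

# The Transform tab defaults to 'none', which outputs no transform data.
tracker['transform'].setValue('match-move')
```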
We now have correct motion blur for the actual motion of this object. The reason the motion blur is gone on the last frame is that the motion blur calculation looks from the current frame to the next frame, and frame 10 is the last frame--there's no motion between frame 10 and frame 11. So to fix that, we go to the Tracker node and fix the translate x and y curves. Go to the Curve Editor, select translate x and y, select the last points in the curves, and set them to Linear.
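That last fix can be scripted too. A sketch assuming the Tracker exposes its result as an animated translate knob, as the Curve Editor view here suggests; curve indices 0 and 1 are the x and y components:

```python
# Make the final keyframe of translate x and y Linear so the curves keep
# their slope past frame 10 and the blur survives on the last frame.
for index in (0, 1):  # 0 = translate.x, 1 = translate.y
    curve = tracker['translate'].animation(index)
    curve.changeInterpolation([curve.keys()[-1]], nuke.LINEAR)
```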
This way they retain their slopes, there is now a speed difference between frame 10 and frame 11, and the motion blur returns; it no longer dies on the last frame. Back to the Node Graph, clear the Property bin. So here we saw how the MotionBlur2D node is used to capture forward UV data, which is then fed to the VectorBlur node. But there are other situations where you can use the VectorBlur without needing the MotionBlur2D node at all.