
Nuke 7 New Features

with Steve Wright


Nuke 7 is the largest release of Nuke in the history of The Foundry. Join author Steve Wright as he covers all of the big new features in Nuke 7, such as Alembic support, the ModelBuilder node for building geometry from images, normals relighting with the Relight node, a powerful new suite of nodes that turns images with depth maps into 3D point clouds, and much more.

This course was created and produced by Steve Wright. We are honored to host this content in our library.
Topics include:
  • Working with stereo in the RotoPaint node
  • Keyframe tracking
  • Exploring Primatte Keyer
  • Setting up and using Motion Blur
  • Adding depth of field with ZDefocus
  • Animating warps and morphs
  • Measuring and viewing deep data
  • Retiming a shot with optical flow
  • Tracking and point generation in the PointCloudGenerator
  • Creating separate cameras and points with the Camera Tracker
  • Setting up displacements
  • Modeling more complex geometry with the ModelBuilder node
  • Casting semitransparent shadows
  • Relighting renders


Author: Steve Wright
Subject: Video, Compositing, Visual Effects
Software: Nuke 7
Level: Intermediate
Duration: 4h 6m
Released: Apr 26, 2013




Introduction
Welcome
00:12Hi! This is Steve Wright, welcoming you to my Nuke 7 New Features course.
00:16This course is specifically designed for compositors already familiar with Nuke.
00:21The Foundry really outdid themselves with this release.
00:24It's the largest and most comprehensive new release of Nuke ever, with major
00:29technological innovations and tons of exciting new features and tools, here is just a sample.
00:35First up is Deep Compositing. With the recent release of EXR 2.0, deep images
00:40and deep compositing are now an industry standard fully supported by Nuke.
00:44You'll learn all about this extremely important innovation in this course.
00:49Another major new technology is Alembic Geometry, a recent new industry
00:54standard. It allows 3D elements to be exchanged between different platforms and
00:58offers great efficiencies in workflow, render times, and database sizes
01:03compared to FBX files.
01:06Nuke's venerable Tracker Node has been completely rewritten and now supports
01:10unlimited trackers, keyframe tracking, a snap-to feature, and automatic track
01:15averaging, and much, much more.
01:18The all new PointCloudGenerator Node has been completely rewritten and now
01:22generates high density Point Clouds with great accuracy.
01:25It also incorporates a truly impressive mesh generator for skinning your Point
01:30Clouds, replacing the old PoissonMesh Node.
01:34The truly awesome ModelBuilder node replaces the old modeler node and allows you
01:38to quickly create 3D geometry for your clips.
01:41You can use this geometry for camera projection and set extension shots.
01:46My personal favorite new 3D power feature is the brand-new DepthToPoints Node.
01:51It takes a CG image with a depth channel and creates a 3D Point Cloud that you can
01:55use to line up geometry.
01:56You will have hours of fun with this new tool.
02:01At last, Nuke can now do normals relighting with the new Relight Node.
02:05You can relight CG renders in Nuke changing the light direction, color,
02:09intensity, and quality.
02:11You will love the new ZDefocus Node which is a major upgrade to the old ZBlur Node.
02:18It has major improvements in photorealistic depth blur, workflow, plus extensive
02:22new creative control over the look of the Bokeh, that's the look and feel of the
02:26out of focus elements.
02:29For those members that have purchased the premium subscription, this course
02:32comes complete with over 700 MB of Exercise Files.
02:36The Exercise Files are for your personal use only.
02:40So, join me in my Nuke 7 New Features course and learn about these and many more
02:45new features in this latest release of Nuke.
02:48And by the way, you can download my free iPhone app that puts all the
02:52NukeHotkeys at your fingertips, just search for NukeHotkeys in the app store.
1. The RotoPaint Node
Understanding the toolbars
00:01Nuke's Roto and RotoPaint Nodes have been completely overhauled improving both
00:05performance and the tools available.
00:08Most of the changes affect the Roto Node and are carried over to the RotoPaint
00:11Node, so we really only need to look at the Roto Node.
00:15By the way, this video assumes you already know the Roto and RotoPaint Nodes, if
00:20you don't, go back and check out those videos.
00:23This video only covers the Nuke 7 New Features.
00:28So, to get the new Roto Node just click in the Node Graph, type O as before,
00:34look up at the Viewer, and we will come up over here to the toolbars and let's
00:38do a pop-up and take a look.
00:40Here are the new things, the Cusped Bezier and the Cusped Rectangle.
00:44Let's see what those are.
00:45I'll select the original Bezier, click and drag, click and drag, click and drag,
00:50click and drag return to close.
00:52The first thing you'll notice is they're red. Of course, you can change that
00:56over here by clicking on this button, and picking a different color, if you
01:02wish. Okay, I'll undo that.
01:05The second thing you might want to notice is it didn't fill it in.
01:10That's because the output option has now been changed by default to be
01:13the Alpha Channel only.
01:15If you wish to have it RGBA, you can do that, but by default back to Alpha.
01:21So back to our Toolbar, if we come over here and we pick the Cusped Bezier.
01:27So we click and drag, click and drag, click and drag, click and drag, no
01:31curves, return to close.
01:33So, the difference is this has already got Cusped at each point.
01:37You might use this to draw around buildings or windows.
01:40On this shape over here if I select a point, I have my control handles that I
01:44can mess with, but over here, you do not.
01:48Next, let's take a look at the Rectangle, again, as before you can draw a
01:53rectangle, but we now have the new Cusped Rectangle, click and drag that, and
01:59again, the difference is, this is actually a Bezier with the handles preset to
02:05make a nice rectangle, but the Cusped Rectangle actually has Cusped Points, so
02:10that there are no handles.
02:12Of course, you can always change that by turning them into smooth and now you
02:17get your control handles back.
02:18There is a bunch of new toys up here on the Tool Settings, so let's take a look at those.
02:24The Auto Key is still here, just has a little different icon, same function,
02:28just a different look.
02:29The Feather Link also has a new look, now remember, the Feather Link is, if
02:33you pull out a feather point like this, by default they are linked together.
02:37If you turn off the feather link, you can now move the main control point
02:41without affecting the feather point, I'll turn that back on.
02:44Let me push in a little bit here, all right.
02:49Next up, this button shows the Label Points, so if I click that all the labels
02:55pop up on each of the control points, I'll turn that back on.
02:59This one will hide the Curve Lines, so if I turn that on, all my curve lines
03:04disappear, but any selected control points don't.
03:07So, if I click off to the side, it'll clear everything, turn that back on, so
03:11I can see my splines.
03:12Let me select some control points here.
03:16This button will show and hide your points.
03:18So, if I turn that on, the points disappear.
03:21I can still select everything, move it around, transform it, have a good time
03:25like that, it's just that the points themselves don't show up, sometimes that's
03:30a very helpful thing to do, we will turn on the points again.
03:34This button is the Hide Transform.
03:35What that means is normally when you select two or more points, you get the transform box.
03:40If you turn that on, you don't get the transform box, you can still move your
03:45points around, but you can't see the box, we'll turn that back on.
03:49The button next to it is the hide transform jack while moving.
03:52In other words, normally when you are moving, the transform jack is quite visible.
03:56If you click this button, the transform jack does not go away unless you move it.
04:01And now it will hide it, while you're trying to fit things, very nice.
04:06The next button here is the constant selection, let's see how that works.
04:10If I select a shape and then click off to the side, it'll deselect it, so I'll
04:14click over here, deselect it.
04:16However, if I enable that feature, when I select the shape and then I click over
04:21here, click over here, click over here, it does not deselect it anymore.
04:27If you really, really want to deselect it, just come down here to the Layer list
04:30and click in here and it'll deselect everything, I'll turn that back on.
04:36This is the ripple edit, same as before, it just has a shiny new icon, and when
04:39you enable it, then you have all your ripple edit controls, we'll turn that back on.
04:44And these are the same set and delete keyframe buttons.
04:47Thank you Foundry for all the spiffy new features in the toolbars.
04:51Next up, let's take a look at some of the new goodies in the Roto Node
04:55Property panel.
Exploring new commands and features
00:01I've homed the Viewer and Node Graph so we can take a look at some new features in
00:05the Roto Property Panel plus some great new copy and paste features.
00:09First up the Property Panel, the Roto tab's got a little bit of a facelift.
00:14You might notice that the format and output mask fields are missing.
00:18Not true, they are just hidden here under the twirl down, there they are
00:22format and output mask.
00:23So, this just kind of cleans up the Property Panel a bit.
00:27However, on the Transform tab there are some new toys: we've got the skew X and
00:32skew Y sliders, plus the skew order, how that works is, you can now do
00:38individual skewing in X or Y, and you can change the skew order which of course changes the look.
00:47The Shape, Clone, and Lifetime tabs are unchanged in Nuke 7.
00:51Back to the Roto tab.
00:52There is a nifty new feature down here in the Curves list.
00:56Over here in the Life column you can now do a double right-click and it will pop
01:01up this frame range control, so we could for example set the frame range for
01:05this spline to be from 10 to 90, click OK.
01:10You don't have to go over to the Lifetime tab anymore if you don't want to.
01:14There is also another new interesting feature added to the Lifetime, look at
01:18that the spline actually disappears when it's out of its Lifetime range.
01:23In the past, the spline didn't go away, it just stopped doing the paint fill.
01:28So this is a little more intuitive, don't you think?
01:32Okay, big new features in the copy/paste.
01:33If I select this spline and do a right -mouse pop up, and I go to Copy, it's
01:40telling me that I have selected one curve.
01:43If I select two curves, and right mouse pop up, Copy, it now says two curves, deselect.
01:54If I select a point here, right mouse pop up, copy, it says I've 1 point on 1 curve selected.
02:07If I do two points, it will tell me my copy is for 2 points.
02:10Now let's take a look at these: values, animation, and link. What's that about?
02:14All right, to show you that, first I want to take this control point and give it
02:19just a little bit of animation over the length of the shot.
02:21All right, so that point now animates.
02:24Let's push in a little bit, so we catch the action.
02:27If I select this point and I say copy>1 point (values), so that's copied it into
02:36the clipboard, then I'll select another point and I'll say paste>1 point
02:42(values), so this point is now coincident to the other point.
02:47However, the other point is moving, so this one doesn't move with it.
02:50If I want it to move with it, I'll go to my source point right-mouse pop up,
02:57copy>1 point (animation).
03:01Select the other point, right mouse, paste>1 point (animation) and now they move together.
03:10However, if I wish to reposition the original point like so, the other point
03:17doesn't go with it because it just has a copy of the animation, they are not
03:21linked, I want to link them.
03:23Then I'll go to my point, right mouse pop up, copy>single point link.
03:30Select the other point, paste>single point link, and now the points are
03:38actually linked together.
03:40So, no matter how I edit the first point, the other point will follow.
03:45You can also paste the point link into a Transform Node, or Corner Pin
03:49node, any one you want.
03:49I am sure you'll find these new copy features very helpful, and if you work
03:54in stereo you'll really appreciate these next new features.
Improving productivity with the new stereo features
00:01I've restarted Nuke to show you some new productivity features for working
00:05in stereo with the Roto or RotoPaint Nodes.
00:08First let's add a Roto Node;
00:10I want to show you something in the Property panel.
00:13So cursor in the Node Graph, type O to get a Roto Node, type 1 to hook it to the Viewer.
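If you'd rather script that setup than use the hotkeys, here is a minimal sketch using Nuke's Python API, run from the Script Editor (not part of the on-screen demo):

    # Minimal sketch: the scripted equivalent of pressing O (Roto) and 1 (Viewer)
    # with the cursor in the Node Graph.
    import nuke

    roto = nuke.createNode('Roto')   # drops a Roto node at the current position
    nuke.connectViewer(0, roto)      # Viewer input 1 is index 0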
00:18Now I'm going to float Property panel down here to show you this, and then
00:22we'll open up the Project Settings and click on the Views tab, now watch this
00:28part of the Roto Property panel, when I click on the Set up views for stereo, bang, see that?
00:34We got these new stereo fields.
00:36Okay, we're done with the Project Settings and I'm going to redock my Roto
00:41Property panel into the Property bin.
00:44Next we'll need a stereo pair, so let's click in the Node Graph and type R and
00:49we'll select the Exercise Files folder and browse to our stereo_pair.exr, open
00:54that, hook it up to our Roto Node, we will tidy up here. Okay, so let's push in
01:01with the H key, take a look at our left and right views, here's left, right,
01:06left so this is in fact a stereo pair, let's push in on this tombstone, I'm
01:11going to use that to draw a stereo roto pair.
01:14So I'll select Cusped Bezier, ah we don't need this Read Node any more, so let's
01:20clear that, so we can watch the action over here in the Roto Property panel.
01:23First I'll set the view for a left and right, then from the Tool tab, I'll
01:28select Cusped Bezier.
01:32So I'll click, click, click, click and close.
01:37And we'll make sure that the view is set for both left and right.
01:41I now have two shapes that are coincident in the left and right views, but they're
01:45right on top of each other.
01:46You can tell that right here, because this Bezier has both the left and
01:49right views, but if I toggle the Viewer, nothing happens.
01:53So what I want to do is make my right view, so I'll switch the viewer to look at
01:59the right view, come over here to the stereo offset and then we'll split off the
02:05right and then we'll adjust the position of the view.
02:10Now if I click on the left and right views, you can see my shapes, top, left, and right.
02:16And if I deselect, we can see it without the control points.
02:21There's another workflow, if you have a disparity field created by Ocula,
02:25you can use that to automatically offset the other view.
02:28Now you don't need to have Ocula, you can have a buddy with Ocula and he can
02:32render the disparity fields for you.
02:34So let's take a look at that workflow, I'm going to delete this node, and point
02:39out that I do in fact have a disparity field here for you.
02:42Okay, so let's go back to the RGB view and we'll add a new Roto Node and we will
02:49tell it that we want to use our Cusped Bezier all around here, there, there,
02:53there and return to close.
02:56Now we have both left and right views here and this one Bezier has the two
03:00views, again, coincident on each other.
03:03This time we'll use the disparity field we got from Ocula to do the
03:06offset automatically.
03:09So I'm going to go over here, right mouse pop up and say, Correlate points, I'm
03:15going to correlate from the left view to the right view, it's tradition that the
03:19left view is the hero view, but you can do anything you want.
03:22So I'll set that and then click OK, and there you can see I have my other shape.
03:27I'll deselect here, and again, I can go right left, right, left, I have a new shape.
03:33Now you might notice that the shape doesn't quite fit exactly right, so let's
03:38try that again using the other option.
03:41I'm going to delete the right view, I'm going to tell it that I need both left
03:46and right again, so now I have left and right views here.
03:50And this time I'm going to pop up and say Correlate average.
03:53See, Correlate points puts the control points exactly where the
03:57disparity map says.
03:59Whereas Correlate average does a little smoothing and averaging on
04:03the disparity map; sometimes that works a little better.
04:05So we'll select this option here, and again, we're going to correlate from the
04:10left view to the right view, click OK, deselect and switch to the right view,
04:19left view, right view, left view.
04:23The really big news with the Roto and RotoPaint Nodes is the complete rewrite to
04:27improve system response time and make smaller Nuke scripts.
04:31I'm sure you'll find the new Roto Shapes and Tool Settings helpful and if
04:35you work in Stereo, these new stereo features will definitely speed your
04:39shot development along.
2. The Tracker Node
Exploring autotracking upgrades
00:00The Tracker node has been substantially rewritten with many very cool
00:04new features added.
00:06At last we now have unlimited trackers and they're managed in an all-new tracks list.
00:11Keyframe tracking has been added, plus a one click track averaging feature, the
00:16snap to markers feature makes it much easier to plant accurate keyframes, and
00:21the new export options include the automatic creation of a corner pin node.
00:26What we used to just call tracking is now called auto tracking to differentiate
00:30it from the keyframe tracking.
00:32So let's take a look at just auto tracking first.
00:36Our Exercise File here is Lab Guy, find that folder here and load this clip if
00:42you want to play along.
00:43I am going to use the tab key search function to go find a Tracker node, and
00:48hook it into my Read node and rezoom the viewer.
00:52So two immediate differences are, the Property panel here no longer has the
00:57options and buttons that we've seen before, they've all been moved up here and
01:01there's a lot more of them.
01:03There's been some important new features and functions added.
01:05We'll be taking to look at those in just a bit.
01:08So, first up there are three ways to add new trackers to your clip.
01:13Over here is the Add Track button, we turn that on, notice that it turns red,
01:17and all I have to do is go click, click, click, click, and I have added new
01:21trackers, I'll turn that off.
01:25You can also just use the Shift key, just hold down the Shift key and click,
01:28click, click, click.
01:29Okay, also there is the Add Track button very much like the old one.
01:35When I click on Add Track, it drops a new tracker in the old position, and then
01:40you have to pick it up, and move it to where you want it.
01:43A big new feature are these zoom windows here, this really helps with the
01:48precise positioning of things.
01:50The little window here shows you your previous setting. So watch this.
01:55I'll move the big window over to here and watch the little window jump to
01:59match it, then I'll move the big window down to there, and the little window jumps to match it.
02:04So, it gives you a kind of a history, of where your cursor used to be, put that back.
02:09Now let's take a look at this Track list, every tracker you create is added in
02:13to this list, and we get a little more window, so I can show you whole thing,
02:18here is your familiar Enable button, and there is of course the name of your
02:24track, this is the track X and Y data that you've seen all along. But this is new.
02:28Here's your Offset X and Y, in case you do offset tracking, these are the
02:32familiar TRS buttons for each tracker and another new thing the Error column,
02:37selecting tracks is a little different now.
02:39So, for example, over here is track 9 sitting all alone, if I select the track 9
02:44track, it lights up and adds the reference box and the search box.
02:50If I want to select multiple, I hold down the Command or Ctrl key.
02:55I'll select this track here and then do a Shift+Click down there, and I get a
03:00whole block of them.
03:02Okay, I am going to select all and delete tracks.
03:05Next, let's take a look at the very important new average tracks feature.
03:10Let me show you how that works.
03:11Make sure, our play head is on frame one.
03:14I'll come up here to this corner and let's say, I would like to average four
03:18trackers around this area here.
03:19So, I'll use the Shift button, Shift+Click, add a tracker, Shift+Click, add
03:23another, Shift+Click and Shift+Click.
03:26I now have four trackers. To track them all, we will zoom out a little bit;
03:30I'll do a Shift+Click to pick the whole list, then we will track
03:34forward. Our track forward and reverse buttons are all up here now.
03:38Same as before, just in a little different location.
03:42Okay, so we're tracking all four points over the entire length of the shot.
03:47Okay we're all done now.
03:48I'll put the playhead back to frame 1.
03:51Now if I deselect the tracks, notice they all turn a lovely color.
03:55So each one of the tracks is now a different color.
03:58If I want to do an average of these four tracks, it could not be easier, I'll
04:03just click the top one, Shift+Click on the bottom one to select all the tracks I
04:07want to average and I'll click on average tracks.
04:10After months of computation, Nuke comes up with a brand-new track for me, that's
04:15the average of those four.
04:17We now have a new track here named Average track 1 and of course, we can rename
04:22these anything we want.
04:23And then we still have our other four tracks. Let's push in here.
04:26Here is my new Average track 1, I can keep the old tracks or I could select them
04:33and make them all go away, so that I just keep my Average track.
04:38It's not a link, it's actually baked in, so I can lose the originals. All right!
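Under the hood, "average tracks" is just a per-frame mean of the selected tracks' positions, baked into the new track. A quick plain-Python illustration, using made-up coordinates in place of the four tracked points:

    # Per-frame average of several 2D tracks (hypothetical sample data).
    tracks = {
        'track1': [(120.0, 340.0), (121.5, 341.0)],   # (x, y) per frame
        'track2': [(160.0, 338.0), (161.2, 339.1)],
        'track3': [(118.0, 300.0), (119.4, 301.2)],
        'track4': [(158.0, 302.0), (159.1, 303.0)],
    }

    num_frames = len(next(iter(tracks.values())))
    average_track = []
    for f in range(num_frames):
        xs = [t[f][0] for t in tracks.values()]
        ys = [t[f][1] for t in tracks.values()]
        average_track.append((sum(xs) / len(xs), sum(ys) / len(ys)))

    print(average_track)   # baked values, so the original tracks can be deleted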
04:42Let's rehome the Viewer and take a look at another interesting new feature
04:46called the snap to markers feature.
04:48I am going to delete this guy here, and go to the Settings tab and down
04:54here snap to markers.
04:56By default, it's off and you have a choice of blobs or corners and these are the
05:01kinds of features, the snap to future it's going to look for.
05:04Let's turn that on.
05:06I'll do a Shift+Click to add a new tracker, shift back to the Tracker tab, so I
05:10can see the onscreen reference and search boxes, see this little green circle,
05:15watch what happens, when I move the tracker around the screen.
05:19The green circle jumps to different landmarks, it has a love of corners, okay.
05:26And the other cool thing is once it's snapped to the corner, if you let go,
05:31the tracker will jump to that position and you can see right here it's
05:34magnificently lined up with the center of the corners.
05:38These new auto tracking features will really help with your 2D point tracking.
05:42Now let's take a look at the all-new keyframe tracking.
Introducing keyframe tracking
00:01Keyframe tracking is used for more difficult tracking targets due to either
00:05complex image content, or rapid motion.
00:08The idea is to plant periodic tracker keyframes that will guide the tracking
00:12calculations from keyframe to keyframe.
00:16I have a fresh Tracker Node here that I can use to show you how it works.
00:20I am going to zoom in here;
00:22I am using the Shift key.
00:24I am going to plant a tracker right on this box.
00:30Notice, I've got a keyframe on the timeline.
00:32I am going to move the playhead partway through the shot, I will reposition the
00:38tracker, and again, I got a keyframe and notice I now have two keyframe windows,
00:44each labeled with what frame they are on.
00:46So, let's go forward a little more, reposition, a new keyframe window, and we
00:55will go to last frame here and reposition there, and there's my fourth keyframe
01:01window, again, each neatly labeled with what keyframe they're on the timeline.
01:05Now I can jump between the keyframes by either clicking on the zoom window
01:10or come down here on the timeline and use the either Next Keyframe or
01:14Previous Keyframe buttons. All right!
01:16So, let's say I'm happy with my keyframes and I'm ready to do Keyframe Tracking.
01:21Unlike Auto Tracking, where I have to make sure the playhead is on frame 1 or
01:26exactly where I want to start tracking, with keyframe tracking it doesn't
01:30matter where the playhead is, you just come up here and you click on Track All
01:34Keyframes, and we're done.
01:39So, I now have tracking data on the entire timeline.
01:42Now here is something very interesting.
01:45If I come up here to the Delete All My Tracking Data Button, click, notice that
01:50it has left behind all my keyframes, that button does not delete your keyframes,
01:56unless you have enabled this button, which is the Delete My Keyframe Along With
02:00The Data Button, turn that back off.
02:02Now that we've seen how the Key Track All Button works, let's take a look at the
02:07Key Track Current Button right here. Here is the idea;
02:11my playhead is between these two keyframes here and there, that's what
02:16this button is for.
02:17When I click that button, it'll only track between the keyframes that are on
02:21either side of my playhead.
02:23So, I'll click that and it just tracks only in between the two keyframes on
02:27either side of my playhead.
02:29Now that we've seen both Automatic and Keyframe Tracking, we can now check out
02:33the New Menu Bar up here.
Using the new menu bar
00:00Many of the buttons in the Tracker Property panel have been moved to the top
00:04menu bar, plus it has several new features.
00:06Let's check it out.
00:07Okay, I need a little more screen space.
00:10So, let's move this down, so we can get a good look at all of our Top Menu Bar Controls.
00:15This first button here when it's red as you saw before, this adds tracks, click,
00:20click, click, click, turn that off.
00:22This is the pop-up menu for setting the rules for when to grab New Reference
00:26Frame, this used to be on the Property panel, and this is the threshold for the
00:30error that will trigger the new grab feature.
00:33This button forces the grab of a new pattern;
00:35this of course is our familiar Track Forward and Track Backward buttons, which
00:39are now the Auto Track Buttons.
00:42This is the Key Track All button that we used earlier, and is the Key Track
00:47Current button that we used just a moment ago to track between keyframes.
00:50Re-track on Move Link is enabled by default.
00:55Let's take a look at what that does.
00:58If I have some track data here and if I were to come in and just move one
01:03key point, watch the playhead, boom, bang, it re-tracks everything between the keyframes.
01:10This button here, the Create Key on Move Link, what that does is, I am going to
01:15put the playhead down here for you.
01:17Notice there is no keyframe down here on the timeline, but if I were to move my
01:21tracker, boom, I get a keyframe.
01:24The Auto Tracks Delete Keyframes button means, if you do an auto track,
01:28remember earlier, we did the keyframe track, I deleted all the data, and it left my keyframes.
01:35Well, in this case, if I do an auto track like this, it will overwrite all of
01:43my keyframes, and by default that button is enabled, so my keyframes are now all gone.
01:50These of course are the familiar Add a Keyframe, and Remove a Keyframe;
01:54this is a kicky new button, the show error on track button.
01:57When I turn this on, all of my track points are now color-coded, green means
02:03they are very good values, yellow means they're a little bit wobbly, and if they
02:08turn red, that means they are very wobbly.
02:11So, this feature gives you a visual cue on what parts of your track are reliable
02:15and which parts might be flaky.
02:18This the new position for the Center Viewer button so that centers the Viewer,
02:23and this is the new position for the Update Viewer button, I am going to turn
02:27off that Home Viewer button.
02:28And again, these are our Clear Data, Clear Backward, and Clear
02:32Forward buttons, and don't forget
02:34the clear actions remove keyframes button; by default, this is not enabled, so you can
02:39clear without deleting your keyframes.
02:41If you enable this, your keyframes will be wiped out when you clear your data.
02:45This is the Clear Offset button, so if you're doing offset tracking, click this
02:50to reset back to normal tracking.
02:52This is the Track Reset button, so if your tracker has been resized, you can
02:56just click there to restore it back to the default size and shape.
03:00The New Tracker Workflow Design is to use the auto tracking, for the easy
03:04tracking targets in your clip, then augment that with keyframe tracking for the hard targets.
03:09Next, let's see what has been changed in the Settings and Transform tabs.
Understanding updates to the Settings and Transform tabs
00:00The Settings and Transform tabs are still here, but with some changes.
00:05The Settings tab is very different, while the Transform tab has hardly changed at all;
00:10let's take a look at the Transform tab first.
00:12I have put up the Transform tab from the Nuke 6 Tracker, so that we can do a
00:17straight across head-to-head comparison here.
00:18At the top here there seems to be very little change.
00:21We don't see a difference until we get down here to the Live Link Transform.
00:26When enabled, it recalculates the transform, if trackers are linked to other
00:30nodes and they're moved.
00:32Okay translate, rotate, scale are all the same.
00:35Now here's a new one, skew X and skew Y with the skew order, compared to the
00:39old skew, like all the transform nodes in Nuke, skew X and Y have now been broken out.
00:45The Filter and Motion Blur Settings down here are the same as before.
00:49Now let's take a look at the Settings tab and here is the Settings tab in Nuke 6.
00:54Across the top the track channels are unchanged, but we have a new thing
00:58here, pre-track filter.
01:00This pop-up allows you to choose between a couple of pre-processing operations
01:04that are applied to the clip before tracking.
01:06The default, adjust contrast, increases the contrast a bit; median applies a
01:10median filter, and that's good for things like noise and grain.
01:14If you enable Adjust Track for luminous changes, it attempts to compensate for
01:17changes in the brightness in the scene lighting.
01:20The max-iterations, epsilon and max- error are the same as before, but here
01:26are some new toys.
01:27The clamp super-white, sub-zero footage puts a clamp on any code values above 1
01:31point or below 0, they can sometimes spoof the tracker.
01:36This one shows the error on your track paths and this one hides the progress bar.
01:40You saw snap to markers earlier, where we demonstrated its snapping to corners,
01:44here are some zoom window controls and the old Grab New Pattern section has been
01:50put down here into the Auto- Tracking twirled down menu.
01:52Here is the warp type pop-up menu move from the Nuke 6 location, down to the new
01:57Nuke 7 and we have the same exact options.
02:00Remember the warp type is telling the tracker what kind of motion to look for in the clip.
02:06So if you're tracking an element that's rotating for example, you'd want to
02:08choose this option before you do your tracking.
02:11We have many options for different types of grab behaviors and depending on
02:16which behavior you choose, these parameters will wake up down here.
02:19All right, so we'll close the Auto- Tracking options and twirl down the Keyframe
02:24Tracking options, I'll move this down out of the way.
02:27These three options here are duplicates of these three in the menu bar, and
02:31these down here are just some keyframe display options.
02:34As you can see, the Settings tab has major changes to support the new features
02:39in the Nuke 7 Tracker, there has also been a new Export menu added that you'll
02:43want to know all about.
Trying out the new export options
00:00The new Export options menu is down here at the bottom of the Tracker tab.
00:05This pop up gives you several different choices.
00:08This pre-builds nodes for you.
00:10The first choice is the CornerPin2D Node using whatever current frame the
00:14playhead is on, so if I select that, I then have to make sure I choose four and
00:20only four tracks and then I say Create, and there is my CornerPin Node, using
00:26Alt+E to show the Expression Link, it is in fact a link to the original Tracker
00:31Node, I will put that over here.
00:36I could also choose the CornerPin Node that uses the transform reference frame;
00:41that would be on the Transform tab, the reference frame right here.
00:47Now both of these are going to be linked nodes; these two options are the exact
00:51same type of nodes, only the data is baked in, no links.
00:56Here I'm going to create a Transform Node that would be set to stabilize my data,
01:01so I'm going to choose let's say this track and this track, and then I say
01:07Create and there is my Transform Node set for stabilizing data.
01:13Again, it's linked.
01:15The next option is for a linked Transform Node that's set for match move.
01:20And these two options here are Stabilized and Match Move, but again, they're
01:23baked and not linked.
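For a rough idea of what the baked CornerPin exports boil down to, here is a hedged Python sketch: it creates a CornerPin2D and sets its to1-to4 knobs to plain values instead of expression links back to the Tracker. The corner coordinates are placeholders, not values from the video.

    # Sketch of a baked CornerPin: plain values on to1..to4, no links to the Tracker.
    import nuke

    corners = [(100.0, 80.0), (900.0, 90.0), (910.0, 620.0), (95.0, 610.0)]  # hypothetical

    pin = nuke.createNode('CornerPin2D')
    for i, (x, y) in enumerate(corners, start=1):
        pin['to%d' % i].setValue([x, y])   # baked, so editing the Tracker won't update it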
01:26With the addition of so many new features, not to mention the outstanding
01:29keyframe tracking capability, the Nuke 7 Tracker Node is faster to work with and
01:34able to solve even more tracking problems than ever before.
3. Primatte 5
Exploring the new Primatte 5 features
00:01Primatte 5 introduces four powerful new tools that speed up the keying process
00:05and improve results by providing new tools to solve old problems.
00:10Let's check this out.
00:12We'll start over here with the Auto-Compute;
00:15by the way, all these images are in the Tutorial assets folder.
00:18We'll open up our Primatte Node and the first feature we'll look at is the Auto-Compute.
00:24This is a brand-new algorithm that is very effective;
00:27you can get a great key very quickly.
00:32First, I'm going to increase the size of our Viewer area so that we can see more pictures.
00:37Here we go!
00:39Then click the Auto-Compute button and there's our Alpha Channel. Wow!
00:45It automatically selects the background color and does a cleanup on the
00:48foreground and the background.
00:50Note that the Operation has already been set up for Clean Foreground Noise, so
00:54it's done all three of these for you automatically.
00:57You can now move directly to Keying Refinements and Spill Suppression.
01:01We're back to RGB, let's take a look at the new smart select background color
01:08feature, hook this up, open up the New Primatte.
01:13This new feature is actually the first operation in the Operation stack.
01:17To show you how effective this is, I'm actually going to use the original Simple
01:24Select BG Color operation and compare.
01:27So we'll select that, come up to our picture, select a region of green screen
01:32pixels to set the background color, and then we'll look at the alpha channel,
01:36look at that, all the transparency.
01:39Now, I'm going to now switch to the Smart Select BG Color tool and I'm going
01:43to carefully select exactly the same area, bang, look at that, the foreground
01:49is much more solid.
01:51That's because the new Smart Select Background Color algorithm actually uses a
01:54histogram analysis for separating the foreground from the background and it also
01:58performs a little clean foreground noise operation internally.
02:02In fact, all I have to do now is set the Clean Background Noise, couple of
02:07strokes here and there, and we're ready to go to launch. Next up;
02:13the new Adjust Lighting feature. This is very cool!
02:18How many difficult keys have you tried to solve because the backing region is unevenly lit?
02:23Here we have hotspots over here on the site and darker spots over here.
02:28Well, the Adjust Lighting feature actually corrects the green screen backing to
02:33give it a much more uniform lighting.
02:35Let's see how that works.
02:37First, we'll select Auto-Compute and then we'll turn on the Adjust Lighting and
02:42now let's switch to the Alpha Channel to take a look.
02:44I'm going to toggle the Adjust Lighting on and off and you can see where it's
02:48actually cleared out a lot of the haze in the background.
02:51The thing I love about this is it doesn't disturb the edges of your key,
02:56there's no degradation, there's no dilate or erode, beautiful, re-home that, go back to RGB.
03:03Now along with the Adjust Lighting feature there's two new Output Mode options.
03:09First the Adjust Lighting Foreground, this shows you the green screen after
03:14it's been affected.
03:15In fact, watch what happens when I toggle Adjust Lighting on and off, okay,
03:20this is the original;
03:21this is the corrected, original, corrected.
03:23Now there is another feature in the output, the Adjust Lighting Background.
03:29What this is doing is it's actually building a clean plate, and it's using that
03:33clean plate in order to help pull the key.
03:36This allows you to dial in the clean plate if you wish.
03:38We can open up Adjust Lighting, the Threshold Slider as you move it to the
03:44right, brings more of the foreground into the clean plate.
03:47The Adjust Lighting algorithm actually divides the picture up into a grid, so if
03:51we increase the grid size, we get more fine detail in the grid, okay, so we'll
03:56put those back to default, close that up and return to our lovely composite.
04:03Next is my personal new favorite the New Hybrid Matte feature.
04:08Let's open up that Primatte, close this, hook up the Viewer, check this out.
04:15The Hybrid Matte feature actually creates a core matte inside of the Primatte
04:19Node, you may never have to use additional keyers again to create your
04:23composites, now that you could do it all in Primatte.
04:26First, we'll select the Auto-Compute, we'll go to the Alpha Channel and we can
04:32see we have all this transparency in here.
04:35The Hybrid Matte feature is especially useful when the foreground colors are
04:40similar to the backing color, like somebody shoots blue jeans on a blue screen,
04:44like who would ever do that, right?
04:48Watch what happens as I toggle the Hybrid Render feature on and off, you see,
04:52it's adding in the core matte that's created internally in Primatte.
04:58Now we can see this core matte, come down to the Output Mode, pop-up Hybrid
05:03Core, there's your core matte and now we can adjust that Hybrid Matte, you can
05:09dial in the erode or adjust the blur radius, I'll put those back to default.
05:18You can also see the Hybrid Edge.
05:21What this really is is your original key.
05:24So hybrid edge will show you the original key, hybrid core will show you the
05:29core matte, and then go back to composite, you'll see the two together.
05:33And you can actually just toggle that node on and off to see the effect of it, outstanding!
05:39Primatte 5's powerful new tools and smarter algorithms promise faster and
05:44higher quality keys with less cleanup work, and the awesome new hybrid render
05:48feature, virtually eliminates the need to use multiple keying nodes to create
05:53your core mattes.
4. The New MotionBlur Node
Setting up and using MotionBlur
00:00The new MotionBlur is a NukeX Node that was lifted from the F_MotionBlur Node in
00:05the FurnaceCore plugin set.
00:07It uses Nuke's motion vector technology to intelligently apply a high quality
00:12motion blur to the moving parts of the clip. It supports GPU processing for much
00:17faster rendering, but that requires certain NVIDIA GPUs and CUDA drivers.
00:26The images we're using here are in the Tutorial assets.
00:31We'll find the MotionBlur Node up here in the Filter tab and there is our
00:35MotionBlur Node, and we'll hook it up to the source clip there.
00:42Let's make a little more room for our Viewer, hit H to get our maximum
00:48viewer, maybe a little more, okay, let's zoom in a bit, see what we got, let's
00:55start with the Shutter Time, here's what this number means.
00:58A Shutter Time of 0.5 is a 180-degree shutter, which is like a normal film camera;
01:04a shutter time of 1 is
01:06a 360-degree shutter, so you're getting a lot of motion blur, and a
01:10Shutter Time of 2 is going to get you even more.
01:13Now notice that we're getting sort of a double or triple exposure here, that's
01:19because we need to increase the shutter samples.
01:21Right now we're only getting three samples, you can actually see, one, two,
01:24three copies of the image.
01:26So if we tap that up to 4, it looks better, 5, it looks better, 6, there you go.
01:32So you turn that up until it smooths it out, keep in mind, the more shutter
01:36samples, the more processing time.
01:40Next, let's take a look at the Vector Detail value.
01:44The Vector Detail value increases the density of the motion vectors, which gives you
01:48a higher quality render if you've got a lot of fine detail in your picture.
01:52So if you set the Vector Detail to 1;
01:56that means you're going to get a motion vector calculated for every pixel on the screen.
02:00If you set it for 0.5, you're going to get a vector for every 2 pixels,
02:06and at the default of 0.2,
02:09you're getting one vector for every five pixels.
02:11Now let's take a look at what that means.
02:14Let's scoop down here and I'm going to push into this part here.
02:19I'm going to set the Vector Detail down to 0.1, which is 1 vector, every 10 pixels.
02:27Now, I want to make a copy of the Motion Blur Node, hook it up to this
02:32source, hook my Viewer up to it and then I'll set the Vector Detail to 1,
02:37maximum resolution.
02:39Now as I toggle between the two, you can see the difference in sharpness.
02:44It's even more noticeable over here.
02:50So the greater the vector detail, the more detail that's kept in your picture, but
02:54once again, there is a processing price to pay.
02:57Down here, the Matte Input.
03:01This is the input from matte that you might draw over your foreground character
03:05in order to isolate him from the background.
03:07This will prevent tearing of the background that you sometimes see with the
03:10motion vector processes.
03:12The foreground vectors input here is if you're using the vector generator node
03:16to precalculate the vectors; this is a good idea if you're going to use those
03:20vectors in several locations, otherwise the motion blur node calculates its own vectors.
03:25A kicky thing you can do is to use the vector generator node to calculate
03:29vectors from another clip, feed them into this motion blur node and apply them to
03:33a different clip, gets you some very interesting effects.
03:35Let's take a look at actually a classic setup.
03:39Here you have my Viewer, set it to ping-pong;
03:43I will play this for you.
03:45This clip has absolutely no motion blur, so you're getting horrible motion strobing.
03:49So we're going to use the motion blur node to fix that, there you go.
03:56Now let me play that and you'll see with the motion blur it looks a whole
04:01bunch more natural.
04:02Stop that and you can actually see how the motion blur is changing its
04:06direction and its intensity on different frames, depending on the speed and
04:10the direction of the target.
04:12If you're doing a speed change using Oflow or Kronos, they'll take care of the
04:16motion blur as well.
04:17The Motion Blur Node is for situations like this, where you have motion
04:21strobing in a clip and need to give it a realistic motion blur.
5. The New ZDefocus Node
Setting up and adjusting
00:01ZDefocus is a major upgrade to the old ZBlur node and includes GPU Acceleration,
00:06as well as an improved algorithm, plus several new features.
00:10Considerable effort has been put into improving its handling of edges in
00:14occluded areas, compared to the old ZBlur node.
00:18Although ZDefocus node is GPU accelerated, for the GPU processing to work, it
00:23requires certain NVIDIA GPUs and CUDA drivers.
00:25We'll start off by looking at the ZDefocus on a CG element; if you'd like to
00:31play along, you can get ZFighter.exr and the ZFighter BG.dpx from the Tutorial assets.
00:39Now the ZFighter.exr file has its own depth channel built in.
00:44By the way, critical point, the depth channel must not be anti-aliased;
00:49if it is anti-aliased, it will introduce edge artifacts.
00:52So make sure your depth channels are not anti-aliased.
00:56If your depth channel is anti-aliased, then you can unpremultiply it with the
01:00alpha channel to back out the anti-aliasing. Okay, back to RGBA.
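The arithmetic behind that unpremultiply fix is simply dividing the depth sample by alpha wherever alpha is non-zero; a sketch of the idea (in Nuke itself you would reach for an Unpremult node pointed at the depth channel):

    # Backing anti-aliasing out of a premultiplied depth sample: depth / alpha.
    def unpremult_depth(depth, alpha):
        return depth / alpha if alpha > 0.0 else depth

    print(unpremult_depth(0.45, 0.5))   # an edge pixel at 50% alpha recovers depth 0.9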
01:04Now we want to comp this over the background, so we'll select the Read node,
01:09type M, get a Merge node, hook that up to the background and we have a lovely
01:13composite, push in a little bit, take a look at the action.
01:16Now in this Merge node, I want to retain my depth layer, which was cut off by the
01:22merge node, so I'm going to tell it to also merge depth, so now the depth
01:27channel comes out the output of the merge node;
01:29all right, put that back to RGBA.
01:32Okay, let's add our ZDefocus node, we'll select the Read node and come to the
01:36Filter Tool tab, click on that, go all the way to the bottom to get the ZDefocus
01:42node, and we'll adjust this up to make it look pretty.
01:47Okay, let's push in and see what we got, well, this is not very nice.
01:51So our first step is we're going to pick up the focal point and put it where we
01:55want it to be focused on, which I'll put it right here.
01:58Ah, much better, the first thing you want to do is make sure that your depth channel
02:01is set correctly; if it's not in the depth.Z channel of your data stream then use the
02:05browser to go get it.
02:07So wherever I place this focal point, that's going to be the part of the picture
02:12that's in focus, you can move it there or you can dial it over here, or you can
02:16even attach it to trackers, so you could do a follow focus if you wanted.
02:20Depth of field, of course, defines how deep the zone of perfect focus is going to be,
02:25with things going out of focus on either side of that; we'll come back to that in a minute.
02:29Size of course is the amount of defocus, so let me push in here, I'll cut the
02:33size down, it gets sharper.
02:35I'll jump the size up, and of course, I get a lot more defocusing.
02:40So the maximum slider sets an upper limit for the blur size, no matter how big
02:44the size value, the blur itself will get no larger than the maximum setting, put
02:49that back to default.
02:50Now let's take a look at this math thing, this math popup selects the rule for
02:56interpreting your depth map. By default the math property is set to far equal
03:02zero, which is the behavior of Nuke and RenderMan, but other apps have different
03:07rules for their depth channel, so you have to choose the right one here, you can
03:11look those up in the user guide.
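One practical way to pick the right math rule is to sample the depth channel at a point you know is far away and see what value the renderer wrote there. A hedged sketch using nuke.sample, where the node name and pixel coordinates are placeholders:

    # Peek at the depth value of a known far-away point to guess the "math" rule.
    import nuke

    read = nuke.toNode('Read1')                       # hypothetical node name
    z_far = nuke.sample(read, 'depth.Z', 200, 150)    # pick a pixel you know is distant

    if z_far < 0.01:
        print('far points sit near 0 -> try the far = 0 rule (Nuke-style depth)')
    elif z_far > 0.99:
        print('far points sit near 1 -> try the far = 1 rule (e.g. a 0-1 roto depth map)')
    else:
        print('depth %.3f -> check your renderer\'s convention in the user guide' % z_far)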
03:13Now let's take a closer look at the output options, result of course is
03:17the defocused image.
03:19We can also choose the focal plane setup, now this is a diagnostic view. This
03:24divides the picture into three colored zones, red is in front of the depth of
03:28field, blue is behind it and green is the depth of field.
03:32Well, our depth of field was set at 0, so let's tap that up to .1, there we go.
03:38So the green part will always be in focus and the red part will get
03:42progressively more out of focus towards the camera and the blue, more out of
03:46focus away from the camera.
03:47As we move the focal point around, we of course can change where that green zone
03:52lands, put it back to here, and if we increase the depth of field value, then
03:57the depth of field gets larger or smaller, so we can adjust that, and this setup
04:01allows you to actually see where it's happening.
04:04If you wish you can also actually dial in the focal plane location right here,
04:08but normally, you'll use the focus point, because what it does is it just
04:12samples the depth channel wherever you drop it off, it fills it in for you.
04:15Another output you might find helpful is the Layer setup.
04:20The way the ZDefocus node works is it actually sorts the image into layers in
04:25Z, you can actually see those layers here, this allows you to adjust the layer rule.
04:30By default the automatic layer spacing is selected and the ZDefocus node has
04:35actually sorted it into what it thinks are the best number of layers.
04:38However, you can turn that off and set it yourself here.
04:42If I set it to 5 layers, you can see I only have 5 layers in front of the
04:47camera. So I can tap that up to increase the number of layers.
04:51The more you increase the layers, the better the quality of the render but the
04:55longer the rendering time.
04:58The layer curve control allows you to control the distribution of the layers.
05:02If you move this down, it stretches them from the focal point and moves them
05:06away, further away from the camera.
05:08If you go in the other direction, it squeezes them, what this does is it gives
05:12you higher quality renders, as you get closer to the focal point.
05:15And we'll turn our automatic layering back on.
05:18Once you have the defocus parameters set, the next step is to dial in the
05:23appearance of the defocused parts of the picture;
05:25we'll look at that next.
Adding depth of field to a live action plate
00:01Now we'll see how to adjust the appearance of the defocused parts of the picture,
00:04as well as how to create a depth map for a live-action plate with no depth
00:09channel, so we can add our own depth of field to it.
00:12Now this area down here is to affect the bokeh.
00:15The bokeh is the brightness and appearance of the defocused parts of the picture.
00:21So let's have our output back to result, and we'll zoom in, to a part of the
00:26picture that has some nice highlights to play with.
00:29Filter type is the shape of the filter;
00:32you have disk; bladed, which can be a heptagon, octagon, or pentagon; or image, in
00:39which case we will have an input image.
00:41We'll start with the disk;
00:42these sliders affect the disk shape.
00:45Now this is not actually the shape of the disk, when the filter shape is set to
00:501, you're getting a solid disk, if you slide that down to 0, it becomes a bell
00:55curved Gaussian type shape.
00:57So you can then slide that back and forth to pick the best look.
01:02The aspect ratio allows you to stretch in X or Y the overall disk shape. If we
01:08choose the bladed filter, you then have a whole list of different parameters
01:13for adjusting the look of the blade, you can spend an entire afternoon playing with this one.
01:18We'll go back to disk, now to show you the image option, I've created a little
01:23shape here with my Roto Node, okay.
01:26So I want to just hook that up here, go back to the composite and we'll zoom
01:32back in to our area here to see the effect of the image, so I'll tell it to now
01:36go look at that image input.
01:38So I'll select the image input for the filter type and it will be looking at
01:42this image here, so let's push in a little closer.
01:48Now the filter bounds affect the results; if I set it for shape, it's just going
01:52to look at that image within the bounding box, but if I set it for format, it
01:57takes a larger view, and I get these lovely little X patterns, which is exactly
02:01what my little shape is.
02:02So this will control the bokeh shape; down here we control the bokeh brightness.
02:08If you turn on Gamma Correction, the ZDefocus node applies a gamma of 2.2 to the
02:13image, applies the filter, and then puts it back to linear, this has the effect
02:17obviously of brightening things up for you.
02:20The Bloom parameter gives you two sliders, one, the bloom gain, let me gain down
02:27my viewer, so you can see this better.
02:31The bloom gain is how much brighter the blooms get, so if I turn this down, you
02:36see they get darker, and I bring it up, and they get a lot brighter, okay.
02:41So we can affect how bright they are with this.
02:43I'm going to leave it to a high value.
02:46The Bloom Threshold is the cutoff point;
02:49any bloom that's brighter than .8, will get the bloom gain, anything below that will not.
02:54So if I lower the threshold, more of those guys are going to pickup the bloom gain.
02:59Alright, I will just turn those off.
03:02Reset the viewer gain, back to default, and re-home the Viewer, and down here
03:08at the bottom of the ZDefocus Property panel, the mask and mix parameters are the usual stuff.
03:13Now let's take a look at the ZDefocus node used for some live action work.
03:17I will cruise over here, hook up my Viewer, so if you'd like to play along, you
03:25can go get the alley.jpeg image out of the Tutorial assets.
03:28All right, so let's see what we got here; I would like to use the ZDefocus
03:34node to add a major depth of field to this shot.
03:37So what I've done is I've used the Roto Node to create a synthetic depth
03:41channel if you will.
03:42I'm going to put the depth Z into the Viewer's Alpha Channel so you can see it,
03:47open up my Roto Node.
03:48So I just drew a little rectangle and pulled out the feathered edges.
03:52The key is to put the output into the depth channel, so that the ZDefocus node can find it.
03:57We are done with that and go back to RGB.
04:04So let's add our ZDefocus node, we'll use the tab search function here and type
04:10zd, and there it is, okay add that in, first I'll check that the depth channel is set
04:18correctly, okay, that's good.
04:19Then I want to move my focal point to here, because I want the foreground to be in focus.
04:25Now this doesn't look right, because I haven't set the math correctly, because I
04:29used the Roto Node, my far distance is 1, so we'll pop up the math, we'll say far
04:36is equal to 1, now we are set up correctly.
04:39Next, let's take a look at the focal plane setup, I have no depth of field, 0,
04:44so let's introduce some depth of field, maybe a little more, more, more, more,
04:48okay and then I'll move the focal point here to walk the depth of field into this
04:53area of the picture, so it'll be in complete focus, and out here is where
04:58I'll get my depth of field effect.
05:01So we'll set the output, back to results, home the viewer, and I'm going to just
05:08punch up the size, just so it is really obvious, there we go, all right.
05:12So I'm going to zoom in here.
05:14So the foreground is completely in focus, and as we walk towards the background,
05:18it gets progressively out of focus, which is exactly what I wanted.
05:23By the way, if you have any old Nuke scripts that use the old ZBlur node, not to worry.
05:27The Foundry kept the old ZBlur node here in the basement in Nuke.
05:31If you get all nostalgic and you want to actually use the ZBlur node, you can do
05:35that by putting a cursor in the node graph, type X to get this little browser
05:39window, make sure it's set for TCL and not Python, then type ZBlur, remembering
05:45that they are case sensitive.
05:46Now we click OK and there is the old ZBlur node.
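If you prefer the Script Editor to the X pop-up, the same node can be created with one line of Python, assuming the ZBlur class is still registered as shown in the video:

    # Create the legacy ZBlur node from the Script Editor instead of the X / TCL box.
    import nuke
    zblur = nuke.createNode('ZBlur')   # the class name is case sensitive, as noted above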
05:50Nuke 7's new ZDefocus node offers major improvements in speed, creative control,
05:55and ease of use, and is equally useful for both CG and live action.
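If you like to script your setups, here is a minimal Python sketch of this live-action chain, assuming Nuke's standard Python API; the ZDefocus2 class name and every knob name below are inferred from the UI labels and should be verified in your own build (hover a knob in the Properties panel to see its script name):

    import nuke

    # Plate plus a synthetic depth channel drawn with a Roto node
    plate = nuke.nodes.Read(file='alley.jpg')      # illustrative path
    roto = nuke.nodes.Roto(inputs=[plate])
    roto['output'].setValue('depth')               # assumed knob name; route the shape into depth.Z

    # ZDefocus driven by that depth channel
    zdef = nuke.nodes.ZDefocus2(inputs=[roto])     # assumed class name
    zdef['z_channel'].setValue('depth.Z')          # assumed knob name
    zdef['math'].setValue('far = 1')               # assumed enum label, matching the Roto-built depth
    zdef['depth_of_field'].setValue(0.1)           # assumed knob name
    zdef['size'].setValue(20)                      # exaggerate the blur so the effect is obvious

    nuke.connectViewer(0, zdef)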
Setting up and adjusting bokeh
00:01In the previous ZDefocus node tutorial, we took a quick look at the Bokeh settings.
00:05In this tutorial, we'll dive in for a much closer look to see the amazing amount
00:10of control you have to dial in the look of a lens Bokeh.
00:13I'll be using this city lights picture that has this depth Z channel already
00:17built in, to get my ZDefocus node, I'll just use the tab search, zd, there it is.
00:23Ah! All set.
00:28And because the image has a depth channel, I already get a default defocus.
00:33We'll be looking at all three of these filter types right here starting with the disk filter.
00:38The first parameter, the filter shape, determines whether the Bokeh is a hard
00:44circle or a soft fuzzy blob, so as you move towards 0, it becomes just a
00:49Gaussian curve, back to 1, a sharp disk.
00:55The aspect ratio will squeeze it vertically or stretch it horizontally, so
01:02you're covered, whether you're working on anamorphic plates or you're working
01:05flat and going out to anamorphic.
01:08Next, the blade filter type, this refers to the bladed iris.
01:16Again, we have the aspect ratio as before, back to default, and here is the
01:21number of blades setting, by default you have five blades, you can see them right
01:25there, but we can turn that to any number of blades we want.
01:29I like 5, so I'm going to put that back.
01:32Now the roundness is how straight the edges are, if I go 100% roundness, way up
01:38here, it becomes almost a circle.
01:40If I go in the opposite direction, the shape becomes concave.
01:44We'll put that back to default, which is just a bit of roundness.
01:50Rotation of course allows you to rotate the filter, so that you can get any
01:56orientation you like.
01:57The inner size and inner feather will show up better if I take the inner
02:05brightness down to something like this, see this dark center, that's what
02:09we're talking about.
02:11So I can change the inner size to make it smaller or larger as I wish, and inner
02:16feather is how soft it is, here we go, we'll put that back to default.
02:21Now here's an interesting little toggle right here, the catadioptric feature.
02:28Catadioptric lenses use a combination of both mirrors and lenses, which produce
02:33a unique Bokeh with a hole in the center, check it out.
02:36We'll turn this on, there's my hole.
02:39You can also adjust of course the size of the hole by adjusting the catadioptric
02:42size, we'll turn that off.
02:46So far we've seen the built in Bokeh shapes, next, we'll see how to use an image
02:51to create a custom Bokeh shape.
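For reference, the same disk and bladed bokeh controls can be driven from Python; this is only a sketch, and every knob name is an assumption taken from the UI labels, so confirm the script names in your build:

    import nuke

    zdef = nuke.toNode('ZDefocus1')              # assumes an existing ZDefocus node with this name

    zdef['filter_type'].setValue('bladed')       # assumed knob name and enum label
    zdef['aspect_ratio'].setValue(1.0)           # <1 squeezes, >1 stretches the bokeh (assumed)
    zdef['blades'].setValue(5)                   # five-bladed iris (assumed)
    zdef['roundness'].setValue(0.1)              # slight curvature on the blade edges (assumed)
    zdef['rotation'].setValue(15)                # rotate the iris shape (assumed)
    zdef['catadioptric'].setValue(True)          # mirror-lens bokeh with a hole in the center (assumed)
    zdef['catadioptric_size'].setValue(0.3)      # size of that central hole (assumed)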
Customizing the bokeh
00:00We have seen the disk and bladed filter types, so let's see what happens when we
00:05supply our own image to create a custom bokeh shape.
00:07Before I select that however, I am going to show you the images I have.
00:11I have this Star Filter here and the important thing about this is there are
00:16really two boxes to be aware of.
00:19The outer box out here is what we call the Format.
00:22The inner box here is the bounding box of the shape.
00:26Note that the shape is off center from the format, it's on the lower left-hand
00:32corner, you will see why this is important in a minute.
00:35So I am going to take the filter input of the ZDefocusNode and hook it up to my
00:39Star Filter, switch back to the ZDefocusNode and zoom in.
00:44Now we will switch the filter type to image, and there you go.
00:48Of course, we can change the size a bit, by adjusting the size and maximum values.
00:54So why was I going on about the format or the bounding box of the shape?
00:59That's for right here, filter bounds. If you say shape,
01:03you are telling it that you only want to use the shape inside the bounding box.
01:07However, if you select Format, that means you want the large outer box and
01:11you can see now the Star bokeh is shifted down to the lower left, matching where the shape sits within the format.
01:20Next thing I want to show is very cool, chromatic aberration.
01:23Let me show you my Chromatic Aberration Node.
01:27I have hooked up a little tchotchke here that allows me to dial in the amount of
01:35chromatic aberration I want, it's just a three channel filter and all I am doing
01:39is an offset of the RGB values, like so.
01:44These offset channels will cause an offset in the bokeh of the image, so
01:49let's check it out.
01:51First, we will hook up the filter to our chromatic aberration and switch the
01:55viewer back to ZDefocusNode, and oops, an error message.
01:59We don't need this any more.
02:02The error message comes right here, by default, it's looking for the alpha
02:08channel to contain the filter input, and that's not what we have here, we have a
02:11three channel image.
02:12So we have to use this.
02:14This means, use the same three channels in that filter input as we are using in
02:18the image, which is RGB.
02:19So we turn that on and ah, much better.
02:22Okay, let's push in and see what it looks like.
02:25So you can see all my bokehs now have a chromatic aberration, Red fringe in the
02:30upper, blue in the lower right, in fact, if I open this guy up again, I can
02:34dial it up and down to increase or decrease the amount of chromatic aberration, very, very cool!
02:41Here I will turn it off and toggle that for you so you can really see the effect.
02:49Okay, we're done with this, so I will close that, go back to the ZDefocusNode.
02:56Next, let's take a quick review of the Gamma Correction and Bloom. Toggle the
03:01Gamma Correction on and the bokeh gets a lot brighter.
03:05You have no control over this because it's doing a Gamma 2.2 change to the
03:09bokeh, relative to the image.
03:10So we'll turn that off.
03:12If you want to dial in your own control, use Bloom.
03:16With the Bloom feature turned on, these two parameters wake up.
03:19The Bloom Gain of course, allows you to make it brighter or darker, put that back.
03:25As you lower the Bloom Threshold, darker and darker pixels get bloomed.
03:31With these powerful new Bokeh features the ZDefocusNode can match the most
03:35obscure lenses for Visual Effects shots, or if you're doing Animation, you have
03:40complete flexibility to generate your own creative looks.
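Wiring the custom bokeh image and the three-channel chromatic-aberration filter can also be scripted; the filter input index and the knob names here are assumptions, so check them against your node before relying on this sketch:

    import nuke

    star = nuke.nodes.Read(file='star_filter.exr')   # illustrative path to the filter image
    zdef = nuke.toNode('ZDefocus1')                  # assumes an existing ZDefocus node

    # The filter input index is an assumption; check the input labels on the node in the DAG.
    zdef.setInput(1, star)

    zdef['filter_type'].setValue('image')            # assumed enum label
    zdef['filter_bounds'].setValue('shape')          # 'shape' = bounding box, 'format' = whole frame (assumed)
    zdef['use_input_channels'].setValue(True)        # needed for a three-channel (RGB) filter image (assumed)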
6. The SplineWarp Node
Exploring the SplineWarp node
00:00The SplineWarp Node in Nuke 7 has received several improvements that speed up
00:04workflow and improve your command and control of the work process. Let's take a look.
00:09Let's go get a new SplineWarp node, we will use the tab search, spl, here it is,
00:13SplineWarp and we will hook it in.
00:19And the first thing you'll notice is there are some new tools in the top toolbar.
00:23Now this row is hide and show for points and splines and on-screen control
00:28jacks, just like the Roto Node.
00:30But down here, these are the new source and source warp buttons, here is the B side
00:36source and source warp, we'll look at those in another video.
00:39But the really good news is there are some really cool new source and
00:42destination spline drawing techniques, let's take a look.
00:45I'll open this up, we'll push in here, so I am going to select my Spline and
00:52draw, click and drag, click and drag, click and drag, draw, return. Okay, the
00:57first method of creating your destination is to draw your source, right-mouse
01:03pop up, duplicate and join.
01:06And notice over here, I now have two Beziers, even though it only looks like one.
01:10Notice also, that because they're linked, you have this ghosty reference here to
01:14which one they are linked to.
01:15I'll make this easier by naming this one the source (src), and this
01:20one destination (dst).
01:22So, now you can see that the src is linked to the dst shape, and the dst
01:27shape is linked to the src.
01:28Now this might seem obvious here, but in a real job these two shapes might be
01:33very far apart, so it's terribly handy to know who they are linked to.
01:36So I'll select the destination shape, come into the Viewer, Command+A or Ctrl+A
01:42to select All the control points in the destination shape, and now, I can size
01:47it up, or I can go in here and do individual control points as I wish. All right!
01:54That's Method 1. Draw a shape and do the duplicate and join command.
01:58Method number 2, let's scoot over here, push in.
02:01I'll draw a shape on this eye and then I'll draw a second shape, so this is
02:10your second method.
02:12You can draw two shapes and then join them, and here is the new tool.
02:17This is the Join tool right here. We'll select that, click on the src, and click
02:24on the dst, and the new Join tool will link the source to the destination,
02:30change their colors, bright red, pale red or pink, put them in the list, and
02:35identify who they are linked to.
02:37We also have some new Preferences, let's go up to Nuke>Preferences>Viewers, down
02:45here draw source stippled, draw destination stippled, I'll close that and you
02:51get this dotted outline.
02:52Now you only get the dotted outline for joined shapes.
02:56If I draw a new shape over here, it's not joined, it's not stippled.
03:01So, I'll turn that off, Preferences> Viewers>Source and Destination Stipple, done.
03:09Let's take a look at some of the new copy commands.
03:12I am going to clear all these out by selecting them and hitting the minus key, I'll rehome
03:17the viewer and let's push in a little bit and I am going to draw one simple
03:24shape here, and I'll draw a second simple shape up here.
03:28Okay, and then I'll join them with our new Join tool. All right!
03:37And again, they are marked as linked right over here.
03:39I'll go select the Selection tool, so I can see my control points. All right!
03:46The new copy commands are when you do the right mouse pop up on a control point,
03:51here's your copy command, this now tells you whether you have one or more curves
03:55and one or more points and you get to choose which you want to copy.
03:59If I select 2 points, the copy command now says you got 1 curve, but you got 2 points.
04:06And if I have both shapes selected, the copy command now says you have two
04:13curves, so you get to choose exactly what you want to do. More control than ever.
04:19Another important new feature is the ability to copy and paste single points
04:23between source and destination curves.
04:24For example, with my selection tool enabled, I'll select this point, right mouse
04:30pop up, say copy this point value, go to my destination shape, right mouse pop
04:37up, paste the 1 point, bang, they are now coincident.
04:43Another important new feature is the ability to select and drag coincident point
04:47pairs together like these two.
04:49First, you have to have both the shapes selected of course, now if I select this
04:54point, I get the on-screen control jack, and now the points are moving together.
04:59I can adjust this and rotate that.
05:02So, this allows you to move the control points and leave them coincident.
05:07I'll click off to the side to deselect.
05:10The bbox dropdown has been replaced by this crop to format checkbox.
05:18Another new feature is the ability to link trackers to source or destination
05:22curves and points independently.
05:24So, we'll select point, right-mouse pop-up, link to a tracker, very nice.
05:32For the next new feature, I need to load a new script.
05:34In this SplineWarp node, I set up several splinewarps, one for the left eye, one for
05:41the right eye, and another one for the face and I can toggle that on and off and
05:45you can see the effect of that.
05:48First I am going to turn off the Overlays with the cursor in the Viewer, type O on the keyboard.
05:53What I wanted to show you here are these Warp sliders right here; Root Warp,
05:57Layer Warp, and Pair Warp.
05:58Now Layer Warp and Pair Warp are ghosted out and they will be ghosted until you select one.
06:03So, let's start with the eyes, I am going to select the eyes folder, which is
06:07what they're calling a layer.
06:08Watch what happens when I dial it down, look at that, I have a slider now for
06:14each layer, which can be of course individually animated.
06:19I'll choose the face layer, and again, dial that down, and I'll put that back.
06:27Next, I can choose just the left eye pair for example, and now the Pair Warp
06:32lights up, dial that down, and put that back.
06:36And the Root Warp is the slider that controls all the warps, so I can dial that
06:42down, and down, to give me a global control over everything.
06:48The new tools and features in the Nuke 7 SplineWarp Node will both speed up your
06:52work and give you more control over your warps.
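Creating the node from Python looks roughly like the following; SplineWarp3 matches the node name seen later in this course, but treat it, and the crop-to-format knob name, as assumptions to verify in your build:

    import nuke

    plate = nuke.toNode('Read1')                   # assumes the plate is Read1
    warp = nuke.nodes.SplineWarp3(inputs=[plate])  # assumed class name

    # The old bbox dropdown is gone; the equivalent is now a checkbox
    # (knob name assumed from the UI label "crop to format").
    warp['crop_to_format'].setValue(True)

    nuke.connectViewer(0, warp)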
Warping techniques
00:00Here we will take a look at the changed workflow when warping with the Nuke 7
00:04SplineWarp Node, and see how the new tools help speed things along.
00:08Now this video assumes that you are already familiar with the Nuke 6 version of
00:11the SplineWarp Node.
00:12By the way, you'll find this face A .tif file in our Tutorial assets.
00:17We will start by using the tab key to do a search on SplineWarp and there it is.
00:26Now we've already introduced an overview of the new features in the SplineWarp Node
00:30in a previous video.
00:31So, here we're going to look at the workflow of actually doing a warp, a little
00:36more room for my SplineWarp please.
00:39So, I'd like to warp this happy-looking guy into kind of an angry purple alien, so
00:45let's start by making an angry mouth.
00:47So, I'll click and drag, click and drag, click and drag, click and drag,
00:51and return to close;
00:53this will be my source warp.
00:55I am going to draw a new shape for the destination warp, again, this is one of the new workflows.
01:00So, I'll select the Spline again, click and drag, click and drag, click and
01:06drag, click and drag, click and drag, return to close.
01:09Now I am going to use the new Join tool, select that, click on the source, and
01:15then click on the destination, and I get an immediate warp.
01:18To check out my correspondence lines, I'll go to the selection tool, which lights
01:22them up and now I can pick the correspondence tool and I'll select the modify
01:28correspondence point tool, and we will adjust that here and there and bring
01:35this up here, okay.
01:39Let's say we like that.
01:39I would like to do a little refinement;
01:42I'd like this control point to be coincident with that control point.
01:45So, I am going to move in here, using one of the new features I want to select
01:50this point and I'll say copy 1 point values, select the other point, I'll
01:57scoot it up a bit, do a right mouse pop up, go to paste, 1 point, and now the
02:02points are coincident.
02:05Now one of the new features is when you have coincident points, you can in fact
02:09control them together.
02:10So, I am going to select both shapes and then select the point and I get this on
02:15screen transform jack.
02:17Now I can refine the position of both of them together like so. All right!
02:23Now to refine the destination shape, I am going to turn off the source shape so
02:27I can see it better and maybe turn off my correspondence lines.
02:31And now I will come in here and edit my destination shape a little bit, all
02:36right, I'll turn my source back on and I can toggle the Overlays off, and then
02:46to see the effect come and go, I can either go to the source image, warp
02:50source, or over here the A side, or warped, or come down here, this is my
02:57favorite, go to the SplineWarp Node itself and use the D-key to toggle it on and
03:01off, that's faster.
03:04Now let's check this out.
03:07To keep my project more organized, I am going to put in a folder, put these two Beziers in.
03:12So, I'll select Root, click plus, rename this 'mouth', select these two, and drop them
03:20in and close them up. Much more organized.
03:22Now one thing I have noticed about my angry mouth is as I toggle it on and off,
03:28you see it's deforming the entire shape of the head.
03:30Okay, we don't like that, we just want frowny mouth, so, I'll turn this back
03:35on, go back to the A side, and I'll draw a new shape around the perimeter to
03:42act as a hard boundary.
03:43So, click and drag, click and drag, click and drag, all the way around, because
03:48I need to lock the outside of this head.
03:50So, let me cruise around here and very quickly edit my control points.
03:57Okay, let's come over here and rename this Bezier; head, and we're going to
04:01turn on the hard boundary feature.
04:05Now watch what happens when we switch back to the A side warped and toggle it on
04:09and off, ooh, let me turn off the Overlays for you and now we can admire the
04:16fact that the head is no longer deforming, okay, cool!
04:19Next, we'll see how to give him an even more angry look.
Animating a warp
00:00Now that we have the head and mouth set up, let's move on to giving him an
00:04even more angry brow.
00:05I will turn my SplineWarp back on.
00:07I am going to switch to the A side unwarped, select my Bezier, come over
00:13here, click and drag, click and drag, click and drag, click and drag, I want
00:18an open shape here, so to let it know I am done drawing, I will just select another tool.
00:24I will refine the position of my points, now that's my source shape.
00:29To make my destination shape I am going to use the new, right mouse
00:33pop-up, duplicate and join.
00:36Now I can pull out on the destination, now if I turn on the warp, I can see how
00:42much I am warping it.
00:46Very nice, so now I will toggle that on and off, and go yes, that looks nice
00:51and angry. Hmmm, but you know what, it's also deforming the nose and that's just not right.
00:57So let's put in a soft boundary to protect that nose from deforming.
01:00I am going to turn the node back on, switch back to the A side unwarped, select
01:08my Bezier tool, scooch in here and click and drag, click and drag, click and
01:15drag, return to close, refine the shape.
01:22Now this is a boundary shape, so I don't need a source and a destination, but I
01:27do need to name this, I am going to call this one nose, and I am going to set
01:32that as a soft boundary.
01:33Again, I can turn off my Overlays, switch to the A side warped, then toggle with
01:41SplineWarp Node on and off, and go yeah, that looks better.
01:44Okay, to clean things up a bit, I'm going to select root and make another
01:48folder and I am going to call this one brow, so I could put these two Beziers in the
01:54brow and keep my job much more organized.
01:59Okay, let's put in a little animation, get up my curve editor here, we'll
02:07select the brow folder. So that we can do the new layer warp, so we'll set
02:11that as a keyframe on frame 1, where my timeline is.
02:15I want to make that 0, and I want to do the same thing for the mouth.
02:18Set a keyframe on frame 1 and make the layer warp factor 0; then, further along the timeline, set the mouth to 1,
02:26select the brow layer, and set that to 1.
02:30So now we have a little animation, all right.
02:32To make the animation look a little more organic, let's put some accelerations on our speed.
02:44So we will select this warp, select that, add a control point here in the middle
02:52and make the brow warp start quickly at the beginning and slowdown at the end.
02:57Next, let's pick the mouth layer, select his warp animation curve, insert a
03:02control point and have him start off slow at the beginning and accelerate at the end.
03:07Okay, that should give our animation a little character.
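The same kind of keyframing can be scripted; the per-layer warp sliders live in the shape list, so this sketch shapes the global Root Warp curve instead, and the knob name is an assumption:

    import nuke

    warp = nuke.toNode('SplineWarp3_1')            # assumes your SplineWarp node carries this name
    first, last = nuke.root().firstFrame(), nuke.root().lastFrame()
    mid = (first + last) // 2

    root = warp['root_warp']                       # assumed knob name for the Root Warp slider
    root.setAnimated()
    root.setValueAt(0.0, first)                    # no warp at the head of the shot
    root.setValueAt(0.8, mid)                      # most of the warp happens early, then eases out
    root.setValueAt(1.0, last)                     # full warp by the tail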
03:09All right, so let's reposition everything, jump the playhead to the first frame here, I
03:15want to ping-pong my playhead for you, and we'll play.
03:23Okay, so we have deformed our purple alien to make him even more angry than
03:27he was to begin with.
03:28These new tools in the SplineWarp Node will not only speed up your workflow, but
03:33also improve your creative control when warping images.
Morphing techniques
00:00When morphing two images together, the first step is to apply a warp to image A
00:05that matches its common points to image B, such as the eyes, nose, and mouth.
00:10Matching image A to image B takes some changes to the setup and workflow,
00:15compared to a simple warp.
00:16Here we'll see how the new tools help with that process.
00:19By the way, you can find our new face B in the Tutorial assets and be sure you
00:26hook up face A to input A of the SplineWarp and face B to the B input, because
00:31we're going to be warping A to B. Okay, rehome the Node Graph.
00:38First thing we will want to do is setup a Viewer wipe so we can bounce quickly
00:42between our two images.
00:43So we'll select Read 2 and type 2 on the keyboard and that way we can ping pong
00:47quickly between the two.
00:48However, sometimes we're going to want to see a wipe, so let's go up to our wipe
00:55controls, set it for wipe, SplineWarp3 on the left, Read 2 on the right.
01:02Now we can use our wipe controls, the fader bar, so we can do a dissolve,
01:07sometimes you want to dissolve like this, sometimes you want to ping pong like
01:11that. So we're ready to go either way.
01:14Now I'm going to turn off the Viewer wipe, and we'll start by jumping over to the
01:20Input 2 and see that what we want to do is change the general outside shape.
01:25This jaw line is very distinctive. So let's start.
01:29We'll select the original A, this is the unwarped A side and switch our Viewer
01:35input to the SplineWarp Node by typing 1 on the keyboard.
01:38Remember this is the A warped and this is the A original, by the way it's
01:43duplicated over here.
01:46So, we'll select our Bezier, click and drag, click and drag, click and drag, click
01:51and drag. And draw ourselves a nice little shape all the way around the perimeter
01:56of our guy, so we can get this head shape to look correct, return to close.
02:02Now give me a moment to tighten up my spline, in the meantime we'll do a speed
02:06change, so it won't take very long.
02:19Okay, I've drawn the outline around the A side and notice that the color of the spline is red.
02:24Now I'm going to switch over to the B view and draw a spline around this head,
02:30and note that the color of this is blue. So blue is for the B side and red is
02:37for the A side. We'll close that, and once again, I'll tighten up my spline, while you
02:47just zip through it.
02:56Okay, there is my B side spline. Note that when I switch to the A unwarped, I
03:02see the red and the B, I see the blue.
03:05If I want to see both of the splines, we click on the A, B button, and now I see
03:10both of the splines.
03:12Now the question is how do I control whether I am seeing the A or the B side, I
03:17can see both of the splines, but what about the pictures?
03:20That's controlled right here, mix.
03:21The mix slider is set to 0, means you're looking at the A side, I'll jump over
03:25to here and then up to there, so now I am looking at the B side and I'll put
03:30that back to default.
03:32Okay, with both the A and the B splines drawn, what we have to do is connect
03:36them. So we'll go over to the Join tool, and we're going to connect, click on the
03:44A to the B and bang, we get this lovely deformation.
03:49All right, what's going on here is the correspondence points are a little
03:53unhappy. That's an easy fix.
03:56We'll click on the Selection tool, so we can see our correspondence points. Then
04:03we'll get the Modify Correspondence tool, and just get these correspondence
04:07points to line up real nice, put them where you want the control to be. I want
04:16to get this jaw just right.
04:20Now we can improve how well this fits by adding some more, so we'll select the
04:23Add Correspondence Point tool; add some points here, maybe over there, how about
04:30here, and down there. And again, the Modify Correspondence tool to tighten them
04:37up, maybe a little bit over there.
04:41Okay, let's say we like that, so to see how my morph looks, cursor in the
04:46Viewer, type O to turnoff the Overlays, then I can type 1, 2, 1, 2 to see how
04:52the shape fits, okay, or turn the Overlays back on and turn my Viewer wipe
05:00controls on, so I can do the fader bar thing.
05:04Now if I don't want to see the splines, I can come up here and turn off the
05:08source and destination spline, now I can just look at the image.
05:12If I turn them off with the Overlays, then I don't get my fader bar.
05:18Let's turn off the wipe controls and let's make a folder. So we'll select Root,
05:23click on plus (+), rename this; head, and pick up these two Beziers, drop them in and fold it up.
05:34Next, let's take a look at the mouth; as we toggle back and forth between the
05:38two, you can see that the mouth on B side is way different than the A side. So,
05:44let's go to the A side, turn off the deformations, I want to draw the shape on
05:49the undeformed image, zoom in, select my Bezier tool, let's draw, draw, draw,
05:55draw, close. A little tidy up here.
06:00I want to keep this real close to the lips, because I want a real tight fit on the B side.
06:10Now we'll use a new tool, I'll select my Bezier, right mouse pop-up and click
06:16duplicate in B, and now if I switch to the B side, there is my spline. It's not
06:23joined yet, so, select that, I'll bring it down here to my destination, and
06:37we're going to bring it in nice and tight and I want to set these corners real
06:41close on this mouth right here and this corner right there, and get it very
06:46tightly fit to the edge.
06:52Okay, let's say we like that, we'll go back to the A-B view, so I can see both
06:59of my splines, and again, I'll select the Join tool and I'll click source A to
07:06destination B, and now I get a lovely puckered mouth. I'll turn the source and
07:11destination splines back on, so I can see what I am doing. My correspondence
07:16points look pretty darn good, but I am getting a little bit -- let me do Overlay
07:20off here, you see I have got little wobblies here, Overlay on.
07:23Well I can fix that by adding some correspondence points, so we'll add
07:27correspondence tool, click here, click there, click there, click there, Overlay
07:32off, ah, much better.
07:36Okay, let's see how well it works with the other mouth, so I am going to toggle
07:39A, B, 1, 2, 1, 2, that looks nice, all right, I am going to keep that and we'll
07:45make a folder for those two.
07:47Select Root, click on Plus (+), double-click, type mouth and pick these two guys
07:53up and drop them in the mouth folder and fold them up for neatness sake.
07:58Now there's another way to organize your shapes list into folders that works
08:01well for morphs.
Animating a morph
00:01Here's an example of this other approach where the shapes are reorganized into A
00:04side and B side folders. Also to save time, I have completed most of the shapes
00:09and joined them already, but here are our folders, we have the B side and the A
00:14side, and how this was set up, I drew all the A side first and then I would do
00:19the duplicate in B, and then move the duplicate down here to the B side. I'll
00:26show you how that works by doing the nose for you.
00:28Let's do the nose together, make sure I have selected my Bezier tool, draw my
00:36shape, tighten it up a little bit.
00:40Okay, so I'm going to rename this nose, and I'm going to slip it inside the A
00:50side folder. Here we are, notice, that it says it's an A side shape and it has
00:56no partner to link to.
00:58Okay, now I'm going to switch to the B side here, and then I'll go back to the
01:04nose and I'll say duplicate this in B, and I get nose1.
01:09Notice that it's a B side shape, so we'll pick that up and put it into the B
01:14side folder, there it is.
01:18Now I'll adjust it to fit. There, now to do the join, we'll select the AB morph
01:31view, so I can see all my splines and I'll turn on the correspondence
01:36visibility, so I can see what I'm doing.
01:37We'll go to the Join tool, click on the source or the A side, and click on
01:44B, the destination side and to make the correspondence line show up, I'll
01:49select the Selection tool.
01:51Okay, now we just got to tighten them up a little bit, so we'll modify our
01:55correspondence point, top of nose to top of nose, center to center, nostril to
01:59nostril, and nostril to nostril, and if I wish to kind of sweeten up the warp a
02:11little bit, I can do that right here.
02:13Okay, let's say we like that and now notice that the nose is an AB shape, as is
02:20nose1, also an AB and also this column shows you which shape they're joined to.
02:26All right, let's do a check on our morph, cursor in the Viewer, type O to turn
02:31off the shapes, and let's do a mix of .5, so we can see half our A side and
02:40half of the B side and I seem to have some issues here.
02:43Okay, the A side is protruding here, I'll go back a little bit, show you the A
02:50and then over a little more to the B side, and then back to about a fifty/fifty mix.
02:54All right, so I need to tuck this in, so I'm going to turn on my Overlays, I'll
02:59go get my Add Correspondence Point tool, come down here and add a correspondence
03:05point, there you go, see it has pulled it right in and then adjust the
03:09correspondence point, so it's straight across. It still has got a little
03:14protrusion here, we'll add another point there and we'll again adjust the
03:17correspondence point, so they're straight across, okay, much better, much
03:21finer, I'll zoom out.
03:24Now let's check the A warp side here, what I wanted to call your attention to is
03:28we've got kind of a jagged jaw line and some wobblies up here. So that can be
03:32fixed over on the Render tab, where we turn up the curve resolution to, let's say
03:376, and it should smooth that stuff out. Ah, much better.
03:42So you can see the difference that that makes.
03:45Okay, the last step is to keyframe some morph animation, so let's go back to the
03:49SplineWarp tab and setup some animation for the mix and the root warp.
03:54The mix is going to be ghosted out until you set it for AB morph.
03:57Okay, make sure our playhead is on frame 1, I'll set the mix parameter to 0
04:03on frame 1, and we'll set a keyframe there, and same thing for the root warp.
04:08We want 0 at the beginning of the shot, and we need to set a keyframe there as well.
04:16Then we'll jump to playhead to the end of the shot and we'll set the root warp
04:20for 1, and the mix for 1.
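Scripting the morph animation is straightforward once the shapes are joined; 'mix' matches the UI label, while the root warp knob name is again an assumption:

    import nuke

    warp = nuke.toNode('SplineWarp3_1')            # assumes this is the morph node
    first, last = nuke.root().firstFrame(), nuke.root().lastFrame()

    for name in ('mix', 'root_warp'):              # knob names assumed from the UI labels
        knob = warp[name]
        knob.setAnimated()
        knob.setValueAt(0.0, first)                # all image A, no warp, at the head
        knob.setValueAt(1.0, last)                 # all image B, full warp, at the tail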
04:24I'll jump the playhead to the beginning of the clip and play, and there you have
04:37it, a lovely morph between two static images.
04:40Of course, if your images are moving, then all the shapes will have to be
04:44keyframed like any roto job, we'll stop this.
04:49The new tools and workflow layout in the Nuke 7 SplineWarp Node will be a real
04:53boon to your morph production jobs, for both productivity and artistic control.
7. Deep Compositing
Setting up a deep composite
00:01Deep images and deep compositing are a major new technological development
00:04for Visual Effects.
00:06Used for incredibly complex CG renders and compositing of Avatar, it's now an
00:11industry standard supported in the release of EXR 2.0.
00:14All the images here are in your Deep Compositing folder of the Exercise Files.
00:20The deep nodes are over here in the Deep Toolbar, and these are specifically
00:25required for working with deep images.
00:27For example, if we want to read in a deep image we have to use a DeepRead node.
00:32So what are deep images?
00:34Deep images have many layers of additional depth and transparency compared to a regular image.
00:40Let's take a look.
00:40I'm reading in this deepFalcon image, which is in the EXR file format.
00:45So I have RGB, Alpha, and I also have a deep layer.
00:51Of course deep data is not at all interesting to look at in the Viewer because the
00:56numbers are so huge, so we won't do that anymore; back to RGBA.
01:00But we can see what the deep samples are.
01:03We hook up a DeepSample node, open it up.
01:06When I move the position indicator on top of my CG image, you can see all
01:12the deep data here.
01:13For that one pixel it has all these different depths, so here's your depth here,
01:19and then RGB values, and then a transparency.
01:22You have all those different values for that one pixel.
01:26Let's look at another one.
01:27Here's another deep image, and again, it's got the deep layer with it.
01:35And if I hook the DeepSample up to this one, and I'll push in here, and I move
01:40my position sample over here, there you can see all the deep image data.
01:44If I move it off, no deep data.
01:50So at this point I have two deep images with their deep data, all I have to do
01:55is apply a DeepMerge right here.
01:58Back that out for you.
02:01The brilliant part about the DeepMerge is there is no Alpha Channel being
02:06used for this composite.
02:08Each pixel is sorting itself out with the element in front or behind, with the
02:12correct transparency.
02:14Since there are multiple transparency samples, then the composite edges are very
02:20nice, even though we're using depth for compositing.
02:22You know that Depth Z compositing will get you bad edges.
02:25We can now move our DeepSample node to look at the composite, so now if I shift
02:30the position on top of the cattail, I'm actually seeing the deep data from the
02:36cattail here, and then the bird on back.
02:39If I move it over here I just see the falcon deep data.
02:44Also in the DeepMerge operation I now have an Alpha Channel that is the
02:49combination of the two layers.
02:52This will become very important in just a minute.
02:54I'm going to close my DeepSample Node, switch back to RGB, to take it down
03:01to here, DeepToImage.
03:07At this point my data is deep data, but here I'm converting it to flat data, so
03:12I can do a comp over a regular background, and we can see that comp here.
03:19So I had to turn it to a flat image in order to composite it over this flat background.
03:25Deep data works with deep data, but deep data does not work with flat, so you're
03:29converting back and forth between flat and deep as required for your shot.
03:33Now, regular 2D nodes will not work with deep data.
03:36Let me show you here.
03:38I'm going to bring in a Grade node and it will not hook in, because the Grade
03:43node knows that this is a deep image.
03:46However, if I bring it down here, I can hook it up. No problem I'll delete that.
03:54So you see you have to use the deep nodes with deep images and the regular 2D
03:58nodes with flat images.
04:00Now, in some workflows the deep data is separate from the RGBA.
04:04Let's take a look at that.
04:09Here, this is a regular RGBA image using the standard read node.
04:14It has an Alpha Channel, but it has no deep channels.
04:20Let me put that back to RGB.
04:22I'm going to close this.
04:25So the deep data comes in here, in a DeepRead node.
04:30The way we get the RGB data and the deep data together in the same data stream
04:34is with the DeepRecolor node, that's what this node is for.
04:38Now, this RGB and Alpha data is now combined with this deep data in one single
04:44image, and there it is.
04:50Same thing with the reed render.
04:52This is a standard Read node.
04:53This is only an RGBA image, there is no deep data here.
04:58Here's my deep data over here for this element.
05:02Use the DeepRecolor node to join them together.
05:04Notice the deep data comes in on the depth input and the RGB comes in on the color input.
05:13At this point I now have deep data for my falcon and deep data for the reed, so
05:18all I need to do is a DeepMerge to do the composite.
05:22And again, DeepToImage to turn it flat, and then I can composite it over my background.
05:28The results of these two workflows are identical.
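Here is a sketch of that second workflow (separate flat RGBA renders recombined with their deep data) built from Python; the class names mirror the Deep toolbar labels, but some may carry version suffixes in your build, and the DeepRecolor input order is an assumption, so check the input labels on the node:

    import nuke

    # Deep data plus the matching flat RGBA render for each element (illustrative paths)
    falcon_deep = nuke.nodes.DeepRead(file='deepFalcon_deepOnly.exr')
    falcon_rgba = nuke.nodes.Read(file='falcon_rgba.exr')
    reed_deep = nuke.nodes.DeepRead(file='deepReed_deepOnly.exr')
    reed_rgba = nuke.nodes.Read(file='reed_rgba.exr')

    # DeepRecolor: deep data on the depth input, RGBA on the color input
    falcon = nuke.nodes.DeepRecolor(inputs=[falcon_deep, falcon_rgba])
    reed = nuke.nodes.DeepRecolor(inputs=[reed_deep, reed_rgba])

    # Depth-sorted merge of the two deep streams, then flatten for a normal 2D comp
    deep_merge = nuke.nodes.DeepMerge(inputs=[falcon, reed])
    flat = nuke.nodes.DeepToImage(inputs=[deep_merge])

    background = nuke.nodes.Read(file='background.exr')
    comp = nuke.nodes.Merge2(inputs=[background, flat])   # flat foreground over flat background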
05:31Deep compositing solves the edge artifacts encountered when trying to do Depth Z
05:35compositing with regular flat images that only have one depth channel.
05:39While deep images can be very large file sizes, the main advantage is that
05:45compositing with them saves a lot of rendering time.
05:48In the next segment, we'll see why this is so.
Processing deep images
00:01Nuke 7 supports several of the typical image processing operations for deep
00:05images such as color-correcting, transforming, cropping, and reformatting.
00:09But, you can only use the Deep nodes with deep images;
00:12the ones we find here on our Deep Tab.
00:17All the images I am using here you'll find in the Deep Compositing folder of
00:20the Exercise Files.
00:22So, let's start with the DeepColorCorrect.
00:25We'll push into here.
00:28And I am going to put the ground plane up on the screen, and I'll open up the
00:32DeepColorCorrect node.
00:35Let me hook my viewer up to that.
00:38Notice that it looks exactly like the FlatColorCorrect node.
00:42The only difference is we have this Masking Tab here.
00:44We'll come back to that in a minute.
00:47So, I am going to apply Color Correction.
00:49I'll set the Saturation to 0.5, and the Gain, let's make this really blue to
00:55match that night city.
00:59So now if we look at the Comp, we can see we have this very, very blue floor.
01:03Well, what I want to do is control the depth of the color correction, so it
01:08starts here and then gets more blue towards the back near the blue city.
01:12To do that, let me go to the Masking Tab.
01:16The Masking Tab has a trapezoidal curve editor rather like the Keyer node.
01:23For it to take effect, you have to turn on the limit_z button, but watch what
01:27happens, when I turn this on, boom!
01:29I lost all my color correction.
01:31That's because this is now taking control and it says the color-correction will
01:35only occur between a distance of 0 to 1 in depth, and these objects are hundreds
01:41of units away from the camera.
01:42So, I am going to have to put in reasonable numbers here before my
01:46color-correction will look right.
01:49To do that, I'll open up the DeepSample node, and I will sample the ground,
01:54let's say I want it to start right about here.
01:56So, it's 1152, and if I go all the way to the back, it's around 2023.
02:05Okay, so I am going to set the Near at 1100 and the Far at 2100, there.
02:10Now my gradient is being controlled by the zmap curve.
02:18Okay, we're done with that.
02:18Now, let's take a look at the Deep Crop operation.
02:24I am going to move my viewer over to here where we have these deep lampposts.
02:29I'll open up the Crop node and then I'll enable it.
02:33And of course, I lost everything.
02:36The reason is that this znear and zfar are set for very small values right near
02:41the camera, and they're both enabled.
02:43So, if I turn off the Use for zfar and znear, I now see my picture.
02:48So, we'll adjust the crop for the part of the picture we want to crop, there.
02:56There is this very useful option here to keep outside the bounding box.
02:59But here we're going to keep inside.
03:02Now, let's take a look at our composite.
03:05So, now the composite only has the two lights.
03:08But, I can get even cagier than that.
03:10So, what I want to do is a crop in Z where I will crop out one of the light
03:16posts and keep the other.
03:17So, I'll open up the DeepSample node;
03:21cruise around looking for the depth of this light post, it's 1125.
03:24So, I am going to enter some values of 1100 and 1150.
03:31So, I'll turn these on.
03:32I'll enter the zfar of 1150 and the znear of 1100.
03:43And there, those depth ranges kept this light post here, and eliminated
03:49everything outside of that crop.
03:51Of course, we can also invert that with the 'keep outside zrange'.
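The same z-range crop can be scripted; the DeepCrop class matches the toolbar label, but the knob names for the z limits and their enable toggles are assumptions taken from the UI, so double-check them:

    import nuke

    lamps = nuke.toNode('DeepRead2')               # assumes the lamppost element is DeepRead2
    dcrop = nuke.nodes.DeepCrop(inputs=[lamps])

    # Keep only the depth slice containing the nearer light post,
    # using the values measured with DeepSample (about 1125 deep).
    dcrop['use_znear'].setValue(True)              # assumed knob names
    dcrop['use_zfar'].setValue(True)
    dcrop['znear'].setValue(1100)
    dcrop['zfar'].setValue(1150)
    # dcrop['keep_outside_zrange'].setValue(True)  # assumed; inverts the crop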
03:55Okay, I am going to disable the Crop node so we can get back to our picture, and
04:01clear the property bin, and rehome the viewer.
04:03Now let's take a look at DeepReformat.
04:04We can see that here on this dancer element.
04:09I'll open up DeepReformat, and this looks very much like the FlatReformat node.
04:15So, let's switch the viewer back to the composite, and watch what happens when
04:19we set the Type to Scale for example.
04:21It has the box, but we're going to use the scale.
04:24So now I'll inch down the scale factor, and my element gets smaller, and I inch
04:28it up, and it gets bigger, no surprise there.
04:32You can also use the flip and flop buttons.
04:36Now, let's take a look at the DeepTransform.
04:38This works a bit differently than the FlatTransform node. Translate X;
04:43I am going to put in a 10 here, and it does behave rather like you would expect.
04:48Okay, I'll move it 10 pixels in X. I will undo that.
04:54Here, Y. As I inch the Y up, our character goes higher off the ground.
04:59If I inch Y down, something funny happens.
05:02He starts penetrating through the floor, because the DeepCompositing node knows
05:07that he is now below the floor and he crops him automatically.
05:10I will undo that and restore that to default.
05:13The Z does something even more interesting.
05:17It is not pushing it further away from the camera lens, it is changing the Depth value.
05:23So, if I increase the Depth value, and push him away from the camera 100, 200,
05:28there, he went behind that light post.
05:32If I keep going, he starts penetrating into the ground.
05:35Let me go the other way.
05:37As I come towards the camera, he now jumps to this side of the light post. All right!
05:42So, we'll undo that, back to default.
05:44Z scale is different than the translate Z. Translate Z repositions it forwards
05:51and backward in depth.
05:52Z scale actually scales the Depth values.
05:56If I set the Z scale to greater than 1, it moves closer to the camera. Oops!
06:00And I will see he popped in front of the post.
06:03I'll walk that back.
06:05If I set the Z scale to less than 1, it walks it away from the camera.
06:09Now, it's behind the post, and in fact penetrated the floor.
06:12So, we'll put that back to default.
06:19If you want to write a deep image to disk, you have to use the DeepWrite node;
06:23a couple of things you want to know.
06:25If you select the RGBA channels, you'll get RGBA and all the Deep channels.
06:31However, if you select Deep, you'll get the Alpha channel plus all the deep data but no RGB.
06:38And of course, if you want to save a flat image to disk, you have to use the
06:42standard Write node.
06:43And of course, you'll need to use the DeepToImage node in order to make it flat.
06:49Beyond color-correcting and repositioning your deep elements, you'll also want
06:52to do masking and holdouts.
06:54We'll take a look at that next.
Measuring and viewing deep data
00:00While working with deep images is a very powerful workflow, it can be hard to visualize.
00:05Here, we'll look at several tools that will help you to navigate your deep terrain.
00:11All the images that we're using are in the Deep Compositing folder of the Exercise Files.
00:15Let's start out by taking a look at how to measure your deep images.
00:19We have two ways to do that;
00:21the DeepSample node and the DeepGraph.
00:24I'll start by hooking up to my DeepCloud, and open up the DeepSample node
00:28which is connected to it.
00:29If I move this position point around the screen, you can see the deep samples
00:34printing over here in the Property panel.
00:36If I go to a thin area, there are not very many samples, and if I move over to a
00:42thicker area, there's are a lot more.
00:44So you are actually seeing the number of layers plus their values.
00:47Another way to look at your deep images is right here hidden away the DeepGraph.
00:54I am going to close my DeepSamples.
00:57So, I move the cursor over my deep image, it gives a constant update to the
01:02depth and transparency under the cursor.
01:05And remember, this is for 1 pixel.
01:08So, let me zoom in here, and I'm going to plant 1 pixel. There it is!
01:13This 1 pixel, it goes from about 92 to about 71 in depth.
01:20The Vertical Scale is Opacity.
01:22So, this particular pixel doesn't get very opaque.
01:26Let's switch to the Alpha Channel, and you can see that's pretty transparent.
01:32But, if I move my sample over here to this very opaque part of the picture, you
01:36can see that the cumulative transparency has reached 100% Opacity.
01:40Now, I'll close my DeepGraph and restore the Viewer to a normal state.
01:46Another thing to know about the DeepSample node is you can sample it no matter
01:51where you are in the flow graph.
01:53I could be here at the final composite, open up this DeepSample, and as I
01:57move the position point around, I'm getting an update only on this DeepSmoke element.
02:03So, it doesn't matter where the viewer is connected.
02:06Next, let's take a look at how to visualize a point cloud with DeepImages.
02:10Now, this is very much like our DepthToPoints for regular flat images.
02:15This is using just the Z channel, plus a camera.
02:17So, let's see what happens when we have a true deep image.
02:20I'll connect the Viewer to the DeepSmoke element and let's clear the Property Bin.
02:29This DeepToPoints node is connected to the smoke layer and a camera.
02:32You must have a camera just like in the DepthToPoints node.
02:36The difference is instead of having one depth, we're going to have lots of them.
02:41Open up DeepToPoints, switch to the 3D Viewer, and now we can see our deep image in 3D space.
02:48We can swing around, look at it from different angles.
02:51You can also get a sense of its position.
02:54This particular square is 100.
02:56So, that means the back end of my cloud is about 100, and the front end is at about 25.
03:04Not only can you use the DeepToPoints to visualize your 3D elements, you can
03:07use it to align elements like this.
03:10Come around here, push in a little bit, and I am going to open up the
03:16DeepToPoints for the jet.
03:17And now, you can see the jet embedded in the cloud in its correct position in 3 space.
03:24You can also use the DeepTransform node as we saw earlier to move it front to
03:29rear in Z. Okay, I am moving it back by 10 units, 20, 30, 40.
03:33So, I pushed it way behind the cloud.
03:36Now, if I switch to the 2D render right here, you can see that the jet is now
03:43pushed way behind my cloud, or I am going to walk it forward.
03:49Here it is coming closer and closer.
03:52Actually, it's not coming closer to the camera;
03:54it's actually pushing its position inside of the cloud.
03:57I can even walk it in front of the cloud completely.
04:00Next, let's take a look at creating holdouts.
04:04One of the huge advantages of Deep Compositing is the ability to create holdouts
04:10without rerendering.
04:12This is a huge win when working with very complex CG elements like the jungles
04:16of Pandora from Avatar.
04:17Let me show you how.
04:19I am going to start with my DeepSmoke element, and I have a DeepJet here.
04:26I want to create a holdout in the smoke of the jet.
04:29So, I hook up the DeepHoldout node here.
04:33The setup is to connect the main input to the element you want to have the
04:36holdout, and then the holdout input to the element that's going to perform the
04:40holdout, in this case, the jet.
04:42So, right now, I have a cloud with a holdout of the jet.
04:46Over here I have a jet with a holdout of the cloud.
04:50Notice the Alpha Channel.
04:52So, the holdout is affecting the transparency; back to RGB.
04:57Now, here is a key issue.
05:00When you perform a DeepHoldout, the output of the DeepHoldout node is a flat image.
05:05So, notice that my Merge node is not the DeepMerge, it's the regular FlatMerge,
05:10and now I can merge them together.
05:12Here is another critical point.
05:14I'm going to open up the Merge node.
05:16Notice that the operation is Plus.
05:18You must use the Plus operation after the DeepHoldout, and not the default Over operation.
05:23The reason is the Over operation will damage the Alpha Channel.
05:27Here, I'll show you.
05:28I'll switch it Over.
05:30You notice we lost a little transparency.
05:32I'll show you the Alpha Channel. See that?
05:35That's bad.
05:36I'll come back to the operation, put it back to Plus, and we now have a nice
05:40solid Alpha Channel.
05:41Viewer back to RGB, and now we have two flat composited images that we can then
05:49composite over our flat sky.
05:50When properly done, the DeepHoldout composite will be visually identical with
05:56the DeepMerge composite.
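A sketch of that holdout branch in Python; the critical point, exactly as described above, is that the flat Merge after a DeepHoldout must use the plus operation rather than over:

    import nuke

    smoke = nuke.toNode('DeepRead1')               # assumes the deep smoke element
    jet = nuke.toNode('DeepRead2')                 # assumes the deep jet element

    # Each element held out by the other; main input first, holdout input second
    # (the input order is an assumption - check the input labels on the node).
    smoke_heldout = nuke.nodes.DeepHoldout(inputs=[smoke, jet])
    jet_heldout = nuke.nodes.DeepHoldout(inputs=[jet, smoke])

    # DeepHoldout outputs flat images, so a regular Merge recombines them.
    # 'plus' preserves the complementary alphas; 'over' would damage the alpha.
    comp = nuke.nodes.Merge2(inputs=[smoke_heldout, jet_heldout])
    comp['operation'].setValue('plus')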
05:57Now, let's take a look at making our own deep images which you can do to a limited degree.
06:03For this, we'll need the DeepFromImage node.
06:05We'll start with this flat jet image right here.
06:08Now, it has a depth channel, classic depth z channel which you can see right
06:13here, but it has no deep.
06:18So, this flat jet will composite over this flat background.
06:22Let me switch my view back to RGBA.
06:24So this would be as any ordinary composite.
06:27However, if I take my flat jet and hook it up to a DeepFromImage node, I have
06:34now added a deep channel, and we can see that right here; deep.
06:39However, this is the important part, all we've done is taken that depth z
06:44channel, and copied it into the deep channel.
06:47So, we have no new information.
06:50The result is an image that has one deep layer exactly like the depth z
06:54channel, and you'll have the exact same compositing results as if you had used the ZMerge node.
06:59The difference is you can now play with deep images.
07:03We can put up the DeepSample node.
07:05As I move the position point, you can see I am measuring different depths.
07:08And if I open up the DeepGraph, as I move the cursor over the jet, you can see
07:12the DeepGraph reflecting the depth at each point.
07:15But, notice there is only one sample, and it goes from 0 to 100% white, because
07:20this guy has one deep layer instead of multiple deep layers like the cloud.
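Promoting a flat render that carries a depth.Z channel into a single-sample deep image is a one-node operation; a sketch, with the class name taken from the Deep toolbar and the file path purely illustrative:

    import nuke

    jet_flat = nuke.nodes.Read(file='jet_with_depth.exr')   # flat RGBA plus depth.Z
    jet_deep = nuke.nodes.DeepFromImage(inputs=[jet_flat])

    # The result has exactly one deep sample per pixel, copied from depth.Z, so it
    # composites like a ZMerge would - but it can now feed DeepMerge, DeepHoldout,
    # DeepSample and the rest of the deep toolset.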
07:25Deep Compositing is the big new technology in compositing visual effects.
07:29Nuke is an industry leader in supporting this technology.
07:32So, mastering Deep Compositing is a great way to future-proof your
07:36compositing career.
8. The VectorGenerator Node
Understanding the setup and operation
00:01The VectorGenerator produces both forward and backward vectors that can then be
00:05piped to nodes that use vector fields such as Kronos and MotionBlur which we'll
00:09be looking at shortly.
00:11The VectorGenerator supports GPU processing for much faster rendering, but that
00:16requires certain NVIDIA GPUs and CUDA drivers.
00:20The clip I'm using here is in the tutorial assets.
00:25We'll find our VectorGenerator node after we select the Read node and go up to
00:30the Time Tab, and it will be right down here at the bottom.
00:33As soon as you hook it in, it's actually rendered our vector fields.
00:38We can see them up here. I'll pop this up.
00:40And we actually get three different vector fields.
00:43The forward vector field;
00:46the U and V data is put in the red, and the green channels. There you go!
00:51Now, this is the vector field required to take the next frame and morph it into this frame.
00:58So it's like look forward in time to move that frame back to this one.
01:03The next vector field is the backward vector field, and again, the horizontal
01:09values are in the red, and the vertical values in the green.
01:13This is the vector field that will take the frame behind the current frame and
01:17move it forward in time.
01:20The third is the motion vectors.
01:23This is all of them combined into a single four-channel image.
01:26We can see that in the red, and the green, and the blue, and the alpha.
01:32So, depending on how you like it bundled, you pick which one you want to work with.
01:38We'll work with the forward motion vectors.
01:40Okay, let's take a look at what these motion vector values look like. I'll zoom in here.
01:46I'm going to sample a pixel value here, and we can look at it down here below the Viewer.
01:52This says it's 9.8 pixels horizontally, and 1.2 pixels vertically.
01:58So, this is not a picture, it's data about the picture.
02:01Let me sample another spot here.
02:05Here, it's 12 pixels horizontally and -1.3 pixels vertically.
02:09In other words, there are negative code values in here. There you go!
02:13But, in the Viewer of course, they show up as black.
02:16We'll go back here.
02:17We can also take a look at the Red Channel, and by gaining down the Viewer, we
02:24can see the motion vector values here.
02:27I'll put the Viewer back and set back to RGB.
02:29Now, let's take a look at the Property Panel.
02:32Over here we have a couple of useful adjustments.
02:34The Vector Detail determines how fine a detail in the picture we're going to
02:38create vector fields for.
02:39Now the way it works is, a Vector Detail of 0.2 means the image will be scaled
02:44down to 1/5th of its size.
02:46So, you're going to get 1 vector for every 5 pixels.
02:50If we set this to 0.5, the image is being scaled down to half, so you now have 1
02:55vector for every 2 pixels.
02:57And of course if we set it for 1, highest possible resolution, we're going to
03:01have 1 vector per pixel.
03:03However, you normally don't want that because that'll be too noisy.
03:07Let's set it back to a more moderate value, of 0.5.
03:13Next, the Smoothness parameter;
03:16with motion vectors, you have to do a tradeoff on the amount of detail versus
03:20the amount of noise or chatter in the data.
03:22So, this is where that knob is.
03:24If I turn this up to a high value, that means we're going to get less
03:28detail, but less noise.
03:29Set it down to a low value;
03:32more detail, but more noise.
03:35And don't forget, when you increase your vector detail, you are also going to be
03:40increasing your processing time.
03:42Now, one thing you can do to help the process is to hook up a matte input here.
03:47When you do, the first thing you'll have to do is tell the VectorGenerator node
03:52where to look for the matte.
03:54So, you can tell it's on the Alpha Channel of the source image or the matte
03:57input, wherever you've stuck it.
03:59If you have a matte hooked up, and only if you have a matte hooked up, then
04:03these options become available.
04:05As in this example here, the Matte is normally set to isolate the
04:09foreground character.
04:10So, if you select the Foreground, you are going to get motion vectors for
04:13only the character.
04:14If you select Background, it's going to put out motion vectors for the black
04:18part of the Matte, in other words, background in the picture.
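A sketch of the VectorGenerator setup with a matte input; the matte input index and the knob names for detail, smoothness, and the matte channel are assumptions based on the UI labels:

    import nuke

    plate = nuke.toNode('Read1')                   # assumes the clip is Read1
    matte = nuke.toNode('Read2')                   # assumes a matte of the foreground character

    vecgen = nuke.nodes.VectorGenerator(inputs=[plate, matte])   # matte input index assumed

    vecgen['vectorDetail'].setValue(0.5)           # 1 vector for every 2 pixels (assumed knob name)
    vecgen['smoothness'].setValue(0.5)             # trade-off between detail and chatter (assumed)
    vecgen['matteChannel'].setValue('Luminance')   # where to find the matte (assumed knob name/value)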
04:22Let's twirl down the Advanced Tab and see what we've got.
04:25The Flicker Compensation;
04:26if you turn that on, it's going to compensate for any dancing lights like maybe
04:31caustics or rain falling on the sidewalk.
04:33And the Tolerance Tab, this is where you control the equation that calculates
04:38the luminance image.
04:40The VectorGenerator analyzes a luminance version of the image, not a red, green, and blue.
04:45This is the equation that is used to create the luminance version, and by
04:48default, it will be 0.3 Red, 0.6 Green, and 0.1 Blue which is appropriate for
04:53normal scene content.
04:55But, what if you had a very, very blue picture?
04:58Well, you'd want to turn up the Blue, and turn down the Green and the
05:03Red because there's a very little picture information here and most of it's in the Blue.
05:08Use the VectorGenerator node when you have two or more nodes that require vector fields.
05:12So, the vectors are only calculated once but used multiple times.
05:17This will reduce your render times and speed up your shot development.
9. Kronos Optical Flow Retimer
Retiming a shot with optical flow
00:01The Nuke 7 Kronos is an optical flow-based retimer which you find over here on
00:07the Time tab, with an improved algorithm and GPU rendering;
00:11it's based on F_Kronos, previously available only with the Furnace plug-in set,
00:16but now included with NukeX.
00:19Kronos supports GPU processing for much faster rendering, but that requires
00:23certain NVIDIA GPUs and CUDA drivers.
00:27By the way, all the images that we're using here, you'll find in our tutorial assets.
00:32Let's open up this first Kronos Node here and I have set the speed for a very
00:36slow .1 to show you what happens, we'll play this.
00:42You notice we're getting all this background pulling here and around the legs
00:46and especially on the ground, in fact, virtually everywhere around it, and we're
00:50using all default settings right here, a Vector Detail of 0.2 and a Smoothness of 0.5.
00:55We'll stop this, jump back to frame 1 and let's see what happens if we
01:02increase the Vector Detail to 0.5, and we'll play this. Okay, we've made
01:09our situation better, we're not pulling quite so much in the background here,
01:13but we still have some pulling all the way around, and of course, down here on the ground.
01:17We'll stop this and let's try a higher vector detail.
01:21The Vector Detail setting refers to how many vectors per pixel you get: if you have a
01:25Vector Detail setting of 0.5, that means the image is scaled down to half
01:29resolution and you get 1 vector for every 2 pixels. If the vector detail is 1.0,
01:35then you have a vector for every single pixel.
01:38However, you have increased your rendering time. Right, so let's play this
01:42setting and we're even better now than we were at the vector detail of 0.5, but
01:48we still have an awful lot of background pulling.
01:50So what can be done about this?
01:52Well, the answer is to put in a Matte, to mask off the foreground area.
01:57Let's take a look at that.
01:58We'll switch back to the Node Graph; I happen to have a Matte right here, so we
02:04will hook that up to the Matte input.
02:06Go back to our Property panel. We need to tell Kronos that we have a Matte, so
02:13we go to the Matte channel setting, pop that up, and tell it where to look for it:
02:17in the matte input, the Luminance right here, so I'll use that.
02:21Now, let's check out the results.
02:22We'll play this, ah; much better.
02:26We now have no background pulling by using the mask input.
02:29Stop that, jump to frame 1.
02:33Now, let's take a look at this Smoothness setting and see what it does for us.
02:37We'll go back to the Node Graph and scoot over here to another setup that I have for you,
02:42and we'll switch to this Viewer.
02:44I'll open the Property panel of Kronos 3, to show you the Overlay Vector, right
02:49here you'll find them if you twirl down the Advanced tab.
02:53So we turn those on, and this shows you the Motion Vectors.
02:56We'll go back to the Node Graph here where I have two Kronos Nodes set up
03:01identically, except for this Smoothness setting, this one is set for high
03:04smoothness and this one is for low.
03:06Let's see the effect.
03:09I selected the Kronos Node with the Smoothness set Low;
03:13you could see how the vectors have these little curls to them.
03:15We will switch to the high smoothness and it smoothes them out.
03:20You can think of the smoothness parameter as running a comb through the vectors
03:25and smoothing them like you were combing your hair.
03:27See the difference?
03:29Smoothness high, smoothness low.
03:32With smoothness set to high, it reduces the little wavies and jaggies, you might
03:36see along the edge of your foreground, but it also will lose fine detail.
03:40So again, it's a balancing act.
03:43The default of 0.5 works well on most shots.
03:47Let's go back to the Property panel so we can talk about the output
03:50setting right here.
03:51You have four options; by default, the node's Output is Result, but you can
03:56choose the re-timed Matte, the re-timed Foreground alone, or the re-timed Background alone.
04:04In addition to performing a high- quality speed change on a shot, Kronos can
04:08also add Motion Blur;
04:10let's take a look at that next.
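Since we set the speed to 0.1, it may help to see the timing arithmetic a retimer like this is doing behind the scenes. This is only a sketch of a constant-speed mapping, not Kronos's actual solver: every output frame advances the source by a tenth of a frame, and it is those in-between images that have to be synthesized from the motion vectors.

    # Illustrative only: constant-speed mapping from output frames to source frames.
    def source_frame(output_frame, speed, first_frame=1):
        return first_frame + (output_frame - first_frame) * speed

    for f in range(1, 6):
        print(f, "->", source_frame(f, 0.1))   # 1 -> 1.0, 2 -> 1.1, 3 -> 1.2, ...
    print(25, "->", source_frame(25, 0.1))     # output frame 25 only reaches source frame 3.4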
Managing motion blur
00:00If a shot has been sped up, the original Motion Blur will not be sufficient to
00:04avoid motion strobing.
00:06Here we'll see the controls for adding Motion Blur to a shot using Kronos.
00:10First, we'll select the Read Node, go to the Time tab and get Kronos.
00:18The default Speed is 0.5, slowing it down.
00:20So let's go to a Speed of 2 in order to speed it up.
00:24I need to set some In and Out points, because now my clip is only half as long,
00:28so I only have 12 frames to work with.
00:30So let's play that and see what happens.
00:32Okay, we have terrible motion strobing, because it's moving way too fast for the
00:36original shutter timing.
00:37So let's see how we can fix that.
00:39We'll twirl down the shutter menu and we'll find three things in here that will help us.
00:44The first thing we would do, is set the Automatic Shutter Timing, here you're
00:48telling Kronos that you want it to figure out the appropriate shutter time,
00:52unfortunately, nothing has happened.
00:55Let's zoom in a little bit.
00:56The reason is we only have 1 shutter sample.
00:59So I am going to increase that to 2.
01:02Aha, now we have a double exposure.
01:04So we are going to walk this number up to 3 and then to 4, until we get a
01:08nice smooth motion blur.
01:10I'm going to undo that.
01:13Let's say we don't use the automatic shutter time, let's say we want to set the
01:17shutter time to 2, again, we don't see any motion blurring until we take our shutter
01:22samples up to 2, 3, 4, 5, maybe 6.
01:28So again, you have to increase the shutter samples in order to smooth out the motion blur.
01:32Okay, I am going to re-home the viewer, let's play that and see how it looks.
01:36Very nice, let's take a look at the Advanced twirl down tab;
01:42this is the Flicker Compensation option.
01:45This is for a situation in a shot where you have rapidly changing little lights,
01:49like maybe caustics or rain falling on the pavement.
01:52It modifies the motion compensation algorithm to compensate for the flickering lights.
01:57Let's take a look at Tolerances.
01:59These are the weights for the red, green, and blue values that are used to create the
02:03luminance version of the image, which is what Kronos uses to do all of its motion estimation.
02:08It does not do it on the red, green and blue channels.
02:11These values are appropriate for normal image content, but supposing that you had
02:16a shot that was really very, very blue, it had lots of blue information, but
02:20very little red and green.
02:21So for that, you'd want to dial up your blue record and drop down the
02:25red and green values, so that the algorithm would have a lot of blue data to work with.
02:31Kronos is the latest state-of-the-art retiming technology from The Foundry that
02:35you can use to perform high quality speed changes on your clips.
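The shutter sample behavior is easier to see as a quick sketch of where the samples fall. This is an illustration of the general idea only, not The Foundry's implementation: the retimer blends several synthesized frames spread across the shutter interval, which is why one sample gives no blur, two give a double exposure, and more samples smooth the blur out.

    # Illustrative only: where shutter samples land for a given shutter time.
    def shutter_sample_times(frame, shutter, samples):
        if samples < 2:
            return [float(frame)]               # a single sample: no motion blur at all
        step = shutter / (samples - 1)
        return [frame + i * step for i in range(samples)]

    print(shutter_sample_times(10, 2.0, 1))   # [10.0]        -> no blur
    print(shutter_sample_times(10, 2.0, 2))   # [10.0, 12.0]  -> double exposure
    print(shutter_sample_times(10, 2.0, 6))   # six samples over two frames -> smooth blur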
10. TimeClip Node Features
Setting up and operating
00:01The TimeClipNode is new to Nuke 7 and collects a wide variety of timing
00:05controls into a single node.
00:07The key to the TimeClipNode is that it shifts the timing of not just the
00:11source clip, but all of the nodes in the stack above it, and it appears in the dope sheet.
00:15Now let me show you this, I am going to select the Read node, we will go to Time
00:20tab and add a TimeClip directly to the Read node, notice that it fills in the
00:26frame range of 1 to 100.
00:28However, if I select my stack and I add the same TimeClipNode, it has not filled
00:37in the frame range, because it doesn't read the clip.
00:40So you have to tell it what the last frame is, in this case, 100.
00:44What I have here is a simple clip that has frame numbers in it, makes it easy
00:48to see all this TimeClip stuff.
00:50First feature is Fade In and Fade Out.
00:52So if I add a 5 to the fade in at the head and a 10 to the fade out at the tail, I am going to
00:59get a 5-frame Fade In and a 10-frame Fade Out, very nice. Let's undo those.
01:06If I want to read in frame 10 to 90 of the clip, that's what the Frame Range is for.
01:11Now the Frame Range setting here is identical to the Read node.
01:14I'll set the Frame in at 10, and the frame out at 90 and these
01:22before-and-after features are exactly the same as the Read node.
01:25So now if I go past frame 90, the effect after is all black, and same thing, if
01:35I go ahead of frame 10, I also get black.
01:37We'll undo those.
01:40So while the Read node offers the exact same controls, the key is that the TimeClip
01:45will take the entire stack of nodes with it, and this is especially important if you
01:48want to shift the timing of the clip and its rotos, for example, because all the animation will go with it.
01:53I'll set the playhead to frame 1 and we'll click on the Reverse button, and
01:59that just plays the clip backwards. I'll undo that.
02:05Now the frame pop-up menu is again exactly like the Read node, the only
02:08difference is which one is the default.
02:10If you want to do a slip sync of your clip, you can select the offset, and
02:14set that for like 10.
02:18So playhead is on frame 1 and what we're saying here is I want the frame 1 of
02:23the clip to be offset to the timeline by 10 frames, that's why we are seeing
02:27frame 11 of the clip.
02:29Now if you are just going to do a simple frame offset, then you might look at
02:34the TimeOffset Node instead, for this reason.
02:36Here is the TimeOffset Node.
02:39TimeOffset Node offers the advantage that you can use the cursor to walk the
02:43clip up and down on the timeline like this.
02:45You cannot do that with the TimeClipNode offset.
02:48Important point, the time offset value of -10 is the opposite of the
02:52TimeClip's offset of +10.
02:55There are two other important differences between TimeOffset and TimeClip.
02:58TimeOffset will shift the timing of your 3D geometry, whereas TimeClip will not.
03:04But the TimeClip Node appears in the Dope sheet, whereas TimeOffset doesn't. I'll close that.
03:12I'm going to set the offset back to the 0 in order to show you the Start at Feature.
03:19Okay, we're back to the TimeClip Node now, and I've set the offset to 0.
03:26If I set the Start At field to, for example, 10, that means frame 1 of the clip
03:34starts at frame 10 on the timeline; I'll jump the playhead to frame 10 to show you.
03:39And the last option, the Expression allows you to enter a mathematical
03:44expression for the relationship between the clip and the timeline.
03:48So I could say for example, take the frame number times 2, and now my clip is
03:55coming in on 2s; 22, 24, 26, 28.
04:00If I'd like to add an offset, I can just go and modify the expression,
04:05let's say it's like +10.
04:07So playhead is on 30, the clip is going to be 2 times 30 plus 10, which is 70.
04:14Let's delete that expression and we are now seeing the playhead one to one with the clip.
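Because the Expression field is just arithmetic on the timeline frame number, you can sanity-check it outside Nuke. A tiny sketch of the mapping used above:

    # The expression "frame*2" (plus an optional offset) maps timeline frames to clip frames.
    def clip_frame(timeline_frame, offset=0):
        return timeline_frame * 2 + offset

    print([clip_frame(f) for f in (11, 12, 13, 14)])   # [22, 24, 26, 28] -- the clip on 2s
    print(clip_frame(30, offset=10))                    # 70, matching 2 x 30 + 10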
04:19We can open up the Dope sheet and the little brackets on this end here, if I
04:25slide those forward, you can see I'm actually modifying the frame range start
04:29frame up here, and I can also increment these, and it will move the Dope sheet,
04:34same thing for the last range back here.
04:37Notice that the Frame Parameter is set to Expression, but if I grab the clip
04:41here in the middle and slide it, I get an offset. I'll undo that.
04:46The last thing I wanted to show you is this original range field here;
04:51this has no effect on the output of the node.
04:54All it does is remember your first and last frame from the original
04:57clip; sometimes if you are cutting frames off the head and the tail of a clip
05:02and slip syncing it, it can become confusing where your original clip is, but
05:05again, let me show you: I can put a number 50 in here and it has absolutely no
05:10effect on the output.
05:11If all you want to do is reverse or offset the timing of a node stack, then
05:15the TimeOffset node might be simpler to use, but if you want more complex
05:20timing, then the TimeClip Node offers more controls, plus interaction with the
05:24dope sheet.
11. New Viewer Guides
Understanding masks and guidelines
00:01All new for Nuke 7, we now have built-in Viewer guides for a variety of
00:06different film and video formats.
00:07There are masks for showing where your shot will be trimmed for the projection
00:11format, as well as title safe and action safe guides.
00:13Let's start here with this big Read6 frame and the first thing we are going to
00:19do is set the mask ratio, that is defining what the format of the output job is
00:24going to be, so let's say we are going to do a 1.85 feature film.
00:28Immediately our frame is masked and here's how you control that, this pop-up
00:31here, you can say I want no masks, or just draw me some lines, or I want half
00:37density mask, or I want full density mask.
00:40I am fond of half density, that way I can see outside the format.
00:45So let's pick up our Viewer and take a look at a 2K Super 35 scan, and we can see
00:50now we have the 1.85 format mask here and it's so noted down there.
00:56If we take a look at this HD clip, the 1.85 is very close to the HD 1.78, so we
01:01are just going to lose a little bit off the top and the bottom.
01:05However, if we look at a Standard Def NTSC picture, we are going to lose a
01:10lot of picture; of course, I don't know why you would be chopping a 1.85 out of a
01:14standard-def picture anyway.
01:15But you could, if you want to.
01:19Now let's take a look at the guidelines.
01:20We can have a title safe or, an action safe, or both, and for any of them we can
01:30turn on the format center, so we get a little crosshair in the center.
01:34And here is the kicky thing about the guidelines;
01:37they will conform to your mask settings.
01:39So if I turn the masks on for 1.85, my title safe and action safe guidelines
01:44conform to the 1.85 area.
01:47We'll pick that up and put it on the 2K Super 35, and you can see the same thing there.
01:51So our title safe and action safe guidelines work within the masks.
01:56Now these masks don't render with the output image.
01:58They are just Viewer Overlays to help you see the composition of your shot in
02:02the final output format, as well as to give you industry standard action safe
02:06and title safe guides.
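The mask itself is simple arithmetic on the frame, which you can check for yourself. A small sketch, assuming a 1920x1080 HD frame and a 2048x1556 full-aperture 2K scan (your scan size may differ):

    # How many rows a 1.85:1 projection mask keeps from a given frame.
    def masked_height(width, height, ratio=1.85):
        kept = round(width / ratio)
        return kept, height - kept        # (visible rows, rows lost to the mask)

    print(masked_height(1920, 1080))      # (1038, 42): HD 1.78 loses only a little top and bottom
    print(masked_height(2048, 1556))      # (1107, 449): a full-aperture 2K scan loses much more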
12. New Alembic Geometry
Importing and viewing alembic geometry
00:00Alembic geometry is a major new development in CG that allows scene content to
00:05be shared between different apps very efficiently.
00:09If you're not familiar with Alembic then I recommend you read the article that I
00:13published about it before proceeding with this video.
00:15Now there are several ways to import Alembic geometry into Nuke.
00:19But let's start with the Read node, because that allows you to bring everything in.
00:23Cursor in the Node Graph, we will type R to punch up a Read node and I'll select
00:28the alembic_scene.abc, abc is the extension for Alembic geometry.
00:34By the way, all these files are in the tutorial assets for this video.
00:37So we'll select that, click Open and it opens up the Scene Graph.
00:43Now everything in here was created in Nuke.
00:46In the Points folder there are particles and a point cloud.
00:50Under Meshes, which is Alembic-speak for geometry,
00:53we have an Earth, Moon, and Sun.
00:55On the Axes, we have three Axis nodes and we have two Cameras, SceneCam and TopCam.
01:01I will fold these back up because the nature of the Read node import is to bring
01:06the entire scene in, in one gulp.
01:08So I will click on the Create all-in-one node button, and there we have it.
01:14Here are the two Cameras, the three Axis nodes,
01:20and this is all the geometry in one ReadGeo node.
01:23We will look at what we got here.
01:27What we have, if I look through my scene camera here, we have a little
01:32solar system scene.
01:36We have point clouds out here, we have a particle system here, we have three
01:40geometries, and we have two cameras and we can see that right here.
01:45There are my two cameras, okay, so back through my scene camera, and we
01:50will stop the playback.
01:51Now here is the Property panel for the ReadGeo that has all the geometry in it.
01:56If we go to the Scene Graph tab we can see all of our geometry here, we can
02:02unfold them and then we can enable and disable bits and pieces.
02:05For example, if I go to the Points and turn that off, I lose both the particles
02:10and the point clouds.
02:12Here in the meshes I could turn off for example just the sun, and only the sun disappears.
02:16So we will turn that back on and all the points back on.
02:21Now the problem with bringing them all in, in one ReadGeo node is they are now
02:25collected as a group, they are all one logical entity.
02:28So for example, if I bring in a checkerboard and attach it as a texture map
02:33everybody inherits the same texture map.
02:35Okay, this is not good, so we are going to want to bring them in as individual
02:40pieces of geometry, so let's take a look at that.
02:43So I am going to delete these, we will punch up the Read node again and select
02:50the Alembic scene one more time, Open.
02:52Now this time I am going to turn off the root, right here, you see this dot.
02:58That means the root is the parent and it brings in everything underneath it.
03:01So I am going to turn off the root, then I'll go to the Meshes and unfold the Meshes.
03:08Now I have Earth, Moon, and Sun separate.
03:10So if I select the Earth, I can click over here, I get a dot.
03:17That means the Earth is a parent object, and so it will get its own separate
03:21ReadGeo node, we will do that to the Moon and the Sun, we will also do it to the Points.
03:27Remember, the Points has two elements, particles and point cloud.
03:29So now, I am going to get four ReadGeo nodes, and I have to click on Create
03:36parents as separate nodes.
03:40And there they are.
03:43This ReadGeo node as I enable and disable is the Earth, this one is the Moon,
03:49there is the Sun, and this is both the point cloud and the particles.
03:53Now you can go up to the Scene Graph of any ReadGeo node and turn off, enable or
03:59disable any bits and pieces of it that you wish.
04:02Now that my Earth, Moon, and Sun are in separate ReadGeo nodes, I can apply
04:09separate texture maps.
04:10So I will get another Read node, I will select all three of these texture maps,
04:16open, and let's hook them up.
04:18There is my Earth, my Moon, my Sun, and now I have texture maps applied to each
04:27piece of geometry separately.
04:32However, my Points ReadGeo node contains both the point cloud as well as the particles.
04:37I would like to have those separate, so let's delete those.
04:40Let's do one more Read node, get the Alembic scene one more time.
04:45This time we will unfold Points, turn off the root as the parent, so it doesn't
04:50bring them all in in one giant node.
04:52We will select particles;
04:53say I want you to be a parent and we will select point cloud, so that's a parent
04:58I will now have each one in its own separate ReadGeo node, again, Create parents
05:03as separate nodes, and there they are.
05:06Now I have the particles in one and the point cloud in another.
05:11Now you can import meshes, point clouds, particles, cameras, and transforms, but
05:17no materials, textures, or lights yet, but soon.
05:23Now that everybody is in their own separate ReadGeo node, they can be
05:26treated individually.
05:27So I am going to select my Sun ReadGeo, go to the 3D>Modify>TransformGeo.
05:35And now I could, for example, scale down the Sun, and now my scene has a much smaller Sun.
05:41I could now export the entire scene as an Alembic file or just the new Sun as an Alembic file.
05:47We will see how to export geometry later.
05:51The Read node allows you to bring in all the elements of a scene and select
05:55which you want to load.
05:56But the Read node is not the only way to import your Alembic geometry.
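If you prefer to script the import, the same thing can be sketched with Nuke's Python API. Treat this as a rough outline only: the ReadGeo class name ('ReadGeo2' in recent versions), the input index used for the texture, and the behavior of the scene-item dialog are assumptions you should confirm in the Script Editor.

    # Rough sketch: creating a ReadGeo for an Alembic file and feeding it a texture.
    import nuke

    geo = nuke.createNode('ReadGeo2')                    # assumed ReadGeo class name
    geo['file'].setValue('path/to/alembic_scene.abc')    # .abc marks the file as Alembic

    tex = nuke.nodes.Read(file='path/to/earth_texture.exr')   # hypothetical texture map
    geo.setInput(0, tex)                                       # assumed to be the texture input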
Importing camera and axis information
00:00You can also use the ReadGeo, Camera, and Axis nodes to bring in specific
00:06individual Alembic scene elements, such as one piece of geometry or maybe just a camera.
00:11Let's start by looking at the ReadGeo node, so we go up to the 3D
00:15tab>Geometry>ReadGeo.
00:17We go to the File field, open up the browser and select our Alembic scene, which
00:28opens up the Import dialog box.
00:32Now we have in here everything for the entire Alembic scene, points,
00:37meshes, axis, and cameras.
00:38But we just want to bring in one element, let's say the sun.
00:43So the first thing we do is turn off Root as being the parent, then we can
00:48select the sun and turn that on as a parent.
00:52Since we only have the one parent, it doesn't matter which of these options we
00:56choose, I will just choose this one.
00:58We will switch to the 3D viewer, and there is the sun, just to make it easier to
01:05see, let's hook up a little checkerboard to it.
01:07And when I play the clip, we get the animated sun loaded in from the
01:11Alembic scene file.
01:12Up here is the sub frame option, if you read my Alembic PDF file, you know
01:17exactly what this does.
01:19But if we turn this off, it'll speed up the playback a bit.
01:21Now remember that the Alembic geometry is loading a new version of
01:26geometry every frame.
01:27So this read on each frame option is what makes it move.
01:31If I turn that off, it doesn't move; I can move the playhead, but there's no more animation.
01:38It's locked to frame 1 by default, but if I would like to use a different frame,
01:43let's say I want to use frame 20, again, lock, no animation.
01:46Okay, now, there's still no animation, but at least I am using frame 20.
01:51Now let's take a look at the Scenegraph here.
01:55Remember the Import dialog box showed us geometry and cameras and axis, but
02:00the Scenegraph for the ReadGeo node is only going to show us geometry, no cameras, no axis.
02:07Down here is a very interesting option, even though I've only told it to load in
02:11the sun, if I turn on view entire scenegraph all of the geometry shows up, so I
02:18could add or subtract from what I have got loaded in, no cameras and no axis,
02:22just geometry when you're using the ReadGeo node.
02:26Now let's see how to load a camera or an axis.
02:29Let's do a camera, we will go to the 3D, and select Camera.
02:35Again, we are going to read from file, so notice this read from file option here
02:39also appears on the File tab, they are wired together.
02:42So if we go to the File tab, select read from file, browse to our Alembic scene,
02:49select our Alembic scene, and we get a warning, because it's about to load in
02:54the camera data, and if you had any animation in this Camera node, it's going to
02:57blow it away; we are good with that, so we will say yes.
03:00Now if I back out here and play, you can see I have my camera data.
03:06Now the camera node will only read camera data, but the Alembic scene had two
03:11cameras, so to choose we go to node name, pop-up, and there is the SceneCam
03:17which I have, but if I wanted the TopCam, I can select that, back out a little
03:22bit and play the animated TopCam.
03:25Okay, but what I really want is the SceneCam after all.
03:33Now the Alembic scene comes in with its own frame rate and if your job happens
03:37to be a different frame rate, you can override that right here.
03:40Now let's say I would like to modify the camera, so we will go to the Camera tab
03:45and note that while I'm playing this, my numbers are ghosted in the data fields
03:51and they are gyrating wildly as the playhead moves of course.
03:53So I am going to stop this and I am going to turn off read from file.
04:00Now the data is baked into the camera, so let's go to the Curve editor and
04:05pick Translate Y; there is my Translate Y Curve. If I select that, now I can
04:14edit it, and I am going to raise it up, maybe a little higher.
04:18Now as long as I have read from file turned on, I cannot edit the data.
04:24So if I don't like my changes, I can just go back and say read from file, and it
04:29will overwrite my changes and put it back to the original settings.
04:32We have seen how the ReadGeo, Camera, and Axis nodes allow you to import exactly
04:38the elements you want and to modify them.
04:41The only thing left is to take a look at how to write out your own Alembic file.
Exporting alembic geometry to disk
00:01You can also write Alembic files with Nuke, either the entire scene or selected elements.
00:06However, if you want to export lights, you will have to use the FBX file format,
00:10since Alembic does not support lights yet.
00:14Here is the Node Graph used to create the original Alembic scene.
00:17We have these two cameras, we have three axes, here are our particles, there
00:23is the point cloud and we have our Sun, Earth, and Moon geometries or meshes in Alembic speak.
00:30Now let's say I would like to write out the entire scene.
00:33So we will select the Scene node, because it's connected to everybody.
00:37We will go to the 3D pop-up>Geometry>WriteGeo, I get my WriteGeo node and my dialog box.
00:48So I will browse to my destination and I will name it alembic_scene.abc, and of
00:57course, the abc extension is critical, because that's what tells Nuke that this
01:00is going to be an Alembic scene, so we'll open that.
01:04And by default the WriteGeo node is going to render out all the elements, axes,
01:10cameras, pointClouds, geometries.
01:13To render it to disk, we will select Execute and if I want a custom frame range,
01:17I can set that here, and then click OK, we will cancel that, and close this.
01:24Now suppose that I just want to render out the cameras.
01:27Remember, I have a TopCamera and a SceneCamera.
01:31So I will open up my WriteGeo node, and I will turn off everything except the cameras.
01:36Of course I am going to rename this cameras.abc, so don't forget the .abc
01:43extension, again, that's very important to let Nuke know this is your Alembic file format.
01:48Okay, we are ready to execute, I want the full frame range, I will say OK.
01:52And it renders out the two cameras to one file called cameras.abc.
01:58So let's close this, now I want to bring those cameras in.
02:02So I'll go to the 3D pop-up and get a Camera node.
02:05So now I want to load the cameras.abc file that I just rendered, go to the File
02:10tab, turn on read from file and then browse to the folder with the cameras file
02:17in it, there it is, we will open that, and again, we get the warning, it's going
02:22to overwrite any animation we may have had, we will say yes.
02:25So we can see my two new cameras, I am going to select everybody and disable all
02:30the nodes, except the Camera node.
02:32Okay, so now we look in the Viewer, there is my Camera.
02:37If I play that, that's my SceneCamera.
02:42Now the Camera node has brought in all cameras that were in that file and
02:47there were two, the SceneCamera and the TopCamera, so I could select the
02:51TopCamera, there he is, play that, but I really want my SceneCamera, so we
02:58will go back to that. And there we are.
03:01Okay, we are done with this, we will delete that Camera node, select all the
03:07nodes and turn them all on, get our scene back, and of course I want to look
03:12through my SceneCamera to admire my shot.
03:14Now let's say you wanted to render out all three of the geometries, the Sun, the
03:18Earth, and the Moon, no problem, what we need is a Scene node, get a new Scene
03:25node over here, then hook it up to the geometries I want to render out, then I
03:31will connect the WriteGeo node to the Scene node and render to disk.
03:38And because the Scene node is only hooked up to these three geometries, that's
03:42the only thing that will go into the file.
03:44Okay, we are done with that, one more case.
03:46Let's say I just wanted to do the Moon.
03:49No problem, we will go to the 3D pop-up>Geometry and add a WriteGeo node.
03:57Since the Moon geometry is the only thing connected to the WriteGeo node, it's
04:04the only thing that will be written to disk.
04:07The ability to read and write Alembic scene files gives Nuke an important new
04:11capability to share 3D scenes with any app that supports Alembic.
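The export can also be run from the Script Editor. A minimal sketch, assuming a Scene node named Scene1 already collects the geometry you want; nuke.execute() renders the write node over a frame range.

    # Minimal sketch: writing connected geometry out as Alembic.
    import nuke

    write = nuke.nodes.WriteGeo(file='path/to/my_scene.abc')   # .abc tells Nuke to write Alembic
    write.setInput(0, nuke.toNode('Scene1'))                   # hypothetical Scene node name

    nuke.execute(write, 1, 100)                                # render frames 1-100 to disk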
13. The New PointCloudGenerator Node
Tracking and point generation
00:01The PointCloudGenerator has been rewritten to calculate cleaner, more
00:05accurate point clouds.
00:06The workflow has been changed so that you analyze a shot to set keyframes,
00:10calculating the accuracy for each keyframe.
00:13You can then select the most accurate frames to create a point cloud.
00:18To see how it works, let's go get a PointCloud node.
00:223D pop-up>Geometry>PointCloudGenerator, we hook the source input to our clip and
00:29by the way, this clip is in your tutorial assets and also it has a built-in
00:35Ignore Mask, so we are going to need that.
00:37So in the PointCloudGenerator node, we need to set the Ignore Mask to look at
00:41the Source Alpha channel.
00:43Okay, don't forget that.
00:45Next, we need a camera input, a tracked camera.
00:49So we'll just go get 3D>Camera, hook that up, you will find a tracked camera in
00:55the tutorial assets.
00:57So make sure you're on the File tab and enable read from file, then we will go
01:02load our tracked camera data, open up the folder, go to the Tutorial assets, here
01:07is our PointCloudGenerator tiff files and here is our TrackedCamera.fbx file,
01:12select that, click Open.
01:16Yes, I am sure I want to do this.
01:18We will go back to the Camera tab and we will turn off read from file, so that
01:23the data becomes live.
01:24We are done with the camera, so we can close that Property panel and take a
01:28close look at the PointCloudGenerator.
01:30The first workflow we will look at is Automatic Keyframing.
01:34To do that, we will do an Analyze Sequence.
01:37So when you click the Analyze Sequence, it takes the tracked camera and analyzes
01:42the clip to determine where to put the keyframes to calculate the point cloud.
01:49So we have keyframes here, and down on the timeline you can see the blue ticks.
01:53Notice that there is a Calculated Accuracy associated with each keyframe.
01:58So if I move the playhead to the next keyframe, I have a Calculated Accuracy of
02:030.80, the next keyframe 0.89 and so on.
02:08This is very important;
02:09we need to keep an eye on our Calculated Accuracy.
02:13Before creating the point cloud, we want to take a look at the Point Separation
02:18and the Track Threshold values.
02:21Point Separation first.
02:23This parameter puts the points closer together or further apart.
02:27If you have a very large point cloud you might want to set them further apart,
02:31and if it's a smaller one, closer together.
02:35Next Track Threshold, this parameter rejects all tracks that fall below
02:41this quality value.
02:43If you have a shot with fast camera moves or a lot of motion blur, you will have a
02:47lot of low-quality track values, so you might want to lower this; if you leave it
02:51high, you will have very few acceptable tracks.
02:55Okay, let's say we like all of our settings, we are ready to go to create our
02:59point cloud and we click on Track Points, and we set the range that we want to
03:03track, we will start by doing the entire clip, click OK.
03:07So the PointCloudGenerator is jumping from keyframe to keyframe using the
03:10tracked camera data and the trackers it has, to calculate the point cloud over the
03:15whole length of the shot.
03:15All right, let's go see what we got, cursor in the Viewer, Tab key, switch to
03:213D, oh, look at that, very nice.
03:25And if we play that, we can see our moving camera and we can look at the
03:32point cloud through our tracked camera right here, lock the viewfinder and play, outstanding!
03:39Now we are going to want to confirm the accuracy and we are going to use the
03:44viewer wipe controls to do that.
03:46So I will hook a second input of the viewer to the PointCloudGenerator node, we
03:50will come up here and set the Viewer wipe control to over, and I want the
03:55point cloud over the Read6, make sure I am in 3D, make sure I am looking through my
04:00camera and I have the viewfinder locked, we will pull out a little bit.
04:06Now I like to confirm my point tracks by setting the display to wireframe, so I
04:10have a bunch of points.
04:12All right, so let's zoom out and let's play this.
04:17I am going to ping-pong the playhead, let's play this, to see how the points
04:20are locked on to target, are they drifting, are they squirming, no, everything looks great.
04:27Okay, I am happy with my point cloud track, we'll stop this and now we will take
04:32a look at setting manual keyframes.
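If you build this setup often, the node hookups can be scripted. This is a sketch under assumptions: the class names, the input order on the PointCloudGenerator, and the camera knob names should all be checked against nodes you create in the GUI, and Analyze Sequence and Track Points remain buttons you press in the Properties panel.

    # Sketch: wiring a PointCloudGenerator to a clip and a tracked camera.
    import nuke

    src = nuke.nodes.Read(file='path/to/pointcloud_plate.####.tif')   # hypothetical clip
    cam = nuke.createNode('Camera2')                                   # assumed Camera class name
    cam['read_from_file'].setValue(True)                               # assumed knob name
    cam['file'].setValue('path/to/TrackedCamera.fbx')

    pcg = nuke.createNode('PointCloudGenerator')
    pcg.setInput(0, src)    # assumed: input 0 is the source clip
    pcg.setInput(1, cam)    # assumed: input 1 is the tracked camera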
Keyframing manually and automatically
00:01The purpose of setting manual keyframes is to tell the PointCloudGenerator to
00:04use only the most accurate frames for its calculations.
00:08I would like to call your attention to the tick marks down here on the timeline;
00:11this is a kind of a confusing thing.
00:13I am going to open up the Dope Sheet so that you can see that each one of these
00:20tick marks is actually two keyframes, one for the accuracy, these calculated
00:26values right here and the other one for the keyframes, these guys up here.
00:31Now let's jump to frame 1 and we'll look at the Calculated Accuracy
00:350.69, not terribly good.
00:39So I'm going to delete the keyframe and notice that the Keyframes field has
00:44jumped to 21 and turned light blue;
00:46that means there's no keyframe where the playhead is.
00:49So if I jump to the next keyframe, 21, you will see it's bright blue, so if
00:54I get my cursor off the keyframe, it turns light blue, on the keyframe, bright blue.
00:59So this is how you can tell the keyframe keyframes apart from the
01:03keyframes for the calculated accuracy. OK, 0.80;
01:07let's say I don't like that one, I am going to delete that.
01:10And then I'll jump to the next keyframe, ah, 0.89, that looks good, and the next one
01:15and the next, OK great.
01:16So I am just going to keep those really good ones.
01:20So I am going to track my points again, and I am going to tell it to do the
01:24entire frame range and click OK.
01:26OK, notice that my point cloud is now truncated, it's clipped off because I
01:34only had keyframes from 42 to 100, no keyframes out there.
01:39So that's one of the consequences of using a short range for your keyframes.
01:44Notice, also down here in the Dope Sheet I have the keyframes deleted for the
01:48keyframes, but I still have keyframes for the accuracy.
01:51All right, let's clear these points.
01:54We will take a look at what happens if you have too few keyframes.
02:00So I am going to jump here to frame 42 which is a keyframe keyframe and I
02:04am going to delete him and then I'll jump forward to 61, another keyframe,
02:09and delete that one.
02:11So now I only have keyframes from 81 to 100.
02:15So let's see what happens if we try to track those points and again, for
02:19the entire clip, OK.
02:20OK, an error message, I have insufficient keyframes, all right, say OK, so we
02:28will fix that just by jumping over to here and adding a keyframe there, notice
02:33it turns bright blue, jump over to here and add another keyframe.
02:37OK, let's try our track points one more time.
02:43OK, now we got a good track, but again we have a truncated point cloud because
02:47we are only working with a limited range of keyframes.
02:50OK, let's clear these points so I could show you how to render selected frame ranges.
02:54OK, I am going to delete all the keyframes and you'll notice even though the
03:00blue tick marks are on the timeline,
03:01that's because they are for the accuracy, all the keyframe keyframes are in fact
03:06gone, but you might not know it, looking at the timeline.
03:09This time we are going to put in our own uniform spacing every five frames.
03:15So I insert 5 in this field and I say add all and down here in the timeline
03:20and down here in the Dope Sheet, we have keyframes every five frames.
03:24Now I am going to render two separate frame ranges.
03:28So we will open up track points and set the frame range to 1 to 20, 80 to 100, OK.
03:37Now it's going to render two separate frame ranges of the point cloud.
03:40It will render the first frame range and put up the point cloud for that section,
03:44then render the second range and add it to it.
03:48OK, there we go, here is our first point cloud for 1 to 20, it's now working on
03:53the second frame range of 80 to 100.
03:55There, we now have the second frame range, you can actually see two different
04:00groups of point clouds, of course, where they overlap, they are convergent.
04:04But we can now get more coverage if we want to just render separate portions of the timeline.
04:08OK, I am going to clear these points to show you that you can actually do
04:12that render one range at a time, so I can go to Track Points and say just render frames 1 to 20, OK.
04:18There, there's my 1 through 20 point cloud, later I decide I would like to have
04:28greater coverage, I can go back to track points and I can say Render 80 to 100. OK.
04:34It will render the second group and actually add it to the first one as before.
04:39There, once again, we have our two point clouds superimposed.
04:46Next, let's take a look at post filtering.
Filtering, grouping, and mesh generation
00:01We're back to the 2D View to show you the post filtering and grouping features
00:04in the PointCloudGenerator node.
00:06I am going to analyze a sequence one more time, so that we get the default
00:11keyframes. Then we're going to track all of the points, and get a brand-new
00:17point cloud. Done, and click Track Points 1-100, go. And our render is almost
00:28done, and there you go.
00:30We'll switch to the 3D View, and I kind of prefer to look at my
00:36point clouds as wireframe.
00:38Post filtering is the process of removing rejected points, now you get to set
00:43the rules of rejection, right here.
00:46First of all, the Angle Threshold; this rejects points with less than the angle of
00:51parallax set in this field.
00:53If I raise this up, it'll reject points with larger and larger parallax angles.
00:58So far, everything I have has greater than 10.8 degrees of parallax, so it's a keeper,
01:04but if I go up high enough, ah, there is my red.
01:07So, it has marked the rejected points in red, and this means those points have a
01:12parallax angle that is less than the 14.6-degree threshold.
01:16The next filtering parameter is the Density Threshold, this is actually a
01:20proximity rating, as points are closer together, they tend to be more accurate,
01:25and the more isolated they are, the less accurate they are.
01:28So, this is a measure of how close they are to their neighbors.
01:32So, if I raise the Density Threshold, I am rejecting points that are more and more isolated.
01:38So far, there we go, there we go. All right!
01:42Once I have selected my parameters for rejecting points, I can click Delete
01:47Rejected Points and the un-rejected points that are left will be turned into my mesh.
01:52I am going to undo that rejection and reset these parameters back to default.
01:58Next, let's take a look at Groups, click on the Groups tab, so we can create
02:04groups, select points, and put them into the Groups.
02:08So, let's create a group, click on Create Group, I am going to rename this left
02:14and I'd like to change the color.
02:16So, I am going to click on color and I am going to make it a vibrant red, click OK.
02:21Let's make another one.
02:22Create Group, we'll call this one right.
02:25Now before I can select any of my points, I have to turn on Vertex selection.
02:32Okay now, I can select these points over here, right-mouse pop up, add to group, left.
02:42And they've taken on the red color from the left group.
02:46I'll select these points over here, right-mouse pop up, add to group, right
02:52and they become blue.
02:55If I turn off the Vertex selection, go back to Node selection, I get a
02:58lovely colored point cloud.
03:00Not only can you change the color of the groups, but you can also control their visibility.
03:06We can also bake out the groups, if I select a group like the right group, or I
03:12could select both groups.
03:13But let me just select the right group for now and I can click Bake Selected
03:17Groups and I get a new node BakedPointCloud right, we push into that.
03:24So, it's label it with the group that I selected, I am going to clear
03:27Property bin to get rid of all the point clouds from my
03:30PointCloudGenerator node, double- click on this guy and there you have it,
03:34that's just the right group.
03:36Now once you're in the BakedPointCloud node, you can set the point size smaller
03:41or larger if you wish for better visibility.
03:44Okay, we are done with that, so I am going to delete that baked out PointCloud node.
03:48Let's go back to the PointCloudGenerator node and you can also delete the groups.
03:55So, I am going to actually select both groups and say Delete Selected Groups;
03:58this does not delete the points, just the grouping, and they lose their group colors.
04:04You can also create groups from within the Viewer.
04:07So, I can go up to the Viewer and again don't forget you must turn on Vertex
04:11selection or you cannot pick your points.
04:14So, I'll select all of them, right mouse pop-up, create a group, I'll call it all,
04:21and let me turn it a lovely yellow.
04:24So, I'll set Vertex selection back to Node selection and I have a lovely
04:31yellow point cloud.
04:33With the group selected, I can now bake the selected group to a mesh; click on
04:40that, a few seconds of computing, and I have a new node, BakedMeshAll.
04:43Now let's see what we got here.
04:48Once again I'm going to completely eliminate my PointCloudGenerator node, we are
04:52going to zoom out and bring over some nodes here, and let's hook this guy in to
04:57this little test setup, hook that up to a Project3D node, hook that up to the
05:02RGB and put a ScanlineRender node.
05:04So, I am doing a camera projection of this checkerboard onto my mesh.
05:09Open up that now if I set the display to wireframe, we can see I have a real
05:16good high density mesh that carefully contours to the compound curve surfaces
05:23of this cliff side.
05:24I will set it back to textured, and we're actually seeing the camera projected
05:31grid on top of the mesh.
05:33I will hook up the ScanlineRender node to the tracked camera, connect my viewer to
05:40the ScanlineRender node, switch to the 2D View and there's the render of my
05:45mesh, with my texture map.
05:48Okay, I can then hook up a Merge node to the original clip and merge that
05:54over the background.
05:55I set it up for a semitransparent merge, so that we can check for any squirm or drift.
06:00So, I am going to set the Viewer to full frame for you, so that you can see all the action.
06:06We're doing a test render now with the checkerboard pattern to make sure
06:09there is no squirm or drift anywhere in the scene even in the interior part of our mesh.
06:15And there we have it, the grid pattern is beautifully registered to the mountain
06:19side even in the interior regions on a compound curved surface, normally a
06:25really nasty tracking problem.
06:29As you can see, the all new PointCloudGenerator node has dramatically improved
06:33workflow and point cloud accuracy.
06:36The ability to save out point groups or export beautiful meshes puts this node
06:40at the top of my 3D compositing list.
14. The Improved DepthGenerator Node
Setting up and analyzing
00:00DepthGenerator has been improved to calculate cleaner, more accurate depth passes.
00:04The workflow has been changed, so that you can first analyze the shot to
00:08calculate the optimum frame separation.
00:11There are also new output options to convert depth to position and normals passes
00:15for use with other Nuke nodes as well as the ability to convert the depth
00:19pass to an extruded mesh.
00:21I am using the DepthClip movie file, which you will find in the tutorial assets,
00:26if you'd like to play along.
00:28In addition to the clip, we need a tracked camera.
00:31So, let's go over to our 3D pop up, we'll get a Camera, and I happen to have
00:36tracked camera data for you.
00:38So just click on the Import chan file, browse to the DepthGenerator tutorial
00:43assets, and click on the TrackedCamera.chan file. Say open, and we now have a
00:49completely animated camera.
00:52We're done with the Camera Property panel, so we'll close that.
00:55Next, we will add our DepthGenerator node, so we will select the Read node, come
00:59over to the 3D pop up, and select DepthGenerator, and hook up our Camera.
01:05We need a little bit more space for our Property panel here, so let's do this.
01:10Now just by way of comparison, here is the old DepthGenerator Property panel, as
01:16you can see compared to this one, there are many more options and controls.
01:22First up in the DepthGenerator Property panel is the Ignore Mask option.
01:26If you have things in the scene that are troubling your tracking, like maybe
01:30people walking or something moving, you can put in an Ignore Mask, that will
01:34tell the tracker to ignore it.
01:36You can put it in the alpha channel of the source clip, or you can put in this Mask
01:42input, and then you do the pop up here and tell Nuke where to look for it.
01:46Next, is the Depth Output.
01:48By default, it's going to calculate the typical Depth channel, you could also
01:52tell it to do distance but we're going to stay with depth.
01:55The next thing to setup is the Frame Separation.
01:58Now the DepthGenerator is already computing the Depth Pass.
02:02Let's go set the Depth Pass into the alpha channel of our viewer, take a look
02:07and I am going to gain down the Viewer, so we can get a better look at it.
02:12So, if I jump the playhead from frame 1 to frame 11, frame 21, you could see the
02:16Depth Pass is updating every frame.
02:19Now the DepthGenerator triangulates between the same features on two different
02:24frames in order to calculate the Depth Pass for the current frame.
02:28The Frame Separation defines how far apart those two frames are from the
02:33current playhead position.
02:35For slow-moving cameras, you want a larger frame separation, and for a fast
02:39camera, a smaller separation, in order to get an equivalent camera baseline.
02:43So, that's what the Frame Separation is all about, and we're looking at a depth Z
02:48channel with a Frame Separation of 1.
02:51If I change that to 2, it's now looking two frames out from the current playhead
02:56position and again, it's only inspecting 2 frames and again if I say 5, it's
03:02looking 5 frames out.
03:05Now let's take a look at the Analyze Frame button, when you click that button,
03:09wherever the playhead is and right now I am on frame 21, it's going to look up
03:14and down the timeline analyzing the clip to determine the optimal frame
03:18separation for that frame.
03:20So, let's click on Analyze Frame, watch the playhead go up, and then down,
03:25and boom, there we go.
03:27So, it felt that the best frame separation for this frame was 14.
03:32Notice, it also produced a Calculated Accuracy number here 0.96, the closer this
03:38number is to one, the more accurate the calculations, and of course, the closer
03:42to zero, the worse.
03:44This next button up here, the Analyze Sequence button performs an analysis
03:48of the entire clip, determining at key points down the shot where your best
03:53frame separation is.
03:54So, let's click Analyze Sequence and watch what happens, okay, it says it's going
03:58to overwrite my current keyframe, we will click Yes, and it cruises the timeline,
04:04calculating the Frame Separation at key points, and there we go.
04:09I'm going to jump the playhead to frame 1, and you can see the Frame Separation
04:14it calculated here was 15 frames, and the Calculated Accuracy is 0.88.
04:18If I jump the playhead to the next keyframe, the Frame Separation changed to 14
04:24and my Calculated Accuracy .86, and so on up to frame 12, but different Frame
04:30Separation and a different Calculated Accuracy.
04:33Once we have the Frame Separation established, we can move on to the
04:36depth generation, the subject of the next video.
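The reasoning behind Frame Separation fits in one line of arithmetic. This is an illustration of the idea only, not the node's actual analysis: triangulation wants a decent camera baseline between the two frames it compares, so a slow camera needs frames that are further apart than a fast one.

    # Illustrative only: frame separation needed to reach a target camera baseline.
    def frame_separation(camera_move_per_frame, target_baseline):
        return max(1, round(target_baseline / camera_move_per_frame))

    print(frame_separation(0.2, 3.0))   # 15 -> slow camera, large separation
    print(frame_separation(1.0, 3.0))   # 3  -> fast camera, small separation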
Refining the output
00:00This is where we're going to refine the depth pass itself, starting with the Depth Detail.
00:06This is the pixel sub-sampling from the original clip.
00:10A Depth Detail of 0.5 means that the image has been scaled down to half resolution.
00:14If I set the Depth Detail to 1, it's using the image at full size, more accurate
00:20depth calculations, but slower processing.
00:23We will come back to normal detail in a minute. Next is Noise.
00:28The Noise setting tells it how much noise in the clip to ignore.
00:31If I set that up to 0.2 for example, you see the clip has become so blurred out
00:38that we don't get any good results, so we will put that back to default.
00:42The next parameter is Strength, you increase this to better match the fine
00:46detail in the picture, it sort of like pulls the Depth Map tighter to the image.
00:52You see what I mean if I crank this up to two.
00:55Okay, we will put that back to default. Next is Sharpness.
01:01Sharpness actually performs a sharpening operation on the final Depth Map
01:05itself, you can see the effect of that, if I take that from 0.5 to 0.9 for example,
01:10there you go, now I can undo and Redo that so you can see that.
01:15Next is Smoothness, Smoothness performs an intelligent blur on the depth pass itself.
01:20So let me take that from 0.5 to 0.9 and it basically has applied a blur to the depth pass.
01:27Here I will undo and redo, so you can see the difference.
01:31The problem with going too far with Smoothness is you can miss local detail.
01:34Now let's take a look at creating a card from the depth pass.
01:38We want to pick an accurate frame and this one has a 0.75 accuracy, not so good.
01:43Well, let's jump back to frame six, there we go, that's better a 0.86.
01:49The greater the accuracy, the better the mesh will fit the scene.
01:52I will click Create Card and it creates a new node called DisplaceCard.
01:59Now, it's building this mesh based on the current frame of the playhead.
02:02So now we will switch to the 3D view and see what we got.
02:07We now have a 3D mesh, that we can use to line up geometry to the live-action clip.
02:16Now, we can take a look at the Surface Point and Surface Normal outputs.
02:21Surface Point output can be used with the PositionToPoints node to create a
02:25point cloud, but first we are going to have to create the channel set.
02:29So we will just pop this up, say I want to make a new channel set, I want to
02:34call it points, with an x, and a y, and a z. We'll say OK and immediately it starts to
02:43calculate the points cloud, let's take a look back at our RGB and pop-up,
02:48there is our points pass.
02:51Now let's take a look at the Surface Normal, pop that up, we will create a new
02:55channel set, and we call that one normals and again x, y, and z. Say okay and
03:05immediately it calculates the normals pass.
03:07Now we can come over here and look at our normals pass.
03:12Now that we have a normals pass, we can take a look at the Normal Detail setting
03:16here, by default 0.25.
03:19The Normal Detail parameter controls how much default in the normals pass, the
03:24higher the value, the sharper it will look, but again more processing time.
03:27So let's take a look at that, let me run that up to 0.9, there we go and now I
03:34can toggle between the 0.25 and the 0.9 setting for you.
03:38So let's return to our depth pass.
03:41The new DepthGenerator node creates a much more precise depth pass and with
03:45its improved workflow design, makes it an even more valuable tool for your 3D
03:50compositing.
15. The New DepthToPoints Node
Setting up and operating
00:00In Nuke 7, The Foundry introduced an awesome new node, DepthToPoints.
00:06It takes a solved camera and the depth channel of a clip to create a texture-mapped
00:10point cloud that can be used to line up geometry to the clip in 3D. This
00:14technique was famously used in the making of the movie District 9.
00:17Here is how you set up the DepthToPoints node.
00:22First, we need a clip that has a CGI render, so we get a Read node, go to the
00:27hulk folder and bring in the hulk.
00:30Now all these images and camera files are included in the tutorial assets, we
00:34will open that, hook up our viewer.
00:39So we have a moving piece of geometry with a moving camera.
00:43Now the CG render of course, has an alpha channel, but it also has a depth Z
00:49channel, here you can see it better, if I set up for you like this, okay.
00:57So it's going to take the depth Z data that you see here, create a point cloud
01:02relative to the camera and then texture map the images onto it, tre cool.
01:07A very important point, make sure that your depth Z channels are not anti-aliased.
01:13Okay I'll re-home the viewer and go back to RGB, and fix my Viewer settings.
01:20Next, we need the solved camera.
01:22So let's go get from the 3D pop up, a Camera node.
01:27And we will browse, then we'll Import a chan file, and select the
01:33hulkCam.chan file, and say Open.
01:37Okay, we now have our 3D camera, let's go take a look in the 3D world. Here it is, okay.
01:50So there is my moving camera.
01:52Okay. Ah. Let's do our Project Settings and make sure that the full-size format
02:00is set to PC_Video, this will speed up our render times later and I will close
02:05the Project Settings window.
02:07Okay to set up our DepthToPoints node, just select the Read node, go to
02:123D>Geometry>DepthToPoints.
02:16Now immediately, we're seeing something, but the size and the position are not
02:21correct until you hook up the solved camera data, there. Now it's correct.
02:28So let's take a look at what we got.
02:33So, we actually have a point cloud that shows whatever side is facing the camera,
02:38it's exactly the same size relative to the camera as the original CG render was.
02:43We can then use this to line up geometry.
02:46The DepthToPoints Property panel has surprisingly few adjustments; a critical
02:50issue of course is that you tell it where your depth channel is, if it's not
02:54automatically in the depth Z channel.
02:57Point detail, this is the density of the point cloud.
02:59If I change that to something like 0.05, we get far fewer points, put that back to default.
03:09You also have the point size, let's set that to 1 and you get real tiny little points.
03:15You can use these adjustments to set it appropriate for the scale of your
03:18project, let's re-home the viewer.
03:23Alright, that's how you set it up, now let's see how we use it.
03:27Let's start by getting a checkerboard right there.
03:29And to the checkerboard, let's add a cube, 3D>Geometry>Cube.
03:35Now we are going to make a pedestal for our hulk.
03:39I am going to start by taking the top of the cube and just shortening it down to
03:43zero and we need a nice big pedestal.
03:46So let's take the uniform scale up a bit to maybe 1.5 or so.
03:51Okay, now let's do a little more accurate alignment of our 3D geometry to our point cloud.
03:56We will set the viewer for an ortho Z by typing Z on the keyboard, we will push
04:01in and position it very nicely right at his feet.
04:06We will switch to the ortho side view with the X key, push in and shuffle
04:14this forward and backward until we got that lined up, back to the perspective view with the V key.
04:20And there we go, we like that.
04:22Now, an important point is that our hulk has some animation, as I step through
04:28the clip, you can see he's rotating.
04:30So now we are going to have to add rotation to the pedestal.
04:34So let's switch to the top view with the C key.
04:38We will go to our cube rotate y and set a keyframe, jump to the last frame.
04:47And we will dial in the cube rotation until it looks about right.
04:52There, I like that.
04:54And let's see how it looks, as we step through the frames, looking good, back to
04:59our Perspective view, step through the frames, looks good.
05:08Okay, we are ready to render the cube.
05:10So we will select the cube, go to the 3D, add a ScanlineRender node.
05:17Now we have to render it with the same exact camera that we were using for our geometry.
05:22So let's hook the viewer to the scanline render, switch to the 2D view, let's
05:26clear the Property bin, take a look at what we got.
05:30I am going to ping-pong this.
05:34Okay that looks reasonable, alright.
05:36So what we got to do now is composite the hulk on top of our pedestal.
05:39So I will bring in a Merge node, hook that in because the cube has to be the
05:45background, so it has to be the B side input.
05:48And we will hook the A side to the hulk.
05:50And see what we got.
05:54And there you have it, a piece of 3D geometry, beautifully lined up with a
05:58moving 3D object and a moving CG camera using the DepthToPoints node.
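If you were building that render-and-comp step from a script, it would look roughly like the sketch below. The ScanlineRender input order (0 = bg, 1 = obj/scn, 2 = cam) and the node names are assumptions, not verified values.

    import nuke

    cube = nuke.toNode('Cube1')        # the pedestal geometry (placeholder name)
    cam  = nuke.toNode('Camera1')      # the same solved camera
    hulk = nuke.toNode('Read1')        # the original CG render

    render = nuke.createNode('ScanlineRender')
    render.setInput(1, cube)           # obj/scn input (assumed index)
    render.setInput(2, cam)            # cam input (assumed index)

    merge = nuke.createNode('Merge2')
    merge.setInput(0, render)          # B side: the rendered cube is the background
    merge.setInput(1, hulk)            # A side: the hulk composites on top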
06:04The DepthToPoints node is a dramatic improvement in the ability to line up
06:083D objects to a source clip and raises Nuke's 3D compositing capabilities to a
06:13whole new level.
16. The New DepthToPosition Node
Setting up and operating
00:01The all new DepthToPosition node takes 3D camera information plus an image
00:05with a Depth channel to calculate the x, y and z position of each pixel in the
00:11image, the result is called a position pass.
00:14Now these files here, you'll find in our tutorial assets.
00:17So I have set up a 3D scene with my 3D geometry and a camera just the basics here.
00:22Now let's take a look at the 2D render.
00:26So coming out of the ScanlineRender node, I have my RGB image with the alpha
00:32channel, plus a Depth Z channel, the Nuke ScanlineRender node outputs a Depth
00:37Z channel automatically.
00:40We can see the Depth Z channel better if I adjust the viewer gamma and the gain.
00:46There you go, okay back to the RGB layer and reset my viewer settings.
00:56Okay, let's add the DepthToPosition node, I will go to the 3D tab pop-up,
01:01DepthToPosition and insert the node right after the ScanlineRender node.
01:05Let me dial this down here and hook up our camera.
01:09So this is our position pass, the RGB channels are filled with XYZ data for the
01:15position of every pixel in three-dimensional space.
01:19If you look down here, the Red channel is holding the X position, the Green
01:23channel has Y and the Blue channel has Z. We could see that if we look in the
01:29viewer one channel at a time, here is the Red channel.
01:32So this is the X data, the horizontal data, and as I move the cursor back and
01:36forth, you can see the Red channel changing values.
01:39Here is our Green channel which holds the Y value, so if I bring the cursor
01:44down to the bottom, very low numbers and as I slide up to the top, the numbers get greater.
01:48And then finally here is our Blue channel which has the Z data in it.
01:53Keep in mind, the Z data we are seeing here is world Z, the three-dimensional
01:58coordinate, not Depth Z which is the distance from the camera lens to the
02:03polygon of that pixel.
02:05Okay, let's see how this works by doing a little experiment.
02:07I am going to push in here and I am going to sample the pixel value which is
02:13really the position value right here on the nose of our character.
02:18Let's go get this Sphere, open it up.
02:22So the position of our Sphere is at origin and we could see that if we switch to
02:28the 3D view, there it is.
02:31Our Sphere, turn off the hulk, so we can see the Sphere better.
02:36Okay so it's sitting at origin, we will go back to the 2D view.
02:40So I've sampled the RGB value of this pixel, which now holds the XYZ position data.
02:47So let's enter that into the sphere and see what happens.
02:51For the X, I am going to call that zero, so for Y, 1.19, ok 1.19, and then for the Z, 0.40, okay 0.40.
03:05Okay, switch back to the 3D view. Where did our sphere go? There it is.
03:10We'll zoom in on that.
03:13And now you can see, it's perched right on the beak of our scary monster.
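The same experiment can be done with nuke.sample() instead of reading values off the Viewer. The node names and pixel coordinates below are placeholders for whatever your script actually contains.

    import nuke

    pos_pass = nuke.toNode('DepthToPosition1')   # node whose RGB now holds XYZ
    sphere   = nuke.toNode('Sphere1')

    # Pick the pixel you want to probe (e.g. the tip of the nose).
    px, py = 960, 540                            # placeholder coordinates

    x = nuke.sample(pos_pass, 'rgba.red',   px, py)   # X position
    y = nuke.sample(pos_pass, 'rgba.green', px, py)   # Y position
    z = nuke.sample(pos_pass, 'rgba.blue',  px, py)   # world Z position

    # Drop the sphere exactly on that surface point.
    sphere['translate'].setValue([x, y, z])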
03:16Now there is an issue you need to be aware of.
03:19Let's go back to our 2d view and back to the Node Graph.
03:22First though, I want to reset the viewer, so I will home that and switch back to RGB.
03:27So we are looking at our position pass right here.
03:30Here is the problem, if I hook the viewer up to the output of the ScanlineRender
03:35node, I am seeing the RGB image, but if I hook it up to the DepthToPosition
03:40node, I am seeing the position data.
03:43So my position pass data has overwritten the RGB image.
03:47This is usually not good.
03:49So what we are going to want to do is move it into its own separate channel.
03:53So we will go back to our DepthToPosition node, and here is our problem right
03:59here: the output is in the RGB layer.
04:01So we are going to create a whole new layer for the position pass.
04:05So we will click that pop-up, select New, and let's call it PosPass.
04:11And we are going to do x, and y, and z, three channels, we will say OK.
04:18The output of the DepthToPosition node is now in its own separate position pass layer.
04:22So we go back to the Node Graph, we hook our viewer up to the
04:27ScanlineRender node, we see our RGB image; hooked up to the DepthToPosition node we
04:31still see our RGB image.
04:33And if we want to see our position pass, we will select it here and there you have it.
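The same new layer can also be declared from Python before you pick it in the pop-up. A minimal sketch, assuming you want the channels named PosPass.x/y/z as in the video; the 'output' knob name is a guess to confirm in the Property panel.

    import nuke

    # Register a new layer with three channels for the position pass.
    nuke.Layer('PosPass', ['PosPass.x', 'PosPass.y', 'PosPass.z'])

    # Point the DepthToPosition output at the new layer; 'output' is an
    # assumed knob name, so check it with print(d2pos.knobs().keys()).
    d2pos = nuke.toNode('DepthToPosition1')
    d2pos['output'].setValue('PosPass')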
04:38You can also apply the DepthToPosition node to a CG image with a Depth Z
04:42channel and a solved camera just like we saw in the depth to point video earlier.
04:47Let's take a look at that.
04:49So here we have actually a CG image and of course it has its alpha channel and
04:54its Depth Z channel.
04:57So let's see the workflow here.
04:59All we have to do is select our RGB image, 3D pop-up>DepthToPosition.
05:06Okay we will scoot this down a little bit and hook up our camera and now we have
05:11the position pass again in the RGB layer, don't forget it's going to clobber
05:15your RGB layer unless you fix it.
05:17The DepthToPosition node is used together with the PositionToPoints node to
05:21create the PointCloud inside the DepthToPoints gizmo; use it anytime you need to
05:26know the XYZ position of a pixel in an image.
17. The New PositionToPoints Node
Setting up and operating
00:00The all new PositionToPoints node takes an image, plus its position pass to
00:05create the same Texture Map PointCloud that we saw earlier in the
00:09DepthToPoints Node.
00:11To use this node, we need to already have the position pass rendered.
00:15By the way, all these images are in your tutorial assets.
00:18Let's take a look at the first case where we have our RGB image and the position
00:23pass rendered in the same exr file.
00:26So here we have our RGB image and take a look, here is our position pass.
00:31We will switch over to the 3D view, because we are going to make 3D points.
00:36So, to add the PositionToPoints node we go to the 3D pop-up>Geometry and
00:41PositionToPoints is in the Geometry folder, because it's making points geometry.
00:47So we will add that.
00:49Since the position pass is in the same data stream as the RGB pass, all we have
00:53to do is come up to the surface point pop-up and pick whatever the name of
00:57your position pass is, and bang!
01:00There we go, a 3D PointCloud.
01:05It's got the same two adjustments as the DepthToPoints node, first of all the
01:08point detail can be reduced or the point size reduced.
01:13Put that back to default.
01:20Keep in mind that your PointCloud is only going to look correct when seen from
01:23the camera's point of view.
01:25Next, let's take a look at the workflow
01:27if the position pass is in a separate render. First, I will tell the
01:32PositionToPoints node that I do not have a position pass in the same data
01:36stream, we will go back to the Node Graph, and I will set the Viewer back to RGB.
01:41So we are now looking at the RGB layer of this node.
01:45To give the PositionToPoints node its position pass, pull out this little arrow
01:49here where it says pos and hook that up to your position pass render.
01:53Now we will switch to the 3D view and there we have it.
01:58Now there's an undocumented secret here, a position pass must be in the RGB
02:04layer of this Read node to come in on the position input.
02:08If it's on another layer like a position pass layer, it will not see it.
02:11So remember, the position data must come in on the RGB layer here.
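If your separate render carries the position data on its own layer rather than in RGB, a Shuffle can move it into rgba before it reaches the pos input. A rough sketch, assuming the layer is called PosPass and that the pos input is input 1:

    import nuke

    pos_read = nuke.toNode('Read2')     # the render that holds the position pass

    # Copy the PosPass layer into rgba so the pos input can see it.
    shuffle = nuke.createNode('Shuffle')
    shuffle.setInput(0, pos_read)
    shuffle['in'].setValue('PosPass')   # source layer (assumed name)
    shuffle['out'].setValue('rgba')     # destination layer

    p2p = nuke.toNode('PositionToPoints1')
    p2p.setInput(1, shuffle)            # pos input; the index is an assumption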
02:16Now this can get confusing, we have three nodes that kind of sound the same and
02:21kind of do the same sort of thing, so let's de-conflict that.
02:24Okay, the PositionToPoints takes a position pass and the image and creates your
02:31PointCloud, no camera needed, no Depth Z. The DepthToPoints creates a PointCloud
02:38just like PositionToPoints, but you have to give it a Depth Z channel, right
02:43here where it says depth, and you have to give it a camera solve.
02:48And the DepthToPosition is actually a 2D node, which needs a camera and the
02:52depth pass to create the position pass.
02:54So if you take the DepthToPosition node combined with the PositionToPoints node, you
03:00get the DepthToPoints node.
03:04Use the PositionToPoints node when you want to create a PointCloud and you
03:07already have the position pass.
03:10Again, with a rendered position pass, you don't need any camera information.
18. CameraTracker New Features
Creating separate cameras and points
00:01This video assumes you already know the camera tracker and it only covers the
00:04two new features in Nuke 7.
00:06The Foundry responded to customer requests to separate the creation of the camera
00:11from the PointCloud, so now you only create what you need.
00:14By the way, the CameraTracker is a NukeX only node.
00:19The images that we have here are in your tutorial assets if you would like to load them in.
00:23So let's start by adding a CameraTracker node, 3D pop-up>CameraTracker.
00:28So I am going to go ahead and just click Track Features, because I know this is
00:33a sweet clip and the CameraTracker just loves it, so it will give me a very nice
00:37track with all default settings.
00:39Okay, done tracking, I will just click on Solve Camera, get our camera solve.
00:44We will go and check the RMS solve error on the refined tab,
00:49beautiful, beautiful.
00:51Now our new buttons are right here, Create Camera and Create Points, but before
00:57we use them I want to show you something on the 3D side.
01:03So look what we have here, we already have a PointCloud, even though we haven't
01:07clicked on the Create Points button.
01:09So what's going on?
01:11Well, the CameraTracker node has created its own PointCloud, but it's only
01:16available to the CameraTracker, so you can't play with it. Just to show you
01:19what I mean, I am going to select Vertex selection and I cannot select any of those points.
01:26So they are for the internal use of the CameraTracker node only.
01:30Okay, so let's create our camera, we will click here, here is our Camera node,
01:34there is the camera up there, we play the clip, you see the move, we look
01:40through the camera's viewfinder and we see a very plausible PointCloud movement.
01:45Okay, we will stop that, jump back to the first frame and restore the viewer to the default.
01:50Now you could export the camera solve right now and you'd be done.
01:54But let's say you also want to create the PointCloud, all right, so let's come over
01:57here and click on Create Points.
01:59Now interestingly, we get a second set of points superimposed over the first,
02:08these points belong to the CameraTrackerPointCloud node.
02:12I can for example, dial down the point size and if I switch my Viewer to Vertex
02:18selection I can now select these points.
02:21So the points in the CameraTrackerPointCloud are real points that you can play
02:25with, but the ones in the CameraTracker node belong to the CameraTracker node
02:29and you can't have them.
02:30We will put the Viewer back to Node Selection, again.
02:32Now let's say you want to export the PointCloud, okay, so we will just come up
02:38to our Scene node and add a WriteGeo node.
02:42And by the way you could also connect the WriteGeo node to the CameraTracker
02:46PointCloud node directly.
02:48So we will browse to our destination, we will name our file points.fbx, say
02:56Open, we now have our pathname and filename, click Execute.
03:01Now, this is a PointCloud, it's static, so I only need to export one frame, we
03:06will click OK, boom done!
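Scripted, that single-frame export looks roughly like this; the file path is a placeholder and the node name is assumed.

    import nuke

    scene = nuke.toNode('Scene1')                     # assumed node name

    writer = nuke.createNode('WriteGeo')
    writer.setInput(0, scene)
    writer['file'].setValue('/path/to/points.fbx')    # placeholder path

    # The cloud is static, so writing a single frame is enough.
    nuke.execute(writer, 1, 1)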
03:08To see what we wrote out, I am going to clear the Property bin and disable all
03:12these nodes, so there's nothing in the 3D viewer.
03:16Now let's go fetch the fbx file we just rendered, so we'll go to
03:203D>Geometry>ReadGeo, browse to our points.fbx file, and open that.
03:29Now we will tell the ReadGeo node that we want to bring in the PointCloud only,
03:34and there we have it.
03:37I can now set the Viewer to Vertex selection and select these and play with my points.
03:44These new features in the Nuke 7 Camera Tracker will allow you to create only and
03:48exactly what you need for your visual effects shots.
19. Particles New Features
Understanding new emitter features and velocity controls
00:00The big story for particles in Nuke 7 is about the ParticleEmitter node, it now
00:06has several cool new emitter options, let's take a look.
00:09First up the new bbox or bounding box emitter, previously your emissions were
00:15limited to points, edges and faces.
00:17Well, now we have the bbox option, let's see how that works.
00:21Right now I have points selected, so I am getting particles emitted from just
00:25the points of this cube.
00:26If I set that to bbox, it now emits particles everywhere inside the bounding box
00:33of the cube, in other words, in its interior.
00:36This is very cool for things like rain or snow, where in the past you had to
00:41emit from a surface and do a lot of pre-roll in order to get the particles
00:45filling the frame before you could even start rendering your shot. We'll stop this.
00:51The bounding box it emits from has nothing to do with the shape of the geometry.
00:56So I am going to go over here to this sphere, turn that on.
00:59I am going to get rid of my cube, there, move out a little bit.
01:05And now we'll take a look, let me orient this right along the Z-axis.
01:10You can see it's still emitting in the bounding box of the sphere.
01:13So again, the shape of the geometry has nothing to do with it;
01:17you are getting a simple bounding box.
01:19We'll stop that and clear the Property bin.
01:22I am going to turn these off and come over here to show you the new emit
01:29from selection feature.
01:30I'll hook up my ParticleEmitter node and my sphere.
01:37The idea here is that you can emit from points, but the new feature is to emit
01:43from only selected points.
01:44I'll open up this Particle Emitter node.
01:48So it's set to emit from points and as I play the animation, you can see that
01:52every point in the sphere is emitting particles.
01:54Okay, if I however click right here and select only emit from selected points, bang!
02:01It stops emitting.
02:02That's because I don't have any selected points, so let's select some
02:05points, we'll stop this.
02:08You have to use the GeoSelect node.
02:11So I'll select my sphere node, 3D> Modify>GeoSelect and add my GeoSelect node.
02:21Now in order to select my points, I'll have to set the system into Vertex
02:26selection and I am going to enable Occluded Vertices so I can select points in the background.
02:32I am going to select all these points right here on the equator and then in the
02:36GeoSelect node click Save Selection, okay.
02:42And now the Particle Emitter only emits particles from those points that I have
02:46in the GeoSelect node.
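The node chain for that workflow is simply geometry into GeoSelect into ParticleEmitter; the vertex selection itself still has to be made and saved in the Viewer. A minimal wiring sketch with assumed node names and input indices:

    import nuke

    sphere = nuke.toNode('Sphere1')

    # GeoSelect sits between the geometry and the emitter and stores
    # the saved vertex selection.
    geosel = nuke.createNode('GeoSelect')
    geosel.setInput(0, sphere)

    emitter = nuke.toNode('ParticleEmitter1')
    emitter.setInput(0, geosel)         # emitter's geometry input (assumed index)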
02:47You can also do this from a PointCloud.
02:50I am going to disable these, switch over to here, I am going to show you
02:57my PointCloud, and I'll turn off my Point Occlusion and turn off my Vertex selection.
03:04So here is my PointCloud and now you can see the particles being emitted, and if
03:09we open up the ParticleEmitter node, we are emitting from points, again.
03:13So if we play that, you can see that every point in the PointCloud is
03:17now emitting particles.
03:19Again, if I select only emit from selected points, I'll turn that on, no more emission.
03:26So we need another GeoSelect node.
03:29I'll select my PointCloud, 3D>Modify>GeoSelect.
03:36Again, turn on Vertex selection, swing around and I'll select all these points.
03:43Remember, selected points turn blue.
03:45All right, so we'll go to save selection, turn off our Vertex selection back to
03:53regular Node selection.
03:54I'll pull out a little bit and check it out.
03:59Only the points in this selected region are emitting particles.
04:02Okay, we'll stop that, turn all these off.
04:08Clear the Property bin and look at emit in randomized directions.
04:15We'll select that, hook up the Viewer.
04:16Now I have two spheres here.
04:21When I play the Animation, you can see that both particle emitters have
04:25exactly the same settings.
04:27So I'll open up just the particle emitter for the sphere on the right to show
04:32you the randomized direction effect.
04:34It's right here, emit from, we're emitting from points, but no random direction.
04:41If I select randomize the direction, it's now emitting particles 360 degrees around
04:47every point and some of them are going inward and some are going outward.
04:52If I choose randomize outward, it only emits particles going outward from the geometry.
04:59Okay, we are done with that, save this, come over here, turn on these, clear the
05:07Property bin and hook up to this.
05:09I am going to open a cylinder;
05:12this is a really kicky new feature.
05:16This is the transfer velocity feature. Here is the idea.
05:21I have this cylinder and with the GeoSelect node I am just selecting the points
05:25on the tip of the cylinder.
05:27So now I've set the ParticleEmitter to emit from points only from selected points.
05:34So only the tip of the cylinder is now emitting particles.
05:37Now right now the particles just spawn in place and the cylinder moves away from
05:42them, so they just sort of hang in space.
05:45The transfer velocity is right down here, transfer velocity.
05:50The idea is the spawned particles are going to pick up the velocity of their
05:54emitters and that will be added to whatever velocity you give the particles.
05:59So if transfer velocity is set to 0 and I change it to 1, now the particles
06:04are being moved with the emitter and now the particles are like flinging off the
06:08end of the cylinder like they were water drops.
06:11Now if you set the transfer velocity to a silly number like 3, this gets you a weird effect.
06:16This is giving the particles three times the velocity of their traveling
06:20emitters that might be useful for an effect somewhere.
06:22We'll put that back to something a little more reasonable.
06:24Next is the transfer window: to calculate the velocity of the emitter, you need to use
06:31more than one frame.
06:32By default, the transfer window is set to use one frame in front and one frame
06:37behind the current frame to calculate the velocity.
06:42We can change that to a higher number, let's say 10.
06:45That means it's going to take 10 frames before and 10 frames after to average
06:50the velocity and that considerably attenuates the acceleration you get from
06:54your traveling emitters.
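If you wanted to set those two controls from a script, it would look something like the sketch below. Both knob names are guesses based on the Property panel labels, which is why the code looks them up before setting anything.

    import nuke

    emitter = nuke.toNode('ParticleEmitter1')   # assumed node name

    # Assumed knob names; print(emitter.knobs().keys()) shows the real ones.
    for name, value in (('transfer_velocity', 1), ('transfer_velocity_window', 10)):
        knob = emitter.knob(name)
        if knob is not None:
            knob.setValue(value)
        else:
            print('Knob not found, check the Property panel:', name)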
06:56Nuke's particle system is a true 3D system;
07:00perfectly integrated into Nuke's 3D environment.
07:03These new particle emitter features expand the range of problems that Nuke's
07:07particle system can solve.
20. The New Displacement Node
Setting up displacements
00:01The DisplaceGeo node will displace a polygonal surface.
00:04But you must first create a lot of polygons which makes it a heavy render.
00:08A displacement shader works differently, by creating needed polygons on the fly
00:14only as they enter the render window.
00:16In previous versions of Nuke, the displacement shader was embedded in the
00:20ScanlineRender node, but in Nuke 7 it's been moved to a separate node,
00:24the Displacement node.
00:29So the images we're using are in the Exercise Files, so you can play along too.
00:32Put this back where it belongs.
00:34Now let's take a look at our 3D setup.
00:37We have a card and a camera.
00:40We'll go over to our 3D view.
00:42So here is our card and our camera, and you'll notice the card only has four
00:47polygons along here and four there for a total of 16 polygons for the whole
00:51card, in other words, a very low polygon count.
00:55Now let's add our 3D shader.
00:57We'll select our texture map image, go to 3D>Shader>Displacement, and nothing happens.
01:06The reason is we haven't hooked up any displacement yet to our picture.
01:11So let's go back to the 2D view.
01:13We don't see any displacement in the image yet, because we've not yet hooked up
01:18our displacement map, which goes on this input right here.
01:22Now normally you might take the image that you're using as the texture map, make
01:27a luminance version;
01:28paint it in Photoshop in order to get the altitudes, the elevations that you want.
01:32But in this particular case, this image happens to make a very fine displacement
01:37map all on its own, so we will use it directly.
01:40And there we have our vertical displacement based on the luminance values of this map.
01:46So let's come up to the Displacement tab and see what we got.
01:49First of all, the displacement channel is talking to this input right here, your
01:54displacement image and whether you are going to use the luminance of it or maybe
01:58choose one of the channels or perhaps an average of all three channels.
02:02In our case, the luminance version works just fine.
02:05The scale factor, this is how much displacement you're going to get.
02:09So let's set that from 0.1 to something smaller, like 0.05. Boom!
02:14Much less displacement, 0.1, 0.05, back to 0.1.
02:17Next is the filter size.
02:21Now this is actually a blurring operation, it's being done on the displacement
02:25image, not the texture map image here.
02:29This is actually applying a blur and it knocks out some of the fine detail that
02:33would tend to move your polygons more than you want.
02:35Let's see what happens if we change the filter size from 5 to something
02:38larger like 20, so we're really blurring and softening the detail on the displacement map.
02:45See the difference?
02:46I'll put it back to 5, much more detail;
02:50back to 20, much smoothed out.
02:52We'll set it back to default.
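For completeness, here is how those two knobs might be set from Python. The knob names 'scale' and 'filter_size' are assumptions read off the panel labels, so confirm them first.

    import nuke

    disp = nuke.toNode('Displacement1')   # assumed node name

    # Assumed knob names; print(disp.knobs().keys()) shows the real ones.
    disp['scale'].setValue(0.05)          # less displacement than the default 0.1
    disp['filter_size'].setValue(20)      # heavier blur on the displacement map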
02:55Now the build normals option here is designed to be used if you hook up
02:59a normals image here.
03:01This input will take a normals map and then you would turn off build normals.
03:06Now the normal expansion pop-up wakes up, it was ghosted out before and you can
03:10choose the normal expansion mode of either XY or XYZ.
03:14In our case, we're going to leave the build normals on, in order to build the
03:18normals ourselves with our own displacement map.
03:21Once the Displacement tab is roughed in, we can switch to the Tessellation tab
03:24to see the rules for the actual polygon generation.
Dialing in the tessellation
00:01Tessellation is the process of subdividing the geometry into smaller
00:04triangles for rendering.
00:06The Tessellation tab allows you to set the parameters for the subdivision.
00:11Now tessellation is a triangular subdivision process, so let's take a look at the 3D view.
00:16Now if we look through the camera's view, we'll see there is no displacement at
00:22all, let me back out a little bit, and you can see that our camera view only
00:27covers this part of the geometry.
00:29There's no displacement of the geometry, because the displacement shader is a
00:32render event, not a geometry event.
00:34Let's go back to our 2D view so we can see the actual displacement of the
00:39final rendered image.
00:41Now let's talk for a minute about what tessellation is.
00:43I've made this little demo for you.
00:46Tessellation is the process of subdividing the polygons into triangles; remember,
00:51our card has only a 4x4 array of these polygons here, so tessellation will first
00:57divide each one into triangles, then subdivide it again for a subdivision of
01:04two, subdivide it again for a subdivision of three, and then subdivide it
01:10again and so on and so forth; each time we increase the subdivisions, we get a finer
01:13and finer polygonal mesh.
01:16So that's what this max subdivision value is: how many times you're going to
01:20subdivide the polygon, and this sets an upper limit.
01:22For example, the card we have has a 4x4 polygonal array for a total of 16 polygons.
01:28If the subdivision is set for 3, we're going to get 2048 polygons.
01:34So you have to be careful, you can easily overwhelm the rendering time with
01:38far too many polygons.
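One way to reproduce that 2048 figure: each quad is first split into 2 triangles, and every subdivision level after that splits each triangle into 4 (assuming the initial triangulation itself doesn't count as a level).

    # 4x4 card = 16 quads, each quad tessellated into 2 triangles to start.
    quads = 4 * 4
    base_triangles = quads * 2              # 32 triangles before any subdivision

    max_subdivision = 3
    triangles = base_triangles * 4 ** max_subdivision
    print(triangles)                        # 32 * 4**3 = 2048 polygons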
01:39All right, we'll go back to our 2D view and take a look at what happens when we
01:43change the max subdivision.
01:44A moment ago we had 4;
01:46you can see we have lots of fine polygons.
01:50If I drop that back to 3, we get a smoother surface.
01:54If I lower it to two subdivisions, the surface smoothes out even more, so
01:59we have fewer and fewer polygons as we walk down the subdivision tree, I'll
02:04put that back to 4.
02:05The next parameter you want to adjust is the pixel edge length, by default that's 20.
02:10Here's what that number means.
02:12Every 20 pixels, it's going to create a new polygon; now here's the importance of that.
02:18The same 20 pixels is used in the front of the picture as it is back here in the
02:22background; that means there'll be the same number of polygons back here in the
02:26background as there are here in the foreground.
02:29That would not be true if you took a high density mesh and laid it down, you
02:34would have far fewer polygons in the front, than you have in the background.
02:38So this is the whole idea of the displacement shader, the number of polygons
02:43stays consistent across the whole scene, whether it's in the background or the
02:47foreground, thus reducing your rendering time.
02:50So let's see what happens when we take the pixel edge length from 20 down to let's say 10.
02:54Now that means we're getting a new polygon every 10 pixels in the foreground and
03:01across the background.
03:02So our geometrical mesh is now starting to conform more accurately to
03:06the displacement map.
03:07Let's see what happens if we go from 10 down to 5.
03:12Now watch the background region of my polygonal mesh when I set the pixel
03:15edge length back to 10.
03:19You see, the background twitches, because the background part of the picture was
03:23not yet finely divided enough, so going from an edge length of 10 down to 5, I
03:28got a change, but watch what happens when I take the 5 down to 3, no
03:34change, so I had found the magic point of 5 pixels.
03:38Now going back to our Displacement tab, if we take the filter size, which
03:43remember, that's a blur on the displacement image input only, and if we take
03:47that back up to a number like 20, it really smoothes out the geometry.
03:52This has no effect on how many polygons you are generating, only how smooth the
03:57displacement map is, back to the Tessellation tab.
04:01Now let's take a look at the mode, this is the Polygon Generating mode.
04:05By default it's set for screen, that means it's going to generate polygons only
04:10as they fit into the screen.
04:13Uniform is going to create a uniform polygonal mesh, now this is inefficient,
04:18because you're going to have a lot of polygons in the background, and fewer in
04:21the foreground, but this can be faster for initial setup.
04:24Adaptive is for a situation where the displacement has large flat smooth
04:29areas, so you can have a lot fewer polygons there, so it'll adapt to the
04:33complexity of the surface.
04:34For example, if you had buildings, this would be a great time to use adaptive,
04:38because you have large flat areas.
04:40When you select adaptive, you get another set of parameters to adjust, we'll go
04:44back to our screen mode.
04:46So how do you know you've got the right balance of settings?
04:49Well, the card geometry precision, versus the max subdivision, you want to set
04:53those two parameters for your best render speed.
04:56Next, you want the filter size to smooth out the terrain, and then finally, the
05:01pixel edge length to retain the detail in the displacement.
05:05And by the way, because the displacement shader is a shader, you can take other
05:10shaders and put them in the stack and add some sophisticated lighting models.
05:15The bottom line is use the Displacement Shader node to optimize your
05:19render times for high precision meshes, like those used in stereo 3D
05:23conversion projects.
21. The New ModelBuilder Node
Understanding the workflow
00:01ModelBuilder is a major upgrade to the old Modeler node which is now gone.
00:05Using a clip with a solved camera you can build basic geometry to project all or
00:10parts of the scene onto.
00:12The ModelBuilder is a NukeX only node, but the geometry and projections you
00:16create can be used in regular Nuke.
00:18You'll find the script in the Exercise Files, ModelBuilderScript.nk, which
00:23already has a nice camera solve and PointCloud ready to go; this will save time.
00:28So, I'm going to ping pong my timeline, which I like to do when I'm tracking, to
00:32take a look at the little sample clip we have here.
00:35This is a little test clip that I created that makes for a very easy camera
00:39track, so everybody has a happy experience.
00:40Okay, we'll stop that and jump to frame 1.
00:43So let's take a look at our camera solve, I'll open up the Scene node, we'll
00:47jump to the 3D view and there is our solved scene.
00:52Now you do not need a PointCloud, but you do have to have a TrackedCamera.
00:57We'll go back to the 2D view and see how to hook up the ModelBuilder node, I
01:02don't need the PointCloud, so I'm going to close that Scene node Property panel.
01:06The ModelBuilder node lives on the 3D tab, Geometry>ModelBuilder, and we
01:13hook that into our Read node and Viewer, and the cam input of course goes to the camera.
01:18Now to wake it up you might want to slide your cursor into the Viewer, we're now
01:22seeing our clip with our TrackedCamera.
01:25So the mission is I want to put in a piece of geometry, just a card to track it
01:30on top of Marci here, to replace that picture.
01:33So the first thing I do is I select the frame that gives me the best view of my
01:37target, which in this case would be frame 1.
01:39Next, I'll come up here to the Shape List and you get to pick which kind of
01:45shape works best for your target.
01:48For a building you might use the Cube, in this case I just need a Card.
01:52I get the little plus cursor and then I can just click to plant the Card.
01:56Now the first thing I like to do is do a basic resize of my geometry so it's in
02:01the ballpark, so we'll select the Edit mode, come to the pop-up and say I want
02:07to edit my object, which is the Card. If I click on it, it turns green.
02:11Now we get to use the new onscreen 3D interactive scaling commands that are now in the Viewer.
02:18Put that over here and that just basically roughs it in, makes it easier to do
02:22the final alignment.
02:24By the way, the 3D grid is kind of in my way, so I'm going to go to the Viewer
02:28setups, select 3D and turn off the grid and then I'll close that Property panel.
02:33With the geometry roughed in, now we can go for the alignment, and here is our
02:38Alignment tool here, and the most important thing is the first point that you
02:43pick, you need to choose a corner point that basically allows you to get things
02:47lined up, do not choose an interior point in here.
02:50I'm going to go for this corner right here, click and drag and place it over there,
02:55notice we're getting a lovely little zoom window, look at the crosshair in
02:59there and that helps me line things up.
03:01Okay, I'll come over here, click and drag and I get my zoom window, so I can
03:06line things up real pretty, and we'll line this one up and line that one up. Okay.
03:13I have my geometry positioned on frame 1, my first keyframe.
03:17Now I'm going to roll the playhead out and find another frame to set my second keyframe.
03:22So I'll drag the playhead out here to about frame 30 where my target is almost
03:27going to leave frame.
03:28Now all I have to do is realign it on this frame, watch what happens when I
03:32click and drag on my control point, I get this purple line.
03:36Normally, all you have to do is slide your point along the purple line and
03:39it'll be lined up nicely.
03:41Notice we're good up to here, I'll click this point, I get my purple line, and I
03:46line that one up and I'll click this point, I get my purple line.
03:51Now, if you ever have to go off the purple line, just hold down the Shift
03:55key, that allows you to deviate, but you don't want to do that, I'm going to undo that.
04:00So I now have two keyframes and if I scrub through the timeline between 1 and
04:0430, I got a nice track.
04:06So I'm going to set one more keyframe at the end, so I'm going to just jump to
04:10my last frame, I'll zoom in here, and yeah, I've got a little drift, I want to
04:15touch up that right there, okay, and then come up here and check him out.
04:20All right, let's say we like that;
04:25we now have a card beautifully tracked over the entire length of the clip.
04:29Now if you have geometry such as a PointCloud or some other modeling that you're
04:33going to use to help line things up, you can hook that in right here on the
04:37GeoNode, so I want to hook that up to my CameraTracker points, there are my 3D
04:42points, let's go take a look at what we've got.
04:45I'll set the Viewer to default, we'll back out and there is my card, and notice
04:50it's beautifully embedded in that wall, okay, lines up very nice.
04:54But again, you don't need the PointCloud;
04:56all you really need to have is the camera.
04:59To restore the ModelBuilder 3D view, you must do two things, you must
05:04select your TrackedCamera and lock the viewfinder, then and only then, will
05:09you get your picture back.
05:10If you have some lineup referenced geometry like this PointCloud, or some other
05:13geometry, you can turn it off right here by clicking the Pass Through Geo button,
05:18we're actually done with that, so I'm going to disconnect my Geo Input and
05:22select the ModelBuilder node.
05:24Now we're doing the simple match move case, where I want to export this card,
05:27put a texture map, and render it back on top.
05:30So to export the geometry, you come over to the Scene list and you select all
05:35the geometry you want to export, we just have the one Card.
05:40Then you click the Bake Scene Selection, click that button and you get another
05:44node, and this has the baked out geometry.
05:47We're done with our ModelBuilder, so we'll turn that off.
05:51Hook the viewer up to our new node, and here we go.
05:54Okay, this is the 3D view, we're seeing it rendered through our TrackedCamera.
06:00If we switch to 2D, we're going to need a ScanlineRender node of course, we're
06:06going to need to hook up to our TrackedCamera and then we'll need a little
06:13texture map, give it some pixels, here we go. Okay.
06:19So this is now our 2D render with a nice alpha channel, so all we have to do is
06:25attach a Merge Node and hook that back over the original clip, but we now have a
06:30match move replacing that image on the wall.
06:34We'll play that and have an admiration moment.
06:38And so the geometry is now beautifully matched moved with the original clip,
06:42we'll stop that, jump to the beginning.
06:47This demonstrated the basic workflow for a simple match move case.
06:51Next, let's kick it up a notch, and see how to use ModelBuilder to create
06:55something a bit more complex and use camera projection.
Modeling complex geometry
00:00You can use the ModelBuilder node to create geometry more complicated than a
00:04simple card that can be used for camera projection.
00:07Here's an example of the workflow; by the way, I'm using the ModelBuilderScript
00:10you'll find in the Exercise Files.
00:13So let's add our ModelBuilder node, go to the 3D tab>geometry>ModelBuilder, hook
00:20it into our source clip, hook the camera input to the TrackedCamera, nothing
00:26happens until we move the cursor into the Viewer.
00:29Again, we'll turn off the grid, so we'll select our Viewer settings, 3D grid off,
00:36close Property panel.
00:39Now it's time to take a look at the Shape Defaults tab.
00:43This tab defines the precision, the number of polygons for each of the shapes
00:46you're going to create.
00:47We created the card earlier and it had a 4x4, if I want that to be a 2x2 I could do that.
00:54So you can create the geometry here, just as well as you can from this popup,
01:00this popup list of course is going to use the settings over here on the Shape Defaults.
01:04So let's say I'm ready to create a Cube, I want to create a Cube for this box
01:09and camera project it.
01:10So I'm going to click Create here, notice my cursor is turned into the plus,
01:15come over to the Viewer and click to create my Cube.
01:19As before we'll do a rough edit to ball park the geometry where it belongs.
01:24So we'll come over and select the Edit mode, and then from this pop up we'll
01:28choose to Select objects for editing, which of course the Cube is an object.
01:34Select the Cube, I'll zoom in a little bit.
01:37Using the new 3D scale on screen controls, Command+Shift or Ctrl+Shift, I'm going
01:42to make my box a little smaller, bring it over here, turn it around, rough in
01:53the length, bring it over here.
01:55Here we go, that's pretty close, okay, that's a good beginning.
02:00Once I have a basic placement on my preliminary shape we're ready to do the alignment.
02:04First thing you got to do is make sure the playhead is where you want it. I want to
02:09start this on frame 1, switch to the Alignment mode, and again the first point
02:14is very important, so we're going to zoom in here and I'm going to pick this
02:19corner here because this is your basic placement point and put it, using the zoom
02:24window, right where I want it. Come over here. Select the next point. Position it,
02:31and I think we will put this point below here, bring this one down there, and
02:39the rest look pretty darn good.
02:41Note that whenever I edit a point it changes color; these are now purple, and a
02:47purple point means that point has been animated and you're sitting on a
02:51keyframe. If I move the playhead one frame, they turn blue.
02:55So blue means it's animated but not on a keyframe, and purple means it is on a keyframe.
03:01Okay, I want to select my next keyframe so I'm going to roll out here to let's
03:06say frame 40, reposition my box. Again, we get our purple line and usually all you
03:10have to do is slide along the purple line that's over here, I want to pull that in
03:16a little bit, and this guy up here that looks pretty good, okay.
03:21I now have keyframes at one and 40 and the box is tracking very nicely.
03:25We'll go to the last frame in the clip over here, I will zoom in, we'll pull this in a
03:31little bit here, pull that one in a little bit there, maybe tuck in the top here,
03:37check the other end, got to fix this, there.
03:41All the other points look just fine.
03:43We now have a box nicely tracked over the whole length of the clip.
03:48So now we're ready to export, I'm going to home the Viewer, to export the box we
03:54go back to the ModelBuilder tab, make sure we go to the Scene list and highlight
03:59everything we want to export and click Bake Scene Selection, boom!
04:04We get a Cube node.
04:05Open that up in the Property bin here and we see it on the screen, I can
04:11close the ModelBuilder node now.
04:13Now we're looking through our TrackedCamera into the 3D world and there is our
04:17new Cube for the box.
04:20Now let's set up the camera projection, I'm going to hook the Viewer node to the
04:24original clip and scrub through the clip, looking for the frame that gives me
04:29the best view of my target just like we did with the Marci picture.
04:32I'm going to say frame 20 gives me my best view of the side and the backend here.
04:37So I'm going to use frame 20 and project it on the box, we'll select the Read
04:41node, go to the Time tab, select a FrameHold right there, set it for frame 20.
04:49Now if I look at the FrameHold, I'm going to see frame 20, no matter where the playhead is.
04:55Next we'll select that and add our Project3D node from the 3D Shader menu, there's
05:02Project3D, which of course wants a camera.
05:06Now the camera that I need is going to be the TrackedCamera at frame 20.
05:11So let's copy the TrackedCamera, paste it up here, and by the way I'm going to
05:16rename that projection camera, ProjCam.
05:19So we can clearly distinguish between the two. I want this one held at frame 20
05:27as well, so we'll go to Time>FrameHold and set that one for 20 as well.
05:36Now if we switch to the 3D view take a look at our set up, so in the 3D view
05:42now as I scrub the playhead we can see our TrackedCamera and our static
05:46Projection Camera, so the Projection Camera is going to project frame 20 on to
05:51the box and the TrackedCamera is going to re-photograph it with the same
05:55moving camera as the clip.
05:57So let's look through our TrackedCamera's viewfinder and we can see now the box
06:03is moving correctly.
06:05So all we have to do is hook up the camera projection, so the Project3D node needs
06:10the camera that's held on frame 20 and we're going to then use a
06:153D>Shader>ApplyMaterial to hook up that projection to our Cube, and now we have it.
06:24So frame 20 is being re-projected on the Cube no matter what frame the
06:28TrackedCamera is looking at, to re-comp this over the original clip we'll need
06:32a ScanlineRender node, so we will select ApplyMaterial, 3D>ScanlineRender, hook
06:39up the camera input to the original TrackedCamera of course, hook our Viewer up to that.
06:52Now if I switch to my 2D view this is my rendered box from the camera
06:56projection. We're ready to comp this over the original clip, but just to show a
07:04difference, so we know what we're looking at, I'll add a Grade node and I'll just
07:07gain this down so it's a lot darker, that way we'll know we made some change to
07:12the picture. Alright, then we'll take that Grade node and add a Merge node to
07:15comp it over the original clip and now we have a camera projected box comped back
07:23on top of the original clip and of course we could have made whatever change
07:27we want in that box.
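Put together, the projection setup just described is a short node chain; here is a rough Python sketch of it. The node names, the frame number, the FrameHold knob name, and the input indices are all assumptions based on the walkthrough, not verified values.

    import nuke

    clip     = nuke.toNode('Read1')            # the source clip (placeholder name)
    trackcam = nuke.toNode('Camera1')          # the solved TrackedCamera
    cube     = nuke.toNode('Cube1')            # geometry baked out of ModelBuilder

    # Hold the clip and a branch of the camera on the projection frame.
    held_clip = nuke.createNode('FrameHold')
    held_clip.setInput(0, clip)
    held_clip['first_frame'].setValue(20)      # assumed knob name

    proj_cam = nuke.createNode('FrameHold')
    proj_cam.setInput(0, trackcam)
    proj_cam['first_frame'].setValue(20)

    # Project frame 20 through the held camera onto the cube.
    proj = nuke.createNode('Project3D')
    proj.setInput(0, held_clip)
    proj.setInput(1, proj_cam)                 # camera input (assumed index)

    mat = nuke.createNode('ApplyMaterial')
    mat.setInput(0, cube)
    mat.setInput(1, proj)                      # material input (assumed index)

    # Re-photograph with the moving TrackedCamera and comp over the clip.
    render = nuke.createNode('ScanlineRender')
    render.setInput(1, mat)                    # obj/scn (assumed index)
    render.setInput(2, trackcam)               # cam (assumed index)

    merge = nuke.createNode('Merge2')
    merge.setInput(0, clip)                    # B: original clip
    merge.setInput(1, render)                  # A: projected render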
07:29Exporting 3D geometry for camera projection is of course a key part of 3D
07:33compositing with Nuke, however if you had a lot of geometry to export then
07:38setting up individual camera projections for each piece of geometry like
07:41this would be tedious.
07:43Next we'll see what to do if you have a lot of geometry for camera projection.
Exporting geometry
00:00If you've modeled a lot of geometry then exporting it one object at a time and
00:05setting up individual camera projections would be a real time consuming project;
00:09here we'll see how to do it all in one go.
00:12Now I'm using the ModelBuilderScript2 which you'll find in the Exercise Files;
00:16here the geometry has already been modeled to save us some time.
00:20We're not seeing our clip with the wireframe, and we won't see it until we open
00:24up the ModelBuilder node itself, so let's double-click on the ModelBuilder node
00:28and switch to the Property panel, so I have a Node Graph here, Property panel there.
00:32So we're going to do a full up camera projection of the entire scene onto all
00:38the geometry we have.
00:40First thing we need to do is select our best frame.
00:42So we want a frame that shows us a good view of all the items of interest, which
00:47is going to be of course the Marci picture and our box.
00:50So I am going to use frame 20, so after we've selected our projection frame, in
00:55the Scene list, we'll make sure we've enabled everything we want to export.
00:59This time we'll click on Create Projection and the ModelBuilder node makes this
01:03backdrop, loaded up with all the nodes we need.
01:06Notice that it's marked frame 20 just like the timeline, and we have the frame
01:12holds for both the clip and the camera set to frame 20.
01:17So let's take a look at what we got here.
01:20this of course is our held frame, I scrub through that, no change.
01:25So now we want to put some effects into this projection.
01:27I need to put some graffiti on Marci and we're going to put a logo placement on the box.
01:32So let's come over here and move our frame hold up a little bit, we'll zoom in,
01:37and let's add a PaintNode right here.
01:39I'm going to use the PaintNode to put in my graffiti, so I'm going to go and select my
01:45Brush, let's select the lovely red color and I'll do my graffiti on Marci.
01:52Now notice if I move the playhead, my graffiti disappears, because the paint
01:57stroke is only valid on frame 46, so we have to go change the lifetime of that
02:02paint stroke, so we'll go to the Properties bin for the RotoPaint node and
02:07change the lifetime to all frames, back to the Node Graph.
02:13And now when I move the playhead, the graffiti doesn't disappear.
02:16For the logo placement, I've prebuilt a little tchotchke for you here.
02:20So all we have to do is select the CornerPin2D node and add a Merge node and
02:25slide it in, hook it up, and there's our logo.
02:29Again, this is my held frame, so no matter where the playhead is, I'm going to
02:33see my graffiti and my logo.
02:36Okay, my effects are all ready, so let's hook them into the geometry.
02:39I'm going to hook the viewer to the ApplyMaterial node, we jump to 3D and
02:44we don't see anything.
02:45First, we need to connect the geometry to this input right here, that of course
02:49will come from the ModelBuilder node.
02:52Still we don't see anything, what's going on?
02:55Let's open the ModelBuilder Property panel, and the issue is right here,
03:00display wireframe, remember, this is the 3D display, so we're going to switch that to textured.
03:06Ah, that's more like it.
03:09Now we're looking at our Textured 3D geometry, but we have this funny
03:13transparency thing going on, what's all that about?
03:16Well, if we go back to the Node Graph and take a look at the Read node, here is the issue.
03:22The ModelBuilder node wants an input clip that has a solid alpha channel, not a
03:273-channel image, so we'll open up our Read node, come down here, and click
03:33auto alpha, problem fixed.
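If you are fixing this from a script rather than the Property panel, the Read node checkbox can be set directly; 'auto_alpha' is my best guess at the knob name, so confirm it with read.knobs() first.

    import nuke

    read = nuke.toNode('Read1')            # the source clip (placeholder name)
    read['auto_alpha'].setValue(True)      # give the 3-channel clip a solid alpha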
03:37Now we can clear everything out of the Property bin, go back to our Node Graph
03:42and we're ready for a render.
03:45To do the render, we're going to need a ScanlineRender node, so we'll select
03:48ApplyMaterial, come over to 3D, select ScanlineRender, and I'll move the
03:55ScanlineRender node over here and the camera input of course is going to be our
03:59original TrackedCamera.
04:01So now if we switch the Viewer to 2D, so as I scrub the timeline, we can see we
04:06have our modified camera projection on the geometry, all we have to do is comp
04:12this over the original clip. So we'll select the ScanlineRender node, add a Merge
04:17node, and hook that back to the original clip, and we're done.
04:22You can use the ModelBuilder node like this to create geometry, then camera
04:26project whole streets or room interiors;
04:29however, we need to be able to make more complicated geometry than cards or cubes.
04:34So next, we'll take a look at how to add fine detail to the
04:37geometric primitives.
Editing geometry
00:00In the real world you need to model more complex shapes than cubes and spheres.
00:06ModelBuilder comes with a complete comprehensive suite of editing tools to allow
00:10you to refine the shape, and add details to the geometric primitives.
00:13I am using the ModelBuilderEdit.nk script, which you will find in the Exercise Files.
00:19It has the prebuilt geometry and a TrackedCamera to work with.
00:24To see the ModelBuilder Node, don't forget, you have to do two things, first,
00:28turn on TrackedCamera, and second, lock the viewfinder.
00:34Now I am going to gain down the viewer a little bit, just so our white lines
00:38show up a little better.
00:39I'll double-click on the ModelBuilder Node to open it up and we get all of our tools.
00:44First, let's take a look at how to edit vertices. Again, we have an align mode
00:49here and you can see how the points color up for the align mode, and we have the
00:53edit mode here where the points are not lit up.
00:56When you are in the edit mode you get to choose between editing vertices,
01:00edges, faces, or entire objects, so we're going to look at the vertices.
01:03As I click on a vertex, it lights up and I get my cardinal coordinates, so you
01:09can just choose whichever vertex you want.
01:10Now here is a critical point, you want to move the vertices using these cardinal
01:16axes, this moves it in Y, this moves it in X relative to the original shape.
01:22In other words, this point is still coplanar, it hasn't moved off the plane
01:27of the card, and this is a critical issue.
01:29Let's take a look at it in 3D.
01:30So as you can see the vertex I moved is still perfectly coplanar with the rest of the card.
01:37Let's go back to our ModelBuilder view by TrackedCamera and lock viewfinder.
01:45However, if you make the mistake of grabbing a vertex and just pulling on a
01:50central point like this, you will have pulled it out of alignment. There, look at that.
01:56Okay, so we want to be very, very careful.
02:01We'll go back to our ModelBuilder view.
02:03I am going to undo that and let's push in a little bit more here, so I can show
02:10you how to select with the edges.
02:16When you select an edge, it turns blue and you get your cardinal axis again.
02:20So, here's this edge, and there is that edge, and that edge there.
02:24Again, I can translate the edge, but there's another thing you can do with an
02:31edge selected and that is to subdivide it.
02:33So, I am going to select this edge and then with the edge selected, right mouse
02:38pop up and select subdivide.
02:42And what it does is it plants a vertex in the midway point of that line.
02:46So, what I'm going to do here is switch to selecting vertices, grab my new
02:53vertex and move it up and down, click to the side to deselect.
03:00Another very important tool is the Carving tool.
03:03Let's see how to carve.
03:04Now you can carve whether you are selecting vertices, edges, or faces, so I am
03:11just going to pick edges.
03:12I select this edge and with the edge selected, I then select the carve mode and
03:19when I click in any face, it turns red.
03:21So, I'll click to other face, another face and another face, that red outline
03:26tells you you're in carve mode.
03:28carve allows us to personally subdivide the polygons anyway we want.
03:32To get out of the carve mode hit Return. All right,
03:35I am going to go back to the carve mode and show you what to use it for.
03:40In the carve mode you can divide a polygon up any way you want.
03:45So, with this polygon selected red, I can select any place on any edge and
03:50insert a vertex like this, then I can go to any other edge anywhere I want and
03:55click a second time, and I have now subdivided it or carved it anyway I want.
04:00I'll click off to the side to deselect.
04:03More than that, I'm going to select this guy to carve and I can insert a control
04:09point here, and here, and here, and here, and here, and there, and there.
04:14I do not have to just cross over to another edge.
04:18Deselect, this is now a whole new polygon.
04:22So, I can go over here and select Selecting faces, and there it is.
04:27So, the carve mode would be extremely valuable for you to draw your own polygonal
04:32edges exactly where you need them on top of the image.
04:35Now I am in Face edit mode --you can see I have the face icon here-- so that means
04:40any face I click on turns blue, let's take look at what we can do with faces.
04:44Obviously, I can translate them in X or Y --undo, undo-- but I can also pull them
04:49out in Z. Now I have translated this face out, there is no polygon here, I can
04:55show you that if I set the display to solid, okay.
04:59So, I have polygons everywhere, but there's no polygon here, so I am going to undo, undo.
05:04However, with the face selected, if I go right mouse pop up, extrude, now it's
05:10going to extrude this and actually build a polygon here, in fact, I can show
05:15that, by showing you the solid again, there you go.
05:23I'll undo, undo that.
05:25Another thing we can do in the Face mode is to merge polygons.
05:29For example, I am going to select this face Shift+Click to select that one, so
05:33Shift+Click will allow you to pick multiple elements, then right mouse pop
05:38up, merge and those two polygons have become one, or I can tessellate it.
05:45I'll select this guy again, right mouse pop up, tessellate, triangular fan, this
05:52will allow you to subdivide polygons to add a lot more detail.
05:55And of course, if I switch to my vertex mode, I could select this vertex and I
06:00could pull it out the Z and build myself an extruded section.
06:06Next, let's look at Bevels.
06:07I am going to zoom out here, let's go over to our cube, our box, our window box.
06:14Very few things in the real world have sharp edges like this box, so usually
06:18you're going to want to add a nice bevel to it.
06:20So, let's go to select edges, and now I can select the edges on my box. With edge
06:26selected, right mouse pop up, bevel, notice I get a little bevel here.
06:33Now you can control how large that bevel is right here, relative insert. If I walk
06:38this up, it gets larger, smaller.
06:41You can also control how rounded it is over here with the round level, set that
06:45to 1, 2, 3, 4, as much as you want. So this gives you a very elegant and simple
06:52way to bevel or add roundness to the corners of things.
06:55The last thing I want to show you is the edge loop and edge ring, but I am going
06:59to need to add a piece of geometry for that.
07:01So, I am going to re-home the Viewer, I'll go over to my Object Creation and I
07:06am going to say I want a cylinder and click in the middle of the picture, switch
07:11to the Editing Mode, Select objects, and then I will scale this guy down with
07:18the on-screen scale control jack we know and love.
07:21Now I am going to switch to selecting edges, deselect over here, come in here,
07:29so I can show you this.
07:30I am going to select this edge right here and right mouse pop up, if I select
07:35edge ring, I am going to get this entire set of polygons around here, and then we
07:40can translate that if we wish.
07:42I'll undo that. Or, I could select edge loop, which gets this ring here and
07:50then we can translate that and even rotate it.
07:55The very powerful polygon editing tools in the ModelBuilder Node will make it
07:59possible to build complex scene geometry for sophisticated camera projections
08:03without turning to the 3D department.
22. TimeOffset Node New Features
Setting up and operating
00:00Previous versions of the TimeOffset Node could only shift the timing for 2D elements.
00:06The updated TimeOffset Node in Nuke 7 can now also shift the timing for 3D objects.
00:12The setup here is I have a little clip that just has numbered frames.
00:17So if I take that Read Node and go to the Time tab and select a TimeOffset Node,
00:22if I set the TimeOffset by let's say 25 frames, now when I move the playhead
00:30nothing happens till I get to frame 25 and then off it goes, so that's exactly
00:36how the old TimeOffset Node worked.
00:38Now let's take a look at the 3D setup that I have, we will switch over to the 3D view.
00:44So here is my 3D scene setup, I just have these three cards and they all have a
00:48little animation on them.
00:49Notice that the animation on the geometry starts on the very first frame, but of
00:54course the Read node doesn't start updating until we get to frame 25, but now let's
00:59see what happens if we move the TimeOffset Node to a piece of geometry.
01:03I am going to select the TimeOffset, pop it out with Shift+Command+X, you all know
01:08that one, bring it in here and hook it up to just one card.
01:12So now card 1 has got the 25 frame offset, so if I scrub the playhead, the
01:18other cards move, but card 1 doesn't move until we get to frame 25.
01:22So the TimeOffset Node has shifted the animation of that piece of geometry,
01:28but there's something else we can do.
01:30I'm going to pop out that TimeOffset Node, hook it to the entire scene, and now
01:39the entire scene has a 25 frame offset for all the geometry animation.
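The same re-wiring can be done in Python. A minimal sketch, assuming the knob is called 'time_offset' and that the nodes are named Card1 and Scene1:

    import nuke

    scene = nuke.toNode('Scene1')
    card1 = nuke.toNode('Card1')

    offset = nuke.createNode('TimeOffset')
    offset['time_offset'].setValue(25)     # shift everything downstream by 25 frames

    # Slip the offset between one card and the scene to delay just that card;
    # hook it between the whole scene and the renderer to delay everything.
    offset.setInput(0, card1)
    scene.setInput(0, offset)              # replaces whatever was on that scene input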
01:47In addition to offsetting the timing, you can also reverse the timing of
01:51your clips and geometry.
01:55You can now use the TimeOffset Node to shift the timing, as well as reverse the
01:59animation, for both 2D and 3D objects.
23. New Shadow-Casting Features
Setting up and adjusting
00:01ShadowCasting has been in Nuke for a while now, but Nuke 7 added a couple of new
00:05features that we will take a look at here.
00:08In addition to that there are a couple of issues you need to know about to avoid problems.
00:13I'm using the ShadowCasting.nk script that you'll find in the Exercise Files to
00:17help speed things along.
00:20The first thing to know is that a shader is absolutely required in order to do any ShadowCasting.
00:25So, let's open up the Spot light and switch to the Shadow tab.
00:30The first thing you have to do is of course turn on ShadowCasting.
00:35Now one of the new features in Nuke 7 is that each piece of geometry has a
00:39shadow cast and a shadow receive control.
00:41So, let's go to the card and open that up and right here you'll find the cast
00:46shadow and receive shadow controls.
00:49With cast shadow turned off it does not cast a shadow on anything, and with receive shadow off it doesn't
00:53receive a shadow, and oops, our shadow went away.
00:55So, for the card, we want to leave receive shadow on.
00:59For this sphere, same thing, with cast shadow turned off it will not cast a
01:05shadow, and with receive shadow turned off, it won't receive any.
01:09So, we're done with these.
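If you want to flip these per-object controls from Python, a sketch follows; the knob names 'cast_shadows' and 'receive_shadows' are assumptions based on the control labels, so check them against your own nodes.

import nuke

# Sketch: per-geometry shadow controls. Knob names are assumed from the panel labels.
card   = nuke.toNode('Card1')
sphere = nuke.toNode('Sphere1')

card['receive_shadows'].setValue(True)     # the ground card catches the shadow
card['cast_shadows'].setValue(False)       # but does not need to cast one

sphere['cast_shadows'].setValue(True)      # the sphere is the caster
sphere['receive_shadows'].setValue(False)  # turning receive off also hides its self shadowing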
01:10I am going to close both the sphere and the card back to our spot light Property panel.
01:16First up I would like to call your attention to these little marks right here,
01:20these are self shadowing marks.
01:24We can eliminate those by adjusting the bias parameter. If I increase the bias
01:29it moves the sample point away from the surface of the geometry so it doesn't
01:33cast a shadow on itself.
01:35However, that can introduce artifacts in some situations (I want to put that
01:39back), so let's take a look at the slope bias.
01:43The slope bias addresses the self shadowing by looking at the angle between the
01:47geometry and the light.
01:49So, if turning up the bias introduces problems, turn it back down and try
01:54using the slope bias instead; you juggle the two until things look right.
01:58I want to put that back to default and of course our self shadows appear
02:01again. To show you this,
02:03we will open this sphere again.
02:04If it's not going to receive any shadows, turn receive shadow off and the self
02:09shadowing disappears.
02:10All right, back to our Spot Property panel.
02:14So, let's take a look at how to make the shadows soft.
02:18Soft shadows require three adjustments, the samples, the jitter scale, and the
02:23depthmap resolution.
02:26If I just crank up the jitter scale, I don't see any softness; I have to
02:31increase the number of samples along with the jitter scale before we start to
02:35see some softness. There we go, let's zoom in a little bit.
02:41The jitter scale controls the thickness of the soft edge, whereas the samples
02:46control the quality, how smooth it is.
02:49If I turn samples down to 6, see I got the uglies.
02:51I'll turn that back up and set it for a higher level.
02:55But of course, as I increase these numbers my render time goes up.
02:59If your shadows are looking a little crunchy, you can increase the depthmap
03:03resolution and that will smooth it out.
03:06So, you juggle the three parameters, samples, jitter scale, and depthmap
03:10resolution, to get the shadow you want; the reason the settings differ
03:14is that they depend on the scale of your geometry.
03:17You will want to balance these values between the best look for your shadow
03:21and your best render time.
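As a rough Python sketch of the same juggling act, with the spot light name and the knob names ('samples', 'jitter_scale', 'depthmap_width') assumed from the control labels:

import nuke

# Sketch: soften the spot light's shadow. Node and knob names are assumptions.
spot = nuke.toNode('Spotlight1')

spot['samples'].setValue(16)           # more samples = smoother, cleaner soft edge
spot['jitter_scale'].setValue(3)       # larger jitter = thicker soft edge
spot['depthmap_width'].setValue(2048)  # higher depthmap resolution = less crunchy shadow
# Each increase costs render time, so balance the look against the render cost.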
03:23I am going to re-home my Viewer and turn on the cylinder here.
03:33Now the shadow from the sphere is landing on the cylinder and the cylinder is
03:36casting a shadow on the card.
03:39So, if I turn off cast shadows, then the cylinder will no longer cast a shadow on
03:44the card, and if I turn off receive shadows, the sphere shadow will no longer
03:48fall on the cylinder.
03:49We'll turn that back on and disable the cylinder and we will close the Property
03:56panel, back to our Spot light.
03:59Next, I would like to show you an idiosyncrasy of the directional light that
04:03you really need to be aware of.
04:04I am going to turn off the Spot light;
04:06disable that, and enable the directional light, and open it up in the Property panel.
04:10Now there is something seriously wrong with our shadow here. Here's the deal: the
04:15directional light has to be scaled up large enough so that it sees all of the
04:20geometry you're trying to cast the shadow from.
04:22So, let's take a look. We'll switch to the 3D Viewer, punch up our directional
04:28light, lock the viewport, and we will back out a little bit.
04:34As you can see the directional light only sees this much of the geometry, and
04:39that's why we get a partial shadow.
04:41So, we go to the directional light Property panel, we go to the uniform scale,
04:45and I'll slide up the uniform scale until the geometry is completely encompassed
04:51in the viewport of the light.
04:53Now we'll switch back to 2D and now we have a complete shadow.
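Scripted, the fix is just a matter of scaling the light up; 'DirectLight1' is a placeholder name, and 'uniform_scale' is the usual scale knob on axis-style nodes.

import nuke

# Sketch: enlarge the directional light until it sees all of the shadow-casting geometry.
direct = nuke.toNode('DirectLight1')   # placeholder name
direct['uniform_scale'].setValue(10)   # scale up until the geometry fits inside the light's viewport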
04:57You can cast shadows from spot lights and direct lights in Nuke, but not from
05:01point lights without using Renderman.
05:03Next, we'll see how to cast semitransparent shadows.
Casting semitransparent shadows
00:00The solid shadow works fine for solid geometry, but if you want to cast
00:05semitransparent shadows, you have to change the light's shadow mode to use the
00:08alpha channel as a transparency mask for the shadow.
00:11I am using the ShadowCasting.nk file, which you will find in the Exercise Files folder.
00:17Our semitransparent element is this leaf here and we will look in the alpha
00:22channel, there is our semitransparency.
00:25Back to our Scanline render, let's take a quick look at the 3D setup.
00:29So I will switch to 3D view.
00:32So I have this card floating over the other card and the top card has the leaf on it.
00:37We have our camera and our light.
00:39Okay, back to the Scanline render node.
00:43So we have a Phong shader on top of the card, so it will receive the shadows and
00:47another Phong shader on top of the floating card, so it will cast a shadow.
00:51And we have hooked up our leaf to all the inputs.
00:54Now here is the big issue you have to keep in mind.
00:57It is this unlabelled arrow input here that receives the alpha channel for the transparency.
01:03If you don't have it connected, you are going to get a solid object.
01:08So you must hook up a semitransparent image to the unmarked input, as well
01:12as to the diffuse, specular, and emission inputs if you would like to adjust those for your lighting.
01:17Now with a semitransparent object, the Shadow mode has a couple of options
01:21you want to know about.
01:21I will open up the Light4 Property panel, switch to the Shadows tab, and right
01:28here is the shadow mode; we are using full alpha, which means it uses the entire
01:33alpha channel to determine the semitransparency and you get variations in
01:37lightness and darkness.
01:38If we use the clipped alpha, you see the alpha channel has basically become
01:43binarized into transparent and opaque.
01:46Now there is a threshold here, the clipping threshold; you can lower it to get a
01:51more solid alpha channel or raise it up to make it more transparent.
01:54The solid shadow mode is for solid geometry, and as you can see, we have
02:00completely lost all of our transparency.
02:02So I am going to put it back to full alpha, it looks very nice.
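For reference, the same switch can be made from Python; the 'shadow_mode' knob name and the value strings are assumptions based on the labels in the Shadows tab.

import nuke

# Sketch: choose how the light interprets the caster's alpha. Knob name and values are assumed.
light = nuke.toNode('Light4')

light['shadow_mode'].setValue('full alpha')      # semitransparent shadow from the full alpha
# light['shadow_mode'].setValue('clipped alpha') # binarized alpha, with a clipping threshold
# light['shadow_mode'].setValue('solid')         # opaque shadow for solid geometry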
02:06Now sometimes you want to output the shadow mask itself, so you can process it
02:12and do your own thing with the shadows.
02:15And the way we do that is to enable the output mask.
02:17I am going to send the Shadow mask through the mask.a channel just because
02:22it's quick and easy.
02:23We will go over here and set mask.a into the viewer's alpha channel, then
02:33switch the viewer to the alpha channel, so I can now look at my shadow mask and
02:38back to the RGB render.
02:40Nuke's ShadowCasting uses Depth Map Shadows, not Raytraced Shadows.
02:44Depth Map Shadows are much faster to render, but don't look quite as
02:48realistic as Raytraced.
02:50If you want Raytraced Shadows, then you'll need to use the Renderman option.
24. The New Relight Node
Setting up and operating
00:01The long-awaited ability to do normals relighting is now here with Nuke 7, using
00:05the new RelightNode.
00:07Given a normals pass, a point position pass, and the original CG camera you can
00:13Relight an RGB image.
00:15I am using the Relight.nk script from the Exercise Files, which also contains
00:20all of the images here.
00:21Now you have to have your normals and point position passes,
00:25and I have them right here. My normals pass is in the layer called norms and my
00:31point position pass is in the layer called ppos.
00:36If your normals or point position passes are in separate files, then use a
00:40ShuffleCopy node to slip them into the color image data stream; this render
00:45obviously has them all right there.
00:47So, this is our CG render and we look at the composite.
00:54So what happened here is, the director decided to make this a day for night shot.
00:58So, bang!
01:00The CG render is no longer any good.
01:02So, we're going to Relight this CG render for a night lighting.
01:07We will find the RelightNode in 3D>Lights>Relight.
01:16The color input gets hooked up to our RGB image, and the lights input goes to the
01:21lights; we're going to use 3D lights to Relight the RGB image.
01:27Now if you just have a single light you can hook it directly up to the lights input.
01:32But since in this case I am using two or more lights, you hook them both up to a
01:36Scene node, and the Scene node hooks up to the lights input.
01:39Once you've hooked up the lights you get another arrow and that's for the
01:42camera, so we will hook that up to our camera.
01:45This is the camera that did the original CG scene render.
01:50Once you've hooked up the camera you get one more input arrow which is
01:53the material input.
01:55So, for this we're going to use a phong shader.
01:58So, we'll go to 3D>Shader>Phong and hook that up to the material input like so.
02:07We will work with the Phong shader later.
02:11Right now I want to talk about the Relight property panel.
02:14The first thing you have to do is tell the Relight where to find the normal
02:17vectors and the point positions.
02:19Our normal vectors are in a layer called norms and our point positions are in
02:24the layer called ppos.
02:26Now this use alpha button up here is for a situation where you might want to
02:30mask off just one part of the image for relighting; that mask goes into the
02:35alpha channel, and this tells the node to use the alpha channel to mask off the relighting.
02:40We're going to Relight the whole thing so we don't need that here.
02:43Now let's take a look at the output of the Relight node.
02:49This isn't very useful.
02:51This is not the relit version of the RGB image;
02:54this is in fact the lighting passes that have been produced by the Phong shader,
02:58the camera, and our lights
02:59by using the normals layer and the point position layers.
03:05In order to apply the lighting pass we get out of the Relight Node to the
03:08original RGB image we're going to have to multiply them together.
03:11So, I am going to add a merge node, hook it up to the original RGB image, and set
03:19the operation to multiply.
03:22I have now relit the original RGB render.
03:25I can toggle that on and off here and you can see how that looks when we do
03:30the composite here.
03:32I will switch this to be my A side and then we'll look at our night
03:37composite, much better.
03:40And we didn't have to send it back to the 3D department.
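Here is a rough Python sketch of the whole Relight wiring; the input order on the Relight node, its layer-picker knob names, and most node names are assumptions, while the 'norms' and 'ppos' layer names come from this render.

import nuke

# Sketch of the Relight setup. Input order and knob names on the Relight node are assumed.
render = nuke.toNode('Read1')      # CG render carrying rgba plus the norms and ppos layers
camera = nuke.toNode('Camera1')    # the camera that rendered the original CG scene

moon = nuke.toNode('MoonLight')    # the two relighting lights; names are placeholders
fill = nuke.toNode('FillLight')
scene = nuke.nodes.Scene()
scene.setInput(0, moon)
scene.setInput(1, fill)

phong   = nuke.nodes.Phong()       # material used to compute the lighting pass
relight = nuke.nodes.Relight()
relight.setInput(0, render)        # color input
relight.setInput(1, scene)         # lights input
relight.setInput(2, camera)        # camera input
relight.setInput(3, phong)         # material input
relight['normal'].setValue('norms')    # assumed knob names for the layer pickers
relight['position'].setValue('ppos')

# The Relight output is a lighting pass, so multiply it back over the original render.
merge = nuke.nodes.Merge2(operation='multiply')
merge.setInput(0, render)          # B input: original RGB render
merge.setInput(1, relight)         # A input: the lighting pass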
03:43Okay, so let's dial in our lighting.
03:45First, let me show you the 3D setup that we have.
03:48I'll go to the 3D view and here's my setup, these are my two lights.
03:54Now what I have done is I've attached them to an axis node, so it makes it easy for
04:00me to adjust them; this is going to swing it around the equator, and this is going
04:04to swing it from pole to pole.
04:06The axis node is only used to make it easier to position the light, that's all.
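Scripted, the parenting looks roughly like this, assuming the light's first input accepts an Axis as its parent transform.

import nuke

# Sketch: parent the moon light to an axis so it is easy to swing around the subject.
axis  = nuke.nodes.Axis2(name='MoonAxis')
light = nuke.toNode('MoonLight')
light.setInput(0, axis)               # the light now follows the axis (input index assumed)

axis['rotate'].setValue([0, 45, 0])   # swing the light around the equator
# axis['rotate'].setValue([30, 0, 0]) # or tip it from pole to pole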
04:10Okay I want to reset that back to default.
04:13Let's go back to our 2D View and see what we got.
04:17So, the MoonAxis node is going to actually reposition the moonlight.
04:21So, I am going to rotate it around the equator.
04:23All right, I am going to rotate it from pole to pole, here you go, I will put
04:30that back to default.
04:32Next I can adjust the characteristic of the moonlight by opening up the
04:36MoonLight node, and for example, I could decrease the intensity, or change the
04:45color, and we will put that back to the original settings.
04:53And finally, we can actually adjust the surface attributes of our CG element by
04:57opening up our materials, in this case the Phong node, and I could, for example,
05:01increase the diffuse and lower the specular.
05:06The Relight Node brings the long awaited capability to do normals relighting in Nuke.
05:11This can save valuable production time by relighting during compositing rather
05:15than rerendering in the CG Department.

