Introduction

Welcome

Hi! This is Steve Wright, welcoming you to my Nuke 7 New Features course. This course is specifically designed for compositors already familiar with Nuke. The Foundry really outdid themselves with this release. It's the largest and most comprehensive new release of Nuke ever, with major technological innovations and tons of exciting new features and tools. Here is just a sample.

First up is deep compositing. With the recent release of EXR 2.0, deep images and deep compositing are now an industry standard, fully supported by Nuke. You'll learn all about this extremely important innovation in this course.

Another major new technology is Alembic geometry, a recent new industry standard. It allows 3D elements to be exchanged between different platforms and offers great efficiencies in workflow, render times, and database sizes compared to FBX files.

Nuke's venerable Tracker node has been completely rewritten and now supports unlimited trackers, keyframe tracking, a snap-to feature, automatic track averaging, and much, much more.

The all-new PointCloudGenerator node has been completely rewritten and now generates high-density point clouds with great accuracy. It also incorporates a truly impressive mesh generator for skinning your point clouds, replacing the old PoissonMesh node.

The truly awesome ModelBuilder node replaces the old modeler node and allows you to quickly create 3D geometry for your clips. You can use this geometry for camera projection and set extension shots.

My personal favorite new 3D power feature is the brand-new DepthToPoints node. It takes a CG image with a depth channel and creates a 3D point cloud that you can use to line up geometry. You will have hours of fun with this new tool.

At last, Nuke can now do normals relighting with the new Relight node. You can relight CG renders in Nuke, changing the light direction, color, intensity, and quality.

You will love the new ZDefocus node, which is a major upgrade to the old ZBlur node. It has major improvements in photorealistic depth blur and workflow, plus extensive new creative control over the look of the bokeh, that is, the look and feel of the out-of-focus elements.

For those members who have purchased the premium subscription, this course comes complete with over 700 MB of exercise files. The exercise files are for your personal use only.

So, join me in my Nuke 7 New Features course and learn about these and many more new features in this latest release of Nuke. And by the way, you can download my free iPhone app that puts all the Nuke hotkeys at your fingertips; just search for NukeHotkeys in the App Store.
1. The RotoPaint Node

Understanding the toolbars

Nuke's Roto and RotoPaint nodes have been completely overhauled, improving both performance and the tools available. Most of the changes affect the Roto node and are carried over to the RotoPaint node, so we really only need to look at the Roto node. By the way, this video assumes you already know the Roto and RotoPaint nodes; if you don't, go back and check out those videos. This video only covers the Nuke 7 new features.

So, to get the new Roto node, just click in the Node Graph and type O as before, hook it up to the Viewer, and we'll come up over here to the toolbars, do a pop-up, and take a look. Here are the new things: the Cusped Bezier and the Cusped Rectangle. Let's see what those are. I'll select the original Bezier, click and drag, click and drag, click and drag, click and drag, Return to close. The first thing you'll notice is they're red. Of course, you can change that over here by clicking on this button and picking a different color, if you wish. Okay, I'll undo that. The second thing you might notice is it didn't fill it in. That's because the output option has now been changed by default to be the alpha channel only. If you wish to have it output RGBA, you can do that, but by default it's back to alpha.
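If you like to set nodes up from the Script Editor, here's a minimal Python sketch of that same default. It assumes the Roto node's channel knob is named "output" and that the channel knob accepts layer names as strings; double-check both against the Property panel in your build.

```python
# Minimal sketch, assuming the Roto node's channel knob is named "output".
import nuke

roto = nuke.createNode("Roto")      # same as pressing O in the Node Graph
print(roto["output"].value())       # Nuke 7 defaults this to alpha only
roto["output"].setValue("rgba")     # switch back to a filled RGBA output if you prefer
```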
So, back to our toolbar. If we come over here and pick the Cusped Bezier, we click and drag, click and drag, click and drag, click and drag, no curves, Return to close. The difference is this one has already got cusps at each point. You might use this to draw around buildings or windows. On this shape over here, if I select a point, I have my control handles that I can mess with, but over here, you do not.

Next, let's take a look at the Rectangle. Again, as before, you can draw a rectangle, but we now have the new Cusped Rectangle, so click and drag that. And again, the difference is that this one is actually a Bezier with the handles preset to make a nice rectangle, but the Cusped Rectangle actually has cusped points, so there are no handles. Of course, you can always change that by turning them into smooth points, and now you get your control handles back.

There is a bunch of new toys up here on the Tool Settings, so let's take a look at those. The Auto Key is still here; it just has a slightly different icon, same function, just a different look. The Feather Link also has a new look. Now remember, the Feather Link is this: if you pull out a feather point like this, by default they are linked together. If you turn off the Feather Link, you can now move the main control point without affecting the feather point. I'll turn that back on. Let me push in a little bit here, all right.

Next up, this button shows the label points, so if I click that, all the labels pop up on each of the control points. I'll turn that back off. This one will hide the curve lines, so if I turn that on, all my curve lines disappear, but any selected control points don't. So, if I click off to the side, it'll clear everything. I'll turn that back off so I can see my splines. Let me select some control points here. This button will show and hide your points. So, if I turn that on, the points disappear. I can still select everything, move it around, transform it, have a good time like that; it's just that the points themselves don't show up. Sometimes that's a very helpful thing to do. We'll turn the points back on.

This button is the hide transform. What that means is, normally when you select two or more points, you get the transform box. If you turn that on, you don't get the transform box. You can still move your points around, but you can't see the box. We'll turn that back off. The button next to it is the hide transform jack while moving. In other words, normally when you are moving, the transform jack is quite visible. If you click this button, the transform jack does not go away until you move it, and then it hides while you're trying to fit things. Very nice.

The next button here is the constant selection; let's see how that works. If I select a shape and then click off to the side, it'll deselect it, so I'll click over here, and it's deselected. However, if I enable that feature, when I select the shape and then I click over here, click over here, click over here, it does not deselect it anymore. If you really, really want to deselect it, just come down here to the Layer list and click in here and it'll deselect everything. I'll turn that back off.

This is the ripple edit, same as before, it just has a shiny new icon, and when you enable it, you have all your ripple edit controls. We'll turn that back off. And these are the same set and delete keyframe buttons. Thank you, Foundry, for all the spiffy new features in the toolbars. Next up, let's take a look at some of the new goodies in the Roto node Property panel.
Exploring new commands and features

I've homed the Viewer and the Node Graph so we can take a look at some new features in the Roto Property panel, plus some great new copy and paste features. First up, the Property panel: the Roto tab's got a little bit of a facelift. You might notice that the format and output mask fields are missing. Not true, they are just hidden here under the twirl-down; there they are, format and output mask. So this just kind of cleans up the Property panel a bit. However, on the Transform tab there are some new toys: we have the skew X and skew Y sliders, plus the skew order. How that works is, you can now do individual skewing in X or Y, and you can change the skew order, which of course changes the look.
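Here's a quick plain-Python sketch of why the skew order matters: skews in X and Y are two different matrices, and matrix multiplication doesn't commute, so applying them in a different order gives a different result. The matrices and values are purely illustrative, not Nuke's internal transform code.

```python
# Why skew order matters: multiplying 2x2 skew matrices by hand.
def mat_mul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

skew_x = [[1.0, 0.5], [0.0, 1.0]]   # illustrative skew X of 0.5
skew_y = [[1.0, 0.0], [0.3, 1.0]]   # illustrative skew Y of 0.3

print(mat_mul(skew_x, skew_y))      # one order...
print(mat_mul(skew_y, skew_x))      # ...the other order: a different matrix, a different look
```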
The Shape, Clone, and Lifetime tabs are unchanged in Nuke 7. Back to the Roto tab. There is a nifty new feature down here in the curves list. Over here in the Life column you can now do a double right-click and it will pop up this frame range control, so we could, for example, set the frame range for this spline to be from 10 to 90 and click OK. You don't have to go over to the Lifetime tab anymore if you don't want to. There is also another interesting new feature added to the lifetime: look at that, the spline actually disappears when it's out of its lifetime range. In the past, the spline didn't go away, it just stopped doing the paint fill. So this is a little more intuitive, don't you think?

Okay, big new features in the copy/paste. If I select this spline, do a right-mouse pop-up, and go to Copy, it's telling me that I have selected one curve. If I select two curves and right-mouse pop-up, Copy, it now says two curves. Deselect. If I select a point here, right-mouse pop-up, Copy, it says I have 1 point on 1 curve selected. If I do two points, it will tell me my copy is for 2 points.

Now let's take a look at these values, animation, and link options; what's that about? All right, to show you that, first I want to take this control point and give it just a little bit of animation over the length of the shot. All right, so that point now animates. Let's push in a little bit so we catch the action. If I select this point and I say copy > 1 point (values), that's copied into the clipboard; then I'll select another point and say paste > 1 point (values), so this point is now coincident with the other point. However, the other point is moving, and this one doesn't move with it. If I want it to move with it, I'll go to my source point, right-mouse pop-up, copy > 1 point (animation). Select the other point, right mouse, paste > 1 point (animation), and now they move together. However, if I wish to reposition the original point like so, the other point doesn't go with it, because it just has a copy of the animation; they are not linked. I want to link them. So I'll go to my point, right-mouse pop-up, copy > single point link. Select the other point, paste > single point link, and now the points are actually linked together. So, no matter how I edit the first point, the other point will follow. You can also paste the point link into a Transform node or CornerPin node, any one you want. I am sure you'll find these new copy features very helpful, and if you work in stereo, you'll really appreciate these next new features.
Improving productivity with the new stereo features

I've restarted Nuke to show you some nifty productivity features for working in stereo with the Roto or RotoPaint nodes. First let's add a Roto node; I want to show you something in the Property panel. So, cursor in the Node Graph, type O to get a Roto node, type 1 to hook it to the Viewer. Now I'm going to float the Property panel down here to show you this, and then we'll open up the Project Settings and click on the Views tab. Now watch this part of the Roto Property panel: when I click on Set up views for stereo, bang, see that? We get these new stereo fields. Okay, we're done with the Project Settings, and I'm going to redock my Roto Property panel into the Properties bin.

Next we'll need a stereo pair, so let's click in the Node Graph and type R, select the Exercise Files folder, and browse to our stereo_pair.exr. Open that, hook it up to our Roto node, and we'll tidy up here. Okay, let's push in with the H key and take a look at our left and right views: here's left, right, left, so this is in fact a stereo pair. Let's push in on this tombstone; I'm going to use that to draw a stereo roto pair. So I'll select Cusped Bezier. Ah, we don't need this Read node's panel anymore, so let's clear that, so we can watch the action over here in the Roto Property panel. First I'll set the views to both left and right, then from the Tool tab I'll select Cusped Bezier. So I'll click, click, click, click and close. And we'll make sure that the views are set for both left and right. I now have two shapes that are coincident, the left and right views, but they're right on top of each other. You can tell that right here, because this Bezier has both the left and right views, but if I toggle the Viewer, nothing happens.

So what I want to do is make my right view. I'll switch the Viewer to look at the right view, come over here to the stereo offset, split off the right, and then adjust the position of the view. Now if I click on the left and right views, you can see my shapes, top, left, and right. And if I deselect, we can see it without the control points.

There's another workflow: if you have a disparity field created by Ocula, you can use that to automatically offset the other view. Now, you don't need to have Ocula; you can have a buddy with Ocula, and he can render the disparity fields for you. So let's take a look at that workflow. I'm going to delete this node and point out that I do in fact have a disparity field here for you. Okay, so let's go back to the RGB view and add a new Roto node, and we'll tell it that we want to use our Cusped Bezier all around here, there, there, there, and Return to close. Now we have both left and right views here, and this one Bezier has the two views, again, coincident on each other. This time we'll use the disparity field we got from Ocula to do the offset automatically. So I'm going to go over here, right-mouse pop-up, and say Correlate points. I'm going to correlate from the left view to the right view; it's tradition that the left view is the hero view, but you can do anything you want. So I'll set that and click OK, and there you can see I have my other shape. I'll deselect here, and again, I can go right, left, right, left, and I have a new shape.

Now you might notice that the shape doesn't quite fit exactly right, so let's try that again using the other option. I'm going to delete the right view and tell it that I need both left and right views again, so now I have left and right views here. And this time I'm going to pop up and say Correlate average. You see, Correlate points puts the control points exactly where the disparity map says, whereas Correlate average does a little smoothing and averaging on the disparity map; sometimes that works a little better.
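Here's a conceptual plain-Python sketch of what the two Correlate modes are doing with the disparity field; it's not Ocula's or Nuke's actual code, just the idea. Correlate points moves each control point by the disparity sampled at that point, while Correlate average smooths the disparity over a small neighbourhood first.

```python
def offset_point(point, disparity):
    """Move a left-view point to the right view using one disparity sample."""
    (x, y), (dx, dy) = point, disparity
    return (x + dx, y + dy)

def averaged_disparity(samples):
    """Average a handful of nearby disparity samples for a gentler offset."""
    n = float(len(samples))
    return (sum(dx for dx, dy in samples) / n, sum(dy for dx, dy in samples) / n)

print(offset_point((812.0, 440.0), (-14.2, 0.6)))                       # exact sample
print(averaged_disparity([(-14.2, 0.6), (-13.8, 0.4), (-14.6, 0.5)]))   # smoothed offset
```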
So we'll select this option here, and again, we're going to correlate from the left view to the right view, click OK, deselect, and switch to the right view, left view, right view, left view. The really big news with the Roto and RotoPaint nodes is the complete rewrite to improve system response time and make smaller Nuke scripts. I'm sure you'll find the new roto shapes and Tool Settings helpful, and if you work in stereo, these new stereo features will definitely speed your shot development along.
2. The Tracker Node

Exploring autotracking upgrades

The Tracker node has been substantially rewritten, with many very cool new features added. At last we now have unlimited trackers, and they're managed in an all-new tracks list. Keyframe tracking has been added, plus a one-click track averaging feature; the snap to markers feature makes it much easier to plant accurate keyframes; and the new export options include the automatic creation of a CornerPin node. What we used to just call tracking is now called auto tracking, to differentiate it from keyframe tracking. So let's take a look at just auto tracking first.

Our exercise file here is Lab Guy; find that folder and load this clip if you want to play along. I am going to use the Tab key search function to go find a Tracker node, hook it into my Read node, and re-home the Viewer. Two immediate differences: the Property panel here no longer has the options and buttons that we've seen before; they've all been moved up here, and there are a lot more of them. There have been some important new features and functions added; we'll be taking a look at those in just a bit.

So, first up, there are three ways to add new trackers to your clip. Over here is the add track button; we turn that on, notice that it turns red, and all I have to do is go click, click, click, click, and I have added new trackers. I'll turn that off. You can also just use the Shift key: hold down the Shift key and click, click, click, click. Okay, there is also the Add Track button, very much like the old one. When I click on Add Track, it drops a new tracker in the old position, and then you have to pick it up and move it to where you want it.

A big new feature is these zoom windows here; this really helps with the precise positioning of things. The little window here shows you your previous position. So watch this: I'll move the big window over to here and watch the little window jump to match it, then I'll move the big window down to there, and the little window jumps to match it. So it gives you kind of a history of where your cursor used to be. Put that back.

Now let's take a look at this tracks list. Every tracker you create is added to this list. Let me make the window a little bigger so I can show you the whole thing. Here is your familiar enable button, and there is of course the name of your track; this is the track X and Y data that you've seen all along. But this is new: here's your offset X and Y, in case you do offset tracking; these are the familiar T, R, S buttons for each tracker; and another new thing, the error column. Selecting tracks is a little different now. So, for example, over here is track 9 sitting all alone. If I select the track 9 track, it lights up and adds the reference box and the search box. If I want to select multiple tracks, I hold down the Command or Ctrl key. I'll select this track here and then do a Shift+click down there, and I get a whole block of them. Okay, I am going to select all and delete those tracks.

Next, let's take a look at the very important new average tracks feature. Let me show you how that works. Make sure our playhead is on frame 1. I'll come up here to this corner, and let's say I would like to average four trackers around this area here. So I'll use the Shift key: Shift+click, add a tracker, Shift+click, add another, Shift+click and Shift+click. I now have four trackers. We'll zoom out a little bit; to track them all, I'll do a Shift+click to pick the whole list, then we'll track forward. Our track forward and reverse buttons are all up here now, same as before, just in a little different location. Okay, so we're tracking all four points over the entire length of the shot. Okay, we're all done now. I'll put the playhead back to frame 1.

Now if I deselect the tracks, notice they all turn a lovely color; each one of the tracks is now a different color. If I want to do an average of these four tracks, it could not be easier: I'll just click the top one, Shift+click on the bottom one to select all the tracks I want to average, and I'll click on average tracks. After months of computation, Nuke comes up with a brand-new track for me; that's the average of those four. We now have a new track here named Average track 1, and of course we can rename these anything we want. And then we still have our other four tracks. Let's push in here. Here is my new Average track 1. I can keep the old tracks, or I could select them and make them all go away, so that I just keep my average track. It's not a link, it's actually baked in, so I can lose the originals. All right!
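If it helps to see what "average tracks" is doing, here's a plain-Python sketch of the idea: for every frame, the new track is just the mean of the selected tracks' positions, baked into a new track with no live link back to the originals. The per-frame lists below are made-up example data.

```python
def average_tracks(tracks):
    """tracks: one list of per-frame (x, y) positions per tracker."""
    averaged = []
    for frame_points in zip(*tracks):
        n = float(len(frame_points))
        averaged.append((sum(p[0] for p in frame_points) / n,
                         sum(p[1] for p in frame_points) / n))
    return averaged

track_a = [(100.0, 200.0), (101.5, 201.0)]
track_b = [(110.0, 210.0), (111.5, 211.0)]
print(average_tracks([track_a, track_b]))   # [(105.0, 205.0), (106.5, 206.0)]
```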
Let's re-home the Viewer and take a look at another interesting new feature, called the snap to markers feature. I am going to delete this guy here and go to the Settings tab, and down here is snap to markers. By default it's off, and you have a choice of blobs or corners; these are the kinds of features the snap-to feature is going to look for. Let's turn that on. I'll do a Shift+click to add a new tracker, switch back to the Tracker tab so I can see the onscreen reference and search boxes, and see this little green circle? Watch what happens when I move the tracker around the screen. The green circle jumps to different landmarks; it has a love of corners, okay. And the other cool thing is, once it's snapped to the corner, if you let go, the tracker will jump to that position, and you can see right here it's magnificently lined up with the center of the corner. These new auto tracking features will really help with your 2D point tracking. Now let's take a look at the all-new keyframe tracking.
Introducing keyframe tracking

Keyframe tracking is used for more difficult tracking targets, due to either complex image content or rapid motion. The idea is to plant periodic tracker keyframes that will guide the tracking calculations from keyframe to keyframe. I have a fresh Tracker node here that I can use to show you how it works. I am going to zoom in here; I am using the Shift key. I am going to plant a tracker right on this box. Notice I've got a keyframe on the timeline. I am going to move the playhead partway through the shot and reposition the tracker, and again, I get a keyframe, and notice I now have two keyframe windows, each labeled with the frame it is on. So let's go forward a little more, reposition, and there's a new keyframe window, and we'll go to the last frame here and reposition there, and there's my fourth keyframe window, again, each neatly labeled with the keyframe it's on in the timeline. Now I can jump between the keyframes by either clicking on the zoom window or coming down here to the timeline and using either the next keyframe or previous keyframe buttons. All right!

So, let's say I'm happy with my keyframes and I'm ready to do keyframe tracking. Unlike auto tracking, where I have to make sure the playhead is on frame 1, or exactly where I want to start tracking, it doesn't matter where the playhead is; you just come up here, click on track all keyframes, and we're done. So I now have tracking data on the entire timeline.

Now here is something very interesting. If I come up here to the delete-all-my-tracking-data button and click, notice that it has left behind all my keyframes. That button does not delete your keyframes unless you have enabled this button, which is the delete-my-keyframes-along-with-the-data button. Turn that back off. Now that we've seen how the key track all button works, let's take a look at the key track current button right here. Here is the idea: my playhead is between these two keyframes, here and there; that's what this button is for. When I click that button, it'll only track between the keyframes that are on either side of my playhead. So I'll click that, and it just tracks between the two keyframes on either side of my playhead. Now that we've seen both automatic and keyframe tracking, we can check out the new menu bar up here.
Using the new menu bar

Many of the buttons in the Tracker Property panel have been moved to the top menu bar, plus it has several new features. Let's check it out. Okay, I need a little more screen space, so let's move this down so we can get a good look at all of our top menu bar controls. This first button here, when it's red, as you saw before, adds tracks: click, click, click, click, turn that off. This is the pop-up menu for setting the rules for when to grab a new reference frame; this used to be on the Property panel. And this is the threshold for the error that will trigger the new grab feature. This button forces the grab of a new pattern. These of course are our familiar track forward and track backward buttons, which are now the auto track buttons. This is the key track all button that we used earlier, and this is the key track current button that we used just a moment ago to track between keyframes.

Re-track on move is enabled by default; let's take a look at what that does. If I have some track data here and I come in and just move one key point, watch the playhead: boom, bang, it re-tracks everything between the keyframes. This button here is create key on move. What that does is, I am going to put the playhead down here for you. Notice there is no keyframe down here on the timeline, but if I move my tracker, boom, I get a keyframe. The auto-tracks-delete-keyframes button means that if you do an auto track — remember earlier we did the keyframe track, I deleted all the data, and it left my keyframes — well, in this case, if I do an auto track like this, it will overwrite all of my keyframes, and by default that button is enabled, so my keyframes are now all gone. These of course are the familiar add a keyframe and remove a keyframe buttons.

This is a kicky new button, the show error on track button. When I turn this on, all of my track points are color-coded: green means they are very good values, yellow means they're a little bit wobbly, and if they turn red, that means they are very wobbly. So this feature gives you a visual cue on which parts of your track are reliable and which parts might be flaky.
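Here's a rough plain-Python sketch of the idea behind that colour coding. The actual error thresholds Nuke uses aren't documented here, so the cut-off values below are purely illustrative.

```python
def error_colour(track_error, good=0.01, wobbly=0.05):
    # Thresholds are guesses for illustration, not Nuke's real values.
    if track_error < good:
        return "green"      # solid track point
    elif track_error < wobbly:
        return "yellow"     # a little bit wobbly
    return "red"            # very wobbly: re-track or add a keyframe here

for err in (0.004, 0.03, 0.2):
    print(err, error_colour(err))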
This is the new position for the center Viewer button, so that centers the Viewer, and this is the new position for the update Viewer button. I am going to turn off that home Viewer button. And again, these are our clear data, clear backward, and clear forward buttons. And don't forget the clear-actions-remove-keyframes button; by default this is not enabled, so you can clear without deleting your keyframes. If you enable this, your keyframes will be wiped out when you clear your data. This is the clear offset button, so if you're doing offset tracking, click this to reset back to normal tracking. This is the track reset button, so if your tracker has been resized, you can just click there to restore it back to the default size and shape. The new Tracker workflow design is to use auto tracking for the easy tracking targets in your clip, then augment that with keyframe tracking for the hard targets. Next, let's see what has been changed in the Settings and Transform tabs.
Understanding updates to the Settings and Transform tabs

The Settings and Transform tabs are still here, but with some changes. The Settings tab is very different, while the Transform tab has hardly changed at all; let's take a look at the Transform tab first. I have put up the Transform tab from the Nuke 6 Tracker so that we can do a straight-across, head-to-head comparison here. At the top there seems to be very little change. We don't see a difference until we get down here to the live-link transform. When enabled, it recalculates the transform if trackers are linked to other nodes and they're moved. Okay, translate, rotate, and scale are all the same. Now here's a new one: skew X and skew Y with the skew order, compared to the old single skew. Like all the transform nodes in Nuke, skew X and Y have now been broken out. The filter and motion blur settings down here are the same as before.

Now let's take a look at the Settings tab, and here is the Settings tab in Nuke 6. Across the top, the track channels are unchanged, but we have a new thing here: the pre-track filter. This pop-up allows you to choose between a couple of pre-processing operations that are applied to the clip before tracking. The default, adjust contrast, increases the contrast a bit; median applies a median filter, and that's good for things like noise and grain. If you enable adjust track for luminance changes, it attempts to compensate for changes in the brightness of the scene lighting. The max iterations, epsilon, and max error are the same as before, but here are some new toys. The clamp super-white, sub-zero footage option puts a clamp on any code values above 1.0 or below 0, which can sometimes spoof the tracker.
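That clamp is simple enough to sketch in plain Python: super-whites above 1.0 and sub-zero values are pinned into the 0 to 1 range before the tracker looks at the pattern.

```python
def clamp_code_value(v):
    return min(max(v, 0.0), 1.0)

print([clamp_code_value(v) for v in (-0.2, 0.5, 1.7)])   # [0.0, 0.5, 1.0]
```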
This one shows the error on your track paths, and this one hides the progress bar. You saw snap to markers earlier, where we demonstrated its snapping to corners. Here are some zoom window controls, and the old grab new pattern section has been put down here into the auto-tracking twirl-down menu. Here is the warp type pop-up menu, moved from its Nuke 6 location down to its new Nuke 7 one, and we have the exact same options. Remember, the warp type is telling the tracker what kind of motion to look for in the clip. So if you're tracking an element that's rotating, for example, you'd want to choose this option before you do your tracking. We have many options for different types of grab behaviors, and depending on which behavior you choose, these parameters will wake up down here. All right, so we'll close the auto-tracking options and twirl down the keyframe tracking options; I'll move this down out of the way. These three options here are duplicates of these three in the menu bar, and these down here are just some keyframe display options. As you can see, the Settings tab has major changes to support the new features in the Nuke 7 Tracker. There has also been a new Export menu added that you'll want to know all about.
Trying out the new export options

The new export options menu is down here at the bottom of the Tracker tab. This pop-up gives you several different choices; it pre-builds nodes for you. The first choice is the CornerPin2D node using whatever current frame the playhead is on, so if I select that, I then have to make sure I choose four and only four tracks, and then I say Create, and there is my CornerPin node. Using Alt+E to show the expression links, it is in fact linked to the original Tracker node. I'll put that over here. I could also choose the CornerPin node that uses the transform reference frame; that would be on the Transform tab, the reference frame right here. Now, both of these are going to be linked nodes; these two options are the exact same type of nodes, only the data is baked in, no links. Here I'm going to create a Transform node that is set to stabilize my data, so I'm going to choose, let's say, this track and this track, and then I say Create, and there is my Transform node set for stabilizing the data. Again, it's linked. The next option is for a linked Transform node that's set for match-move. And these two options here are stabilize and match-move, but again, they're baked and not linked.
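For the curious, here's a hand-rolled sketch of the linked CornerPin export in Python. The CornerPin2D "to1" through "to4" knobs are real, but the expression path into the new tracker ("tracks.1.track_x" and so on) is an assumption; the safest way to get the exact syntax is to use the Export menu itself and read the node it creates with Alt+E.

```python
# A sketch only: the tracker expression path is an assumption, not a documented API.
import nuke

pin = nuke.createNode("CornerPin2D")
for i in range(1, 5):
    pin["to%d" % i].setExpression("Tracker1.tracks.%d.track_x" % i, 0)
    pin["to%d" % i].setExpression("Tracker1.tracks.%d.track_y" % i, 1)
```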
With the addition of so many new features, not to mention the outstanding keyframe tracking capability, the Nuke 7 Tracker node is faster to work with and able to solve even more tracking problems than ever before.
3. Primatte 5

Exploring the new Primatte 5 features

Primatte 5 introduces four powerful new tools that speed up the keying process and improve results by providing new tools to solve old problems. Let's check this out. We'll start over here with Auto-Compute; by the way, all these images are in the Tutorial assets folder. We'll open up our Primatte node, and the first feature we'll look at is Auto-Compute. This is a brand-new algorithm that is very effective; you can get a great key very quickly. First, I'm going to increase the size of our Viewer area so that we can see more pictures. Here we go! Then click the Auto-Compute button, and there's our alpha channel. Wow! It automatically selects the background color and does a cleanup on the foreground and the background. Note that the operation has already been set to clean foreground noise, so it's done all three of these for you automatically. You can now move directly to keying refinements and spill suppression.

We're back to RGB. Let's take a look at the new smart select background color feature: hook this up, open up the new Primatte. This new feature is actually the first operation in the operation stack. To show you how effective this is, I'm going to use the original simple select BG color operation and compare. So we'll select that, come up to our picture, select a region of green screen pixels to set the background color, and then look at the alpha channel. Look at that, all the transparency. Now I'm going to switch to the smart select BG color tool and carefully select exactly the same area. Bang, look at that, the foreground is much more solid. That's because the new smart select background color algorithm actually uses a histogram analysis to separate the foreground from the background, and it also performs a little clean foreground noise operation internally. In fact, all I have to do now is set the clean background noise, a couple of strokes here and there, and we're ready to launch.

Next up, the new adjust lighting feature. This is very cool! How many difficult keys have you tried to solve because the backing region is unevenly lit? Here we have hotspots over here on this side and darker spots over here. Well, the adjust lighting feature actually corrects the green screen backing to give it much more uniform lighting. Let's see how that works. First, we'll select Auto-Compute, and then we'll turn on adjust lighting, and now let's switch to the alpha channel to take a look. I'm going to toggle adjust lighting on and off, and you can see where it's actually cleared out a lot of the haze in the background. The thing I love about this is that it doesn't disturb the edges of your key; there's no degradation, there's no dilate or erode. Beautiful. Re-home that, go back to RGB.

Now, along with the adjust lighting feature there are two new output mode options. First, the adjust lighting foreground: this shows you the green screen after it's been affected. In fact, watch what happens when I toggle adjust lighting on and off: okay, this is the original; this is the corrected, original, corrected. Now there is another feature in the output, the adjust lighting background. What this is doing is it's actually building a clean plate, and it's using that clean plate to help pull the key. This allows you to dial in the clean plate if you wish. We can open up adjust lighting: the threshold slider, as you move it to the right, brings more of the foreground into the clean plate. The adjust lighting algorithm actually divides the picture up into a grid, so if we increase the grid size, we get more fine detail in the grid. Okay, so we'll put those back to default, close that up, and return to our lovely composite.
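To make the grid size and threshold controls a little more concrete, here's a conceptual plain-Python sketch of grid-based lighting correction: measure the average backing brightness in each grid cell, then scale each cell toward a common target so the green screen reads evenly before the key is pulled. This is only the general idea, not Primatte's actual algorithm.

```python
def cell_gains(cell_averages, target=None):
    """cell_averages: backing brightness per grid cell; returns a gain per cell."""
    if target is None:
        target = sum(cell_averages) / float(len(cell_averages))
    return [target / max(avg, 1e-6) for avg in cell_averages]

# Darker cells get a gain above 1, hotspots get a gain below 1.
print(cell_gains([0.35, 0.50, 0.65]))
```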
Next is my personal new favorite, the new hybrid matte feature. Let's open up that Primatte, close this, hook up the Viewer, and check this out. The hybrid matte feature actually creates a core matte inside of the Primatte node; you may never have to use additional keyers again to create your composites, now that you can do it all in Primatte. First, we'll select Auto-Compute, go to the alpha channel, and we can see we have all this transparency in here. The hybrid matte feature is especially useful when the foreground colors are similar to the backing color, like somebody shooting blue jeans on a blue screen; like who would ever do that, right? Watch what happens as I toggle the hybrid render feature on and off: you see, it's adding in the core matte that's created internally in Primatte. Now we can see this core matte: come down to the output mode, pop up hybrid core, and there's your core matte, and now we can adjust that hybrid matte; you can dial in the erode or adjust the blur radius. I'll put those back to default. You can also see the hybrid edge. What this really is, is your original key. So hybrid edge will show you the original key, hybrid core will show you the core matte, and then if you go back to composite, you'll see the two together.
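If you want a feel for how a core matte gets folded into a soft edge key, here's one common way to combine them, sketched per pixel in plain Python: a simple max. This is only a sketch of the concept; Primatte's internal hybrid render math isn't documented here.

```python
def hybrid_alpha(edge, core):
    """edge: the original soft key; core: the eroded, blurred core matte."""
    return max(edge, core)

print(hybrid_alpha(0.4, 1.0))   # transparent foreground area gets filled solid
print(hybrid_alpha(0.4, 0.0))   # outside the core, the soft edge key is untouched
```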
And you can actually just toggle that node on and off to see the effect of it. Outstanding! Primatte 5's powerful new tools and smarter algorithms promise faster and higher quality keys with less cleanup work, and the awesome new hybrid render feature virtually eliminates the need to use multiple keying nodes to create your core mattes.
4. The New MotionBlur Node

Setting up and using MotionBlur

The new MotionBlur is a NukeX node that was lifted from the F_MotionBlur node in the FurnaceCore plugin set. It uses Nuke's motion vector technology to intelligently apply a high-quality motion blur to the moving parts of the clip. It supports GPU processing for much faster rendering, but that requires certain NVIDIA GPUs and CUDA drivers. The images we're using here are in the Tutorial assets. We'll find the MotionBlur node up here in the Filter tab, and there is our MotionBlur node, and we'll hook it up to the source clip there.

Let's make a little more room for our Viewer: hit H to get our maximum Viewer, maybe a little more, okay, let's zoom in a bit and see what we've got. Let's start with the shutter time; here's what this number means. A shutter time of 0.5 is a 180 degree shutter, which is like a normal film camera. A shutter time of 1 is a 360 degree shutter, so you're getting a lot of motion blur, and a shutter time of 2 is going to get you even more. Now, notice that we're getting sort of a double or triple exposure here; that's because we need to increase the shutter samples. Right now we're only getting three samples; you can actually see one, two, three copies of the image. So if we tap that up to 4, it looks better, 5, it looks better, 6, there you go. You turn that up until it smooths out, but keep in mind, the more shutter samples, the more processing time.

Next, let's take a look at the vector detail value. The vector detail increases the amount of detail in the picture, which gives you a higher quality render if you've got a lot of fine detail in your picture. So if you set the vector detail to 1, that means you're going to get a motion vector calculated for every pixel on the screen. If you set it to 0.5, you're going to get a vector for every 2 pixels, and at the default of 0.2, you're getting one vector for every five pixels.
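The arithmetic behind those two sliders is simple enough to sketch in plain Python: shutter time is a fraction of the frame (0.5 equals a 180 degree shutter), and vector detail works out to roughly one motion vector per 1/detail pixels.

```python
def shutter_angle(shutter_time):
    return shutter_time * 360.0

def pixels_per_vector(vector_detail):
    return 1.0 / vector_detail

print(shutter_angle(0.5), shutter_angle(1.0), shutter_angle(2.0))               # 180, 360, 720 degrees
print(pixels_per_vector(1.0), pixels_per_vector(0.5), pixels_per_vector(0.2))   # every 1, 2, 5 pixels
```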
Now let's take a look at what that means. Let's scoot down here, and I'm going to push into this part here. I'm going to set the vector detail down to 0.1, which is 1 vector every 10 pixels. Now I want to make a copy of the MotionBlur node, hook it up to this source, hook my Viewer up to it, and then set the vector detail to 1, maximum resolution. Now as I toggle between the two, you can see the difference in sharpness. It's even more noticeable over here. So the greater the vector detail, the more detail that's kept in your picture, but once again, there is a processing price to pay.

Down here is the matte input. This is the input for a matte that you might draw over your foreground character in order to isolate him from the background. This will prevent the tearing of the background that you sometimes see with motion vector processes. The foreground vectors input here is for when you're using the VectorGenerator node to precalculate the vectors; this is a good idea if you're going to use those vectors in several locations, otherwise the MotionBlur node calculates its own vectors. A kicky thing you can do is to use the VectorGenerator node to calculate vectors from another clip, feed them into this MotionBlur node, and apply them to a different clip; it gets you some very interesting effects.

Let's take a look at a classic setup. Here you have my Viewer; set it to ping-pong, and I'll play this for you. This clip has absolutely no motion blur, so you're getting horrible motion strobing. So we're going to use the MotionBlur node to fix that, there you go. Now let me play that, and you'll see that with the motion blur it looks a whole bunch more natural. Stop that, and you can actually see how the motion blur is changing its direction and its intensity on different frames, depending on the speed and the direction of the target. If you're doing a speed change using Oflow or Kronos, they'll take care of the motion blur as well. The MotionBlur node is for those situations like this, where you have motion strobing in a clip and need to give it a realistic motion blur.
5. The New ZDefocus Node

Setting up and adjusting

ZDefocus is a major upgrade to the old ZBlur node and includes GPU acceleration, as well as an improved algorithm, plus several new features. Considerable effort has been put into improving its handling of edges in occluded areas compared to the old ZBlur node. Although the ZDefocus node is GPU accelerated, for the GPU processing to work it requires certain NVIDIA GPUs and CUDA drivers. We'll start off by looking at ZDefocus on a CG element. If you'd like to play along, you can get ZFighter.exr and ZFighter BG.dpx from the Tutorial assets. Now, the ZFighter.exr file has its own depth channel built in. By the way, critical point: the depth channel must not be anti-aliased; if it is anti-aliased, it will introduce edge artifacts. So make sure your depth channels are not anti-aliased. If your depth channel is anti-aliased, then you can un-premultiply it with the alpha channel to back out the anti-aliasing. Okay, back to RGBA.
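Backing the anti-aliasing out of a premultiplied depth sample is just a per-pixel divide by alpha, essentially what an Unpremult node restricted to depth.Z would do. A tiny sketch:

```python
def unpremult_depth(depth, alpha):
    return depth / alpha if alpha > 0.0 else depth

print(unpremult_depth(0.45, 0.5))   # edge pixel: recovers the true depth of 0.9
print(unpremult_depth(0.90, 1.0))   # interior pixel: unchanged
```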
Now we want to comp this over the background, so we'll select the Read node, type M to get a Merge node, hook that up to the background, and we have a lovely composite. Push in a little bit, take a look at the action. Now, in this Merge node I want to retain my depth layer, which was cut off by the Merge node, so I'm going to tell it to also merge depth; now the depth channel comes out of the output of the Merge node. All right, put that back to RGBA.

Okay, let's add our ZDefocus node. We'll select the Read node and come to the Filter tool tab, click on that, go all the way to the bottom to get the ZDefocus node, and we'll adjust this up to make it look pretty. Okay, let's push in and see what we've got; well, this is not very nice. So our first step is to pick up the focal point and put it where we want the image to be focused, which I'll put right here. Ah, much better. The first thing you want to do is make sure that your depth channel is set correctly; if it's not in depth.Z in your data stream, then use the browser to go get it. So wherever I place this focal point, that's going to be the part of the picture that's in focus. You can move it there, or you can dial it in over here, or you can even attach it to trackers, so you could do a follow focus if you wanted. Depth of field, of course, defines how deep the region of perfect focus is, with things going out of focus on either side of it; we'll come back to that in a minute. Size, of course, is the amount of defocus, so let me push in here: I'll cut the size down, and it gets sharper; I'll jump the size up, and of course I get a lot more defocusing. The maximum slider sets an upper limit for the defocus size; no matter how big the size value, the blur itself will get no larger than the maximum setting. Put that back to default.

Now let's take a look at this math thing. The math pop-up selects the rule for interpreting your depth map. By default the math property is set to far = 0, which is the behavior of Nuke and RenderMan, but other apps have different rules for their depth channel, so you have to choose the right one here; you can look those up in the user guide.

Now let's take a closer look at the output options. Result, of course, is the defocused image. We can also choose the focal plane setup; this is a diagnostic view. It divides the picture into three colored zones: red is in front of the depth of field, blue is behind it, and green is the depth of field. Well, our depth of field was set at 0, so let's tap that up to .1, there we go. So the green part will always be in focus, the red part will get progressively more out of focus towards the camera, and the blue, more out of focus away from the camera. As we move the focal point around, we can of course change where that green zone lands; put it back to here. And if we increase the depth of field value, the depth of field gets larger or smaller, so we can adjust that, and this setup allows you to actually see where it's happening.
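Here's a conceptual plain-Python sketch of that focal plane diagnostic: classify each pixel's depth relative to the focus plane and the depth of field. The zone boundaries and which side counts as "in front" are simplifications for illustration (they really depend on the math setting), not ZDefocus's exact rule.

```python
def focal_zone(depth, focus_plane, depth_of_field):
    half = depth_of_field / 2.0
    if abs(depth - focus_plane) <= half:
        return "green"   # inside the depth of field: always in focus
    elif depth < focus_plane - half:
        return "red"     # one side of the depth of field (toward the camera here)
    return "blue"        # the other side, progressively more defocused

for d in (0.2, 0.5, 0.9):
    print(d, focal_zone(d, focus_plane=0.5, depth_of_field=0.1))
```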
If you wish, you can also dial in the focus plane location right here, but normally you'll use the focus point, because what it does is sample the depth channel wherever you drop it and fill the value in for you. Another output you might find helpful is the layer setup. The way the ZDefocus node works is it actually sorts the image into layers in Z; you can see those layers here, and this allows you to adjust the layer rule. By default, automatic layer spacing is selected, and the ZDefocus node has sorted it into what it thinks is the best number of layers. However, you can turn that off and set it yourself here. If I set it to 5 layers, you can see I only have 5 layers in front of the camera. So I can tap that up to increase the number of layers. The more you increase the layers, the better the quality of the render, but the longer the rendering time. The layer curve control allows you to control the distribution of the layers. If you move this down, it stretches them from the focal point and moves them further away from the camera. If you go in the other direction, it squeezes them; what this does is give you higher quality renders as you get closer to the focal point. And we'll turn our automatic layering back on. Once you have the defocus parameters set, the next step is to dial in the appearance of the defocused parts of the picture; we'll look at that next.
| Adding depth of field to a live action plate| 00:01 | Now we'll see how to adjust the
appearance of the defocus parts of the picture,
| | 00:04 | as well as how to create a depth map
for a live-action plate with no depth
| | 00:09 | channel, so we can add our
own depth of field to it.
| | 00:12 | Now this area down here is to affect the BOKEH.
| | 00:15 | The BOKEH is the brightness and
appearance of the defocused parts of the picture.
| | 00:21 | So let's have our output back to result,
and we'll zoom in, to a part of the
| | 00:26 | picture that has some
nice highlights to play with.
| | 00:29 | Filter type is the shape of the filter;
| | 00:32 | you have disk bladed, which can be a
heptagon, octagon, pentagon or image, in
| | 00:39 | which case we will have an input image.
| | 00:41 | We'll start with the disk;
| | 00:42 | these sliders affect the disk shape.
| | 00:45 | Now this is not actually the shape of
the disk, when the filter shape is set to
| | 00:50 | 1, you're getting a solid disk, if you
slide that down to 0, it becomes a bell
| | 00:55 | curved Gaussian type shape.
| | 00:57 | So you can then slide that back
and forth to pick the best look.
| | 01:02 | The aspect ratio allows you to stretch
in X or Y the overall disk shape. If we
| | 01:08 | choose the bladed filter, you then
have a whole list of different parameters
| | 01:13 | for adjusting the look of the blade, you can
spend an entire afternoon playing with this one.
| | 01:18 | We'll go back to disk, now to show you
the image option, I've created a little
| | 01:23 | shape here with my Roto Node, okay.
| | 01:26 | So I want to just hook that up here,
go back to the composite and we'll zoom
| | 01:32 | back in to our area here to see the
effect of the image, so I'll tell it to now
| | 01:36 | go look at that image input.
| | 01:38 | So I'll select the image input for the
filter type and it will be looking at
| | 01:42 | this image here, so let's
push in a little closer.
| | 01:48 | Now the filter bounds effect the results,
if I set up for shape, it's just going
| | 01:52 | to look at that image within the
bounding box, but if I set it for format, it
| | 01:57 | takes a larger view, and I get these
lovely little X patterns, which is exactly
| | 02:01 | what my little shape is.
| | 02:02 | So this will control the BOKEH shape,
down here we control the BOKEH brightness,
| | 02:08 | if you turn on Gamma Correction, the
ZDefocus node applies a gamma of 2.2 in the
| | 02:13 | image, applies the filter, and then
puts it back to linear, this has the effect
| | 02:17 | obviously of brightening things up for you.
| | 02:20 | The Bloom parameter gives you two sliders,
one, the bloom gain, let me gain down
| | 02:27 | my viewer, so you can see this better.
| | 02:31 | The bloom gain is how much brighter the
blooms get, so if I turn this down, you
| | 02:36 | see they get darker, and I bring it up,
and they get a lot brighter, okay.
| | 02:41 | So we can affect how bright they are with this.
| | 02:43 | I'm going to leave it to a high value.
| | 02:46 | The Bloom Threshold is the cutoff point;
any bloom that's brighter than .8 will get
the bloom gain; anything below that will not.
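In other words, the bloom is a simple threshold-and-gain: only values above the cutoff get boosted. A tiny conceptual sketch of that rule (illustration only, not Nuke's actual implementation):

    # Conceptual only, not Nuke's internal code.
    def bloom(value, bloom_gain=2.0, bloom_threshold=0.8):
        # Pixels brighter than the threshold pick up the bloom gain;
        # everything below the threshold is left alone.
        return value * bloom_gain if value > bloom_threshold else value

    print(bloom(0.9))   # boosted: 1.8
    print(bloom(0.5))   # unchanged: 0.5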
| | 02:54 | So if I lower the threshold, more of
those guys are going to pick up the bloom gain.
| | 02:59 | Alright, I will just turn those off.
| | 03:02 | Reset the viewer gain back to default
and re-home the Viewer, and down here
| | 03:08 | at the bottom of the ZDefocus Properties panel,
the mask and mix parameters are the usual stuff.
| | 03:13 | Now let's take a look at the ZDefocus
node used for some live action work.
| | 03:13 | I will cruise over here and hook up my
Viewer, so if you'd like to play along, you
| | 03:25 | can go get the alley.jpeg
image out of the Tutorial assets.
| | 03:28 | All right, so let's see what we've got
here. I would like to use the ZDefocus
| | 03:34 | node to add a major depth of field to this shot.
| | 03:37 | So what I've done is I've used the
Roto Node to create a synthetic depth
| | 03:41 | channel if you will.
| | 03:42 | I'm going to put the depth Z into the
Viewer's Alpha Channel so you can see it,
| | 03:47 | open up my Roto Node.
| | 03:48 | So I just drew a little rectangle
and pulled out the feathered edges.
| | 03:52 | The key is to put the output into the depth
channel, so that the ZDefocus node can find it.
| | 03:57 | We are done with that and go back to RGB.
| | 04:04 | So let's add our ZDefocus node, we'll
use the tab search function here and type
| | 04:10 | zd, and there it is, okay add that in, first
I'll check that the depth channel is set
| | 04:18 | correctly, okay, that's good.
| | 04:19 | Then I want to move my focal point to here,
because I want the foreground to be in focus.
| | 04:25 | Now this doesn't look right, because I
haven't set the math correctly: because I
| | 04:29 | used the Roto node, my far distance is 1,
so we'll pop up the math and say far
| | 04:36 | is equal to 1, and now we are set up correctly.
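Here is a hedged Python sketch of that setup: a Roto node writing its shape into the depth channel, feeding a ZDefocus node whose depth math is set for a 0-to-1 depth map. The node class 'ZDefocus2', the knob names 'output' and 'math', the dropdown value string, and the file path are all assumptions to verify against your own Nuke 7 build.

    import nuke

    read = nuke.nodes.Read(file='alley.jpeg')    # the live-action plate (path assumed)
    roto = nuke.nodes.Roto(inputs=[read])
    if 'output' in roto.knobs():
        roto['output'].setValue('depth.Z')       # write the shape into the depth channel

    zd = nuke.nodes.ZDefocus2(inputs=[roto])     # class may be plain 'ZDefocus' in some builds
    if 'math' in zd.knobs():
        zd['math'].setValue('far = 1')           # value string assumed from the dropdown label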
| | 04:39 | Next, let's take a look at the focal
plane setup. I have no depth of field, 0,
| | 04:44 | so let's introduce some depth of field,
maybe a little more, more, more, more,
| | 04:48 | okay, and then I'll move the focal
point here to walk the depth of field into this
| | 04:53 | area of the picture, so it'll be in
complete focus, and out here is where
| | 04:58 | I'll get my depth of field effect.
| | 05:01 | So we'll set the output, back to results,
home the viewer, and I'm going to just
| | 05:08 | punch up the size, just so it is
really obvious, there we go, all right.
| | 05:12 | So I'm going to zoom in here.
| | 05:14 | So the foreground is completely in focus,
and as we walk towards the background,
| | 05:18 | it gets progressively out of focus,
which is exactly what I wanted.
| | 05:23 | By the way, if you have any old Nuke
scripts that use the old ZBlur node, not to worry.
| | 05:27 | The Foundry kept the old ZBlur
node here in the basement in Nuke.
| | 05:31 | If you get all nostalgic and you want to
actually use the ZBlur node, you can do
| | 05:35 | that by putting the cursor in the Node
Graph, typing X to get this little command
| | 05:39 | window, making sure it's set for TCL and
not Python, then typing ZBlur, remembering
| | 05:45 | that node names are case sensitive.
| | 05:46 | Now we click OK and there is the old ZBlur node.
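The same thing can be done without the X pop-up: the hidden ZBlur class can still be created from Python (the X window simply runs the equivalent TCL). A quick sketch, assuming the class name is exactly 'ZBlur' as typed in the video:

    import nuke

    # The legacy node is hidden from the menus but the class still exists,
    # so createNode works; remember that class names are case sensitive.
    zblur = nuke.createNode('ZBlur')
    print(zblur.Class())   # should report 'ZBlur'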
| | 05:50 | Nuke 7's new ZDefocus node offers major
improvements in speed, creative control,
| | 05:55 | and ease of use, and is equally
useful, for both CG and live action.
| | Collapse this transcript |
| Setting up and adjusting bokeh| 00:01 | In the previous ZDefocus node tutorial,
we took a quick look at the Bokeh settings.
| | 00:05 | In this tutorial, we'll dive in for a
much closer look to see the amazing amount
| | 00:10 | of control you have to dial
in the look of a lens Bokeh.
| | 00:13 | I'll be using this city lights picture
that has this depth Z channel already
| | 00:17 | built in, to get my ZDefocus node, I'll
just use the tab search, zd, there it is.
| | 00:23 | Ah! All set.
| | 00:28 | And because the image has a depth
channel, I already get a default defocus.
| | 00:33 | We'll be looking at all three of these filter
types right here starting with the disk filter.
| | 00:38 | The first parameter, the filter shape,
determines whether the Bokeh is a hard
| | 00:44 | circle or a soft fuzzy blob, so as
you move towards 0, it becomes just a
| | 00:49 | Gaussian curve, back to 1, a sharp disk.
| | 00:55 | The aspect ratio will squeeze it
vertically or stretch it horizontally, so
| | 01:02 | you're covered, whether you're working
on anamorphic plates or you're working
| | 01:05 | flat and going out to anamorphic.
| | 01:08 | Next, the blade filter type,
this refers to the bladed iris.
| | 01:16 | Again, we have the aspect ratio as
before, back to default, and here is the
number of blades setting; by default you
have five blades, you can see them right
| | 01:25 | there, but we can turn that to
any number of blades we want.
| | 01:29 | I like 5, so I'm going to put that back.
| | 01:32 | Now the roundness is how straight the
edges are, if I go 100% roundness, way up
| | 01:38 | here, it becomes almost a circle.
| | 01:40 | If I go in the opposite
direction, the shape becomes concave.
| | 01:44 | We'll put that back to default,
which is just a bit of roundness.
| | 01:50 | Rotation of course allows you to
rotate the filter, so that you can get any
| | 01:56 | orientation you like.
| | 01:57 | The inner size and inner feather
will show up better if I take the inner
| | 02:05 | brightness down to something like this;
see this dark center, that's what
| | 02:09 | we're talking about.
| | 02:11 | So I can change the inner size to make
it smaller or larger as I wish, and inner
| | 02:16 | feather is how soft it is, here we
go, we'll put that back to default.
| | 02:21 | Now here's an interesting little
toggle right here, the catadioptric feature.
| | 02:28 | Catadioptric lenses use a combination
of both mirrors and lenses, which produce
| | 02:33 | a unique Bokeh with a hole
in the center, check it out.
| | 02:36 | We'll turn this on, there's my hole.
| | 02:39 | You can also adjust of course the size
of the hole by adjusting the catadioptric
| | 02:42 | size, we'll turn that off.
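All of these bladed-iris controls are ordinary knobs, so they can also be set from a script. A rough sketch on the selected ZDefocus node; every knob name in the list is an assumption read off the UI labels, which is why each one is guarded.

    import nuke

    zd = nuke.selectedNode()              # the ZDefocus node from this tutorial
    print(sorted(zd.knobs().keys()))      # confirm the real knob names first

    # Assumed knob names, taken from the labels in the bladed-filter group.
    assumed = [('filter_type', 'bladed'), ('blades', 5), ('roundness', 0.1),
               ('rotation', 0.0), ('inner_size', 0.5), ('inner_feather', 0.2),
               ('catadioptric', True), ('catadioptric_size', 0.3)]
    for name, value in assumed:
        if name in zd.knobs():
            zd[name].setValue(value)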
| | 02:46 | So far we've seen the built in Bokeh
shapes, next, we'll see how to use an image
| | 02:51 | to create a custom Bokeh shape.
| | Collapse this transcript |
| Customizing the bokeh| 00:00 | We have seen the disk and bladed filter
types, so let's see what happens when we
| | 00:05 | supply our own image to
create a custom bokeh shape.
| | 00:07 | Before I select that however, I am
going to show you the images I have.
| | 00:11 | I have this Star Filter here and the
important thing about this is there are
| | 00:16 | really two boxes to be aware of.
| | 00:19 | The outer box out here is
what we call the Format.
| | 00:22 | The inner box here is the
bounding box of the shape.
| | 00:26 | Note that the shape is off center from
the format, it's on the lower left-hand
| | 00:32 | corner, you will see why
this is important in a minute.
| | 00:35 | So I am going to take the filter input
of the ZDefocusNode and hook it up to my
| | 00:39 | Star Filter, switch back to
the ZDefocusNode and zoom in.
| | 00:44 | Now we will switch the filter
type to image, and there you go.
| | 00:48 | Of course, we can change the size a bit,
by adjusting the size and maximum values.
| | 00:54 | So why was I going on about the
format or the bounding box of the shape?
| | 00:59 | That's for right here,
filter bounds. If you say shape,
| | 01:03 | you are telling it that you only want
to use the shape inside the bounding box.
| | 01:07 | However, if you select Format, that
means you want the large outer box, and
| | 01:11 | you can see now the Star bokeh is shifted
down to the lower left, matching the shape's off-center position within the format.
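Scripted, that wiring is just: connect the filter image to the ZDefocus node's filter input, switch the filter type to image, and set the filter bounds to format. A sketch, with the node names, the input index, and the knob and value names as assumptions:

    import nuke

    zd = nuke.toNode('ZDefocus1')          # hypothetical node names; use your own
    star = nuke.toNode('StarFilter')
    zd.setInput(1, star)                   # filter input (index assumed; input 0 is the image)

    for name, value in [('filter_type', 'image'), ('filter_bounds', 'format')]:
        if name in zd.knobs():             # knob names assumed from the UI labels
            zd[name].setValue(value)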
| | 01:20 | Next thing I want to show is
very cool, chromatic aberration.
| | 01:23 | Let me show you my Chromatic Aberration Node.
| | 01:27 | I have hooked up a little tchotchke here
that allows me to dial in the amount of
| | 01:35 | chromatic aberration I want, it's just a
three channel filter and all I am doing
| | 01:39 | is an offset of the RGB values, like so.
| | 01:44 | These offset channels will cause an
offset in the bokeh of the image, so
| | 01:49 | let's check it out.
| | 01:51 | First, we will hook up the filter to
our chromatic aberration and switch the
| | 01:55 | viewer back to ZDefocusNode,
and oops, an error message.
| | 01:59 | We don't need this any more.
| | 02:02 | The error message comes right here,
by default, it's looking for the alpha
| | 02:08 | channel to contain the filter input, and
that's not what we have here, we have a
| | 02:11 | three channel image.
| | 02:12 | So we have to use this.
| | 02:14 | This means, use the same three channels
in that filter input as we are using in
| | 02:18 | the image, which is RGB.
| | 02:19 | So we turn that on and ah, much better.
| | 02:22 | Okay, let's push in and see what it looks like.
| | 02:25 | So you can see all my bokehs now have a
chromatic aberration, Red fringe in the
| | 02:30 | upper, blue in the lower right, in
fact, if I open this guy up again, I can
| | 02:34 | dial it up and down to increase or decrease the
amount of chromatic aberration, very, very cool!
| | 02:41 | Here I will turn it off and toggle that
for you so you can really see the effect.
| | 02:49 | Okay, we're done with this, so I will
close that, go back to the ZDefocusNode.
| | 02:56 | Next, let's take a quick review of the
Gamma Correction and Bloom. Toggle the
| | 03:01 | Gamma Correction on and the
bokeh gets a lot brighter.
| | 03:05 | You have no control over this because
it's doing a Gamma 2.2 change to the
| | 03:09 | bokeh, relative to the image.
| | 03:10 | So we'll turn that off.
| | 03:12 | If you want to dial in
your own control, use Bloom.
| | 03:16 | With the Bloom feature turned on,
these two parameters wake up.
| | 03:19 | The Bloom Gain, of course, allows you
to make it brighter or darker; put that back.
| | 03:25 | As you lower the Bloom Threshold,
darker and darker pixels get bloomed.
| | 03:31 | With these powerful new Bokeh
features the ZDefocusNode can match the most
| | 03:35 | obscure lenses for Visual Effect shots,
or if you're doing Animation, you have
| | 03:40 | complete flexibility to
generate your own creative looks.
| | Collapse this transcript |
|
|
6. The SplineWarp NodeExploring the SplineWarp node| 00:00 | The SplineWarp Node in Nuke 7 has
received several improvements that speed up
| | 00:04 | workflow and improve your command and
control of the work process. Let's take a look.
| | 00:09 | Let's go get a new SplineWarp node, we
will use the tab search, spl, here it is,
| | 00:13 | SplineWarp and we will hook it in.
| | 00:19 | And the first thing you'll notice is
there are some new tools in the top toolbar.
| | 00:23 | Now this row is hide and show for
points and splines and on-screen control
| | 00:28 | jacks, just like the Roto Node.
| | 00:30 | But down here, these are the new source
and source warp buttons, and here are the B side
| | 00:36 | source and source warp, we'll
look at those in another video.
| | 00:39 | But the really good news is there
are some really cool new source and
| | 00:42 | destination spline drawing
techniques, let's take a look.
| | 00:45 | I'll open this up, we'll push in here,
so I am going to select my Spline and
| | 00:52 | draw, click and drag, click and drag,
click and drag, draw, return. Okay, the
| | 00:57 | first method of creating your
destination is to draw your source, right-mouse
| | 01:03 | pop up, duplicate and join.
| | 01:06 | And notice over here, I now have two
Beziers, even though it only looks like one.
| | 01:10 | Notice also, that because they're linked,
you have this ghosty reference here to
| | 01:14 | which one they are linked to.
| | 01:15 | I'll make this easier by naming
this one the source (src), and this
| | 01:20 | one destination (dst).
| | 01:22 | So, now you can see that the src is
linked to the dst shape, and the dst
| | 01:27 | shape is linked to the src.
| | 01:28 | Now this might seem obvious here, but
in a real job these two shapes might be
| | 01:33 | very far apart, so it's terribly
handy to know who they are linked to.
| | 01:36 | So I'll select the destination shape,
come into the Viewer, Command+A or Ctrl+A
| | 01:42 | to select All the control points in the
destination shape, and now, I can size
| | 01:47 | it up, or I can go in here and do
individual control points as I wish. All right!
| | 01:54 | That's Method 1. Draw a shape and
do the duplicate and join command.
| | 01:58 | Method number 2, let's
scoot over here, push in.
| | 02:01 | I'll draw a shape on this eye and
then I'll draw a second shape, so this is
| | 02:10 | your second method.
| | 02:12 | You can draw two shapes and then
join them, and here is the new tool.
| | 02:17 | This is the Join tool right here. We'll
select that, click on the src, and click
| | 02:24 | on the dst, and the new Join tool
will link the source to the destination,
| | 02:30 | change their colors, bright red, pale
red or pink, put them in the list, and
| | 02:35 | identify who they are linked to.
| | 02:37 | We also have a new Preferences, let's
go up to Nuke>Preferences>Viewers, down
| | 02:45 | here draw source stippled, draw
destination stippled, I'll close that and you
| | 02:51 | get this dotted outline.
| | 02:52 | Now you only get the dotted
outline for joined shapes.
| | 02:56 | If I draw a new shape over here,
it's not joined, it's not stippled.
| | 03:01 | So, I'll turn that off, Preferences>
Viewers>Source and Destination Stipple, done.
| | 03:09 | Let's take a look at some
of the new copy commands.
| | 03:12 | I am going to clear all these out by
selecting them and pressing the minus key. I'll re-home
| | 03:17 | the viewer and let's push in a little
bit and I am going to draw one simple
| | 03:24 | shape here, and I'll draw a
second simple shape up here.
| | 03:28 | Okay, and then I'll join them
with our new Join tool. All right!
| | 03:37 | And again, they are marked
as linked right over here.
| | 03:39 | I'll go select the Selection tool, so I
can see my control points. All right!
| | 03:46 | The new copy commands are when you do
the right mouse pop up on a control point,
| | 03:51 | here's your copy command, this now tells
you whether you have one or more curves
| | 03:55 | and one or more points and you get
to choose which you want to copy.
| | 03:59 | If I select 2 points, the copy command now
says you got 1 curve, but you got 2 points.
| | 04:06 | And if I have both shapes selected,
the copy command now says you have two
| | 04:13 | curves, so you get to choose exactly what
you want to do. More control than ever.
| | 04:19 | Another important new feature is the
ability to copy and paste single points
| | 04:23 | between source and destination curves.
| | 04:24 | For example, with my selection tool
enabled, I'll select this point, right mouse
| | 04:30 | pop up, say copy this point value, go
to my destination shape, right mouse pop
| | 04:37 | up, paste the 1 point, and bang, now
they are coincident.
| | 04:43 | Another important new feature is the
ability to select and drag coincident point
| | 04:47 | pairs together like these two.
| | 04:49 | First, you have to have both the shapes
selected of course, now if I select this
| | 04:54 | point, I get the on-screen control jack,
and now the points are moving together.
| | 04:59 | I can adjust this and rotate that.
| | 05:02 | So, this allows you to move the
control points together and keep them coincident.
| | 05:07 | I'll click off to the side to deselect.
| | 05:10 | The bbox dropdown has been
replaced by this crop to format checkbox.
| | 05:18 | Another new feature is the ability to
link trackers to source or destination
| | 05:22 | curves and points independently.
| | 05:24 | So, we'll select point, right-mouse
pop-up, link to a tracker, very nice.
| | 05:32 | For the next new feature, I
need to load a new script.
| | 05:34 | In this SplineWarp node, I set up
several splinewarps, one for the left eye, one for
| | 05:41 | the right eye, and another one for the
face and I can toggle that on and off and
| | 05:45 | you can see the effect of that.
| | 05:48 | First I am going to turn off the Overlays with
the cursor in the Viewer, type O on the keyboard.
| | 05:53 | What I wanted to show you here are
these Warp sliders right here; Root Wrap,
| | 05:57 | Layer Warp, and Pair Warp.
| | 05:58 | Now Layer Wrap and Pair Wrap are ghosted out
and they will be ghosted until you select one.
| | 06:03 | So, let's start with the eyes, I am
going to select the eyes folder, which is
| | 06:07 | what they're calling a layer.
| | 06:08 | Watch what happens when I dial it down,
look at that, I have a slider now for
| | 06:14 | each layer, which can be of
course individually animated.
| | 06:19 | I'll choose the face layer, and again,
dial that down, and I'll put that back.
| | 06:27 | Next, I can choose just the left eye
pair for example, and now the Pair Warp
| | 06:32 | lights up, dial that down, and put that back.
| | 06:36 | And the Root Warp is the slider that
controls all the warps, so I can dial that
| | 06:42 | down, and down, to give me a
global control over everything.
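Because the Root Warp slider is an ordinary animatable knob, that global control can also be driven from Python. A sketch, assuming the SplineWarp node is selected and assuming the knob is called 'root_warp' (confirm with node.knobs(), since the layer and pair warps live inside the shape list rather than as top-level knobs):

    import nuke

    sw = nuke.selectedNode()              # the SplineWarp node
    print(sorted(sw.knobs().keys()))      # confirm the actual knob name

    if 'root_warp' in sw.knobs():         # assumed name for the Root Warp slider
        k = sw['root_warp']
        k.setAnimated()
        k.setValueAt(0.0, 1)              # no warp on frame 1
        k.setValueAt(1.0, 24)             # full warp by frame 24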
| | 06:48 | The new tools and features in the Nuke 7
SplineWarp Node will both speed up your
| | 06:52 | work and give you more control over your warps.
| | Collapse this transcript |
| Warping techniques| 00:00 | Here we will take a look at the
changed workflow when warping with the Nuke 7
| | 00:04 | SplineWarp Node, and see how the
new tools help speed things along.
| | 00:08 | Now this video assumes that you are
already familiar with the Nuke 6 version of
| | 00:11 | the SplineWarp Node.
| | 00:12 | By the way, you'll find this face A
.tif file in our Tutorial assets.
| | 00:17 | We will start by using the tab key to do
a search on SplineWarp and there it is.
| | 00:26 | Now we've already introduced an
overview of the new features in the SplineWarp Node
| | 00:30 | in a previous video.
| | 00:31 | So, here we're going to look at the
workflow of actually doing a warp, a little
more room for my SplineWarp please.
| | 00:39 | So, I'd like to warp this happy looking
guy into kind of an angry purple alien, so
| | 00:45 | let's start by making an angry mouth.
| | 00:47 | So, I'll click and drag, click and
drag, click and drag, click and drag,
| | 00:51 | and return to close;
| | 00:53 | this will be my source warp.
| | 00:55 | I am going to draw a new shape for the
destination warp; again, this is one of the new workflows.
| | 01:00 | So, I'll select the Spline again,
click and drag, click and drag, click and
| | 01:06 | drag, click and drag, click
and drag, return to close.
| | 01:09 | Now I am going to use the new Join tool,
select that, click on the source, and
| | 01:15 | then click on the destination,
and I get an immediate warp.
| | 01:18 | To check out my correspondence lines,
I'll go to the Selection tool, which lights
| | 01:22 | them up, and now I can pick the
correspondence tool and I'll select the modify
| | 01:28 | correspondence point tool, and we
will adjust that here and there and bring
| | 01:35 | this up here, okay.
| | 01:39 | Let's say we like that.
| | 01:39 | I would like to do a little refinement;
| | 01:42 | I'd like this control point to be
coincident with that control point.
| | 01:45 | So, I am going to move in here, using
one of the new features I want to select
| | 01:50 | this point and I'll say copy 1 point
values, select the other point, I'll
| | 01:57 | scoot it up a bit, do a right mouse
pop up, go to paste, 1 point, and now the
| | 02:02 | points are coincident.
| | 02:05 | Now one of the new features is when you
have coincident points, you can in fact
| | 02:09 | control them together.
| | 02:10 | So, I am going to select both shapes and
then select the point and I get this on
| | 02:15 | screen transform jack.
| | 02:17 | Now I can refine the position of both
of them together like so. All right!
| | 02:23 | Now to refine the destination shape, I
am going to turn off the source shape so
I can see it better and maybe
turn off my correspondence lines.
| | 02:31 | And now I will come in here and edit
my destination shape a little bit, all
| | 02:36 | right, I'll turn my source back on and
I can toggle the Overlays off, and then
| | 02:46 | to see the effect come and go, I can
either go to the source image, warp
| | 02:50 | source, or over here the A side, or
warped, or come down here, this is my
| | 02:57 | favorite, go to the SplineWarp Node
itself and use the D-key to toggle it on and
| | 03:01 | off, that's faster.
| | 03:04 | Now let's check this out.
| | 03:07 | To keep my project more organized, I am going
to put in a folder, put these two Beziers in.
| | 03:12 | So, I'll select Root, click plus (+), rename this
mouth, select these two, and drop them
| | 03:20 | in and close it up. Much more organized.
| | 03:22 | Now one thing I have noticed about my
angry mouth is as I toggle it on and off,
| | 03:28 | you see it's deforming the
entire shape of the head.
| | 03:30 | Okay, we don't like that, we just
want frowny mouth, so, I'll turn this back
| | 03:35 | on, go back to the A side, and I'll
draw a new shape around the perimeter to
| | 03:42 | act as a hard boundary.
| | 03:43 | So, click and drag, click and drag,
click and drag, all the way around, because
| | 03:48 | I need to lock the outside of this head.
| | 03:50 | So, let me cruise around here and
very quickly edit my control points.
| | 03:57 | Okay, let's come over here and rename
this Bezier; head, and we're going to
| | 04:01 | turn on the hard boundary feature.
| | 04:05 | Now watch what happens when we switch
back to the A side warped and toggle it on
| | 04:09 | and off, ooh, let me turn off the
Overlays for you, and now we can admire the
| | 04:16 | fact that the head is no
longer deforming, okay cool!
| | 04:19 | Next, we'll see how to give
him an even more angry look.
| | Collapse this transcript |
| Animating a warp| 00:00 | Now that we have the head and mouth
set up, let's move on to giving him an
| | 00:04 | even more angry brow.
| | 00:05 | I will turn my SplineWarp back on.
| | 00:07 | I am going to switch to the A side
unwarped, select my Bezier, come over
| | 00:13 | here, click and drag, click and drag,
click and drag, click and drag, I want
| | 00:18 | an open shape here, so to let it know I am
done drawing, I will just select another tool.
| | 00:24 | I will refine the position of my
points, now that's my source shape.
| | 00:29 | To make my destination shape I am
going to use the new, right mouse
| | 00:33 | pop-up, duplicate and join.
| | 00:36 | Now I can pull out on the destination,
now if I turn on the warp, I can see how
| | 00:42 | much I am warping it.
| | 00:46 | Very nice, so now I will toggle that
on and off, and go yes, that looks nice
| | 00:51 | and angry. Hmmm, but you know what, it's also
deforming the nose and that's just not right.
| | 00:57 | So let's put in a soft boundary to
protect that nose from deforming.
| | 01:00 | I am going to turn the node back on,
switch back to the A side unwarped, select
| | 01:08 | my Bezier tool, scooch in here and a
click and drag, click and drag, click and
| | 01:15 | drag, return to close, refine the shape.
| | 01:22 | Now this is a boundary shape, so I
don't need a source and a destination, but I
| | 01:27 | do need to name this, I am going to
call this one nose, and I am going to set
| | 01:32 | that as a soft boundary.
| | 01:33 | Again, I can turn off my Overlays,
switch to the A side warped, then toggle with
| | 01:41 | SplineWarp Node on and off, and
go yeah, that looks better.
| | 01:44 | Okay, to clean things up a bit, I'm
going to select root and make another
| | 01:48 | folder and I am going to call this one
brow, so I could put these two Beziers in the
| | 01:54 | brow and keep my job much more organized.
| | 01:59 | Okay, let's put in a little animation,
get up my curve editor here, we'll
| | 02:07 | select the brow folder, so that we can
use the new layer warp, and we'll set
| | 02:11 | that as a keyframe on
frame 1, where my playhead is.
| | 02:15 | I want to make that 0, and I want
to do the same thing for the mouth.
| | 02:18 | Set a keyframe on frame 1 and make the
layer warp factor 0. Then, further down the timeline, set the mouth to 1,
| | 02:26 | select the brow layer, and set that to 1.
| | 02:30 | So now we have a little animation, all right.
| | 02:32 | To make the animation look a little more
organic, let's put some accelerations on our speed.
| | 02:44 | So we will select this warp, select that,
add a control point here in the middle
| | 02:52 | and make the brow warp start quickly at
the beginning and slowdown at the end.
| | 02:57 | Next, let's pick the mouth layer,
select his warp animation curve, insert a
| | 03:02 | control point and have him start off slow
at the beginning and accelerate at the end.
| | 03:07 | Okay, that should give our
animation a little character.
| | 03:09 | All right, so let's reposition
everything and jump the playhead to the first frame here. I
| | 03:15 | want to ping-pong the
playback for you, and we'll play.
| | 03:23 | Okay, so we have deformed our purple
alien to make him even more angry than
he was to begin with.
| | 03:28 | These new tools in the SplineWarp Node
will not only speed up your workflow, but
| | 03:33 | also improve your creative
control when warping images.
| | Collapse this transcript |
| Morphing techniques| 00:00 | When morphing two images together, the
first step is to apply a warp to image A
| | 00:05 | that matches it to common points of
image B, such as the eyes, nose, and mouth.
| | 00:10 | Matching image A to image B, takes some
changes to the set up in the workflow,
| | 00:15 | compared to a simple warp.
| | 00:16 | Here we'll see how the new
tools help with that process.
| | 00:19 | By the way, you can find our new face B
in the Tutorial assets and be sure you
| | 00:26 | hook up face A to input A of the
SplineWarp and face B to the B input, because
| | 00:31 | we're going to be warping A to B.
Okay, rehome the Node Graph.
| | 00:38 | First thing we will want to do is setup
a Viewer wipe so we can bounce quickly
| | 00:42 | between our two images.
| | 00:43 | So we'll select Read 2 and type 2 on
the keyboard and that way we can ping pong
| | 00:47 | quickly between the two.
| | 00:48 | However, sometimes we're going to want
to see a wipe, so let's go up to our wipe
| | 00:55 | controls, set it for wipe,
SplineWarp3 on the left, Read 2 on the right.
| | 01:02 | Now we can use our wipe controls,
the fader bar, so we can do a dissolve,
| | 01:07 | sometimes you want to dissolve like this,
sometimes you want to ping pong like
| | 01:11 | that. So we're ready to go either way.
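For reference, the two Viewer inputs can also be connected from Python with nuke.connectViewer; the wipe itself is still toggled in the Viewer as shown here. The input indices below are an assumption (the API is 0-based, while the keyboard shortcuts are 1 and 2):

    import nuke

    splinewarp = nuke.toNode('SplineWarp3')   # node names from this script
    plate_b    = nuke.toNode('Read2')

    # Indices assumed: the first argument is the Viewer input to connect.
    nuke.connectViewer(0, splinewarp)         # what you see when you press 1
    nuke.connectViewer(1, plate_b)            # what you see when you press 2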
| | 01:14 | Now I'm going to turn off the Viewer
wipe, and we'll start by jumping over to the
| | 01:20 | Input 2 and see that what we want to
do is change the general outside shape.
| | 01:25 | This jaw line is very
distinctive. So let's start.
| | 01:29 | We'll select the original A, this is the
unwarped A side and switch our Viewer
| | 01:35 | input to the SplineWarp Node
by typing 1 on the keyboard.
| | 01:38 | Remember this is the A warped and
this is the A original, by the way it's
| | 01:43 | duplicated over here.
| | 01:46 | So, we'll select our Bezier, click and
drag, click and drag, click and drag, click
| | 01:51 | and drag. And draw ourselves a nice
little shape all the way around the perimeter
| | 01:56 | of our guy, so we can get this head
shape to look correct, return to close.
| | 02:02 | Now give me a moment to tighten up my
spline, in the mean time we'll do a speed
| | 02:06 | change, so it won't take very long.
| | 02:19 | Okay, I've drawn the outline around the A side
and notice that the color of the spline is red.
| | 02:24 | Now I'm going to switch over to the B
view and draw a spline around this head,
| | 02:30 | and note that the color of this is blue. So blue is for the B side and red is
| | 02:37 | for the A side. We'll close that, and
once again, I'll tighten up my spline while you
just zip through it.
| | 02:56 | Okay, there is my B side spline. Note
that when I switch to the A unwarped, I
| | 03:02 | see the red and the B, I see the blue.
| | 03:05 | If I want to see both of the splines, we
click on the A, B button, and now I see
| | 03:10 | both of the splines.
| | 03:12 | Now the question is how do I control
whether I am seeing the A or the B side, I
| | 03:17 | can see both of the splines,
but what about the pictures?
| | 03:20 | That's controlled right here, mix.
| | 03:21 | The mix slider is set to 0, means
you're looking at the A side, I'll jump over
| | 03:25 | to here and then up to there, so now I
am looking at the B side and I'll put
| | 03:30 | that back to default.
| | 03:32 | Okay, with both the A and the B
splines drawn, what we have to do is connect
| | 03:36 | them. So we'll go over to the Join tool,
and we're going to connect, click on the
| | 03:44 | A to the B and bang, we
get this lovely deformation.
| | 03:49 | All right, what's going on here is
the correspondence points are a little
| | 03:53 | unhappy. That's an easy fix.
| | 03:56 | We'll click on the Selection tool, so we
can see our correspondence points. Then
| | 04:03 | we'll get the Modify Correspondence
tool, and just get these correspondence
| | 04:07 | points to line up real nice, put them
where you want the control to be. I want
| | 04:16 | to get this jaw just right.
| | 04:20 | Now we can improve how well this fits
by adding some more, so we'll select the
| | 04:23 | Add Correspondence Point tool; add some
points here, maybe over there, how about
| | 04:30 | here, and down there. And again, the
Modify Correspondence tool to tighten them
| | 04:37 | up, maybe a little bit over there.
| | 04:41 | Okay, let's say we like that, so to
see how my morph looks, cursor in the
| | 04:46 | Viewer, type O to turn off the Overlays,
then I can type 1, 2, 1, 2 to see how
| | 04:52 | the shape fits, okay, or turn the
Overlays back on and turn my Viewer wipe
| | 05:00 | controls on, so I can do the fader bar thing.
| | 05:04 | Now if I don't want to see the splines,
I can come up here and turn off the
| | 05:08 | source and destination spline,
now I can just look at the image.
| | 05:12 | If I turn them off with the Overlays,
then I don't get my fader bar.
| | 05:18 | Let's turn off the wipe controls and
let's make a folder. So we'll select Root,
| | 05:23 | click on plus (+), rename this; head, and pick up
these two Beziers, drop them in and fold it up.
| | 05:34 | Next, let's take a look at the mouth.
As I toggle back and forth between the
| | 05:38 | two, you can see that the mouth on the B
side is way different than the A side. So,
| | 05:44 | let's go to the A side, turn off the
deformations, I want to draw the shape on
| | 05:49 | the undeformed image, zoom in, select
my Bezier tool, let's draw, draw, draw,
| | 05:55 | draw, close. A little tidy up here.
| | 06:00 | I want to keep this real close to the lips,
because I want a real tight fit on the B side.
| | 06:10 | Now we'll use a new tool, I'll select
my Bezier, right mouse pop-up and click
| | 06:16 | duplicate in B, and now if I switch to
the B side, there is my spline. It's not
| | 06:23 | joined yet, so, select that, I'll
bring it down here to my destination, and
| | 06:37 | we're going to bring it in nice and
tight and I want to set these corners real
| | 06:41 | close on this mouth right here and
this corner right there, so we get a very
| | 06:46 | tight fit to the edge.
| | 06:52 | Okay, let's say we like that, we'll go
back to the A-B view, so I can see both
| | 06:59 | of my splines, and again, I'll select
the Join tool and I'll click source A to
| | 07:06 | destination B, and now I get a lovely
puckered mouth. I'll turn the source and
| | 07:11 | destination splines back on, so I can
see what I am doing. My correspondence
| | 07:16 | points look pretty darn good, but I am
getting a little bit -- let me do Overlay
| | 07:20 | off here, you see I have got
little wobblies here, Overlay on.
| | 07:23 | Well I can fix that by adding some
correspondence points, so we'll add
| | 07:27 | correspondence tool, click here, click
there, click there, click there, Overlay
| | 07:32 | off, ah, much better.
| | 07:36 | Okay, let's see how well it works with
the other mouth, so I am going to toggle
| | 07:39 | A, B, 1, 2, 1, 2, that looks nice, all
right, I am going to keep that and we'll
| | 07:45 | make a folder for those two.
| | 07:47 | Select Root, click on Plus (+), double-
click, type mouth and pick these two guys
| | 07:53 | up and drop them in the mouth folder
and fold them up for neatness sake.
| | 07:58 | Now there's another way to organize
your shapes list into folders that works
well for morphs.
| | Collapse this transcript |
| Animating a morph| 00:01 | Here's an example of this other approach
where the shapes are reorganized into A
| | 00:04 | side and B side folders. Also to save
time, I have completed most of the shapes
| | 00:09 | and joined them already, but here are
our folders, we have the B side and the A
| | 00:14 | side, and how this was set up, I drew
all the A side first and then I would do
| | 00:19 | the duplicate in B, and then move the
duplicate down here to the B side. I'll
| | 00:26 | show you how that works
by doing the nose for you.
| | 00:28 | Let's do the nose together, make sure I
have selected my Bezier tool, draw my
| | 00:36 | shape, tighten it up a little bit.
| | 00:40 | Okay, so I'm going to rename this nose,
and I'm going to slip it inside the A
| | 00:50 | side folder. Here we are, notice, that
it says it's an A side shape and it has
| | 00:56 | no partner to link to.
| | 00:58 | Okay, now I'm going to switch to the B
side here, and then I'll go back to the
| | 01:04 | nose and I'll say duplicate
this in B, and I get nose1.
| | 01:09 | Notice that it's a B side shape, so
we'll pick that up and put it into the B
| | 01:14 | side folder, there it is.
| | 01:18 | Now I'll adjust it to fit. There, now
to do the join, we'll select the AB morph
| | 01:31 | view, so I can see all my splines
and I'll turn on the correspondence
| | 01:36 | visibility, so I can see what I'm doing.
| | 01:37 | We'll go to the Join tool, click on
the source or the A side, and click on
| | 01:44 | B, the destination side and to make
the correspondence line show up, I'll
| | 01:49 | select the Selection tool.
| | 01:51 | Okay, now we just got to tighten them
up a little bit, so we'll modify our
| | 01:55 | correspondence point, top of nose to
top of nose, center to center, nostril to
| | 01:59 | nostril, and nostril to nostril, and if
I wish to kind of sweeten up the warp a
| | 02:11 | little bit, I can do that right here.
| | 02:13 | Okay, let's say we like that, and now
notice that the nose is an AB shape, as is
| | 02:20 | nose1, also an AB, and this column
shows you which shape they're joined to.
| | 02:26 | All right, let's do a check on our morph,
cursor in the Viewer, type O to turn
| | 02:31 | off the shapes, and let's do a mix of .5,
so we can see half our A side and
| | 02:40 | half of the B side and I
seem to have some issues here.
| | 02:43 | Okay, the A side is protruding here,
I'll go back a little bit, show you the A
| | 02:50 | and then over a little more to the B side,
and then back to about a fifty/fifty mix.
| | 02:54 | All right, so I need to tuck this in,
so I'm going to turn on my Overlays, I'll
| | 02:59 | go get my Add Correspondence Point tool,
come down here and add a correspondence
| | 03:05 | point, there you go, see it has
pulled it right in and then adjust the
| | 03:09 | correspondence point, so it's
straight across. It still has got a little
| | 03:14 | protrusion here, we'll add another
point there and we'll again adjust the
| | 03:17 | correspondence point, so they're
straight across, okay, much better, much
| | 03:21 | finer, I'll zoom out.
| | 03:24 | Now let's check the A warp side here,
what I wanted to call your attention to is
we've got kind of a jaggedy jaw line
and some wobblies up here. So that can be
| | 03:32 | fixed over on the Render tab, where we
turn up the curve resolution to, let's say,
| | 03:37 | 6, and it should smooth that
stuff out. Ah, much better.
| | 03:42 | So you can see the difference that that makes.
| | 03:45 | Okay, the last step is to keyframe some
morph animation, so let's go back to the
| | 03:49 | SplineWarp tab and setup some
animation for the mix and the root warp.
| | 03:54 | The mix is going to be ghosted
out until you set it for AB morph.
| | 03:57 | Okay, make sure our playhead is on
frame 1, I'll set the mix parameter to 0
| | 04:03 | on frame 1, and we'll set a keyframe
there, and same thing for the root warp.
| | 04:08 | We want 0 at the beginning of the shot,
and we need to set a keyframe there as well.
| | 04:16 | Then we'll jump to playhead to the end
of the shot and we'll set the root warp
| | 04:20 | for 1, and the mix for 1.
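Those keyframes can also be set from Python. The 'mix' knob appears right in the panel; the root warp knob name is assumed here, so it's guarded (and this assumes the node is already set to AB morph so that mix is not ghosted):

    import nuke

    sw = nuke.selectedNode()                       # the SplineWarp doing the morph
    first = int(nuke.root()['first_frame'].value())
    last  = int(nuke.root()['last_frame'].value())

    for name in ['mix', 'root_warp']:              # 'root_warp' is an assumed knob name
        if name in sw.knobs():
            k = sw[name]
            k.setAnimated()
            k.setValueAt(0.0, first)               # fully the A side at the start
            k.setValueAt(1.0, last)                # fully the B side at the end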
| | 04:24 | I'll jump the playhead to the beginning
of the clip and play, and there you have
| | 04:37 | it, a lovely morph between two static images.
| | 04:40 | Of course, if your images are moving,
then all the shapes will have to be
| | 04:44 | keyframed like any roto job; we'll stop this.
| | 04:49 | The new tools and workflow layout in
the Nuke 7 SplineWarp Node will be a real
| | 04:53 | boon to your morph production jobs, for
both productivity and artistic control.
| | Collapse this transcript |
|
|
7. Deep CompositingSetting up a deep composite| 00:01 | Deep images and deep compositing are
a major new technological development
| | 00:04 | for visual effects.
| | 00:06 | Used for incredibly complex CG renders
and compositing of Avatar, it's now an
| | 00:11 | industry standard supported
in the release of EXR 2.0.
| | 00:14 | All the images here are in your Deep
Compositing folder of the Exercise Files.
| | 00:20 | The deep nodes are over here in the
Deep Toolbar, and these are specifically
| | 00:25 | required for working with deep images.
| | 00:27 | For example, if we want to read in a
deep image we have to use a DeepRead node.
| | 00:32 | So what are deep images?
| | 00:34 | Deep images have many layers of additional
depth and transparency compared to a regular image.
| | 00:40 | Let's take a look.
| | 00:40 | I'm reading in this deepFalcon image,
which is in the EXR file format.
| | 00:45 | So I have RGB, Alpha, and
I also have a deep layer.
| | 00:51 | Of course deep data is not at all
interesting to look at in the Viewer because the
| | 00:56 | numbers are so huge, so we won't
do that anymore; back to RGBA.
| | 01:00 | But we can see what the deep samples are.
| | 01:03 | We hook up a DeepSample node and open it up.
| | 01:06 | When I move the position indicator
on top of my CG image, you can see all
| | 01:12 | the deep data here.
| | 01:13 | For that one pixel it has all these
different depths, so here's your depth here,
| | 01:19 | and then RGB values, and then a transparency.
| | 01:22 | You have all those
different values for that one pixel.
| | 01:26 | Let's look at another one.
| | 01:27 | Here's another deep image, and again,
it's got the deep layer with it.
| | 01:35 | And if I hook the DeepSample up to this
one, and I'll push in here, and I move
| | 01:40 | my position sample over here, there
you can see all the deep image data.
| | 01:44 | If I move it off, no deep data.
| | 01:50 | So at this point I have two deep images
with their deep data, all I have to do
| | 01:55 | is apply a DeepMerge right here.
| | 01:58 | Back that out for you.
| | 02:01 | The brilliant part about the
DeepMerge is there is no Alpha Channel being
| | 02:06 | used for this composite.
| | 02:08 | Each pixel is sorting itself out with
the element in front or behind, with the
| | 02:12 | correct transparency.
| | 02:14 | Since there are multiple transparency
samples, then the composite edges are very
| | 02:20 | nice, even though we're
using depth for compositing.
| | 02:22 | You know that Depth Z
compositing will get you bad edges.
| | 02:25 | We can now move our DeepSample node to
look at the composite, so now if I shift
| | 02:30 | the position on top of the cattail, I'm
actually seeing the deep data from the
| | 02:36 | cattail here, and then the bird on back.
| | 02:39 | If I move it over here I
just see the falcon deep data.
| | 02:44 | Also in the DeepMerge operation I
now have an Alpha Channel that is the
| | 02:49 | combination of the two layers.
| | 02:52 | This will become very
important in just a minute.
| | 02:54 | I'm going to close my DeepSample Node,
switch back to RGB, to take it down
| | 03:01 | to here, DeepToImage.
| | 03:07 | At this point my data is deep data, but
here I'm converting it to flat data, so
| | 03:12 | I can do a comp over a regular
background, and we can see that comp here.
| | 03:19 | So I had to turn it to a flat image in
order to composite it over this flat background.
| | 03:25 | Deep data works with deep data, but deep
data does not work with flat, so you're
| | 03:29 | converting back and forth between
flat and deep as required for your shot.
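Here is a hedged Python sketch of that first workflow: two DeepRead nodes, a DeepMerge, a DeepToImage to flatten, and then an ordinary Merge over the flat background. The file paths are assumptions, and some Nuke builds use versioned class names for the deep nodes, so adjust as needed.

    import nuke

    falcon  = nuke.nodes.DeepRead(file='deepFalcon.exr')       # paths assumed
    cattail = nuke.nodes.DeepRead(file='deepCattail.exr')

    deep_comp = nuke.nodes.DeepMerge(inputs=[falcon, cattail])  # per-sample depth sorting, no alpha needed
    flat      = nuke.nodes.DeepToImage(inputs=[deep_comp])      # convert back to a flat image

    background = nuke.nodes.Read(file='background.jpg')         # path assumed
    comp = nuke.nodes.Merge2(inputs=[background, flat])         # regular 2D over: B input first, A second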
| | 03:33 | Now, regular 2D nodes
will not work with deep data.
| | 03:36 | Let me show you here.
| | 03:38 | I'm going to bring in a Grade node and
it will not hook in, because the Grade
| | 03:43 | node knows that this is a deep image.
| | 03:46 | However, if I bring it down here, I can
hook it up. No problem I'll delete that.
| | 03:54 | So you see you have to use the deep
nodes with deep images and the regular 2D
| | 03:58 | nodes with flat images.
| | 04:00 | Now, in some workflows the deep
data is separate from the RGBA.
| | 04:04 | Let's take a look at that.
| | 04:09 | Here, this is a regular RGBA
image using the standard read node.
| | 04:14 | It has an Alpha Channel,
but it has no deep channels.
| | 04:20 | Let me put that back to RGB.
| | 04:22 | I'm going to close this.
| | 04:25 | So the deep data comes in
here, in a DeepRead node.
| | 04:30 | The way we get the RGB data and the
deep data together in the same data stream
| | 04:34 | is with the DeepRecolor node,
that's what this node is for.
| | 04:38 | Now, this RGB and Alpha data is now
combined with this deep data in one single
| | 04:44 | image, and there it is.
| | 04:50 | Same thing with the reed render.
| | 04:52 | This is a standard Read node.
| | 04:53 | This is only an RGBA image,
there is no deep data here.
| | 04:58 | Here's my deep data over here for this element.
| | 05:02 | Use the DeepRecolor node to join them together.
| | 05:04 | Notice the deep data comes in on the depth
input and the RGB comes in on the color input.
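The second workflow, where the deep data and the RGBA render arrive separately, is just a DeepRecolor joining the two streams. A sketch; the depth-versus-color input assignment is stated in the video, but the indices used here are still an assumption worth double-checking in the Node Graph.

    import nuke

    deep_data = nuke.nodes.DeepRead(file='falcon_deep.exr')     # deep samples only (path assumed)
    beauty    = nuke.nodes.Read(file='falcon_rgba.exr')         # flat RGBA render (path assumed)

    # DeepRecolor marries the flat color with the deep samples:
    # one input takes the deep data, the other the RGBA color image.
    recolor = nuke.nodes.DeepRecolor(inputs=[deep_data, beauty])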
| | 05:13 | At this point I now have deep data for
my falcon and deep data for the reed, so
| | 05:18 | all I need to do is a
DeepMerge to do the composite.
| | 05:22 | And again, DeepToImage to turn it flat, and
then I can composite it over my background.
| | 05:28 | The results of these two
workflows are identical.
| | 05:31 | Deep compositing solves the edge
artifacts encountered when trying to do Depth Z
| | 05:35 | compositing with regular flat
images that only have one depth channel.
| | 05:39 | While deep images can be very large
file sizes, the main advantage is that
| | 05:45 | compositing with them
saves a lot of rendering time.
| | 05:48 | In the next segment, we'll see why this is so.
| | Collapse this transcript |
| Processing deep images| 00:01 | Nuke 7 supports several of the
typical image processing operations for deep
| | 00:05 | images such as color-correcting,
transforming, cropping, and reformatting.
| | 00:09 | But, you can only use the
Deep nodes with deep images;
| | 00:12 | the ones we find here on our Deep Tab.
| | 00:17 | All the images I am using here you'll
find in the Deep Compositing folder of
| | 00:20 | the Exercise Files.
| | 00:22 | So, let's start with the DeepColorCorrect.
| | 00:25 | We'll push into here.
| | 00:28 | And I am going to put the ground plane
up on the screen, and I'll open up the
| | 00:32 | DeepColorCorrect node.
| | 00:35 | Let me hook my viewer up to that.
| | 00:38 | Notice that it looks exactly
like the FlatColorCorrect node.
| | 00:42 | The only difference is we
have this Masking Tab here.
| | 00:44 | We'll come back to that in a minute.
| | 00:47 | So, I am going to apply Color Correction.
| | 00:49 | I'll set the Saturation to 0.5, and the
Gain, let's make this really blue to
| | 00:55 | match that night city.
| | 00:59 | So now if we look at the Comp, we can
see we have this very, very blue floor.
| | 01:03 | Well, what I want to do is control
the depth of the color correction, so it
| | 01:08 | starts here and then gets more blue
towards the back near the blue city.
| | 01:12 | To do that, let me go to the Masking Tab.
| | 01:16 | The Masking Tab has a trapezoidal
curve editor rather like the Keyer node.
| | 01:23 | For it to take effect, you have to
turn on the limit_z button, but watch what
| | 01:27 | happens, when I turn this on, boom!
| | 01:29 | I lost all my color correction.
| | 01:31 | That's because this is now taking
control and it says the color-correction will
| | 01:35 | only occur between a distance of 0 to 1
in depth, and these objects are hundreds
| | 01:41 | of units away from the camera.
| | 01:42 | So, I am going to have to put in
reasonable numbers here before my
| | 01:46 | color-correction will look right.
| | 01:49 | To do that, I'll open up the
DeepSample node, and I will sample the ground,
| | 01:54 | let's say I want it to start right about here.
| | 01:56 | So, it's 1152, and if I go all the
way to the back, it's around 2023.
| | 02:05 | Okay, so I am going to set the Near
at 1100 and the Far at 2100, there.
| | 02:10 | Now my gradient is being
controlled by the zmap curve.
| | 02:18 | Okay, we're done with that.
| | 02:18 | Now, let's take a look at
the Deep Crop operation.
| | 02:24 | I am going to move my viewer over to
here where we have these deep lampposts.
| | 02:29 | I'll open up the Crop
node and then I'll enable it.
| | 02:33 | And of course, I lost everything.
| | 02:36 | The reason is that this znear and zfar
are set for very small values right near
| | 02:41 | the camera, and they're both enabled.
| | 02:43 | So, if I turn off the 'use' checkboxes for
zfar and znear, I now see my picture.
| | 02:48 | So, we'll adjust the crop for the
part of picture we want to crop, there.
| | 02:56 | There is this very useful option
here to keep outside the bounding box.
| | 02:59 | But here we're going to keep inside.
| | 03:02 | Now, let's take a look at our composite.
| | 03:05 | So, now the composite only has the two lights.
| | 03:08 | But, I can get even cagier than that.
| | 03:10 | So, what I want to do is a crop in Z
where I will crop out one of the light
| | 03:16 | posts and keep the other.
| | 03:17 | So, I'll open up the DeepSample node;
| | 03:21 | cruise around looking for the
depth of this light post, it's 1125.
| | 03:24 | So, I am going to enter
some values of 1100 and 1150.
| | 03:31 | So, I'll turn these on.
| | 03:32 | I'll enter the zfar of
1150 and the znear of 1100.
| | 03:43 | And there, those depth ranges kept
this light post here, and eliminated
| | 03:49 | everything outside of that crop.
| | 03:51 | Of course, we can also invert
that with the 'keep outside zrange'.
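The same idea in Python: turn the z limits on and bracket the lamppost's depth. The 'use' toggles and the znear/zfar knob names are assumptions based on the labels mentioned above, so they are guarded; verify them with node.knobs().

    import nuke

    crop = nuke.selectedNode()                  # the DeepCrop node
    print(sorted(crop.knobs().keys()))          # confirm the real knob names

    # Keep only samples between 1100 and 1150 in depth (the near lamppost).
    for name, value in [('use_znear', True), ('znear', 1100),
                        ('use_zfar', True),  ('zfar', 1150)]:
        if name in crop.knobs():                # names assumed
            crop[name].setValue(value)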
| | 03:55 | Okay, I am going to disable the Crop
node so we can get back to our picture, and
| | 04:01 | clear the property bin, and rehome the viewer.
| | 04:03 | Now let's take a look at DeepReformat.
| | 04:04 | We can see that here on this dancer element.
| | 04:09 | I'll open up DeepReformat, and this
looks very much like the FlatReformat node.
| | 04:15 | So, let's switch the viewer back to the
composite, and watch what happens when
| | 04:19 | we set the Type to Scale for example.
| | 04:21 | It has the box, but
we're going to use the scale.
| | 04:24 | So now I'll inch down the scale factor,
and my element gets smaller, and I inch
| | 04:28 | it up, and it gets bigger, no surprise there.
| | 04:32 | You can also use the flip and flop buttons.
| | 04:36 | Now, let's take a look at the DeepTransform.
| | 04:38 | This works a bit differently than
the flat Transform node. Translate X:
| | 04:43 | I am going to put in a 10 here, and it
does behave rather like you would expect.
| | 04:48 | Okay, I'll move it 10
pixels in X. I will undo that.
| | 04:54 | Here, Y. As I inch the Y up, our
character goes higher off the ground.
| | 04:59 | If I inch Y down, something funny happens.
| | 05:02 | He starts penetrating through the floor,
because the DeepCompositing node knows
| | 05:07 | that he is now below the floor
and he crops him automatically.
| | 05:10 | I will undo that and restore that to default.
| | 05:13 | The Z does something even more interesting.
| | 05:17 | It is not pushing it further away from the
camera lens, it is changing the Depth value.
| | 05:23 | So, if I increase the Depth value, and
push him away from the camera 100, 200,
| | 05:28 | there, he went behind that light post.
| | 05:32 | If I keep going, he starts
penetrating into the ground.
| | 05:35 | Let me go the other way.
| | 05:37 | As I come towards the camera, he now jumps
to this side of the light post. All right!
| | 05:42 | So, we'll undo that, back to default.
| | 05:44 | Z scale is different than the translate
Z. Translate Z repositions it forwards
| | 05:51 | and backward in depth.
| | 05:52 | Z scale actually scales the Depth values.
| | 05:56 | If I set the Z scale to greater than 1,
it moves closer to the camera. Oops!
| | 06:00 | And I will see he popped in front of the post.
| | 06:03 | I'll walk that back.
| | 06:05 | If I set the Z scale to less than 1,
it walks it away from the camera.
| | 06:09 | Now, it's behind the post, and
in fact penetrated the floor.
| | 06:12 | So, we'll put that back to default.
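In a script, that distinction is just two different knobs: translate z shifts the element's depth samples, while z scale multiplies them. The knob names below ('translate' and 'zscale') are assumptions to check against the panel.

    import nuke

    dt = nuke.selectedNode()                    # the DeepTransform node

    if 'translate' in dt.knobs():
        dt['translate'].setValue([0, 0, 200])   # push the dancer 200 units deeper

    if 'zscale' in dt.knobs():                  # knob name assumed
        dt['zscale'].setValue(0.8)              # per the video, less than 1 walks it away from the camera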
| | 06:19 | If you want to write a deep image to
disk, you have to use the DeepWrite node;
| | 06:23 | a couple of things you want to know.
| | 06:25 | If you select the RGBA channels,
you'll get RGBA and all the Deep channels.
| | 06:31 | However, if you select Deep, you'll get the
Alpha channel plus all the deep data but no RGB.
| | 06:38 | And of course, if you want to save a
flat image to disk, you have to use the
| | 06:42 | standard Write node.
| | 06:43 | And of course, you'll need to use the
DeepToImage node in order to make it flat.
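A short sketch of both write paths, with the output paths as assumptions and the channels knob guarded in case its name differs on DeepWrite:

    import nuke

    deep_stream = nuke.selectedNode()                   # the deep branch to write out

    dw = nuke.nodes.DeepWrite(inputs=[deep_stream])
    dw['file'].setValue('/tmp/deep_comp.%04d.exr')      # path assumed
    if 'channels' in dw.knobs():
        dw['channels'].setValue('rgba')                 # rgba writes RGBA plus all the deep channels

    # A flat write still needs DeepToImage first, then a regular Write node.
    flat = nuke.nodes.DeepToImage(inputs=[deep_stream])
    w = nuke.nodes.Write(inputs=[flat], file='/tmp/flat_comp.%04d.exr')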
| | 06:49 | Beyond color-correcting and
repositioning your deep elements, you'll also want
| | 06:52 | to do masking and holdouts.
| | 06:54 | We'll take a look at that next.
| | Collapse this transcript |
| Measuring and viewing deep data| 00:00 | While working with deep images is a very
powerful workflow, it can be hard to visualize.
| | 00:05 | Here, we'll look at several tools that
will help you to navigate your deep terrain.
| | 00:11 | All the images that we're using are in the
Deep Compositing folder of the Exercise Files.
| | 00:15 | Let's start out by taking a look
at how to measure your deep images.
| | 00:19 | We have two ways to do that;
| | 00:21 | the DeepSample node and the DeepGraph.
| | 00:24 | I'll start by hooking up to my
DeepCloud, and open up the DeepSample node
| | 00:28 | which is connected to it.
| | 00:29 | If I move this position point around
the screen, you can see the deep samples
| | 00:34 | printing over here in the Property panel.
| | 00:36 | If I go to a thin area, there are not
very many samples, and if I move over to a
| | 00:42 | thicker area, there are a lot more.
| | 00:44 | So you are actually seeing the
number of layers plus their values.
| | 00:47 | Another way to look at your deep images
is right here hidden away the DeepGraph.
| | 00:54 | I am going to close my DeepSamples.
| | 00:57 | So, I move the cursor over my deep
image, it gives a constant update to the
| | 01:02 | depth and transparency under the cursor.
| | 01:05 | And remember, this is for 1 pixel.
| | 01:08 | So, let me zoom in here, and I'm
going to plant 1 pixel. There it is!
| | 01:13 | This 1 pixel, it goes from
about 92 to about 71 in depth.
| | 01:20 | The Vertical Scale is Opacity.
| | 01:22 | So, this particular pixel
doesn't get very opaque.
| | 01:26 | Let's switch to the Alpha Channel, and
you can see that's pretty transparent.
| | 01:32 | But, if I move my sample over here to
this very opaque part of the picture, you
| | 01:36 | can see that the cumulative
transparency has reached 100% Opacity.
| | 01:40 | Now, I'll close my DeepGraph and
restore the Viewer to a normal state.
| | 01:46 | Another thing to know about the
DeepSample node is you can sample it no matter
| | 01:51 | where you are in the flow graph.
| | 01:53 | I could be here at the final
composite, open up this DeepSample, and as I
| | 01:57 | move the position point around, I'm
getting an update only on this DeepSmoke element.
| | 02:03 | So, it doesn't matter
where the viewer is connected.
| | 02:06 | Next, let's take a look at how to
visualize a point cloud with DeepImages.
| | 02:10 | Now, this is very much like our
DepthToPoints for regular flat images.
| | 02:15 | This is using just the Z channel, plus a camera.
| | 02:17 | So, let's see what happens
when we have a true deep image.
| | 02:20 | I'll connect the Viewer to the DeepSmoke
element and let's clear the Property Bin.
| | 02:29 | This DeepToPoints node is
connected to the smoke layer and a camera.
| | 02:32 | You must have a camera just
like in the DepthToPoints node.
| | 02:36 | The difference is instead of having one
depth, we're going to have lots of them.
| | 02:41 | Open up DeepToPoints, switch to the 3D Viewer,
and now we can see our deep image in 3D space.
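The wiring for that 3D view is simple: DeepToPoints takes the deep image on one input and a camera on the other. A sketch, with the node names hypothetical and the input indices as assumptions:

    import nuke

    smoke  = nuke.toNode('DeepRead1')          # hypothetical node names
    camera = nuke.toNode('Camera1')

    points = nuke.nodes.DeepToPoints()
    points.setInput(0, smoke)                  # deep image input (index assumed)
    points.setInput(1, camera)                 # camera input (index assumed)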
| | 02:48 | We can swing around, look
at it from different angles.
| | 02:51 | You can also get a sense of its position.
| | 02:54 | This particular square is 100.
| | 02:56 | So, that means the back end of my cloud is
about 100, and the front end is at about 25.
| | 03:04 | Not only can you use the DeepToPoints
to visualize your 3D elements, you can
| | 03:07 | use it to align elements like this.
| | 03:10 | Come around here, push in a little bit,
and I am going to open up the
| | 03:16 | DeepToPoints for the jet.
| | 03:17 | And now, you can see the jet embedded in
the cloud in its correct position in 3 space.
| | 03:24 | You can also use the DeepTransform
node as we saw earlier to move it front to
| | 03:29 | rear in Z. Okay, I am moving
it back by 10 units, 20, 30, 40.
| | 03:33 | So, I pushed it way behind the cloud.
| | 03:36 | Now, if I switch to the 2D render right
here, you can see that the jet is now
| | 03:43 | pushed way behind my cloud. Now
I am going to walk it forward.
| | 03:49 | Here it is coming closer and closer.
| | 03:52 | Actually, it's not coming closer to the camera;
| | 03:54 | it's actually pushing its
position inside of the cloud.
| | 03:57 | I can even walk it in
front of the cloud completely.
| | 04:00 | Next, let's take a look at creating holdouts.
| | 04:04 | One of the huge advantages of Deep
Compositing is the ability to create holdouts
| | 04:10 | without rerendering.
| | 04:12 | This is a huge win when working with
very complex CG elements like the jungles
| | 04:16 | of Pandora from Avatar.
| | 04:17 | Let me show you how.
| | 04:19 | I am going to start with my
DeepSmoke element, and I have a DeepJet here.
| | 04:26 | I want to create a
holdout of the jet in the smoke.
| | 04:29 | So, I hook up the DeepHoldout node here.
| | 04:33 | The setup is to connect the main
input to the element you want to have the
| | 04:36 | holdout, and then the holdout input to
the element that's going to perform the
| | 04:40 | holdout, in this case, the jet.
| | 04:42 | So, right now, I have a
cloud with a holdout of the jet.
| | 04:46 | Over here I have a jet
with a holdout of the cloud.
| | 04:50 | Notice the Alpha Channel.
| | 04:52 | So, the holdout is affecting
the transparency; back to RGB.
| | 04:57 | Now, here is a key issue.
| | 05:00 | When you perform a DeepHoldout, the
output of the DeepHoldout node is a flat image.
| | 05:05 | So, notice that my Merge node is not
the DeepMerge, it's the regular FlatMerge,
| | 05:10 | and now I can merge them together.
| | 05:12 | Here is another critical point.
| | 05:14 | I'm going to open up the Merge node.
| | 05:16 | Notice that the operation is Plus.
| | 05:18 | You must use the Plus operation after the
DeepHoldout, and not the default Over operation.
| | 05:23 | The reason is the Over
operation will damage the Alpha Channel.
| | 05:27 | Here, I'll show you.
| | 05:28 | I'll switch it Over.
| | 05:30 | You notice we lost a little transparency.
| | 05:32 | I'll show you the Alpha Channel. See that?
| | 05:35 | That's bad.
| | 05:36 | I'll come back to the operation, put
it back to Plus, and we now have a nice
| | 05:40 | solid Alpha Channel.
| | 05:41 | Viewer back to RGB, and now we have two
flat composited images that we can then
| | 05:49 | composite over our flat sky.
| | 05:50 | When properly done, the DeepHoldout
composite will be visually identical with
| | 05:56 | the DeepMerge composite.
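The same holdout setup can be sketched with Nuke's Python API. This is only an illustration: the file paths are placeholders and the DeepHoldout input order (0 = main, 1 = holdout) is an assumption to confirm on the node itself; the one firm point, as above, is that the Merge after a DeepHoldout must use the plus operation.

    import nuke

    # Assumed file paths, for illustration only.
    smoke = nuke.nodes.DeepRead(file='DeepSmoke.%04d.exr')
    jet   = nuke.nodes.DeepRead(file='DeepJet.%04d.exr')

    # Smoke with the jet held out, and jet with the smoke held out.  The input
    # order (0 = main, 1 = holdout) is an assumption; confirm it on the node.
    smoke_holdout = nuke.nodes.DeepHoldout()
    smoke_holdout.setInput(0, smoke)
    smoke_holdout.setInput(1, jet)

    jet_holdout = nuke.nodes.DeepHoldout()
    jet_holdout.setInput(0, jet)
    jet_holdout.setInput(1, smoke)

    # DeepHoldout outputs flat images, so a regular Merge follows, and the
    # operation must be 'plus', not the default 'over', which damages the alpha.
    merge = nuke.nodes.Merge2(operation='plus')
    merge.setInput(0, smoke_holdout)   # B input
    merge.setInput(1, jet_holdout)     # A input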
| | 05:57 | Now, let's take a look at making our own deep
images which you can do to a limited degree.
| | 06:03 | For this, we'll need the DeepFromImage node.
| | 06:05 | We'll start with this flat jet image right here.
| | 06:08 | Now, it has a depth channel, classic
depth z channel which you can see right
here, but it has no deep data.
| | 06:18 | So, this flat jet will
composite over this flat background.
| | 06:22 | Let me switch my view back to RGBA.
| | 06:24 | So this would be like any ordinary composite.
| | 06:27 | However, if I take my flat jet and
hook it up to a DeepFromImage node, I have
| | 06:34 | now added a deep channel, and we
can see that right here; deep.
| | 06:39 | However, this is the important part,
all we've done is taken that depth z
| | 06:44 | channel, and copied it into the deep channel.
| | 06:47 | So, we have no new information.
| | 06:50 | The result is an image that has one
deep layer exactly like the depth z
| | 06:54 | channel, and you'll have the exact same
compositing results as if you had used the ZMerge node.
| | 06:59 | The difference is you can
now play with deep images.
| | 07:03 | We can put up the DeepSample node.
| | 07:05 | As I move the position point, you can
see I am measuring different depths.
| | 07:08 | And if I open up the DeepGraph, as I
move the cursor over the jet, you can see
| | 07:12 | the DeepGraph reflecting
the depth at each point.
| | 07:15 | But, notice there is only one sample,
and it goes from 0 to 100% white, because
| | 07:20 | this guy has one deep layer instead
of multiple deep layers like the cloud.
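A quick scripted version of that DeepFromImage setup, for reference; the file path is a placeholder and the image is assumed to carry a classic depth.Z channel.

    import nuke

    # Assumed file path; the render must carry a depth.Z channel.
    flat_jet = nuke.nodes.Read(file='Jet.%04d.exr')

    # Copies depth.Z into a single deep sample per pixel -- no new information,
    # but the result can now feed DeepMerge, DeepSample, DeepTransform, and so on.
    deep_jet = nuke.nodes.DeepFromImage()
    deep_jet.setInput(0, flat_jet)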
| | 07:25 | Deep Compositing is the big new
technology in compositing visual effects.
| | 07:29 | Nuke is an industry leader
in supporting this technology.
| | 07:32 | So, mastering Deep Compositing is
a great way to future-proof your
| | 07:36 | compositing career.
8. The VectorGenerator NodeUnderstanding the setup and operation| 00:01 | The VectorGenerator produces both
forward and backward vectors that can then be
| | 00:05 | piped to nodes that use vector fields
such as Kronos and MotionBlur which we'll
| | 00:09 | be looking at shortly.
| | 00:11 | The VectorGenerator supports GPU
processing for much faster rendering, but that
| | 00:16 | requires certain NVIDIA GPUs and CUDA drivers.
| | 00:20 | The clip I'm using here
is in the tutorial assets.
| | 00:25 | We'll find our VectorGenerator node
after we select the Read node and go up to
| | 00:30 | the Time Tab, and it will be
right down here at the bottom.
| | 00:33 | As soon as you hook it in, it's
actually rendered our vector fields.
| | 00:38 | We can see them up here. I'll pop this up.
| | 00:40 | And we actually get three
different vector fields.
| | 00:43 | The forward vector field;
| | 00:46 | the U and V data is put in the red,
and the green channels. There you go!
| | 00:51 | Now, this is the vector field required to
take the next frame and morph it into this frame.
| | 00:58 | So it's like looking forward in time
to move that frame back to this one.
| | 01:03 | The next vector field is the backward
vector field, and again, the horizontal
| | 01:09 | values are in the red, and the
vertical values in the green.
| | 01:13 | This is the vector field that will take
the frame behind the current frame and
| | 01:17 | move it forward in time.
| | 01:20 | The third is the motion vectors.
| | 01:23 | This is all of them combined
into a single four-channel image.
| | 01:26 | We can see that in the red, and the
green, and the blue, and the alpha.
| | 01:32 | So, depending on how you like it bundled,
you pick which one you want to work with.
| | 01:38 | We'll work with the forward motion vectors.
| | 01:40 | Okay, let's take a look at what these motion
vector values look like. I'll zoom in here.
| | 01:46 | I'm going to sample a pixel value here, and
we can look at it down here below the Viewer.
| | 01:52 | This says it's 9.8 pixels
horizontally, and 1.2 pixels vertically.
| | 01:58 | So, this is not a picture;
it's data about the picture.
| | 02:01 | Let me sample another spot here.
| | 02:05 | Here, it's 12 pixels
horizontally and -1.3 pixels vertically.
| | 02:09 | In other words, there are negative
code values in here. There you go!
| | 02:13 | But, in the Viewer, of
course, they show up as black.
| | 02:16 | We'll go back here.
| | 02:17 | We can also take a look at the Red
Channel, and by gaining down the Viewer, we
| | 02:24 | can see the motion vector values here.
| | 02:27 | I'll put the Viewer back and set it back to RGB.
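To make those sampled numbers concrete, here is a tiny back-of-the-envelope illustration; the exact direction convention is an assumption here, but the point is that the two channels are pixel offsets, not colors, and they can legitimately be negative.

    # Values read off the sampler under the Viewer, plus a hypothetical pixel.
    u, v = 9.8, 1.2
    x, y = 850, 420
    # Assumed convention: the vector says where that feature lands one frame over.
    print((x + u, y + v))   # (859.8, 421.2)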
| | 02:29 | Now, let's take a look at the Property Panel.
| | 02:32 | Over here we have a
couple of useful adjustments.
| | 02:34 | The Vector Detail sets how fine a
level of detail in the picture we're going to
| | 02:38 | create vector fields for.
| | 02:39 | Now the way it works is, a Vector
Detail of 0.2 means the image will be scaled
| | 02:44 | down to 1/5th of its size.
| | 02:46 | So, you're going to get 1
vector for every 5 pixels.
| | 02:50 | If we set this to 0.5, the image is
being scaled down to half, so you now have 1
| | 02:55 | vector for every 2 pixels.
| | 02:57 | And of course if we set it for 1,
highest possible resolution, we're going to
| | 03:01 | have 1 vector per pixel.
| | 03:03 | However, you normally don't want
that because that'll be too noisy.
| | 03:07 | Let's set it back to a more moderate value, of 0.5.
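The Vector Detail arithmetic above works out like this; it is only a rough rule of thumb taken from the explanation in the video.

    # Vector Detail scales the image before analysis, so the spacing between
    # vectors is roughly 1 / detail pixels.
    for detail in (0.2, 0.5, 1.0):
        print(detail, 1.0 / detail)
    # 0.2 -> about 1 vector every 5 pixels
    # 0.5 -> about 1 vector every 2 pixels
    # 1.0 -> 1 vector per pixel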
| | 03:13 | Next, the Smoothness parameter;
| | 03:16 | with motion vectors, you have to do a
tradeoff on the amount of detail versus
| | 03:20 | the amount of noise or chatter in the data.
| | 03:22 | So, this is where that knob is.
| | 03:24 | If I turn this up to a high value,
that means we're going to get less
| | 03:28 | detail, but less noise.
| | 03:29 | Set it down to a low value;
| | 03:32 | more detail, but more noise.
| | 03:35 | And don't forget, when you increase your
vector detail, you are also going to be
| | 03:40 | increasing your processing time.
| | 03:42 | Now, one thing you can do to help the
process is to hook up a matte input here.
| | 03:47 | When you do, the first thing you'll
have to do is tell the VectorGenerator node
| | 03:52 | where to look for the matte.
| | 03:54 | So, you can tell it it's on the Alpha
Channel of the source image or the matte
| | 03:57 | input, wherever you've stuck it.
| | 03:59 | If you have a matte hooked up, and
only if you have a matte hooked up, then
| | 04:03 | these options become available.
| | 04:05 | As in this example here, the
Matte is normally set to isolate the
| | 04:09 | foreground character.
| | 04:10 | So, if you select the Foreground,
you are going to get motion vectors for
| | 04:13 | only the character.
| | 04:14 | If you select Background, it's going
to put out motion vectors for the black
| | 04:18 | part of the Matte, in other
words, background in the picture.
| | 04:22 | Let's twirl down the
Advanced Tab and see what we've got.
| | 04:25 | The Flicker Compensation;
| | 04:26 | if you turn that on, it's going to
compensate for any dancing lights like maybe
| | 04:31 | caustics or rain falling on the sidewalk.
| | 04:33 | And the Tolerance Tab, this is where
you control the equation that calculates
| | 04:38 | the luminance image.
| | 04:40 | The VectorGenerator analyzes a luminance
version of the image, not a red, green, and blue.
| | 04:45 | This is the equation that is used to
create the luminance version, and by
| | 04:48 | default, it will be 0.3 Red, 0.6 Green,
and 0.1 Blue which is appropriate for
| | 04:53 | normal scene content.
| | 04:55 | But, what if you had a very, very blue picture?
| | 04:58 | Well, you'd want to turn up the
Blue, and turn down the Green and the
Red, because there's very little picture
information there and most of it's in the Blue.
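Here is the luminance equation as a tiny worked example; the default weights come from the video, while the blue-heavy weights are just one plausible rebalancing for a very blue shot.

    # Default luminance weighting used for the motion analysis (per the video):
    # luma = 0.3*R + 0.6*G + 0.1*B.
    def luma(r, g, b, wr=0.3, wg=0.6, wb=0.1):
        return wr * r + wg * g + wb * b

    print(luma(0.1, 0.1, 0.9))                          # default weights: 0.18
    print(luma(0.1, 0.1, 0.9, wr=0.1, wg=0.2, wb=0.7))  # blue-heavy weights: 0.66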
| | 05:08 | Use the VectorGenerator node when you have
two or more nodes that require vector fields.
| | 05:12 | So, the vectors are only
calculated once but used multiple times.
| | 05:17 | This will reduce your render times
and speed up your shot development.
9. Kronos Optical Flow RetimerRetiming a shot with optical flow| 00:01 | The Nuke 7 Kronos is an optical flow-
based re-timer which you find over here on
| | 00:07 | the Time tab, with an
improved algorithm and GPU rendering;
| | 00:11 | it's based on F_Kronos, previously
available only with the Furnace plug-in set,
| | 00:16 | but now included with NukeX.
| | 00:19 | Kronos supports GPU processing for
much faster rendering, but that requires
| | 00:23 | certain NVIDIA GPUs and CUDA drivers.
| | 00:27 | By the way, all the images that we're
using here, you'll find in our tutorial assets.
| | 00:32 | Let's open up this first Kronos Node
here and I have set the speed for a very
| | 00:36 | slow .1 to show you what
happens, we'll play this.
| | 00:42 | You notice we're getting all this
background pulling here and around the legs
| | 00:46 | and especially on the ground, in fact,
virtually everywhere around it and we're
| | 00:50 | using all default settings right here,
Vector Detail of 0.2 and Smoothness of 0.5.
| | 00:55 | We'll stop this, jump back to
frame 1 and let's see what happens if we
| | 01:02 | increase the Vector Detail to 0.5, and
we'll play this. Okay, we've made
| | 01:09 | our situation better, we're not
pulling quite so much in the background here,
| | 01:13 | but we still have some pulling all the way
around, and of course, down here on the ground.
| | 01:17 | We'll stop this and let's
try a higher vector detail.
| | 01:21 | The Vector Detail Setting refers to
how many vectors per pixel, if you have a
| | 01:25 | Vector Detail setting of 0.5 that
means the image is scaled down to half
| | 01:29 | resolution and you get 1 vector for
every 2 pixels. If the vector detail is 1.0,
| | 01:35 | then you have a vector for every single pixel.
| | 01:38 | However, you have increased your
rendering time. Right, so let's play this
| | 01:42 | setting and we're even better now than
we were at the vector detail of 0.5, but
| | 01:48 | we still have an awful
lot of background pulling.
| | 01:50 | So what can be done about this?
| | 01:52 | Well, the answer is to put in a
Matte, to mask off the foreground area.
| | 01:57 | Let's take a look at that.
| | 01:58 | We'll switch back to the Node Graph,
happen to have a Matte right here, we
| | 02:04 | will hook that up to the Matte input.
| | 02:06 | Go back to our Property panel. We need
to tell Kronos that we have a Matte, so
| | 02:13 | we go to the Matte channel setting, pop
that up, and tell it where to look for it;
| | 02:17 | it's in the matte input, the
Luminance right here, so I'll use that.
| | 02:21 | Now, let's check out the results.
| | 02:22 | We'll play this, ah; much better.
| | 02:26 | We now have no background
pulling by using the mask input.
| | 02:29 | Stop that, jump to frame 1.
| | 02:33 | Now, let's take a look at this
Smoothness setting and see what it does for us.
| | 02:37 | We'll go back to the Node Graph and scoot
over here to another setup that I have for you,
| | 02:42 | and we'll switch to this Viewer.
| | 02:44 | I'll open the Property panel of
Kronos 3, to show you the Overlay Vectors; right
| | 02:49 | here is where you'll find them if you
twirl down the Advanced tab.
| | 02:53 | So when I turn those on,
this shows you the Motion Vectors.
| | 02:56 | We'll go back to the Node Graph
here where I have two Kronos Nodes set up
| | 03:01 | identically, except for this
Smoothness setting, this one is set for high
| | 03:04 | smoothness and this one is for low.
| | 03:06 | Let's see the effect.
| | 03:09 | I selected the Kronos Node
with the Smoothness set Low;
| | 03:13 | you could see how the vectors
have these little curls to them.
| | 03:15 | We will switch to the high
smoothness and it smoothes them out.
| | 03:20 | You can think of the smoothness
parameter as running a comb through the vectors
| | 03:25 | and smoothing them like
you were combing your hair.
| | 03:27 | See the difference?
| | 03:29 | Smoothness high, smoothness low.
| | 03:32 | With smoothness set to high, it reduces
the little wavies and jaggies, you might
| | 03:36 | see along the edge of your foreground,
but it also will lose fine detail.
| | 03:40 | So again, it's a balancing act.
| | 03:43 | The default of 0.5 works well on most shots.
| | 03:47 | Let's go back to the Property
panel so we can talk about the output
| | 03:50 | setting right here.
| | 03:51 | You have four options; by default,
the node's Output is Result, but you can
| | 03:56 | choose the re-timed Matte, the re-timed
Foreground alone, or the re-timed Background alone.
| | 04:04 | In addition to performing a high-
quality speed change on a shot, Kronos can
| | 04:08 | also add Motion Blur;
| | 04:10 | let's take a look at that next.
| Managing motion blur| 00:00 | If a shot has been sped up, the
original Motion Blur will not be sufficient to
| | 00:04 | avoid motion strobing.
| | 00:06 | Here we'll see the controls for
adding Motion Blur to a shot using Kronos.
| | 00:10 | First, we'll select the Read Node,
go to the Time tab and get Kronos.
| | 00:18 | The default Speed is 0.5, slowing it down.
| | 00:20 | So let's go to a Speed of 2 in order to speed it up.
| | 00:24 | I need to set some In and Out points,
because now my clip is only half as long,
| | 00:28 | so I only have 12 frames to work with.
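Back-of-the-envelope arithmetic for that shortened clip; the 24-frame source length below is an assumption for illustration, and the mapping is only a sketch of the idea, not Kronos's actual resampling.

    speed = 2.0
    source_length = 24                          # assumed source length
    output_length = int(source_length / speed)  # 12 frames to work with

    def source_frame(out_frame):
        # roughly which input frame an output frame reads from at constant speed
        return 1 + (out_frame - 1) * speed

    print(output_length, source_frame(6))       # 12 11.0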
| | 00:30 | So let's play that and see what happens.
| | 00:32 | Okay, we have terrible motion strobing,
because it's moving way too fast for the
| | 00:36 | original shutter timing.
| | 00:37 | So let's see how we can fix that.
| | 00:39 | We'll twirl down the shutter menu and we'll
find three things in here that will help us.
| | 00:44 | The first thing we would do, is set the
Automatic Shutter Timing, here you're
| | 00:48 | telling Kronos that you want it to
figure out the appropriate shutter time,
| | 00:52 | unfortunately, nothing has happened.
| | 00:55 | Let's zoom in a little bit.
| | 00:56 | The reason is we only have 1 shutter sample.
| | 00:59 | So I am going to increase that to 2.
| | 01:02 | Aha, now we have a double exposure.
| | 01:04 | So we are going to walk this number
up to 3 and then to 4, until we get a
| | 01:08 | nice smooth motion blur.
| | 01:10 | I'm going to undo that.
| | 01:13 | Let's say we don't use the automatic
shutter time, let's say we want to set the
| | 01:17 | shutter time to 2, again, we don't see
any motion blurring until we take our shutter
| | 01:22 | samples up to 2, 3, 4, 5, maybe 6.
| | 01:28 | So again, you have to increase the shutter
samples in order to smooth out the motion blur.
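As a conceptual sketch of why more shutter samples smooth the blur (this is not Kronos's internal algorithm, just an illustration of the idea): the blur is built by blending retimed sub-frame samples spread across the shutter interval, so two samples read as a double exposure and more samples blend into a smooth streak.

    def sample_offsets(shutter_time, samples):
        # evenly spaced sample offsets across the shutter interval, as an illustration
        if samples == 1:
            return [0.0]
        return [shutter_time * i / (samples - 1) for i in range(samples)]

    print(sample_offsets(2.0, 2))   # [0.0, 2.0] -- reads as a double exposure
    print(sample_offsets(2.0, 6))   # six evenly spaced offsets -- much smoother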
| | 01:32 | Okay, I am going to re-home the viewer,
let's play that and see how it looks.
| | 01:36 | Very nice, let's take a look
at the Advanced twirl down tab;
| | 01:42 | this is the Flicker Compensation option.
| | 01:45 | This is for a situation in a shot where
you have rapidly changing little lights,
| | 01:49 | like maybe caustics or
rain falling on the pavement.
| | 01:52 | It modifies the motion compensation
algorithm to compensate for the flickering lights.
| | 01:57 | Let's take a look at Tolerances.
| | 01:59 | This is the weighting of the red, green, and
blue values that are used to create the
| | 02:03 | luminance version of the image, which is what
Kronos uses to do all of its motion estimation.
| | 02:08 | It does not do it on the red,
green and blue channels.
| | 02:11 | These values are appropriate for normal
image content, but supposing that you had
| | 02:16 | a shot that was really very, very blue,
it had lots of blue information, but
| | 02:20 | very little red and green.
| | 02:21 | So for that, you'd want to dial
up your blue record and drop down the
| | 02:25 | red and green values, so that the algorithm
would have a lot of blue data to work with.
| | 02:31 | Kronos is the latest state-of-the-art re-
timing technology from the Foundry that
| | 02:35 | you can use to perform high
quality speed changes on your clips.
10. TimeClip Node FeaturesSetting up and operating| 00:01 | The TimeClipNode is new to Nuke 7
and collects a wide variety of timing
| | 00:05 | controls into a single node.
| | 00:07 | The key to the TimeClipNode is that
it shifts the timing of not just the
| | 00:11 | source clip, but all of the nodes in the stack
above it, and it appears in the dope sheet.
| | 00:15 | Now let me show you this, I am going to
select the Read node, we will go to Time
| | 00:20 | tab and add a TimeClip directly to the
Read node, notice that it fills in the
| | 00:26 | frame range of 1 to 100.
| | 00:28 | However, if I select my stack and I add
the same TimeClipNode, it has not filled
| | 00:37 | in the frame range,
because it doesn't read the clip.
| | 00:40 | So you have to tell it what the
last frame is, in this case, 100.
| | 00:44 | What I have here is a simple clip that
has frame numbers in it, makes it easy
| | 00:48 | to see all this TimeClip stuff.
| | 00:50 | First feature is Fade In and Fade Out.
| | 00:52 | So if I add a 5 to the head and 10 to
the fade out at the tail, I am going to
| | 00:59 | get a 5 frame Fade In and 10 frame
Fade Out, very nice. Let's undo those.
| | 01:06 | If I want to read in frame 10 to 90 of the
clip, that's what the Frame Range is for.
| | 01:11 | Now the Frame Range setting
here is identical to the Read node.
| | 01:14 | I'll set the Frame in at 10,
and the frame out at 90 and these
| | 01:22 | before-and-after features are
exactly the same as the Read node.
| | 01:25 | So now if I go past frame 90, the
effect after is all black, and same thing, if
| | 01:35 | I go ahead of frame 10, I also get black.
| | 01:37 | We'll undo those.
| | 01:40 | So while the Read node offers the exact
same controls, the key is the TimeClip
| | 01:45 | will take the entire stack of nodes
and this is especially important if you
| | 01:48 | want to shift the timing of the clip and rotos
for example, because all the animation will go with it.
| | 01:53 | I'll set the playhead to frame 1 and
we'll click on the Reverse button, and
| | 01:59 | that just plays the clip
backwards. I'll undo that.
| | 02:05 | Now the frame pop-up menu is again
exactly like the Read node, the only
| | 02:08 | difference is which one is the default.
| | 02:10 | If you want to do a slip sync of your
clip, you can select the offset, and
| | 02:14 | set that for like 10.
| | 02:18 | So playhead is on frame 1 and what we're saying here is I want the frame 1 of
| | 02:23 | the clip to be offset to the timeline
by 10 frames, that's why we are seeing
| | 02:27 | frame 11 of the clip.
| | 02:29 | Now if you are just going to do a
simple frame offset, then you might look at
| | 02:34 | the TimeOffset Node instead, for this reason.
| | 02:36 | Here is the TimeOffset Node.
| | 02:39 | TimeOffset Node offers the advantage
that you can use the cursor to walk the
| | 02:43 | clip up and down on the timeline like this.
| | 02:45 | You cannot do that with the TimeClipNode offset.
| | 02:48 | Important point, the time offset
value of -10 is the opposite of the
| | 02:52 | TimeClip's offset of +10.
| | 02:55 | Two other important differences
between TimeOffset and TimeClip.
| | 02:58 | TimeOffset will shift the timing of your
3D geometry, whereas, TimeClip will not.
| | 03:04 | But the TimeClip Node appears in the Dope sheet,
whereas, TimeOffset doesn't. I'll close that.
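A small sketch of the two frame lookups as described here; the sign conventions are inferred from this video, so treat them as assumptions.

    def timeclip_lookup(timeline_frame, offset):
        # TimeClip offset of +10 shows clip frame 11 at timeline frame 1
        return timeline_frame + offset

    def timeoffset_lookup(timeline_frame, time_offset):
        # TimeOffset of -10 produces the same result
        return timeline_frame - time_offset

    print(timeclip_lookup(1, 10))     # 11
    print(timeoffset_lookup(1, -10))  # 11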
| | 03:12 | I'm going to set the offset back to the 0
in order to show you the Start at Feature.
| | 03:19 | Okay, we're back to the TimeClip Node
now, and I've set the offset to 0.
| | 03:26 | If I set the Start At feature to, for
example, 10, that means frame 1 of the clip
| | 03:34 | starts at 10 on the timeline, so I'll
jump the playhead to frame 10 to see it.
| | 03:39 | And the last option, the Expression
allows you to enter a mathematical
| | 03:44 | expression for the relationship
between the clip and the timeline.
| | 03:48 | So I could say for example, take the
frame number times 2, and now my clip is
| | 03:55 | coming in on 2s; 22, 24, 26, 28.
| | 04:00 | If I'd like to add an offset, I
can just go and modify the expression,
| | 04:05 | let's say it's like +10.
| | 04:07 | So playhead is on 30, the clip is going
to be 2 times 30 plus 10, which is 70.
| | 04:14 | Let's delete that expression and we are now
seeing the playhead one to one with the clip.
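The Expression option boils down to a timeline-to-clip frame mapping; the frame*2+10 example above reads clip frame 70 when the playhead is on 30.

    clip_frame = lambda frame: frame * 2 + 10
    print(clip_frame(30))  # 70
    print(clip_frame(11))  # 32 -- the clip comes in on 2s, shifted by 10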
| | 04:19 | We can open up the Dope sheet and the
little brackets on this end here, if I
| | 04:25 | slide those forward, you can see I'm
actually modifying the frame range start
| | 04:29 | frame up here, and I can also increment
these, and it will move the Dope sheet,
| | 04:34 | same thing for the last range back here.
| | 04:37 | Notice that the Frame Parameter is set
to Expression, but if I grab the clip
| | 04:41 | here in the middle and slide it,
I get an offset. I'll undo that.
| | 04:46 | The last thing I wanted to show you
is this original range field here;
| | 04:51 | this has no effect on the output of the node.
| | 04:54 | All this does, it will remember your
first and last frame from the original
| | 04:57 | clip, sometimes if you are cutting
frames off the head and the tail of a clip
| | 05:02 | and slip syncing it, it can become
confusing where your original clip is, but
| | 05:05 | again, let me show you, I can put a
number 50 in here and it has absolutely no
| | 05:10 | effect on the output.
| | 05:11 | If all you want to do is reverse or
offset the timing of a node stack, then
| | 05:15 | the TimeOffset node might be simpler
to use, but if you want more complex
| | 05:20 | timing, then the TimeClip Node offers
more controls, plus interaction with the
| | 05:24 | dope sheet.
11. New Viewer GuidesUnderstanding masks and guidelines| 00:01 | All new for Nuke 7, we now have
built-in Viewer guides for a variety of
| | 00:06 | different film and video formats.
| | 00:07 | There are masks for showing where your
shot will be trimmed for the projection
| | 00:11 | format, as well as title
safe and action safe guides.
| | 00:13 | Let's start here with this big Read6
frame and the first thing we are going to
| | 00:19 | do is set the mask ratio, that is
defining what the format of the output job is
| | 00:24 | going to be, so let's say we are
going to do a 1.85 feature film.
| | 00:28 | Immediately our frame is masked and
here's how you control that, this pop-up
| | 00:31 | here, you can say I want no masks, or
just draw me some lines, or I want half
| | 00:37 | density mask, or I want full density mask.
| | 00:40 | I am fond of half density, that
way I can see outside the format.
| | 00:45 | So let's pick up our Viewer and take a
look at a 2K Super 35 scan, and we can see
| | 00:50 | now we have the 1.85 format mask
here, and it's so noted down there.
| | 00:56 | If we take a look at this HD clip, the
1.85 is very close to the HD 1.78, so we
| | 01:01 | are just going to lose a little
bit off the top and the bottom.
| | 01:05 | However, if we look at a Standard Def
NTSC picture, we are going to lose a
lot of picture; of course, I don't
know why you would be chopping a 1.85 out of a
| | 01:14 | standard def picture, anyway.
| | 01:15 | But you could, if you want to.
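For the record, here is the quick arithmetic behind losing "a little bit off the top and the bottom" with the 1.85 mask on a square-pixel HD frame; NTSC is left out because its non-square pixels complicate the numbers.

    width, height = 1920, 1080              # HD, roughly 1.78:1
    masked_height = round(width / 1.85)     # rows that survive the 1.85 crop
    print(masked_height, height - masked_height)   # 1038 42 -- about 21 rows top and bottom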
| | 01:19 | Now let's take a look at the guidelines.
| | 01:20 | We can have a title safe or, an action
safe, or both, and for any of them we can
| | 01:30 | turn on the format center, so we get
a little crosshair in the center.
| | 01:34 | And here is the kicky
thing about the guidelines;
| | 01:37 | they will conform to your mask settings.
| | 01:39 | So if I turn the masks on for 1.85, my
title safe and action safe guidelines
| | 01:44 | conform to the 1.85 area.
| | 01:47 | We'll pick that up and put it on the 2k super
35, and you can see the same thing there.
| | 01:51 | So our title safe and action safe
guidelines work within the masks.
| | 01:56 | Now these masks don't
render with the output image.
| | 01:58 | They are just Viewer Overlays to help
you see the composition of your shot in
| | 02:02 | the final output format, as well as to
give you industry standard action safe
| | 02:06 | and title safe guides.
12. New Alembic GeometryImporting and viewing alembic geometry| 00:00 | Alembic geometry is a major new
development in CG that allows scene content to
| | 00:05 | be shared between
different apps very efficiently.
| | 00:09 | If you're not familiar with Alembic then
I recommend you read the article that I
| | 00:13 | published about it before
proceeding with this video.
| | 00:15 | Now there are several ways to
import Alembic geometry into Nuke.
| | 00:19 | But let's start with the Read node,
because that allows you to bring everything in.
| | 00:23 | Cursor in the Node Graph, we will type R
to punch up a Read node and I'll select
| | 00:28 | the alembic_scene.abc, abc is
the extension for Alembic geometry.
| | 00:34 | By the way, all these files are in
the tutorial assets for this video.
| | 00:37 | So we'll select that, click Open
and it opens up the Scene Graph.
| | 00:43 | Now everything in here was created in Nuke.
| | 00:46 | In the Points folder there
are particles and a point cloud.
| | 00:50 | Under Meshes, which is
Alembic-speak for geometry,
| | 00:53 | we have an Earth, Moon, and Sun.
| | 00:55 | On the Axes, we have three Axis nodes and
we have two Cameras, SceneCam and TopCam.
| | 01:01 | I will fold these back up because the
nature of the Read node import is to bring
| | 01:06 | the entire scene in, in one gulp.
| | 01:08 | So I will click on the Create all-in-
one node button, and there we have it.
| | 01:14 | Here are the two
Cameras, the three Axis nodes,
| | 01:20 | and this is all the
geometry in one ReadGeo node.
| | 01:23 | We will look at what we got here.
| | 01:27 | What we have, if I look through my
scene camera here, we have a little
| | 01:32 | solar system scene.
| | 01:36 | We have point clouds out here, we have
a particle system here, we have three
| | 01:40 | geometries, and we have two
cameras and we can see that right here.
| | 01:45 | There are my two cameras, okay, so
back through my scene camera, and we
| | 01:50 | will stop the playback.
| | 01:51 | Now here is the Property panel for the
ReadGeo that has all the geometry in it.
| | 01:56 | If we go to the Scene Graph tab we
can see all of our geometry here, we can
| | 02:02 | unfold them and then we can
enable and disable bits and pieces.
| | 02:05 | For example, if I go to the Points and
turn that off, I lose both the particles
| | 02:10 | and the point clouds.
| | 02:12 | Here in the meshes I could turn off for
example just the sun, and only the sun disappears.
| | 02:16 | So we will turn that back on
and all the points back on.
| | 02:21 | Now the problem with bringing them all
in, in one ReadGeo node is they are now
| | 02:25 | collected as a group, they
are all one logical entity.
| | 02:28 | So for example, if I bring in a
checkerboard and attach it as a texture map
| | 02:33 | everybody inherits the same texture map.
| | 02:35 | Okay, this is not good, so we are going
to want to bring them in as individual
| | 02:40 | pieces of geometry, so
let's take a look at that.
| | 02:43 | So I am going to delete these, we will
punch up the Read node again and select
| | 02:50 | the Alembic scene one more time, Open.
| | 02:52 | Now this time I am going to turn off
the root, right here, you see this dot.
| | 02:58 | That means the root is the parent and it
brings in everything underneath it.
| | 03:01 | So I am going to turn off the root, then
I'll go to the Meshes and unfold the Meshes.
| | 03:08 | Now I have Earth, Moon, and Sun separate.
| | 03:10 | So if I select the Earth, I
can click over here, I get a dot.
| | 03:17 | That means the Earth is a parent object,
and so it will get its own separate
| | 03:21 | ReadGeo node, we will do that to the Moon
and the Sun, we will also do it to the Points.
| | 03:27 | Remember, the Points has two
elements, particles and point cloud.
| | 03:29 | So now, I am going to get four
ReadGeo nodes, and I have to click on Create
| | 03:36 | parents as separate nodes.
| | 03:40 | And there they are.
| | 03:43 | This ReadGeo node as I enable and
disable is the Earth, this one is the Moon,
| | 03:49 | there is the Sun, and this is both
the point cloud and the particles.
| | 03:53 | Now you can go up to the Scene Graph of
any ReadGeo node and turn off, enable or
| | 03:59 | disable any bits and pieces of it that you wish.
| | 04:02 | Now that my Earth, Moon, and Sun are
in separate ReadGeo nodes, I can apply
| | 04:09 | separate texture maps.
| | 04:10 | So I will get another Read node, I will
select all three of these texture maps,
| | 04:16 | open, and let's hook them up.
| | 04:18 | There is my Earth, my Moon, my Sun, and
now I have texture maps applied to each
| | 04:27 | piece of geometry separately.
| | 04:32 | However, my Points ReadGeo node contains
both the point cloud as well as the particles.
| | 04:37 | I would like to have those
separate, so let's delete those.
| | 04:40 | Let's do one more Read node, get
the Alembic scene one more time.
| | 04:45 | This time we will unfold Points, turn
off the root as the parent, so it doesn't
| | 04:50 | bring them all in in one giant node.
| | 04:52 | We will select particles;
| | 04:53 | say I want you to be a parent and we
will select point cloud, so that's a parent
| | 04:58 | I will now have each one in its own
separate ReadGeo node, again, Create parents
| | 05:03 | as separate nodes, and there they are.
| | 05:06 | Now I have the particles in one
and the point cloud in another.
| | 05:11 | Now you can import meshes, point clouds,
particles, cameras, and transforms, but
| | 05:17 | no materials, textures, or lights yet, but soon.
| | 05:23 | Now that everybody is in their own
separate ReadGeo node, they can be
| | 05:26 | treated individually.
| | 05:27 | So I am going to select my Sun
ReadGeo, go to the 3D>Modify>TransformGeo.
| | 05:35 | And now I could, for example, scale down the
Sun, and now my scene has a much smaller Sun.
| | 05:41 | I could now export the entire scene as an
Alembic file or just the new Sun as an Alembic file.
| | 05:47 | We will see how to export geometry later.
| | 05:51 | The Read node allows you to bring in
all the elements of a scene and select
| | 05:55 | which you want to load.
| | 05:56 | But the Read node is not the only
way to import your Alembic geometry.
| Importing camera and axis information| 00:00 | You can also use the ReadGeo, Camera,
and Axis nodes to bring in specific
| | 00:06 | individual Alembic scene elements, such as
one piece of geometry or maybe just a camera.
| | 00:11 | Let's start by looking at the
ReadGeo node, so we go up to the 3D
| | 00:15 | tab>Geometry>ReadGeo.
| | 00:17 | We go to the File field, open up the
browser and select our Alembic scene, which
| | 00:28 | opens up the Import dialog box.
| | 00:32 | Now we have in here everything for
the entire Alembic scene, points,
| | 00:37 | meshes, axis, and cameras.
| | 00:38 | But we just want to bring in
one element, let's say the sun.
| | 00:43 | So the first thing we do is turn off
Root as being the parent, then we can
| | 00:48 | select the sun and turn that on as a parent.
| | 00:52 | Since we only have the one parent, it
doesn't matter which of these options we
| | 00:56 | choose, I will just choose this one.
| | 00:58 | We will switch to the 3D viewer, and
there is the sun, just to make it easier to
| | 01:05 | see, let's hook up a little checkerboard to it.
| | 01:07 | And when I play the clip, we get
the animated sun loaded in from the
| | 01:11 | Alembic scene file.
| | 01:12 | Up here is the sub frame option, if
you read my Alembic PDF file, you know
| | 01:17 | exactly what this does.
| | 01:19 | But if we turn this off,
it'll speed up the playback a bit.
| | 01:21 | Now remember that the Alembic
geometry is loading a new version of
| | 01:26 | geometry every frame.
| | 01:27 | So this read on each frame
option is what makes it move.
| | 01:31 | If I turn that off, it doesn't move; I can
move the playhead, but there's no more animation.
| | 01:38 | It's locked to frame 1 by default, but
if I would like to use a different frame,
| | 01:43 | let's say I want to use frame
20, again, lock, no animation.
| | 01:46 | Okay, now, there's still no animation,
but at least I am using frame 20.
| | 01:51 | Now let's take a look at the Scenegraph here.
| | 01:55 | Remember the Import dialog box showed
us geometry and cameras and axis, but
| | 02:00 | the Scenegraph for the ReadGeo node is only
going to show us geometry, no cameras, no axis.
| | 02:07 | Down here is a very interesting option,
even though I've only told it to load in
| | 02:11 | the sun, if I turn on view entire scenegraph
all of the geometry shows up, so I
| | 02:18 | could add or subtract from what I have
got loaded in, no cameras and no axis,
| | 02:22 | just geometry when
you're using the ReadGeo node.
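If you want to script this, here is a minimal sketch of pointing a ReadGeo at an Alembic file; the internal class name ReadGeo2 and the file knob name are assumptions, so check them with node.Class() on a ReadGeo you created by hand.

    import nuke

    # Create the node, then point it at the scene file.
    geo = nuke.createNode('ReadGeo2')
    geo['file'].setValue('alembic_scene.abc')   # assumed path to the .abc scene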
| | 02:26 | Now let's see how to load a camera or an axis.
| | 02:29 | Let's do a camera, we will go
to the 3D, and select Camera.
| | 02:35 | Again, we are going to read from file,
so notice this read from file option here
| | 02:39 | also appears on the File tab,
they are wired together.
| | 02:42 | So if we go to the File tab, select read
from file, browse to our Alembic scene,
| | 02:49 | select our Alembic scene, and we get a
warning, because it's about to load in
| | 02:54 | the camera data and if you had any
animation in this Camera node, it's going to
| | 02:57 | blow it away, so we are
good, so we will say yes.
| | 03:00 | Now if I back out here and play,
you can see I have my camera data.
| | 03:06 | Now the camera node will only read
camera data, but the Alembic scene had two
| | 03:11 | cameras, so to choose we go to node
name, pop-up, and there is the SceneCam
| | 03:17 | which I have, but if I wanted the TopCam,
I can select that, back out a little
| | 03:22 | bit and play the animated TopCam.
| | 03:25 | Okay, but what I really
want is the SceneCam after all.
| | 03:33 | Now the Alembic scene comes in with its
own frame rate and if your job happens
| | 03:37 | to be a different frame rate,
you can override that right here.
| | 03:40 | Now let's say I would like to modify the
camera, so we will go to the Camera tab
| | 03:45 | and note that while I'm playing this,
my numbers are ghosted in the data fields
| | 03:51 | and they are gyrating wildly
as the playhead moves of course.
| | 03:53 | So I am going to stop this and I
am going to turn off read from file.
| | 04:00 | Now the data is baked into the camera,
so if we go to the Curve editor and let's
| | 04:05 | pick Translate Y, there is my
Translate Y Curve, if I select that, now I can
| | 04:14 | edit it, I am going to raise
it up, maybe a little higher.
| | 04:18 | Now as long as I have read from
file turned on, I cannot edit the data.
| | 04:24 | So if I don't like my changes, I can
just go back and say read from file, and it
| | 04:29 | will overwrite my changes and put
it back to the original settings.
| | 04:32 | We have seen how the ReadGeo, Camera,
and Axis nodes allow you to import exactly
| | 04:38 | the elements you want and to modify them.
| | 04:41 | The only thing left is to take a look at
how to write out your own Alembic file.
| Exporting alembic geometry to disk| 00:01 | You can also write Alembic files with Nuke,
either the entire scene or selected elements.
| | 00:06 | However, if you want to export lights,
you will have to use the FBX file format,
| | 00:10 | since Alembic does not support lights yet.
| | 00:14 | Here is the Node Graph used to
create the original Alembic scene.
| | 00:17 | We have these two cameras, we have
three Axis nodes, here are our particles, there
| | 00:23 | is the point cloud, and we have our Sun, Earth,
and Moon geometries, or meshes in Alembic-speak.
| | 00:30 | Now let's say I would like
to write out the entire scene.
| | 00:33 | So we will select the Scene node,
because it's connected to everybody.
| | 00:37 | We will go to the 3D pop-up>Geometry>WriteGeo,
I get my WriteGeo node and my dialog box.
| | 00:48 | So I will browse to my destination and
I will name it alembic_scene.abc, and of
| | 00:57 | course, the abc extension is critical,
because that's what tells Nuke that this
| | 01:00 | is going to be an Alembic
scene, so we'll open that.
| | 01:04 | And by default the WriteGeo node is
going to render out all the elements, axes,
| | 01:10 | cameras, pointClouds, geometries.
| | 01:13 | To render it to disk, we will select
Execute and if I want a custom frame range,
| | 01:17 | I can set that here, and then click OK,
we will cancel that, and close this.
| | 01:24 | Now suppose that I just
want to render out the cameras.
| | 01:27 | Remember, I have a TopCamera and a SceneCamera.
| | 01:31 | So I will open up my WriteGeo node, and I
will turn off everything except the cameras.
| | 01:36 | Of course I am going to rename this
cameras.abc, so don't forget the .abc
| | 01:43 | extension, again, that's very important to let
Nuke know this is your Alembic file format.
| | 01:48 | Okay, we are ready to execute, I want
the full frame range, I will say OK.
| | 01:52 | And it renders out the two
cameras to one file called cameras.abc.
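A minimal scripted version of that export, for reference; the Scene node name is an assumption, and the knobs that filter which element types get written are not set here, so choose those in the Property panel as shown.

    import nuke

    scene = nuke.toNode('Scene1')                      # assumed name of the Scene node
    writer = nuke.nodes.WriteGeo(file='cameras.abc')   # the .abc extension is critical
    writer.setInput(0, scene)
    nuke.execute(writer, 1, 100, 1)                    # render frames 1 to 100 to disk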
| | 01:58 | So let's close this, now I
want to bring those cameras in.
| | 02:02 | So I'll go to the 3D pop-
up and get a Camera node.
| | 02:05 | So now I want to load the cameras.abc
file that I just rendered, go to the File
| | 02:10 | tab, turn on read from file and then
browse to the folder with the cameras file
| | 02:17 | in it, there it is, we will open that,
and again, we get the warning, it's going
| | 02:22 | to overwrite any animation we
may have had, we will say yes.
| | 02:25 | So we can see my two new cameras, I am
going to select everybody and disable all
| | 02:30 | the nodes, except the Camera node.
| | 02:32 | Okay, so now we look in the
Viewer, there is my Camera.
| | 02:37 | If I play that, that's my SceneCamera.
| | 02:42 | Now the Camera node has brought in
all cameras that were in that file and
| | 02:47 | there were two, the SceneCamera and
the TopCamera, so I could select the
| | 02:51 | TopCamera, there he is, play that,
but I really want my SceneCamera, so we
| | 02:58 | will go back to that. And there we are.
| | 03:01 | Okay, we are done with this, we will
delete that Camera node, select all the
| | 03:07 | nodes and turn them all on, get our
scene back, and of course I want to look
| | 03:12 | through my SceneCamera to admire my shot.
| | 03:14 | Now let's say you wanted to render out
all three of the geometries, the Sun, the
| | 03:18 | Earth, and the Moon, no problem, what
we need is a Scene node, get a new Scene
| | 03:25 | node over here, then hook it up to the
geometries I want to render out, then I
| | 03:31 | will connect the WriteGeo node to
the Scene node and render to disk.
| | 03:38 | And because the Scene node is only
hooked up to these three geometries, that's
| | 03:42 | the only thing that will go into the file.
| | 03:44 | Okay, we are done with that, one more case.
| | 03:46 | Let's say I just wanted to do the Moon.
| | 03:49 | No problem, we will go to the 3D pop-
up>Geometry and add a WriteGeo node.
| | 03:57 | Since the Moon geometry is the only
thing connected to the WriteGeo node, it's
| | 04:04 | the only thing that will be written to disk.
| | 04:07 | The ability to read and write Alembic
scene files gives Nuke an important new
| | 04:11 | capability to share 3D scenes
with any app that supports Alembic.
13. The New PointCloudGenerator NodeTracking and point generation| 00:01 | The PointCloudGenerator has been
rewritten to calculate cleaner, more
| | 00:05 | accurate point clouds.
| | 00:06 | The workflow has been changed so that
you analyze a shot to set keyframes,
| | 00:10 | calculating the accuracy for each keyframe.
| | 00:13 | You can then select the most
accurate frames to create a point cloud.
| | 00:18 | To see how it works, let's
go get a PointCloud node.
| | 00:22 | 3D pop-up>Geometry>PointCloudGenerator,
we hook the source input to our clip and
| | 00:29 | by the way, this clip is in your
tutorial assets and also it has a built-in
| | 00:35 | Ignore Mask, so we are going to need that.
| | 00:37 | So in the PointCloudGenerator node, we
need to set the Ignore Mask to look at
| | 00:41 | the Source Alpha channel.
| | 00:43 | Okay, don't forget that.
| | 00:45 | Next, we need a camera input, a tracked camera.
| | 00:49 | So we'll just go get 3D>Camera, hook
that up, you will find a tracked camera in
| | 00:55 | the tutorial assets.
| | 00:57 | So make sure you're on the File tab and
enable read from file, then we will go
| | 01:02 | load our tracked camera data, open up the
folder, go to the Tutorial assets, here
| | 01:07 | is our PointCloudGenerator tiff files
and here is our TrackedCamera.fbx file,
| | 01:12 | select that, click Open.
| | 01:16 | Yes, I am sure I want to do this.
| | 01:18 | We will go back to the Camera tab and
we will turn off read from file, so that
| | 01:23 | the data becomes live.
| | 01:24 | We are done with the camera, so we
can close that Property panel and take a
| | 01:28 | close look at the PointCloudGenerator.
| | 01:30 | The first workflow we will
look at is Automatic Keyframing.
| | 01:34 | To do that, we will do an Analyze Sequence.
| | 01:37 | So when you click the Analyze Sequence,
it takes the tracked camera and analyzes
| | 01:42 | the clip to determine where to put the
keyframes to calculate the point cloud.
| | 01:49 | So we have keyframes here, and down on
the timeline you can see the blue ticks.
| | 01:53 | Notice that there is a Calculated
Accuracy associated with each keyframe.
| | 01:58 | So if I move the playhead to the next
keyframe, I have a Calculated Accuracy of
| | 02:03 | 0.80, the next keyframe 0.89 and so on.
| | 02:08 | This is very important;
| | 02:09 | we need to keep an eye on
our Calculated Accuracy.
| | 02:13 | Before creating the point cloud, we want
to take a look at the Point Separation
| | 02:18 | and the Track Threshold values.
| | 02:21 | Point Separation first.
| | 02:23 | This parameter puts the points
closer together or further apart.
| | 02:27 | If you have a very large point cloud
you might want to set them further apart,
| | 02:31 | and if it's a smaller one, closer together.
| | 02:35 | Next Track Threshold, this
parameter rejects all tracks that fall below
| | 02:41 | this quality value.
| | 02:43 | If you have a shot with fast cameras
or a lot of motion blur, you will have a
| | 02:47 | lot of low quality track values, so you
might want to lower this; if you leave it
| | 02:51 | high, you will have very few acceptable tracks.
| | 02:55 | Okay, let's say we like all of our
settings, we are ready to go to create our
| | 02:59 | point cloud and we click on Track
Points, and we set the range that we want to
| | 03:03 | track, we will start by
doing the entire clip, click OK.
| | 03:07 | So the PointCloudGenerator is
jumping from keyframe to keyframe using the
| | 03:10 | tracked camera data and the trackers it has,
to calculate the point cloud over the
| | 03:15 | whole length of the shot.
| | 03:15 | All right, let's go see what we got,
cursor in the Viewer, Tab key, switch to
| | 03:21 | 3D, oh, look at that, very nice.
| | 03:25 | And if we play that, we can see our
moving camera and we can look at the
| | 03:32 | point cloud through our tracked camera right
here, lock the viewfinder and play, outstanding!
| | 03:39 | Now we are going to want to confirm
the accuracy and we are going to use the
viewer wipe controls to do that.
| | 03:46 | So I will hook a second input of the
viewer to the PointCloudGenerator node, we
| | 03:50 | will come up here and set the Viewer
wipe controls to over, and I want the
| | 03:55 | point cloud over the Read6, make sure I am
in 3D, make sure I am looking through my
| | 04:00 | camera and I have the viewfinder
locked, we will pull out a little bit.
| | 04:06 | Now I like to confirm my point tracks
by setting the display to wireframe, so I
| | 04:10 | have a bunch of points.
| | 04:12 | All right, so let's zoom
out and let's play this.
| | 04:17 | I am going to ping-pong the playhead,
let's play this, to see how the points
| | 04:20 | are locked on to target, are they drifting,
are they squirming, no, everything looks great.
| | 04:27 | Okay, I am happy with my point cloud
track, we'll stop this and now we will take
| | 04:32 | a look at setting manual keyframes.
| Keyframing manually and automatically| 00:01 | The purpose of setting manual
keyframes is to tell the PointCloudGenerator to
| | 00:04 | use only the most accurate
frames for its calculations.
| | 00:08 | I would like to call your attention to
the tick marks down here on the timeline;
| | 00:11 | this is a kind of a confusing thing.
| | 00:13 | I am going to open up the Dope Sheet so
that you can see that each one of these
| | 00:20 | tick marks is actually two keyframes,
one for the accuracy, these calculated
| | 00:26 | values right here and the other one
for the keyframes, these guys up here.
| | 00:31 | Now let's jump to frame 1 and
we'll look at the Calculated Accuracy
| | 00:35 | 0.69, not terribly good.
| | 00:39 | So I'm going to delete the keyframe
and notice that the Keyframes field has
| | 00:44 | jumped to 21 and turned light blue;
| | 00:46 | that means there's no
keyframe where the playhead is.
| | 00:49 | So if I jump to the next keyframe, 21,
you will see it's bright blue, so if
| | 00:54 | I get my cursor off the keyframe, it turns
light blue, on the keyframe, bright blue.
| | 00:59 | So this is the way you can tell the
keyframes for the keyframes compared to the
| | 01:03 | keyframes for the
calculated accuracy. OK, 0.80;
| | 01:07 | let's say I don't like that
one, I am going to delete that.
| | 01:10 | And then I'll jump to the next keyframe, ah,
0.89, that looks good, and the next one
| | 01:15 | and the next, OK great.
| | 01:16 | So I am just going to
keep those really good ones.
| | 01:20 | So I am going to track my points again,
and I am going to tell it to do the
| | 01:24 | entire frame range and click OK.
| | 01:26 | OK, notice that my point cloud is now
truncated, it's clipped off because I
| | 01:34 | only had keyframes from 42 to
100, no keyframes out there.
| | 01:39 | So that's one of the consequences of
using a short range for your keyframes.
| | 01:44 | Notice, also down here in the Dope
Sheet I have the keyframes deleted for the
| | 01:48 | keyframes, but I still have
keyframes for the accuracy.
| | 01:51 | All right, let's clear these points.
| | 01:54 | We will take a look at what
happens if you have too few keyframes.
| | 02:00 | So I am going to jump here to frame
42 which is a keyframe keyframe and I
| | 02:04 | am going to delete him and then I'll
jump forward to 61, another keyframe,
| | 02:09 | and delete that one.
| | 02:11 | So now I only have keyframes from 81 to 100.
| | 02:15 | So let's see what happens if we try
to track those points and again, for
| | 02:19 | the entire clip, OK.
| | 02:20 | OK, an error message, I have
insufficient keyframes, all right, say OK, so we
| | 02:28 | will fix that just by jumping over to
here and adding a keyframe there, notice
| | 02:33 | it turns bright blue, jump over
to here and add another keyframe.
| | 02:37 | OK, let's try our track points one more time.
| | 02:43 | OK, now we got a good track, but
again we have a truncated point cloud because
| | 02:47 | we are only working with a
limited range of keyframes.
| | 02:50 | OK, let's clear these points so I could
show you how to render selected frame ranges.
| | 02:54 | OK, I am going to delete all the
keyframes and you'll notice even though the
| | 03:00 | blue tick marks are on the timeline,
| | 03:01 | that's because they are for the accuracy,
all the keyframe keyframes are in fact
| | 03:06 | gone, but you might not
know it, looking at the timeline.
| | 03:09 | This time we are going to put in our
own uniform spacing every five frames.
| | 03:15 | So I insert 5 in this field and I
say add all and down here in the timeline
| | 03:20 | and down here in the Dope Sheet,
we have keyframes every five frames.
| | 03:24 | Now I am going to render
two separate frame ranges.
| | 03:28 | So we will open up track points and set
the frame range to 1 to 20, 80 to 100, OK.
| | 03:37 | Now it's going to render two
separate frame ranges of the point cloud.
| | 03:40 | It will render the first frame range and
put up the point cloud for that section,
| | 03:44 | then render the second range and add it to it.
| | 03:48 | OK, there we go, here is our first
point cloud for 1 to 20, it's now working on
| | 03:53 | the second frame range of 80 to 100.
| | 03:55 | There, we now have the second frame
range, you can actually see two different
| | 04:00 | groups of point clouds, of course,
where they overlap, they are convergent.
| | 04:04 | But we can now get more coverage if we want to
just render separate portions of the timeline.
| | 04:08 | OK, I am going to clear these
points to show you, but you can actually do
| | 04:12 | that render one at a time, so I can go to
track points and say just render frame 1 to 20, OK.
| | 04:18 | There, there's my 1 through 20
point cloud, later I decide I would like to have
| | 04:28 | greater coverage, I can go back to track
points and I can say Render 80 to 100. OK.
| | 04:34 | It will render the second group and
actually add it to the first one as before.
| | 04:39 | There, once again, we have
our two point clouds superimposed.
| | 04:46 | Next, let's take a look at post filtering.
| Filtering, grouping, and mesh generation| 00:01 | We're back to the 2D View to show you
the post filtering and grouping features
| | 00:04 | in the PointCloudGenerator node.
| | 00:06 | I am going to analyze a sequence one
more time, so that we get the default
| | 00:11 | keyframes. Then we're going to track
all of the points, and get a brand-new
| | 00:17 | point cloud. Done, and click Track
Points, 1 to 100, go. And our render is almost
| | 00:28 | done, and there you go.
| | 00:30 | We'll switch to the 3D View and
I can kind of prefer to look at my
| | 00:36 | point clouds as wireframe.
| | 00:38 | Post filtering is the process of
removing rejected points, now you get to set
| | 00:43 | the rules of rejection, right here.
| | 00:46 | First of all, the Angle Threshold,
this rejects points less than the angle of
| | 00:51 | parallax set in this field.
| | 00:53 | If I raise this up, it'll reject
points that are larger and larger angle.
| | 00:58 | So far, everything I have has greater than
10.8 degrees of parallax, so it's a keeper,
| | 01:04 | but if I go up high enough, ah, there is my red.
| | 01:07 | So, it has marked the rejected points in
red, and this means those points have a
| | 01:12 | parallax angle of less than 14.6 degrees.
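As a rough illustration of what the angle of parallax means for a point (this is not the node's internal code): it is the angle subtended at that 3D point by two camera positions that observed it, so points seen from a wide baseline triangulate more reliably. The positions below are made up.

    import math

    def parallax_degrees(point, cam_a, cam_b):
        # vectors from the point out to each camera position
        ax, ay, az = (c - p for c, p in zip(cam_a, point))
        bx, by, bz = (c - p for c, p in zip(cam_b, point))
        dot = ax * bx + ay * by + az * bz
        mags = math.sqrt(ax*ax + ay*ay + az*az) * math.sqrt(bx*bx + by*by + bz*bz)
        return math.degrees(math.acos(dot / mags))

    print(parallax_degrees((0, 0, 100), (-10, 0, 0), (10, 0, 0)))  # about 11.4 degrees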
| | 01:16 | The next filtering parameter is the
Density Threshold, this is actually a
| | 01:20 | proximity rating, as points are closer
together, they tend to be more accurate,
| | 01:25 | and the more isolated they are,
the less accurate they are.
| | 01:28 | So, this is a measure of how
close they are to their neighbors.
| | 01:32 | So, if I raise the Density Threshold, I am
rejecting points that are more and more isolated.
| | 01:38 | So far... there we go, there
we go, here we are. All right!
| | 01:42 | Once I have selected my parameters
for rejecting points, I can click Delete
| | 01:47 | Rejected Points and the un-rejected points
that are left will be turned into my mesh.
| | 01:52 | I am going to undo that rejection and
reset these parameters back to default.
| | 01:58 | Next, let's take a look at Groups,
click on the Groups tab, so we can create
| | 02:04 | groups, select points, and
put them into the Groups.
| | 02:08 | So, let's create a group, click on
Create Group, I am going to rename this left
| | 02:14 | and I'd like to change the color.
| | 02:16 | So, I am going to click on color and I am
going to make it a vibrant red, click OK.
| | 02:21 | Let's make another one.
| | 02:22 | Create Group, we'll call this one right.
| | 02:25 | Now before I can select any of my
points, I have to turn on Vertex selection.
| | 02:32 | Okay now, I can select these points over
here, right-mouse pop up, add to group, left.
| | 02:42 | And they've taken on the
red color from the left group.
| | 02:46 | I'll select these points over here,
right-mouse pop up, add to group, right
| | 02:52 | and they become blue.
| | 02:55 | If I turn off the Vertex selection,
go back to Node selection, I get a
| | 02:58 | lovely colored point cloud.
| | 03:00 | Not only can you change the color of the
groups, but you can also control their visibility.
| | 03:06 | We can also bake out the groups, if I
select a group like the right group, or I
| | 03:12 | could select both groups.
| | 03:13 | But let me just select the right
group for now and I can click Bake Selected
| | 03:17 | Groups and I get a new node
BakedPointCloud right, we push into that.
| | 03:24 | So, it's labeled with the group
that I selected, I am going to clear
| | 03:27 | Property bin to get rid of
all the point clouds from my
| | 03:30 | PointCloudGenerator node, double-
click on this guy and there you have it,
| | 03:34 | that's just the right group.
| | 03:36 | Now once you're in the BakedPointCloud
node, you can set the point size smaller
| | 03:41 | or larger if you wish for better visibility.
| | 03:44 | Okay, we are done with that, so I am
going to delete that baked out PointCloud node.
| | 03:48 | Let's go back to the PointCloudGenerator
node and you can also delete the groups.
| | 03:55 | So, I am going to actually select both
groups and say Delete Selected Groups,
| | 03:58 | this does not delete the points, just the
grouping, and they have lost their group colors.
| | 04:04 | You can also create
groups from within the Viewer.
| | 04:07 | So, I can go up to the Viewer and again
don't forget you must turn on Vertex
| | 04:11 | selection or you cannot pick your points.
| | 04:14 | So, I'll select all of them, right
mouse pop-up, create a group, I'll call it all,
| | 04:21 | and let me turn it a lovely yellow.
| | 04:24 | So, I'll set Vertex selection back
to Node selection and I have a lovely
| | 04:31 | yellow point cloud.
| | 04:33 | With the group selected, I can now bake
the selected group to a mesh, click on
| | 04:40 | that a few seconds of computing,
and I have a new node, BakedMeshAll.
| | 04:43 | Now let's see what we got here.
| | 04:48 | Once again I'm going to completely
eliminate my PointCloudGenerator node, we are
| | 04:52 | going to zoom out and bring over some
nodes here, and let's hook this guy in to
| | 04:57 | this little test setup, hook that up
to a Project3D node, hook that up to the
| | 05:02 | RGB and put a ScanlineRender node.
| | 05:04 | So, I am doing a camera projection
of this checkerboard onto my mesh.
| | 05:09 | Open up that now if I set the display
to wireframe, we can see I have a real
good high density mesh that carefully
contours to the compound curved surfaces
| | 05:23 | of this cliff side.
| | 05:24 | I will set it back to textured, and
we're actually seeing the camera projected
| | 05:31 | grid on top of the mesh.
| | 05:33 | I will hook up the ScanLineRender node
to the tracked camera, connect my viewer to
| | 05:40 | the ScanLineRender node, switch to
the 2D View and there's the render of my
| | 05:45 | mesh, with my texture map.
| | 05:48 | Okay, I can then hook up a Merge node
to the original clip and merge that
| | 05:54 | over the background.
| | 05:55 | I set it up for a semitransparent merge, so
that we can check for any squirm or drift.
| | 06:00 | So, I am going to set the Viewer to full
frame for you, so that you can see all the action.
| | 06:06 | We're doing a test render now with
the checkerboard pattern to make sure
| | 06:09 | there is no squirm or drift anywhere in the
scene even in the interior part of our mesh.
| | 06:15 | And there we have it, the grid pattern
is beautifully registered to the mountain
| | 06:19 | side even in the interior regions on
a compound curved surface, normally a
| | 06:25 | really nasty tracking problem.
| | 06:29 | As you can see, the all new
PointCloudGenerator node has dramatically improved
| | 06:33 | workflow and point cloud accuracy.
| | 06:36 | The ability to save out point groups or
export beautiful meshes puts this node
| | 06:40 | at the top of my 3D compositing list.
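If you'd rather wire up this squirm-test projection with Nuke's Python API than by dragging pipes, here is a minimal sketch of the setup described above. The node names (BakedMeshAll1, Camera1, Read1), the geometry node's img input index, and the other input indices are assumptions; verify them against your own script.

```python
# Minimal sketch: rebuild the camera-projection squirm test from the video.
# Node names and input indices are assumptions; adjust to your own scene.
import nuke

mesh    = nuke.toNode('BakedMeshAll1')   # baked mesh from the PointCloudGenerator
tracked = nuke.toNode('Camera1')         # solved/tracked camera
plate   = nuke.toNode('Read1')           # original clip

checker = nuke.nodes.CheckerBoard2()     # test texture that makes squirm or drift obvious

proj = nuke.nodes.Project3D()
proj.setInput(0, checker)                # image to project
proj.setInput(1, tracked)                # camera to project from

mesh.setInput(0, proj)                   # assumed: img input of the baked mesh node

render = nuke.nodes.ScanlineRender()
render.setInput(1, mesh)                 # obj/scn input
render.setInput(2, tracked)              # cam input

merge = nuke.nodes.Merge2()
merge.setInput(0, plate)                 # B: original clip
merge.setInput(1, render)                # A: rendered, textured mesh
merge['mix'].setValue(0.5)               # semi-transparent merge so drift is easy to spot
```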
14. The Improved DepthGenerator Node
Setting up and analyzing
| | 00:00 | DepthGenerator has been improved to
calculate cleaner, more accurate depth passes.
| | 00:04 | The workflow has been changed, so
that you can first analyze the shot to
| | 00:08 | calculate the optimum frame separation.
| | 00:11 | There are also new output options to
convert depth to position and normals passes
| | 00:15 | for use with other Nuke nodes as
well as the ability to convert the depth
| | 00:19 | pass to an extruded mesh.
| | 00:21 | I am using the DepthClip.movie file,
which you will find in the tutorial assets,
| | 00:26 | if you'd like to play along.
| | 00:28 | In addition to the clip,
we need a tracked camera.
| | 00:31 | So, let's go over to our 3D pop up,
we'll get a Camera, and I happen to have
| | 00:36 | tracked camera data for you.
| | 00:38 | So just click on the Import chan file,
browse to the DepthGenerator tutorial
| | 00:43 | assets, and click on the
TrackedCamera.chan file. Say open, and we now have a
| | 00:49 | completely animated camera.
| | 00:52 | We're done with the Camera
Property panel, so we'll close that.
| | 00:55 | Next, we will add our DepthGenerator node,
so we will select the Read node, come
| | 00:59 | over to the 3D pop up, and select
DepthGenerator, and hook up our Camera.
| | 01:05 | We need a little bit more space for our
Property panel here, so let's do this.
| | 01:10 | Now just by way of comparison, here is
the old DepthGenerator Property panel, as
| | 01:16 | you can see compared to this one,
there are many more options and controls.
| | 01:22 | First up in the DepthGenerator
Property panel is the Ignore Mask option.
| | 01:26 | If you have things in the scene that
are troubling your tracking, like maybe
| | 01:30 | people walking or something moving,
you can put in an Ignore Mask that will
| | 01:34 | tell the tracker to ignore it.
| | 01:36 | You can put it in the alpha channel of the
source clip, or you can put it in this Mask
| | 01:42 | input, and then use the pop-up
here to tell Nuke where to look for it.
| | 01:46 | Next, is the Depth Output.
| | 01:48 | By default, it's going to calculate
the typical Depth channel, you could also
| | 01:52 | tell it to do distance but
we're going to stay with depth.
| | 01:55 | The next thing to setup is the Frame Separation.
| | 01:58 | Now the DepthGenerator is
already computing the Depth Pass.
| | 02:02 | Let's go set the Depth Pass into the
alpha channel of our viewer, take a look
| | 02:07 | and I am going to gain down the Viewer,
so we can get a better look at it.
| | 02:12 | So, if I jump the playhead from frame 1
to frame 11, frame 21, you could see the
| | 02:16 | Depth Pass is updating every frame.
| | 02:19 | Now the DepthGenerator triangulates
between the same features on two different
| | 02:24 | frames in order to calculate the
Depth Pass for the current frame.
| | 02:28 | The Frame Separation defines how far
apart those two frames are from the
| | 02:33 | current playhead position.
| | 02:35 | For slow-moving cameras, you want a
larger frame separation, and for a fast
| | 02:39 | camera, a smaller separation, in order
to get an equivalent camera baseline.
| | 02:43 | So, that's what the Frame Separation is
all about, and we're looking at a depth Z
| | 02:48 | channel with a Frame Separation of 1.
| | 02:51 | If I change that to 2, it's now looking
two frames out from the current playhead
| | 02:56 | position; again, it's still only
inspecting two frames. And if I say 5, it's
| | 03:02 | looking 5 frames out.
| | 03:05 | Now let's take a look at the Analyze
Frame button, when you click that button,
| | 03:09 | wherever the playhead is and right now
I am on frame 21, it's going to look up
| | 03:14 | and down the timeline analyzing the
clip to determine the optimal frame
| | 03:18 | separation for that frame.
| | 03:20 | So, let's click on Analyze Frame,
watch the playhead go up, and then down,
| | 03:25 | and boom, there we go.
| | 03:27 | So, it determined the best frame
separation for this was 14.
| | 03:32 | Notice, it also produced a Calculated
Accuracy number here, 0.96; the closer this
| | 03:38 | number is to one, the more accurate the
calculation, and of course, the closer
| | 03:42 | to zero, the worse.
| | 03:44 | This next button up here, the Analyze
Sequence button performs an analysis
| | 03:48 | of the entire clip, determining at key
points down the shot where your best
| | 03:53 | frame separation is.
| | 03:54 | So, let's click Analyze Sequence and watch
what happens, okay, it says it's going
| | 03:58 | to overwrite my current keyframe, we
will click Yes, and it cruises the timeline,
| | 04:04 | calculating the Frame
Separation at key points, and there we go.
| | 04:09 | I'm going to jump the playhead to frame
1, and you can see the Frame Separation
| | 04:14 | it calculated here was 15 frames,
and the Calculated Accuracy is 0.88.
| | 04:18 | If I jump the playhead to the next
keyframe, the Frame Separation changed to 14
| | 04:24 | and my Calculated Accuracy 0.86, and so
on up to frame 12, with a different Frame
| | 04:30 | Separation and a different Calculated Accuracy.
| | 04:33 | Once we have the Frame Separation
established we can move on to the
| | 04:36 | DepthGeneration, the subject of the next video.
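For reference, here is a minimal Python sketch of the hookup walked through in this video. The Read node name and the DepthGenerator input order are assumptions, and the chan-file import is still done from the Camera node's UI button as shown in the video, since that step is interactive.

```python
# Sketch of the DepthGenerator setup: source clip plus a tracked camera.
# DepthGenerator is a NukeX-only node; input order may differ in your build.
import nuke

read = nuke.toNode('Read1')          # assumed name of the DepthClip Read node
cam  = nuke.nodes.Camera2()          # tracked camera; import TrackedCamera.chan via its UI button

depth = nuke.nodes.DepthGenerator()
depth.setInput(0, read)              # source clip
depth.setInput(1, cam)               # camera input (check the input order in your version)
```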
Refining the output
| | 00:00 | This is where we're going to refine the depth
pass itself, starting with the Depth Detail.
| | 00:06 | This is the pixel sub-
sampling from the original clip.
| | 00:10 | A Depth Detail of 0.5 means that the
image has been scaled down to half resolution.
| | 00:14 | If I set the Depth Detail to 1, it's
using the image at full size, more accurate
| | 00:20 | depth calculations, but slower processing.
| | 00:23 | We will come back to normal
detail in a minute. Next is Noise.
| | 00:28 | The Noise setting tells it how
much noise in the clip to ignore.
| | 00:31 | If I set that up to 0.2 for example,
you see the clip has become so blurred out
| | 00:38 | that we don't get any good results,
so we will put that back to default.
| | 00:42 | The next parameter is Strength, you
increase this to better match the fine
| | 00:46 | detail in the picture; it sort of
pulls the Depth Map tighter to the image.
| | 00:52 | You'll see what I mean if I crank this up to two.
| | 00:55 | Okay, we will put that back
to default. Next is Sharpness.
| | 01:01 | Sharpness actually performs a
sharpening operation on the final Depth Map
| | 01:05 | itself, you can see the effect of that,
if I take that from 0.5 to 0.9 for example,
| | 01:10 | there you go, now I can undo
and redo that so you can see it.
| | 01:15 | Next is Smoothness, Smoothness performs an
intelligent blur on the depth pass itself.
| | 01:20 | So let me take that from 0.5 to 0.9 and it
basically has applied a blur to the depth pass.
| | 01:27 | Here I will undo and redo,
so you can see the difference.
| | 01:31 | The problem with going too far with
Smoothness is that you can lose local detail.
| | 01:34 | Now let's take a look at
creating a card from the depth pass.
| | 01:38 | We want to pick an accurate frame and
this one has a 0.75 accuracy, not so good.
| | 01:43 | Well, let's jump back to frame six,
there we go, that's better, a 0.86.
| | 01:49 | The greater the accuracy, the
better the mesh will fit the scene.
| | 01:52 | I will click Create Card and it
creates a new node called DisplaceCard.
| | 01:59 | Now, it's building this mesh based
on the current frame of the playhead.
| | 02:02 | So now we will switch to the
3D view and see what we got.
| | 02:07 | We now have a 3D mesh, that we can use to
line up geometry to the live-action clip.
| | 02:16 | Now, we can take a look at the
Surface Point and Surface Normal outputs.
| | 02:21 | Surface Point output can be used with
the PositionToPoints node to create a
| | 02:25 | point cloud, but first we are going
to have to create the channel set.
| | 02:29 | So we will just pop this up, say I
want to make a new channel set, I want to
| | 02:34 | call it points, with an x, and a y, and a
z. We'll say OK and immediately it starts to
| | 02:43 | calculate the points pass; let's
take a look back at our RGB pop-up,
| | 02:48 | there is our points pass.
| | 02:51 | Now let's take a look at the Surface
Normal, pop that up, we will create a new
| | 02:55 | channel set, and we call that one
normals and again x, y, and z. Say okay and
| | 03:05 | immediately it calculates the normals pass.
| | 03:07 | Now we can come over here
and look at our normals pass.
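The channel sets we just made in the UI can also be declared with Nuke's Python API. This is a small sketch using nuke.Layer, with layer and channel names matching the ones chosen in the video; routing the Surface Point and Surface Normal outputs into them is still done from the node's pop-ups as shown above.

```python
# Sketch: declare the 'points' and 'normals' channel sets from Python.
import nuke

nuke.Layer('points',  ['points.x',  'points.y',  'points.z'])
nuke.Layer('normals', ['normals.x', 'normals.y', 'normals.z'])
```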
| | 03:12 | Now that we have a normals pass, we can
take a look at the Normal Detail setting
| | 03:16 | here, by default 0.25.
| | 03:19 | The Normal Detail parameter controls
how much detail is in the normals pass; the
| | 03:24 | higher the value, the sharper it will
look, but again, more processing time.
| | 03:27 | So let's take a look at that, let me
run that up to 0.9, there we go and now I
| | 03:34 | can toggle between the 0.25
and the 0.9 setting for you.
| | 03:38 | So let's return to our depth pass.
| | 03:41 | The new DepthGenerator node creates a
much more precise depth pass and with
| | 03:45 | its improved workflow design, makes
it an even more valuable tool for your 3D
| | 03:50 | compositing.
15. The New DepthToPoints Node
Setting up and operating
| | 00:00 | In Nuke 7, The Foundry introduced an
awesome new node, DepthToPoints.
| | 00:06 | It takes a solved camera and the depth
channel of a clip to create a texture map
point cloud that can be used to line
up geometry to the clip in 3D. This
| | 00:14 | technique was famously used in
the making of the movie District 9.
| | 00:17 | Here is how you set up the DepthToPoints node.
| | 00:22 | First, we need a clip that has a CGI
render, so we get a Read node, go to the
| | 00:27 | hulk folder and bring in the hulk.
| | 00:30 | Now all these images and camera files
are included in the tutorial assets, we
| | 00:34 | will open that, hook up our viewer.
| | 00:39 | So we have a moving piece of
geometry with a moving camera.
| | 00:43 | Now the CG render of course, has an
alpha channel, but it also has a depth Z
| | 00:49 | channel, here you can see it better,
if I set up for you like this, okay.
| | 00:57 | So it's going to take the depth Z data
that you see here, create a point cloud
| | 01:02 | relative to the camera and then texture
map the images onto it, très cool.
| | 01:07 | A very important point, make sure that
your depth Z channels are not anti-aliased.
| | 01:13 | Okay I'll re-home the viewer and go
back to RGB, and fix my Viewer settings.
| | 01:20 | Next, we need the solved camera.
| | 01:22 | So let's go get from the
3D pop up, a Camera node.
| | 01:27 | And we will browse, then we'll
Import a chan file, and select the
| | 01:33 | hulkCam.chan file, and say Open.
| | 01:37 | Okay, we now have our 3D camera, let's go
take a look in the 3D world. Here it is, okay.
| | 01:50 | So there is my moving camera.
| | 01:52 | Okay. Ah. Let's do our Project Settings
and make sure that the full-size format
| | 02:00 | is set to PC_Video, this will speed up
our render times later and I will close
| | 02:05 | the Project Settings window.
| | 02:07 | Okay to set up our DepthToPoints
node, just select the Read node, go to
| | 02:12 | 3D>Geometry>DepthToPoints.
| | 02:16 | Now immediately, we're seeing something,
but the size and the position are not
| | 02:21 | correct until you hook up the solved
camera data; there, now it's correct.
| | 02:28 | So let's take a look at what we got.
| | 02:33 | So, we actually have a point cloud that
shows whatever side is facing the camera,
| | 02:38 | it's exactly the same size relative to
the camera as the original CG render was.
| | 02:43 | We can then use this to line up geometry.
| | 02:46 | The DepthToPoints Property panel has
surprisingly few adjustments; a critical
| | 02:50 | issue, of course, is that you tell it
where your depth channel is if it's not
| | 02:54 | automatically in the depth Z channel.
| | 02:57 | Point detail, this is the
density of the point cloud.
| | 02:59 | If I change that to something like 0.05, we
get far fewer points, put that back to default.
| | 03:09 | You also have the point size, let's set that
to 1 and you get real tiny little points.
| | 03:15 | You can use these adjustments to set
it appropriate for the scale of your
| | 03:18 | project, let's re-home the viewer.
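For completeness, here is a hedged Python sketch of the DepthToPoints setup just described. The Read node name and the DepthToPoints input order are assumptions to verify in your own build; the chan-file import for the camera is done from its UI button as in the video.

```python
# Minimal sketch of the DepthToPoints setup: CG render with a depth.Z channel plus a solved camera.
import nuke

nuke.root()['format'].setValue('PC_Video')   # smaller full-size format to speed up later renders

read = nuke.toNode('Read1')                  # assumed name of the hulk Read node
cam  = nuke.nodes.Camera2()                  # import hulkCam.chan via the camera's UI button

d2p = nuke.nodes.DepthToPoints()
d2p.setInput(0, read)                        # image plus its depth channel
d2p.setInput(1, cam)                         # solved camera; size and position are wrong until this is connected
```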
| | 03:23 | Alright, that's how you set it
up, now let's see how we use it.
| | 03:27 | Let's start by getting a
checkerboard right there.
| | 03:29 | And to the checkerboard, let's
add a cube, 3D>Geometry>Cube.
| | 03:35 | Now we are going to make
a pedestal for our hulk.
| | 03:39 | I am going to start by taking the top of
the cube and just shortening it down to
| | 03:43 | zero and we need a nice big pedestal.
| | 03:46 | So let's take the uniform scale
up a bit to maybe 1.5 or so.
| | 03:51 | Okay, now let's do a little more accurate
alignment of our 3D geometry to our point cloud.
| | 03:56 | We will set the viewer for an ortho Z
by typing Z on the keyboard, we will push
| | 04:01 | in and position it very
nicely right at his feet.
| | 04:06 | We will switch to the ortho side
view with the X key, push in and shuffle
| | 04:14 | this forward and backward until we get that
lined up; back to the perspective view with the V key.
| | 04:20 | And there we go, we like that.
| | 04:22 | Now, an important point is that our
hulk has some animation, as I step through
| | 04:28 | the clip, you can see he's rotating.
| | 04:30 | So now we are going to have to
add rotation to the pedestal.
| | 04:34 | So let's switch to the top view with the C key.
| | 04:38 | We will go to our cube rotate y and
set a keyframe, jump to the last frame.
| | 04:47 | And we will dial in the cube
rotation until it looks about right.
| | 04:52 | There, I like that.
| | 04:54 | And let's see how it looks, as we step
through the frames, looking good, back to
| | 04:59 | our Perspective view, step
through the frames, looks good.
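The same keyframed rotation can be set from Python instead of the Viewer. This is an illustrative sketch only: the node name, frame numbers, and angle are placeholders, not the exact values used in the video.

```python
# Sketch: keyframe the pedestal cube's Y rotation to follow the hulk's turn.
import nuke

cube = nuke.toNode('Cube1')                  # assumed node name
rot  = cube['rotate']
rot.setAnimated(1)                           # index 1 = Y rotation
rot.setValueAt(0.0,  1,  1)                  # keyframe on the first frame
rot.setValueAt(25.0, 40, 1)                  # keyframe on the last frame (placeholder angle)
```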
| | 05:08 | Okay, we are ready to render the cube.
| | 05:10 | So we will select the cube, go to
the 3D, add a ScanlineRender node.
| | 05:17 | Now we have to render it with the same exact
camera that we were using for our geometry.
| | 05:22 | So let's hook the viewer to the
scanline render, switch to the 2D view, let's
| | 05:26 | clear the Property bin,
take a look at what we got.
| | 05:30 | I am going to ping-pong this.
| | 05:34 | Okay that looks reasonable, alright.
| | 05:36 | So what we've got to do now is
composite the hulk on top of our pedestal.
| | 05:39 | So I will bring in a Merge node, hook
that in because the cube has to be the
| | 05:45 | background, so it has to be the B side input.
| | 05:48 | And we will hook the A side to the hulk.
| | 05:50 | And see what we got.
| | 05:54 | And there you have it, a piece of 3D
geometry, beautifully lined up with a
| | 05:58 | moving 3D object and a moving CG
camera using the DepthToPoints node.
| | 06:04 | The DepthToPoints node is a dramatic
improvement in the ability to line up
| | 06:08 | 3D objects to a source clip and raises
Nuke's 3D compositing capabilities to a
| | 06:13 | whole new level.
16. The New DepthToPosition Node
Setting up and operating
| | 00:01 | The all new DepthToPosition node
takes 3D camera information plus an image
| | 00:05 | with a Depth channel to calculate the x,
y and z position of each pixel in the
| | 00:11 | image, the result is called a position pass.
| | 00:14 | Now these files here, you'll
find in our tutorial assets.
| | 00:17 | So I have set up a 3D scene with my 3D
geometry and a camera just the basics here.
| | 00:22 | Now let's take a look at the 2D render.
| | 00:26 | So coming out of the ScanlineRender
node, I have my RGB image with the alpha
| | 00:32 | channel, plus a Depth Z channel, the
Nuke ScanlineRender node outputs a Depth
| | 00:37 | Z channel automatically.
| | 00:40 | We can see the Depth Z channel better if
I adjust the viewer gamma and the gain.
| | 00:46 | There you go, okay back to the RGB
layer and reset my viewer settings.
| | 00:56 | Okay, let's add the DepthToPosition
node, I will go to the 3D tab pop-up,
| | 01:01 | DepthToPosition and insert the node
right after the ScanlineRender node.
| | 01:05 | Let me dial this down
here and hook up our camera.
| | 01:09 | So this is our position pass, the RGB
channels are filled with XYZ data for the
| | 01:15 | position of every pixel
in three-dimensional space.
| | 01:19 | If you look down here, the Red
channel is holding the X position, the Green
| | 01:23 | channel has Y and the Blue channel has
Z. We can see that if we look in the
| | 01:29 | viewer one channel at a time,
here is the Red channel.
| | 01:32 | So this is the X data, the horizontal
data, and as I move the cursor back and
| | 01:36 | forth, you can see the Red channel changing values.
| | 01:39 | Here is our Green channel which holds
the Y value, so if I bring the cursor
| | 01:44 | down to the bottom, very low numbers and as I
slide up to the top, the numbers get greater.
| | 01:48 | And then finally here is our Blue
channel which has the Z data in it.
| | 01:53 | Keep in mind, the Z data we are seeing
here is world Z, the three-dimensional
| | 01:58 | coordinate, not Depth Z which is the
distance from the camera lens to the
| | 02:03 | polygon of that pixel.
| | 02:05 | Okay, let's see how this works
by doing a little experiment.
| | 02:07 | I am going to push in here and I am
going to sample the pixel value which is
| | 02:13 | really the position value right
here on the nose of our character.
| | 02:18 | Let's go get this Sphere, open it up.
| | 02:22 | So the position of our Sphere is at
origin and we could see that if we switch to
| | 02:28 | the 3D view, there it is.
| | 02:31 | Our Sphere, turn off the hulk,
so we can see the Sphere better.
| | 02:36 | Okay so it's sitting at origin,
we will go back to the 2D view.
| | 02:40 | So I've sampled the RGB position of this
pixel which now holds the XYZ data position.
| | 02:47 | So let's enter that to the
sphere and see what happens.
| | 02:51 | For the X, I am going to call that zero, so
for Y, 1.19, ok 1.19, and then for the Z, 0.40, okay 0.40.
| | 03:05 | Okay, switch back to the 3D view.
Where did our sphere go? There it is.
| | 03:10 | We'll zoom in on that.
| | 03:13 | And now you can see, it's perched
right on the beak of our scary monster.
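The same sampling experiment can be scripted with nuke.sample, which reads a channel value at a pixel. A minimal sketch follows; the node names and the pixel coordinates are placeholders, and it assumes the position data is still in the RGB layer at this point, as it is in the video.

```python
# Sketch: sample the position pass under a pixel and drop the sphere there.
import nuke

d2pos  = nuke.toNode('DepthToPosition1')     # assumed node name
sphere = nuke.toNode('Sphere1')

x_pix, y_pix = 960, 540                      # placeholder pixel coordinates on the nose
px = nuke.sample(d2pos, 'rgba.red',   x_pix, y_pix)   # X position
py = nuke.sample(d2pos, 'rgba.green', x_pix, y_pix)   # Y position
pz = nuke.sample(d2pos, 'rgba.blue',  x_pix, y_pix)   # world Z position

sphere['translate'].setValue([px, py, pz])   # roughly 0, 1.19, 0.40 in the video
```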
| | 03:16 | Now there is an issue you need to be aware of.
| | 03:19 | Let's go back to our 2D view
and back to the Node Graph.
| | 03:22 | First though, I want to reset the viewer,
so I will home that and switch back to RGB.
| | 03:27 | So we are looking at our
position pass right here.
| | 03:30 | Here is the problem, if I hook the
viewer up to the output of the ScanlineRender
| | 03:35 | node, I am seeing the RGB image,
but if I hook it up to the DepthToPosition
| | 03:40 | node, I am seeing the position data.
| | 03:43 | So my position pass data
has overwritten the RGB image.
| | 03:47 | This is usually not good.
| | 03:49 | So what we are going to want to do is
move it into its own separate channel.
| | 03:53 | So we will go back to our DepthToPosition
node, and here is our problem: right
| | 03:59 | here, the output is in the RGB layer.
| | 04:01 | So we are going to create a whole
new layer for the position pass.
| | 04:05 | So we will click that up,
select New, let's call it PosPass.
| | 04:11 | And we are going to do x, and y, and
z, three channels, we will say OK.
| | 04:18 | Now the output of the DepthToPosition
node is now in its own separate position pass.
| | 04:22 | So we go back to the Node Graph, we
hook our viewer up to the
| | 04:27 | ScanlineRender node, we see our RGB image,
to the DepthToPosition node we
| | 04:31 | still see our RGB image.
| | 04:33 | And if we want to see our position pass, we
will select it here and there you have it.
| | 04:38 | You can also apply the DepthToPosition
node to a CG image with a Depth Z
| | 04:42 | channel and a solved camera just like
we saw in the DepthToPoints video earlier.
| | 04:47 | Let's take a look at that.
| | 04:49 | So here we have actually a CG image and
of course it has its alpha channel and
| | 04:54 | its Depth Z channel.
| | 04:57 | So let's see the workflow here.
| | 04:59 | All we have to do is select our RGB
image, 3D pop-up>DepthToPosition.
| | 05:06 | Okay we will scoot this down a little
bit and hook up our camera and now we have
| | 05:11 | the position pass again in the RGB layer,
don't forget it's going to clobber
| | 05:15 | your RGB layer unless you fix it.
| | 05:17 | The DepthToPosition node is used
together with the PositionToPoints node to
| | 05:21 | create the PointCloud inside the DepthToPoints
gizmo; use it anytime you need to
| | 05:26 | know the XYZ
position of a pixel in an image.
17. The New PositionToPoints Node
Setting up and operating
| | 00:00 | The all new PositionToPoints node
takes an image, plus its position pass to
| | 00:05 | create the same Texture Map
PointCloud that we saw earlier in the
| | 00:09 | DepthToPoints Node.
| | 00:11 | To use this node, we need to
already have the position pass rendered.
| | 00:15 | By the way, all these images
are in your tutorial assets.
| | 00:18 | Let's take a look at the first case
where we have our RGB image and the position
| | 00:23 | pass rendered in the same exr file.
| | 00:26 | So here we have our RGB image and
take a look, here is our position pass.
| | 00:31 | We will switch over to the 3D view,
because we are going to make 3D points.
| | 00:36 | So, to add the PositionToPoints node
we go to the 3D pop-up>Geometry and
| | 00:41 | PositionToPoints is in the Geometry
folder, because it's making points geometry.
| | 00:47 | So we will add that.
| | 00:49 | Since the position pass is in the same
data stream as the RGB pass, all we have
to do is come up to the surface point pop-
up and select whatever name you gave
| | 00:57 | the position pass, and bang!
| | 01:00 | There we go, a 3D PointCloud.
| | 01:05 | It's got the same two adjustments as
the DepthToPoints node, first of all the
| | 01:08 | point detail can be reduced
or the point size reduced.
| | 01:13 | Put that back to default.
| | 01:20 | Keep in mind that your PointCloud is
only going to look correct when seen from
| | 01:23 | the camera's point of view.
| | 01:25 | Next, let's take a look at the workflow.
| | 01:27 | If the position pass is in a
separate render, first, I will tell the
| | 01:32 | PositionToPoints node that I do not
have a position pass in the same data
| | 01:36 | stream, we will go back to the Node
Graph and I will set the Viewer back to RGB.
| | 01:41 | So we are now looking at
the RGB layer of this node.
| | 01:45 | To give the PositionToPoints node its
position pass, pull out this little arrow
| | 01:49 | here where it says pos and hook
that up to your position pass render.
| | 01:53 | Now we will switch to the
3D view and there we have it.
| | 01:58 | Now there's an undocumented secret
here, a position pass must be in the RGB
| | 02:04 | layer of this Read node to
come in on the position input.
| | 02:08 | If it's on another layer like a
position pass layer, it will not see it.
| | 02:11 | So remember, the position data
must come in on the RGB layer here.
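Here is a hedged sketch of the two hookup cases just described. The input indices and the way the layer is picked are assumptions; in the UI the position layer is chosen from the node's surface point pop-up as shown above.

```python
# Sketch of the two PositionToPoints hookups described in this video.
import nuke

# Case 1: the position pass lives in the same stream as the RGB image.
rgb_and_pos = nuke.toNode('Read1')
p2p = nuke.nodes.PositionToPoints()
p2p.setInput(0, rgb_and_pos)
# ...then pick your position layer from the surface point pop-up in the UI.

# Case 2: the position pass is a separate render. Note that it must arrive
# in that render's RGB layer for the pos input to see it.
texture    = nuke.toNode('Read2')        # the RGB/texture image
pos_render = nuke.toNode('Read3')        # the separate position-pass render
p2p2 = nuke.nodes.PositionToPoints()
p2p2.setInput(0, texture)
p2p2.setInput(1, pos_render)             # assumed index of the pos input
```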
| | 02:16 | Now this can get confusing, we have
three nodes that kind of sound the same and
| | 02:21 | kind of do the same sort of
thing, so let's de-conflict that.
| | 02:24 | Okay, the PositionToPoints takes a
position pass and the image and creates your
| | 02:31 | PointCloud, no camera needed, no Depth Z.
The DepthToPoints creates a PointCloud
| | 02:38 | just like PositionToPoints, but you
have to give it a Depth Z Channel right
| | 02:43 | here it says depth, and you
have to give it a camera solve.
| | 02:48 | And the DepthToPosition is actually a
2D node, which needs a camera and the
| | 02:52 | depth pass to create the position pass.
| | 02:54 | So if you take the position pass
combined with the PositionToPoints node, you
| | 03:00 | get the DepthToPoints node.
| | 03:04 | Use the PositionToPoints node when
you want to create a PointCloud and you
| | 03:07 | already have the position pass.
| | 03:10 | Again, with a rendered position pass,
you don't need any camera information.
18. CameraTracker New Features
Creating separate cameras and points
| | 00:01 | This video assumes you already know
the camera tracker and it only covers the
| | 00:04 | two new features in Nuke 7.
| | 00:06 | The Foundry responded to customer
requests to separate the creation of the camera
| | 00:11 | from the PointCloud, so now
you only create what you need.
| | 00:14 | By the way, the
CameraTracker is a NukeX only node.
| | 00:19 | The images that we have here are in your
tutorial assets if you would like to load them in.
| | 00:23 | So let's start by adding a
CameraTracker node, 3D pop-up>CameraTracker.
| | 00:28 | So I am going to go ahead and just
click Track Features, because I know this is
| | 00:33 | a sweet clip and the CameraTracker just
loves it, so it will give me a very nice
| | 00:37 | track with all default settings.
| | 00:39 | Okay, done tracking, I will just click
on Solve Camera, get our camera solve.
| | 00:44 | We will go and check the RMS
solve error on the refined tab,
| | 00:49 | beautiful, beautiful.
| | 00:51 | Now our new buttons are right here,
Create Camera and Create Points, but before
| | 00:57 | we use them I want to show
you something on the 3D side.
| | 01:03 | So look what we have here, we already
have a PointCloud, even though we haven't
| | 01:07 | clicked on the Create Points button.
| | 01:09 | So what's going on?
| | 01:11 | Well, the CameraTracker node has
created its own PointCloud, now it's only
| | 01:16 | available to the CameraTracker, you
can't play with it, just to show you
| | 01:19 | what I mean, I am going to select Vertex
selection and I cannot select any of those points.
| | 01:26 | So they are for the internal use
of the CameraTracker node only.
| | 01:30 | Okay, so let's create our camera, we
will click here, here is our Camera node,
| | 01:34 | there is the camera up there, we
play the clip, you see the move, we look
| | 01:40 | through the camera's viewfinder and we
see a very plausible PointCloud movement.
| | 01:45 | Okay, we will stop that, jump back to the
first frame and restore the viewer to the default.
| | 01:50 | Now you could export the camera
solve right now and you'd be done.
| | 01:54 | But if you also want to create the
PointClouds, all right, so let's come over
| | 01:57 | here and click on Create Points.
| | 01:59 | Now interestingly, we get a second set
of points superimposed over the first,
| | 02:08 | these points belong to the
CameraTrackerPointCloud node.
| | 02:12 | I can for example, dial down the point
size and if I switch my Viewer to Vertex
| | 02:18 | selection I can now select these points.
| | 02:21 | So the points in the CameraTracker
PointCloud are real points that you can play
| | 02:25 | with, but the ones in the CameraTracker
node belong to the CameraTracker node
| | 02:29 | and you can't have them.
| | 02:30 | We will put the Viewer
back to Node Selection, again.
| | 02:32 | Now let's say you want to export the
PointCloud, okay, so we will just come up
| | 02:38 | to our Scene node and add a WriteGeo node.
| | 02:42 | And by the way you could also connect
the WriteGeo node to the CameraTracker
| | 02:46 | PointCloud node directly.
| | 02:48 | So we will browse to our destination,
we will name our file points.fbx, say
| | 02:56 | Open, we now have our
pathname and filename, click Execute.
| | 03:01 | Now, this is a PointCloud, it's static,
so I only need to export one frame, we
| | 03:06 | will click OK, boom done!
| | 03:08 | To see what we wrote out, I am going to
clear the Property bin and disable all
| | 03:12 | these nodes, so there's
nothing in the 3D viewer.
| | 03:16 | Now let's go fetch the fbx file we
just rendered, so we'll go to the
| | 03:20 | 3D>Geometry>ReadGeo, browse to
our points.fbx file and open that.
| | 03:29 | Now we will tell the ReadGeo node that I
want to bring in only the PointCloud from the fbx file,
| | 03:34 | and there we have it.
| | 03:37 | I can now set the Viewer to Vertex
selection and select these and play with my points.
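The export and re-import just shown can also be scripted. This is a minimal sketch under stated assumptions: the path is a placeholder, the Scene node name is assumed, and the ReadGeo class name may differ between builds; the write is executed for a single frame because the point cloud is static, as noted above.

```python
# Sketch: export the CameraTracker point cloud to an fbx and read it back in.
import nuke

scene = nuke.toNode('Scene1')                     # or connect to the CameraTrackerPointCloud node directly

writer = nuke.nodes.WriteGeo()
writer.setInput(0, scene)
writer['file'].setValue('/path/to/points.fbx')    # placeholder path
nuke.execute(writer, 1, 1)                        # one frame is enough for a static cloud

reader = nuke.createNode('ReadGeo2')              # class may be ReadGeo in some builds
reader['file'].setValue('/path/to/points.fbx')
```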
| | 03:44 | These new features in the Nuke 7 Camera
Tracker will allow you to create only and
| | 03:48 | exactly what you need for
your visual effects shots.
19. Particles New Features
Understanding new emitter features and velocity controls
| | 00:00 | The big story for particles in Nuke 7 is
about the ParticleEmitter node, it now
| | 00:06 | has several cool new emitter
options, let's take a look.
| | 00:09 | First up the new bbox or bounding box
emitter, previously your emissions were
| | 00:15 | limited to points, edges and faces.
| | 00:17 | Well, now we have the bbox
option, let's see how that works.
| | 00:21 | Right now I have points selected, so I
am getting particles emitted from just
| | 00:25 | the points of this cube.
| | 00:26 | If I set that to bbox, it now emits
particles everywhere inside the bounding box
| | 00:33 | of the cube, in other words, in its interior.
| | 00:36 | This is very cool for things like rain
or snow, where in the past you had to
| | 00:41 | emit from a surface and do a lot of
pre-roll in order to get the particles
| | 00:45 | filling the frame before you could even
start rendering your shot. We'll stop this.
| | 00:51 | The bounding box it emits from has
nothing to do with the shape of the geometry.
| | 00:56 | So I am going to go over here
to this sphere, turn that on.
| | 00:59 | I am going to get rid of my cube,
there, move out a little bit.
| | 01:05 | And now we'll take a look, let me
orient this right along the Z-axis.
| | 01:10 | You can see it's still emitting
in the bounding box of the sphere.
| | 01:13 | So again, the shape of the
geometry has nothing to do with it;
| | 01:17 | you are getting a simple bounding box.
| | 01:19 | We'll stop that and clear the Property bin.
| | 01:22 | I am going to turn these off and
come over here to show you the new emit
| | 01:29 | from selection feature.
| | 01:30 | I'll hook up my ParticleEmitter node and my sphere.
| | 01:37 | The idea here is that you can emit from
points, but the new feature is to emit
| | 01:43 | from only selected points.
| | 01:44 | I'll open up this Particle Emitter node.
| | 01:48 | So it's set to emit from points and as
I play the animation, you can see that
| | 01:52 | every point in the
sphere is emitting particles.
| | 01:54 | Okay, if I however click right here and
select only emit from selected points, bang!
| | 02:01 | It stops emitting.
| | 02:02 | That's because I don't have any
selected points, so let's select some
| | 02:05 | points, we'll stop this.
| | 02:08 | You have to use the GeoSelect node.
| | 02:11 | So I'll select my sphere node, 3D>
Modify>GeoSelect and add my GeoSelect node.
| | 02:21 | Now in order to select my points,
I'll have to set the system into Vertex
| | 02:26 | selection and I am going to enable Occluded
Vertices so I can select points in the background.
| | 02:32 | I am going to select all these points
right here on the equator and then in the
| | 02:36 | GeoSelect node click Save Selection, okay.
| | 02:42 | And now the Particle Emitter only emits
particles from those points that I have
| | 02:46 | in the GeoSelect node.
| | 02:47 | You can also do this from a PointCloud.
| | 02:50 | I am going to disable these, switch
over to here, I am going to show you
| | 02:57 | my PointCloud, and I'll turn off my Point
Occlusion and turn off my Vertex selection.
| | 03:04 | So here is my PointCloud and now you can
see the particles being emitted, and if
| | 03:09 | we open up the ParticleEmitter node,
we are emitting from points, again.
| | 03:13 | So if we play that, you can see
that every point in the PointCloud is
| | 03:17 | now emitting particles.
| | 03:19 | Again, if I select only emit from selected
points, I'll turn that on, no more emission.
| | 03:26 | So we need another GeoSelect node.
| | 03:29 | I'll select my PointCloud, 3D>Modify>GeoSelect.
| | 03:36 | Again, turn on Vertex selection, swing
around and I'll select all these points.
| | 03:43 | Remember, selected points turn blue.
| | 03:45 | All right, so we'll go to save selection,
turn off our Vertex selection back to
| | 03:53 | regular Node selection.
| | 03:54 | I'll pull out a little bit and check it out.
| | 03:59 | Only the points in this
selected region are emitting particles.
| | 04:02 | Okay, we'll stop that, turn all these off.
| | 04:08 | Clear the Property bin and look
at emit in randomized directions.
| | 04:15 | We'll select that, hook up the Viewer.
| | 04:16 | Now I have two spheres here.
| | 04:21 | When I play the Animation, you can
see that both particle emitters have
| | 04:25 | exactly the same settings.
| | 04:27 | So I'll open up just the particle
emitter for the sphere on the right to show
| | 04:32 | you the randomized direction effect.
| | 04:34 | It's right here, emit from, we're
emitting from points, but no random direction.
| | 04:41 | If I select randomize the direction,
it's now emitting particles 360 degrees around
| | 04:47 | every point and some of them are
going inward and some are going outward.
| | 04:52 | If I choose randomize outward, it only
emits particles going outward from the geometry.
| | 04:59 | Okay, we are done with that, save this,
come over here, turn on these, clear the
| | 05:07 | Property bin and hook up to this.
| | 05:09 | I am going to open a cylinder;
this is a really kicky new feature.
| | 05:16 | This is the transfer
velocity feature. Here is the idea.
| | 05:21 | I have this cylinder and with the
GeoSelect node I am just selecting the points
| | 05:25 | on the tip of the cylinder.
| | 05:27 | So now I've set the ParticleEmitter to
emit from points only from selected points.
| | 05:34 | So only the tip of the
cylinder is now emitting particles.
| | 05:37 | Now right now the particles just spawn
in place and the cylinder moves away from
| | 05:42 | them, so they just sort of hang in space.
| | 05:45 | The transfer velocity is right
down here, transfer velocity.
| | 05:50 | The idea is the spawned particles are
going to pick up the velocity of their
| | 05:54 | emitters and that will be added to
whatever velocity you give the particles.
| | 05:59 | So if transfer velocity is set to 0
and I change it to 1, now the particles
| | 06:04 | move with the emitter, and they fling off the
| | 06:08 | end of the cylinder like they were water drops.
| | 06:11 | Now if you set the transfer velocity to a
silly number like 3, this gets you a weird effect.
| | 06:16 | This is giving the particles three
times the velocity of their traveling
| | 06:20 | emitters that might be
useful for an effect somewhere.
| | 06:22 | We'll put that back to
something a little more reasonable.
| | 06:24 | Next is the transfer window: to calculate the
velocity of the emitter, you need to use
| | 06:31 | more than one frame.
| | 06:32 | By default, the transfer window is to
use one frame in front and one frame
| | 06:37 | behind the current frame
to calculate the velocity.
| | 06:42 | We can change that to a
higher number, let's say 10.
| | 06:45 | That means it's going to take 10
frames before and 10 frames after to average
| | 06:50 | the velocity and that considerably
attenuates the acceleration you get from
| | 06:54 | your traveling emitters.
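A quick back-of-the-envelope sketch of what transfer velocity does, based purely on the description above: the emitter's velocity is averaged over the transfer window and then scaled by the transfer amount before being added to the particle's own emission velocity. The function and its names are illustrative only, not Nuke's implementation.

```python
# Illustrative sketch of the transfer-velocity idea described in this video.
def transferred_velocity(emitter_pos, frame, window=1, transfer=1.0, fps=24.0):
    """emitter_pos: function mapping a frame number to the emitter's (x, y, z) position."""
    a = emitter_pos(frame - window)
    b = emitter_pos(frame + window)
    dt = (2 * window) / fps
    avg_vel = tuple((b[i] - a[i]) / dt for i in range(3))   # emitter velocity averaged over the window
    return tuple(transfer * v for v in avg_vel)             # scaled by the transfer amount
    # transfer = 0: particles just hang in space; 1: they fling off with the emitter;
    # 3: the exaggerated "silly" look mentioned above. A larger window smooths the result.
```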
| | 06:56 | Nuke's particle system is a true 3D system;
| | 07:00 | perfectly integrated into Nuke's 3D environment.
| | 07:03 | These new particle emitter features
expand the range of problems that Nuke's
| | 07:07 | particle system can solve.
20. The New Displacement Node
Setting up displacements
| | 00:01 | The DisplaceGeo node will
displace a polygonal surface.
| | 00:04 | But you must first create a lot of
polygons which makes it a heavy render.
| | 00:08 | A displacement shader works differently,
by creating needed polygons on the fly
| | 00:14 | only as they enter the render window.
| | 00:16 | In previous versions of Nuke, the
displacement shader was embedded in the
| | 00:20 | ScanlineRender node, but in Nuke 7
it's been moved to a separate node,
| | 00:24 | the Displacement node.
| | 00:29 | So the images we're using are in the
Exercise Files, so you can play along too.
| | 00:32 | Put this back where it belongs.
| | 00:34 | Now let's take a look at our 3D setup.
| | 00:37 | We have a card and a camera.
| | 00:40 | We'll go over to our 3D view.
| | 00:42 | So here is our card and our camera,
and you'll notice the card only has four
| | 00:47 | polygons along here and four there
for a total of 16 polygons for the whole
| | 00:51 | card, in other words, a very low polygon count.
| | 00:55 | Now let's add our 3D shader.
| | 00:57 | We'll select our texture map image, go to 3D
>Shader>Displacement, and nothing happens.
| | 01:06 | The reason is we haven't hooked up
any displacement yet to our picture.
| | 01:11 | So let's go back to the 2D view.
| | 01:13 | We don't see any displacement in the
image yet, because we've not yet hooked up
| | 01:18 | our displacement map, which
goes on this input right here.
| | 01:22 | Now normally you might take the image
that you're using as the texture map, make
| | 01:27 | a luminance version;
| | 01:28 | paint it in Photoshop in order to get the
altitudes, the elevations that you want.
| | 01:32 | But in this particular case, this image
happens to make a very fine displacement
| | 01:37 | map all on its own, so we will use it directly.
| | 01:40 | And there we have our vertical displacement
based on the luminance values of this map.
| | 01:46 | So let's come up to the
Displacement tab and see what we got.
| | 01:49 | First of all, the displacement channel
is talking to this input right here, your
| | 01:54 | displacement image and whether you are
going to use the luminance of it or maybe
| | 01:58 | choose one of the channels or
perhaps an average of all three channels.
| | 02:02 | In our case, the
luminance version works just fine.
| | 02:05 | The scale factor, this is how much
displacement you're going to get.
| | 02:09 | So let's set that from 0.1 to
something smaller, like 0.05. Boom!
| | 02:14 | Much less displacement, 0.1, 0.05, back to 0.1.
| | 02:17 | Next is the filter size.
| | 02:21 | Now this is actually a blurring
operation, it's being done on the displacement
| | 02:25 | image, not the texture map image here.
| | 02:29 | This is actually applying a blur and it
knocks out some of the fine detail that
| | 02:33 | would tend to move your
polygons more than you want.
| | 02:35 | Let's see what happens if we change
the filter size from 5 to something
| | 02:38 | larger like 20, so we're really blurring and
softening the detail on the displacement map.
| | 02:45 | See the difference?
| | 02:46 | I'll put it back to 5, much more detail;
| | 02:50 | back to 20, much smoothed out.
| | 02:52 | We'll set it back to default.
| | 02:55 | Now the build normals option here
is designed to be used if you hook up
| | 02:59 | a normals image here.
| | 03:01 | This input will take a normals map and
then you would turn off build normals.
| | 03:06 | Now the normal expansion pop-up wakes up,
it was ghosted out before and you can
| | 03:10 | choose the normal
expansion mode of either XY or XYZ.
| | 03:14 | In our case, we're going to leave the
build normals on, in order to build the
| | 03:18 | normals ourselves with our own displacement map.
| | 03:21 | Once the Displacement tab is roughed in,
we can switch to the Tessellation tab
| | 03:24 | to see the rules for the
actual polygon generation.
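Before moving on, here is a minimal Python sketch of the wiring described in this video: the same image feeding both the texture and the displacement map, the Displacement shader applied to the low-polygon card, and a ScanlineRender through the camera. The node class name, node names, and input order are assumptions to verify in your own build; the knobs (scale, filter size, build normals) are easier to dial in from the UI as shown.

```python
# Sketch of the Displacement shader hookup; names and input indices are assumptions.
import nuke

texture = nuke.toNode('Read1')            # image used as both texture and displacement map
card    = nuke.toNode('Card1')            # the low-polygon card
cam     = nuke.toNode('Camera1')

disp = nuke.nodes.Displacement()          # assumed class name of the 3D > Shader > Displacement node
disp.setInput(0, texture)                 # texture map input
disp.setInput(1, texture)                 # displacement map input (check the input order)

card.setInput(0, disp)                    # shade the card with the displacement shader

render = nuke.nodes.ScanlineRender()
render.setInput(1, card)                  # obj/scn
render.setInput(2, cam)                   # cam
```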
Dialing in the tessellation
| | 00:01 | Tessellation is the process of
subdividing the geometry into smaller
| | 00:04 | triangles for rendering.
| | 00:06 | The Tessellation tab allows you to
set the parameters for the subdivision.
| | 00:11 | Now tessellation is a triangular subdivision
process, so let's take a look at the 3D view.
| | 00:16 | Now if we look through the camera's view,
we'll see there is no displacement at
| | 00:22 | all, let me back out a little bit, and
you can see that our camera view only
| | 00:27 | covers this part of the geometry.
| | 00:29 | There's no displacement of the geometry,
because the displacement shader is a
| | 00:32 | render event, not a geometry event.
| | 00:34 | Let's go back to our 2D view so we
can see the actual displacement of the
| | 00:39 | final rendered image.
| | 00:41 | Now let's talk a minute
about what tessellation is.
| | 00:43 | I've made this little demo for you.
| | 00:46 | Tessellation is the process of
subdividing the polygons into triangles. Remember,
| | 00:51 | our card has only a 4x4 array of these
polygons here, so tessellation will first
| | 00:57 | divide each one into triangles, then
subdivide again for a subdivision of
| | 01:04 | two, subdivide again for a
subdivision of three, and then subdivide it
| | 01:10 | again, and so on and so forth; each time we
increase the subdivisions, we get a finer
| | 01:13 | and finer polygonal mesh.
| | 01:16 | So that's what this max subdivision
value is: how many times you're going to
| | 01:20 | subdivide the polygons; this sets an upper limit.
| | 01:22 | For example, the card we have has a 4x4
polygonal array for a total 16 polygons.
| | 01:28 | If the subdivision is set for 3,
we're going to get 2048 polygons.
| | 01:34 | So you have to be careful, you can
easily overwhelm the rendering time with
| | 01:38 | far too many polygons.
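The 2048 figure quoted above works out under one plausible reading of the scheme: the 4x4 card (16 quads) is first split into 32 triangles, and each subdivision level then quadruples the triangle count. Here is that arithmetic as a quick check; treat the exact counting convention as an assumption.

```python
# Worked arithmetic for the polygon counts quoted above.
base_quads = 16
base_tris  = base_quads * 2                       # 32 triangles after the initial triangulation
for level in range(1, 5):
    print(level, base_tris * 4 ** level)          # 1: 128, 2: 512, 3: 2048, 4: 8192
```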
| | 01:39 | All right, we'll go back to our 2D view
and take a look at what happens when we
| | 01:43 | change the max subdivision.
| | 01:44 | A moment ago we had 4;
| | 01:46 | you can see we have lots of fine polygons.
| | 01:50 | If I drop that back to 3,
we get a smoother surface.
| | 01:54 | If I lower it to two subdivisions,
the surface smooths out even more, so
| | 01:59 | we have fewer and fewer polygons as
we walk down the subdivision tree, I'll
| | 02:04 | put that back to 4.
| | 02:05 | The next parameter you want to adjust is
the pixel edge length, by default that's 20.
| | 02:10 | Here's what that number means.
| | 02:12 | Every 20 pixels, it's going to create a new
polygon; now here's the importance of that.
| | 02:18 | The same 20 pixels is used in the front
of the picture as it is back here in the
| | 02:22 | background; that means there'll be the
same number of polygons back here in the
| | 02:26 | background as there
are here in the foreground.
| | 02:29 | That would not be true if you took a
high-density mesh and laid it down; you
| | 02:34 | would have far fewer polygons in the
front than you have in the background.
| | 02:38 | So this is the whole idea of the
displacement shader, the number of polygons
| | 02:43 | stays consistent across the whole scene,
whether it's in the background or the
| | 02:47 | foreground, thus reducing your rendering time.
| | 02:50 | So let's see what happens when we take the
pixel edge length from 20 down to let's say 10.
| | 02:54 | Now that means we're getting a new
polygon every 10 pixels in the foreground and
| | 03:01 | across the background.
| | 03:02 | So our geometrical mesh is now
starting to conform more accurately to
| | 03:06 | the displacement map.
| | 03:07 | Let's see what happens if
we go from 10 down to 5.
| | 03:12 | Now watch the background region of
my polygonal mesh when I set the pixel
| | 03:15 | edge length back to 10.
| | 03:19 | You see, the background twitches,
because the background part of the picture was
| | 03:23 | not yet finely divided enough, so going
from an edge length of 10 down to 5, I
| | 03:28 | got a change, but watch what
happens when I take the 5 down to 3, no
| | 03:34 | change, so I had found
the magic point of 5 pixels.
| | 03:38 | Now going back to our Displacement
tab, if we take the filter size, which
| | 03:43 | remember, that's a blur on the
displacement image input only, and if we take
| | 03:47 | that back up to a number like 20,
it really smooths out the geometry.
| | 03:52 | This has no effect on how many polygons
you are generating, only how smooth the
| | 03:57 | displacement map is, back
to the Tessellation tab.
| | 04:01 | Now let's take a look at the mode,
this is the Polygon Generating mode.
| | 04:05 | By default it's set for screen, that
means it's going to generate polygons only
| | 04:10 | as they fit into the screen.
| | 04:13 | Uniform is going to create a uniform
polygonal mesh, now this is inefficient,
| | 04:18 | because you're going to have a lot of
polygons in the background, and fewer in
| | 04:21 | the foreground, but this can
be faster for initial setup.
| | 04:24 | Adaptive is for a situation where
the displacement has large flat smooth
| | 04:29 | areas, so you can have a lot fewer
polygons there, so it'll adapt to the
| | 04:33 | complexity of the surface.
| | 04:34 | For example, if you had buildings, this
would be a great time to use adaptive,
| | 04:38 | because you have large flat areas.
| | 04:40 | When you select adaptive, you get
another set of parameters to adjust, we'll go
| | 04:44 | back to our screen mode.
| | 04:46 | So how do you know you've got
the right balance of settings?
| | 04:49 | Well, there's the card geometry precision
versus the max subdivision; you want to set
| | 04:53 | those two parameters for your best render speed.
| | 04:56 | Next, you want the filter size to
smooth out the terrain, and then finally, the
| | 05:01 | pixel edge length to retain
the detail in the displacement.
| | 05:05 | And by the way, because the
displacement shader is a shader, you can take other
| | 05:10 | shaders and put them in the stack and
add some sophisticated lighting models.
| | 05:15 | The bottom line is use the
Displacement Shader node to optimize your
| | 05:19 | render times for high precision
meshes, like those used in stereo 3D
| | 05:23 | conversion projects.
21. The New ModelBuilder Node
Understanding the workflow
| | 00:01 | ModelBuilder is a major upgrade to
the old Modeler node which is now gone.
| | 00:05 | Using a clip with a solved camera you
can build basic geometry to project all or
| | 00:10 | parts of the scene onto.
| | 00:12 | The ModelBuilder is a NukeX only node,
but the geometry and projections you
| | 00:16 | create can be used in regular Nuke.
| | 00:18 | You'll find the script in the Exercise
Files, ModelBuilderScript.nk, which
| | 00:23 | already has a nice camera solve and
PointCloud ready to go; this will save time.
| | 00:28 | So, I'm going to ping pong my timeline,
which I like to do when I'm tracking, to
| | 00:32 | take a look at the little
sample clip we have here.
| | 00:35 | This is a little test clip that I
created that makes for a very easy camera
| | 00:39 | track, so everybody has a happy experience.
| | 00:40 | Okay, we'll stop that and jump to frame 1.
| | 00:43 | So let's take a look at our camera
solve, I'll open up the Scene node, we'll
| | 00:47 | jump to the 3D view and
there is our solved scene.
| | 00:52 | Now you do not need a PointCloud, but
you do have to have a TrackedCamera.
| | 00:57 | We'll go back to the 2D view and see
how to hook up the ModelBuilder node, I
| | 01:02 | don't need the PointCloud, so I'm
going to close that Scene node Property panel.
| | 01:06 | The ModelBuilder node lives on the
3D tab Geometry>ModelBuilder, and we
| | 01:13 | hook that into our Read node and Viewer, and
the cam input of course goes to the camera.
| | 01:18 | Now to wake it up you might want to
slide your cursor into the Viewer, we're now
| | 01:22 | seeing our clip with our TrackedCamera.
| | 01:25 | So the mission is I want to put in a
piece of geometry, just a card to track it
| | 01:30 | on top of Marci here, to replace that picture.
| | 01:33 | So the first thing I do is I select the
frame that gives me the best view of my
| | 01:37 | target, which in this case would be frame 1.
| | 01:39 | Next, I'll come up here to the Shape
List and you get to pick which kind of
| | 01:45 | shape works best for your target.
| | 01:48 | For a building you might use the
Cube, in this case I just need a Card.
| | 01:52 | I get the little plus cursor and
then I can just click to plant the Card.
| | 01:56 | Now the first thing I like to do is do
a basic resize of my geometry so it's in
| | 02:01 | the ballpark, so we'll select the Edit
mode, come to the pop-up and say I want
| | 02:07 | to edit my object, which is the Card.
If I click on it, it turns green.
| | 02:11 | Now we get to use the new onscreen 3D
interactive scaling commands that are now in the Viewer.
| | 02:18 | Put that over here and that just
basically roughs it in, makes it easier to do
| | 02:22 | the final alignment.
| | 02:24 | By the way, the 3D grid is kind of in
my way, so I'm going to go to the Viewer
| | 02:28 | settings, select 3D and turn off the grid
and then I'll close that Property panel.
| | 02:33 | With the geometry roughed in, now we
can go for the alignment, and here is our
| | 02:38 | Alignment tool here, and the most
important thing is the first point that you
| | 02:43 | pick, you need to choose a corner point
that basically allows you to get things
| | 02:47 | lined up, do not choose
an interior point in here.
| | 02:50 | I'm going to go for this corner right here,
click and drag and place it over there,
| | 02:55 | notice we're getting a lovely little
zoom window, look at the crosshair in
| | 02:59 | there and that helps me line things up.
| | 03:01 | Okay, I'll come over here, click and
drag and I get my zoom window, so I can
| | 03:06 | line things up real pretty, and we'll
line this one up and line that one up. Okay.
| | 03:13 | I have my geometry positioned
on frame 1, my first keyframe.
| | 03:17 | Now I'm going to roll the playhead out and
find another frame to set my second keyframe.
| | 03:22 | So I'll drag the playhead out here to
about frame 30 where my target is almost
| | 03:27 | going to leave frame.
| | 03:28 | Now all I have to do is realign it on
this frame, watch what happens when I
| | 03:32 | click and drag on my control
point, I get this purple line.
| | 03:36 | Normally, all you have to do is slide
your point along the purple line and
| | 03:39 | it'll be lined up nicely.
| | 03:41 | Notice we're good up to here, I'll click
this point, I get my purple line, and I
| | 03:46 | line that one up and I'll click
this point, I get my purple line.
| | 03:51 | Now, if you ever have to go off
the purple line, just hold down the Shift
| | 03:55 | key, that allows you to deviate, but you
don't want to do that, I'm going to undo that.
| | 04:00 | So I now have two keyframes and if I
scrub through the timeline between 1 and
| | 04:04 | 30, I got a nice track.
| | 04:06 | So I'm going to set one more keyframe
at the end, so I'm going to just jump to
| | 04:10 | my last frame, I'll zoom in here, and
yeah, I've got a little drift, I want to
| | 04:15 | touch up that right there, okay, and
then come up here and check him out.
| | 04:20 | All right, let's say we like that;
| | 04:25 | we now have a card beautifully
tracked over the entire length of the clip.
| | 04:29 | Now if you have geometry such as a
PointCloud or some other modeling that you're
| | 04:33 | going to use to help line things up,
you can hook that in right here on the
| | 04:37 | GeoNode, so I want to hook that up to
my CameraTracker points, there are my 3D
| | 04:42 | points, let's go take a look at what we've got.
| | 04:45 | I'll set the Viewer to default, we'll
back out and there is my card, and notice
| | 04:50 | it's beautifully embedded in
that wall, okay, lines up very nice.
| | 04:54 | But again, you don't need the PointCloud;
| | 04:56 | all you really need to have is the camera.
| | 04:59 | To restore the ModelBuilder 3D view,
you must do two things, you must
| | 05:04 | select your TrackedCamera and lock
the viewfinder, then and only then, will
| | 05:09 | you get your picture back.
| | 05:10 | If you have some lineup reference
geometry like this PointCloud, or some other
| | 05:13 | geometry, you can turn it off right
here by clicking the Pass Through Geo button,
| | 05:18 | we're actually done with that, so I'm
going to disconnect my Geo Input and
| | 05:22 | select the ModelBuilder node.
| | 05:24 | Now we're doing the simple match move
case, where I want to export this card,
put a texture map on it, and render it back on top.
| | 05:30 | So to export the geometry, you come
over to the Scene list and you select all
| | 05:35 | the geometry you want to export,
we just have the one Card.
| | 05:40 | Then you click the Bake Scene Selection,
click that button and you get another
| | 05:44 | node, and this has the baked out geometry.
| | 05:47 | We're done with our
ModelBuilder, so we'll turn that off.
| | 05:51 | Hook the viewer up to our
new node, and here we go.
| | 05:54 | Okay, this is the 3D view, we're seeing
it rendered through our TrackedCamera.
| | 06:00 | If we switch to 2D, we're going to need
a ScanlineRender node of course, we're
| | 06:06 | going to need to hook up to our
TrackedCamera and then we'll need a little
| | 06:13 | texture map, give it some
pixels, here we go. Okay.
| | 06:19 | So this is now our 2D render with a
nice alpha channel, so all we have to do is
| | 06:25 | attach a Merge Node and hook that back
over the original clip, but we now have a
| | 06:30 | match move replacing that image on the wall.
| | 06:34 | We'll play that and have an admiration moment.
| | 06:38 | And so the geometry is now beautifully
match moved with the original clip,
| | 06:42 | we'll stop that, jump to the beginning.
| | 06:47 | This demonstrated the basic
workflow for a simple match move case.
| | 06:51 | Next, let's kick it up a notch, and
see how to use ModelBuilder to create
| | 06:55 | something a bit more
complex and use camera projection.
Modeling complex geometry
| | 00:00 | You can use the ModelBuilder node to
create geometry more complicated than a
| | 00:04 | simple card that can be
used for camera projection.
| | 00:07 | Here's an example of the workflow, by
the way I'm using the ModelBuilderScript
| | 00:10 | you'll find in the Exercise Files.
| | 00:13 | So let's add our ModelBuilder node, go
to the 3D tab>geometry>ModelBuilder, hook
| | 00:20 | it into our source clip, hook the
camera input to the TrackedCamera, nothing
| | 00:26 | happens until we move
the cursor into the Viewer.
| | 00:29 | Again, we'll turn off the grid, so
we'll select our Viewer settings, 3D grid off,
| | 00:36 | close Property panel.
| | 00:39 | Now it's time to take a look
at the Shape Defaults tab.
| | 00:43 | This tab defines the precision, the
number of polygons for each of the shapes
| | 00:46 | you're going to create.
| | 00:47 | We created the card earlier and it had a 4x4,
if I want that to be a 2x2 I could do that.
| | 00:54 | So you can create the geometry here,
just as well as you can from this popup,
| | 01:00 | this popup list of course is going to use
the settings over here on the Shape Defaults.
| | 01:04 | So let's say I'm ready to create a Cube,
I want to create a Cube for this box
| | 01:09 | and camera project it.
| | 01:10 | So I'm going to click Create here,
notice my cursor is turned into the plus,
| | 01:15 | come over to the Viewer
and click to create my Cube.
| | 01:19 | As before we'll do a rough edit to
ball park the geometry where it belongs.
| | 01:24 | So we'll come over and select the Edit
mode, and then from this pop up we'll
| | 01:28 | choose to Select objects for editing,
which of course the Cube is an object.
| | 01:34 | Select the Cube, I'll zoom in a little bit.
| | 01:37 | Using the new 3D scale on screen
controls, Command+Shift or Ctrl+Shift, I'm going
| | 01:42 | to make my box a little smaller, bring
it over here, turn it around, rough in
| | 01:53 | the length, bring it over here.
| | 01:55 | Here we go, that's pretty
close, okay, that's a good beginning.
| | 02:00 | Once I have a basic placement on my
preliminary shape we're ready to do the alignment.
| | 02:04 | First thing you've got to do is make sure
the playhead is where you want it. I want to
| | 02:09 | start this on frame 1, switch to the
Alignment mode, and again the first point
| | 02:14 | is very important, so we're going to
zoom in here and I'm going to pick this
| | 02:19 | corner here because this is your basic
placement point and put it, using the zoom
| | 02:24 | window, right where I want it. Come over
here. Select the next point. Position it,
| | 02:31 | and I think we will put this point
below here, bring this one down there, and
| | 02:39 | the rest look pretty darn good.
| | 02:41 | Note that whenever I edit a point it
changes color, these are now purple, a
| | 02:47 | purple point means that point
has been animated and you're sitting on a
| | 02:51 | keyframe. I move the playhead
one frame, and they turn blue.
| | 02:55 | So blue means it's animated but not on a
keyframe, and purple means it's on a keyframe.
| | 03:01 | Okay, I want to select my next keyframe
so I'm going to roll out here to let's
| | 03:06 | say frame 40, reposition my box. Again, we
get our purple line and usually all you
| | 03:10 | have to do is slide along the purple
line that's over here, I want to pull that in
| | 03:16 | a little bit, and this guy up
here that looks pretty good, okay.
| | 03:21 | I now have keyframes at one and 40
and the box is tracking very nicely.
| | 03:25 | We'll go to the last frame in the clip
over here, I will zoom in, we'll pull this in a
| | 03:31 | little bit here, pull that one in a little
bit there, maybe tuck in the top here,
| | 03:37 | check the other end, got to fix this, there.
| | 03:41 | All the other points look just fine.
| | 03:43 | We now have a box nicely tracked
over the whole length of the clip.
| | 03:48 | So now we're ready to export, I'm going
to home the Viewer, to export the box we
| | 03:54 | go back to the ModelBuilder tab, make
sure we go to the Scene list and highlight
| | 03:59 | everything we want to export and
click Bake Scene Selection, boom!
| | 04:04 | We get a Cube node.
| | 04:05 | Open that up in the Property bin
here and we see it on the screen, I can
| | 04:11 | close the ModelBuilder node now.
| | 04:13 | Now we're looking through our
TrackedCamera into the 3D world and there is our
| | 04:17 | new Cube for the box.
| | 04:20 | Now let's set up the camera projection,
I'm going to hook the Viewer node to the
| | 04:24 | original clip and scrub through the
clip, looking for the frame that gives me
| | 04:29 | the best view of my target just
like we did with the Marci picture.
| | 04:32 | I'm going to say frame 20 gives me my
best view of the side and the backend here.
| | 04:37 | So I'm going to use frame 20 and
project it on the box, we'll select the Read
| | 04:41 | node, go to the Time tab, select a
FrameHold right there, set it for frame 20.
| | 04:49 | Now if I look at the FrameHold, I'm going to
see frame 20, no matter where the playhead is.
| | 04:55 | Next we'll select that and add our
Project3D node from the 3D>Shader menu, there's
| | 05:02 | Project3D, which of course wants a camera.
| | 05:06 | Now the camera that I need is going
to be the TrackedCamera at frame 20.
| | 05:11 | So let's copy the TrackedCamera, paste
it up here, and by the way I'm going to
| | 05:16 | rename that projection camera, ProjCam.
| | 05:19 | So we can clearly distinguish between
the two. I want this one held at frame 20
as well, so we'll go to Time>
FrameHold, set that one for 20 as well.
| | 05:36 | Now if we switch to the 3D view and take a
look at our setup, in the 3D view
| | 05:42 | now as I scrub the playhead we can
see our TrackedCamera and our static
| | 05:46 | Projection Camera, so the Projection
Camera is going to project frame 20 on to
| | 05:51 | the box and the TrackedCamera is
going to re-photograph it with the same
| | 05:55 | moving camera as the clip.
| | 05:57 | So let's look through our TrackedCamera
in the viewfinder and we can see now the box
| | 06:03 | is moving correctly.
| | 06:05 | So all we have to do is hook up the
camera projection, so the Project3D node needs
| | 06:10 | the camera that's held on frame
20 and we're going to then use a
| | 06:15 | 3D>Shader>ApplyMaterial to hook up that
projection to our Cube, and now we have it.
| | 06:24 | So frame 20 is being re-projected
on the Cube no matter what frame the
| | 06:28 | TrackedCamera is looking at, to re-comp
this over the original clip we'll need
| | 06:32 | a ScanlineRender node, so we will
select ApplyMaterial, 3D>ScanlineRender, hook
| | 06:39 | up the camera input to the original
TrackedCamera of course, hook our Viewer up to that.
| | 06:52 | Now if I switch to my 2D view this
is my rendered box from the camera
| | 06:56 | projection. We're ready to comp this
over the original clip, but just to show a
| | 07:04 | difference, so we know what we're
looking at, I'll add a Grade node and I'll just
| | 07:07 | gain this down so it's a lot darker,
that way we'll know we made some change to
| | 07:12 | the picture. Alright, then we'll take
that Grade node and add a Merge node to
| | 07:15 | comp it over the original clip and now
we have a camera projected box comped back
| | 07:23 | on top of the original clip and of
course we could have made whatever change
| | 07:27 | we want in that box.
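Here is a rough Nuke Python sketch of the same camera projection setup, in case you want to script it. The node names are placeholders, the FrameHold knob is assumed to be first_frame, and the input order on Project3D and ApplyMaterial is an assumption to verify against the arrows in your own Node Graph.

    import nuke

    clip = nuke.toNode('Read1')              # original clip (placeholder name)
    cube = nuke.toNode('Cube1')              # geometry baked out of ModelBuilder
    cam = nuke.toNode('TrackedCamera')       # tracked shot camera

    # Hold the clip and a copy of the camera on the projection frame.
    held_clip = nuke.nodes.FrameHold(first_frame=20)
    held_clip.setInput(0, clip)
    proj_cam = nuke.nodes.FrameHold(first_frame=20)
    proj_cam.setInput(0, cam)

    # Project the held frame through the held camera onto the cube.
    proj = nuke.nodes.Project3D()
    proj.setInput(0, held_clip)              # image to project (assumed index)
    proj.setInput(1, proj_cam)               # projection camera (assumed index)
    mat = nuke.nodes.ApplyMaterial()
    mat.setInput(0, cube)                    # geometry (assumed index)
    mat.setInput(1, proj)                    # material (assumed index)

    # Re-photograph with the moving camera and comp over the original clip.
    render = nuke.nodes.ScanlineRender()
    render.setInput(1, mat)
    render.setInput(2, cam)
    grade = nuke.nodes.Grade()               # optional look change inside the projection
    grade.setInput(0, render)
    comp = nuke.nodes.Merge2(operation='over')
    comp.setInput(0, clip)
    comp.setInput(1, grade)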
| | 07:29 | Exporting 3D geometry for camera
projection is of course a key part of 3D
| | 07:33 | compositing with Nuke, however if you
had a lot of geometry to export then
| | 07:38 | setting up individual camera
projections for each piece of geometry like
| | 07:41 | this would be tedious.
| | 07:43 | Next we'll see what to do if you have a
lot of geometry for camera projection.
| Exporting geometry| 00:00 | If you've modeled a lot of geometry
then exporting it one object at a time and
| | 00:05 | setting up individual camera projections
would be a real time consuming project;
| | 00:09 | here we'll see how to do it all in one go.
| | 00:12 | Now I'm using the ModelBuilderScript2
which you'll find in the Exercise Files;
| | 00:16 | here the geometry has already
been modeled to save us some time.
| | 00:20 | We're not seeing our clip with the
wireframe, and we won't see it until we open
| | 00:24 | up the ModelBuilder node itself, so
let's double click on the ModelBuilder node
| | 00:28 | and it opens in the Property panel, so I
have a Node Graph here, Property panel there.
| | 00:32 | So we're going to do a full up camera
projection of the entire scene onto all
| | 00:38 | the geometry we have.
| | 00:40 | First thing we need to do
is select our best frame.
| | 00:42 | So we want a frame that shows us a good
view of all the items of interest, which
| | 00:47 | is going to be of course
the Marci picture and our box.
| | 00:50 | So I am going to use frame 20, so after
we've selected our projection frame, in
| | 00:55 | the Scene list, we'll make sure we've
enabled everything we want to export.
| | 00:59 | This time we'll click on Create
Projection and the ModelBuilder node makes this
| | 01:03 | backdrop, loaded up with all the nodes we need.
| | 01:06 | Notice that it's marked frame 20 just
like the timeline, and we have the frame
| | 01:12 | holds for both the clip and
the camera set on frame 20.
| | 01:17 | So let's take a look at what we got here.
| | 01:20 | This of course is our held frame,
I scrub through that, no change.
| | 01:25 | So now we want to put some
effects into this projection.
| | 01:27 | I need to put some graffiti on Marci and
we're going to put a logo placement on the box.
| | 01:32 | So let's come over here and move our
frame hold up a little bit, we'll zoom in,
| | 01:37 | and let's add a PaintNode right here.
| | 01:39 | I'm going to use the PaintNode to put in my
graffiti, so I'm going to go and select my
| | 01:45 | Brush, let's select the lovely red
color and I'll do my graffiti on Marci.
| | 01:52 | Now notice if I move the playhead, my
graffiti disappears, because the paint
| | 01:57 | stroke is only valid on frame 46, so we
have to go change the lifetime of that
| | 02:02 | paint stroke, so we'll go to the
Properties bin for the RotoPaint node and
| | 02:07 | change the lifetime to all
frames, back to the Node Graph.
| | 02:13 | And now when I move the playhead,
the graffiti doesn't disappear.
| | 02:16 | For the logo placement, I've
prebuilt a little tchotchke for you here.
| | 02:20 | So all we have to do is select the
CornerPin2D node and add a Merge node and
| | 02:25 | slide it in, hook it up, and there's our logo.
| | 02:29 | Again, this is my held frame, so no
matter where the playhead is, I'm going to
| | 02:33 | see my graffiti and my logo.
| | 02:36 | Okay, my effects are all ready, so
let's hook them into the geometry.
| | 02:39 | I'm going to hook the viewer to the
ApplyMaterial node, we jump to 3D and
| | 02:44 | we don't see anything.
| | 02:45 | First, we need to connect the geometry
to this input right here, that of course
will come from the ModelBuilder node.
| | 02:52 | Still we don't see anything, what's going on?
| | 02:55 | Let's open the ModelBuilder Property
panel, and the issue is right here,
| | 03:00 | display wireframe, remember, this is the 3D
display, so we're going to switch that to textured.
| | 03:06 | Ah, that's more like it.
| | 03:09 | Now we're looking at our Textured
3D geometry, but we have this funny
| | 03:13 | transparency thing going
on, what's all that about?
| | 03:16 | Well, if we go back to the Node Graph and
take a look at the Read node, here is the issue.
| | 03:22 | The ModelBuilder node wants an input
clip that has a solid alpha channel, not a
3-channel image, so we'll open up our
Read node, and come down here and click
| | 03:33 | auto alpha, problem fixed.
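The same fix can be applied from Python if you hit this in a script; Read1 is just a placeholder name for your plate's Read node.

    import nuke

    read = nuke.toNode('Read1')          # placeholder name for the plate
    read['auto_alpha'].setValue(True)    # fill the missing alpha with solid white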
| | 03:37 | Now we can clear everything out of the
Property bin, go back to our Node Graph
| | 03:42 | and we're ready for a render.
| | 03:45 | To do the render, we're going to need a
ScanlineRender node, so we'll select
| | 03:48 | ApplyMaterial, come over 3D,
select ScanlineRender and I'll move the
| | 03:55 | ScanlineRender node over here and the
camera input of course is going to be our
| | 03:59 | original TrackedCamera.
| | 04:01 | So now if we switch the Viewer to 2D,
as I scrub the timeline, we can see we
| | 04:06 | have our modified camera projection on
the geometry, all we have to do is comp
| | 04:12 | this over the original clip. So we'll
select the ScanlineRender node, add a Merge
| | 04:17 | node, and hook that back to the
original clip, and we're done.
| | 04:22 | You can use the ModelBuilder node
like this to create geometry, then camera
| | 04:26 | project whole streets or room interiors;
| | 04:29 | however, we need to be able to make more
complicated geometry than cards or cubes.
| | 04:34 | So next, we'll take a look at
how to add fine detail to the
| | 04:37 | geometric primitives.
| Editing geometry| 00:00 | In the real world you need to model
more complex shapes than cubes and spheres.
| | 00:06 | ModelBuilder comes with a complete
comprehensive suite of editing tools to allow
| | 00:10 | you to refine the shape, and add
details to the geometric primitives.
| | 00:13 | I am using the ModelBuilderEdit.nk script,
which you will find in the Exercise Files.
| | 00:19 | It has the prebuilt geometry
and a TrackedCamera to work with.
| | 00:24 | To see the ModelBuilder Node, don't
forget, you have to do two things, first,
| | 00:28 | turn on TrackedCamera, and
second, lock the viewfinder.
| | 00:34 | Now I am going to gain down the viewer
a little bit, just so our white lines
| | 00:38 | show up a little better.
| | 00:39 | I'll double-click on the ModelBuilder Node
to open it up and we get all of our tools.
| | 00:44 | First, let's take a look at how to edit
vertices. Again, we have an align mode
| | 00:49 | here and you can see how the points
color up for the align mode, and we have the
| | 00:53 | edit mode here where the points are not lit up.
| | 00:56 | When you are in the edit mode you get
to choose between editing vertices,
| | 01:00 | edges, faces, or entire objects, so
we're going to look at the vertices.
| | 01:03 | As I click on a vertex, it lights up
and I get my cardinal coordinates, so you
| | 01:09 | can just choose whichever vertex you want.
| | 01:10 | Now here is a critical point, you want
to move the vertices using these cardinal
| | 01:16 | axes, this moves it in Y, this moves
it in X relative to the original shape.
| | 01:22 | In other words, this point is still
coplanar, it hasn't moved off the plane
| | 01:27 | of the card, this is a critical issue.
| | 01:29 | Let's take a look at it in 3D.
| | 01:30 | So as you can see the vertex I moved is still
perfectly coplanar with the rest of the card.
| | 01:37 | Let's go back to our ModelBuilder view
by TrackedCamera and lock viewfinder.
| | 01:45 | However, if you make the mistake of
grabbing a vertex and just pulling on a
| | 01:50 | central point like this, you will have
pulled it out of alignment. There, look at that.
| | 01:56 | Okay, so we want to be very, very careful.
| | 02:01 | We'll go back to our ModelBuilder view.
| | 02:03 | I am going to undo that and let's push
in a little bit more here, so I can show
| | 02:10 | you how to select with the edges.
| | 02:16 | When you select an edge, it turns blue
and you get your cardinal axis again.
| | 02:20 | So, here's this edge, and there
is that edge, and that edge there.
| | 02:24 | Again, I can translate the edge, but
there's another thing you can do with an
| | 02:31 | edge selected and that is to subdivide it.
| | 02:33 | So, I am going to select this edge and
then with the edge selected, right mouse
| | 02:38 | pop up and select subdivide.
| | 02:42 | And what it does is it plants a
vertex at the midway point of that line.
| | 02:46 | So, what I'm going to do here is
switch to selecting vertices, grab my new
| | 02:53 | vertex and move it up and down,
click to the side to deselect.
| | 03:00 | Another very important tool is the Carving tool.
| | 03:03 | Let's see how to carve.
| | 03:04 | Now you can carve whether you are
selecting vertices, edges, or faces, so I am
| | 03:11 | just going to pick edges.
| | 03:12 | I select this edge and with the edge
selected, I then select the carve mode and
| | 03:19 | when I click in any face, it turns red.
| | 03:21 | So, I'll click the other face, another
face and another face, that red outline
| | 03:26 | tells you you're in carve mode.
| | 03:28 | Carve allows us to manually
subdivide the polygons any way we want.
| | 03:32 | To get out of the carve
mode hit Return. All right,
| | 03:35 | I am going to go back to the carve
mode and show you what to use it for.
| | 03:40 | In the carve mode you can
divide a polygon up any way you want.
| | 03:45 | So, with this polygon selected red, I
can select any place on any edge and
| | 03:50 | insert a vertex like this, then I can
go to any other edge anywhere I want and
| | 03:55 | click a second time, and I have now
subdivided it or carved it any way I want.
| | 04:00 | I'll click off to the side to deselect.
| | 04:03 | More than that, I'm going to select this
guy to carve and I can insert a control
| | 04:09 | point here, and here, and here, and
here, and here, and there, and there.
| | 04:14 | I do not have to just cross over to another edge.
| | 04:18 | Deselect, this is now a whole new polygon.
| | 04:22 | So, I can go over here and select
Select faces, and there it is.
| | 04:27 | So, the carve mode would be extremely
valuable for you to draw your own polygonal
| | 04:32 | edges exactly where you
need them on top of the image.
| | 04:35 | Now I am in Face edit mode --you can see
I have the face icon here-- so that means
| | 04:40 | any face I click on turns blue, let's
take look at what we can do with faces.
| | 04:44 | Obviously, I can translate them in X or
Y --undo, undo-- but I can also pull them
| | 04:49 | out in Z. Now I have translated this
face out, there is no polygon here, I can
| | 04:55 | show you that if I set
the display to solid, okay.
| | 04:59 | So, I have polygons everywhere, but there's
no polygon here, so I am going to undo, undo.
| | 05:04 | However, with the face selected, if I
go right mouse pop up, extrude, now it's
| | 05:10 | going to extrude this and actually
build a polygon here, in fact, I can show
| | 05:15 | that, by showing you the
solid again, there you go.
| | 05:23 | I'll undo, undo that.
| | 05:25 | Another thing we can do in the
Face mode is to merge polygons.
| | 05:29 | For example, I am going to select this
face Shift+Click to select that one, so
| | 05:33 | Shift+Click will allow you to pick
multiple elements, then right mouse pop
| | 05:38 | up, merge and those two polygons
have become one, or I can tessellate it.
| | 05:45 | I'll select this guy again, right mouse
pop up, tessellate, triangular fan, this
| | 05:52 | will allow you to subdivide
polygons to add a lot more detail.
| | 05:55 | And of course, if I switch to my vertex
mode, I could select this vertex and I
could pull it out in Z and
build myself an extruded section.
| | 06:06 | Next, let's look at Bevels.
| | 06:07 | I am going to zoom out here, let's go
over to our cube, our box, our window box.
| | 06:14 | Very few things in the real world have
sharp edges like this box, so usually
| | 06:18 | you're going to want to add a nice bevel to it.
| | 06:20 | So, let's go to select edges, and now I
can select the edges on my box. With edge
| | 06:26 | selected, right mouse pop up, bevel,
notice I get a little bevel here.
| | 06:33 | Now you can control how large that
bevel is right here, relative insert. If I walk
| | 06:38 | this up, it gets larger, smaller.
| | 06:41 | You can also control how rounded it is
over here with the round level, set that
| | 06:45 | to 1, 2, 3, 4, as much as you want. So
this gives you a very elegant and simple
| | 06:52 | way to bevel or add
roundness to the corners of things.
| | 06:55 | The last thing I want to show you is the
edge loop and edge ring, but I am going
| | 06:59 | to need to add a piece of geometry for that.
| | 07:01 | So, I am going to re-home the Viewer,
I'll go over to my Object Creation and I
| | 07:06 | am going to say I want a cylinder and
click in the middle of the picture, switch
| | 07:11 | to the Editing Mode, Select objects,
and then I will scale this guy down with
| | 07:18 | the On Screen Scale Control
jack we know and love.
| | 07:21 | Now I am going to switch to selecting
edges, deselect over here, come in here,
| | 07:29 | so I can show you this.
| | 07:30 | I am going to select this edge right
here and right mouse pop up, if I select
| | 07:35 | edge ring, I am going to get this
entire set of polygons around here, and then we
| | 07:40 | can translate that if we wish.
| | 07:42 | I'll undo that. Or, I could select
edge loop, which gets this ring here and
| | 07:50 | then we can translate that and even rotate it.
| | 07:55 | The very powerful polygon editing
tools in the ModelBuilder Node will make it
| | 07:59 | possible to build complex scene
geometry for sophisticated camera projections
| | 08:03 | without turning to the 3D department.
|
|
22. TimeOffset Node New FeaturesSetting up and operating| 00:00 | Previous versions of the TimeOffset Node
could only shift the timing for 2D elements.
| | 00:06 | The updated TimeOffset Node in Nuke 7 can
now also shift the timing for 3D objects.
| | 00:12 | The setup here is I have a little
clip that just has numbered frames.
| | 00:17 | So if I take that Read Node and go to
the Time tab and select a TimeOffset Node,
| | 00:22 | if I set the TimeOffset by let's say
25 frames, now when I move the playhead
| | 00:30 | nothing happens till I get to frame 25
and then off it goes, so that's exactly
| | 00:36 | how the old TimeOffset Node worked.
| | 00:38 | Now let's take a look at the 3D setup that
I have, we will switch over to the 3D view.
| | 00:44 | So here is my 3D scene setup, I just
have these three cards and they all have a
little animation on them.
| | 00:49 | Notice that the animation on the
geometry starts on the very first frame, but of
| | 00:54 | course the Read node doesn't start
updating until we get to frame 25, but now let's
| | 00:59 | see what happens if we move the
TimeOffset Node to a piece of geometry.
| | 01:03 | I am going to select the TimeOffset, pop
it out with Shift+Command+X, you all know
| | 01:08 | that one, bring it in here
and hook it up to just one card.
| | 01:12 | So now card 1 has got the 25 frame
Offset, so if I scrub the playhead, the
other cards move, but card 1
doesn't move until we get to frame 25.
| | 01:22 | So the TimeOffset Node has shifted the
animation of that piece of geometry,
| | 01:28 | but there's something else we can do.
| | 01:30 | I'm going to pop out that TimeOffset
Node, hook it to the entire scene, and now
| | 01:39 | the entire scene has a 25 frame
offset for all the geometry animation.
| | 01:47 | In addition to offsetting the timing,
you can also reverse the timing of
| | 01:51 | your clips and geometry.
| | 01:55 | You can now use the TimeOffset Node to
shift the timing, as well as reverse the
| | 01:59 | animation, for both 2D and 3D objects.
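If you build trees in Python, the same TimeOffset wiring might look roughly like this; the node names are placeholders, and reverse_input is my assumed knob name for the reverse checkbox, so verify it before relying on it.

    import nuke

    # Offset a 2D clip by 25 frames.
    clip = nuke.toNode('Read1')                       # placeholder name
    offset_2d = nuke.nodes.TimeOffset(time_offset=25)
    offset_2d.setInput(0, clip)

    # The same node type now works on 3D objects: offset one card's animation.
    card = nuke.toNode('Card1')                       # placeholder name
    offset_3d = nuke.nodes.TimeOffset(time_offset=25)
    offset_3d.setInput(0, card)

    # Or reverse the timing instead of (or as well as) offsetting it.
    offset_3d['reverse_input'].setValue(True)         # knob name is an assumption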
|
|
23. New Shadow-Casting FeaturesSetting up and adjusting| 00:01 | ShadowCasting has been in Nuke for a
while now, but Nuke 7 added a couple of new
| | 00:05 | features that we will take a look at here.
| | 00:08 | In addition to that there are a couple of
issues you need to know about to avoid problems.
| | 00:13 | I'm using the ShadowCasting.nk script
that you'll find in the Exercise Files to
| | 00:17 | help speed things along.
| | 00:20 | The first thing to know is that a shader is
absolutely required in order to do any ShadowCasting.
| | 00:25 | So, let's open up the Spot
light and switch to the Shadow tab.
| | 00:30 | The first thing you have to do is
of course turn on ShadowCasting.
| | 00:35 | Now one of the new features in Nuke 7
is that each piece of geometry has a
| | 00:39 | shadow cast and a shadow receive control.
| | 00:41 | So, let's go to the card and open
that up and right here you'll find the cast
| | 00:46 | shadow and receive shadow controls.
| | 00:49 | Turning off cast shadow means it does not cast a
shadow on anything, turning off receive means it doesn't
| | 00:53 | receive a shadow, and oops, our shadow went away.
| | 00:55 | So, for the card, we want
to leave receive shadow on.
| | 00:59 | For this sphere, same thing, with
cast shadow turned off it will not cast a
| | 01:05 | shadow, and with receive shadow
turned off, it won't receive any.
| | 01:09 | So, we're done with these.
| | 01:10 | I am going to close both the sphere and the
card back to our spot light Property panel.
| | 01:16 | First up I would like to call your
attention to these little marks right here,
| | 01:20 | these are self shadowing marks.
| | 01:24 | We can eliminate those by adjusting the
bias parameter. If I increase the bias
| | 01:29 | it moves the sample point away from
the surface of the geometry so it doesn't
| | 01:33 | cast a shadow on itself.
| | 01:35 | However, that can introduce artifacts
in some situations --I want to put that
back-- and let's take a look at the slope bias.
| | 01:43 | The slope bias solves the same
problem by looking at the angle between the
| | 01:47 | geometry and the lens.
| | 01:49 | So, if you turn up the bias and
introduce problems, turn it back down and try
| | 01:54 | using the slope bias and you
juggle the two, until things look right.
| | 01:58 | I want to put that back to default
and of course our self shadows appear
| | 02:01 | again. To show you this,
| | 02:03 | we will open the sphere again.
| | 02:04 | And if it's not going to receive any
shadows, turn this off and the self
| | 02:09 | shadowing disappears.
| | 02:10 | All right, back to our Spot Property panel.
| | 02:14 | So, let's take a look at
how to make the shadows soft.
| | 02:18 | Soft shadows require three adjustments,
the samples, the jitter scale, and the
| | 02:23 | depthmap resolution.
| | 02:26 | If I just crank up the jitter scale,
I don't see any softness, I have to
| | 02:31 | increase the number of samples along
with the jitter scale before we start to
| | 02:35 | see some softness, there we go,
let's zoom in a little bit.
| | 02:41 | The jitter scale controls the thickness
of the soft edge, whereas the samples
| | 02:46 | control the quality, how smooth it is.
| | 02:49 | If I turn samples down to
6, see I got the uglies.
| | 02:51 | I'll turn that back up and
set it for a higher level.
| | 02:55 | But of course, as I increase
these numbers my render time goes up.
| | 02:59 | If your shadows are looking a little
crunchy, you can increase the depthmap
| | 03:03 | resolution and that will smooth it out.
| | 03:06 | So, you juggle the three parameters,
samples, jitter scale, and depthmap
| | 03:10 | resolution to get the shadow you want
and the reason for the different settings
| | 03:14 | depends on the scale of your geometry.
| | 03:17 | You will have to adjust these
values to balance the best look of your shadow
| | 03:21 | against your best render time.
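For scripted setups, the same juggling can be done on the light's knobs. The knob names below (cast_shadows, samples, jitter_scale, depthmap_width, and the per-geometry cast_shadow/receive_shadow toggles) are my best guesses from the property panels, so confirm them with node.knobs() before relying on this sketch.

    import nuke

    spot = nuke.toNode('Spotlight1')          # placeholder light name
    spot['cast_shadows'].setValue(True)       # knob names are assumptions --
    spot['samples'].setValue(16)              # confirm with spot.knobs()
    spot['jitter_scale'].setValue(3)
    spot['depthmap_width'].setValue(2048)     # depth map resolution

    card = nuke.toNode('Card1')               # placeholder geometry name
    card['cast_shadow'].setValue(True)        # per-geometry toggles new in Nuke 7
    card['receive_shadow'].setValue(True)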
| | 03:23 | I am going to re-home my
Viewer and turn on the cylinder here.
| | 03:33 | Now the shadow from the sphere is
landing on the cylinder and the cylinder is
| | 03:36 | casting a shadow on the card.
| | 03:39 | So, if I turn off cast shadows, then the
cylinder will no longer cast a shadow on
| | 03:44 | the card, and if I turn off receive
shadows, the sphere shadow will no longer
| | 03:48 | fall on the cylinder.
| | 03:49 | We'll turn that back on and disable the
cylinder and we will close the Property
| | 03:56 | panel, back to our Spot light.
| | 03:59 | Next, I would like to show you an
idiosyncrasy of the directional light that
| | 04:03 | you really need to be aware of.
| | 04:04 | I am going to turn off the Spot light;
| | 04:06 | disable that, and enable the directional
light, and open it up in the Property panel.
| | 04:10 | Now there is something seriously wrong
with our shadow here, here's the deal: the
| | 04:15 | directional light has to be scaled up
large enough so that it sees all of the
| | 04:20 | geometry you're trying to cast the shadow from.
| | 04:22 | So, let's take a look. We'll switch to
the 3D Viewer, punch up our directional
| | 04:28 | light, lock the viewport, and
we will back out a little bit.
| | 04:34 | As you can see the directional light
only sees this much of the geometry, and
| | 04:39 | that's why we get a partial shadow.
| | 04:41 | So, we go to the directional light
Property panel, we go to the uniform scale,
| | 04:45 | and I'll slide up the uniform scale
until the geometry is completely encompassed
| | 04:51 | in the viewport of the light.
| | 04:53 | Now we'll switch back to 2D and
now we have a complete shadow.
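In a script, the equivalent fix is just scaling the light up; DirectLight1 is a placeholder name, and uniform_scale is the standard transform knob that light nodes share with the Axis node.

    import nuke

    light = nuke.toNode('DirectLight1')      # placeholder name
    light['uniform_scale'].setValue(50)      # scale up until the light's viewport covers all the geometry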
| | 04:57 | You can cast shadows from spot lights
and directional lights in Nuke, but not from
| | 05:01 | point lights without using Renderman.
| | 05:03 | Next, we'll see how to
cast semitransparent shadows.
| Casting semitransparent shadows| 00:00 | The solid shadow works fine for
solid geometry, but if you want to cast
| | 00:05 | semitransparent shadows, you have to
change the light's shadow mode to use the
| | 00:08 | alpha channel as a
transparency mask for the shadow.
| | 00:11 | I am using the ShadowCasting.nk file, which
you will find in the Exercise Files folder.
| | 00:17 | Our semitransparent element is this
leaf here and we will look in the alpha
| | 00:22 | channel, there is our semitransparency.
| | 00:25 | Back to our Scanline render, let's
take a quick look at the 3D setup.
| | 00:29 | So I will switch to 3D view.
| | 00:32 | So I have this card floating over the
other card and the top card has the leaf on it.
| | 00:37 | We have our camera and our light.
| | 00:39 | Okay, back to the Scanline render node.
| | 00:43 | So we have a Phong shader on top of the
card, so it will receive the shadows and
| | 00:47 | another Phong shader on top of the
floating card, so it will cast a shadow.
| | 00:51 | And we have hooked up
our leaf to all the inputs.
| | 00:54 | Now here is the big issue
you have to keep in mind.
| | 00:57 | It is this unlabelled arrow input here that
receives the alpha channel for the transparency.
| | 01:03 | If you don't have it connected,
you are going to get a solid object.
| | 01:08 | So you must hook up a semi-
transparent image to the unmarked input, as well
| | 01:12 | as to the diffuse, specular, and emission inputs, if
you would like to adjust those for your lighting.
| | 01:17 | Now with semitransparent object, the
Shadow mode has a couple of options
| | 01:21 | you want to know about.
| | 01:21 | I will open up the Light4 Property
panel, switch to the Shadows tab and right
| | 01:28 | here the shadow mode, we are using
full alpha, which means it uses the entire
| | 01:33 | alpha channel to determine the
semitransparency and you get variations in
| | 01:37 | lightness and darkness.
| | 01:38 | If we use the clipped alpha, you see
the alpha channel has basically become
| | 01:43 | binarized into transparent and opaque.
| | 01:46 | Now there is a threshold here, the
clipping threshold, you can lower it to get a
| | 01:51 | more solid alpha channel or raise
it up to make it more transparent.
| | 01:54 | The solid shadow mode is for solid
geometry, and as you can see, we have
| | 02:00 | completely lost all of our transparency.
| | 02:02 | So I am going to put it back to
full alpha, it looks very nice.
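Scripting the shadow mode looks roughly like this; the knob and value strings are assumptions taken from the labels in the Shadows tab, so verify them in your own session before using this.

    import nuke

    light = nuke.toNode('Light4')                    # placeholder light name
    light['shadow_mode'].setValue('full alpha')      # or 'clipped alpha' / 'solid' (assumed values)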
| | 02:06 | Now sometimes you want to output the
shadow mask itself, so you can process it
| | 02:12 | and do your own thing with the shadows.
| | 02:15 | And the way we do that is
to enable the output mask.
| | 02:17 | I am going to send the Shadow mask
through the mask.a channel just because
| | 02:22 | it's quick and easy.
| | 02:23 | We will go over here and set mask.a
into the viewer's alpha channel, then
| | 02:33 | switch the viewer to the alpha channel,
so I can now look at my shadow mask and
| | 02:38 | back to the RGB render.
| | 02:40 | Nuke's ShadowCasting uses Depth
Map Shadows not Raytraced Shadows.
| | 02:44 | Depth Map Shadows are much faster
to render, but don't look quite as
| | 02:48 | realistic as Raytraced.
| | 02:50 | If you want Raytraced Shadows, then
you'll need to use the Renderman option.
|
|
24. The New Relight NodeSetting up and operating| 00:01 | The long-awaited ability to do normals
relighting is now here with Nuke 7, using
| | 00:05 | the new RelightNode.
| | 00:07 | Given a normals pass, a point position
pass, and the original CG camera you can
| | 00:13 | Relight an RGB image.
| | 00:15 | I am using the Relight.nk script from
the Exercise Files, which also contain
| | 00:20 | all of the images here.
| | 00:21 | Now you have to have your
normals and point position passes,
| | 00:25 | and I have them right here. My normals
pass is in the layer called norms and my
| | 00:31 | point position pass is in the layer called ppos.
| | 00:36 | If your normals or point position
passes are in separate files then use a
ShuffleCopy node to slip them into
the color image data stream, this render
| | 00:45 | obviously has them all right there.
| | 00:47 | So, this is our CG render
and we look at the composite.
| | 00:54 | So what happened here is, the director
decided to make this a day for night shot.
| | 00:58 | So, bang!
| | 01:00 | The CG render is no longer any good.
| | 01:02 | So, we're going to Relight this
CG render for a night lighting.
| | 01:07 | We will find the
RelightNode in 3D>Lights>Relight.
| | 01:16 | The color input gets hooked up to our
RGB image, the lights input goes to the
| | 01:21 | Lights, we're going to use 3D
lights to Relight the RGB image.
| | 01:27 | Now if you just have a single light you
can hook it directly up to the lights input.
| | 01:32 | But if, as in this case, you are using two or
more lights, you hook them both up to a
| | 01:36 | Scene node, and the Scene node
hooks up to the lights input.
| | 01:39 | Once you've hooked up the lights you
get another arrow and that's for the
| | 01:42 | camera, so we will hook that up to our camera.
| | 01:45 | This is the camera that did
the original CG scene render.
| | 01:50 | Once you've hooked up the camera you
get one more input arrow which is
| | 01:53 | the material input.
| | 01:55 | So, for this we're going to use a Phong shader.
| | 01:58 | So, we'll go to 3D>Shader>Phong and hook
that up to the material input like so.
| | 02:07 | We will work with the Phong shader later.
| | 02:11 | Right now I want to talk
about the Relight property panel.
| | 02:14 | The first thing you have to do is
tell the Relight where to find the normal
| | 02:17 | vectors and the point positions.
| | 02:19 | Our normal vectors are in a layer
called norms and our point positions are in
| | 02:24 | the layer called ppos.
| | 02:26 | Now this use alpha button up here is
for a situation where you might want to
| | 02:30 | just mask off one part of the image
for relighting; that mask goes into the
| | 02:35 | alpha channel, and this tells the node to use
the alpha channel to mask off the relighting.
| | 02:40 | We're going to Relight the whole
thing so we don't need that here.
| | 02:43 | Now let's take a look at the
output of the Relight node.
| | 02:49 | This isn't very useful.
| | 02:51 | This is not the relit version of the RGB image;
| | 02:54 | this is in fact the lighting passes that
have been produced by the Phong shader,
| | 02:58 | the camera, and our lights
| | 02:59 | by using the normals layer
and the point position layers.
| | 03:05 | In order to apply the lighting pass
we get out of the Relight Node to the
| | 03:08 | original RGB image we're going
to have to multiply them together.
| | 03:11 | So, I am going to add a merge node, hook
it up to the original RGB image, and set
| | 03:19 | the operation to multiply.
| | 03:22 | I have now relit the original RGB render.
| | 03:25 | I can toggle that on and off here and
you can see how that looks when we do
| | 03:30 | the composite here.
| | 03:32 | I will switch this to be my A
side and then we'll look at our relit
| | 03:37 | composite, much better.
| | 03:40 | And we didn't have to send
it back to the 3D department.
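As a sketch of the same relighting tree in Nuke Python: the node names are placeholders, and both the Relight input order (color, lights, camera, material) and the layer-picker knob names are assumptions you should check against the actual node in your session.

    import nuke

    cg = nuke.toNode('Read2')               # CG render carrying norms and ppos layers (placeholder)
    cam = nuke.toNode('Camera1')            # camera that rendered the CG scene (placeholder)
    moon = nuke.toNode('MoonLight')         # relighting lights (placeholder names)
    fill = nuke.toNode('FillLight')

    # Two or more lights go through a Scene node into the lights input.
    scene = nuke.nodes.Scene()
    scene.setInput(0, moon)
    scene.setInput(1, fill)

    relight = nuke.nodes.Relight()
    relight.setInput(0, cg)                 # color input (assumed index)
    relight.setInput(1, scene)              # lights input (assumed index)
    relight.setInput(2, cam)                # camera input (assumed index)
    relight.setInput(3, nuke.nodes.Phong()) # material input (assumed index)
    relight['normal_vectors'].setValue('norms')    # knob names are assumptions
    relight['point_positions'].setValue('ppos')

    # Multiply the lighting pass back onto the original RGB to get the relit image.
    relit = nuke.nodes.Merge2(operation='multiply')
    relit.setInput(0, cg)
    relit.setInput(1, relight)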
| | 03:43 | Okay, so let's dial in our lighting.
| | 03:45 | First, let me show you
the 3D setup that we have.
| | 03:48 | I'll go to the 3D view and here's
my setup, these are my two lights.
| | 03:54 | Now what I have done, is I've attached
them to an axis node so it makes it easy for
| | 04:00 | me to adjust them, this is going to swing
it around the equator, and this is going
| | 04:04 | to swing it from pole to pole.
| | 04:06 | The axis node is only used to make it
easier to position the light that's all.
| | 04:10 | Okay I want to reset that back to default.
| | 04:13 | Let's go back to our 2D
View and see what we got.
| | 04:17 | So, the MoonAxis node is going to
actually reposition the moonlight.
| | 04:21 | So, I am going to rotate it around the equator.
| | 04:23 | All right, I am going to rotate it
from pole to pole, here you go, I will put
| | 04:30 | that back to default.
| | 04:32 | Next I can adjust the
characteristics of the moonlight by opening up the
| | 04:36 | MoonLight node, and for example, I
could decrease the intensity, or change the
| | 04:45 | color, and we will put that
back to the original settings.
| | 04:53 | And finally, we can actually adjust the
surface attributes of our CG element by
| | 04:57 | opening up our materials, in this case
the Phong node, and I could, for example,
| | 05:01 | increase the diffuse and lower the specular.
| | 05:06 | The Relight Node brings the long awaited
capability to do normals relighting in Nuke.
| | 05:11 | This can save valuable production time
by relighting during compositing rather
| | 05:15 | than rerendering in the CG Department.
|
|