Nuke Tips – Kronos, MotionBlur, Oflow, or VectorBlur?

The last draft of this article dates back to Jan 25, 2016… time to revive it!

Kronos, MotionBlur, Oflow, or VectorBlur? I’m blur…

The great thing about Nuke is that there are many ways to skin a cat when tackling a problem.

I’ll be focusing on adding motion blur to CGI FX elements like fire, blood, and debris. Yes, you can abuse, I mean use, Kronos and Oflow to add motion blur instead of slowing down footage!

Time to explore the various methods and see which ones make or break depending on the situation.

Do not confuse these with MotionBlur2D and MotionBlur3D, which are not designed to generate motion blur by analysing the image sequence: MotionBlur2D uses a Transform’s animation, while MotionBlur3D uses the camera’s animation, to generate the blur.
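To give a taste of the VectorBlur route before we dive in, here’s a minimal Python sketch, assuming your CG element was rendered with a motion vector layer named “motion”. The file path is a placeholder and the knob names are from the classic VectorBlur node, so double-check them against your Nuke version:

```python
import nuke

# Read the CG FX element (placeholder path/range) and blur it along
# the motion vectors baked into the render.
read = nuke.nodes.Read(file='renders/fx_fire_v001.%04d.exr',
                       first=1001, last=1100)

vblur = nuke.nodes.VectorBlur(inputs=[read])
vblur['uv'].setValue('motion')  # layer holding the u/v motion vectors
vblur['scale'].setValue(1.0)    # blur amount, roughly the shutter time
```

No optical flow analysis happens here, which is why VectorBlur is cheap; Kronos and Oflow earn their keep when the element has no vector channels to begin with.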


Nuke Tips – Hide Node Input

Hide Node Input or Node Hide Input or Input Node Hide

When your Nuke script starts to grow bigger and more complicated, it’s time to organise the node graph flow for better readability and to remove redundancy.

I don’t feel like writing a lengthy explanation of when or why to use it, but here’s a bullet list on the usefulness of hiding a node’s input:

  • Reduce redundancy by avoiding duplicate copies of the same node
  • Ensure a cleaner and easier-to-read Node Graph (if only I could keep my personal belongings as tidy as my node graph…)
  • Avoid the potential Nuke script corruption that cloning a node can cause if you’re not lucky

Potential Nuke Script Corruption?!! Eeeepssss

To further explain the last point: there are situations where you want to clone a node so you can use it in another part of the comp, with the convenience of editing either clone to propagate the same changes.

BUT

It comes with the risk of Nuke potentially screwing up the actual render, and one of the ways to avoid it is to use a copy instead (think of clone nodes as Duplicate Special in Maya or an Instance in 3ds Max) or to link the master node directly to where you want to use it.
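For context, cloning can also be done via Python with nuke.clone(); a minimal sketch:

```python
import nuke

# Clone the selected node: the clone shares knob values with the
# master, so editing either one propagates to the other (similar to
# Duplicate Special in Maya or an Instance in 3ds Max).
master = nuke.selectedNode()
clone = nuke.clone(master)
```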

This is where Hide Node Input comes in handy!
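The knob behind it is called hide_input and exists on every node, so you can toggle it from the node’s properties or via Python:

```python
import nuke

# Hide the input pipe of the selected node so the Node Graph stays
# readable; set it back to False to reveal the connection again.
node = nuke.selectedNode()
node['hide_input'].setValue(True)
```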

Quick GIF Demonstration

Nuke Tips – Normalize Depth Pass

Time to Conform to Normality

Another short Nuke Tips entry, so without further ado, let’s normalise/normalize!

Hmmm Red Channel Mask. Wait a second…

If you output a Pz (depth) image plane from Houdini’s Mantra and read it into Nuke, you’ll see a solid flat colour and wonder why it doesn’t match the view in the Render tab or MPlay in Houdini.

What happens is that Houdini remaps the Pz data for display by normalising the min/max values to a viewable range for the artist.

So here’s what we can do to make it viewable (and usable for various compositing tasks) in Nuke; a Python sketch of the full setup follows the list.

  • Shuffle the depth channel into RGB and alpha into… alpha (this step can be skipped if you prefer to manipulate the depth channel directly).
  • Optionally, use a CurveTool node with the Max Luma Pixel operation to find the brightest pixel in the depth channel.
  • Use a Grade node to remap the black and white points. Typically you can leave the blackpoint at 0 and set the whitepoint to the brightest pixel value.
  • Another optional step: if there is no valid sky dome geometry to represent the sky depth, you can fake it with a Constant node set to a value way higher than the brightest pixel, merged using the Under operation if you’ve been following the above steps.
  • To adjust the midpoint of the depth, tweak the Gamma value.
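If you’d rather build it via Python, here’s a minimal sketch of the setup above. The file path is a placeholder, the whitepoint of 100 stands in for whatever the CurveTool reports, and the sky Constant of 1000 is likewise a hypothetical value:

```python
import nuke

# Read the render and shuffle the depth layer into RGB for viewing.
read = nuke.nodes.Read(file='renders/pz_v001.%04d.exr')
shuffle = nuke.nodes.Shuffle(inputs=[read])
shuffle['in'].setValue('depth')

# Remap: blackpoint stays at 0, whitepoint goes to the brightest pixel
# (hypothetical 100 here; normally sampled via CurveTool's Max Luma Pixel).
grade = nuke.nodes.Grade(inputs=[shuffle])
grade['whitepoint'].setValue(100)
grade['gamma'].setValue(1.0)  # tweak to shift the depth midpoint

# Fake the sky depth with a Constant far beyond the brightest pixel,
# merged Under the graded depth.
sky = nuke.nodes.Constant()
sky['color'].setValue([1000, 1000, 1000, 1])
merge = nuke.nodes.Merge2(inputs=[grade, sky])
merge['operation'].setValue('under')
```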

This is the end result:

Silent Hill-esque normalized depth after remapping in Nuke

Here’s the node graph setup for your reference:

Why Normalize Depth Pass?

Seeing a solid colour in the viewer is not exactly practical!

This tutorial is geared towards Mantra’s way of handling the depth pass, where depth is rendered as the distance from the camera.

Pixels nearer to the camera have values closer to zero, while pixels further away store larger values equal to their distance from the camera.

Imagine a building that is 100 units away from the camera: its pixel values will be approximately 100.

Traditionally, most renderers offer options to set near and far clip values when rendering the depth pass, but this is destructive as we “bake” the possible depth range to fit into the final depth render, unless you render and export the depth pass in at least 16-bit half float or, better, 32-bit float (which can be overkill unless you need the highest accuracy during compositing).
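To make that concrete, the non-destructive remap is just a linear fit, (depth - near) / (far - near). Here’s a sketch using an Expression node on depth that has already been shuffled into RGB, with hypothetical near/far values of 0 and 100:

```python
import nuke

# Linearly remap raw distance-from-camera depth into a 0-1 range in
# comp, instead of baking clip planes at render time. The near/far
# values are placeholders; 'far' would come from the CurveTool result.
expr = nuke.nodes.Expression()
expr['temp_name0'].setValue('near')
expr['temp_expr0'].setValue('0')
expr['temp_name1'].setValue('far')
expr['temp_expr1'].setValue('100')
expr['expr0'].setValue('(r - near) / (far - near)')
```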

Further Reading/Viewing

Learn How to Render and composite Z-Depth in Houdini: https://www.sidefx.com/tutorials/learn-how-to-render-and-composite-z-depth-in-houdini

Houdini Rendering to Z-depth with Mantra (Pz) (Japanese): http://yoong-cut-and.blogspot.com/2015/02/houdini-rendering-to-z-depth-with.html

Houdini – ZDepth Passes (Japanese): http://www.technical-artist.net/?p=92

Tutorial – Demystifying Camera Depth Passes in Maya Mental Ray: https://vimeo.com/113997080

The right way to render a depth pass: http://forums.cgsociety.org/showthread.php?t=901605

VRayZDepth using 3ds Max: https://docs.chaosgroup.com/display/VRAY3MAX/Z-Depth+%7C+VRayZDepth

VRayZDepth using Maya: https://docs.chaosgroup.com/display/VRAY3MAYA/Z-Depth+%7C+vrayRE_Z_depth